Build & Release Notes
*********************

``rustup`` Toolchain
====================

We normally want to build with the ``rustc`` Debian package. To do that
you can set the following ``rustup`` configuration::

  # rustup toolchain link system /usr
  # rustup default system

Versioning of proxmox helper crates
===================================

To use current git master code of the proxmox* helper crates, add::

  git = "git://git.proxmox.com/git/proxmox"

or::

  path = "../proxmox/proxmox"

to the proxmox dependency, and update the version to reflect the current,
pre-release version number (e.g., "0.1.1-dev.1" instead of "0.1.0").
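
For example, a complete dependency entry in ``Cargo.toml`` could then look like
this (the version number is just the illustrative pre-release value from
above)::

  proxmox = { version = "0.1.1-dev.1", git = "git://git.proxmox.com/git/proxmox" }

or, when working against a local checkout::

  proxmox = { version = "0.1.1-dev.1", path = "../proxmox/proxmox" }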

Local cargo config
==================

This repository ships with a ``.cargo/config`` that replaces the crates.io
registry with packaged crates located in ``/usr/share/cargo/registry``.
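
The override uses cargo's source-replacement mechanism. A minimal sketch of
such a config (the source name ``debian-packages`` is illustrative; check the
shipped ``.cargo/config`` for the exact contents) looks roughly like this::

  [source.debian-packages]
  directory = "/usr/share/cargo/registry"

  [source.crates-io]
  replace-with = "debian-packages"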

A similar config is also applied when building with ``dh_cargo``. ``Cargo.lock``
needs to be deleted when switching between packaged crates and crates.io, since
the checksums are not compatible.

To reference new dependencies (or updated versions) that are not yet packaged,
the dependency needs to point directly to a path or git source (e.g., see the
example for the proxmox crate above).

Build on Debian 11 Bullseye
===========================

Setup:

1. # echo 'deb http://download.proxmox.com/debian/devel/ bullseye main' | sudo tee /etc/apt/sources.list.d/proxmox-devel.list
2. # sudo wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
3. # sudo apt update
4. # sudo apt install devscripts debcargo clang
5. # git clone git://git.proxmox.com/git/proxmox-backup.git
6. # cd proxmox-backup; sudo mk-build-deps -ir

Note: step 2 may be skipped if you have already added the PVE or PBS package
repository.

You are now able to build using the Makefile or cargo itself, e.g.::

  # make deb-all
  # # or for a non-package build
  # cargo build --all --release

Design Notes
************

Here are some random thoughts about the software design (unless I find a better
place).

Large chunk sizes
=================

It is important to note that large chunk sizes are crucial for performance.
We have a multi-user system, where different people can do different operations
on a datastore at the same time, and most operations involve reading a series
of chunks.

So what is the maximum theoretical speed we can get when reading a series of
chunks? Reading a chunk sequence needs the following steps:

- seek to the first chunk's start location
- read the chunk data
- seek to the next chunk's start location
- read the chunk data
- ...

Let's use the following disk performance metrics:

:AST: Average Seek Time (seconds)
:MRS: Maximum sequential Read Speed (bytes/second)
:ACS: Average Chunk Size (bytes)

The maximum performance you can get is::

  MAX(ACS) = ACS / (AST + ACS/MRS)

Please note that chunk data is likely to be sequentially arranged on disk, but
this is sort of a best-case assumption.

For a typical rotational disk, we assume the following values::

  AST: 10ms
  MRS: 170MB/s

  MAX(4MB)  = 115.37 MB/s
  MAX(1MB)  =  61.85 MB/s
  MAX(64KB) =   6.02 MB/s
  MAX(4KB)  =   0.39 MB/s
  MAX(1KB)  =   0.10 MB/s

Modern SSDs are much faster; let's assume the following::

  max IOPS: 20000 => AST = 0.00005
  MRS: 500MB/s

  MAX(4MB)  = 474 MB/s
  MAX(1MB)  = 465 MB/s
  MAX(64KB) = 354 MB/s
  MAX(4KB)  =  67 MB/s
  MAX(1KB)  =  18 MB/s
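
For a quick cross-check, the formula above can be evaluated with a small,
self-contained Rust sketch (the exact figures depend on whether MB or MiB is
meant, so the output may differ slightly from the numbers quoted above)::

  // Maximum theoretical read speed for a given average chunk size (ACS),
  // average seek time (AST) and maximum sequential read speed (MRS).
  // Sizes in bytes, times in seconds, speeds in bytes per second.
  fn max_read_speed(acs: f64, ast: f64, mrs: f64) -> f64 {
      acs / (ast + acs / mrs)
  }

  fn main() {
      let mib = 1024.0 * 1024.0;
      // (AST, MRS) pairs for the rotational disk and SSD examples above
      let hdd = (0.010, 170.0e6);
      let ssd = (0.00005, 500.0e6);

      for &acs in &[4.0 * mib, 1.0 * mib, 64.0 * 1024.0, 4096.0, 1024.0] {
          println!(
              "ACS {:>9.0} B: HDD {:>7.2} MiB/s, SSD {:>7.2} MiB/s",
              acs,
              max_read_speed(acs, hdd.0, hdd.1) / mib,
              max_read_speed(acs, ssd.0, ssd.1) / mib,
          );
      }
  }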

Also, the average chunk size directly relates to the number of chunks produced
by a backup::

  CHUNK_COUNT = BACKUP_SIZE / ACS

Here are some statistics from my developer workstation::

  Disk Usage:    65 GB
  Directories:   58971
  Files:        726314
  Files < 64KB: 617541

As you can see, there are a lot of small files. If we did file-level
deduplication, i.e. generated one chunk per file, we would end up with more
than 700000 chunks.

Instead, our current algorithm only produces large chunks with an average
chunk size of 4MB. With the above data, this produces about 15000 chunks
(a factor of about 50 fewer chunks).
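
As a rough sanity check with the numbers above (decimal units, so the figures
are only approximate)::

  CHUNK_COUNT ≈ 65 GB / 4 MB ≈ 16000
  726314 files / ~15000 chunks ≈ 48, i.e. roughly a factor of 50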
|