The new SizeUnit type takes over the auto-scaling logic and could also
be used on its own.
Switch the internal type of HumanByte from u64 to f64. This slightly
reduces the range of sizes we can represent (there's no unsigned float
type, after all), but we now support pebibytes with quite good
precision, and exbibytes should also work out OK; that really should
have us covered for a while.
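To give a feeling for the f64 precision (a quick back-of-the-envelope
check of mine, not part of the patch): f64 has a 53-bit significand, so
whole byte values stay exact up to 2^53 bytes, i.e., 8 PiB:

    fn main() {
        // f64 represents every integer up to 2^53 exactly; 2^53 B = 8 PiB
        let max_exact = 2f64.powi(53);
        assert_eq!(max_exact as u64, 9_007_199_254_740_992);
        // one byte more can no longer be distinguished:
        assert_eq!(max_exact + 1.0, max_exact);
        // at the exbibyte scale (2^60) the granularity is 2^8 = 256 bytes,
        // which is still plenty for human-readable display purposes
    }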
Partially adapted from Dietmar's version, but split up and changed so
that:
* there's no None variant, as that does not make much sense for a
  SizeUnit
* print the unit for bytes too, for better consistency; one can still
  use as_u64() or as_f64() if they do not want/need the unit rendered
* left the "From usize/u64" impls intact, just convenient to have and
avoids all over the tree changes to adapt to loosing that
* moved auto-scaling into SizeUnit, it is a good fit there and I could
  see some re-use potential for non-HumanByte users in the future (see
  the sketch after this list)
* impl Display for SizeUnit instead of a separate unit_str method, for
  better usability, as it can be used directly in format! (with zero
  alloc/copy); I saw no real reason not to have it this way
* switched the places where HumanByte auto-scales to the new_X helpers,
  which allows for slightly less code and simplifies the implementation
  where possible
* use rounding for the precision-limit algorithm. This is an awkward
  problem, as in practice there are cases requiring each variant:
  - flooring would be good for limits: better less than too much
  - ceiling would be good for file sizes: too little can mean ENOSPC
    and users getting angry if their working value is messed with
  - rounding can be good for rendering benchmarks: closer to reality
    and with no real impact
  So always going for rounding is really not the best solution, but a
  workable compromise.
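To illustrate the direction, a minimal sketch of such a SizeUnit (names
and details are assumptions for illustration, not the actual
implementation; binary units only):

    use std::fmt;

    #[derive(Clone, Copy, Debug)]
    pub enum SizeUnit {
        Byte,
        KByte,
        MByte,
        GByte,
        TByte,
        PByte,
    }

    impl SizeUnit {
        /// Scaling factor of this unit relative to one byte.
        pub fn factor(&self) -> f64 {
            match self {
                SizeUnit::Byte => 1.0,
                SizeUnit::KByte => 1024.0,
                SizeUnit::MByte => 1024.0 * 1024.0,
                SizeUnit::GByte => 1024.0 * 1024.0 * 1024.0,
                SizeUnit::TByte => 1024.0f64.powi(4),
                SizeUnit::PByte => 1024.0f64.powi(5),
            }
        }

        /// Pick the largest unit that keeps the scaled value >= 1.
        pub fn auto_scale(size: f64) -> SizeUnit {
            if size >= 1024.0f64.powi(5) {
                SizeUnit::PByte
            } else if size >= 1024.0f64.powi(4) {
                SizeUnit::TByte
            } else if size >= 1024.0f64.powi(3) {
                SizeUnit::GByte
            } else if size >= 1024.0 * 1024.0 {
                SizeUnit::MByte
            } else if size >= 1024.0 {
                SizeUnit::KByte
            } else {
                SizeUnit::Byte
            }
        }
    }

    // Display instead of a separate unit_str() method, so the unit can
    // be used directly in format!() with zero extra allocations.
    impl fmt::Display for SizeUnit {
        fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
            f.write_str(match self {
                SizeUnit::Byte => "B",
                SizeUnit::KByte => "KiB",
                SizeUnit::MByte => "MiB",
                SizeUnit::GByte => "GiB",
                SizeUnit::TByte => "TiB",
                SizeUnit::PByte => "PiB",
            })
        }
    }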
Some of those changes were naturally opinionated; if there's a good
practical reason we can switch back (or to something completely
different).
The single thing I kept and am not _that_ happy with is being able to
have fractional bytes (1.1 B or even 0.01 B), which just does not make
much sense, as most of those values cannot exist at all in reality - I
say most, as multiples of 1/8 byte can exist: those are bits.
Note that the precision also changed from a fixed 2 to a maximum of 3
(trailing zeros stripped); while that can be nice, we should see if we
can get a better precision-limiting algorithm, e.g., directly in the
printer. Rust sadly does not support "limit to a precision of 3 but
avoid trailing zeros", so we'd need to adapt its Grisu-based algorithm
on our own - way too much complexity for this, though.
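One naive way to get "at most 3 fractional digits without trailing
zeros" is to post-process the formatted string (a sketch of the general
idea, not necessarily what the code does):

    fn format_max3(v: f64) -> String {
        let s = format!("{:.3}", v);
        // strip trailing zeros, then a possibly dangling decimal point
        s.trim_end_matches('0').trim_end_matches('.').to_string()
    }

    fn main() {
        assert_eq!(format_max3(1.5), "1.5");      // "1.500" -> "1.5"
        assert_eq!(format_max3(2.0), "2");        // "2.000" -> "2"
        assert_eq!(format_max3(1.2346), "1.235"); // rounds, does not truncate
    }

This allocates an intermediate String, which is exactly the kind of
overhead a printer-level solution would avoid.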
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
``rustup`` Toolchain
====================
We normally want to build with the ``rustc`` Debian package. To do that
you can set the following ``rustup`` configuration::

  # rustup toolchain link system /usr
  # rustup default system
Versioning of proxmox helper crates
===================================
To use current git master code of the proxmox* helper crates, add::

   git = "git://git.proxmox.com/git/proxmox"

or::

   path = "../proxmox/proxmox"

to the proxmox dependency, and update the version to reflect the current,
pre-release version number (e.g., "0.1.1-dev.1" instead of "0.1.0").
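For example, the resulting dependency entry could look roughly like this
(the version number is only illustrative)::

   [dependencies]
   proxmox = { version = "0.1.1-dev.1", path = "../proxmox/proxmox" }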
Local cargo config
==================
This repository ships with a ``.cargo/config`` that replaces the crates.io
registry with packaged crates located in ``/usr/share/cargo/registry``.
A similar config is also applied when building with dh_cargo. Cargo.lock
needs to be deleted when switching between packaged crates and crates.io,
since the checksums are not compatible.
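For reference, such a registry replacement in ``.cargo/config`` looks
roughly like this (a sketch of the usual Debian setup; the file shipped
in this repository is authoritative)::

   [source.crates-io]
   replace-with = "debian-packages"

   [source.debian-packages]
   directory = "/usr/share/cargo/registry"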
To reference new dependencies (or updated versions) that are not yet packaged,
the dependency needs to point directly to a path or git source (e.g., see
example for proxmox crate above).
Build
=====
on Debian Buster
Setup:
1. # echo 'deb http://download.proxmox.com/debian/devel/ buster main' >> /etc/apt/sources.list.d/proxmox-devel.list
2. # sudo wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
3. # sudo apt update
4. # sudo apt install devscripts debcargo clang
5. # git clone git://git.proxmox.com/git/proxmox-backup.git
6. # sudo mk-build-deps -ir
Note: step 2 may be skipped if you already added the PVE or PBS package
repository.
You are now able to build using the Makefile or cargo itself.
Design Notes
============
Here are some random thoughts about the software design (unless I find a better place).
Large chunk sizes
-----------------
It is important to note that large chunk sizes are crucial for
performance. We have a multi-user system, where different people can do
different operations on a datastore at the same time, and most
operations involve reading a series of chunks.
So what is the maximal theoretical speed we can get when reading a
series of chunks? Reading a chunk sequence needs the following steps:

- seek to the first chunk start location
- read the chunk data
- seek to the next chunk start location
- read the chunk data
- ...
Let's use the following disk performance metrics:

:AST: Average Seek Time (second)
:MRS: Maximum sequential Read Speed (bytes/second)
:ACS: Average Chunk Size (bytes)
The maximum performance you can get is::

  MAX(ACS) = ACS / (AST + ACS/MRS)
Please note that chunk data is likely to be arranged sequentially on
disk, but this is sort of a best-case assumption.
For a typical rotational disk, we assume the following values::

  AST: 10ms
  MRS: 170MB/s

  MAX(4MB)  = 115.37 MB/s
  MAX(1MB)  =  61.85 MB/s
  MAX(64KB) =   6.02 MB/s
  MAX(4KB)  =   0.39 MB/s
  MAX(1KB)  =   0.10 MB/s
Modern SSDs are much faster; let's assume the following::

  max IOPS: 20000 => AST = 0.00005
  MRS: 500MB/s

  MAX(4MB)  = 474 MB/s
  MAX(1MB)  = 465 MB/s
  MAX(64KB) = 354 MB/s
  MAX(4KB)  =  67 MB/s
  MAX(1KB)  =  18 MB/s
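Both tables can be approximated with a small standalone calculation like
the following (a sketch; depending on the exact unit conventions, MB as
10^6 vs 2^20 bytes, the figures differ slightly from the ones above)::

   // MAX(ACS) = ACS / (AST + ACS/MRS), result in bytes/second
   fn max_throughput(acs: f64, ast: f64, mrs: f64) -> f64 {
       acs / (ast + acs / mrs)
   }

   fn main() {
       let chunk_sizes = [4e6, 1e6, 64e3, 4e3, 1e3];
       for &(name, ast, mrs) in &[("disk", 0.01, 170e6), ("ssd", 0.00005, 500e6)] {
           for &acs in &chunk_sizes {
               println!(
                   "{}: MAX({} B) = {:.2} MB/s",
                   name,
                   acs,
                   max_throughput(acs, ast, mrs) / 1e6
               );
           }
       }
   }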
Also, the average chunk size directly relates to the number of chunks
produced by a backup::

  CHUNK_COUNT = BACKUP_SIZE / ACS
Here are some statistics from my developer workstation::

  Disk Usage:   65 GB
  Directories:  58971
  Files:        726314
  Files < 64KB: 617541
As you can see, there are a lot of small files. If we did file-level
deduplication, i.e. generated one chunk per file, we would end up with
more than 700000 chunks.

Instead, our current algorithm only produces large chunks with an
average chunk size of 4MB. With the above data, this produces about
15000 chunks (a factor of 50 fewer chunks).