
Build & Release Notes
*********************

``rustup`` Toolchain
====================

We normally want to build with the ``rustc`` Debian package. To do that
you can set the following ``rustup`` configuration::

    # rustup toolchain link system /usr
    # rustup default system


Versioning of proxmox helper crates
===================================

To use current git master code of the proxmox* helper crates, add::

   git = "git://git.proxmox.com/git/proxmox"

or::

   path = "../proxmox/proxmox"

to the proxmox dependency, and update the version to reflect the current,
pre-release version number (e.g., "0.1.1-dev.1" instead of "0.1.0").
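
For example, a ``Cargo.toml`` entry pointing the ``proxmox`` dependency at a
local checkout could look roughly like this (path and version are only
illustrative)::

   [dependencies]
   proxmox = { version = "0.1.1-dev.1", path = "../proxmox/proxmox" }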


Local cargo config
==================

This repository ships with a ``.cargo/config`` that replaces the crates.io
registry with packaged crates located in ``/usr/share/cargo/registry``.
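
For reference, such a registry replacement looks roughly like the following
sketch (the source name is illustrative; see the shipped file for the exact
contents)::

   [source.debian-packages]
   directory = "/usr/share/cargo/registry"

   [source.crates-io]
   replace-with = "debian-packages"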

A similar config is also applied when building with ``dh_cargo``. ``Cargo.lock``
needs to be deleted when switching between packaged crates and crates.io, since
the checksums are not compatible.

To reference new dependencies (or updated versions) that are not yet packaged,
the dependency needs to point directly to a path or git source (e.g., see
example for proxmox crate above).


Build
=====
on Debian 11 Bullseye

Setup:
  1. # echo 'deb http://download.proxmox.com/debian/devel/ bullseye main' | sudo tee /etc/apt/sources.list.d/proxmox-devel.list
  2. # sudo wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
  3. # sudo apt update
  4. # sudo apt install devscripts debcargo clang
  5. # git clone git://git.proxmox.com/git/proxmox-backup.git
  6. # cd proxmox-backup; sudo mk-build-deps -ir

Note: Step 2 may be skipped if you already added the PVE or PBS package repository.

You are now able to build using the Makefile or cargo itself, e.g.::

  # make deb-all
  # # or for a non-package build
  # cargo build --all --release

Design Notes
************

Here are some random thoughts about the software design (until I find a better place for them).


Large chunk sizes
=================

It is important to note that large chunk sizes are crucial for performance.
We have a multi-user system, where different people can run different operations
on a datastore at the same time, and most operations involve reading a series
of chunks.

So what is the maximum theoretical speed we can get when reading a series of
chunks? Reading a chunk sequence needs the following steps:

- seek to the first chunk's start location
- read the chunk data
- seek to the next chunk's start location
- read the chunk data
- ...

Let's use the following disk performance metrics:

:AST: Average Seek Time (seconds)
:MRS: Maximum sequential Read Speed (bytes/second)
:ACS: Average Chunk Size (bytes)

The maximum performance you can get is::

  MAX(ACS) = ACS / (AST + ACS/MRS)

Please note that chunk data is likely to be arranged sequentially on disk, but
this is a best-case assumption.

For a typical rotational disk, we assume the following values::

  AST: 10ms
  MRS: 170MB/s

  MAX(4MB)  = 115.37 MB/s
  MAX(1MB)  =  61.85 MB/s;
  MAX(64KB) =   6.02 MB/s;
  MAX(4KB)  =   0.39 MB/s;
  MAX(1KB)  =   0.10 MB/s;

Modern SSDs are much faster; let's assume the following::

  max IOPS: 20000 => AST = 0.00005
  MRS: 500MB/s

  MAX(4MB)  = 474 MB/s
  MAX(1MB)  = 465 MB/s;
  MAX(64KB) = 354 MB/s;
  MAX(4KB)  =  67 MB/s;
  MAX(1KB)  =  18 MB/s;
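
As a quick cross-check, the values above can be roughly reproduced with a small
standalone sketch (not part of the code base; chunk sizes in binary units, MRS
in bytes/second)::

  fn max_read_speed(acs: f64, ast: f64, mrs: f64) -> f64 {
      // MAX(ACS) = ACS / (AST + ACS/MRS), result in bytes per second
      acs / (ast + acs / mrs)
  }

  fn main() {
      let mib = 1024.0 * 1024.0;
      // rotational disk values from above; use 0.00005 and 500e6 for the SSD case
      let (ast, mrs) = (0.010, 170_000_000.0);
      for &acs in &[4.0 * mib, mib, 64.0 * 1024.0, 4096.0, 1024.0] {
          println!("MAX({:>9} B) = {:6.2} MiB/s", acs as u64, max_read_speed(acs, ast, mrs) / mib);
      }
  }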


Also, the average chunk size directly relates to the number of chunks produced
by a backup::

  CHUNK_COUNT = BACKUP_SIZE / ACS

Here are some statistics from my developer workstation::

  Disk Usage:       65 GB
  Directories:   58971
  Files:        726314
  Files < 64KB: 617541

As you can see, there are really many small files. If we did file-level
deduplication, i.e. generated one chunk per file, we would end up with
more than 700000 chunks.

Instead, our current algorithm only produces large chunks with an
average chunk size of 4MB. With the above data, this produces about 15000
chunks (a factor of 50 fewer).