Compare commits


159 Commits

Author SHA1 Message Date
77d634710e bump version to 0.8.7-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-14 12:05:34 +02:00
5c5181a252 d/lintian-overrides: ignore systemd-service-file-refers-to-unusual-wantedby-target
proxmox-backup-banner.service needs getty.target

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-14 11:08:36 +02:00
67042466e8 ui: datastore edit: avoid an extra indentation level
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-14 10:56:36 +02:00
757d0ccc76 warning fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-14 10:37:14 +02:00
4a55fa87d5 bump version to 0.8.7-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-14 10:25:53 +02:00
032cd1b862 pxar: restore file attributes, improve errors
and use the correct integer types for these operations

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-14 10:25:45 +02:00
ec2434fe3c ui: buildsys: add lint target
not yet automatically called on build, as it still fails.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-14 07:43:01 +02:00
34389132d9 docs: installation: add note where to find the webinterface
As the 8007 (vs. 8006) port is new, it could confuse people, especially
if they did not use the PBS installer.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-14 07:35:59 +02:00
78ee20d72d docs: fix typo s/PBS_REPOSTOR/PBS_REPOSITOR/
Reported-by: Piotr Paszkowski aka patefoniQ
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-13 19:23:50 +02:00
601e42ac35 ui: running tasks: update limit to 100
else we'll never see the 99+ tasks ..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-11 12:53:32 +02:00
e1897b363b docs: add secure-apt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 14:12:51 +02:00
cf063c1973 bump version to 0.8.6-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 11:35:04 +02:00
f58233a73a src/backup/data_blob_reader.rs: avoid unwrap() - return error instead 2020-07-10 11:28:19 +02:00
d257c2ecbd ui: fingerprint: add icon to copy button
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 11:17:20 +02:00
e4ee7b7ac8 ui: fingerprint: add copy button
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 11:13:54 +02:00
1f0d23f792 ui: add show fingerprint button to dashboard
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-10 11:08:53 +02:00
bfcef26a99 api2/node/status: add fingerprint
and rename get_usage to get_status (since it's not usage-only anymore)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-10 11:08:53 +02:00
ec01eeadc6 refactor CertInfo to tools
we want to reuse some of the functionality elsewhere

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-10 11:08:53 +02:00
660a34892d update proxmox crate to 0.2.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-10 11:08:27 +02:00
d86034afec src/bin/proxmox_backup_client/catalog.rs: fix keyfile handling 2020-07-10 10:36:45 +02:00
62593aba1e src/backup/manifest.rs: fix signature (exclude 'signature' property) 2020-07-10 10:36:45 +02:00
0eaef8eb84 client: show key path when creating/changing default key
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-10 09:58:24 +02:00
e39974afbf client: add simple version command
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 09:34:07 +02:00
dde18bbb85 proxmox-backup-client benchmark: improve output format 2020-07-10 09:13:52 +02:00
a40e1b0e8b src/server/rest.rs: avoid compiler warning 2020-07-10 09:13:52 +02:00
a0eb0cd372 ui: running task: increase active limit we show in badge to 99
Two digits fit nicely, and the extra plus for the >99 case doesn't
take that much space either. That, and the fact that 9 is just
really low, makes me bump this to 99 as the cut-off value.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 21:56:46 +02:00
41067870c6 ui: tune badge styling a bit
the idea is to blend in when no task is running, thus no
background-color there. When tasks are running, use the Proxmox
branding guideline dark-grey; it isn't used as often, so it should
catch one's eye when it changes, but it has some use, so it doesn't
seem out of place.

Reduce the border radius by a lot, so that it is similar to the
one our ExtJS theme uses for buttons. The original border radius
seems to come from the time when this was intended to be a
floating badge; there it would make sense, but as an integrated
button this fits the style much better.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 21:51:25 +02:00
33a87bc39a docs: reference PDF variant in HTML output
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 21:31:38 +02:00
bed3e15f16 debian/proxmox-backup-docs.links: fix name and target
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 21:23:41 +02:00
c687da9e8e datastore: chown base dir on creation
When creating a new datastore the base dir is only owned by the backup
user if it did not exist beforehand (create_path chowns only if it
creates the directory, and returns false if it did not create the
directory).

This improves the experience when adding a new datastore on a fresh
disk or existing directory (not owned by backup) - backups/pulls can
be run instead of terminating with EPERM.

Tested on my local test install with a new disk, and an existing directory:

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-07-09 18:20:16 +02:00
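
A minimal Rust sketch of that idea, assuming the nix crate for the chown
call and a system user named "backup"; the actual code uses proxmox's
create_path helper, so the names here are illustrative only:

    use std::path::Path;

    // ensure the datastore base dir exists *and* is owned by the backup
    // user, even when the directory pre-existed (a create-time chown
    // alone would skip that case)
    fn ensure_backup_owned(path: &Path) -> Result<(), Box<dyn std::error::Error>> {
        std::fs::create_dir_all(path)?;
        let user = nix::unistd::User::from_name("backup")?
            .ok_or("no 'backup' system user")?;
        nix::unistd::chown(path, Some(user.uid), Some(user.gid))?;
        Ok(())
    }
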
be30e7d269 ui: dashboard/TaskSummary: fade icons if count is zero
so that users can see the relevant counts faster

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 17:10:47 +02:00
106603c58f ui: fix crypt mode calculation
also include 'mixed' in the calculation of the overall mode of a
snapshot and group

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 17:09:56 +02:00
7ba2c1c386 docs: add initial basic software stack definition
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 17:09:05 +02:00
4327a8462a proxmox-backup-client benchmark: add more speed tests 2020-07-09 17:07:22 +02:00
e193544b8e src/server/rest.rs: disable debug logs 2020-07-09 16:18:14 +02:00
323b2f3dd6 proxmox-backup-client benchmark: add --verbose flag 2020-07-09 16:16:39 +02:00
7884e7ef4f bump version to 0.8.5-1 2020-07-09 15:35:07 +02:00
fae11693f0 fix cross process task listing
it does not make sense to check if the worker is running if we already
have an endtime and state

our 'worker_is_active_local' heuristic returns true for
non-process-local tasks, so we got 'running' for all tasks that were
not started by 'our' pid and were still running

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 15:30:52 +02:00
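
A compact sketch of the corrected check, with illustrative types: a
recorded endtime and state settle the question before any process-local
liveness heuristic is consulted.

    struct TaskListEntry {
        upid: String,
        endtime: Option<i64>,
        state: Option<String>,
    }

    fn task_is_running(
        entry: &TaskListEntry,
        worker_is_active_local: impl Fn(&str) -> bool,
    ) -> bool {
        // finished by definition, no matter which pid started it
        if entry.endtime.is_some() && entry.state.is_some() {
            return false;
        }
        // only now fall back to the process-local heuristic
        worker_is_active_local(&entry.upid)
    }
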
22231524e2 docs: expand datastore documentation
document retention settings and schedules per datastore with
some minimal examples.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-07-09 15:04:26 +02:00
9634ca07db docs: add remotes and sync-jobs and schedules
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-07-09 15:04:26 +02:00
62f6a7e3d9 bump pathpatterns to 0.1.2
Fixes `**/foo` not matching "foo" without slashes.
(`**/lost+found` now matches the `lost+found` dir at the
root of our tree properly).

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-09 14:34:10 +02:00
86443141b5 ui: align version and user-menu spacing with pve/pmg
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 14:31:21 +02:00
f6e964b96e ui: make username a menu-button
like we did in PVE and PMG

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 14:31:21 +02:00
c8bed1b4d7 bump version to 0.8.4-1 2020-07-09 14:28:44 +02:00
a3970d6c1e ui: add TaskButton in header
opens a grid with the running tasks and a shortcut to the node tasks

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 14:26:57 +02:00
cc83c13660 ui: add RunningTasksStore
so that we have a global store for running tasks

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 14:26:57 +02:00
bf7e2a4648 simpler lost+found pattern
the **/ is not required, and it currently also mistakenly
doesn't match /lost+found, which is probably a bug on the
pathpatterns crate side and needs fixing there

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-09 14:06:42 +02:00
e284073e4a bump version to 0.8.3-1 2020-07-09 13:55:15 +02:00
3ec99affc8 get_disks: don't fail on zfs_devices
zfs does not have to be installed, so simply log an error and
continue; users still get an error when clicking directly on
ZFS

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:47:31 +02:00
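
A hedged sketch of that degradation, with a hypothetical
get_zfs_devices helper standing in for the real enumeration: the error
is logged and an empty list returned, instead of failing the whole
disk listing.

    // hypothetical stand-in for the real zfs device enumeration
    fn get_zfs_devices() -> Result<Vec<String>, std::io::Error> {
        Err(std::io::Error::new(
            std::io::ErrorKind::NotFound,
            "zfs not installed",
        ))
    }

    fn main() {
        // degrade gracefully: log and continue with an empty device list
        let zfs_devices = get_zfs_devices().unwrap_or_else(|err| {
            eprintln!("error getting zfs devices: {}", err);
            Vec::new()
        });
        println!("{} zfs devices", zfs_devices.len());
    }
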
a9649ddc44 disks/zpool_status: add test for pool with special character
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:37:31 +02:00
4f9096a211 disks/zpool_list: allow some more characters for pool list
not exhaustive of what zfs allows (space is missing), but these
can be added easily without problems

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:37:31 +02:00
c3a4b5e2e1 zpool_list: add tests for special pool names
those names are allowed for zpools

these tests will fail for now, but this will be fixed in the next commit

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:37:31 +02:00
7957fabff2 api: add ZPOOL_NAME_SCHEMA and regex
pool names can contain spaces and some other special characters

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:37:31 +02:00
20a4e4e252 minor optimization to 'to_canonical_json'
* don't clone hash keys, just use references
* we don't need a String, stick to Vec<u8> and use
  serde_json::to_writer to avoid temporary strings
  altogether

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-09 13:32:11 +02:00
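
A sketch of a canonical serializer along those two points (borrowed
keys, serde_json::to_writer straight into a Vec<u8>); this illustrates
the idea rather than the exact code:

    use serde_json::Value;

    fn to_canonical_json(value: &Value, output: &mut Vec<u8>) -> serde_json::Result<()> {
        match value {
            Value::Object(map) => {
                // borrow the keys instead of cloning them
                let mut keys: Vec<&str> = map.keys().map(String::as_str).collect();
                keys.sort_unstable();
                output.push(b'{');
                for (i, key) in keys.iter().enumerate() {
                    if i > 0 {
                        output.push(b',');
                    }
                    serde_json::to_writer(&mut *output, key)?; // quoted key
                    output.push(b':');
                    to_canonical_json(&map[*key], &mut *output)?;
                }
                output.push(b'}');
            }
            Value::Array(list) => {
                output.push(b'[');
                for (i, item) in list.iter().enumerate() {
                    if i > 0 {
                        output.push(b',');
                    }
                    to_canonical_json(item, &mut *output)?;
                }
                output.push(b']');
            }
            // scalars already have one canonical form
            other => serde_json::to_writer(&mut *output, other)?,
        }
        Ok(())
    }
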
2774566b03 ui: adapt for new sign-only crypt mode
we can now show 'none', 'encrypted', 'signed' or 'mixed' for
the crypt mode

also adds a different icon for signed files, and adds a hint that
signatures cannot be verified on the server

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:28:55 +02:00
4459ffe30e src/backup/manifest.rs: add default to make it compatible with older backups 2020-07-09 13:25:38 +02:00
d16ed66c88 bump version to 0.8.2-1 2020-07-09 11:59:10 +02:00
3ec6e249b3 buildsys: also upload debug packages
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 11:39:10 +02:00
dfa517ad6c src/backup/manifest.rs: rename into_string -> to_string
And do not consume self.
2020-07-09 11:28:05 +02:00
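
The rename follows the usual Rust naming convention: into_* methods
consume self, to_* methods borrow. A made-up Manifest to illustrate;
all a pure serialization needs is a borrow:

    struct Manifest {
        json: serde_json::Value,
    }

    impl Manifest {
        // before: fn into_string(self) -> ... (consumed the manifest)
        // after: borrows only, the manifest stays usable afterwards
        fn to_string(&self) -> Result<String, serde_json::Error> {
            serde_json::to_string_pretty(&self.json)
        }
    }
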
8b2ad84a25 bump version to 0.8.1-1 2020-07-09 10:01:31 +02:00
3dacedce71 src/backup/manifest.rs: use serde_json::from_value() to deserialize data
Also modified from_data to compute the signature directly from JSON.
2020-07-09 09:50:28 +02:00
512d50a455 typos
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-09 09:34:58 +02:00
b53f637914 src/backup/manifest.rs: cleanup signature generation 2020-07-09 09:20:49 +02:00
152a926149 tests/blob_writer.rs: make it work again 2020-07-09 09:15:15 +02:00
7f388acea8 ship pbstest repo as sources.list.d file for beta
NOTE: the repo url is not yet working at time of commit, this is a
preparatory step.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 19:09:31 +02:00
b2bfb46835 docs: package repos: drop non-tests for now
they won't work and thus just confuse people, re-add them once we're
releasing final.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 18:17:55 +02:00
24406ebc0c docs: move host sysadmin out to own chapter, fix ZFS one
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 18:15:33 +02:00
1f24d9114c docs: add missing todos
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 18:14:17 +02:00
859fe9c1fb add local-zfs.rst
content is > 90% same as local-zfs.adoc in pve-docs.

adapted the format for .rst

fixed some typos and wrote some parts slightly different (wording).

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-07-08 16:49:40 +02:00
2107a5aebc src/backup/manifest.rs: include signature inside the manifest
This is more flexible, because we can choose which fields we want to sign.
2020-07-08 16:23:26 +02:00
3638341aa4 src/backup/file_formats.rs: remove signed chunks
We can include the signature in the manifest instead (patch will follow).
2020-07-08 16:23:26 +02:00
067fe514e6 docs: fix repo paths
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 15:41:09 +02:00
8c6e5ce23c improve administration guide
fixing some typos and grammar errors.

added example file layout for datastores.

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-07-08 14:20:49 +02:00
0351f23ba4 client: introduce --keyfd parameter
This is a more convenient way to pass along the key when
creating encrypted backups of unprivileged containers in PVE
where the unprivileged user namespace cannot access
`/etc/pve/priv`.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 13:56:38 +02:00
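
A plausible sketch of the receiving side, assuming the key is simply
read from the inherited file descriptor (names are illustrative):

    use std::fs::File;
    use std::io::Read;
    use std::os::unix::io::{FromRawFd, RawFd};

    // the caller (e.g. PVE) opens the key and hands over the descriptor,
    // so the client never needs filesystem access to the key itself
    fn read_key_from_fd(fd: RawFd) -> std::io::Result<Vec<u8>> {
        // safety: the caller must pass an open descriptor it owns; the
        // File takes ownership and closes it on drop
        let mut file = unsafe { File::from_raw_fd(fd) };
        let mut key = Vec::new();
        file.read_to_end(&mut key)?;
        Ok(key)
    }
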
c1ff544eff src/backup/crypt_config.rs - compute_digest: make it more secure 2020-07-08 12:53:04 +02:00
69e5d71961 ui: ds/content: disable some button for in-progress backup
We cannot verify, download, or file-browse backups which are
currently in progress.

'Forget' could work but is probably not desirable?

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 12:22:00 +02:00
48e22a8900 ui: ds/content: do not count in-progress backups for last made one
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 12:11:27 +02:00
a7a5f56daa ui: ds/content: show spinner for backups in progress
use the fact that they do not have a size property at all

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 12:09:21 +02:00
05389a0109 more xdg cleanup and encryption parameter improvements
Have a single common function to get the BaseDirectories
instance, and a wrapper for `find()` and `place()` which wraps
the error with some context.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 10:57:28 +02:00
b65390ebc9 client: xdg usage: place() vs find()
place() is used when creating a file, as it will create
intermediate directories; only use it when actually placing
a new file.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 10:57:28 +02:00
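
A sketch of that distinction, assuming the xdg crate's 2.x API:

    use xdg::BaseDirectories;

    fn key_paths() -> Result<(), Box<dyn std::error::Error>> {
        let base = BaseDirectories::with_prefix("proxmox-backup")?;

        // reading: find() only looks up an existing file, no side effects
        if let Some(path) = base.find_config_file("encryption-key.json") {
            println!("existing key: {}", path.display());
        }

        // writing: place() creates intermediate directories as needed
        let target = base.place_config_file("encryption-key.json")?;
        println!("a new key would go to: {}", target.display());
        Ok(())
    }
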
3bad3e6e52 src/client/backup_writer.rs - upload_stream: add crypt_mode 2020-07-08 10:43:28 +02:00
24be37e3f6 client: fix schema to include --crypt-mode parameter
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 10:09:15 +02:00
1008a69a13 pxar: less confusing logic
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 09:58:29 +02:00
521a0acb2e DataStore::load_manifest: also return CryptMode
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 09:19:53 +02:00
3b66040de6 add DataBlob::crypt_mode
and move use statements up

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 09:19:53 +02:00
af3a0ae7b1 remove CryptMode::sign_only special method
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 09:19:53 +02:00
4e36f78438 src/backup/manifest.rs: support old encrypted property
Just to avoid confusion.
2020-07-08 08:52:27 +02:00
f28d9088ed introduce a CryptMode enum
This also replaces the recently introduced --encryption
parameter on the client with a --crypt-mode parameter.

This can be "none", "encrypt" or "sign-only".

Note that this introduces various changes in the API types
which previously did not take the above distinction into
account properly:

Both `BackupContent` and the manifest's `FileInfo`:
    lose `encryption: Option<bool>`
    gain `crypt_mode: Option<CryptMode>`

Within the backup manifest itself, the "crypt-mode" property
will always be set.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-07 15:24:19 +02:00
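
A sketch of such an enum; the serde attribute is an assumption, chosen
so the variants serialize exactly as the values listed above:

    use serde::{Deserialize, Serialize};

    #[derive(Copy, Clone, Debug, Eq, PartialEq, Serialize, Deserialize)]
    #[serde(rename_all = "kebab-case")]
    pub enum CryptMode {
        None,     // "none"
        Encrypt,  // "encrypt"
        SignOnly, // "sign-only"
    }
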
56b814e378 docs: add getting help section
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-07 13:24:39 +02:00
0c136efe30 docs: features: minor wording
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-07 13:23:17 +02:00
cdead6cd12 docs: drop initial out of context sentence
the footer mentions sphinx, and this feels weird to read as a user
(who doesn't really care what language/format the source of the
docs is in)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-07 13:22:03 +02:00
c950826e46 bump version to 0.8.0-1 2020-07-07 10:15:44 +02:00
f91d58e157 src/tools/runtime.rs: implement get_runtime_with_builder 2020-07-07 10:11:04 +02:00
1ff840ffad bump version to 0.7.0-1 2020-07-07 07:40:22 +02:00
7443a6e092 src/client/remote_chunk_reader.rs: implement clone for RemoteChunkReader 2020-07-07 07:34:58 +02:00
3a9988638b docs: move todolist to own document, don't link in release build
It is always built for html, but not linked if the devbuild tag isn't
set. This tag is set in the Makefile if the $(BUILD_MODE) variable
isn't "release".

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-06 14:44:53 +02:00
96ee857752 client: add --encryption boolean parameter
This can be used to explicitly disable encryption even if a
default key file exists in ~/.config.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-06 14:36:04 +02:00
887018bb79 client: use default encryption key if it is available
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-06 14:36:04 +02:00
9696f5193b client: move key management into separate module
and use api macro for methods and Kdf type

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-06 14:36:04 +02:00
e13c4f66bb minor style & whitespace fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-06 10:55:25 +02:00
8a25809573 docs: sync up copyright years
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-04 17:57:47 +02:00
d87b193b0b docs: todo: avoid leaking build details, link only
One can just search for them... If really wanted, we could set it to
true for dev builds (i.e., no DEB_VERSION defined)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-04 17:54:00 +02:00
ea5289e869 d/rules: do not compress .pdf files
as otherwise the docs .pdf is a PITA to use for some end users..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-04 17:53:04 +02:00
1f6a4f587a docs: do not hardcode version
use the Debian package ones; if not defined, we're doing a dev build

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-04 17:51:58 +02:00
705b2293ec d/control: add missing dependencies for lvm, smartmontools and ZFS
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 19:37:43 +02:00
d2c7ef09ba docs: rework and add a bit to introduction
Contributed-by: Daniela Häsler <daniela@proxmox.com>
[ discussed and edited some parts live with me, Thomas ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 18:58:17 +02:00
27f86f997e docs: fix index title
Contributed-by: Daniela Häsler <daniela@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 18:57:04 +02:00
fc93d38076 ui: ZFS create: set name-field minLength to 3 to match backend
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 18:03:51 +02:00
a5a85d41ff ui: ZFS create: use correct typeParameter name for disk selector
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 18:00:12 +02:00
08cb2038bd api: disks: indentation fixup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 17:59:30 +02:00
6f711c1737 ui: ZFS list: fix details top-bar button handler
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 17:20:33 +02:00
42ec9f577f ui: buildsys: actually include PBS.window.ZFSCreate component in source
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 17:19:59 +02:00
9de69cdb1a src/bin/proxmox_backup_client/catalog.rs: split out catalog code 2020-07-03 16:45:47 +02:00
bd260569d3 ui: fix glitch on some zoom steps
if the baseCls is not 'x-plain', the background of the flex
element is white, and on some zoom steps it gets taller
than one pixel and appears as a white line

make it have the plain baseCls, so it does not get any
background color and is always invisible

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-03 13:05:19 +02:00
36cb4b30ef add beta text with link to bugtracker
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-03 13:05:08 +02:00
4e717240bf bump version to 0.6.0-1 2020-07-03 09:46:19 +02:00
e9764238df make ReadChunk not require mutable self.
That way we can reduce lock contention, because we lock for much
shorter times. 2020-07-03 07:37:29 +02:00
2020-07-03 07:37:29 +02:00
26f499b17b ui: increase timeout for snapshot listing
the api call can take a very long time (for now); until we can
improve that, increase the timeout from the default of 30s

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-03 06:14:21 +02:00
cc7995ac40 src/bin/proxmox_backup_client/task.rs: split out task command 2020-07-02 18:04:29 +02:00
43abba4b4f src/bin/proxmox_backup_client/mount.rs: split out mount code 2020-07-02 17:49:59 +02:00
58f950c546 ui: consistently spell Datastore without space between words
No hard feelings on 'Datastore' vs. 'Data Store', but consistency
is desired in such names.
Talked shortly with Dominik, who also slightly favored the one
without the space - so just go for that one.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-02 17:20:41 +02:00
c426e65893 ui: disk create: sync and improve 'add-datastore' checkbox label
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-02 17:06:37 +02:00
caea8d611f proxmox-backup-client: add benchmark command
This is just a start; we need to add more useful things here...
2020-07-02 14:01:57 +02:00
7d0754a6d2 pxar: fixup 'vanished-file' logic a bit
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-30 14:41:42 +02:00
5afa0755ea pxar: fix missing newlines in warnings
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-30 14:37:20 +02:00
40b63186a6 DataStoreConfig.js: add verify button 2020-06-30 13:28:42 +02:00
8f6088c130 DataStoreContent.js: add verify button 2020-06-30 13:22:02 +02:00
2162e2c15d src/api2/admin/datastore.rs: avoid slash in UPID strings 2020-06-30 13:11:22 +02:00
0d5ab04a90 bump version to 0.5.0-1 2020-06-29 13:01:11 +02:00
4059285649 fix typo 2020-06-29 12:59:25 +02:00
2e079b8bf2 partially revert commit 1f82f9b7b5
do it in a backwards-compatible way. Also, the code was wrong because
FixedIndexWriter still computed old-style csums...
2020-06-29 12:44:45 +02:00
4ff2c9b832 ui: allow to Forget (delete) backup snapshots. 2020-06-26 15:58:06 +02:00
a8e2940ff3 pxar: deal with files changing size during archiving
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-26 11:49:51 +02:00
d5d5f2174e bump version to 0.4.0-1 2020-06-26 10:43:52 +02:00
2311238450 depend on proxmox 0.1.41 2020-06-26 10:40:47 +02:00
2ea501ffdf ui: add ZFS management
adds a ZFSList and ZFSCreate class, modeled after the one in pve

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 10:33:23 +02:00
4eb4e94918 fix test output
field separator for pools is always a tab when using -H

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 10:31:11 +02:00
817bcda848 src/backup/verify.rs: do not stop on server shutdown
This is a read-only task, so there is no need to stop.
2020-06-26 09:45:59 +02:00
f6de2c7359 WorkerTask: add warnings and count them
so that we have one more level between errors and OK

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 09:42:11 +02:00
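
A small sketch of the idea, with illustrative names: warnings are
counted, and the final task state reports them instead of a plain OK.

    use std::sync::atomic::{AtomicUsize, Ordering};

    pub struct WorkerTask {
        warn_count: AtomicUsize,
    }

    impl WorkerTask {
        pub fn log_warning(&self, msg: &str) {
            eprintln!("WARN: {}", msg);
            self.warn_count.fetch_add(1, Ordering::SeqCst);
        }

        // the in-between level: neither an error nor a clean OK
        pub fn final_state(&self) -> String {
            match self.warn_count.load(Ordering::SeqCst) {
                0 => "OK".to_string(),
                n => format!("WARNINGS: {}", n),
            }
        }
    }
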
3f0b9c10ec ui: dashboard: remove 'wobbling' of tasks that have the same duration
by sorting them by upid after sorting by duration

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 09:13:33 +02:00
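
The change itself is in the ExtJS sorter, but the idea in a compact
Rust sketch (field names assumed): rows with equal durations get a
deterministic order via the upid tie-breaker and stop swapping places.

    struct TaskRow {
        duration: u64,
        upid: String,
    }

    fn sort_tasks(tasks: &mut [TaskRow]) {
        tasks.sort_by(|a, b| {
            a.duration
                .cmp(&b.duration)
                // tie-breaker: deterministic order for equal durations
                .then_with(|| a.upid.cmp(&b.upid))
        });
    }
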
2b66abbfab ui: dashboard: use last value for holes in history graph
it is only designed to be a quick overview, so having holes there
is not really pretty, and since we do not even show any date
for the points, we can simply reuse the last value for holes

the 'real' graph with holes is still available on the
DataStoreStatistics panel

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 09:13:16 +02:00
402c8861d8 fix typo
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 09:12:29 +02:00
3f683799a8 improve 'debug' parameter
instead of checking for '1' or 'true', check that it is there and not
'0' or 'false'. this allows simply using

https://foo:8007/?debug

instead of

https://foo:8007/?debug=1

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 09:12:14 +02:00
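
The check boils down to "present and not explicitly disabled"; a
minimal sketch:

    fn debug_enabled(query_value: Option<&str>) -> bool {
        match query_value {
            None => false,                       // parameter absent
            Some(v) => v != "0" && v != "false", // "?debug" yields Some("")
        }
    }

    fn main() {
        assert!(debug_enabled(Some("")));  // https://foo:8007/?debug
        assert!(debug_enabled(Some("1"))); // https://foo:8007/?debug=1
        assert!(!debug_enabled(Some("0")));
        assert!(!debug_enabled(None));
    }
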
573bcd9a92 ui: automatically add 'localhost' as nodename for all panels
this will make refactoring easier for panels that are reused from pve
(where we always have a hostname)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 09:11:36 +02:00
90779237ae ui: show proper loadMask for DataStoreContent
we have to use the correct store, and we have to manually show the
error (since monStoreErrors only works for Proxmox Proxies)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 09:11:10 +02:00
1f82f9b7b5 src/backup/index.rs: add compute_csum
And use it for fixed and dynamic index. Please note that this
changes checksums for fixed indexes, so restoring older backups
will fail now (not backward compatible). 2020-06-26 09:00:34 +02:00
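
A sketch of what such an index checksum could look like, assuming the
sha2 crate and hashing the per-chunk digests in order; the changed
input layout is exactly the compatibility break the commit warns about.

    use sha2::{Digest, Sha256};

    fn compute_csum(chunk_digests: &[[u8; 32]]) -> [u8; 32] {
        let mut hasher = Sha256::new();
        for digest in chunk_digests {
            hasher.update(digest);
        }
        hasher.finalize().into()
    }
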
2020-06-26 09:00:34 +02:00
19b5c3c43e examples/upload-speed.rs: fix compile error 2020-06-26 08:59:51 +02:00
fe3e65c3ea src/api2/backup.rs: call register_chunk in previous download api 2020-06-26 08:22:46 +02:00
fdaab0df4e src/backup/index.rs: add chunk_info method 2020-06-26 08:14:45 +02:00
b957aa81bd update backup api for incremental backup
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-06-26 07:17:08 +02:00
8ea00f6e49 allow to abort verify jobs
And improve job description rendering on the GUI.
2020-06-25 12:56:36 +02:00
4bd789b0fa ui: file browser: expand child node if only one archive present
Get the first visible node through the Ext.data.NodeInterface defined
"firstChild" element and expand that if there's only one archive
present.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-06-25 12:52:42 +02:00
2f050cf2ed ui: file browser: adapt height for 4:3 instead of weird 2:1 ratio
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-06-25 11:59:23 +02:00
e22f4882e7 extract create_download_response API helper
and put it into a new "api2::helpers" module.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-06-25 11:57:37 +02:00
c65bc99a41 [chore] bump to using pxar 0.2.0
This breaks all previously created pxar archives!

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-25 09:46:56 +02:00
355c055e81 src/bin/proxmox-backup-manager.rs: implement verify 2020-06-24 13:35:21 +02:00
c2009e5309 src/api2/admin/datastore.rs: add verify api 2020-06-24 13:35:21 +02:00
23f74c190e src/backup/backup_info.rs: impl Display for BackupGroup 2020-06-24 13:35:21 +02:00
a6f8728339 update to pxar 0.1.9, update ReadAt implementations
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-24 11:57:12 +02:00
100 changed files with 4827 additions and 2186 deletions


@ -1,6 +1,6 @@
[package]
name = "proxmox-backup"
version = "0.3.0"
version = "0.8.7"
authors = ["Dietmar Maurer <dietmar@proxmox.com>"]
edition = "2018"
license = "AGPL-3"
@ -37,12 +37,12 @@ pam = "0.7"
pam-sys = "0.5"
percent-encoding = "2.1"
pin-utils = "0.1.0"
pathpatterns = "0.1.1"
proxmox = { version = "0.1.40", features = [ "sortable-macro", "api-macro" ] }
pathpatterns = "0.1.2"
proxmox = { version = "0.2.0", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { git = "ssh://gitolite3@proxdev.maurer-it.com/rust/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro" ] }
proxmox-fuse = "0.1.0"
pxar = { version = "0.1.8", features = [ "tokio-io", "futures-io" ] }
pxar = { version = "0.2.0", features = [ "tokio-io", "futures-io" ] }
#pxar = { path = "../pxar", features = [ "tokio-io", "futures-io" ] }
regex = "1.2"
rustyline = "6"


@ -37,11 +37,15 @@ CARGO ?= cargo
COMPILED_BINS := \
$(addprefix $(COMPILEDIR)/,$(USR_BIN) $(USR_SBIN) $(SERVICE_BIN))
export DEB_VERSION DEB_VERSION_UPSTREAM
SERVER_DEB=${PACKAGE}-server_${DEB_VERSION}_${ARCH}.deb
SERVER_DBG_DEB=${PACKAGE}-server-dbgsym_${DEB_VERSION}_${ARCH}.deb
CLIENT_DEB=${PACKAGE}-client_${DEB_VERSION}_${ARCH}.deb
CLIENT_DBG_DEB=${PACKAGE}-client-dbgsym_${DEB_VERSION}_${ARCH}.deb
DOC_DEB=${PACKAGE}-docs_${DEB_VERSION}_all.deb
DEBS=${SERVER_DEB} ${CLIENT_DEB}
DEBS=${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB}
DSC = rust-${PACKAGE}_${DEB_VERSION}.dsc
@ -56,7 +60,7 @@ $(SUBDIRS):
test:
#cargo test test_broadcast_future
#cargo test $(CARGO_BUILD_ARGS)
$(CARGO) test $(tests) $(CARGO_BUILD_ARGS)
#$(CARGO) test $(tests) $(CARGO_BUILD_ARGS)
doc:
$(CARGO) doc --no-deps $(CARGO_BUILD_ARGS)
@ -140,5 +144,5 @@ install: $(COMPILED_BINS)
upload: ${SERVER_DEB} ${CLIENT_DEB} ${DOC_DEB}
# check if working directory is clean
git diff --exit-code --stat && git diff --exit-code --stat --staged
tar cf - ${SERVER_DEB} ${DOC_DEB} | ssh -X repoman@repo.proxmox.com upload --product pbs --dist buster
tar cf - ${CLIENT_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pbs,pve" --dist buster
tar cf - ${SERVER_DEB} ${SERVER_DBG_DEB} ${DOC_DEB} | ssh -X repoman@repo.proxmox.com upload --product pbs --dist buster
tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pbs,pve" --dist buster


@ -30,8 +30,6 @@ Chores:
* move tools/xattr.rs and tools/acl.rs to proxmox/sys/linux/
* recompute PXAR_ header types from strings: avoid using numbers from casync
* remove pbs-* systemd timers and services on package purge

debian/changelog

@ -1,6 +1,147 @@
rust-proxmox-backup (0.8.7-2) unstable; urgency=medium
* support restoring file attributes from pxar archives
* docs: additions and fixes
* ui: running tasks: update limit to 100
-- Proxmox Support Team <support@proxmox.com> Tue, 14 Jul 2020 12:05:25 +0200
rust-proxmox-backup (0.8.6-1) unstable; urgency=medium
* ui: add button for easily showing the server fingerprint dashboard
* proxmox-backup-client benchmark: add --verbose flag and improve output
format
* docs: reference PDF variant in HTML output
* proxmox-backup-client: add simple version command
* improve keyfile and signature handling in catalog and manifest
-- Proxmox Support Team <support@proxmox.com> Fri, 10 Jul 2020 11:34:14 +0200
rust-proxmox-backup (0.8.5-1) unstable; urgency=medium
* fix cross process task listing
* docs: expand datastore documentation
* docs: add remotes and sync-jobs and schedules
* bump pathpatterns to 0.1.2
* ui: align version and user-menu spacing with pve/pmg
* ui: make username a menu-button
-- Proxmox Support Team <support@proxmox.com> Thu, 09 Jul 2020 15:32:39 +0200
rust-proxmox-backup (0.8.4-1) unstable; urgency=medium
* add TaskButton in header
* simpler lost+found pattern
-- Proxmox Support Team <support@proxmox.com> Thu, 09 Jul 2020 14:28:24 +0200
rust-proxmox-backup (0.8.3-1) unstable; urgency=medium
* get_disks: don't fail on zfs_devices
* allow some more characters for zpool list
* ui: adapt for new sign-only crypt mode
-- Proxmox Support Team <support@proxmox.com> Thu, 09 Jul 2020 13:55:06 +0200
rust-proxmox-backup (0.8.2-1) unstable; urgency=medium
* buildsys: also upload debug packages
* src/backup/manifest.rs: rename into_string -> to_string
-- Proxmox Support Team <support@proxmox.com> Thu, 09 Jul 2020 11:58:51 +0200
rust-proxmox-backup (0.8.1-1) unstable; urgency=medium
* remove authhenticated data blobs (not needed)
* add signature to manifest
* improve docs
* client: introduce --keyfd parameter
* ui improvements
-- Proxmox Support Team <support@proxmox.com> Thu, 09 Jul 2020 10:01:25 +0200
rust-proxmox-backup (0.8.0-1) unstable; urgency=medium
* implement get_runtime_with_builder
-- Proxmox Support Team <support@proxmox.com> Tue, 07 Jul 2020 10:15:26 +0200
rust-proxmox-backup (0.7.0-1) unstable; urgency=medium
* implement clone for RemoteChunkReader
* improve docs
* client: add --encryption boolen parameter
* client: use default encryption key if it is available
* d/rules: do not compress .pdf files
* ui: various fixes
* add beta text with link to bugtracker
-- Proxmox Support Team <support@proxmox.com> Tue, 07 Jul 2020 07:40:05 +0200
rust-proxmox-backup (0.6.0-1) unstable; urgency=medium
* make ReadChunk not require mutable self.
* ui: increase timeout for snapshot listing
* ui: consistently spell Datastore without space between words
* ui: disk create: sync and improve 'add-datastore' checkbox label
* proxmox-backup-client: add benchmark command
* pxar: fixup 'vanished-file' logic a bit
* ui: add verify button
-- Proxmox Support Team <support@proxmox.com> Fri, 03 Jul 2020 09:45:52 +0200
rust-proxmox-backup (0.5.0-1) unstable; urgency=medium
* partially revert commit 1f82f9b7b5d231da22a541432d5617cb303c0000
* ui: allow to Forget (delete) backup snapshots
* pxar: deal with files changing size during archiving
-- Proxmox Support Team <support@proxmox.com> Mon, 29 Jun 2020 13:00:54 +0200
rust-proxmox-backup (0.4.0-1) unstable; urgency=medium
* change api for incremental backups mode
* zfs disk management gui
-- Proxmox Support Team <support@proxmox.com> Fri, 26 Jun 2020 10:43:27 +0200
rust-proxmox-backup (0.3.0-1) unstable; urgency=medium
* support incrtemental backups mode
* support incremental backups mode
* new disk management

debian/control.in

@ -3,11 +3,14 @@ Architecture: any
Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1),
libzstd1 (>= 1.3.8),
lvm2,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.2-4),
smartmontools,
${misc:Depends},
${shlibs:Depends},
Recommends: zfsutils-linux,
Description: Proxmox Backup Server daemon with tools and GUI
This package contains the Proxmox Backup Server daemons and related
tools. This includes a web-based graphical user interface.

debian/lintian-overrides (new file)

@ -0,0 +1,2 @@
proxmox-backup-server: package-installs-apt-sources etc/apt/sources.list.d/pbstest-beta.list
proxmox-backup-server: systemd-service-file-refers-to-unusual-wantedby-target lib/systemd/system/proxmox-backup-banner.service getty.target

debian/proxmox-backup-docs.links (new file)

@ -0,0 +1 @@
/usr/share/doc/proxmox-backup/proxmox-backup.pdf /usr/share/doc/proxmox-backup/html/proxmox-backup.pdf


@ -1,6 +1,7 @@
etc/proxmox-backup-proxy.service /lib/systemd/system/
etc/proxmox-backup.service /lib/systemd/system/
etc/proxmox-backup-banner.service /lib/systemd/system/
etc/pbstest-beta.list /etc/apt/sources.list.d/
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-api
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-proxy
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-banner

debian/rules

@ -45,3 +45,6 @@ override_dh_installsystemd:
# TODO: remove once available (Debian 11 ?)
override_dh_dwz:
dh_dwz --no-dwz-multifile
override_dh_compress:
dh_compress -X.pdf


@ -1,11 +1,5 @@
include ../defines.mk
ifeq ($(BUILD_MODE), release)
COMPILEDIR := ../target/release
else
COMPILEDIR := ../target/debug
endif
GENERATED_SYNOPSIS := \
proxmox-backup-client/synopsis.rst \
proxmox-backup-client/catalog-shell-synopsis.rst \
@ -26,6 +20,15 @@ SPHINXOPTS =
SPHINXBUILD = sphinx-build
BUILDDIR = output
ifeq ($(BUILD_MODE), release)
COMPILEDIR := ../target/release
SPHINXOPTS += -t release
else
COMPILEDIR := ../target/debug
SPHINXOPTS += -t devbuild
endif
# Sphinx internal variables.
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(SPHINXOPTS) .


@ -1,8 +1,7 @@
Administration Guide
====================
The administration guide.
Backup Management
=================
.. The administration guide.
.. todo:: either add a bit more explanation or remove the previous sentence
Terminology
@ -13,16 +12,16 @@ Backup Content
When doing deduplication, there are different strategies to get
optimal results in terms of performance and/or deduplication rates.
Depending on the type of data, one can split data into *fixed* or *variable*
Depending on the type of data, it can be split into *fixed* or *variable*
sized chunks.
Fixed sized chunking needs almost no CPU performance, and is used to
Fixed sized chunking requires minimal CPU power, and is used to
backup virtual machine images.
Variable sized chunking needs more CPU power, but is essential to get
good deduplication rates for file archives.
The backup server supports both strategies.
The Proxmox Backup Server supports both strategies.
File Archives: ``<name>.pxar``
@ -31,7 +30,7 @@ File Archives: ``<name>.pxar``
.. see https://moinakg.wordpress.com/2013/06/22/high-performance-content-defined-chunking/
A file archive stores a full directory tree. Content is stored using
the :ref:`pxar-format`, split into variable sized chunks. The format
the :ref:`pxar-format`, split into variable-sized chunks. The format
is optimized to achieve good deduplication rates.
@ -39,7 +38,7 @@ Image Archives: ``<name>.img``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is used for virtual machine images and other large binary
data. Content is split into fixed sized chunks.
data. Content is split into fixed-sized chunks.
Binary Data (BLOBs)
@ -56,7 +55,7 @@ Catalog File: ``catalog.pcat1``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The catalog file is an index for file archives. It contains
the list of files and is used to speed-up search operations.
the list of files and is used to speed up search operations.
The Manifest: ``index.json``
@ -74,12 +73,12 @@ The backup server groups backups by *type*, where *type* is one of:
``vm``
This type is used for :term:`virtual machine`\ s. Typically
contains the virtual machine's configuration and an image archive
consists of the virtual machine's configuration file and an image archive
for each disk.
``ct``
This type is used for :term:`container`\ s. Contains the container's
configuration and a single file archive for the container content.
This type is used for :term:`container`\ s. Consists of the container's
configuration and a single file archive for the filesystem content.
``host``
This type is used for backups created from within the backed up machine.
@ -90,7 +89,7 @@ The backup server groups backups by *type*, where *type* is one of:
Backup ID
~~~~~~~~~
An unique ID. Usually the virtual machine or container ID. ``host``
A unique ID. Usually the virtual machine or container ID. ``host``
type backups normally use the hostname.
@ -122,6 +121,13 @@ uniquely identifies a specific backup within a datastore.
As you can see, the time format is RFC3399_ with Coordinated
Universal Time (UTC_, identified by the trailing *Z*).
Backup Server Management
------------------------
The command line tool to configure and manage the backup server is called
:command:`proxmox-backup-manager`.
:term:`DataStore`
~~~~~~~~~~~~~~~~~
@ -134,20 +140,18 @@ Datastores are identified by a simple *ID*. You can configure it
when setting up the backup server.
Backup Server Management
------------------------
The command line tool to configure and manage the backup server is called
:command:`proxmox-backup-manager`.
Datastore Configuration
~~~~~~~~~~~~~~~~~~~~~~~
A :term:`datastore` is a place to store backups. You can configure
multiple datastores. At least one datastore needs to be
configured. The datastore is identified by a simple `name` and points
to a directory.
You can configure multiple datastores. Minimum one datastore needs to be
configured. The datastore is identified by a simple `name` and points to a
directory on the filesystem. Each datastore also has associated retention
settings of how many backup snapshots for each interval of ``hourly``,
``daily``, ``weekly``, ``monthly``, ``yearly`` as well as an time independent
number of backups to keep in that store. :ref:`Pruning <pruning>` and
:ref:`garbage collection <garbage-collection>` can also be configured to run
periodically based on a configured :term:`schedule` per datastore.
The following command creates a new datastore called ``store1`` on :file:`/backup/disk1/store1`
@ -166,6 +170,30 @@ To list existing datastores run:
│ store1 │ /backup/disk1/store1 │ This is my default storage. │
└────────┴──────────────────────┴─────────────────────────────┘
You can change settings of a datastore, for example to set a prune and garbage
collection schedule or retention settings using ``update`` subcommand and view
a datastore with the ``show`` subcommand:
.. code-block:: console
# proxmox-backup-manager datastore update store1 --keep-last 7 --prune-schedule daily --gc-schedule 'Tue 04:27'
# proxmox-backup-manager datastore show store1
┌────────────────┬─────────────────────────────┐
│ Name │ Value │
╞════════════════╪═════════════════════════════╡
│ name │ store1 │
├────────────────┼─────────────────────────────┤
│ path │ /backup/disk1/store1 │
├────────────────┼─────────────────────────────┤
│ comment │ This is my default storage. │
├────────────────┼─────────────────────────────┤
│ gc-schedule │ Tue 04:27 │
├────────────────┼─────────────────────────────┤
│ keep-last │ 7 │
├────────────────┼─────────────────────────────┤
│ prune-schedule │ daily │
└────────────────┴─────────────────────────────┘
Finally, it is possible to remove the datastore configuration:
.. code-block:: console
@ -179,17 +207,58 @@ Finally, it is possible to remove the datastore configuration:
File Layout
^^^^^^^^^^^
.. todo:: Add datastore file layout example
After creating a datastore, the following default layout will appear:
.. code-block:: console
# ls -arilh /backup/disk1/store1
276493 -rw-r--r-- 1 backup backup 0 Jul 8 12:35 .lock
276490 drwxr-x--- 1 backup backup 1064960 Jul 8 12:35 .chunks
`.lock` is an empty file used for process locking.
The `.chunks` directory contains folders, starting from `0000` and taking hexadecimal values until `ffff`. These
directories will store the chunked data after a backup operation has been executed.
.. code-block:: console
# ls -arilh /backup/disk1/store1/.chunks
545824 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 ffff
545823 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fffe
415621 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fffd
415620 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fffc
353187 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fffb
344995 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fffa
144079 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fff9
144078 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fff8
144077 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fff7
...
403180 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 000c
403179 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 000b
403177 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 000a
402530 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0009
402513 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0008
402509 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0007
276509 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0006
276508 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0005
276507 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0004
276501 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0003
276499 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0002
276498 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0001
276494 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0000
276489 drwxr-xr-x 3 backup backup 4.0K Jul 8 12:35 ..
276490 drwxr-x--- 1 backup backup 1.1M Jul 8 12:35 .
User Management
~~~~~~~~~~~~~~~
Proxmox Backup support several authentication realms, and you need to
Proxmox Backup Server supports several authentication realms, and you need to
choose the realm when you add a new user. Possible realms are:
:pam: Linux PAM standard authentication. Use this if you want to
authenticate as Linux system user (Users needs to exist on the
authenticate as Linux system user (Users need to exist on the
system).
:pbs: Proxmox Backup Server realm. This type stores hashed passwords in
@ -216,8 +285,8 @@ normally want to add other users with less privileges:
# proxmox-backup-manager user create john@pbs --email john@example.com
The create command lets you specify many option like ``--email`` or
``--password``, but you can update or change any of them using the
The create command lets you specify many options like ``--email`` or
``--password``. You can update or change any of them using the
update command later:
.. code-block:: console
@ -225,11 +294,10 @@ update command later:
# proxmox-backup-manager user update john@pbs --firstname John --lastname Smith
# proxmox-backup-manager user update john@pbs --comment "An example user."
.. todo:: Mention how to set password without passing plaintext password as cli argument.
The resulting use list looks like this:
The resulting user list looks like this:
.. code-block:: console
@ -242,16 +310,16 @@ The resulting use list looks like this:
│ root@pam │ 1 │ │ │ │ │ Superuser │
└──────────┴────────┴────────┴───────────┴──────────┴──────────────────┴──────────────────┘
Newly created users do not have an permissions. Please read the next
Newly created users do not have any permissions. Please read the next
section to learn how to set access permissions.
If you want to disable an user account, you can do that by setting ``--enable`` to ``0``
If you want to disable a user account, you can do that by setting ``--enable`` to ``0``
.. code-block:: console
# proxmox-backup-manager user update john@pbs --enable 0
Or completely remove the users with:
Or completely remove the user with:
.. code-block:: console
@ -261,20 +329,20 @@ Or completely remove the users with:
Access Control
~~~~~~~~~~~~~~
Users do not have any permission by default. Instead you need to
specify what is allowed and what not. You can do this by assigning
By default new users do not have any permission. Instead you need to
specify what is allowed and what is not. You can do this by assigning
roles to users on specific objects like datastores or remotes. The
following roles exist:
**NoAccess**
Disable Access - nothing is allowed.
**Admin**
The Administrator can do anything.
**Audit**
An Auditor can view things, but is not allowed to change settings.
**NoAccess**
Disable Access - nothing is allowed.
**DatastoreAdmin**
Can do anything on datastores.
@ -301,6 +369,63 @@ following roles exist:
Is allowed to read data from a remote.
:term:`Remote`
~~~~~~~~~~~~~~
A remote is a different Proxmox Backup Server installation and a user on that
installation, from which you can `sync` datastores to a local datastore with a
`Sync Job`.
For adding a remote you need its hostname or ip, a userid and password on the
remote and its certificate fingerprint to add it. To get the fingerprint use
the ``proxmox-backup-manager cert info`` command on the remote.
.. code-block:: console
# proxmox-backup-manager cert info |grep Fingerprint
Fingerprint (sha256): 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe
With the needed information add the remote with:
.. code-block:: console
# proxmox-backup-manager remote create pbs2 --host pbs2.mydomain.example --userid sync@pam --password 'SECRET' --fingerprint 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe
Use the ``list``, ``show``, ``update``, ``remove`` subcommands of
``proxmox-backup-manager remote`` to manage your remotes:
.. code-block:: console
# proxmox-backup-manager remote update pbs2 --host pbs2.example
# proxmox-backup-manager remote list
┌──────┬──────────────┬──────────┬───────────────────────────────────────────┬─────────┐
│ name │ host │ userid │ fingerprint │ comment │
╞══════╪══════════════╪══════════╪═══════════════════════════════════════════╪═════════╡
│ pbs2 │ pbs2.example │ sync@pam │64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe │ │
└──────┴──────────────┴──────────┴───────────────────────────────────────────┴─────────┘
# proxmox-backup-manager remote remove pbs2
Sync Jobs
~~~~~~~~~
Sync jobs are configured to pull the contents of a datastore on a `Remote` to a
local datastore. You can either start the sync job manually on the GUI or
provide it with a :term:`schedule` to run regularly. The
``proxmox-backup-manager sync-job`` command is used to manage sync jobs:
.. code-block:: console
# proxmox-backup-manager sync-job create pbs2-local --remote pbs2 --remote-store local --store local --schedule 'Wed 02:30'
# proxmox-backup-manager sync-job update pbs2-local --comment 'offsite'
# proxmox-backup-manager sync-job list
┌────────────┬───────┬────────┬──────────────┬───────────┬─────────┐
│ id │ store │ remote │ remote-store │ schedule │ comment │
╞════════════╪═══════╪════════╪══════════════╪═══════════╪═════════╡
│ pbs2-local │ local │ pbs2 │ local │ Wed 02:30 │ offsite │
└────────────┴───────┴────────┴──────────────┴───────────┴─────────┘
# proxmox-backup-manager sync-job remove pbs2-local
Backup Client usage
-------------------
@ -316,8 +441,8 @@ on the backup server.
[[username@]server:]datastore
The default value for ``username`` ist ``root``. If no server is specified, the
default is the local host (``localhost``).
The default value for ``username`` ist ``root``. If no server is specified,
the default is the local host (``localhost``).
You can pass the repository with the ``--repository`` command
line option, or by setting the ``PBS_REPOSITORY`` environment
@ -381,7 +506,7 @@ This section explains how to create a backup from within the machine. This can
be a physical host, a virtual machine, or a container. Such backups may contain file
and image archives. There are no restrictions in this case.
.. note:: If you want to backup virtual machines or containers on Proxmov VE, see :ref:`pve-integration`.
.. note:: If you want to backup virtual machines or containers on Proxmox VE, see :ref:`pve-integration`.
For the following example you need to have a backup server set up, working
credentials and need to know the repository name.
@ -416,7 +541,7 @@ environment variable ``PBS_REPOSITORY``.
.. code-block:: console
# export PBS_REPOSTORY=backup-server:store1
# export PBS_REPOSITORY=backup-server:store1
After this you can execute all commands without specifying the ``--repository``
option.
@ -726,6 +851,8 @@ To remove the ticket, issue a logout:
# proxmox-backup-client logout
.. _pruning:
Pruning and Removing Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -896,7 +1023,3 @@ After that you should be able to see storage status with:
.. include:: command-line-tools.rst
.. include:: services.rst
.. include host system admin at the end
.. include:: sysadmin.rst


@ -17,7 +17,7 @@
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
@ -45,8 +45,11 @@ PygmentsBridge.latex_formatter = CustomLatexFormatter
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ["sphinx.ext.graphviz", "sphinx.ext.todo"]
todo_link_only = True
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@ -76,9 +79,11 @@ author = 'Proxmox Support Team'
# built documents.
#
# The short X.Y version.
version = '0.2'
vstr = lambda s: '<devbuild>' if s is None else str(s)
version = vstr(os.getenv('DEB_VERSION_UPSTREAM'))
# The full version, including alpha/beta/rc tags.
release = '0.2-1'
release = vstr(os.getenv('DEB_VERSION'))
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@ -107,7 +112,7 @@ exclude_patterns = [
'pxar/man1.rst',
'epilog.rst',
'pbs-copyright.rst',
'sysadmin.rst',
'local-zfs.rst'
'package-repositories.rst',
]


@ -11,8 +11,10 @@
.. _Container: https://en.wikipedia.org/wiki/Container_(virtualization)
.. _Zstandard: https://en.wikipedia.org/wiki/Zstandard
.. _Proxmox: https://www.proxmox.com
.. _Proxmox Community Forum: https://forum.proxmox.com
.. _Proxmox Virtual Environment: https://www.proxmox.com/proxmox-ve
.. _Proxmox Backup: https://www.proxmox.com/proxmox-backup
.. _PBS Development List: https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
.. _reStructuredText: https://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html
.. _Rust: https://www.rust-lang.org/
.. _SHA-256: https://en.wikipedia.org/wiki/SHA-2


@ -46,3 +46,19 @@ Glossary
kernel driver handles filesystem requests and sends them to a
userspace application.
Remote
A remote Proxmox Backup Server installation and credentials for a user on it.
You can pull datastores from a remote to a local datastore in order to
have redundant backups.
Schedule
Certain tasks, for example pruning and garbage collection, need to be
performed on a regular basis. Proxmox Backup Server uses a subset of the
`systemd Time and Date Specification
<https://www.freedesktop.org/software/systemd/man/systemd.time.html#>`_.
The subset currently supports time of day specifications and weekdays, in
addition to the shorthand expressions 'minutely', 'hourly', 'daily'.
There is no support for specifying timezones, the tasks are run in the
timezone configured on the server.


@ -1,19 +1,20 @@
.. Proxmox Backup documentation master file
Welcome to Proxmox Backup's documentation!
==========================================
Welcome to the Proxmox Backup documentation!
============================================
Copyright (C) 2019 Proxmox Server Solutions GmbH
Copyright (C) 2019-2020 Proxmox Server Solutions GmbH
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with no
Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A
copy of the license is included in the section entitled "GNU Free
Documentation License".
Permission is granted to copy, distribute and/or modify this document under the
terms of the GNU Free Documentation License, Version 1.3 or any later version
published by the Free Software Foundation; with no Invariant Sections, no
Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included
in the section entitled "GNU Free Documentation License".
.. todolist::
.. only:: html
A `PDF` version of the documentation is `also available here <./proxmox-backup.pdf>`_
.. toctree::
:maxdepth: 3
@ -22,6 +23,7 @@ Documentation License".
introduction.rst
installation.rst
administration-guide.rst
sysadmin.rst
.. raw:: latex
@ -37,5 +39,14 @@ Documentation License".
glossary.rst
GFDL.rst
.. only:: html and devbuild
.. toctree::
:maxdepth: 2
:caption: Developer Appendix
todos.rst
* :ref:`genindex`


@ -83,6 +83,10 @@ In general this is not trivial, especially when LVM_ or ZFS_ is used.
The network configuration is completely up to you as well.
.. note:: You can access the webinterface of the Proxmox Backup Server with
your web browser, using HTTPS on port 8007. For example at
``https://<ip-or-dns-name>:8007``
Install Proxmox Backup server on `Proxmox VE`_
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -99,6 +103,10 @@ After configuring the
server to store backups. Should the hypervisor server fail, you can
still access the backups.
.. note:: You can access the webinterface of the Proxmox Backup Server with
your web browser, using HTTPS on port 8007. For example at
``https://<ip-or-dns-name>:8007``
Client installation
-------------------


@ -1,120 +1,169 @@
Introduction
============
This documentation is written in :term:`reStructuredText` and formatted with :term:`Sphinx`.
What is Proxmox Backup Server
-----------------------------
Proxmox Backup Server is an enterprise-class client-server backup software that
backups :term:`virtual machine`\ s, :term:`container`\ s, and physical hosts.
It is specially optimized for the `Proxmox Virtual Environment`_ platform and
allows you to backup your data securely, even between remote sites, providing
easy management with a web-based user interface.
What is Proxmox Backup
----------------------
Proxmox Backup is an enterprise class client-server backup software,
specially optimized for the `Proxmox Virtual Environment`_ to backup
:term:`virtual machine`\ s and :term:`container`\ s. It is also
possible to backup physical hosts.
It supports deduplication, compression and authenticated encryption
(AE_). Using :term:`Rust` as implementation language guarantees high
Proxmox Backup Server supports deduplication, compression, and authenticated
encryption (AE_). Using :term:`Rust` as implementation language guarantees high
performance, low resource usage, and a safe, high quality code base.
Encryption is done at the client side. This makes backups to not fully
trusted targets possible.
It features strong encryption done on the client side. Thus, it's possible to
back up data to not fully trusted targets.
Architecture
------------
Proxmox Backup uses a `Client-server model`_. The server is
responsible to store the backup data and provides an API to create
backups and restore data. It is possible to manage disks and
other server side resources using this API.
Proxmox Backup Server uses a `client-server model`_. The server stores the
backup data and provides an API to create backups and restore data. With the
API it's also possible to manage disks and other server side resources.
A backup client uses this API to access the backed up data,
i.e. ``proxmox-backup-client`` is a command line tool to create
backups and restore data. We deliver an integrated client for
QEMU_ with `Proxmox Virtual Environment`_.
The backup client uses this API to access the backed up data. With the command
line tool ``proxmox-backup-client`` you can create backups and restore data.
For QEMU_ with `Proxmox Virtual Environment`_ we deliver an integrated client.
A single backup is allowed to contain several archives. For example,
when you backup a :term:`virtual machine`, each disk is stored as a
separate archive inside that backup. The VM configuration also gets an
extra file. This way, it is easy to access and restore important parts
of the backup without having to scan the whole backup.
A single backup is allowed to contain several archives. For example, when you
backup a :term:`virtual machine`, each disk is stored as a separate archive
inside that backup. The VM configuration itself is stored as an extra file.
This way, it is easy to access and restore only important parts of the backup
without the need to scan the whole backup.
Main Features
-------------
:Proxmox VE: The `Proxmox Virtual Environment`_ is fully
supported. You can backup :term:`virtual machine`\ s and
:Support for Proxmox VE: The `Proxmox Virtual Environment`_ is fully
supported and you can easily back up :term:`virtual machine`\ s and
:term:`container`\ s.
:GUI: We provide a graphical, web based user interface.
:Performance: The whole software stack is written in :term:`Rust`,
to provide high speed and memory efficiency.
:Deduplication: Incremental backups produce large amounts of duplicate
data. The deduplication layer removes that redundancy and makes
incremental backups small and space efficient.
:Deduplication: Periodic backups produce large amounts of duplicate
data. The deduplication layer avoids redundancy and minimizes the used
storage space.
:Data Integrity: The built in `SHA-256`_ checksum algorithm assures the
:Incremental backups: Changes between backups are typically low. Reading and
sending only the delta reduces storage and network impact of backups.
:Data Integrity: The built-in `SHA-256`_ checksum algorithm assures the
accuracy and consistency of your backups.
:Remote Sync: It is possible to efficiently synchronize data to remote
sites. Only deltas containing new data are transferred.
:Performance: The whole software stack is written in :term:`Rust`,
to provide high speed and memory efficiency.
:Compression: Ultra fast Zstandard_ compression is able to compress
:Compression: The ultra fast Zstandard_ compression is able to compress
several gigabytes of data per second.
:Encryption: Backups can be encrypted client-side using AES-256 in
GCM_ mode. This authenticated encryption mode (AE_) provides very
high performance on modern hardware.
:Encryption: Backups can be encrypted on the client side using AES-256 in
Galois/Counter Mode (GCM_). This authenticated encryption (AE_) mode
provides very high performance on modern hardware.
:Open Source: No secrets. You have access to all the source code.
:Web interface: Manage the Proxmox Backup Server with the integrated web-based
user interface.
:Support: Commercial support options are available from `Proxmox`_.
:Open Source: No secrets. Proxmox Backup Server is free and open-source
software. The source code is licensed under AGPL, v3.
:Support: Enterprise support will be available from `Proxmox`_ once the beta
phase is over.
Why Backup?
-----------
Reasons for Data Backup?
------------------------
The primary purpose of a backup is to protect against data loss. Data
loss can be caused by faulty hardware, but also by human error.
The main purpose of a backup is to protect against data loss. Data loss can be
caused by faulty hardware but also by human error.
A common mistake is to delete a file or folder which is still
required. Virtualization can amplify this problem. It is now
easy to delete a whole virtual machine by pressing a single button.
A common mistake is to accidentally delete a file or folder which is still
required. Virtualization can even amplify this problem; it easily happens that
a whole virtual machine is deleted by just pressing a single button.
Backups can serve as a toolkit for administrators to temporarily
store data. For example, it is common practice to perform full backups
before installing major software updates. If something goes wrong, you
can restore the previous state.
For administrators, backups can serve as a useful toolkit for temporarily
storing data. For example, it is common practice to perform full backups before
installing major software updates. If something goes wrong, you can easily
restore the previous state.
Another reason for backups are legal requirements. Some data must be
kept in a safe place for several years by law, so that it can be accessed if
required.
Another reason for backups are legal requirements. Some data, especially
business records, must be kept in a safe place for several years by law, so
that they can be accessed if required.
Data loss can be very costly as it can severely restrict your
business. Therefore, make sure that you perform a backup regularly
and run restore tests.
In general, data loss is very costly as it can severely damage your business.
Therefore, ensure that you perform regular backups and run restore tests.
Software Stack
--------------
.. todo:: Explain why we use Rust (and Flutter)
Proxmox Backup Server consists of multiple components:
* server daemon providing, among other things, a RESTful API, super-fast
asynchronous tasks, lightweight usage statistic collection, scheduling
events, strict separation of privileged and unprivileged execution
environments, ...
* JavaScript management webinterface
* management CLI tool for the server (`proxmox-backup-manager`)
* client CLI tool (`proxmox-backup-client`) to access the server easily from
any `Linux amd64` environment.
Everything besides the web interface is written in the Rust programming
language.
"The Rust programming language helps you write faster, more reliable software.
High-level ergonomics and low-level control are often at odds in programming
language design; Rust challenges that conflict. Through balancing powerful
technical capacity and a great developer experience, Rust gives you the option
to control low-level details (such as memory usage) without all the hassle
traditionally associated with such control."
-- `The Rust Programming Language <https://doc.rust-lang.org/book/ch00-00-introduction.html>`_
.. todo:: further explain the software stack
Getting Help
------------
Community Support Forum
~~~~~~~~~~~~~~~~~~~~~~~
We always encourage our users to discuss and share their knowledge using the
`Proxmox Community Forum`_. The forum is moderated by the Proxmox support team.
The large user base is spread out all over the world. Needless to say, such
a large forum is a great place to get information.
Mailing Lists
~~~~~~~~~~~~~
Proxmox Backup Server is fully open-source and contributions are welcome! Here
is the primary communication channel for developers:
:Mailing list for developers: `PBS Development List`_
Bug Tracker
~~~~~~~~~~~
Proxmox runs a public bug tracker at `<https://bugzilla.proxmox.com>`_. If an
issue appears, file your report there. An issue can be a bug as well as a
request for a new feature or enhancement. The bug tracker helps to keep track
of the issue and will send a notification once it has been solved.
License
-------
Copyright (C) 2019 Proxmox Server Solutions GmbH
Copyright (C) 2019-2020 Proxmox Server Solutions GmbH
This software is written by Proxmox Server Solutions GmbH <support@proxmox.com>
Proxmox Backup is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.
Proxmox Backup Server is free and open source software: you can use it,
redistribute it, and/or modify it under the terms of the GNU Affero General
Public License as published by the Free Software Foundation, either version 3
of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but
``WITHOUT ANY WARRANTY``; without even the implied warranty of

401
docs/local-zfs.rst Normal file
View File

@ -0,0 +1,401 @@
ZFS on Linux
------------
ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. There is no need to manually compile ZFS modules - all
packages are included.
By using ZFS, it's possible to achieve enterprise-grade features with
low-budget hardware, but also high-performance systems by leveraging
SSD caching or even SSD-only setups. ZFS can replace cost-intensive
hardware RAID cards with moderate CPU and memory load, combined with easy
management.
General ZFS advantages
* Easy configuration and management with GUI and CLI.
* Reliable
* Protection against data corruption
* Data compression on file system level
* Snapshots
* Copy-on-write clone
* Various raid levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3
* Can use SSD for cache
* Self healing
* Continuous integrity checking
* Designed for high storage capacities
* Asynchronous replication over network
* Open Source
* Encryption
Hardware
~~~~~~~~~
ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much as you can get for your hardware/budget. To prevent
data corruption, we recommend the use of high quality ECC RAM.
If you use a dedicated cache and/or log disk, you should use an
enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
increase the overall performance significantly.
IMPORTANT: Do not use ZFS on top of a hardware controller which has its
own cache management. ZFS needs to communicate directly with the disks. An
HBA adapter is the way to go, or something like an LSI controller flashed
into ``IT`` mode.
ZFS Administration
~~~~~~~~~~~~~~~~~~
This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are `zfs` and `zpool`. Both commands come with great
manual pages, which can be read with:
.. code-block:: console
# man zpool
# man zfs
Create a new zpool
^^^^^^^^^^^^^^^^^^
To create a new pool, at least one disk is needed. The `ashift` value should
match the sector size of the underlying disk, i.e. `2^ashift` should be
equal to or larger than the disk's sector size.
.. code-block:: console
# zpool create -f -o ashift=12 <pool> <device>
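For instance, `ashift=12` corresponds to 2^12 = 4096 bytes and therefore
matches disks with 4 KiB physical sectors, while `ashift=9` (2^9 = 512) would
be the minimum for legacy 512-byte-sector disks.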
Create a new pool with RAID-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Minimum 1 disk
.. code-block:: console
# zpool create -f -o ashift=12 <pool> <device1> <device2>
Create a new pool with RAID-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Minimum 2 disks
.. code-block:: console
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2>
Create a new pool with RAID-10
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Minimum 4 disks
.. code-block:: console
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
Create a new pool with RAIDZ-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Minimum 3 disks
.. code-block:: console
# zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>
Create a new pool with RAIDZ-2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Minimum 4 disks
.. code-block:: console
# zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
Create a new pool with cache (L2ARC)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is possible to use a dedicated cache drive partition to increase
the performance (use an SSD).
As `<device>`, it is possible to use multiple devices, as shown in
"Create a new pool with RAID*".
.. code-block:: console
# zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
Create a new pool with log (ZIL)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is possible to use a dedicated log drive partition to increase
the performance (use an SSD).
As `<device>`, it is possible to use multiple devices, as shown in
"Create a new pool with RAID*".
.. code-block:: console
# zpool create -f -o ashift=12 <pool> <device> log <log_device>
Add cache and log to an existing pool
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you have a pool without cache and log, first partition the SSD into
two partitions with `parted` or `gdisk`.
.. important:: Always use GPT partition tables.
The maximum size of a log device should be about half the size of
physical memory, so this is usually quite small. The rest of the SSD
can be used as cache.
.. code-block:: console
# zpool add -f <pool> log <device-part1> cache <device-part2>
Changing a failed device
^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console
# zpool replace -f <pool> <old device> <new device>
Changing a failed bootable device
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Depending on how Proxmox Backup was installed, it uses either `grub` or
`systemd-boot` as bootloader.
The first steps of copying the partition table, reissuing GUIDs and replacing
the ZFS partition are the same. To make the system bootable from the new disk,
different steps are needed which depend on the bootloader in use.
.. code-block:: console
# sgdisk <healthy bootable device> -R <new device>
# sgdisk -G <new device>
# zpool replace -f <pool> <old zfs partition> <new zfs partition>
.. NOTE:: Use the `zpool status -v` command to monitor how far the resilvering process of the new disk has progressed.
With `systemd-boot`:
.. code-block:: console
# pve-efiboot-tool format <new disk's ESP>
# pve-efiboot-tool init <new disk's ESP>
.. NOTE:: `ESP` stands for EFI System Partition, which is set up as partition #2 on
bootable disks set up by the Proxmox VE installer since version 5.4. For details,
see the Proxmox VE documentation on setting up a new partition for use as a synced ESP.
With `grub`:
Usually `grub.cfg` is located in `/boot/grub/grub.cfg`
.. code-block:: console
# grub-install <new disk>
# grub-mkconfig -o /path/to/grub.cfg
Activate E-Mail Notification
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ZFS comes with an event daemon, which monitors events generated by the
ZFS kernel module. The daemon can also send emails on ZFS events like
pool errors. Newer ZFS packages ship the daemon in a separate package,
and you can install it using `apt-get`:
.. code-block:: console
# apt-get install zfs-zed
To activate the daemon it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:
.. code-block:: console
ZED_EMAIL_ADDR="root"
Please note that Proxmox Backup forwards mails sent to `root` to the email
address configured for the root user.
IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
other settings are optional.
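After changing the setting, restart the daemon so it picks up the new
configuration, for example (assuming the service name matches the package,
`zfs-zed`):
.. code-block:: console
# systemctl restart zfs-zed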
Limit ZFS Memory Usage
^^^^^^^^^^^^^^^^^^^^^^
It is good practice to use at most 50 percent (which is the default) of the
system memory for the ZFS ARC, to prevent performance degradation of the
host. Use your preferred editor to change the configuration in
`/etc/modprobe.d/zfs.conf` and insert:
.. code-block:: console
options zfs zfs_arc_max=8589934592
This example setting limits the usage to 8 GiB.
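The value is given in bytes. As a quick sketch, you can derive the value for
another limit with shell arithmetic, e.g. for 12 GiB:
.. code-block:: console
# echo $((12 * 1024 * 1024 * 1024))
12884901888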
.. IMPORTANT:: If your root file system is ZFS you must update your initramfs every time this value changes:
.. code-block:: console
# update-initramfs -u
SWAP on ZFS
^^^^^^^^^^^
Swap space created on a zvol may cause some problems, such as blocking the
server or generating a high IO load, often seen when starting a backup
to external storage.
We strongly recommend using enough memory, so that you normally do not
run into low-memory situations. Should you need or want to add swap, it is
preferred to create a partition on a physical disk and use it as a swap device.
You can leave some space free for this purpose in the advanced options of the
installer. Additionally, you can lower the `swappiness` value.
A good value for servers is 10:
.. code-block:: console
# sysctl -w vm.swappiness=10
To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:
.. code-block:: console
vm.swappiness = 10
.. table:: Linux kernel `swappiness` parameter values
:widths: auto
==================== ===============================================================
Value Strategy
==================== ===============================================================
vm.swappiness = 0 The kernel will swap only to avoid an 'out of memory' condition
vm.swappiness = 1 Minimum amount of swapping without disabling it entirely.
vm.swappiness = 10 Sometimes recommended to improve performance when sufficient memory exists in a system.
vm.swappiness = 60 The default value.
vm.swappiness = 100 The kernel will swap aggressively.
==================== ===============================================================
ZFS Compression
^^^^^^^^^^^^^^^
To activate compression:
.. code-block:: console
# zpool set compression=lz4 <pool>
We recommend using the `lz4` algorithm, because it adds very little CPU overhead.
Other algorithms, such as `lzjb` and `gzip-N` (where `N` is an integer from `1` to `9`
representing the compression level; 1 is fastest and 9 compresses best), are also available.
Depending on the algorithm and how compressible the data is, having compression enabled can even increase
I/O performance.
You can disable compression at any time with:
.. code-block:: console
# zfs set compression=off <dataset>
Only new blocks will be affected by this change.
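To check how well existing data compresses, you can, for example, query the
`compressratio` property:
.. code-block:: console
# zfs get compression,compressratio <pool>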
ZFS Special Device
^^^^^^^^^^^^^^^^^^
Since version 0.8.0 ZFS supports `special` devices. A `special` device in a
pool is used to store metadata, deduplication tables, and optionally small
file blocks.
A `special` device can improve the speed of a pool consisting of slow spinning
hard disks with a lot of metadata changes. For example, workloads that involve
creating, updating or deleting a large number of files will benefit from the
presence of a `special` device. ZFS datasets can also be configured to store
whole small files on the `special` device, which can further improve the
performance. Use fast SSDs for the `special` device.
.. IMPORTANT:: The redundancy of the `special` device should match the one of the
pool, since the `special` device is a point of failure for the whole pool.
.. WARNING:: Adding a `special` device to a pool cannot be undone!
Create a pool with `special` device and RAID-1:
.. code-block:: console
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>
Adding a `special` device to an existing pool with RAID-1:
.. code-block:: console
# zpool add <pool> special mirror <device1> <device2>
ZFS datasets expose the `special_small_blocks=<size>` property. `size` can be
`0` to disable storing small file blocks on the `special` device, or a power of
two in the range from `512B` to `128K`. After setting the property, new file
blocks smaller than `size` will be allocated on the `special` device.
.. IMPORTANT:: If the value for `special_small_blocks` is greater than or equal to
the `recordsize` (default `128K`) of the dataset, *all* data will be written to
the `special` device, so be careful!
Setting the `special_small_blocks` property on a pool will change the default
value of that property for all child ZFS datasets (for example all containers
in the pool will opt in for small file blocks).
Opt in for all file blocks smaller than 4K pool-wide:
.. code-block:: console
# zfs set special_small_blocks=4K <pool>
Opt in for small file blocks for a single dataset:
.. code-block:: console
# zfs set special_small_blocks=4K <pool>/<filesystem>
Opt out from small file blocks for a single dataset:
.. code-block:: console
# zfs set special_small_blocks=0 <pool>/<filesystem>
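To inspect the value currently in effect for a dataset, for example:
.. code-block:: console
# zfs get special_small_blocks <pool>/<filesystem>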
Troubleshooting
^^^^^^^^^^^^^^^
Corrupted cachefile
In case of a corrupted ZFS cachefile, some volumes may not be mounted during
boot until mounted manually later.
For each pool, run:
.. code-block:: console
# zpool set cachefile=/etc/zfs/zpool.cache POOLNAME
and afterwards update the `initramfs` by running:
.. code-block:: console
# update-initramfs -u -k all
and finally reboot your node.
Sometimes the ZFS cachefile can get corrupted, and `zfs-import-cache.service`
doesn't import the pools that aren't present in the cachefile.
Another workaround to this problem is enabling the `zfs-import-scan.service`,
which searches and imports pools via device scanning (usually slower).
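For example, to switch to scan-based import (the service names are those
shipped by the ZFS packages):
.. code-block:: console
# systemctl enable zfs-import-scan.service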

View File

@ -3,17 +3,16 @@
Debian Package Repositories
---------------------------
All Debian based systems use APT_ as package
management tool. The list of repositories is defined in
``/etc/apt/sources.list`` and ``.list`` files found in the
``/etc/apt/sources.d/`` directory. Updates can be installed directly with
the ``apt`` command line tool, or via the GUI.
All Debian based systems use APT_ as package management tool. The list of
repositories is defined in ``/etc/apt/sources.list`` and ``.list`` files found
in the ``/etc/apt/sources.list.d/`` directory. Updates can be installed directly
with the ``apt`` command line tool, or via the GUI.
APT_ ``sources.list`` files list one package repository per line, with
the most preferred source listed first. Empty lines are ignored and a
``#`` character anywhere on a line marks the remainder of that line as a
comment. The information available from the configured sources is
acquired by ``apt update``.
APT_ ``sources.list`` files list one package repository per line, with the most
preferred source listed first. Empty lines are ignored and a ``#`` character
anywhere on a line marks the remainder of that line as a comment. The
information available from the configured sources is acquired by ``apt
update``.
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list``
@ -27,17 +26,65 @@ acquired by ``apt update``.
.. FIXME for 7.0: change security update suite to bullseye-security
In addition, Proxmox provides three different package repositories for
the backup server binaries.
In addition, you need a package repository from Proxmox to get the backup
server updates.
During the Proxmox Backup beta phase, only one repository (pbstest) will be
available. Once released, an Enterprise repository for production use and a
no-subscription repository will be provided.
SecureApt
~~~~~~~~~
The `Release` files in the repositories are signed with GnuPG. APT uses
these signatures to verify that all packages are from a trusted source.
If you install Proxmox Backup Server from an official ISO image, the key for
verification is already installed.
If you install Proxmox Backup Server on top of Debian, download and install the
key with the following commands:
.. code-block:: console
# wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
Verify the SHA512 checksum afterwards with:
.. code-block:: console
# sha512sum /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
The output should be:
.. code-block:: console
acca6f416917e8e11490a08a1e2842d500b3a5d9f322c6319db0927b2901c3eae23cfb5cd5df6facf2b57399d3cfa52ad7769ebdd75d9b204549ca147da52626 /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
and the md5sum:
.. code-block:: console
# md5sum /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
Here, the output should be:
.. code-block:: console
f3f6c5a3a67baf38ad178e5ff1ee270c /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
.. comment
`Proxmox Backup`_ Enterprise Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is the default, stable, and recommended repository. It is available for
This will be the default, stable, and recommended repository. It is available for
all `Proxmox Backup`_ subscription users. It contains the most stable packages,
and is suitable for production use. The ``pbs-enterprise`` repository is
enabled by default:
.. note:: During the Proxmox Backup beta phase only one repository (pbstest)
will be available.
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list.d/pbs-enterprise.list``
@ -76,27 +123,30 @@ We recommend to configure this repository in ``/etc/apt/sources.list``.
# PBS pbs-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/bps buster pbs-no-subscription
deb http://download.proxmox.com/debian/pbs buster pbs-no-subscription
# security updates
deb http://security.debian.org/debian-security buster/updates main contrib
`Proxmox Backup`_ Test Repository
`Proxmox Backup`_ Beta Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Finally, there is a repository called ``pbstest``. This one contains the
latest packages and is heavily used by developers to test new
During the public beta, there is a repository called ``pbstest``. This one
contains the latest packages and is heavily used by developers to test new
features.
.. warning:: the ``pbstest`` repository should (as the name implies)
.. .. warning:: the ``pbstest`` repository should (as the name implies)
only be used to test new features or bug fixes.
You can configure this using ``/etc/apt/sources.list`` by
adding the following line:
You can configure this using ``/etc/apt/sources.list`` by adding the following
line:
.. code-block:: sources.list
:caption: sources.list entry for ``pbstest``
deb http://download.proxmox.com/debian/bps buster pbstest
deb http://download.proxmox.com/debian/pbs buster pbstest
If you installed Proxmox Backup Server from the official beta ISO, you should
have this repository already configured in
``/etc/apt/sources.list.d/pbstest-beta.list``.
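After adjusting the repository configuration, refresh the package index and
upgrade as usual, for example:
.. code-block:: console
# apt update
# apt dist-upgrade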

View File

@ -1,5 +1,5 @@
Host System Administration
--------------------------
==========================
`Proxmox Backup`_ is based on the famous Debian_ Linux
distribution. That means that you have access to the whole world of
@ -23,8 +23,4 @@ either explain things which are different on `Proxmox Backup`_, or
tasks which are commonly used on `Proxmox Backup`_. For other topics,
please refer to the standard Debian documentation.
ZFS
~~~
.. todo:: Add local ZFS admin guide (local.zfs.adoc)
.. include:: local-zfs.rst

6
docs/todos.rst Normal file
View File

@ -0,0 +1,6 @@
Documentation Todo List
=======================
This is an auto-generated list of the todo references in the documentation.
.. todolist::

View File

@ -7,7 +7,7 @@ DYNAMIC_UNITS := \
proxmox-backup.service \
proxmox-backup-proxy.service
all: $(UNITS) $(DYNAMIC_UNITS)
all: $(UNITS) $(DYNAMIC_UNITS) pbstest-beta.list
clean:
rm -f $(DYNAMIC_UNITS)

1
etc/pbstest-beta.list Normal file
View File

@ -0,0 +1 @@
deb http://download.proxmox.com/debian/pbs buster pbstest

View File

@ -17,7 +17,7 @@ async fn upload_speed() -> Result<usize, Error> {
let backup_time = chrono::Utc::now();
let client = BackupWriter::start(client, datastore, "host", "speedtest", backup_time, false).await?;
let client = BackupWriter::start(client, None, datastore, "host", "speedtest", backup_time, false).await?;
println!("start upload speed test");
let res = client.upload_speedtest().await?;

View File

@ -9,6 +9,7 @@ pub mod status;
pub mod types;
pub mod version;
pub mod pull;
mod helpers;
use proxmox::api::router::SubdirMap;
use proxmox::api::Router;

View File

@ -46,20 +46,20 @@ fn check_backup_owner(store: &DataStore, group: &BackupGroup, userid: &str) -> R
fn read_backup_index(store: &DataStore, backup_dir: &BackupDir) -> Result<Vec<BackupContent>, Error> {
let (manifest, index_size) = store.load_manifest(backup_dir)?;
let (manifest, manifest_crypt_mode, index_size) = store.load_manifest(backup_dir)?;
let mut result = Vec::new();
for item in manifest.files() {
result.push(BackupContent {
filename: item.filename.clone(),
encrypted: item.encrypted,
crypt_mode: Some(item.crypt_mode),
size: Some(item.size),
});
}
result.push(BackupContent {
filename: MANIFEST_BLOB_NAME.to_string(),
encrypted: Some(false),
crypt_mode: Some(manifest_crypt_mode),
size: Some(index_size),
});
@ -79,7 +79,11 @@ fn get_all_snapshot_files(
for file in &info.files {
if file_set.contains(file) { continue; }
files.push(BackupContent { filename: file.to_string(), size: None, encrypted: None });
files.push(BackupContent {
filename: file.to_string(),
size: None,
crypt_mode: None,
});
}
Ok(files)
@ -350,7 +354,15 @@ pub fn list_snapshots (
},
Err(err) => {
eprintln!("error during snapshot file listing: '{}'", err);
info.files.iter().map(|x| BackupContent { filename: x.to_string(), size: None, encrypted: None }).collect()
info
.files
.iter()
.map(|x| BackupContent {
filename: x.to_string(),
size: None,
crypt_mode: None,
})
.collect()
},
};
@ -394,6 +406,90 @@ pub fn status(
crate::tools::disks::disk_usage(&datastore.base_path())
}
#[api(
input: {
properties: {
store: {
schema: DATASTORE_SCHEMA,
},
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
optional: true,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
optional: true,
},
"backup-time": {
schema: BACKUP_TIME_SCHEMA,
optional: true,
},
},
},
returns: {
schema: UPID_SCHEMA,
},
access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_READ | PRIV_DATASTORE_BACKUP, true), // fixme
},
)]
/// Verify backups.
///
/// This function can verify a single backup snapshot, all backup from a backup group,
/// or all backups in the datastore.
pub fn verify(
store: String,
backup_type: Option<String>,
backup_id: Option<String>,
backup_time: Option<i64>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let worker_id;
let mut backup_dir = None;
let mut backup_group = None;
match (backup_type, backup_id, backup_time) {
(Some(backup_type), Some(backup_id), Some(backup_time)) => {
worker_id = format!("{}_{}_{}_{:08X}", store, backup_type, backup_id, backup_time);
let dir = BackupDir::new(backup_type, backup_id, backup_time);
backup_dir = Some(dir);
}
(Some(backup_type), Some(backup_id), None) => {
worker_id = format!("{}_{}_{}", store, backup_type, backup_id);
let group = BackupGroup::new(backup_type, backup_id);
backup_group = Some(group);
}
(None, None, None) => {
worker_id = store.clone();
}
_ => bail!("parameters do not specify a backup group or snapshot"),
}
let username = rpcenv.get_user().unwrap();
let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;
let upid_str = WorkerTask::new_thread(
"verify", Some(worker_id.clone()), &username, to_stdout, move |worker|
{
let success = if let Some(backup_dir) = backup_dir {
verify_backup_dir(&datastore, &backup_dir, &worker)?
} else if let Some(backup_group) = backup_group {
verify_backup_group(&datastore, &backup_group, &worker)?
} else {
verify_all_backups(&datastore, &worker)?
};
if !success {
bail!("verfication failed - please check the log for details");
}
Ok(())
})?;
Ok(json!(upid_str))
}
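// Editor's sketch (hypothetical invocation, not part of this patch): given the
// "verify" entry registered in DATASTORE_INFO_SUBDIRS below, the new endpoint
// should be reachable under the datastore admin path, roughly:
//
//     POST /api2/json/admin/datastore/{store}/verify
//
// optionally narrowed via the backup-type/backup-id/backup-time parameters;
// the exact base path is an assumption inferred from the router layout.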
#[macro_export]
macro_rules! add_common_prune_prameters {
( [ $( $list1:tt )* ] ) => {
@ -439,7 +535,7 @@ macro_rules! add_common_prune_prameters {
pub const API_RETURN_SCHEMA_PRUNE: Schema = ArraySchema::new(
"Returns the list of snapshots and a flag indicating if there are kept or removed.",
PruneListItem::API_SCHEMA
&PruneListItem::API_SCHEMA
).schema();
const API_METHOD_PRUNE: ApiMethod = ApiMethod::new(
@ -818,7 +914,7 @@ fn download_file_decoded(
let files = read_backup_index(&datastore, &backup_dir)?;
for file in files {
if file.filename == file_name && file.encrypted == Some(true) {
if file.filename == file_name && file.crypt_mode == Some(CryptMode::Encrypt) {
bail!("cannot decode '{}' - is encrypted", file_name);
}
}
@ -1261,6 +1357,11 @@ const DATASTORE_INFO_SUBDIRS: SubdirMap = &[
&Router::new()
.upload(&API_METHOD_UPLOAD_BACKUP_LOG)
),
(
"verify",
&Router::new()
.post(&API_METHOD_VERIFY)
),
];
const DATASTORE_INFO_ROUTER: Router = Router::new()

View File

@ -10,7 +10,7 @@ use proxmox::api::{ApiResponseFuture, ApiHandler, ApiMethod, Router, RpcEnvironm
use proxmox::api::router::SubdirMap;
use proxmox::api::schema::*;
use crate::tools::{self, WrappedReaderStream};
use crate::tools;
use crate::server::{WorkerTask, H2Service};
use crate::backup::*;
use crate::api2::types::*;
@ -199,7 +199,6 @@ pub const BACKUP_API_SUBDIRS: SubdirMap = &[
),
(
"dynamic_index", &Router::new()
.download(&API_METHOD_DYNAMIC_CHUNK_INDEX)
.post(&API_METHOD_CREATE_DYNAMIC_INDEX)
.put(&API_METHOD_DYNAMIC_APPEND)
),
@ -222,10 +221,13 @@ pub const BACKUP_API_SUBDIRS: SubdirMap = &[
),
(
"fixed_index", &Router::new()
.download(&API_METHOD_FIXED_CHUNK_INDEX)
.post(&API_METHOD_CREATE_FIXED_INDEX)
.put(&API_METHOD_FIXED_APPEND)
),
(
"previous", &Router::new()
.download(&API_METHOD_DOWNLOAD_PREVIOUS)
),
(
"speedtest", &Router::new()
.upload(&API_METHOD_UPLOAD_SPEEDTEST)
@ -610,20 +612,17 @@ fn finish_backup (
}
#[sortable]
pub const API_METHOD_DYNAMIC_CHUNK_INDEX: ApiMethod = ApiMethod::new(
&ApiHandler::AsyncHttp(&dynamic_chunk_index),
pub const API_METHOD_DOWNLOAD_PREVIOUS: ApiMethod = ApiMethod::new(
&ApiHandler::AsyncHttp(&download_previous),
&ObjectSchema::new(
r###"
Download the dynamic chunk index from the previous backup.
Simply returns an empty list if this is the first backup.
"### ,
"Download archive from previous backup.",
&sorted!([
("archive-name", false, &crate::api2::types::BACKUP_ARCHIVE_NAME_SCHEMA)
]),
)
);
fn dynamic_chunk_index(
fn download_previous(
_parts: Parts,
_req_body: Body,
param: Value,
@ -636,130 +635,38 @@ fn dynamic_chunk_index(
let archive_name = tools::required_string_param(&param, "archive-name")?.to_owned();
if !archive_name.ends_with(".didx") {
bail!("wrong archive extension: '{}'", archive_name);
}
let empty_response = {
Response::builder()
.status(StatusCode::OK)
.body(Body::empty())?
};
let last_backup = match &env.last_backup {
Some(info) => info,
None => return Ok(empty_response),
None => bail!("no previous backup"),
};
let mut path = last_backup.backup_dir.relative_path();
let mut path = env.datastore.snapshot_path(&last_backup.backup_dir);
path.push(&archive_name);
let index = match env.datastore.open_dynamic_reader(path) {
Ok(index) => index,
Err(_) => {
env.log(format!("there is no last backup for archive '{}'", archive_name));
return Ok(empty_response);
{
let index: Option<Box<dyn IndexFile>> = match archive_type(&archive_name)? {
ArchiveType::FixedIndex => {
let index = env.datastore.open_fixed_reader(&path)?;
Some(Box::new(index))
}
ArchiveType::DynamicIndex => {
let index = env.datastore.open_dynamic_reader(&path)?;
Some(Box::new(index))
}
_ => { None }
};
if let Some(index) = index {
env.log(format!("register chunks in '{}' from previous backup.", archive_name));
env.log(format!("download last backup index for archive '{}'", archive_name));
let count = index.index_count();
for pos in 0..count {
let info = index.chunk_info(pos)?;
let size = info.size() as u32;
env.register_chunk(info.digest, size)?;
for pos in 0..index.index_count() {
let info = index.chunk_info(pos).unwrap();
let size = info.range.end - info.range.start;
env.register_chunk(info.digest, size as u32)?;
}
}
}
let reader = DigestListEncoder::new(Box::new(index));
let stream = WrappedReaderStream::new(reader);
// fixme: set size, content type?
let response = http::Response::builder()
.status(200)
.body(Body::wrap_stream(stream))?;
Ok(response)
}.boxed()
}
#[sortable]
pub const API_METHOD_FIXED_CHUNK_INDEX: ApiMethod = ApiMethod::new(
&ApiHandler::AsyncHttp(&fixed_chunk_index),
&ObjectSchema::new(
r###"
Download the fixed chunk index from the previous backup.
Simply returns an empty list if this is the first backup.
"### ,
&sorted!([
("archive-name", false, &crate::api2::types::BACKUP_ARCHIVE_NAME_SCHEMA)
]),
)
);
fn fixed_chunk_index(
_parts: Parts,
_req_body: Body,
param: Value,
_info: &ApiMethod,
rpcenv: Box<dyn RpcEnvironment>,
) -> ApiResponseFuture {
async move {
let env: &BackupEnvironment = rpcenv.as_ref();
let archive_name = tools::required_string_param(&param, "archive-name")?.to_owned();
if !archive_name.ends_with(".fidx") {
bail!("wrong archive extension: '{}'", archive_name);
}
let empty_response = {
Response::builder()
.status(StatusCode::OK)
.body(Body::empty())?
};
let last_backup = match &env.last_backup {
Some(info) => info,
None => return Ok(empty_response),
};
let mut path = last_backup.backup_dir.relative_path();
path.push(&archive_name);
let index = match env.datastore.open_fixed_reader(path) {
Ok(index) => index,
Err(_) => {
env.log(format!("there is no last backup for archive '{}'", archive_name));
return Ok(empty_response);
}
};
env.log(format!("download last backup index for archive '{}'", archive_name));
let count = index.index_count();
let image_size = index.index_bytes();
for pos in 0..count {
let digest = index.index_digest(pos).unwrap();
// Note: last chunk can be smaller
let start = (pos*index.chunk_size) as u64;
let mut end = start + index.chunk_size as u64;
if end > image_size { end = image_size; }
let size = (end - start) as u32;
env.register_chunk(*digest, size)?;
}
let reader = DigestListEncoder::new(Box::new(index));
let stream = WrappedReaderStream::new(reader);
// fixme: set size, content type?
let response = http::Response::builder()
.status(200)
.body(Body::wrap_stream(stream))?;
Ok(response)
env.log(format!("download '{}' from previous backup.", archive_name));
crate::api2::helpers::create_download_response(path).await
}.boxed()
}

23
src/api2/helpers.rs Normal file
View File

@ -0,0 +1,23 @@
use std::path::PathBuf;
use anyhow::Error;
use futures::*;
use hyper::{Body, Response, StatusCode, header};
use proxmox::http_err;
pub async fn create_download_response(path: PathBuf) -> Result<Response<Body>, Error> {
let file = tokio::fs::File::open(path.clone())
.map_err(move |err| http_err!(BAD_REQUEST, format!("open file {:?} failed: {}", path.clone(), err)))
.await?;
let payload = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new())
.map_ok(|bytes| hyper::body::Bytes::from(bytes.freeze()));
let body = Body::wrap_stream(payload);
// fixme: set other headers ?
Ok(Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, "application/octet-stream")
.body(body)
.unwrap())
}

View File

@ -41,6 +41,9 @@ pub const ZFS_ASHIFT_SCHEMA: Schema = IntegerSchema::new(
.default(12)
.schema();
pub const ZPOOL_NAME_SCHEMA: Schema =StringSchema::new("ZFS Pool Name")
.format(&ApiStringFormat::Pattern(&ZPOOL_NAME_REGEX))
.schema();
#[api(
default: "On",
@ -157,7 +160,7 @@ pub fn list_zpools() -> Result<Vec<ZpoolListItem>, Error> {
schema: NODE_SCHEMA,
},
name: {
schema: DATASTORE_SCHEMA,
schema: ZPOOL_NAME_SCHEMA,
},
},
},

View File

@ -10,6 +10,7 @@ use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
use crate::api2::types::*;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_POWER_MANAGEMENT};
use crate::tools::cert::CertInfo;
#[api(
input: {
@ -46,14 +47,24 @@ use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_POWER_MANAGEMENT};
description: "Total CPU usage since last query.",
optional: true,
},
}
info: {
type: Object,
description: "contains node information",
properties: {
fingerprint: {
description: "The SSL Fingerprint",
type: String,
},
},
},
},
},
access: {
permission: &Permission::Privilege(&["system", "status"], PRIV_SYS_AUDIT, false),
},
)]
/// Read node memory, CPU and (root) disk usage
fn get_usage(
fn get_status(
_param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
@ -63,6 +74,10 @@ fn get_usage(
let kstat: procfs::ProcFsStat = procfs::read_proc_stat()?;
let disk_usage = crate::tools::disks::disk_usage(Path::new("/"))?;
// get fingerprint
let cert = CertInfo::new()?;
let fp = cert.fingerprint()?;
Ok(json!({
"memory": {
"total": meminfo.memtotal,
@ -74,7 +89,10 @@ fn get_usage(
"total": disk_usage.total,
"used": disk_usage.used,
"free": disk_usage.avail,
}
},
"info": {
"fingerprint": fp,
},
}))
}
@ -122,5 +140,5 @@ fn reboot_or_shutdown(command: NodePowerCommand) -> Result<(), Error> {
}
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_GET_USAGE)
.get(&API_METHOD_GET_STATUS)
.post(&API_METHOD_REBOOT_OR_SHUTDOWN);

View File

@ -17,6 +17,7 @@ use crate::server::{WorkerTask, H2Service};
use crate::tools;
use crate::config::acl::PRIV_DATASTORE_READ;
use crate::config::cached_user_info::CachedUserInfo;
use crate::api2::helpers;
mod environment;
use environment::*;
@ -187,26 +188,9 @@ fn download_file(
path.push(env.backup_dir.relative_path());
path.push(&file_name);
let path2 = path.clone();
let path3 = path.clone();
env.log(format!("download {:?}", path.clone()));
let file = tokio::fs::File::open(path)
.map_err(move |err| http_err!(BAD_REQUEST, format!("open file {:?} failed: {}", path2, err)))
.await?;
env.log(format!("download {:?}", path3));
let payload = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new())
.map_ok(|bytes| hyper::body::Bytes::from(bytes.freeze()));
let body = Body::wrap_stream(payload);
// fixme: set other headers ?
Ok(Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, "application/octet-stream")
.body(body)
.unwrap())
helpers::create_download_response(path).await
}.boxed()
}

View File

@ -5,6 +5,8 @@ use proxmox::api::{api, schema::*};
use proxmox::const_regex;
use proxmox::{IPRE, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};
use crate::backup::CryptMode;
// File names: may not contain slashes, may not start with "."
pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {
if name.starts_with('.') {
@ -76,6 +78,8 @@ const_regex!{
pub ACL_PATH_REGEX = concat!(r"^(?:/|", r"(?:/", PROXMOX_SAFE_ID_REGEX_STR!(), ")+", r")$");
pub BLOCKDEVICE_NAME_REGEX = r"^(:?(:?h|s|x?v)d[a-z]+)|(:?nvme\d+n\d+)$";
pub ZPOOL_NAME_REGEX = r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$";
}
pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat =
@ -496,6 +500,10 @@ pub const PRUNE_SCHEMA_KEEP_YEARLY: Schema = IntegerSchema::new(
"filename": {
schema: BACKUP_ARCHIVE_NAME_SCHEMA,
},
"crypt-mode": {
type: CryptMode,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
@ -503,9 +511,9 @@ pub const PRUNE_SCHEMA_KEEP_YEARLY: Schema = IntegerSchema::new(
/// Basic information about archive files inside a backup snapshot.
pub struct BackupContent {
pub filename: String,
/// Info if file is encrypted (or empty if we do not have that info)
/// Info if file is encrypted, signed, or neither.
#[serde(skip_serializing_if="Option::is_none")]
pub encrypted: Option<bool>,
pub crypt_mode: Option<CryptMode>,
/// Archive size (from backup manifest).
#[serde(skip_serializing_if="Option::is_none")]
pub size: Option<u64>,

View File

@ -198,6 +198,9 @@ pub use prune::*;
mod datastore;
pub use datastore::*;
mod verify;
pub use verify::*;
mod catalog_shell;
pub use catalog_shell::*;

View File

@ -44,9 +44,10 @@ impl<S: AsyncReadChunk, I: IndexFile> AsyncIndexReader<S, I> {
}
}
impl<S, I> AsyncRead for AsyncIndexReader<S, I> where
S: AsyncReadChunk + Unpin + 'static,
I: IndexFile + Unpin
impl<S, I> AsyncRead for AsyncIndexReader<S, I>
where
S: AsyncReadChunk + Unpin + Sync + 'static,
I: IndexFile + Unpin,
{
fn poll_read(
self: Pin<&mut Self>,
@ -74,11 +75,11 @@ I: IndexFile + Unpin
this.current_chunk_digest = digest;
let mut store = match this.store.take() {
let store = match this.store.take() {
Some(store) => store,
None => {
return Poll::Ready(Err(io_format_err!("could not find store")));
},
}
};
let future = async move {
@ -88,7 +89,7 @@ I: IndexFile + Unpin
};
this.state = AsyncIndexReaderState::WaitForData(future.boxed());
},
}
AsyncIndexReaderState::WaitForData(ref mut future) => {
match ready!(future.as_mut().poll(cx)) {
Ok((store, mut chunk_data)) => {
@ -96,12 +97,12 @@ I: IndexFile + Unpin
this.read_buffer.append(&mut chunk_data);
this.state = AsyncIndexReaderState::HaveData(0);
this.store = Some(store);
},
}
Err(err) => {
return Poll::Ready(Err(io_err_other(err)));
},
}
};
},
}
AsyncIndexReaderState::HaveData(offset) => {
let offset = *offset;
let len = this.read_buffer.len();
@ -111,7 +112,7 @@ I: IndexFile + Unpin
buf.len()
};
buf[0..n].copy_from_slice(&this.read_buffer[offset..offset+n]);
buf[0..n].copy_from_slice(&this.read_buffer[offset..(offset + n)]);
if offset + n == len {
this.state = AsyncIndexReaderState::NoData;
this.current_chunk_idx += 1;
@ -120,7 +121,7 @@ I: IndexFile + Unpin
}
return Poll::Ready(Ok(n));
},
}
}
}
}

View File

@ -141,6 +141,14 @@ impl BackupGroup {
}
}
impl std::fmt::Display for BackupGroup {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let backup_type = self.backup_type();
let id = self.backup_id();
write!(f, "{}/{}", backup_type, id)
}
}
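// Editor's note (illustrative): with this impl, a group created as
// BackupGroup::new("vm", "100") formats via "{}" as "vm/100".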
impl std::str::FromStr for BackupGroup {
type Err = Error;

View File

@ -80,8 +80,9 @@ impl ChunkStore {
let default_options = CreateOptions::new();
if let Err(err) = create_path(&base, Some(default_options.clone()), Some(options.clone())) {
bail!("unable to create chunk store '{}' at {:?} - {}", name, base, err);
match create_path(&base, Some(default_options.clone()), Some(options.clone())) {
Err(err) => bail!("unable to create chunk store '{}' at {:?} - {}", name, base, err),
Ok(res) => if ! res { nix::unistd::chown(&base, Some(uid), Some(gid))? },
}
if let Err(err) = create_dir(&chunk_dir, options.clone()) {

View File

@ -6,12 +6,30 @@
//! See the Wikipedia article on [Authenticated
//! encryption](https://en.wikipedia.org/wiki/Authenticated_encryption)
//! for a short introduction.
use anyhow::{bail, Error};
use openssl::pkcs5::pbkdf2_hmac;
use openssl::hash::MessageDigest;
use openssl::symm::{decrypt_aead, Cipher, Crypter, Mode};
use std::io::Write;
use anyhow::{bail, Error};
use chrono::{Local, TimeZone, DateTime};
use openssl::hash::MessageDigest;
use openssl::pkcs5::pbkdf2_hmac;
use openssl::symm::{decrypt_aead, Cipher, Crypter, Mode};
use serde::{Deserialize, Serialize};
use proxmox::api::api;
#[api(default: "encrypt")]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
/// Defines whether data is encrypted (using an AEAD cipher), only signed, or neither.
pub enum CryptMode {
/// Don't encrypt.
None,
/// Encrypt.
Encrypt,
/// Only sign.
SignOnly,
}
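// Editor's note (illustrative): with `rename_all = "kebab-case"`, these
// variants serialize as "none", "encrypt" and "sign-only"; e.g.
// serde_json::to_string(&CryptMode::SignOnly) yields "\"sign-only\"".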
/// Encryption Configuration with secret key
///
@ -26,7 +44,6 @@ pub struct CryptConfig {
id_pkey: openssl::pkey::PKey<openssl::pkey::Private>,
// The private key used by the cipher.
enc_key: [u8; 32],
}
impl CryptConfig {
@ -63,10 +80,9 @@ impl CryptConfig {
/// chunk digest values do not clash with values computed for
/// other secret keys.
pub fn compute_digest(&self, data: &[u8]) -> [u8; 32] {
// FIXME: use HMAC-SHA256 instead??
let mut hasher = openssl::sha::Sha256::new();
hasher.update(&self.id_key);
hasher.update(data);
hasher.update(&self.id_key); // at the end, to avoid length extension attacks
hasher.finish()
}

View File

@ -3,10 +3,10 @@ use std::convert::TryInto;
use proxmox::tools::io::{ReadExt, WriteExt};
const MAX_BLOB_SIZE: usize = 128*1024*1024;
use super::file_formats::*;
use super::CryptConfig;
use super::{CryptConfig, CryptMode};
const MAX_BLOB_SIZE: usize = 128*1024*1024;
/// Encoded data chunk with digest and positional information
pub struct ChunkInfo {
@ -166,6 +166,19 @@ impl DataBlob {
Ok(blob)
}
/// Get the encryption mode for this blob.
pub fn crypt_mode(&self) -> Result<CryptMode, Error> {
let magic = self.magic();
Ok(if magic == &UNCOMPRESSED_BLOB_MAGIC_1_0 || magic == &COMPRESSED_BLOB_MAGIC_1_0 {
CryptMode::None
} else if magic == &ENCR_COMPR_BLOB_MAGIC_1_0 || magic == &ENCRYPTED_BLOB_MAGIC_1_0 {
CryptMode::Encrypt
} else {
bail!("Invalid blob magic number.");
})
}
/// Decode blob data
pub fn decode(&self, config: Option<&CryptConfig>) -> Result<Vec<u8>, Error> {
@ -194,75 +207,11 @@ impl DataBlob {
} else {
bail!("unable to decrypt blob - missing CryptConfig");
}
} else if magic == &AUTH_COMPR_BLOB_MAGIC_1_0 || magic == &AUTHENTICATED_BLOB_MAGIC_1_0 {
let header_len = std::mem::size_of::<AuthenticatedDataBlobHeader>();
let head = unsafe {
(&self.raw_data[..header_len]).read_le_value::<AuthenticatedDataBlobHeader>()?
};
let data_start = std::mem::size_of::<AuthenticatedDataBlobHeader>();
// Note: only verify if we have a crypt config
if let Some(config) = config {
let signature = config.compute_auth_tag(&self.raw_data[data_start..]);
if signature != head.tag {
bail!("verifying blob signature failed");
}
}
if magic == &AUTH_COMPR_BLOB_MAGIC_1_0 {
let data = zstd::block::decompress(&self.raw_data[data_start..], 16*1024*1024)?;
Ok(data)
} else {
Ok(self.raw_data[data_start..].to_vec())
}
} else {
bail!("Invalid blob magic number.");
}
}
/// Create a signed DataBlob, optionally compressed
pub fn create_signed(
data: &[u8],
config: &CryptConfig,
compress: bool,
) -> Result<Self, Error> {
if data.len() > MAX_BLOB_SIZE {
bail!("data blob too large ({} bytes).", data.len());
}
let compr_data;
let (_compress, data, magic) = if compress {
compr_data = zstd::block::compress(data, 1)?;
// Note: We only use compression if result is shorter
if compr_data.len() < data.len() {
(true, &compr_data[..], AUTH_COMPR_BLOB_MAGIC_1_0)
} else {
(false, data, AUTHENTICATED_BLOB_MAGIC_1_0)
}
} else {
(false, data, AUTHENTICATED_BLOB_MAGIC_1_0)
};
let header_len = std::mem::size_of::<AuthenticatedDataBlobHeader>();
let mut raw_data = Vec::with_capacity(data.len() + header_len);
let head = AuthenticatedDataBlobHeader {
head: DataBlobHeader { magic, crc: [0; 4] },
tag: config.compute_auth_tag(data),
};
unsafe {
raw_data.write_le_value(head)?;
}
raw_data.extend_from_slice(data);
let mut blob = DataBlob { raw_data };
blob.set_crc(blob.compute_crc());
Ok(blob)
}
/// Load blob from ``reader``
pub fn load(reader: &mut dyn std::io::Read) -> Result<Self, Error> {
@ -294,14 +243,6 @@ impl DataBlob {
let blob = DataBlob { raw_data: data };
Ok(blob)
} else if magic == AUTH_COMPR_BLOB_MAGIC_1_0 || magic == AUTHENTICATED_BLOB_MAGIC_1_0 {
if data.len() < std::mem::size_of::<AuthenticatedDataBlobHeader>() {
bail!("authenticated blob too small ({} bytes).", data.len());
}
let blob = DataBlob { raw_data: data };
Ok(blob)
} else {
bail!("unable to parse raw blob - wrong magic");
@ -376,7 +317,7 @@ impl <'a, 'b> DataChunkBuilder<'a, 'b> {
/// Set encryption Configuration
///
/// If set, chunks are encrypted.
/// If set, chunks are encrypted
pub fn crypt_config(mut self, value: &'b CryptConfig) -> Self {
if self.digest_computed {
panic!("unable to set crypt_config after compute_digest().");
@ -415,12 +356,7 @@ impl <'a, 'b> DataChunkBuilder<'a, 'b> {
self.compute_digest();
}
let chunk = DataBlob::encode(
self.orig_data,
self.config,
self.compress,
)?;
let chunk = DataBlob::encode(self.orig_data, self.config, self.compress)?;
Ok((chunk, self.digest))
}

View File

@ -1,4 +1,4 @@
use anyhow::{bail, Error};
use anyhow::{bail, format_err, Error};
use std::sync::Arc;
use std::io::{Read, BufReader};
use proxmox::tools::io::ReadExt;
@ -8,8 +8,6 @@ use super::*;
enum BlobReaderState<R: Read> {
Uncompressed { expected_crc: u32, csum_reader: ChecksumReader<R> },
Compressed { expected_crc: u32, decompr: zstd::stream::read::Decoder<BufReader<ChecksumReader<R>>> },
Signed { expected_crc: u32, expected_hmac: [u8; 32], csum_reader: ChecksumReader<R> },
SignedCompressed { expected_crc: u32, expected_hmac: [u8; 32], decompr: zstd::stream::read::Decoder<BufReader<ChecksumReader<R>>> },
Encrypted { expected_crc: u32, decrypt_reader: CryptReader<BufReader<ChecksumReader<R>>> },
EncryptedCompressed { expected_crc: u32, decompr: zstd::stream::read::Decoder<BufReader<CryptReader<BufReader<ChecksumReader<R>>>>> },
}
@ -41,40 +39,26 @@ impl <R: Read> DataBlobReader<R> {
let decompr = zstd::stream::read::Decoder::new(csum_reader)?;
Ok(Self { state: BlobReaderState::Compressed { expected_crc, decompr }})
}
AUTHENTICATED_BLOB_MAGIC_1_0 => {
let expected_crc = u32::from_le_bytes(head.crc);
let mut expected_hmac = [0u8; 32];
reader.read_exact(&mut expected_hmac)?;
let csum_reader = ChecksumReader::new(reader, config);
Ok(Self { state: BlobReaderState::Signed { expected_crc, expected_hmac, csum_reader }})
}
AUTH_COMPR_BLOB_MAGIC_1_0 => {
let expected_crc = u32::from_le_bytes(head.crc);
let mut expected_hmac = [0u8; 32];
reader.read_exact(&mut expected_hmac)?;
let csum_reader = ChecksumReader::new(reader, config);
let decompr = zstd::stream::read::Decoder::new(csum_reader)?;
Ok(Self { state: BlobReaderState::SignedCompressed { expected_crc, expected_hmac, decompr }})
}
ENCRYPTED_BLOB_MAGIC_1_0 => {
let config = config.ok_or_else(|| format_err!("unable to read encrypted blob without key"))?;
let expected_crc = u32::from_le_bytes(head.crc);
let mut iv = [0u8; 16];
let mut expected_tag = [0u8; 16];
reader.read_exact(&mut iv)?;
reader.read_exact(&mut expected_tag)?;
let csum_reader = ChecksumReader::new(reader, None);
let decrypt_reader = CryptReader::new(BufReader::with_capacity(64*1024, csum_reader), iv, expected_tag, config.unwrap())?;
let decrypt_reader = CryptReader::new(BufReader::with_capacity(64*1024, csum_reader), iv, expected_tag, config)?;
Ok(Self { state: BlobReaderState::Encrypted { expected_crc, decrypt_reader }})
}
ENCR_COMPR_BLOB_MAGIC_1_0 => {
let config = config.ok_or_else(|| format_err!("unable to read encrypted blob without key"))?;
let expected_crc = u32::from_le_bytes(head.crc);
let mut iv = [0u8; 16];
let mut expected_tag = [0u8; 16];
reader.read_exact(&mut iv)?;
reader.read_exact(&mut expected_tag)?;
let csum_reader = ChecksumReader::new(reader, None);
let decrypt_reader = CryptReader::new(BufReader::with_capacity(64*1024, csum_reader), iv, expected_tag, config.unwrap())?;
let decrypt_reader = CryptReader::new(BufReader::with_capacity(64*1024, csum_reader), iv, expected_tag, config)?;
let decompr = zstd::stream::read::Decoder::new(decrypt_reader)?;
Ok(Self { state: BlobReaderState::EncryptedCompressed { expected_crc, decompr }})
}
@ -99,31 +83,6 @@ impl <R: Read> DataBlobReader<R> {
}
Ok(reader)
}
BlobReaderState::Signed { csum_reader, expected_crc, expected_hmac } => {
let (reader, crc, hmac) = csum_reader.finish()?;
if crc != expected_crc {
bail!("blob crc check failed");
}
if let Some(hmac) = hmac {
if hmac != expected_hmac {
bail!("blob signature check failed");
}
}
Ok(reader)
}
BlobReaderState::SignedCompressed { expected_crc, expected_hmac, decompr } => {
let csum_reader = decompr.finish().into_inner();
let (reader, crc, hmac) = csum_reader.finish()?;
if crc != expected_crc {
bail!("blob crc check failed");
}
if let Some(hmac) = hmac {
if hmac != expected_hmac {
bail!("blob signature check failed");
}
}
Ok(reader)
}
BlobReaderState::Encrypted { expected_crc, decrypt_reader } => {
let csum_reader = decrypt_reader.finish()?.into_inner();
let (reader, crc, _) = csum_reader.finish()?;
@ -155,12 +114,6 @@ impl <R: Read> Read for DataBlobReader<R> {
BlobReaderState::Compressed { decompr, .. } => {
decompr.read(buf)
}
BlobReaderState::Signed { csum_reader, .. } => {
csum_reader.read(buf)
}
BlobReaderState::SignedCompressed { decompr, .. } => {
decompr.read(buf)
}
BlobReaderState::Encrypted { decrypt_reader, .. } => {
decrypt_reader.read(buf)
}

View File

@ -8,8 +8,6 @@ use super::*;
enum BlobWriterState<W: Write> {
Uncompressed { csum_writer: ChecksumWriter<W> },
Compressed { compr: zstd::stream::write::Encoder<ChecksumWriter<W>> },
Signed { csum_writer: ChecksumWriter<W> },
SignedCompressed { compr: zstd::stream::write::Encoder<ChecksumWriter<W>> },
Encrypted { crypt_writer: CryptWriter<ChecksumWriter<W>> },
EncryptedCompressed { compr: zstd::stream::write::Encoder<CryptWriter<ChecksumWriter<W>>> },
}
@ -42,33 +40,6 @@ impl <W: Write + Seek> DataBlobWriter<W> {
Ok(Self { state: BlobWriterState::Compressed { compr }})
}
pub fn new_signed(mut writer: W, config: Arc<CryptConfig>) -> Result<Self, Error> {
writer.seek(SeekFrom::Start(0))?;
let head = AuthenticatedDataBlobHeader {
head: DataBlobHeader { magic: AUTHENTICATED_BLOB_MAGIC_1_0, crc: [0; 4] },
tag: [0u8; 32],
};
unsafe {
writer.write_le_value(head)?;
}
let csum_writer = ChecksumWriter::new(writer, Some(config));
Ok(Self { state: BlobWriterState::Signed { csum_writer }})
}
pub fn new_signed_compressed(mut writer: W, config: Arc<CryptConfig>) -> Result<Self, Error> {
writer.seek(SeekFrom::Start(0))?;
let head = AuthenticatedDataBlobHeader {
head: DataBlobHeader { magic: AUTH_COMPR_BLOB_MAGIC_1_0, crc: [0; 4] },
tag: [0u8; 32],
};
unsafe {
writer.write_le_value(head)?;
}
let csum_writer = ChecksumWriter::new(writer, Some(config));
let compr = zstd::stream::write::Encoder::new(csum_writer, 1)?;
Ok(Self { state: BlobWriterState::SignedCompressed { compr }})
}
pub fn new_encrypted(mut writer: W, config: Arc<CryptConfig>) -> Result<Self, Error> {
writer.seek(SeekFrom::Start(0))?;
let head = EncryptedDataBlobHeader {
@ -129,37 +100,6 @@ impl <W: Write + Seek> DataBlobWriter<W> {
Ok(writer)
}
BlobWriterState::Signed { csum_writer } => {
let (mut writer, crc, tag) = csum_writer.finish()?;
let head = AuthenticatedDataBlobHeader {
head: DataBlobHeader { magic: AUTHENTICATED_BLOB_MAGIC_1_0, crc: crc.to_le_bytes() },
tag: tag.unwrap(),
};
writer.seek(SeekFrom::Start(0))?;
unsafe {
writer.write_le_value(head)?;
}
Ok(writer)
}
BlobWriterState::SignedCompressed { compr } => {
let csum_writer = compr.finish()?;
let (mut writer, crc, tag) = csum_writer.finish()?;
let head = AuthenticatedDataBlobHeader {
head: DataBlobHeader { magic: AUTH_COMPR_BLOB_MAGIC_1_0, crc: crc.to_le_bytes() },
tag: tag.unwrap(),
};
writer.seek(SeekFrom::Start(0))?;
unsafe {
writer.write_le_value(head)?;
}
Ok(writer)
}
BlobWriterState::Encrypted { crypt_writer } => {
let (csum_writer, iv, tag) = crypt_writer.finish()?;
let (mut writer, crc, _) = csum_writer.finish()?;
@ -203,12 +143,6 @@ impl <W: Write + Seek> Write for DataBlobWriter<W> {
BlobWriterState::Compressed { ref mut compr } => {
compr.write(buf)
}
BlobWriterState::Signed { ref mut csum_writer } => {
csum_writer.write(buf)
}
BlobWriterState::SignedCompressed { ref mut compr } => {
compr.write(buf)
}
BlobWriterState::Encrypted { ref mut crypt_writer } => {
crypt_writer.write(buf)
}
@ -226,12 +160,6 @@ impl <W: Write + Seek> Write for DataBlobWriter<W> {
BlobWriterState::Compressed { ref mut compr } => {
compr.flush()
}
BlobWriterState::Signed { ref mut csum_writer } => {
csum_writer.flush()
}
BlobWriterState::SignedCompressed { ref mut compr } => {
compr.flush()
}
BlobWriterState::Encrypted { ref mut crypt_writer } => {
crypt_writer.flush()
}

View File

@ -15,6 +15,7 @@ use super::fixed_index::{FixedIndexReader, FixedIndexWriter};
use super::manifest::{MANIFEST_BLOB_NAME, CLIENT_LOG_BLOB_NAME, BackupManifest};
use super::index::*;
use super::{DataBlob, ArchiveType, archive_type};
use crate::backup::CryptMode;
use crate::config::datastore;
use crate::server::WorkerTask;
use crate::tools;
@ -494,9 +495,13 @@ impl DataStore {
Ok((blob, raw_size))
}
-pub fn load_manifest(&self, backup_dir: &BackupDir) -> Result<(BackupManifest, u64), Error> {
+pub fn load_manifest(
+&self,
+backup_dir: &BackupDir,
+) -> Result<(BackupManifest, CryptMode, u64), Error> {
let (blob, raw_size) = self.load_blob(backup_dir, MANIFEST_BLOB_NAME)?;
+let crypt_mode = blob.crypt_mode()?;
let manifest = BackupManifest::try_from(blob)?;
-Ok((manifest, raw_size))
+Ok((manifest, crypt_mode, raw_size))
}
}

View File

@ -4,7 +4,7 @@ use std::ops::Range;
use std::os::unix::io::AsRawFd;
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex};
-use std::task::{Context, Poll};
+use std::task::Context;
use std::pin::Pin;
use anyhow::{bail, format_err, Error};
@ -13,6 +13,7 @@ use proxmox::tools::io::ReadExt;
use proxmox::tools::uuid::Uuid;
use proxmox::tools::vec;
use proxmox::tools::mmap::Mmap;
use pxar::accessor::{MaybeReady, ReadAt, ReadAtOperation};
use super::chunk_stat::ChunkStat;
use super::chunk_store::ChunkStore;
@ -123,25 +124,6 @@ impl DynamicIndexReader {
})
}
#[allow(clippy::cast_ptr_alignment)]
pub fn chunk_info(&self, pos: usize) -> Result<ChunkReadInfo, Error> {
if pos >= self.index.len() {
bail!("chunk index out of range");
}
let start = if pos == 0 {
0
} else {
self.index[pos - 1].end()
};
let end = self.index[pos].end();
Ok(ChunkReadInfo {
range: start..end,
digest: self.index[pos].digest.clone(),
})
}
#[inline]
#[allow(clippy::cast_ptr_alignment)]
fn chunk_end(&self, pos: usize) -> u64 {
@ -159,24 +141,6 @@ impl DynamicIndexReader {
&self.index[pos].digest
}
/// Compute checksum and data size
pub fn compute_csum(&self) -> ([u8; 32], u64) {
let mut csum = openssl::sha::Sha256::new();
for entry in &self.index {
csum.update(&entry.end_le.to_ne_bytes());
csum.update(&entry.digest);
}
let csum = csum.finish();
(
csum,
self.index
.last()
.map(|entry| entry.end())
.unwrap_or(0)
)
}
// TODO: can we use std::slice::binary_search with Mmap now?
fn binary_search(
&self,
@ -224,6 +188,34 @@ impl IndexFile for DynamicIndexReader {
self.chunk_end(self.index.len() - 1)
}
}
fn compute_csum(&self) -> ([u8; 32], u64) {
let mut csum = openssl::sha::Sha256::new();
let mut chunk_end = 0;
for pos in 0..self.index_count() {
let info = self.chunk_info(pos).unwrap();
chunk_end = info.range.end;
csum.update(&chunk_end.to_le_bytes());
csum.update(&info.digest);
}
let csum = csum.finish();
(csum, chunk_end)
}
#[allow(clippy::cast_ptr_alignment)]
fn chunk_info(&self, pos: usize) -> Option<ChunkReadInfo> {
if pos >= self.index.len() {
return None;
}
let start = if pos == 0 { 0 } else { self.index[pos - 1].end() };
let end = self.index[pos].end();
Some(ChunkReadInfo {
range: start..end,
digest: self.index[pos].digest.clone(),
})
}
}
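Worth noting how chunk_info() derives ranges: a dynamic index stores only cumulative end offsets, so each chunk's start is the previous entry's end. A minimal standalone sketch of that derivation (toy data, not the real mmap-backed reader):

// Toy model of the offset derivation used by chunk_info() above.
// `ends` are cumulative end offsets, as stored in a dynamic index.
fn chunk_range(ends: &[u64], pos: usize) -> Option<std::ops::Range<u64>> {
    if pos >= ends.len() {
        return None; // out of range, mirroring the Option return above
    }
    let start = if pos == 0 { 0 } else { ends[pos - 1] };
    Some(start..ends[pos])
}

fn main() {
    let ends = [100, 250, 300];
    assert_eq!(chunk_range(&ends, 0), Some(0..100));
    assert_eq!(chunk_range(&ends, 1), Some(100..250));
    assert_eq!(chunk_range(&ends, 3), None);
}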
struct CachedChunk {
@ -263,7 +255,10 @@ struct ChunkCacher<'a, S> {
impl<'a, S: ReadChunk> crate::tools::lru_cache::Cacher<usize, CachedChunk> for ChunkCacher<'a, S> {
fn fetch(&mut self, index: usize) -> Result<Option<CachedChunk>, Error> {
-let info = self.index.chunk_info(index)?;
+let info = match self.index.chunk_info(index) {
+Some(info) => info,
+None => bail!("chunk index out of range"),
+};
let range = info.range;
let data = self.store.read_chunk(&info.digest)?;
CachedChunk::new(range, data).map(Some)
@ -416,19 +411,26 @@ impl<R: ReadChunk> LocalDynamicReadAt<R> {
}
}
-impl<R: ReadChunk> pxar::accessor::ReadAt for LocalDynamicReadAt<R> {
-fn poll_read_at(
-self: Pin<&Self>,
+impl<R: ReadChunk> ReadAt for LocalDynamicReadAt<R> {
+fn start_read_at<'a>(
+self: Pin<&'a Self>,
_cx: &mut Context,
-buf: &mut [u8],
+buf: &'a mut [u8],
offset: u64,
-) -> Poll<io::Result<usize>> {
+) -> MaybeReady<io::Result<usize>, ReadAtOperation<'a>> {
use std::io::Read;
-tokio::task::block_in_place(move || {
+MaybeReady::Ready(tokio::task::block_in_place(move || {
let mut reader = self.inner.lock().unwrap();
reader.seek(SeekFrom::Start(offset))?;
-Poll::Ready(Ok(reader.read(buf)?))
-})
+Ok(reader.read(buf)?)
+}))
}
+fn poll_complete<'a>(
+self: Pin<&'a Self>,
+_op: ReadAtOperation<'a>,
+) -> MaybeReady<io::Result<usize>, ReadAtOperation<'a>> {
+panic!("LocalDynamicReadAt::start_read_at returned Pending");
+}
}

View File

@ -17,12 +17,6 @@ pub const ENCRYPTED_BLOB_MAGIC_1_0: [u8; 8] = [123, 103, 133, 190, 34, 45, 76, 2
// openssl::sha::sha256(b"Proxmox Backup zstd compressed encrypted blob v1.0")[0..8]
pub const ENCR_COMPR_BLOB_MAGIC_1_0: [u8; 8] = [230, 89, 27, 191, 11, 191, 216, 11];
//openssl::sha::sha256(b"Proxmox Backup authenticated blob v1.0")[0..8]
pub const AUTHENTICATED_BLOB_MAGIC_1_0: [u8; 8] = [31, 135, 238, 226, 145, 206, 5, 2];
//openssl::sha::sha256(b"Proxmox Backup zstd compressed authenticated blob v1.0")[0..8]
pub const AUTH_COMPR_BLOB_MAGIC_1_0: [u8; 8] = [126, 166, 15, 190, 145, 31, 169, 96];
// openssl::sha::sha256(b"Proxmox Backup fixed sized chunk index v1.0")[0..8]
pub const FIXED_SIZED_CHUNK_INDEX_1_0: [u8; 8] = [47, 127, 65, 237, 145, 253, 15, 205];
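As the comments above state, each magic constant is the first eight bytes of the SHA-256 digest of a fixed label. A small sketch that recomputes one of them, assuming the openssl crate used elsewhere in this tree:

// Recompute ENCR_COMPR_BLOB_MAGIC_1_0 from its label, per the comment above.
fn main() {
    let digest = openssl::sha::sha256(b"Proxmox Backup zstd compressed encrypted blob v1.0");
    assert_eq!(&digest[0..8], &[230u8, 89, 27, 191, 11, 191, 216, 11][..]);
}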
@ -50,19 +44,6 @@ pub struct DataBlobHeader {
pub crc: [u8; 4],
}
/// Authenticated data blob binary storage format
///
/// The ``DataBlobHeader`` for authenticated blobs additionally contains
/// a 32 byte HMAC tag, followed by the data:
///
/// (MAGIC || CRC32 || TAG || Data).
#[derive(Endian)]
#[repr(C,packed)]
pub struct AuthenticatedDataBlobHeader {
pub head: DataBlobHeader,
pub tag: [u8; 32],
}
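Given the removed layout above, the authenticated header occupied 8 (magic) + 4 (crc) + 32 (tag) = 44 bytes before the data. A minimal size check with stand-in types (hypothetical, mirroring the packed repr shown):

// Stand-in types mirroring the removed header layout above.
#[repr(C, packed)]
#[allow(dead_code)]
struct ToyDataBlobHeader {
    magic: [u8; 8],
    crc: [u8; 4],
}

#[repr(C, packed)]
#[allow(dead_code)]
struct ToyAuthenticatedDataBlobHeader {
    head: ToyDataBlobHeader,
    tag: [u8; 32],
}

fn main() {
    // (MAGIC || CRC32 || TAG) = 8 + 4 + 32 = 44 bytes before the data
    assert_eq!(std::mem::size_of::<ToyAuthenticatedDataBlobHeader>(), 44);
}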
/// Encrypted data blob binary storage format
///
/// The ``DataBlobHeader`` for encrypted blobs additionally contains
@ -87,8 +68,6 @@ pub fn header_size(magic: &[u8; 8]) -> usize {
&COMPRESSED_BLOB_MAGIC_1_0 => std::mem::size_of::<DataBlobHeader>(),
&ENCRYPTED_BLOB_MAGIC_1_0 => std::mem::size_of::<EncryptedDataBlobHeader>(),
&ENCR_COMPR_BLOB_MAGIC_1_0 => std::mem::size_of::<EncryptedDataBlobHeader>(),
&AUTHENTICATED_BLOB_MAGIC_1_0 => std::mem::size_of::<AuthenticatedDataBlobHeader>(),
&AUTH_COMPR_BLOB_MAGIC_1_0 => std::mem::size_of::<AuthenticatedDataBlobHeader>(),
_ => panic!("unknown blob magic"),
}
}

View File

@ -1,10 +1,9 @@
use anyhow::{bail, format_err, Error};
use std::convert::TryInto;
use std::io::{Seek, SeekFrom};
use super::chunk_stat::*;
use super::chunk_store::*;
-use super::IndexFile;
+use super::{IndexFile, ChunkReadInfo};
use crate::tools::{self, epoch_now_u64};
use chrono::{Local, TimeZone};
@ -147,38 +146,6 @@ impl FixedIndexReader {
Ok(())
}
pub fn chunk_info(&self, pos: usize) -> Result<(u64, u64, [u8; 32]), Error> {
if pos >= self.index_length {
bail!("chunk index out of range");
}
let start = (pos * self.chunk_size) as u64;
let mut end = start + self.chunk_size as u64;
if end > self.size {
end = self.size;
}
let mut digest = std::mem::MaybeUninit::<[u8; 32]>::uninit();
unsafe {
std::ptr::copy_nonoverlapping(
self.index.add(pos * 32),
(*digest.as_mut_ptr()).as_mut_ptr(),
32,
);
}
Ok((start, end, unsafe { digest.assume_init() }))
}
#[inline]
fn chunk_digest(&self, pos: usize) -> &[u8; 32] {
if pos >= self.index_length {
panic!("chunk index out of range");
}
let slice = unsafe { std::slice::from_raw_parts(self.index.add(pos * 32), 32) };
slice.try_into().unwrap()
}
#[inline]
fn chunk_end(&self, pos: usize) -> u64 {
if pos >= self.index_length {
@ -193,20 +160,6 @@ impl FixedIndexReader {
}
}
/// Compute checksum and data size
pub fn compute_csum(&self) -> ([u8; 32], u64) {
let mut csum = openssl::sha::Sha256::new();
let mut chunk_end = 0;
for pos in 0..self.index_length {
chunk_end = self.chunk_end(pos);
let digest = self.chunk_digest(pos);
csum.update(digest);
}
let csum = csum.finish();
(csum, chunk_end)
}
pub fn print_info(&self) {
println!("Size: {}", self.size);
println!("ChunkSize: {}", self.chunk_size);
@ -234,6 +187,38 @@ impl IndexFile for FixedIndexReader {
fn index_bytes(&self) -> u64 {
self.size
}
fn chunk_info(&self, pos: usize) -> Option<ChunkReadInfo> {
if pos >= self.index_length {
return None;
}
let start = (pos * self.chunk_size) as u64;
let mut end = start + self.chunk_size as u64;
if end > self.size {
end = self.size;
}
let digest = self.index_digest(pos).unwrap();
Some(ChunkReadInfo {
range: start..end,
digest: *digest,
})
}
fn compute_csum(&self) -> ([u8; 32], u64) {
let mut csum = openssl::sha::Sha256::new();
let mut chunk_end = 0;
for pos in 0..self.index_count() {
let info = self.chunk_info(pos).unwrap();
chunk_end = info.range.end;
csum.update(&info.digest);
}
let csum = csum.finish();
(csum, chunk_end)
}
}
pub struct FixedIndexWriter {
@ -511,18 +496,17 @@ impl<S: ReadChunk> BufferedFixedReader<S> {
fn buffer_chunk(&mut self, idx: usize) -> Result<(), Error> {
let index = &self.index;
-let (start, end, digest) = index.chunk_info(idx)?;
+let info = match index.chunk_info(idx) {
+Some(info) => info,
+None => bail!("chunk index out of range"),
+};
// fixme: avoid copy
-let data = self.store.read_chunk(&digest)?;
-if (end - start) != data.len() as u64 {
-bail!(
-"read chunk with wrong size ({} != {}",
-(end - start),
-data.len()
-);
+let data = self.store.read_chunk(&info.digest)?;
+let size = info.range.end - info.range.start;
+if size != data.len() as u64 {
+bail!("read chunk with wrong size ({} != {})", size, data.len());
}
self.read_buffer.clear();
@ -530,8 +514,7 @@ impl<S: ReadChunk> BufferedFixedReader<S> {
self.buffered_chunk_idx = idx;
-self.buffered_chunk_start = start as u64;
-//println!("BUFFER {} {}", self.buffered_chunk_start, end);
+self.buffered_chunk_start = info.range.start as u64;
Ok(())
}
}

View File

@ -1,11 +1,5 @@
use std::collections::HashMap;
use std::ops::Range;
use std::pin::Pin;
use std::task::{Context, Poll};
use bytes::{Bytes, BytesMut};
use anyhow::{format_err, Error};
use futures::*;
pub struct ChunkReadInfo {
pub range: Range<u64>,
@ -26,6 +20,10 @@ pub trait IndexFile {
fn index_count(&self) -> usize;
fn index_digest(&self, pos: usize) -> Option<&[u8; 32]>;
fn index_bytes(&self) -> u64;
fn chunk_info(&self, pos: usize) -> Option<ChunkReadInfo>;
/// Compute index checksum and size
fn compute_csum(&self) -> ([u8; 32], u64);
/// Returns most often used chunks
fn find_most_used_chunks(&self, max: usize) -> HashMap<[u8; 32], usize> {
@ -59,111 +57,3 @@ pub trait IndexFile {
map
}
}
/// Encode digest list from an `IndexFile` into a binary stream
///
/// The reader simply returns a binary stream of 32 byte digest values.
pub struct DigestListEncoder {
index: Box<dyn IndexFile + Send + Sync>,
pos: usize,
count: usize,
}
impl DigestListEncoder {
pub fn new(index: Box<dyn IndexFile + Send + Sync>) -> Self {
let count = index.index_count();
Self { index, pos: 0, count }
}
}
impl std::io::Read for DigestListEncoder {
fn read(&mut self, buf: &mut [u8]) -> Result<usize, std::io::Error> {
if buf.len() < 32 {
panic!("read buffer too small");
}
if self.pos < self.count {
let mut written = 0;
loop {
let digest = self.index.index_digest(self.pos).unwrap();
buf[written..(written + 32)].copy_from_slice(digest);
self.pos += 1;
written += 32;
if self.pos >= self.count {
break;
}
if (written + 32) >= buf.len() {
break;
}
}
Ok(written)
} else {
Ok(0)
}
}
}
/// Decodes a Stream<Item=Bytes> into a Stream<Item=[u8; 32]>
///
/// The decoder simply yields a binary stream of 32 byte digest values.
pub struct DigestListDecoder<S: Unpin> {
input: S,
buffer: BytesMut,
}
impl<S: Unpin> DigestListDecoder<S> {
pub fn new(input: S) -> Self {
Self { input, buffer: BytesMut::new() }
}
}
impl<S: Unpin> Unpin for DigestListDecoder<S> {}
impl<S: Unpin, E> Stream for DigestListDecoder<S>
where
S: Stream<Item=Result<Bytes, E>>,
E: Into<Error>,
{
type Item = Result<[u8; 32], Error>;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {
let this = self.get_mut();
loop {
if this.buffer.len() >= 32 {
let left = this.buffer.split_to(32);
let mut digest = std::mem::MaybeUninit::<[u8; 32]>::uninit();
unsafe {
(*digest.as_mut_ptr()).copy_from_slice(&left[..]);
return Poll::Ready(Some(Ok(digest.assume_init())));
}
}
match Pin::new(&mut this.input).poll_next(cx) {
Poll::Pending => {
return Poll::Pending;
}
Poll::Ready(Some(Err(err))) => {
return Poll::Ready(Some(Err(err.into())));
}
Poll::Ready(Some(Ok(data))) => {
this.buffer.extend_from_slice(&data);
// continue
}
Poll::Ready(None) => {
let rest = this.buffer.len();
if rest == 0 {
return Poll::Ready(None);
}
return Poll::Ready(Some(Err(format_err!(
"got small digest ({} != 32).",
rest,
))));
}
}
}
}
}

View File

@ -1,4 +1,4 @@
-use anyhow::{bail, format_err, Error};
+use anyhow::{bail, format_err, Context, Error};
use serde::{Deserialize, Serialize};
use chrono::{Local, TimeZone, DateTime};
@ -146,12 +146,26 @@ pub fn encrypt_key_with_passphrase(
})
}
-pub fn load_and_decrypt_key(path: &std::path::Path, passphrase: &dyn Fn() -> Result<Vec<u8>, Error>) -> Result<([u8;32], DateTime<Local>), Error> {
+pub fn load_and_decrypt_key(
+path: &std::path::Path,
+passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
+) -> Result<([u8;32], DateTime<Local>), Error> {
+do_load_and_decrypt_key(path, passphrase)
+.with_context(|| format!("failed to load decryption key from {:?}", path))
+}
-let raw = file_get_contents(&path)?;
-let data = String::from_utf8(raw)?;
+fn do_load_and_decrypt_key(
+path: &std::path::Path,
+passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
+) -> Result<([u8;32], DateTime<Local>), Error> {
+decrypt_key(&file_get_contents(&path)?, passphrase)
+}
-let key_config: KeyConfig = serde_json::from_str(&data)?;
+pub fn decrypt_key(
+mut keydata: &[u8],
+passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
+) -> Result<([u8;32], DateTime<Local>), Error> {
+let key_config: KeyConfig = serde_json::from_reader(&mut keydata)?;
let raw_data = key_config.data;
let created = key_config.created;

View File

@ -3,22 +3,61 @@ use std::convert::TryFrom;
use std::path::Path;
use serde_json::{json, Value};
use ::serde::{Deserialize, Serialize};
-use crate::backup::BackupDir;
+use crate::backup::{BackupDir, CryptMode, CryptConfig};
pub const MANIFEST_BLOB_NAME: &str = "index.json.blob";
pub const CLIENT_LOG_BLOB_NAME: &str = "client.log.blob";
mod hex_csum {
use serde::{self, Deserialize, Serializer, Deserializer};
pub fn serialize<S>(
csum: &[u8; 32],
serializer: S,
) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let s = proxmox::tools::digest_to_hex(csum);
serializer.serialize_str(&s)
}
pub fn deserialize<'de, D>(
deserializer: D,
) -> Result<[u8; 32], D::Error>
where
D: Deserializer<'de>,
{
let s = String::deserialize(deserializer)?;
proxmox::tools::hex_to_digest(&s).map_err(serde::de::Error::custom)
}
}
fn crypt_mode_none() -> CryptMode { CryptMode::None }
fn empty_value() -> Value { json!({}) }
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
pub struct FileInfo {
pub filename: String,
-pub encrypted: Option<bool>,
+#[serde(default="crypt_mode_none")] // to be compatible with < 0.8.0 backups
+pub crypt_mode: CryptMode,
pub size: u64,
#[serde(with = "hex_csum")]
pub csum: [u8; 32],
}
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
pub struct BackupManifest {
-snapshot: BackupDir,
+backup_type: String,
+backup_id: String,
+backup_time: i64,
files: Vec<FileInfo>,
+#[serde(default="empty_value")] // to be compatible with < 0.8.0 backups
+pub unprotected: Value,
}
#[derive(PartialEq)]
@ -46,12 +85,18 @@ pub fn archive_type<P: AsRef<Path>>(
impl BackupManifest {
pub fn new(snapshot: BackupDir) -> Self {
-Self { files: Vec::new(), snapshot }
+Self {
+backup_type: snapshot.group().backup_type().into(),
+backup_id: snapshot.group().backup_id().into(),
+backup_time: snapshot.backup_time().timestamp(),
+files: Vec::new(),
+unprotected: json!({}),
+}
}
-pub fn add_file(&mut self, filename: String, size: u64, csum: [u8; 32], encrypted: Option<bool>) -> Result<(), Error> {
+pub fn add_file(&mut self, filename: String, size: u64, csum: [u8; 32], crypt_mode: CryptMode) -> Result<(), Error> {
let _archive_type = archive_type(&filename)?; // check type
-self.files.push(FileInfo { filename, size, csum, encrypted });
+self.files.push(FileInfo { filename, size, csum, crypt_mode });
Ok(())
}
@ -84,31 +129,111 @@ impl BackupManifest {
Ok(())
}
-pub fn into_json(self) -> Value {
-json!({
-"backup-type": self.snapshot.group().backup_type(),
-"backup-id": self.snapshot.group().backup_id(),
-"backup-time": self.snapshot.backup_time().timestamp(),
-"files": self.files.iter()
-.fold(Vec::new(), |mut acc, info| {
-let mut value = json!({
-"filename": info.filename,
-"encrypted": info.encrypted,
-"size": info.size,
-"csum": proxmox::tools::digest_to_hex(&info.csum),
-});
-if let Some(encrypted) = info.encrypted {
-value["encrypted"] = encrypted.into();
+// Generate canonical json
+fn to_canonical_json(value: &Value) -> Result<Vec<u8>, Error> {
+let mut data = Vec::new();
+Self::write_canonical_json(value, &mut data)?;
+Ok(data)
+}
-}
-acc.push(value);
-acc
-})
-})
fn write_canonical_json(value: &Value, output: &mut Vec<u8>) -> Result<(), Error> {
match value {
Value::Null => bail!("got unexpected null value"),
Value::String(_) | Value::Number(_) | Value::Bool(_) => {
serde_json::to_writer(output, &value)?;
}
Value::Array(list) => {
output.push(b'[');
let mut iter = list.iter();
if let Some(item) = iter.next() {
Self::write_canonical_json(item, output)?;
for item in iter {
output.push(b',');
Self::write_canonical_json(item, output)?;
}
}
output.push(b']');
}
Value::Object(map) => {
output.push(b'{');
let mut keys: Vec<&str> = map.keys().map(String::as_str).collect();
keys.sort();
let mut iter = keys.into_iter();
if let Some(key) = iter.next() {
output.extend(key.as_bytes());
output.push(b':');
Self::write_canonical_json(&map[key], output)?;
for key in iter {
output.push(b',');
output.extend(key.as_bytes());
output.push(b':');
Self::write_canonical_json(&map[key], output)?;
}
}
output.push(b'}');
}
}
Ok(())
}
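A standalone sketch of the canonicalization rules above (behavior inferred from write_canonical_json as shown): keys are sorted and written as raw unquoted bytes with no whitespace, so the result is a deterministic byte string for signing rather than strictly valid JSON:

use serde_json::{json, Value};

// Mirrors write_canonical_json() above: sorted keys, no whitespace,
// unquoted keys, scalars serialized via serde_json.
fn canonical(value: &Value, out: &mut Vec<u8>) {
    match value {
        Value::Null => panic!("unexpected null"), // the real code bails here
        Value::String(_) | Value::Number(_) | Value::Bool(_) => {
            serde_json::to_writer(&mut *out, value).unwrap();
        }
        Value::Array(list) => {
            out.push(b'[');
            for (i, item) in list.iter().enumerate() {
                if i > 0 { out.push(b','); }
                canonical(item, out);
            }
            out.push(b']');
        }
        Value::Object(map) => {
            out.push(b'{');
            let mut keys: Vec<&str> = map.keys().map(String::as_str).collect();
            keys.sort();
            for (i, key) in keys.iter().enumerate() {
                if i > 0 { out.push(b','); }
                out.extend(key.as_bytes());
                out.push(b':');
                canonical(&map[*key], out);
            }
            out.push(b'}');
        }
    }
}

fn main() {
    let mut out = Vec::new();
    canonical(&json!({"b": 1, "a": [true, "x"]}), &mut out);
    // key order is independent of insertion order
    assert_eq!(out, b"{a:[true,\"x\"],b:1}".to_vec());
}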
/// Compute manifest signature
///
/// Generated as an HMAC-SHA256 over the canonical json
/// representation. The 'unprotected' property is excluded.
pub fn signature(&self, crypt_config: &CryptConfig) -> Result<[u8; 32], Error> {
Self::json_signature(&serde_json::to_value(&self)?, crypt_config)
}
fn json_signature(data: &Value, crypt_config: &CryptConfig) -> Result<[u8; 32], Error> {
let mut signed_data = data.clone();
signed_data.as_object_mut().unwrap().remove("unprotected"); // exclude
signed_data.as_object_mut().unwrap().remove("signature"); // exclude
let canonical = Self::to_canonical_json(&signed_data)?;
let sig = crypt_config.compute_auth_tag(&canonical);
Ok(sig)
}
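compute_auth_tag() is not part of this diff; assuming it is a plain HMAC-SHA256 over the canonical bytes, as the doc comment above suggests, the equivalent openssl-rs call would look roughly like this (hmac_sha256 and the 32-byte auth key are illustrative, not the crate's API):

use openssl::hash::MessageDigest;
use openssl::pkey::PKey;
use openssl::sign::Signer;

// Hedged sketch: HMAC-SHA256 over canonical manifest bytes with a
// hypothetical 32-byte auth key; the real key derivation lives in CryptConfig.
fn hmac_sha256(key: &[u8; 32], canonical: &[u8]) -> Result<[u8; 32], openssl::error::ErrorStack> {
    let pkey = PKey::hmac(key)?;
    let mut signer = Signer::new(MessageDigest::sha256(), &pkey)?;
    signer.update(canonical)?;
    let out = signer.sign_to_vec()?;
    let mut tag = [0u8; 32];
    tag.copy_from_slice(&out);
    Ok(tag)
}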
/// Converts the manifest into a json string, and adds a signature if there is a crypt_config.
pub fn to_string(&self, crypt_config: Option<&CryptConfig>) -> Result<String, Error> {
let mut manifest = serde_json::to_value(&self)?;
if let Some(crypt_config) = crypt_config {
let sig = self.signature(crypt_config)?;
manifest["signature"] = proxmox::tools::digest_to_hex(&sig).into();
}
let manifest = serde_json::to_string_pretty(&manifest).unwrap().into();
Ok(manifest)
}
/// Try to read the manifest. This verifies the signature if there is a crypt_config.
pub fn from_data(data: &[u8], crypt_config: Option<&CryptConfig>) -> Result<BackupManifest, Error> {
let json: Value = serde_json::from_slice(data)?;
let signature = json["signature"].as_str().map(String::from);
if let Some(ref crypt_config) = crypt_config {
if let Some(signature) = signature {
let expected_signature = proxmox::tools::digest_to_hex(&Self::json_signature(&json, crypt_config)?);
if signature != expected_signature {
bail!("wrong signature in manifest");
}
} else {
// not signed: warn/fail?
}
}
let manifest: BackupManifest = serde_json::from_value(json)?;
Ok(manifest)
}
}
impl TryFrom<super::DataBlob> for BackupManifest {
type Error = Error;
@ -117,41 +242,50 @@ impl TryFrom<super::DataBlob> for BackupManifest {
.map_err(|err| format_err!("decode backup manifest blob failed - {}", err))?;
let json: Value = serde_json::from_slice(&data[..])
.map_err(|err| format_err!("unable to parse backup manifest json - {}", err))?;
-BackupManifest::try_from(json)
+let manifest: BackupManifest = serde_json::from_value(json)?;
+Ok(manifest)
}
}
-impl TryFrom<Value> for BackupManifest {
-type Error = Error;
-fn try_from(data: Value) -> Result<Self, Error> {
+#[test]
+fn test_manifest_signature() -> Result<(), Error> {
-use crate::tools::{required_string_property, required_integer_property, required_array_property};
+use crate::backup::{KeyDerivationConfig};
-proxmox::try_block!({
-let backup_type = required_string_property(&data, "backup-type")?;
-let backup_id = required_string_property(&data, "backup-id")?;
-let backup_time = required_integer_property(&data, "backup-time")?;
+let pw = b"test";
-let snapshot = BackupDir::new(backup_type, backup_id, backup_time);
+let kdf = KeyDerivationConfig::Scrypt {
+n: 65536,
+r: 8,
+p: 1,
+salt: Vec::new(),
+};
+let testkey = kdf.derive_key(pw)?;
+let crypt_config = CryptConfig::new(testkey)?;
+let snapshot: BackupDir = "host/elsa/2020-06-26T13:56:05Z".parse()?;
let mut manifest = BackupManifest::new(snapshot);
-for item in required_array_property(&data, "files")?.iter() {
-let filename = required_string_property(item, "filename")?.to_owned();
-let csum = required_string_property(item, "csum")?;
-let csum = proxmox::tools::hex_to_digest(csum)?;
-let size = required_integer_property(item, "size")? as u64;
-let encrypted = item["encrypted"].as_bool();
-manifest.add_file(filename, size, csum, encrypted)?;
-}
+manifest.add_file("test1.img.fidx".into(), 200, [1u8; 32], CryptMode::Encrypt)?;
+manifest.add_file("abc.blob".into(), 200, [2u8; 32], CryptMode::None)?;
-if manifest.files().is_empty() {
-bail!("manifest does not list any files.");
-}
+manifest.unprotected["note"] = "This is not protected by the signature.".into();
-Ok(manifest)
-}).map_err(|err: Error| format_err!("unable to parse backup manifest - {}", err))
+let text = manifest.to_string(Some(&crypt_config))?;
-}
+let manifest: Value = serde_json::from_str(&text)?;
+let signature = manifest["signature"].as_str().unwrap().to_string();
+assert_eq!(signature, "d7b446fb7db081662081d4b40fedd858a1d6307a5aff4ecff7d5bf4fd35679e9");
+let manifest: BackupManifest = serde_json::from_value(manifest)?;
+let expected_signature = proxmox::tools::digest_to_hex(&manifest.signature(&crypt_config)?);
+assert_eq!(signature, expected_signature);
+Ok(())
}

View File

@ -11,10 +11,10 @@ use super::datastore::DataStore;
/// The ReadChunk trait allows reading backup data chunks (local or remote)
pub trait ReadChunk {
/// Returns the encoded chunk data
-fn read_raw_chunk(&mut self, digest: &[u8; 32]) -> Result<DataBlob, Error>;
+fn read_raw_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error>;
/// Returns the decoded chunk data
-fn read_chunk(&mut self, digest: &[u8; 32]) -> Result<Vec<u8>, Error>;
+fn read_chunk(&self, digest: &[u8; 32]) -> Result<Vec<u8>, Error>;
}
#[derive(Clone)]
@ -33,7 +33,7 @@ impl LocalChunkReader {
}
impl ReadChunk for LocalChunkReader {
-fn read_raw_chunk(&mut self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
+fn read_raw_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
let (path, _) = self.store.chunk_path(digest);
let raw_data = proxmox::tools::fs::file_get_contents(&path)?;
let chunk = DataBlob::from_raw(raw_data)?;
@ -42,7 +42,7 @@ impl ReadChunk for LocalChunkReader {
Ok(chunk)
}
-fn read_chunk(&mut self, digest: &[u8; 32]) -> Result<Vec<u8>, Error> {
+fn read_chunk(&self, digest: &[u8; 32]) -> Result<Vec<u8>, Error> {
let chunk = ReadChunk::read_raw_chunk(self, digest)?;
let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref))?;
@ -56,20 +56,20 @@ impl ReadChunk for LocalChunkReader {
pub trait AsyncReadChunk: Send {
/// Returns the encoded chunk data
fn read_raw_chunk<'a>(
-&'a mut self,
+&'a self,
digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<DataBlob, Error>> + Send + 'a>>;
/// Returns the decoded chunk data
fn read_chunk<'a>(
-&'a mut self,
+&'a self,
digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<Vec<u8>, Error>> + Send + 'a>>;
}
impl AsyncReadChunk for LocalChunkReader {
fn read_raw_chunk<'a>(
-&'a mut self,
+&'a self,
digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<DataBlob, Error>> + Send + 'a>> {
Box::pin(async move{
@ -84,7 +84,7 @@ impl AsyncReadChunk for LocalChunkReader {
}
fn read_chunk<'a>(
-&'a mut self,
+&'a self,
digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<Vec<u8>, Error>> + Send + 'a>> {
Box::pin(async move {

src/backup/verify.rs (new file, 196 lines)
View File

@ -0,0 +1,196 @@
use anyhow::{bail, Error};
use crate::server::WorkerTask;
use super::{
DataStore, BackupGroup, BackupDir, BackupInfo, IndexFile,
ENCR_COMPR_BLOB_MAGIC_1_0, ENCRYPTED_BLOB_MAGIC_1_0,
FileInfo, ArchiveType, archive_type,
};
fn verify_blob(datastore: &DataStore, backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> {
let (blob, raw_size) = datastore.load_blob(backup_dir, &info.filename)?;
let csum = openssl::sha::sha256(blob.raw_data());
if raw_size != info.size {
bail!("wrong size ({} != {})", info.size, raw_size);
}
if csum != info.csum {
bail!("wrong index checksum");
}
blob.verify_crc()?;
let magic = blob.magic();
if magic == &ENCR_COMPR_BLOB_MAGIC_1_0 || magic == &ENCRYPTED_BLOB_MAGIC_1_0 {
return Ok(());
}
blob.decode(None)?;
Ok(())
}
fn verify_index_chunks(
datastore: &DataStore,
index: Box<dyn IndexFile>,
worker: &WorkerTask,
) -> Result<(), Error> {
for pos in 0..index.index_count() {
worker.fail_on_abort()?;
let info = index.chunk_info(pos).unwrap();
let size = info.range.end - info.range.start;
datastore.verify_stored_chunk(&info.digest, size)?;
}
Ok(())
}
fn verify_fixed_index(datastore: &DataStore, backup_dir: &BackupDir, info: &FileInfo, worker: &WorkerTask) -> Result<(), Error> {
let mut path = backup_dir.relative_path();
path.push(&info.filename);
let index = datastore.open_fixed_reader(&path)?;
let (csum, size) = index.compute_csum();
if size != info.size {
bail!("wrong size ({} != {})", info.size, size);
}
if csum != info.csum {
bail!("wrong index checksum");
}
verify_index_chunks(datastore, Box::new(index), worker)
}
fn verify_dynamic_index(datastore: &DataStore, backup_dir: &BackupDir, info: &FileInfo, worker: &WorkerTask) -> Result<(), Error> {
let mut path = backup_dir.relative_path();
path.push(&info.filename);
let index = datastore.open_dynamic_reader(&path)?;
let (csum, size) = index.compute_csum();
if size != info.size {
bail!("wrong size ({} != {})", info.size, size);
}
if csum != info.csum {
bail!("wrong index checksum");
}
verify_index_chunks(datastore, Box::new(index), worker)
}
/// Verify a single backup snapshot
///
/// This checks all archives inside a backup snapshot.
/// Errors are logged to the worker log.
///
/// Returns
/// - Ok(true) if verify is successful
/// - Ok(false) if there were verification errors
/// - Err(_) if task was aborted
pub fn verify_backup_dir(datastore: &DataStore, backup_dir: &BackupDir, worker: &WorkerTask) -> Result<bool, Error> {
let manifest = match datastore.load_manifest(&backup_dir) {
Ok((manifest, _crypt_mode, _)) => manifest,
Err(err) => {
worker.log(format!("verify {}:{} - manifest load error: {}", datastore.name(), backup_dir, err));
return Ok(false);
}
};
worker.log(format!("verify {}:{}", datastore.name(), backup_dir));
let mut error_count = 0;
for info in manifest.files() {
let result = proxmox::try_block!({
worker.log(format!(" check {}", info.filename));
match archive_type(&info.filename)? {
ArchiveType::FixedIndex => verify_fixed_index(&datastore, &backup_dir, info, worker),
ArchiveType::DynamicIndex => verify_dynamic_index(&datastore, &backup_dir, info, worker),
ArchiveType::Blob => verify_blob(&datastore, &backup_dir, info),
}
});
worker.fail_on_abort()?;
if let Err(err) = result {
worker.log(format!("verify {}:{}/{} failed: {}", datastore.name(), backup_dir, info.filename, err));
error_count += 1;
}
}
Ok(error_count == 0)
}
/// Verify all backups inside a backup group
///
/// Errors are logged to the worker log.
///
/// Returns
/// - Ok(true) if verify is successful
/// - Ok(false) if there were verification errors
/// - Err(_) if task was aborted
pub fn verify_backup_group(datastore: &DataStore, group: &BackupGroup, worker: &WorkerTask) -> Result<bool, Error> {
let mut list = match group.list_backups(&datastore.base_path()) {
Ok(list) => list,
Err(err) => {
worker.log(format!("verify group {}:{} - unable to list backups: {}", datastore.name(), group, err));
return Ok(false);
}
};
worker.log(format!("verify group {}:{}", datastore.name(), group));
let mut error_count = 0;
BackupInfo::sort_list(&mut list, false); // newest first
for info in list {
if !verify_backup_dir(datastore, &info.backup_dir, worker)? {
error_count += 1;
}
}
Ok(error_count == 0)
}
/// Verify all backups inside a datastore
///
/// Errors are logged to the worker log.
///
/// Returns
/// - Ok(true) if verify is successful
/// - Ok(false) if there were verification errors
/// - Err(_) if task was aborted
pub fn verify_all_backups(datastore: &DataStore, worker: &WorkerTask) -> Result<bool, Error> {
let list = match BackupGroup::list_groups(&datastore.base_path()) {
Ok(list) => list,
Err(err) => {
worker.log(format!("verify datastore {} - unable to list backups: {}", datastore.name(), err));
return Ok(false);
}
};
worker.log(format!("verify datastore {}", datastore.name()));
let mut error_count = 0;
for group in list {
if !verify_backup_group(datastore, &group, worker)? {
error_count += 1;
}
}
Ok(error_count == 0)
}
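All three verify_* functions above share the same aggregation pattern: per-item failures are logged and counted rather than propagated, and Err is reserved for task abort. A generic sketch of that shape (not part of the source; assumes anyhow as above):

use anyhow::Error;

// Generic shape of verify_backup_dir/group/all: per-item failures
// are counted, not propagated; Err is reserved for task abort.
fn verify_each<T>(
    items: &[T],
    mut verify_one: impl FnMut(&T) -> Result<bool, Error>,
) -> Result<bool, Error> {
    let mut error_count = 0;
    for item in items {
        if !verify_one(item)? {
            error_count += 1;
        }
    }
    Ok(error_count == 0)
}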

File diff suppressed because it is too large

View File

@ -127,7 +127,7 @@ async fn garbage_collection_status(param: Value) -> Result<Value, Error> {
let mut result = client.get(&path, None).await?;
let mut data = result["data"].take();
-let schema = api2::admin::datastore::API_RETURN_SCHEMA_GARBAGE_COLLECTION_STATUS;
+let schema = &api2::admin::datastore::API_RETURN_SCHEMA_GARBAGE_COLLECTION_STATUS;
let options = default_table_format_options();
@ -193,7 +193,7 @@ async fn task_list(param: Value) -> Result<Value, Error> {
let mut result = client.get("api2/json/nodes/localhost/tasks", Some(args)).await?;
let mut data = result["data"].take();
-let schema = api2::node::tasks::API_RETURN_SCHEMA_LIST_TASKS;
+let schema = &api2::node::tasks::API_RETURN_SCHEMA_LIST_TASKS;
let options = default_table_format_options()
.column(ColumnConfig::new("starttime").right_align(false).renderer(tools::format::render_epoch))
@ -319,6 +319,40 @@ async fn pull_datastore(
Ok(Value::Null)
}
#[api(
input: {
properties: {
"store": {
schema: DATASTORE_SCHEMA,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// Verify backups
async fn verify(
store: String,
param: Value,
) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let mut client = connect()?;
let args = json!({});
let path = format!("api2/json/admin/datastore/{}/verify", store);
let result = client.post(&path, Some(args)).await?;
view_task_result(client, result, &output_format).await?;
Ok(Value::Null)
}
fn main() {
proxmox_backup::tools::setup_safe_path_env();
@ -342,8 +376,16 @@ fn main() {
.completion_cb("local-store", config::datastore::complete_datastore_name)
.completion_cb("remote", config::remote::complete_remote_name)
.completion_cb("remote-store", complete_remote_datastore_name)
)
.insert(
"verify",
CliCommand::new(&API_METHOD_VERIFY)
.arg_param(&["store"])
.completion_cb("store", config::datastore::complete_datastore_name)
);
let mut rpcenv = CliEnvironment::new();
rpcenv.set_user(Some(String::from("root@pam")));

View File

@ -0,0 +1,326 @@
use std::path::PathBuf;
use std::sync::Arc;
use anyhow::{Error};
use serde_json::Value;
use chrono::{TimeZone, Utc};
use serde::Serialize;
use proxmox::api::{ApiMethod, RpcEnvironment};
use proxmox::api::{
api,
cli::{
OUTPUT_FORMAT,
ColumnConfig,
get_output_format,
format_and_print_result_full,
default_table_format_options,
},
};
use proxmox_backup::backup::{
load_and_decrypt_key,
CryptConfig,
KeyDerivationConfig,
};
use proxmox_backup::client::*;
use crate::{
KEYFILE_SCHEMA, REPO_URL_SCHEMA,
extract_repository_from_value,
record_repository,
connect,
};
#[api()]
#[derive(Copy, Clone, Serialize)]
/// Speed test result
struct Speed {
/// The measured speed in bytes/second
#[serde(skip_serializing_if="Option::is_none")]
speed: Option<f64>,
/// Top result we want to compare with
top: f64,
}
#[api(
properties: {
"tls": {
type: Speed,
},
"sha256": {
type: Speed,
},
"compress": {
type: Speed,
},
"decompress": {
type: Speed,
},
"aes256_gcm": {
type: Speed,
},
},
)]
#[derive(Copy, Clone, Serialize)]
/// Benchmark Results
struct BenchmarkResult {
/// TLS upload speed
tls: Speed,
/// SHA256 checksum computation speed
sha256: Speed,
/// ZStd level 1 compression speed
compress: Speed,
/// ZStd level 1 decompression speed
decompress: Speed,
/// AES256 GCM encryption speed
aes256_gcm: Speed,
}
static BENCHMARK_RESULT_2020_TOP: BenchmarkResult = BenchmarkResult {
tls: Speed {
speed: None,
top: 1_000_000.0 * 590.0, // TLS to localhost, AMD Ryzen 7 2700X
},
sha256: Speed {
speed: None,
top: 1_000_000.0 * 2120.0, // AMD Ryzen 7 2700X
},
compress: Speed {
speed: None,
top: 1_000_000.0 * 2158.0, // AMD Ryzen 7 2700X
},
decompress: Speed {
speed: None,
top: 1_000_000.0 * 8062.0, // AMD Ryzen 7 2700X
},
aes256_gcm: Speed {
speed: None,
top: 1_000_000.0 * 3803.0, // AMD Ryzen 7 2700X
},
};
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
verbose: {
description: "Verbose output.",
type: bool,
optional: true,
},
keyfile: {
schema: KEYFILE_SCHEMA,
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// Run benchmark tests
pub async fn benchmark(
param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let repo = extract_repository_from_value(&param).ok();
let keyfile = param["keyfile"].as_str().map(PathBuf::from);
let verbose = param["verbose"].as_bool().unwrap_or(false);
let output_format = get_output_format(&param);
let crypt_config = match keyfile {
None => None,
Some(path) => {
let (key, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
};
let mut benchmark_result = BENCHMARK_RESULT_2020_TOP;
// do repo tests first, because this may prompt for a password
if let Some(repo) = repo {
test_upload_speed(&mut benchmark_result, repo, crypt_config.clone(), verbose).await?;
}
test_crypt_speed(&mut benchmark_result, verbose)?;
render_result(&output_format, &benchmark_result)?;
Ok(())
}
// print comparison table
fn render_result(
output_format: &str,
benchmark_result: &BenchmarkResult,
) -> Result<(), Error> {
let mut data = serde_json::to_value(benchmark_result)?;
let schema = &BenchmarkResult::API_SCHEMA;
let render_speed = |value: &Value, _record: &Value| -> Result<String, Error> {
match value["speed"].as_f64() {
None => Ok(String::from("not tested")),
Some(speed) => {
let top = value["top"].as_f64().unwrap();
Ok(format!("{:.2} MB/s ({:.0}%)", speed/1_000_000.0, (speed*100.0)/top))
}
}
};
let options = default_table_format_options()
.column(ColumnConfig::new("tls")
.header("TLS (maximal backup upload speed)")
.right_align(false).renderer(render_speed))
.column(ColumnConfig::new("sha256")
.header("SHA256 checksum comptation speed")
.right_align(false).renderer(render_speed))
.column(ColumnConfig::new("compress")
.header("ZStd level 1 compression speed")
.right_align(false).renderer(render_speed))
.column(ColumnConfig::new("decompress")
.header("ZStd level 1 decompression speed")
.right_align(false).renderer(render_speed))
.column(ColumnConfig::new("aes256_gcm")
.header("AES256 GCM encryption speed")
.right_align(false).renderer(render_speed));
format_and_print_result_full(&mut data, schema, output_format, &options);
Ok(())
}
async fn test_upload_speed(
benchmark_result: &mut BenchmarkResult,
repo: BackupRepository,
crypt_config: Option<Arc<CryptConfig>>,
verbose: bool,
) -> Result<(), Error> {
let backup_time = Utc.timestamp(Utc::now().timestamp(), 0);
let client = connect(repo.host(), repo.user())?;
record_repository(&repo);
if verbose { eprintln!("Connecting to backup server"); }
let client = BackupWriter::start(
client,
crypt_config.clone(),
repo.store(),
"host",
"benchmark",
backup_time,
false,
).await?;
if verbose { eprintln!("Start TLS speed test"); }
let speed = client.upload_speedtest(verbose).await?;
eprintln!("TLS speed: {:.2} MB/s", speed/1_000_000.0);
benchmark_result.tls.speed = Some(speed);
Ok(())
}
// test hash/crypt/compress speed
fn test_crypt_speed(
benchmark_result: &mut BenchmarkResult,
_verbose: bool,
) -> Result<(), Error> {
let pw = b"test";
let kdf = KeyDerivationConfig::Scrypt {
n: 65536,
r: 8,
p: 1,
salt: Vec::new(),
};
let testkey = kdf.derive_key(pw)?;
let crypt_config = CryptConfig::new(testkey)?;
let random_data = proxmox::sys::linux::random_data(1024*1024)?;
let start_time = std::time::Instant::now();
let mut bytes = 0;
loop {
openssl::sha::sha256(&random_data);
bytes += random_data.len();
if start_time.elapsed().as_micros() > 1_000_000 { break; }
}
let speed = (bytes as f64)/start_time.elapsed().as_secs_f64();
benchmark_result.sha256.speed = Some(speed);
eprintln!("SHA256 speed: {:.2} MB/s", speed/1_000_000_.0);
let start_time = std::time::Instant::now();
let mut bytes = 0;
loop {
let mut reader = &random_data[..];
zstd::stream::encode_all(&mut reader, 1)?;
bytes += random_data.len();
if start_time.elapsed().as_micros() > 3_000_000 { break; }
}
let speed = (bytes as f64)/start_time.elapsed().as_secs_f64();
benchmark_result.compress.speed = Some(speed);
eprintln!("Compression speed: {:.2} MB/s", speed/1_000_000_.0);
let start_time = std::time::Instant::now();
let compressed_data = {
let mut reader = &random_data[..];
zstd::stream::encode_all(&mut reader, 1)?
};
let mut bytes = 0;
loop {
let mut reader = &compressed_data[..];
let data = zstd::stream::decode_all(&mut reader)?;
bytes += data.len();
if start_time.elapsed().as_micros() > 1_000_000 { break; }
}
let speed = (bytes as f64)/start_time.elapsed().as_secs_f64();
benchmark_result.decompress.speed = Some(speed);
eprintln!("Decompress speed: {:.2} MB/s", speed/1_000_000_.0);
let start_time = std::time::Instant::now();
let mut bytes = 0;
loop {
let mut out = Vec::new();
crypt_config.encrypt_to(&random_data, &mut out)?;
bytes += random_data.len();
if start_time.elapsed().as_micros() > 1_000_000 { break; }
}
let speed = (bytes as f64)/start_time.elapsed().as_secs_f64();
benchmark_result.aes256_gcm.speed = Some(speed);
eprintln!("AES256/GCM speed: {:.2} MB/s", speed/1_000_000_.0);
Ok(())
}
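Each benchmark loop above follows the same measure-until-deadline pattern. A small generic helper in the same spirit (a sketch, not part of the source):

use std::time::Instant;

// Run `work` (returning bytes processed per call) until `budget_us`
// microseconds have elapsed, then report throughput in bytes/second.
fn measure_speed(budget_us: u128, mut work: impl FnMut() -> usize) -> f64 {
    let start_time = Instant::now();
    let mut bytes = 0usize;
    loop {
        bytes += work();
        if start_time.elapsed().as_micros() > budget_us {
            break;
        }
    }
    (bytes as f64) / start_time.elapsed().as_secs_f64()
}

fn main() {
    let data = vec![0u8; 1024 * 1024];
    let speed = measure_speed(1_000_000, || {
        // stand-in workload: fold over 1 MiB of data
        data.iter().fold(0u64, |acc, b| acc.wrapping_add(*b as u64));
        data.len()
    });
    eprintln!("speed: {:.2} MB/s", speed / 1_000_000.0);
}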

View File

@ -0,0 +1,261 @@
use std::os::unix::fs::OpenOptionsExt;
use std::io::{Seek, SeekFrom};
use std::sync::Arc;
use anyhow::{bail, format_err, Error};
use serde_json::Value;
use proxmox::api::{api, cli::*};
use proxmox_backup::tools;
use proxmox_backup::client::*;
use crate::{
REPO_URL_SCHEMA,
KEYFD_SCHEMA,
extract_repository_from_value,
record_repository,
keyfile_parameters,
key::get_encryption_key_password,
decrypt_key,
api_datastore_latest_snapshot,
complete_repository,
complete_backup_snapshot,
complete_group_or_snapshot,
complete_pxar_archive_name,
connect,
BackupDir,
BackupGroup,
BufferedDynamicReader,
BufferedDynamicReadAt,
CatalogReader,
CATALOG_NAME,
CryptConfig,
DynamicIndexReader,
IndexFile,
Shell,
};
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
"keyfile": {
optional: true,
type: String,
description: "Path to encryption key.",
},
"keyfd": {
schema: KEYFD_SCHEMA,
optional: true,
},
}
}
)]
/// Dump catalog.
async fn dump_catalog(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let (keydata, _) = keyfile_parameters(&param)?;
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _created) = decrypt_key(&key, &get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
};
let client = connect(repo.host(), repo.user())?;
let client = BackupReader::start(
client,
crypt_config.clone(),
repo.store(),
&snapshot.group().backup_type(),
&snapshot.group().backup_id(),
snapshot.backup_time(),
true,
).await?;
let (manifest, _) = client.download_manifest().await?;
let index = client.download_dynamic_index(&manifest, CATALOG_NAME).await?;
let most_used = index.find_most_used_chunks(8);
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, most_used);
let mut reader = BufferedDynamicReader::new(index, chunk_reader);
let mut catalogfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
.custom_flags(libc::O_TMPFILE)
.open("/tmp")?;
std::io::copy(&mut reader, &mut catalogfile)
.map_err(|err| format_err!("unable to download catalog - {}", err))?;
catalogfile.seek(SeekFrom::Start(0))?;
let mut catalog_reader = CatalogReader::new(catalogfile);
catalog_reader.dump()?;
record_repository(&repo);
Ok(Value::Null)
}
#[api(
input: {
properties: {
"snapshot": {
type: String,
description: "Group/Snapshot path.",
},
"archive-name": {
type: String,
description: "Backup archive name.",
},
"repository": {
optional: true,
schema: REPO_URL_SCHEMA,
},
"keyfile": {
optional: true,
type: String,
description: "Path to encryption key.",
},
"keyfd": {
schema: KEYFD_SCHEMA,
optional: true,
},
},
},
)]
/// Shell to interactively inspect and restore snapshots.
async fn catalog_shell(param: Value) -> Result<(), Error> {
let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.user())?;
let path = tools::required_string_param(&param, "snapshot")?;
let archive_name = tools::required_string_param(&param, "archive-name")?;
let (backup_type, backup_id, backup_time) = if path.matches('/').count() == 1 {
let group: BackupGroup = path.parse()?;
api_datastore_latest_snapshot(&client, repo.store(), group).await?
} else {
let snapshot: BackupDir = path.parse()?;
(snapshot.group().backup_type().to_owned(), snapshot.group().backup_id().to_owned(), snapshot.backup_time())
};
let (keydata, _) = keyfile_parameters(&param)?;
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _created) = decrypt_key(&key, &get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
};
let server_archive_name = if archive_name.ends_with(".pxar") {
format!("{}.didx", archive_name)
} else {
bail!("Can only mount pxar archives.");
};
let client = BackupReader::start(
client,
crypt_config.clone(),
repo.store(),
&backup_type,
&backup_id,
backup_time,
true,
).await?;
let mut tmpfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
.custom_flags(libc::O_TMPFILE)
.open("/tmp")?;
let (manifest, _) = client.download_manifest().await?;
let index = client.download_dynamic_index(&manifest, &server_archive_name).await?;
let most_used = index.find_most_used_chunks(8);
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config.clone(), most_used);
let reader = BufferedDynamicReader::new(index, chunk_reader);
let archive_size = reader.archive_size();
let reader: proxmox_backup::pxar::fuse::Reader =
Arc::new(BufferedDynamicReadAt::new(reader));
let decoder = proxmox_backup::pxar::fuse::Accessor::new(reader, archive_size).await?;
client.download(CATALOG_NAME, &mut tmpfile).await?;
let index = DynamicIndexReader::new(tmpfile)
.map_err(|err| format_err!("unable to read catalog index - {}", err))?;
// Note: do not use values stored in index (not trusted) - instead, compute them again
let (csum, size) = index.compute_csum();
manifest.verify_file(CATALOG_NAME, &csum, size)?;
let most_used = index.find_most_used_chunks(8);
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, most_used);
let mut reader = BufferedDynamicReader::new(index, chunk_reader);
let mut catalogfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
.custom_flags(libc::O_TMPFILE)
.open("/tmp")?;
std::io::copy(&mut reader, &mut catalogfile)
.map_err(|err| format_err!("unable to download catalog - {}", err))?;
catalogfile.seek(SeekFrom::Start(0))?;
let catalog_reader = CatalogReader::new(catalogfile);
let state = Shell::new(
catalog_reader,
&server_archive_name,
decoder,
).await?;
println!("Starting interactive shell");
state.shell().await?;
record_repository(&repo);
Ok(())
}
pub fn catalog_mgmt_cli() -> CliCommandMap {
let catalog_shell_cmd_def = CliCommand::new(&API_METHOD_CATALOG_SHELL)
.arg_param(&["snapshot", "archive-name"])
.completion_cb("repository", complete_repository)
.completion_cb("archive-name", complete_pxar_archive_name)
.completion_cb("snapshot", complete_group_or_snapshot);
let catalog_dump_cmd_def = CliCommand::new(&API_METHOD_DUMP_CATALOG)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot);
CliCommandMap::new()
.insert("dump", catalog_dump_cmd_def)
.insert("shell", catalog_shell_cmd_def)
}

View File

@ -0,0 +1,284 @@
use std::path::PathBuf;
use anyhow::{bail, format_err, Error};
use chrono::{Local, TimeZone};
use serde::{Deserialize, Serialize};
use proxmox::api::api;
use proxmox::api::cli::{CliCommand, CliCommandMap};
use proxmox::sys::linux::tty;
use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
use proxmox_backup::backup::{
encrypt_key_with_passphrase, load_and_decrypt_key, store_key_config, KeyConfig,
};
use proxmox_backup::tools;
pub const DEFAULT_ENCRYPTION_KEY_FILE_NAME: &str = "encryption-key.json";
pub const MASTER_PUBKEY_FILE_NAME: &str = "master-public.pem";
pub fn find_master_pubkey() -> Result<Option<PathBuf>, Error> {
super::find_xdg_file(MASTER_PUBKEY_FILE_NAME, "main public key file")
}
pub fn place_master_pubkey() -> Result<PathBuf, Error> {
super::place_xdg_file(MASTER_PUBKEY_FILE_NAME, "main public key file")
}
pub fn find_default_encryption_key() -> Result<Option<PathBuf>, Error> {
super::find_xdg_file(DEFAULT_ENCRYPTION_KEY_FILE_NAME, "default encryption key file")
}
pub fn place_default_encryption_key() -> Result<PathBuf, Error> {
super::place_xdg_file(DEFAULT_ENCRYPTION_KEY_FILE_NAME, "default encryption key file")
}
pub fn read_optional_default_encryption_key() -> Result<Option<Vec<u8>>, Error> {
find_default_encryption_key()?
.map(file_get_contents)
.transpose()
}
pub fn get_encryption_key_password() -> Result<Vec<u8>, Error> {
// fixme: implement other input methods
use std::env::VarError::*;
match std::env::var("PBS_ENCRYPTION_PASSWORD") {
Ok(p) => return Ok(p.as_bytes().to_vec()),
Err(NotUnicode(_)) => bail!("PBS_ENCRYPTION_PASSWORD contains bad characters"),
Err(NotPresent) => {
// Try another method
}
}
// If we're on a TTY, query the user for a password
if tty::stdin_isatty() {
return Ok(tty::read_password("Encryption Key Password: ")?);
}
bail!("no password input mechanism available");
}
#[api(
default: "scrypt",
)]
#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
/// Key derivation function for password protected encryption keys.
pub enum Kdf {
/// Do not encrypt the key.
None,
/// Encrypt the key with a password using SCrypt.
Scrypt,
}
impl Default for Kdf {
#[inline]
fn default() -> Self {
Kdf::Scrypt
}
}
#[api(
input: {
properties: {
kdf: {
type: Kdf,
optional: true,
},
path: {
description:
"Output file. Without this the key will become the new default encryption key.",
optional: true,
}
},
},
)]
/// Create a new encryption key.
fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
let path = match path {
Some(path) => PathBuf::from(path),
None => {
let path = place_default_encryption_key()?;
println!("creating default key at: {:?}", path);
path
}
};
let kdf = kdf.unwrap_or_default();
let key = proxmox::sys::linux::random_data(32)?;
match kdf {
Kdf::None => {
let created = Local.timestamp(Local::now().timestamp(), 0);
store_key_config(
&path,
false,
KeyConfig {
kdf: None,
created,
modified: created,
data: key,
},
)?;
}
Kdf::Scrypt => {
// always read passphrase from tty
if !tty::stdin_isatty() {
bail!("unable to read passphrase - no tty");
}
let password = tty::read_and_verify_password("Encryption Key Password: ")?;
let key_config = encrypt_key_with_passphrase(&key, &password)?;
store_key_config(&path, false, key_config)?;
}
}
Ok(())
}
#[api(
input: {
properties: {
kdf: {
type: Kdf,
optional: true,
},
path: {
description: "Key file. Without this the default key's password will be changed.",
optional: true,
}
},
},
)]
/// Change the encryption key's password.
fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
let path = match path {
Some(path) => PathBuf::from(path),
None => {
let path = find_default_encryption_key()?
.ok_or_else(|| {
format_err!("no encryption file provided and no default file found")
})?;
println!("updating default key at: {:?}", path);
path
}
};
let kdf = kdf.unwrap_or_default();
if !tty::stdin_isatty() {
bail!("unable to change passphrase - no tty");
}
let (key, created) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
match kdf {
Kdf::None => {
let modified = Local.timestamp(Local::now().timestamp(), 0);
store_key_config(
&path,
true,
KeyConfig {
kdf: None,
created, // keep original value
modified,
data: key.to_vec(),
},
)?;
}
Kdf::Scrypt => {
let password = tty::read_and_verify_password("New Password: ")?;
let mut new_key_config = encrypt_key_with_passphrase(&key, &password)?;
new_key_config.created = created; // keep original value
store_key_config(&path, true, new_key_config)?;
}
}
Ok(())
}
#[api(
input: {
properties: {
path: {
description: "Path to the PEM formatted RSA public key.",
},
},
},
)]
/// Import an RSA public key used to put an encrypted version of the symmetric backup encryption
/// key onto the backup server along with each backup.
fn import_master_pubkey(path: String) -> Result<(), Error> {
let pem_data = file_get_contents(&path)?;
if let Err(err) = openssl::pkey::PKey::public_key_from_pem(&pem_data) {
bail!("Unable to decode PEM data - {}", err);
}
let target_path = place_master_pubkey()?;
replace_file(&target_path, &pem_data, CreateOptions::new())?;
println!("Imported public master key to {:?}", target_path);
Ok(())
}
#[api]
/// Create an RSA public/private key pair used to put an encrypted version of the symmetric backup
/// encryption key onto the backup server along with each backup.
fn create_master_key() -> Result<(), Error> {
// we need a TTY to query the new password
if !tty::stdin_isatty() {
bail!("unable to create master key - no tty");
}
let rsa = openssl::rsa::Rsa::generate(4096)?;
let pkey = openssl::pkey::PKey::from_rsa(rsa)?;
let password = String::from_utf8(tty::read_and_verify_password("Master Key Password: ")?)?;
let pub_key: Vec<u8> = pkey.public_key_to_pem()?;
let filename_pub = "master-public.pem";
println!("Writing public master key to {}", filename_pub);
replace_file(filename_pub, pub_key.as_slice(), CreateOptions::new())?;
let cipher = openssl::symm::Cipher::aes_256_cbc();
let priv_key: Vec<u8> = pkey.private_key_to_pem_pkcs8_passphrase(cipher, password.as_bytes())?;
let filename_priv = "master-private.pem";
println!("Writing private master key to {}", filename_priv);
replace_file(filename_priv, priv_key.as_slice(), CreateOptions::new())?;
Ok(())
}
pub fn cli() -> CliCommandMap {
let key_create_cmd_def = CliCommand::new(&API_METHOD_CREATE)
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
let key_change_passphrase_cmd_def = CliCommand::new(&API_METHOD_CHANGE_PASSPHRASE)
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
let key_create_master_key_cmd_def = CliCommand::new(&API_METHOD_CREATE_MASTER_KEY);
let key_import_master_pubkey_cmd_def = CliCommand::new(&API_METHOD_IMPORT_MASTER_PUBKEY)
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
CliCommandMap::new()
.insert("create", key_create_cmd_def)
.insert("create-master-key", key_create_master_key_cmd_def)
.insert("import-master-pubkey", key_import_master_pubkey_cmd_def)
.insert("change-passphrase", key_change_passphrase_cmd_def)
}

View File

@ -0,0 +1,39 @@
use anyhow::{Context, Error};
mod benchmark;
pub use benchmark::*;
mod mount;
pub use mount::*;
mod task;
pub use task::*;
mod catalog;
pub use catalog::*;
pub mod key;
pub fn base_directories() -> Result<xdg::BaseDirectories, Error> {
xdg::BaseDirectories::with_prefix("proxmox-backup").map_err(Error::from)
}
/// Convenience helper for better error messages:
pub fn find_xdg_file(
file_name: impl AsRef<std::path::Path>,
description: &'static str,
) -> Result<Option<std::path::PathBuf>, Error> {
let file_name = file_name.as_ref();
base_directories()
.map(|base| base.find_config_file(file_name))
.with_context(|| format!("error searching for {}", description))
}
pub fn place_xdg_file(
file_name: impl AsRef<std::path::Path>,
description: &'static str,
) -> Result<std::path::PathBuf, Error> {
let file_name = file_name.as_ref();
base_directories()
.and_then(|base| {
base.place_config_file(file_name).map_err(Error::from)
})
.with_context(|| format!("failed to place {} in xdg home", description))
}
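Hypothetical usage of the two helpers above; the file name and description are illustrative only:

// Look up an existing config file, or compute the path to create one at.
fn example() -> Result<(), anyhow::Error> {
    if let Some(path) = find_xdg_file("encryption-key.json", "default encryption key file")? {
        println!("found key at {:?}", path);
    } else {
        let path = place_xdg_file("encryption-key.json", "default encryption key file")?;
        println!("would create key at {:?}", path);
    }
    Ok(())
}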

View File

@ -0,0 +1,195 @@
use std::path::PathBuf;
use std::sync::Arc;
use std::os::unix::io::RawFd;
use std::path::Path;
use std::ffi::OsStr;
use anyhow::{bail, format_err, Error};
use serde_json::Value;
use tokio::signal::unix::{signal, SignalKind};
use nix::unistd::{fork, ForkResult, pipe};
use futures::select;
use futures::future::FutureExt;
use proxmox::{sortable, identity};
use proxmox::api::{ApiHandler, ApiMethod, RpcEnvironment, schema::*, cli::*};
use proxmox_backup::tools;
use proxmox_backup::backup::{
load_and_decrypt_key,
CryptConfig,
IndexFile,
BackupDir,
BackupGroup,
BufferedDynamicReader,
};
use proxmox_backup::client::*;
use crate::{
REPO_URL_SCHEMA,
extract_repository_from_value,
complete_pxar_archive_name,
complete_group_or_snapshot,
complete_repository,
record_repository,
connect,
api_datastore_latest_snapshot,
BufferedDynamicReadAt,
};
#[sortable]
const API_METHOD_MOUNT: ApiMethod = ApiMethod::new(
&ApiHandler::Sync(&mount),
&ObjectSchema::new(
"Mount pxar archive.",
&sorted!([
("snapshot", false, &StringSchema::new("Group/Snapshot path.").schema()),
("archive-name", false, &StringSchema::new("Backup archive name.").schema()),
("target", false, &StringSchema::new("Target directory path.").schema()),
("repository", true, &REPO_URL_SCHEMA),
("keyfile", true, &StringSchema::new("Path to encryption key.").schema()),
("verbose", true, &BooleanSchema::new("Verbose output.").default(false).schema()),
]),
)
);
pub fn mount_cmd_def() -> CliCommand {
CliCommand::new(&API_METHOD_MOUNT)
.arg_param(&["snapshot", "archive-name", "target"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_group_or_snapshot)
.completion_cb("archive-name", complete_pxar_archive_name)
.completion_cb("target", tools::complete_file_name)
}
fn mount(
param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let verbose = param["verbose"].as_bool().unwrap_or(false);
if verbose {
// This will stay in foreground with debug output enabled as None is
// passed for the RawFd.
return proxmox_backup::tools::runtime::main(mount_do(param, None));
}
// Process should be daemonized.
// Make sure to fork before the async runtime is instantiated to avoid trouble.
let pipe = pipe()?;
match fork() {
Ok(ForkResult::Parent { .. }) => {
nix::unistd::close(pipe.1).unwrap();
// Blocks the parent process until we are ready to go in the child
let _res = nix::unistd::read(pipe.0, &mut [0]).unwrap();
Ok(Value::Null)
}
Ok(ForkResult::Child) => {
nix::unistd::close(pipe.0).unwrap();
nix::unistd::setsid().unwrap();
proxmox_backup::tools::runtime::main(mount_do(param, Some(pipe.1)))
}
Err(_) => bail!("failed to daemonize process"),
}
}
async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let archive_name = tools::required_string_param(&param, "archive-name")?;
let target = tools::required_string_param(&param, "target")?;
let client = connect(repo.host(), repo.user())?;
record_repository(&repo);
let path = tools::required_string_param(&param, "snapshot")?;
let (backup_type, backup_id, backup_time) = if path.matches('/').count() == 1 {
let group: BackupGroup = path.parse()?;
api_datastore_latest_snapshot(&client, repo.store(), group).await?
} else {
let snapshot: BackupDir = path.parse()?;
(snapshot.group().backup_type().to_owned(), snapshot.group().backup_id().to_owned(), snapshot.backup_time())
};
let keyfile = param["keyfile"].as_str().map(PathBuf::from);
let crypt_config = match keyfile {
None => None,
Some(path) => {
let (key, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
Some(Arc::new(CryptConfig::new(key)?))
}
};
let server_archive_name = if archive_name.ends_with(".pxar") {
format!("{}.didx", archive_name)
} else {
bail!("Can only mount pxar archives.");
};
let client = BackupReader::start(
client,
crypt_config.clone(),
repo.store(),
&backup_type,
&backup_id,
backup_time,
true,
).await?;
let (manifest, _) = client.download_manifest().await?;
if server_archive_name.ends_with(".didx") {
let index = client.download_dynamic_index(&manifest, &server_archive_name).await?;
let most_used = index.find_most_used_chunks(8);
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, most_used);
let reader = BufferedDynamicReader::new(index, chunk_reader);
let archive_size = reader.archive_size();
let reader: proxmox_backup::pxar::fuse::Reader =
Arc::new(BufferedDynamicReadAt::new(reader));
let decoder = proxmox_backup::pxar::fuse::Accessor::new(reader, archive_size).await?;
let options = OsStr::new("ro,default_permissions");
let session = proxmox_backup::pxar::fuse::Session::mount(
decoder,
&options,
false,
Path::new(target),
)
.map_err(|err| format_err!("pxar mount failed: {}", err))?;
if let Some(pipe) = pipe {
nix::unistd::chdir(Path::new("/")).unwrap();
// Finish creation of daemon by redirecting filedescriptors.
let nullfd = nix::fcntl::open(
"/dev/null",
nix::fcntl::OFlag::O_RDWR,
nix::sys::stat::Mode::empty(),
).unwrap();
nix::unistd::dup2(nullfd, 0).unwrap();
nix::unistd::dup2(nullfd, 1).unwrap();
nix::unistd::dup2(nullfd, 2).unwrap();
if nullfd > 2 {
nix::unistd::close(nullfd).unwrap();
}
// Signal the parent process that we are done with the setup and it can
// terminate.
nix::unistd::write(pipe, &[0u8])?;
nix::unistd::close(pipe).unwrap();
}
let mut interrupt = signal(SignalKind::interrupt())?;
select! {
res = session.fuse() => res?,
_ = interrupt.recv().fuse() => {
// exit on interrupted
}
}
} else {
bail!("unknown archive file extension (expected .pxar)");
}
Ok(Value::Null)
}
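
The parent/child handshake above is a reusable pattern: the parent blocks on the read end of a pipe and exits only once the child, after setsid() and the FUSE mount, writes a single byte. A condensed sketch of that handshake, assuming the nix version used in this tree (where fork() is not an unsafe fn):

    use nix::unistd::{close, fork, pipe, read, setsid, write, ForkResult};

    fn daemonize(setup: impl FnOnce() -> Result<(), anyhow::Error>) -> Result<(), anyhow::Error> {
        let (rx, tx) = pipe()?;
        match fork() {
            Ok(ForkResult::Parent { .. }) => {
                close(tx)?;
                let _ = read(rx, &mut [0u8])?; // blocks until the child signals readiness
                Ok(())
            }
            Ok(ForkResult::Child) => {
                close(rx)?;
                setsid()?;          // detach from the controlling terminal
                setup()?;           // e.g. mount the FUSE session, redirect fds to /dev/null
                write(tx, &[0u8])?; // release the waiting parent
                close(tx)?;
                Ok(())
            }
            Err(err) => anyhow::bail!("failed to daemonize process: {}", err),
        }
    }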

View File

@ -0,0 +1,148 @@
use anyhow::{Error};
use serde_json::{json, Value};
use proxmox::api::{api, cli::*};
use proxmox_backup::tools;
use proxmox_backup::client::*;
use proxmox_backup::api2::types::UPID_SCHEMA;
use crate::{
REPO_URL_SCHEMA,
extract_repository_from_value,
complete_repository,
connect,
};
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
limit: {
description: "The maximal number of tasks to list.",
type: Integer,
optional: true,
minimum: 1,
maximum: 1000,
default: 50,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
all: {
type: Boolean,
description: "Also list stopped tasks.",
optional: true,
},
}
}
)]
/// List running server tasks for this repo user
async fn task_list(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.user())?;
let limit = param["limit"].as_u64().unwrap_or(50) as usize;
let running = !param["all"].as_bool().unwrap_or(false);
let args = json!({
"running": running,
"start": 0,
"limit": limit,
"userfilter": repo.user(),
"store": repo.store(),
});
let mut result = client.get("api2/json/nodes/localhost/tasks", Some(args)).await?;
let mut data = result["data"].take();
let schema = &proxmox_backup::api2::node::tasks::API_RETURN_SCHEMA_LIST_TASKS;
let options = default_table_format_options()
.column(ColumnConfig::new("starttime").right_align(false).renderer(tools::format::render_epoch))
.column(ColumnConfig::new("endtime").right_align(false).renderer(tools::format::render_epoch))
.column(ColumnConfig::new("upid"))
.column(ColumnConfig::new("status").renderer(tools::format::render_task_status));
format_and_print_result_full(&mut data, schema, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
upid: {
schema: UPID_SCHEMA,
},
}
}
)]
/// Display the task log.
async fn task_log(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let upid = tools::required_string_param(&param, "upid")?;
let client = connect(repo.host(), repo.user())?;
display_task_log(client, upid, true).await?;
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
upid: {
schema: UPID_SCHEMA,
},
}
}
)]
/// Try to stop a specific task.
async fn task_stop(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let upid_str = tools::required_string_param(&param, "upid")?;
let mut client = connect(repo.host(), repo.user())?;
let path = format!("api2/json/nodes/localhost/tasks/{}", upid_str);
let _ = client.delete(&path, None).await?;
Ok(Value::Null)
}
pub fn task_mgmt_cli() -> CliCommandMap {
let task_list_cmd_def = CliCommand::new(&API_METHOD_TASK_LIST)
.completion_cb("repository", complete_repository);
let task_log_cmd_def = CliCommand::new(&API_METHOD_TASK_LOG)
.arg_param(&["upid"]);
let task_stop_cmd_def = CliCommand::new(&API_METHOD_TASK_STOP)
.arg_param(&["upid"]);
CliCommandMap::new()
.insert("log", task_log_cmd_def)
.insert("list", task_list_cmd_def)
.insert("stop", task_stop_cmd_def)
}

View File

@ -1,32 +1,18 @@
use std::path::PathBuf;
use anyhow::{bail, Error};
use proxmox::api::{api, cli::*};
use proxmox_backup::config;
use proxmox_backup::configdir;
use proxmox_backup::auth_helpers::*;
fn x509name_to_string(name: &openssl::x509::X509NameRef) -> Result<String, Error> {
let mut parts = Vec::new();
for entry in name.entries() {
parts.push(format!("{} = {}", entry.object().nid().short_name()?, entry.data().as_utf8()?));
}
Ok(parts.join(", "))
}
use proxmox_backup::tools::cert::CertInfo;
#[api]
/// Display node certificate information.
fn cert_info() -> Result<(), Error> {
let cert_path = PathBuf::from(configdir!("/proxy.pem"));
let cert = CertInfo::new()?;
let cert_pem = proxmox::tools::fs::file_get_contents(&cert_path)?;
let cert = openssl::x509::X509::from_pem(&cert_pem)?;
println!("Subject: {}", x509name_to_string(cert.subject_name())?);
println!("Subject: {}", cert.subject_name()?);
if let Some(san) = cert.subject_alt_names() {
for name in san.iter() {
@ -42,17 +28,12 @@ fn cert_info() -> Result<(), Error> {
}
}
println!("Issuer: {}", x509name_to_string(cert.issuer_name())?);
println!("Issuer: {}", cert.issuer_name()?);
println!("Validity:");
println!(" Not Before: {}", cert.not_before());
println!(" Not After : {}", cert.not_after());
let fp = cert.digest(openssl::hash::MessageDigest::sha256())?;
let fp_string = proxmox::tools::digest_to_hex(&fp);
let fp_string = fp_string.as_bytes().chunks(2).map(|v| std::str::from_utf8(v).unwrap())
.collect::<Vec<&str>>().join(":");
println!("Fingerprint (sha256): {}", fp_string);
println!("Fingerprint (sha256): {}", cert.fingerprint()?);
let pubkey = cert.public_key()?;
println!("Public key type: {}", openssl::nid::Nid::from_raw(pubkey.id().as_raw()).long_name()?);

View File

@ -123,18 +123,19 @@ impl BackupReader {
}
/// Download backup manifest (index.json)
pub async fn download_manifest(&self) -> Result<BackupManifest, Error> {
use std::convert::TryFrom;
///
/// The manifest signature is verified if we have a crypt_config.
pub async fn download_manifest(&self) -> Result<(BackupManifest, Vec<u8>), Error> {
let mut raw_data = Vec::with_capacity(64 * 1024);
self.download(MANIFEST_BLOB_NAME, &mut raw_data).await?;
let blob = DataBlob::from_raw(raw_data)?;
blob.verify_crc()?;
let data = blob.decode(self.crypt_config.as_ref().map(Arc::as_ref))?;
let json: Value = serde_json::from_slice(&data[..])?;
let data = blob.decode(None)?;
BackupManifest::try_from(json)
let manifest = BackupManifest::from_data(&data[..], self.crypt_config.as_ref().map(Arc::as_ref))?;
Ok((manifest, data))
}
/// Download a .blob file
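
Callers now receive the parsed manifest together with the raw decoded bytes, and the signature check happens inside the call when a crypt config is present. A minimal sketch against the new return type (the archive name and checksum are placeholders; real csum/size are recomputed from downloaded data, as download_previous_fixed_index does in the writer below):

    async fn check_archive(reader: std::sync::Arc<BackupReader>) -> Result<(), anyhow::Error> {
        let (manifest, _raw_json) = reader.download_manifest().await?;
        let (csum, size): ([u8; 32], u64) = ([0u8; 32], 0); // placeholders for illustration
        manifest.verify_file("root.pxar.didx", &csum, size)?;
        Ok(())
    }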

View File

@ -1,8 +1,9 @@
use std::collections::HashSet;
use std::os::unix::fs::OpenOptionsExt;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use anyhow::{format_err, Error};
use anyhow::{bail, format_err, Error};
use chrono::{DateTime, Utc};
use futures::*;
use futures::stream::Stream;
@ -22,6 +23,7 @@ pub struct BackupWriter {
h2: H2Client,
abort: AbortHandle,
verbose: bool,
crypt_config: Option<Arc<CryptConfig>>,
}
impl Drop for BackupWriter {
@ -38,12 +40,13 @@ pub struct BackupStats {
impl BackupWriter {
fn new(h2: H2Client, abort: AbortHandle, verbose: bool) -> Arc<Self> {
Arc::new(Self { h2, abort, verbose })
fn new(h2: H2Client, abort: AbortHandle, crypt_config: Option<Arc<CryptConfig>>, verbose: bool) -> Arc<Self> {
Arc::new(Self { h2, abort, crypt_config, verbose })
}
pub async fn start(
client: HttpClient,
crypt_config: Option<Arc<CryptConfig>>,
datastore: &str,
backup_type: &str,
backup_id: &str,
@ -64,7 +67,7 @@ impl BackupWriter {
let (h2, abort) = client.start_h2_connection(req, String::from(PROXMOX_BACKUP_PROTOCOL_ID_V1!())).await?;
Ok(BackupWriter::new(h2, abort, debug))
Ok(BackupWriter::new(h2, abort, crypt_config, debug))
}
pub async fn get(
@ -159,19 +162,13 @@ impl BackupWriter {
&self,
data: Vec<u8>,
file_name: &str,
crypt_config: Option<Arc<CryptConfig>>,
compress: bool,
sign_only: bool,
encrypt: bool,
) -> Result<BackupStats, Error> {
let blob = if let Some(ref crypt_config) = crypt_config {
if sign_only {
DataBlob::create_signed(&data, crypt_config, compress)?
} else {
DataBlob::encode(&data, Some(crypt_config), compress)?
}
} else {
DataBlob::encode(&data, None, compress)?
let blob = match (encrypt, &self.crypt_config) {
(false, _) => DataBlob::encode(&data, None, compress)?,
(true, None) => bail!("requested encryption without a crypt config"),
(true, Some(crypt_config)) => DataBlob::encode(&data, Some(crypt_config), compress)?,
};
let raw_data = blob.into_inner();
@ -187,8 +184,8 @@ impl BackupWriter {
&self,
src_path: P,
file_name: &str,
crypt_config: Option<Arc<CryptConfig>>,
compress: bool,
encrypt: bool,
) -> Result<BackupStats, Error> {
let src_path = src_path.as_ref();
@ -203,25 +200,18 @@ impl BackupWriter {
.await
.map_err(|err| format_err!("unable to read file {:?} - {}", src_path, err))?;
let blob = DataBlob::encode(&contents, crypt_config.as_ref().map(AsRef::as_ref), compress)?;
let raw_data = blob.into_inner();
let size = raw_data.len() as u64;
let csum = openssl::sha::sha256(&raw_data);
let param = json!({
"encoded-size": size,
"file-name": file_name,
});
self.h2.upload("POST", "blob", Some(param), "application/octet-stream", raw_data).await?;
Ok(BackupStats { size, csum })
self.upload_blob_from_data(contents, file_name, compress, encrypt).await
}
pub async fn upload_stream(
&self,
previous_manifest: Option<Arc<BackupManifest>>,
archive_name: &str,
stream: impl Stream<Item = Result<bytes::BytesMut, Error>>,
prefix: &str,
fixed_size: Option<u64>,
crypt_config: Option<Arc<CryptConfig>>,
compress: bool,
encrypt: bool,
) -> Result<BackupStats, Error> {
let known_chunks = Arc::new(Mutex::new(HashSet::new()));
@ -230,10 +220,25 @@ impl BackupWriter {
param["size"] = size.into();
}
if encrypt && self.crypt_config.is_none() {
bail!("requested encryption without a crypt config");
}
let index_path = format!("{}_index", prefix);
let close_path = format!("{}_close", prefix);
self.download_chunk_list(&index_path, archive_name, known_chunks.clone()).await?;
if let Some(manifest) = previous_manifest {
// try, but ignore errors
match archive_type(archive_name) {
Ok(ArchiveType::FixedIndex) => {
let _ = self.download_previous_fixed_index(archive_name, &manifest, known_chunks.clone()).await;
}
Ok(ArchiveType::DynamicIndex) => {
let _ = self.download_previous_dynamic_index(archive_name, &manifest, known_chunks.clone()).await;
}
_ => { /* do nothing */ }
}
}
let wid = self.h2.post(&index_path, Some(param)).await?.as_u64().unwrap();
@ -244,7 +249,8 @@ impl BackupWriter {
stream,
&prefix,
known_chunks.clone(),
crypt_config,
if encrypt { self.crypt_config.clone() } else { None },
compress,
self.verbose,
)
.await?;
@ -268,7 +274,7 @@ impl BackupWriter {
})
}
fn response_queue() -> (
fn response_queue(verbose: bool) -> (
mpsc::Sender<h2::client::ResponseFuture>,
oneshot::Receiver<Result<(), Error>>
) {
@ -292,11 +298,11 @@ impl BackupWriter {
tokio::spawn(
verify_queue_rx
.map(Ok::<_, Error>)
.try_for_each(|response: h2::client::ResponseFuture| {
.try_for_each(move |response: h2::client::ResponseFuture| {
response
.map_err(Error::from)
.and_then(H2Client::h2api_response)
.map_ok(|result| println!("RESPONSE: {:?}", result))
.map_ok(move |result| if verbose { println!("RESPONSE: {:?}", result) })
.map_err(|err| format_err!("pipelined request failed: {}", err))
})
.map(|result| {
@ -374,41 +380,91 @@ impl BackupWriter {
(verify_queue_tx, verify_result_rx)
}
pub async fn download_chunk_list(
pub async fn download_previous_fixed_index(
&self,
path: &str,
archive_name: &str,
manifest: &BackupManifest,
known_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
) -> Result<(), Error> {
) -> Result<FixedIndexReader, Error> {
let mut tmpfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
.custom_flags(libc::O_TMPFILE)
.open("/tmp")?;
let param = json!({ "archive-name": archive_name });
let request = H2Client::request_builder("localhost", "GET", path, Some(param), None).unwrap();
self.h2.download("previous", Some(param), &mut tmpfile).await?;
let h2request = self.h2.send_request(request, None).await?;
let resp = h2request.await?;
let index = FixedIndexReader::new(tmpfile)
.map_err(|err| format_err!("unable to read fixed index '{}' - {}", archive_name, err))?;
// Note: do not use values stored in index (not trusted) - instead, compute them again
let (csum, size) = index.compute_csum();
manifest.verify_file(archive_name, &csum, size)?;
let status = resp.status();
if !status.is_success() {
H2Client::h2api_response(resp).await?; // raise error
unreachable!();
}
let mut body = resp.into_body();
let mut flow_control = body.flow_control().clone();
let mut stream = DigestListDecoder::new(body.map_err(Error::from));
while let Some(chunk) = stream.try_next().await? {
let _ = flow_control.release_capacity(chunk.len());
known_chunks.lock().unwrap().insert(chunk);
// add index chunks to known chunks
let mut known_chunks = known_chunks.lock().unwrap();
for i in 0..index.index_count() {
known_chunks.insert(*index.index_digest(i).unwrap());
}
if self.verbose {
println!("{}: known chunks list length is {}", archive_name, known_chunks.lock().unwrap().len());
println!("{}: known chunks list length is {}", archive_name, index.index_count());
}
Ok(())
Ok(index)
}
pub async fn download_previous_dynamic_index(
&self,
archive_name: &str,
manifest: &BackupManifest,
known_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
) -> Result<DynamicIndexReader, Error> {
let mut tmpfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
.custom_flags(libc::O_TMPFILE)
.open("/tmp")?;
let param = json!({ "archive-name": archive_name });
self.h2.download("previous", Some(param), &mut tmpfile).await?;
let index = DynamicIndexReader::new(tmpfile)
.map_err(|err| format_err!("unable to read dynmamic index '{}' - {}", archive_name, err))?;
// Note: do not use values stored in index (not trusted) - instead, computed them again
let (csum, size) = index.compute_csum();
manifest.verify_file(archive_name, &csum, size)?;
// add index chunks to known chunks
let mut known_chunks = known_chunks.lock().unwrap();
for i in 0..index.index_count() {
known_chunks.insert(*index.index_digest(i).unwrap());
}
if self.verbose {
println!("{}: known chunks list length is {}", archive_name, index.index_count());
}
Ok(index)
}
/// Download backup manifest (index.json) of last backup
pub async fn download_previous_manifest(&self) -> Result<BackupManifest, Error> {
let mut raw_data = Vec::with_capacity(64 * 1024);
let param = json!({ "archive-name": MANIFEST_BLOB_NAME });
self.h2.download("previous", Some(param), &mut raw_data).await?;
let blob = DataBlob::from_raw(raw_data)?;
blob.verify_crc()?;
let data = blob.decode(self.crypt_config.as_ref().map(Arc::as_ref))?;
let manifest = BackupManifest::from_data(&data[..], self.crypt_config.as_ref().map(Arc::as_ref))?;
Ok(manifest)
}
fn upload_chunk_info_stream(
@ -418,6 +474,7 @@ impl BackupWriter {
prefix: &str,
known_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
crypt_config: Option<Arc<CryptConfig>>,
compress: bool,
verbose: bool,
) -> impl Future<Output = Result<(usize, usize, std::time::Duration, usize, [u8; 32]), Error>> {
@ -448,7 +505,7 @@ impl BackupWriter {
let offset = stream_len.fetch_add(chunk_len, Ordering::SeqCst) as u64;
let mut chunk_builder = DataChunkBuilder::new(data.as_ref())
.compress(true);
.compress(compress);
if let Some(ref crypt_config) = crypt_config {
chunk_builder = chunk_builder.crypt_config(crypt_config);
@ -543,7 +600,8 @@ impl BackupWriter {
})
}
pub async fn upload_speedtest(&self) -> Result<usize, Error> {
/// Upload speed test - prints result to stderr
pub async fn upload_speedtest(&self, verbose: bool) -> Result<f64, Error> {
let mut data = vec![];
// generate pseudo random byte sequence
@ -558,7 +616,7 @@ impl BackupWriter {
let mut repeat = 0;
let (upload_queue, upload_result) = Self::response_queue();
let (upload_queue, upload_result) = Self::response_queue(verbose);
let start_time = std::time::Instant::now();
@ -570,7 +628,7 @@ impl BackupWriter {
let mut upload_queue = upload_queue.clone();
println!("send test data ({} bytes)", data.len());
if verbose { eprintln!("send test data ({} bytes)", data.len()); }
let request = H2Client::request_builder("localhost", "POST", "speedtest", None, None).unwrap();
let request_future = self.h2.send_request(request, Some(bytes::Bytes::from(data.clone()))).await?;
@ -581,9 +639,9 @@ impl BackupWriter {
let _ = upload_result.await?;
println!("Uploaded {} chunks in {} seconds.", repeat, start_time.elapsed().as_secs());
let speed = ((item_len*1_000_000*(repeat as usize))/(1024*1024))/(start_time.elapsed().as_micros() as usize);
println!("Time per request: {} microseconds.", (start_time.elapsed().as_micros())/(repeat as u128));
eprintln!("Uploaded {} chunks in {} seconds.", repeat, start_time.elapsed().as_secs());
let speed = ((item_len*(repeat as usize)) as f64)/start_time.elapsed().as_secs_f64();
eprintln!("Time per request: {} microseconds.", (start_time.elapsed().as_micros())/(repeat as u128));
Ok(speed)
}
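
Two behavioural notes on this refactor: the crypt config now lives in the writer, so each upload only picks encrypt on or off, and requesting encryption without a crypt config is a hard error instead of a silent plaintext upload. The speedtest also changed units: (item_len * repeat) as f64 / elapsed.as_secs_f64() yields raw bytes per second instead of the old truncating integer arithmetic. A sketch of the new call shape (the file name is an example):

    async fn upload_config_blob(writer: &BackupWriter, data: Vec<u8>) -> Result<BackupStats, anyhow::Error> {
        // compress = true, encrypt = false: compressed plaintext blob;
        // encrypt = true would bail here unless the writer holds a crypt config
        writer.upload_blob_from_data(data, "qemu-server.conf.blob", true, false).await
    }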

View File

@ -1,7 +1,7 @@
use std::future::Future;
use std::collections::HashMap;
use std::pin::Pin;
use std::sync::Arc;
use std::sync::{Arc, Mutex};
use anyhow::Error;
@ -10,11 +10,12 @@ use crate::backup::{AsyncReadChunk, CryptConfig, DataBlob, ReadChunk};
use crate::tools::runtime::block_on;
/// Read chunks from remote host using ``BackupReader``
#[derive(Clone)]
pub struct RemoteChunkReader {
client: Arc<BackupReader>,
crypt_config: Option<Arc<CryptConfig>>,
cache_hint: HashMap<[u8; 32], usize>,
cache: HashMap<[u8; 32], Vec<u8>>,
cache: Arc<Mutex<HashMap<[u8; 32], Vec<u8>>>>,
}
impl RemoteChunkReader {
@ -30,11 +31,11 @@ impl RemoteChunkReader {
client,
crypt_config,
cache_hint,
cache: HashMap::new(),
cache: Arc::new(Mutex::new(HashMap::new())),
}
}
pub async fn read_raw_chunk(&mut self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
pub async fn read_raw_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
let mut chunk_data = Vec::with_capacity(4 * 1024 * 1024);
self.client
@ -49,12 +50,12 @@ impl RemoteChunkReader {
}
impl ReadChunk for RemoteChunkReader {
fn read_raw_chunk(&mut self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
fn read_raw_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
block_on(Self::read_raw_chunk(self, digest))
}
fn read_chunk(&mut self, digest: &[u8; 32]) -> Result<Vec<u8>, Error> {
if let Some(raw_data) = self.cache.get(digest) {
fn read_chunk(&self, digest: &[u8; 32]) -> Result<Vec<u8>, Error> {
if let Some(raw_data) = (*self.cache.lock().unwrap()).get(digest) {
return Ok(raw_data.to_vec());
}
@ -66,7 +67,7 @@ impl ReadChunk for RemoteChunkReader {
let use_cache = self.cache_hint.contains_key(digest);
if use_cache {
self.cache.insert(*digest, raw_data.to_vec());
(*self.cache.lock().unwrap()).insert(*digest, raw_data.to_vec());
}
Ok(raw_data)
@ -75,18 +76,18 @@ impl ReadChunk for RemoteChunkReader {
impl AsyncReadChunk for RemoteChunkReader {
fn read_raw_chunk<'a>(
&'a mut self,
&'a self,
digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<DataBlob, Error>> + Send + 'a>> {
Box::pin(Self::read_raw_chunk(self, digest))
}
fn read_chunk<'a>(
&'a mut self,
&'a self,
digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<Vec<u8>, Error>> + Send + 'a>> {
Box::pin(async move {
if let Some(raw_data) = self.cache.get(digest) {
if let Some(raw_data) = (*self.cache.lock().unwrap()).get(digest) {
return Ok(raw_data.to_vec());
}
@ -98,7 +99,7 @@ impl AsyncReadChunk for RemoteChunkReader {
let use_cache = self.cache_hint.contains_key(digest);
if use_cache {
self.cache.insert(*digest, raw_data.to_vec());
(*self.cache.lock().unwrap()).insert(*digest, raw_data.to_vec());
}
Ok(raw_data)
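
Putting the cache behind Arc<Mutex<..>> is what allows the trait methods to take &self and makes Clone cheap: every clone shares one chunk cache. A sketch of concurrent use, assuming the involved types are Send + Sync:

    fn spawn_chunk_fetchers(reader: RemoteChunkReader, digests: Vec<[u8; 32]>) {
        for digest in digests {
            let reader = reader.clone(); // shares client, crypt config and cache
            tokio::spawn(async move {
                // &self suffices now -- no exclusive borrow held across tasks
                let _chunk = AsyncReadChunk::read_chunk(&reader, &digest).await;
            });
        }
    }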

View File

@ -2,7 +2,7 @@ use std::collections::{HashSet, HashMap};
use std::convert::TryFrom;
use std::ffi::{CStr, CString, OsStr};
use std::fmt;
use std::io::{self, Write};
use std::io::{self, Read, Write};
use std::os::unix::ffi::OsStrExt;
use std::os::unix::io::{AsRawFd, FromRawFd, IntoRawFd, RawFd};
use std::path::{Path, PathBuf};
@ -20,8 +20,10 @@ use pxar::encoder::LinkOffset;
use proxmox::c_str;
use proxmox::sys::error::SysError;
use proxmox::tools::fd::RawFdNum;
use proxmox::tools::vec;
use crate::pxar::catalog::BackupCatalogWriter;
use crate::pxar::metadata::errno_is_unsupported;
use crate::pxar::Flags;
use crate::pxar::tools::assert_single_path_component;
use crate::tools::{acl, fs, xattr, Fd};
@ -35,6 +37,7 @@ fn detect_fs_type(fd: RawFd) -> Result<i64, Error> {
Ok(fs_stat.f_type)
}
#[rustfmt::skip]
pub fn is_virtual_file_system(magic: i64) -> bool {
use proxmox::sys::linux::magic::*;
@ -114,6 +117,7 @@ struct Archiver<'a, 'b> {
device_set: Option<HashSet<u64>>,
hardlinks: HashMap<HardLinkInfo, (PathBuf, LinkOffset)>,
errors: ErrorReporter,
file_copy_buffer: Vec<u8>,
}
type Encoder<'a, 'b> = pxar::encoder::Encoder<'a, &'b mut dyn pxar::encoder::SeqWrite>;
@ -158,7 +162,7 @@ where
if skip_lost_and_found {
patterns.push(MatchEntry::parse_pattern(
"**/lost+found",
"lost+found",
PatternFlag::PATH_NAME,
MatchType::Exclude,
)?);
@ -178,6 +182,7 @@ where
device_set,
hardlinks: HashMap::new(),
errors: ErrorReporter,
file_copy_buffer: vec::undefined(4 * 1024 * 1024),
};
archiver.archive_dir_contents(&mut encoder, source_dir, true)?;
@ -244,11 +249,15 @@ impl<'a, 'b> Archiver<'a, 'b> {
}
/// openat() wrapper which allows but logs `EACCES` and turns `ENOENT` into `None`.
///
/// The `existed` flag is set when iterating through a directory to note that we know the file
/// is supposed to exist and we should warn if it doesn't.
fn open_file(
&mut self,
parent: RawFd,
file_name: &CStr,
oflags: OFlag,
existed: bool,
) -> Result<Option<Fd>, Error> {
match Fd::openat(
&unsafe { RawFdNum::from_raw_fd(parent) },
@ -257,9 +266,14 @@ impl<'a, 'b> Archiver<'a, 'b> {
Mode::empty(),
) {
Ok(fd) => Ok(Some(fd)),
Err(nix::Error::Sys(Errno::ENOENT)) => Ok(None),
Err(nix::Error::Sys(Errno::ENOENT)) => {
if existed {
self.report_vanished_file()?;
}
Ok(None)
}
Err(nix::Error::Sys(Errno::EACCES)) => {
write!(self.errors, "failed to open file: {:?}: access denied", file_name)?;
writeln!(self.errors, "failed to open file: {:?}: access denied", file_name)?;
Ok(None)
}
Err(other) => Err(Error::from(other)),
@ -271,6 +285,7 @@ impl<'a, 'b> Archiver<'a, 'b> {
parent,
c_str!(".pxarexclude"),
OFlag::O_RDONLY | OFlag::O_CLOEXEC | OFlag::O_NOCTTY,
false,
)?;
let old_pattern_count = self.patterns.len();
@ -283,7 +298,7 @@ impl<'a, 'b> Archiver<'a, 'b> {
let line = match line {
Ok(line) => line,
Err(err) => {
let _ = write!(
let _ = writeln!(
self.errors,
"ignoring .pxarexclude after read error in {:?}: {}",
self.path,
@ -303,7 +318,7 @@ impl<'a, 'b> Archiver<'a, 'b> {
match MatchEntry::parse_pattern(line, PatternFlag::PATH_NAME, MatchType::Exclude) {
Ok(pattern) => self.patterns.push(pattern),
Err(err) => {
let _ = write!(self.errors, "bad pattern in {:?}: {}", self.path, err);
let _ = writeln!(self.errors, "bad pattern in {:?}: {}", self.path, err);
}
}
}
@ -406,7 +421,25 @@ impl<'a, 'b> Archiver<'a, 'b> {
}
fn report_vanished_file(&mut self) -> Result<(), Error> {
write!(self.errors, "warning: file vanished while reading: {:?}", self.path)?;
writeln!(self.errors, "warning: file vanished while reading: {:?}", self.path)?;
Ok(())
}
fn report_file_shrunk_while_reading(&mut self) -> Result<(), Error> {
writeln!(
self.errors,
"warning: file size shrunk while reading: {:?}, file will be padded with zeros!",
self.path,
)?;
Ok(())
}
fn report_file_grew_while_reading(&mut self) -> Result<(), Error> {
writeln!(
self.errors,
"warning: file size increased while reading: {:?}, file will be truncated!",
self.path,
)?;
Ok(())
}
@ -420,24 +453,22 @@ impl<'a, 'b> Archiver<'a, 'b> {
use pxar::format::mode;
let file_mode = stat.st_mode & libc::S_IFMT;
let open_mode = if !(file_mode == libc::S_IFREG || file_mode == libc::S_IFDIR) {
OFlag::O_PATH
} else {
let open_mode = if file_mode == libc::S_IFREG || file_mode == libc::S_IFDIR {
OFlag::empty()
} else {
OFlag::O_PATH
};
let fd = self.open_file(
parent,
c_file_name,
open_mode | OFlag::O_RDONLY | OFlag::O_NOFOLLOW | OFlag::O_CLOEXEC | OFlag::O_NOCTTY,
true,
)?;
let fd = match fd {
Some(fd) => fd,
None => {
self.report_vanished_file()?;
return Ok(());
}
None => return Ok(()),
};
let metadata = get_metadata(fd.as_raw_fd(), &stat, self.flags(), self.fs_magic)?;
@ -591,8 +622,29 @@ impl<'a, 'b> Archiver<'a, 'b> {
file_size: u64,
) -> Result<LinkOffset, Error> {
let mut file = unsafe { std::fs::File::from_raw_fd(fd.into_raw_fd()) };
let offset = encoder.add_file(metadata, file_name, file_size, &mut file)?;
Ok(offset)
let mut remaining = file_size;
let mut out = encoder.create_file(metadata, file_name, file_size)?;
while remaining != 0 {
    let mut got = file.read(&mut self.file_copy_buffer[..])?;
    if got as u64 > remaining {
        self.report_file_grew_while_reading()?;
        got = remaining as usize;
    }
    if got == 0 {
        // EOF before the recorded size: the file shrank while we read it,
        // pad the remainder with zeros so the entry keeps its fixed size
        self.report_file_shrunk_while_reading()?;
        let to_zero = remaining.min(self.file_copy_buffer.len() as u64) as usize;
        vec::clear(&mut self.file_copy_buffer[..to_zero]);
        while remaining != 0 {
            let fill = remaining.min(self.file_copy_buffer.len() as u64) as usize;
            out.write_all(&self.file_copy_buffer[..fill])?;
            remaining -= fill as u64;
        }
        break;
    }
    out.write_all(&self.file_copy_buffer[..got])?;
    remaining -= got as u64;
}
Ok(out.file_offset())
}
fn add_symlink(
@ -647,13 +699,6 @@ fn get_metadata(fd: RawFd, stat: &FileStat, flags: Flags, fs_magic: i64) -> Resu
Ok(meta)
}
fn errno_is_unsupported(errno: Errno) -> bool {
match errno {
Errno::ENOTTY | Errno::ENOSYS | Errno::EBADF | Errno::EOPNOTSUPP | Errno::EINVAL => true,
_ => false,
}
}
fn get_fcaps(meta: &mut Metadata, fd: RawFd, flags: Flags) -> Result<(), Error> {
if flags.contains(Flags::WITH_FCAPS) {
return Ok(());
@ -718,7 +763,7 @@ fn get_xattr_fcaps_acl(
}
fn get_chattr(metadata: &mut Metadata, fd: RawFd) -> Result<(), Error> {
let mut attr: usize = 0;
let mut attr: libc::c_long = 0;
match unsafe { fs::read_attr_fd(fd, &mut attr) } {
Ok(_) => (),
@ -728,7 +773,7 @@ fn get_chattr(metadata: &mut Metadata, fd: RawFd) -> Result<(), Error> {
Err(err) => bail!("failed to read file attributes: {}", err),
}
metadata.stat.flags |= Flags::from_chattr(attr as u32).bits();
metadata.stat.flags |= Flags::from_chattr(attr).bits();
Ok(())
}

View File

@ -230,7 +230,8 @@ impl Extractor {
dir.metadata(),
fd,
&CString::new(dir.file_name().as_bytes())?,
)?;
)
.map_err(|err| format_err!("failed to apply directory metadata: {}", err))?;
}
Ok(())
@ -241,7 +242,9 @@ impl Extractor {
}
fn parent_fd(&mut self) -> Result<RawFd, Error> {
self.dir_stack.last_dir_fd(self.allow_existing_dirs)
self.dir_stack
.last_dir_fd(self.allow_existing_dirs)
.map_err(|err| format_err!("failed to get parent directory file descriptor: {}", err))
}
pub fn extract_symlink(
@ -320,10 +323,14 @@ impl Extractor {
file_name,
OFlag::O_CREAT | OFlag::O_EXCL | OFlag::O_WRONLY | OFlag::O_CLOEXEC,
Mode::from_bits(0o600).unwrap(),
)?)
)
.map_err(|err| format_err!("failed to create file {:?}: {}", file_name, err))?)
};
let extracted = io::copy(&mut *contents, &mut file)?;
metadata::apply_initial_flags(self.feature_flags, metadata, file.as_raw_fd())?;
let extracted = io::copy(&mut *contents, &mut file)
.map_err(|err| format_err!("failed to copy file contents: {}", err))?;
if size != extracted {
bail!("extracted {} bytes of a file of {} bytes", extracted, size);
}
@ -345,10 +352,15 @@ impl Extractor {
file_name,
OFlag::O_CREAT | OFlag::O_EXCL | OFlag::O_WRONLY | OFlag::O_CLOEXEC,
Mode::from_bits(0o600).unwrap(),
)?)
)
.map_err(|err| format_err!("failed to create file {:?}: {}", file_name, err))?)
});
let extracted = tokio::io::copy(&mut *contents, &mut file).await?;
metadata::apply_initial_flags(self.feature_flags, metadata, file.as_raw_fd())?;
let extracted = tokio::io::copy(&mut *contents, &mut file)
.await
.map_err(|err| format_err!("failed to copy file contents: {}", err))?;
if size != extracted {
bail!("extracted {} bytes of a file of {} bytes", extracted, size);
}
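
Note the ordering introduced above: apply_initial_flags runs after creating the file but before the contents are copied, because some of the initial flags (e.g. the btrfs NOCOW attribute) can only be set while the file is still empty. Reduced to its skeleton (create_new is a stand-in for the openat call above):

    let mut file = create_new(file_name)?; // O_CREAT | O_EXCL | O_WRONLY, mode 0600
    metadata::apply_initial_flags(self.feature_flags, metadata, file.as_raw_fd())?;
    let extracted = io::copy(&mut *contents, &mut file)?; // data only written afterwards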

View File

@ -3,6 +3,8 @@
//! Flags for known supported features for a given filesystem can be derived
//! from the superblocks magic number.
use libc::c_long;
use bitflags::bitflags;
bitflags! {
@ -149,22 +151,27 @@ impl Default for Flags {
}
}
impl Flags {
/// Get a set of feature flags from file attributes.
pub fn from_chattr(attr: u32) -> Flags {
// from /usr/include/linux/fs.h
const FS_APPEND_FL: u32 = 0x0000_0020;
const FS_NOATIME_FL: u32 = 0x0000_0080;
const FS_COMPR_FL: u32 = 0x0000_0004;
const FS_NOCOW_FL: u32 = 0x0080_0000;
const FS_NODUMP_FL: u32 = 0x0000_0040;
const FS_DIRSYNC_FL: u32 = 0x0001_0000;
const FS_IMMUTABLE_FL: u32 = 0x0000_0010;
const FS_SYNC_FL: u32 = 0x0000_0008;
const FS_NOCOMP_FL: u32 = 0x0000_0400;
const FS_PROJINHERIT_FL: u32 = 0x2000_0000;
const FS_APPEND_FL: c_long = 0x0000_0020;
const FS_NOATIME_FL: c_long = 0x0000_0080;
const FS_COMPR_FL: c_long = 0x0000_0004;
const FS_NOCOW_FL: c_long = 0x0080_0000;
const FS_NODUMP_FL: c_long = 0x0000_0040;
const FS_DIRSYNC_FL: c_long = 0x0001_0000;
const FS_IMMUTABLE_FL: c_long = 0x0000_0010;
const FS_SYNC_FL: c_long = 0x0000_0008;
const FS_NOCOMP_FL: c_long = 0x0000_0400;
const FS_PROJINHERIT_FL: c_long = 0x2000_0000;
const CHATTR_MAP: [(Flags, u32); 10] = [
pub(crate) const INITIAL_FS_FLAGS: c_long =
FS_NOATIME_FL
| FS_COMPR_FL
| FS_NOCOW_FL
| FS_NOCOMP_FL
| FS_PROJINHERIT_FL;
#[rustfmt::skip]
const CHATTR_MAP: [(Flags, c_long); 10] = [
( Flags::WITH_FLAG_APPEND, FS_APPEND_FL ),
( Flags::WITH_FLAG_NOATIME, FS_NOATIME_FL ),
( Flags::WITH_FLAG_COMPR, FS_COMPR_FL ),
@ -177,6 +184,21 @@ impl Flags {
( Flags::WITH_FLAG_PROJINHERIT, FS_PROJINHERIT_FL ),
];
// from /usr/include/linux/msdos_fs.h
const ATTR_HIDDEN: u32 = 2;
const ATTR_SYS: u32 = 4;
const ATTR_ARCH: u32 = 32;
#[rustfmt::skip]
const FAT_ATTR_MAP: [(Flags, u32); 3] = [
( Flags::WITH_FLAG_HIDDEN, ATTR_HIDDEN ),
( Flags::WITH_FLAG_SYSTEM, ATTR_SYS ),
( Flags::WITH_FLAG_ARCHIVE, ATTR_ARCH ),
];
impl Flags {
/// Get a set of feature flags from file attributes.
pub fn from_chattr(attr: c_long) -> Flags {
let mut flags = Flags::empty();
for (fe_flag, fs_flag) in &CHATTR_MAP {
@ -188,19 +210,25 @@ impl Flags {
flags
}
/// Get the chattr bit representation of these feature flags.
pub fn to_chattr(self) -> c_long {
let mut flags: c_long = 0;
for (fe_flag, fs_flag) in &CHATTR_MAP {
if self.contains(*fe_flag) {
flags |= *fs_flag;
}
}
flags
}
pub fn to_initial_chattr(self) -> c_long {
self.to_chattr() & INITIAL_FS_FLAGS
}
/// Get a set of feature flags from FAT attributes.
pub fn from_fat_attr(attr: u32) -> Flags {
// from /usr/include/linux/msdos_fs.h
const ATTR_HIDDEN: u32 = 2;
const ATTR_SYS: u32 = 4;
const ATTR_ARCH: u32 = 32;
const FAT_ATTR_MAP: [(Flags, u32); 3] = [
( Flags::WITH_FLAG_HIDDEN, ATTR_HIDDEN ),
( Flags::WITH_FLAG_SYSTEM, ATTR_SYS ),
( Flags::WITH_FLAG_ARCHIVE, ATTR_ARCH ),
];
let mut flags = Flags::empty();
for (fe_flag, fs_flag) in &FAT_ATTR_MAP {
@ -212,6 +240,19 @@ impl Flags {
flags
}
/// Get the fat attribute bit representation of these feature flags.
pub fn to_fat_attr(self) -> u32 {
let mut flags = 0u32;
for (fe_flag, fs_flag) in &FAT_ATTR_MAP {
if self.contains(*fe_flag) {
flags |= *fs_flag;
}
}
flags
}
/// Return the supported *pxar* feature flags based on the magic number of the filesystem.
pub fn from_magic(magic: i64) -> Flags {
use proxmox::sys::linux::magic::*;
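
With both directions available, the mapping can be sanity-checked to be symmetric for mapped bits; a small sketch:

    #[test]
    fn chattr_roundtrip() {
        let attr: libc::c_long = 0x0000_0020 | 0x0000_0080; // FS_APPEND_FL | FS_NOATIME_FL
        let flags = Flags::from_chattr(attr);
        assert!(flags.contains(Flags::WITH_FLAG_APPEND | Flags::WITH_FLAG_NOATIME));
        assert_eq!(flags.to_chattr(), attr);
        // only bits in INITIAL_FS_FLAGS survive to_initial_chattr():
        assert_eq!(flags.to_initial_chattr(), 0x0000_0080); // NOATIME yes, APPEND no
    }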

View File

@ -79,13 +79,19 @@ pub fn apply_at(
apply(flags, metadata, fd.as_raw_fd(), file_name)
}
pub fn apply_initial_flags(
flags: Flags,
metadata: &Metadata,
fd: RawFd,
) -> Result<(), Error> {
let entry_flags = Flags::from_bits_truncate(metadata.stat.flags);
apply_chattr(fd, entry_flags.to_initial_chattr(), flags.to_initial_chattr())?;
Ok(())
}
pub fn apply(flags: Flags, metadata: &Metadata, fd: RawFd, file_name: &CStr) -> Result<(), Error> {
let c_proc_path = CString::new(format!("/proc/self/fd/{}", fd)).unwrap();
if metadata.stat.flags != 0 {
todo!("apply flags!");
}
unsafe {
// UID and GID first, as this fails if we lose access anyway.
c_result!(libc::chown(
@ -94,13 +100,15 @@ pub fn apply(flags: Flags, metadata: &Metadata, fd: RawFd, file_name: &CStr) ->
metadata.stat.gid
))
.map(drop)
.or_else(allow_notsupp)?;
.or_else(allow_notsupp)
.map_err(|err| format_err!("failed to set ownership: {}", err))?;
}
let mut skip_xattrs = false;
apply_xattrs(flags, c_proc_path.as_ptr(), metadata, &mut skip_xattrs)?;
add_fcaps(flags, c_proc_path.as_ptr(), metadata, &mut skip_xattrs)?;
apply_acls(flags, &c_proc_path, metadata)?;
apply_acls(flags, &c_proc_path, metadata)
.map_err(|err| format_err!("failed to apply acls: {}", err))?;
apply_quota_project_id(flags, fd, metadata)?;
// Finally mode and time. We may lose access with mode, but changing the mode also
@ -110,7 +118,12 @@ pub fn apply(flags: Flags, metadata: &Metadata, fd: RawFd, file_name: &CStr) ->
libc::chmod(c_proc_path.as_ptr(), perms_from_metadata(metadata)?.bits())
})
.map(drop)
.or_else(allow_notsupp)?;
.or_else(allow_notsupp)
.map_err(|err| format_err!("failed to change file mode: {}", err))?;
}
if metadata.stat.flags != 0 {
apply_flags(flags, fd, metadata.stat.flags)?;
}
let res = c_result!(unsafe {
@ -160,7 +173,8 @@ fn add_fcaps(
)
})
.map(drop)
.or_else(|err| allow_notsupp_remember(err, skip_xattrs))?;
.or_else(|err| allow_notsupp_remember(err, skip_xattrs))
.map_err(|err| format_err!("failed to apply file capabilities: {}", err))?;
Ok(())
}
@ -195,7 +209,8 @@ fn apply_xattrs(
)
})
.map(drop)
.or_else(|err| allow_notsupp_remember(err, &mut *skip_xattrs))?;
.or_else(|err| allow_notsupp_remember(err, &mut *skip_xattrs))
.map_err(|err| format_err!("failed to apply extended attributes: {}", err))?;
}
Ok(())
@ -317,3 +332,49 @@ fn apply_quota_project_id(flags: Flags, fd: RawFd, metadata: &Metadata) -> Resul
Ok(())
}
pub(crate) fn errno_is_unsupported(errno: Errno) -> bool {
match errno {
Errno::ENOTTY | Errno::ENOSYS | Errno::EBADF | Errno::EOPNOTSUPP | Errno::EINVAL => true,
_ => false,
}
}
fn apply_chattr(fd: RawFd, chattr: libc::c_long, mask: libc::c_long) -> Result<(), Error> {
if chattr == 0 {
return Ok(());
}
let mut fattr: libc::c_long = 0;
match unsafe { fs::read_attr_fd(fd, &mut fattr) } {
Ok(_) => (),
Err(nix::Error::Sys(errno)) if errno_is_unsupported(errno) => {
return Ok(());
}
Err(err) => bail!("failed to read file attributes: {}", err),
}
let attr = (chattr & mask) | (fattr & !mask);
match unsafe { fs::write_attr_fd(fd, &attr) } {
Ok(_) => Ok(()),
Err(nix::Error::Sys(errno)) if errno_is_unsupported(errno) => Ok(()),
Err(err) => bail!("failed to set file attributes: {}", err),
}
}
fn apply_flags(flags: Flags, fd: RawFd, entry_flags: u64) -> Result<(), Error> {
let entry_flags = Flags::from_bits_truncate(entry_flags);
apply_chattr(fd, entry_flags.to_chattr(), flags.to_chattr())?;
let fatattr = (flags & entry_flags).to_fat_attr();
if fatattr != 0 {
match unsafe { fs::write_fat_attr_fd(fd, &fatattr) } {
Ok(_) => (),
Err(nix::Error::Sys(errno)) if errno_is_unsupported(errno) => (),
Err(err) => bail!("failed to set file attributes: {}", err),
}
}
Ok(())
}
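
The merge in apply_chattr deliberately leaves unmanaged bits alone: (chattr & mask) | (fattr & !mask) takes the masked bits from the archive and everything else from the file's current attributes. A tiny worked example:

    let fattr:  libc::c_long = 0b1010; // attributes currently on disk
    let chattr: libc::c_long = 0b0101; // attributes recorded in the archive
    let mask:   libc::c_long = 0b0011; // flags this restore is allowed to manage
    assert_eq!((chattr & mask) | (fattr & !mask), 0b1001); // low bits from archive, high bits kept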

View File

@ -323,7 +323,7 @@ fn get_index(username: Option<String>, token: Option<String>, template: &Handleb
if let Some(query_str) = parts.uri.query() {
for (k, v) in form_urlencoded::parse(query_str.as_bytes()).into_owned() {
if k == "debug" && v == "1" || v == "true" {
if k == "debug" && v != "0" && v != "false" {
debug = true;
}
}
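
The old condition was an operator-precedence bug: a && b || c parses as (a && b) || c, so any query parameter whose value was "true" switched debug mode on. A quick check of both forms:

    let (k, v) = ("some-other-param", "true");
    let old = k == "debug" && v == "1" || v == "true";  // true -- accidental debug
    let new = k == "debug" && v != "0" && v != "false"; // false, as intended
    assert!(old && !new);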
@ -493,12 +493,12 @@ pub async fn handle_request(api: Arc<ApiConfig>, req: Request<Body>) -> Result<R
let (parts, body) = req.into_parts();
let method = parts.method.clone();
let (path, components) = tools::normalize_uri_path(parts.uri.path())?;
let (_path, components) = tools::normalize_uri_path(parts.uri.path())?;
let comp_len = components.len();
println!("REQUEST {} {}", method, path);
println!("COMPO {:?}", components);
//println!("REQUEST {} {}", method, path);
//println!("COMPO {:?}", components);
let env_type = api.env_type();
let mut rpcenv = RestEnvironment::new(env_type);

View File

@ -213,6 +213,8 @@ pub fn upid_read_status(upid: &UPID) -> Result<String, Error> {
Some(rest) => {
if rest == "OK" {
status = String::from(rest);
} else if rest.starts_with("WARNINGS: ") {
status = String::from(rest);
} else if rest.starts_with("ERROR: ") {
status = String::from(&rest[7..]);
}
@ -234,7 +236,7 @@ pub struct TaskListInfo {
pub upid_str: String,
/// Task `(endtime, status)` if already finished
///
/// The `status` ise iether `unknown`, `OK`, or `ERROR: ...`
/// The `status` is either `unknown`, `OK`, `WARN`, or `ERROR: ...`
pub state: Option<(i64, String)>, // endtime, status
}
@ -268,14 +270,10 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
let line = line?;
match parse_worker_status_line(&line) {
Err(err) => bail!("unable to parse active worker status '{}' - {}", line, err),
Ok((upid_str, upid, state)) => {
let running = worker_is_active_local(&upid);
if running {
Ok((upid_str, upid, state)) => match state {
None if worker_is_active_local(&upid) => {
active_list.push(TaskListInfo { upid, upid_str, state: None });
} else {
match state {
},
None => {
println!("Detected stopped UPID {}", upid_str);
let status = upid_read_status(&upid)
@ -283,7 +281,7 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
finish_list.push(TaskListInfo {
upid, upid_str, state: Some((Local::now().timestamp(), status))
});
}
},
Some((endtime, status)) => {
finish_list.push(TaskListInfo {
upid, upid_str, state: Some((endtime, status))
@ -293,8 +291,6 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
}
}
}
}
}
if let Some(upid) = new_upid {
active_list.push(TaskListInfo { upid: upid.clone(), upid_str: upid.to_string(), state: None });
@ -385,6 +381,7 @@ impl std::fmt::Display for WorkerTask {
struct WorkerTaskData {
logger: FileLogger,
progress: f64, // 0..1
warn_count: u64,
pub abort_listeners: Vec<oneshot::Sender<()>>,
}
@ -424,6 +421,7 @@ impl WorkerTask {
data: Mutex::new(WorkerTaskData {
logger,
progress: 0.0,
warn_count: 0,
abort_listeners: vec![],
}),
});
@ -507,8 +505,11 @@ impl WorkerTask {
/// Log task result, remove task from running list
pub fn log_result(&self, result: &Result<(), Error>) {
let warn_count = self.data.lock().unwrap().warn_count;
if let Err(err) = result {
self.log(&format!("TASK ERROR: {}", err));
} else if warn_count > 0 {
self.log(format!("TASK WARNINGS: {}", warn_count));
} else {
self.log("TASK OK");
}
@ -524,6 +525,13 @@ impl WorkerTask {
data.logger.log(msg);
}
/// Log a message as warning.
pub fn warn<S: AsRef<str>>(&self, msg: S) {
let mut data = self.data.lock().unwrap();
data.logger.log(format!("WARN: {}", msg.as_ref()));
data.warn_count += 1;
}
/// Set progress indicator
pub fn progress(&self, progress: f64) {
if progress >= 0.0 && progress <= 1.0 {
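
A sketch of the new warning flow end to end, using the methods defined above:

    fn run_with_warnings(worker: &WorkerTask) {
        worker.log("starting verification");
        worker.warn("chunk abc123 missing, marking as damaged"); // warn_count += 1
        worker.log_result(&Ok(())); // logs "TASK WARNINGS: 1" instead of "TASK OK",
                                    // which upid_read_status() then reports as the status
    }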

View File

@ -23,6 +23,7 @@ pub use proxmox::tools::fd::Fd;
pub mod acl;
pub mod async_io;
pub mod borrow;
pub mod cert;
pub mod daemon;
pub mod disks;
pub mod fs;

src/tools/cert.rs Normal file
View File

@ -0,0 +1,67 @@
use std::path::PathBuf;
use anyhow::Error;
use openssl::x509::{X509, GeneralName};
use openssl::stack::Stack;
use openssl::pkey::{Public, PKey};
use crate::configdir;
pub struct CertInfo {
x509: X509,
}
fn x509name_to_string(name: &openssl::x509::X509NameRef) -> Result<String, Error> {
let mut parts = Vec::new();
for entry in name.entries() {
parts.push(format!("{} = {}", entry.object().nid().short_name()?, entry.data().as_utf8()?));
}
Ok(parts.join(", "))
}
impl CertInfo {
pub fn new() -> Result<Self, Error> {
Self::from_path(PathBuf::from(configdir!("/proxy.pem")))
}
pub fn from_path(path: PathBuf) -> Result<Self, Error> {
let cert_pem = proxmox::tools::fs::file_get_contents(&path)?;
let x509 = openssl::x509::X509::from_pem(&cert_pem)?;
Ok(Self{
x509
})
}
pub fn subject_alt_names(&self) -> Option<Stack<GeneralName>> {
self.x509.subject_alt_names()
}
pub fn subject_name(&self) -> Result<String, Error> {
Ok(x509name_to_string(self.x509.subject_name())?)
}
pub fn issuer_name(&self) -> Result<String, Error> {
Ok(x509name_to_string(self.x509.issuer_name())?)
}
pub fn fingerprint(&self) -> Result<String, Error> {
let fp = self.x509.digest(openssl::hash::MessageDigest::sha256())?;
let fp_string = proxmox::tools::digest_to_hex(&fp);
let fp_string = fp_string.as_bytes().chunks(2).map(|v| std::str::from_utf8(v).unwrap())
.collect::<Vec<&str>>().join(":");
Ok(fp_string)
}
pub fn public_key(&self) -> Result<PKey<Public>, Error> {
let pubkey = self.x509.public_key()?;
Ok(pubkey)
}
pub fn not_before(&self) -> &openssl::asn1::Asn1TimeRef {
self.x509.not_before()
}
pub fn not_after(&self) -> &openssl::asn1::Asn1TimeRef {
self.x509.not_after()
}
}
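
With this helper in place, call sites like the cert_info command above shrink to a few lines; a usage sketch:

    fn print_cert_summary() -> Result<(), anyhow::Error> {
        let cert = CertInfo::new()?; // loads configdir!("/proxy.pem")
        println!("Subject: {}", cert.subject_name()?);
        println!("Issuer: {}", cert.issuer_name()?);
        println!("Fingerprint (sha256): {}", cert.fingerprint()?); // colon-separated hex pairs
        Ok(())
    }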

View File

@ -743,7 +743,10 @@ pub fn get_disks(
let partition_type_map = get_partition_type_info()?;
let zfs_devices = zfs_devices(&partition_type_map, None)?;
let zfs_devices = zfs_devices(&partition_type_map, None).or_else(|err| -> Result<HashSet<u64>, Error> {
eprintln!("error getting zfs devices: {}", err);
Ok(HashSet::new())
})?;
let lvm_devices = get_lvm_devices(&partition_type_map)?;

View File

@ -64,7 +64,7 @@ fn parse_zpool_list_header(i: &str) -> IResult<&str, ZFSPoolInfo> {
let (i, (text, size, alloc, free, _, _,
frag, _, dedup, health,
_altroot, _eol)) = tuple((
take_while1(|c| char::is_alphanumeric(c)), // name
take_while1(|c| char::is_alphanumeric(c) || c == '-' || c == ':' || c == '_' || c == '.'), // name
preceded(multispace1, parse_optional_u64), // size
preceded(multispace1, parse_optional_u64), // allocated
preceded(multispace1, parse_optional_u64), // free
@ -221,7 +221,7 @@ logs
assert_eq!(data, expect);
let output = "\
btest 427349245952 761856 427348484096 - - 0 0 1.00 ONLINE -
b-test 427349245952 761856 427348484096 - - 0 0 1.00 ONLINE -
mirror 213674622976 438272 213674184704 - - 0 0 - ONLINE
/dev/sda1 - - - - - - - - ONLINE
/dev/sda2 - - - - - - - - ONLINE
@ -235,7 +235,7 @@ logs - - - - - - - - -
let data = parse_zpool_list(&output)?;
let expect = vec![
ZFSPoolInfo {
name: String::from("btest"),
name: String::from("b-test"),
health: String::from("ONLINE"),
usage: Some(ZFSPoolUsage {
size: 427349245952,
@ -261,5 +261,31 @@ logs - - - - - - - - -
assert_eq!(data, expect);
let output = "\
b.test 427349245952 761856 427348484096 - - 0 0 1.00 ONLINE -
mirror 213674622976 438272 213674184704 - - 0 0 - ONLINE
/dev/sda1 - - - - - - - - ONLINE
";
let data = parse_zpool_list(&output)?;
let expect = vec![
ZFSPoolInfo {
name: String::from("b.test"),
health: String::from("ONLINE"),
usage: Some(ZFSPoolUsage {
size: 427349245952,
alloc: 761856,
free: 427348484096,
dedup: 1.0,
frag: 0,
}),
devices: vec![
String::from("/dev/sda1"),
]
},
];
assert_eq!(data, expect);
Ok(())
}
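
The widened take_while1 predicate is the whole fix: pool names may legally contain '-', ':', '_' and '.', which char::is_alphanumeric used to split on. A standalone sketch of the new name parser (nom 5 style, as used here):

    fn pool_name(i: &str) -> nom::IResult<&str, &str> {
        nom::bytes::complete::take_while1(
            |c: char| c.is_alphanumeric() || c == '-' || c == ':' || c == '_' || c == '.'
        )(i)
    }

    #[test]
    fn pool_name_accepts_separators() {
        assert_eq!(pool_name("b-test 427349245952"), Ok((" 427349245952", "b-test")));
        assert_eq!(pool_name("b.test rest"), Ok((" rest", "b.test")));
    }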

View File

@ -430,3 +430,38 @@ errors: No known data errors
Ok(())
}
#[test]
fn test_zpool_status_parser3() -> Result<(), Error> {
let output = r###" pool: bt-est
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
bt-est ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
/dev/sda1 ONLINE 0 0 0
/dev/sda2 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
/dev/sda3 ONLINE 0 0 0
/dev/sda4 ONLINE 0 0 0
logs
/dev/sda5 ONLINE 0 0 0
errors: No known data errors
"###;
let key_value_list = parse_zpool_status(&output)?;
for (k, v) in key_value_list {
println!("{} => {}", k,v);
if k == "config" {
let vdev_list = parse_zpool_status_config_tree(&v)?;
let _tree = vdev_list_to_tree(&vdev_list);
//println!("TREE1 {}", serde_json::to_string_pretty(&tree)?);
}
}
Ok(())
}

View File

@ -222,11 +222,13 @@ where
// /usr/include/linux/fs.h: #define FS_IOC_GETFLAGS _IOR('f', 1, long)
// read Linux file system attributes (see man chattr)
nix::ioctl_read!(read_attr_fd, b'f', 1, usize);
nix::ioctl_read!(read_attr_fd, b'f', 1, libc::c_long);
nix::ioctl_write_ptr!(write_attr_fd, b'f', 2, libc::c_long);
// /usr/include/linux/msdos_fs.h: #define FAT_IOCTL_GET_ATTRIBUTES _IOR('r', 0x10, __u32)
// read FAT file system attributes
nix::ioctl_read!(read_fat_attr_fd, b'r', 0x10, u32);
nix::ioctl_write_ptr!(write_fat_attr_fd, b'r', 0x11, u32);
// From /usr/include/linux/fs.h
// #define FS_IOC_FSGETXATTR _IOR('X', 31, struct fsxattr)
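
These ioctls transfer a C long (FS_IOC_GETFLAGS is _IOR('f', 1, long)), so the buffer type now matches the kernel's exactly instead of relying on usize happening to have the right width and signedness. A usage sketch of the corrected pair:

    fn set_append_only(fd: std::os::unix::io::RawFd) -> nix::Result<()> {
        let mut attr: libc::c_long = 0;
        unsafe { read_attr_fd(fd, &mut attr) }?;  // FS_IOC_GETFLAGS
        attr |= 0x0000_0020;                      // FS_APPEND_FL, from linux/fs.h
        unsafe { write_attr_fd(fd, &attr) }?;     // FS_IOC_SETFLAGS
        Ok(())
    }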

View File

@ -41,7 +41,7 @@ pub fn parse_u64(i: &str) -> IResult<&str, u64> {
map_res(recognize(digit1), str::parse)(i)
}
/// Parse complete input, generate vervose error message with line numbers
/// Parse complete input, generate verbose error message with line numbers
pub fn parse_complete<'a, F, O>(what: &str, i: &'a str, parser: F) -> Result<O, Error>
where F: Fn(&'a str) -> IResult<&'a str, O>,
{

View File

@ -56,30 +56,41 @@ extern {
///
/// This makes sure that tokio's worker threads are marked for us so that we know whether we
/// can/need to use `block_in_place` in our `block_on` helper.
pub fn get_runtime() -> Arc<Runtime> {
pub fn get_runtime_with_builder<F: Fn() -> runtime::Builder>(get_builder: F) -> Arc<Runtime> {
let mut guard = RUNTIME.lock().unwrap();
if let Some(rt) = guard.upgrade() { return rt; }
let rt = Arc::new(
runtime::Builder::new()
.on_thread_stop(|| {
let mut builder = get_builder();
builder.on_thread_stop(|| {
// avoid openssl bug: https://github.com/openssl/openssl/issues/6214
// call OPENSSL_thread_stop to avoid race with openssl cleanup handlers
unsafe { OPENSSL_thread_stop(); }
})
.threaded_scheduler()
.enable_all()
.build()
.expect("failed to spawn tokio runtime")
);
});
let runtime = builder.build().expect("failed to spawn tokio runtime");
let rt = Arc::new(runtime);
*guard = Arc::downgrade(&rt.clone());
rt
}
/// Get or create the current main tokio runtime.
///
/// This calls get_runtime_with_builder() using the tokio default threaded scheduler
pub fn get_runtime() -> Arc<Runtime> {
get_runtime_with_builder(|| {
let mut builder = runtime::Builder::new();
builder.threaded_scheduler();
builder.enable_all();
builder
})
}
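
Callers that need a different scheduler can now inject their own builder; for instance a current-thread runtime (tokio 0.2 builder API, matching the calls above). The openssl on_thread_stop workaround is applied inside get_runtime_with_builder either way, so custom builders inherit it:

    fn get_basic_runtime() -> std::sync::Arc<tokio::runtime::Runtime> {
        get_runtime_with_builder(|| {
            let mut builder = tokio::runtime::Builder::new();
            builder.basic_scheduler(); // single-threaded instead of threaded_scheduler
            builder.enable_all();
            builder
        })
    }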
/// Block on a synchronous piece of code.
pub fn block_in_place<R>(fut: impl FnOnce() -> R) -> R {
// don't double-exit the context (tokio doesn't like that)

View File

@ -78,24 +78,6 @@ fn test_compressed_blob_writer() -> Result<(), Error> {
verify_test_blob(blob_writer.finish()?)
}
#[test]
fn test_signed_blob_writer() -> Result<(), Error> {
let tmp = Cursor::new(Vec::<u8>::new());
let mut blob_writer = DataBlobWriter::new_signed(tmp, CRYPT_CONFIG.clone())?;
blob_writer.write_all(&TEST_DATA)?;
verify_test_blob(blob_writer.finish()?)
}
#[test]
fn test_signed_compressed_blob_writer() -> Result<(), Error> {
let tmp = Cursor::new(Vec::<u8>::new());
let mut blob_writer = DataBlobWriter::new_signed_compressed(tmp, CRYPT_CONFIG.clone())?;
blob_writer.write_all(&TEST_DATA)?;
verify_test_blob(blob_writer.finish()?)
}
#[test]
fn test_encrypted_blob_writer() -> Result<(), Error> {
let tmp = Cursor::new(Vec::<u8>::new());

View File

@ -76,6 +76,7 @@ Ext.define('PBS.Dashboard', {
let viewmodel = me.getViewModel();
let res = records[0].data;
viewmodel.set('fingerprint', res.info.fingerprint || Proxmox.Utils.unknownText);
let cpu = res.cpu,
mem = res.memory,
@ -91,6 +92,45 @@ Ext.define('PBS.Dashboard', {
hdPanel.updateValue(root.used / root.total);
},
showFingerPrint: function() {
let me = this;
let vm = me.getViewModel();
let fingerprint = vm.get('fingerprint');
Ext.create('Ext.window.Window', {
modal: true,
width: 600,
title: gettext('Fingerprint'),
layout: 'form',
bodyPadding: '10 0',
items: [
{
xtype: 'textfield',
inputId: 'fingerprintField',
value: fingerprint,
editable: false,
},
],
buttons: [
{
xtype: 'button',
iconCls: 'fa fa-clipboard',
handler: function(b) {
var el = document.getElementById('fingerprintField');
el.select();
document.execCommand("copy");
},
text: gettext('Copy')
},
{
text: gettext('Ok'),
handler: function() {
this.up('window').close();
},
},
],
}).show();
},
updateTasks: function(store, records, success) {
if (!success) return;
let me = this;
@ -134,11 +174,16 @@ Ext.define('PBS.Dashboard', {
timespan: 300, // in seconds
hours: 12, // in hours
error_shown: false,
fingerprint: "",
'bytes_in': 0,
'bytes_out': 0,
'avg_ptime': 0.0
},
formulas: {
disableFPButton: (get) => get('fingerprint') === "",
},
stores: {
usage: {
storeid: 'dash-usage',
@ -211,6 +256,16 @@ Ext.define('PBS.Dashboard', {
iconCls: 'fa fa-tasks',
title: gettext('Server Resources'),
bodyPadding: '0 20 0 20',
tools: [
{
xtype: 'button',
text: gettext('Show Fingerprint'),
handler: 'showFingerPrint',
bind: {
disabled: '{disableFPButton}',
},
},
],
layout: {
type: 'hbox',
align: 'center'

View File

@ -10,28 +10,30 @@ Ext.define('pbs-data-store-snapshots', {
},
'files',
'owner',
{ name: 'size', type: 'int' },
{ name: 'size', type: 'int', allowNull: true, },
{
name: 'encrypted',
name: 'crypt-mode',
type: 'boolean',
calculate: function(data) {
let encrypted = 0;
let files = 0;
let crypt = {
none: 0,
mixed: 0,
'sign-only': 0,
encrypt: 0,
count: 0,
};
let signed = 0;
data.files.forEach(file => {
if (file.filename === 'index.json.blob') return; // is never encrypted
if (file.encrypted) {
encrypted++;
let mode = PBS.Utils.cryptmap.indexOf(file['crypt-mode']);
if (mode !== -1) {
crypt[file['crypt-mode']]++;
}
files++;
crypt.count++;
});
if (encrypted === 0) {
return 0;
} else if (encrypted < files) {
return 1;
} else {
return 2;
}
return PBS.Utils.calculateCryptMode(crypt);
}
}
]
@ -64,7 +66,7 @@ Ext.define('PBS.DataStoreContent', {
'text',
'backup-time'
]);
Proxmox.Utils.monStoreErrors(view, view.store, true);
Proxmox.Utils.monStoreErrors(view, this.store);
this.reload(); // initial load
},
@ -79,6 +81,7 @@ Ext.define('PBS.DataStoreContent', {
let url = `/api2/json/admin/datastore/${view.datastore}/snapshots`;
this.store.setProxy({
type: 'proxmox',
timeout: 300*1000, // 5 minutes, we should make that api call faster
url: url
});
@ -122,10 +125,11 @@ Ext.define('PBS.DataStoreContent', {
return groups;
},
onLoad: function(store, records, success) {
onLoad: function(store, records, success, operation) {
let view = this.getView();
if (!success) {
Proxmox.Utils.setErrorMask(view, Proxmox.Utils.getResponseErrorMessage(operation.getError()));
return;
}
@ -147,12 +151,15 @@ Ext.define('PBS.DataStoreContent', {
let children = [];
for (const [_key, group] of Object.entries(groups)) {
let last_backup = 0;
let encrypted = 0;
let crypt = {
none: 0,
mixed: 0,
'sign-only': 0,
encrypt: 0,
};
for (const item of group.children) {
if (item.encrypted > 0) {
encrypted++;
}
if (item["backup-time"] > last_backup) {
crypt[PBS.Utils.cryptmap[item['crypt-mode']]]++;
if (item["backup-time"] > last_backup && item.size !== null) {
last_backup = item["backup-time"];
group["backup-time"] = last_backup;
group.files = item.files;
@ -161,14 +168,9 @@ Ext.define('PBS.DataStoreContent', {
}
}
if (encrypted === 0) {
group.encrypted = 0;
} else if (encrypted < group.children.length) {
group.encrypted = 1;
} else {
group.encrypted = 2;
}
group.count = group.children.length;
crypt.count = group.count;
group['crypt-mode'] = PBS.Utils.calculateCryptMode(crypt);
children.push(group);
}
@ -176,6 +178,7 @@ Ext.define('PBS.DataStoreContent', {
expanded: true,
children: children
});
Proxmox.Utils.setErrorMask(view, false);
},
onPrune: function() {
@ -197,6 +200,73 @@ Ext.define('PBS.DataStoreContent', {
win.show();
},
onVerify: function() {
var view = this.getView();
if (!view.datastore) return;
let rec = view.selModel.getSelection()[0];
if (!(rec && rec.data)) return;
let data = rec.data;
let params;
if (data.leaf) {
params = {
"backup-type": data["backup-type"],
"backup-id": data["backup-id"],
"backup-time": (data['backup-time'].getTime()/1000).toFixed(0),
};
} else {
params = {
"backup-type": data.backup_type,
"backup-id": data.backup_id,
};
}
Proxmox.Utils.API2Request({
params: params,
url: `/admin/datastore/${view.datastore}/verify`,
method: 'POST',
failure: function(response) {
Ext.Msg.alert(gettext('Error'), response.htmlStatus);
},
success: function(response, options) {
Ext.create('Proxmox.window.TaskViewer', {
upid: response.result.data,
}).show();
},
});
},
onForget: function() {
var view = this.getView();
let rec = view.selModel.getSelection()[0];
if (!(rec && rec.data)) return;
let data = rec.data;
if (!data.leaf) return;
if (!view.datastore) return;
console.log(data);
Proxmox.Utils.API2Request({
params: {
"backup-type": data["backup-type"],
"backup-id": data["backup-id"],
"backup-time": (data['backup-time'].getTime()/1000).toFixed(0),
},
url: `/admin/datastore/${view.datastore}/snapshots`,
method: 'DELETE',
waitMsgTarget: view,
failure: function(response, opts) {
Ext.Msg.alert(gettext('Error'), response.htmlStatus);
},
callback: this.reload.bind(this),
});
},
openBackupFileDownloader: function() {
let me = this;
let view = me.getView();
@ -226,7 +296,7 @@ Ext.define('PBS.DataStoreContent', {
let encrypted = false;
data.files.forEach(file => {
if (file.filename === 'catalog.pcat1.didx' && file.encrypted) {
if (file.filename === 'catalog.pcat1.didx' && file['crypt-mode'] === 'encrypt') {
encrypted = true;
}
});
@ -273,7 +343,13 @@ Ext.define('PBS.DataStoreContent', {
header: gettext("Size"),
sortable: true,
dataIndex: 'size',
renderer: Proxmox.Utils.format_size,
renderer: (v, meta, record) => {
if (v === undefined || v === null) {
meta.tdCls = "x-grid-row-loading";
return '';
}
return Proxmox.Utils.format_size(v);
},
},
{
xtype: 'numbercolumn',
@ -289,15 +365,8 @@ Ext.define('PBS.DataStoreContent', {
},
{
header: gettext('Encrypted'),
dataIndex: 'encrypted',
renderer: function(value) {
switch (value) {
case 0: return Proxmox.Utils.noText;
case 1: return gettext('Mixed');
case 2: return Proxmox.Utils.yesText;
default: Proxmox.Utils.unknownText;
}
}
dataIndex: 'crypt-mode',
renderer: value => PBS.Utils.cryptText[value] || Proxmox.Utils.unknownText,
},
{
header: gettext("Files"),
@@ -307,8 +376,10 @@ Ext.define('PBS.DataStoreContent', {
return files.map((file) => {
let icon = '';
let size = '';
if (file.encrypted) {
icon = '<i class="fa fa-lock"></i> ';
let mode = PBS.Utils.cryptmap.indexOf(file['crypt-mode']);
let iconCls = PBS.Utils.cryptIconCls[mode] || '';
if (iconCls !== '') {
icon = `<i class="fa fa-${iconCls}"></i> `;
}
if (file.size) {
size = ` (${Proxmox.Utils.format_size(file.size)})`;
@@ -326,23 +397,45 @@ Ext.define('PBS.DataStoreContent', {
iconCls: 'fa fa-refresh',
handler: 'reload',
},
'-',
{
xtype: 'proxmoxButton',
text: gettext('Verify'),
disabled: true,
parentXType: 'pbsDataStoreContent',
enableFn: (rec) => !!rec.data && rec.data.size !== null,
handler: 'onVerify',
},
{
xtype: 'proxmoxButton',
text: gettext('Prune'),
disabled: true,
parentXType: 'pbsDataStoreContent',
enableFn: function(record) { return !record.data.leaf; },
enableFn: (rec) => !rec.data.leaf,
handler: 'onPrune',
},
{
xtype: 'proxmoxButton',
text: gettext('Forget'),
disabled: true,
parentXType: 'pbsDataStoreContent',
handler: 'onForget',
dangerous: true,
confirmMsg: function(record) {
let name = record.data.text;
return Ext.String.format(gettext('Are you sure you want to remove snapshot {0}'), `'${name}'`);
},
enableFn: (rec) => !!rec.data.leaf && rec.data.size !== null,
},
'-',
{
xtype: 'proxmoxButton',
text: gettext('Download Files'),
disabled: true,
parentXType: 'pbsDataStoreContent',
handler: 'openBackupFileDownloader',
enableFn: function(record) {
return !!record.data.leaf;
},
enableFn: (rec) => !!rec.data.leaf && rec.data.size !== null,
},
{
xtype: "proxmoxButton",
@@ -351,7 +444,7 @@ Ext.define('PBS.DataStoreContent', {
handler: 'openPxarBrowser',
parentXType: 'pbsDataStoreContent',
enableFn: function(record) {
return !!record.data.leaf && record.data.files.some(el => el.filename.endsWith('pxar.didx'));
return !!record.data.leaf && record.data.size !== null && record.data.files.some(el => el.filename.endsWith('pxar.didx'));
},
}
],


@@ -40,7 +40,7 @@ Ext.define('PBS.DataStorePanel', {
initComponent: function() {
let me = this;
me.title = `${gettext("Data Store")}: ${me.datastore}`;
me.title = `${gettext("Datastore")}: ${me.datastore}`;
me.callParent();
},
});


@@ -72,10 +72,15 @@ Ext.define('PBS.MainView', {
let datastore = PBS.Utils.getDataStoreFromPath(path);
obj = contentpanel.add({
xtype: 'pbsDataStorePanel',
nodename: 'localhost',
datastore,
});
} else {
obj = contentpanel.add({ xtype: path, border: false });
obj = contentpanel.add({
xtype: path,
nodename: 'localhost',
border: false
});
}
var treelist = me.lookupReference('navtree');
@@ -120,7 +125,7 @@ Ext.define('PBS.MainView', {
},
control: {
'button[reference=logoutButton]': {
'[reference=logoutButton]': {
click: 'logout'
}
},
@@ -128,7 +133,8 @@ Ext.define('PBS.MainView', {
init: function(view) {
var me = this;
me.lookupReference('usernameinfo').update({username:Proxmox.UserName});
PBS.data.RunningTasksStore.startUpdate();
me.lookupReference('usernameinfo').setText(Proxmox.UserName);
// show login on requestexception
// fixme: what about other errors
@@ -184,7 +190,7 @@ Ext.define('PBS.MainView', {
type: 'hbox',
align: 'middle'
},
margin: '2 5 2 5',
margin: '2 0 2 5',
height: 38,
items: [
{
@@ -192,16 +198,17 @@ Ext.define('PBS.MainView', {
prefix: '',
},
{
xtype: 'versioninfo'
},
{
flex: 1
padding: '0 0 0 5',
xtype: 'versioninfo',
},
{
padding: 5,
html: '<a href="https://bugzilla.proxmox.com" target="_blank">BETA</a>',
baseCls: 'x-plain',
},
{
flex: 1,
baseCls: 'x-plain',
reference: 'usernameinfo',
padding: '0 5',
tpl: Ext.String.format(gettext("You are logged in as {0}"), "'{username}'")
},
{
xtype: 'button',
@@ -213,11 +220,27 @@ Ext.define('PBS.MainView', {
margin: '0 5 0 0',
},
{
reference: 'logoutButton',
xtype: 'pbsTaskButton',
margin: '0 5 0 0',
},
{
xtype: 'button',
reference: 'usernameinfo',
style: {
// proxmox dark grey with light grey as border
backgroundColor: '#464d4d',
borderColor: '#ABBABA'
},
margin: '0 5 0 0',
iconCls: 'fa fa-user',
menu: [
{
reference: 'logoutButton',
iconCls: 'fa fa-sign-out',
text: gettext('Logout')
}
text: gettext('Logout'),
},
],
},
]
},
{


@@ -8,6 +8,8 @@ JSSRC= \
form/UserSelector.js \
form/RemoteSelector.js \
form/DataStoreSelector.js \
data/RunningTasksStore.js \
button/TaskButton.js \
config/UserView.js \
config/RemoteView.js \
config/ACLView.js \
@@ -19,6 +21,7 @@ JSSRC= \
window/ACLEdit.js \
window/DataStoreEdit.js \
window/CreateDirectory.js \
window/ZFSCreate.js \
window/FileBrowser.js \
window/BackupFileDownloader.js \
dashboard/DataStoreStatistics.js \
@@ -26,6 +29,7 @@ JSSRC= \
dashboard/RunningTasks.js \
dashboard/TaskSummary.js \
Utils.js \
ZFSList.js \
DirectoryList.js \
LoginView.js \
VersionInfo.js \
@@ -51,6 +55,10 @@ js/proxmox-backup-gui.js: js OnlineHelpInfo.js ${JSSRC}
cat OnlineHelpInfo.js ${JSSRC} >$@.tmp
mv $@.tmp $@
.PHONY: lint
lint: ${JSSRC}
eslint ${JSSRC}
.PHONY: clean
clean:
find . -name '*~' -exec rm {} ';'


@@ -69,12 +69,18 @@ Ext.define('PBS.store.NavigationStore', {
path: 'pbsDirectoryList',
leaf: true,
},
{
text: "ZFS",
iconCls: 'fa fa-th-large',
path: 'pbsZFSList',
leaf: true,
},
]
}
]
},
{
text: gettext('Data Store'),
text: gettext('Datastore'),
iconCls: 'fa fa-archive',
path: 'pbsDataStoreConfig',
expanded: true,


@@ -13,6 +13,45 @@ Ext.define('PBS.Utils', {
dataStorePrefix: 'DataStore-',
cryptmap: [
'none',
'mixed',
'sign-only',
'encrypt',
],
cryptText: [
Proxmox.Utils.noText,
gettext('Mixed'),
gettext('Signed'),
gettext('Encrypted'),
],
cryptIconCls: [
'',
'',
'certificate',
'lock',
],
calculateCryptMode: function(data) {
let mixed = data.mixed;
let encrypted = data.encrypt;
let signed = data['sign-only'];
let files = data.count;
if (mixed > 0) {
return PBS.Utils.cryptmap.indexOf('mixed');
} else if (files === encrypted) {
return PBS.Utils.cryptmap.indexOf('encrypt');
} else if (files === signed) {
return PBS.Utils.cryptmap.indexOf('sign-only');
} else if ((signed+encrypted) === 0) {
return PBS.Utils.cryptmap.indexOf('none');
} else {
return PBS.Utils.cryptmap.indexOf('mixed');
}
},
getDataStoreFromPath: function(path) {
return path.slice(PBS.Utils.dataStorePrefix.length);
},
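The new helper collapses a per-mode tally into an index of cryptmap, degrading any partial signing or encryption to 'mixed'. A few hypothetical tallies and their results:

    PBS.Utils.calculateCryptMode({ none: 0, mixed: 0, 'sign-only': 0, encrypt: 3, count: 3 }); // 3 ('encrypt')
    PBS.Utils.calculateCryptMode({ none: 3, mixed: 0, 'sign-only': 0, encrypt: 0, count: 3 }); // 0 ('none')
    PBS.Utils.calculateCryptMode({ none: 1, mixed: 0, 'sign-only': 0, encrypt: 2, count: 3 }); // 1 ('mixed')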
@@ -33,22 +72,18 @@ Ext.define('PBS.Utils', {
},
render_datastore_worker_id: function(id, what) {
const result = id.match(/^(\S+)_([^_\s]+)_([^_\s]+)$/);
if (result) {
let datastore = result[1], type = result[2], id = result[3];
return `Datastore ${datastore} - ${what} ${type}/${id}`;
}
return what;
},
render_datastore_time_worker_id: function(id, what) {
const res = id.match(/^(\S+)_([^_\s]+)_([^_\s]+)_([^_\s]+)$/);
const res = id.match(/^(\S+?)_(\S+?)_(\S+?)(_(.+))?$/);
if (res) {
let datastore = res[1], type = res[2], id = res[3];
let datetime = Ext.Date.parse(parseInt(res[4], 16), 'U');
if (res[4] !== undefined) {
let datetime = Ext.Date.parse(parseInt(res[5], 16), 'U');
let utctime = PBS.Utils.render_datetime_utc(datetime);
return `Datastore ${datastore} - ${what} ${type}/${id}/${utctime}`;
return `Datastore ${datastore} ${what} ${type}/${id}/${utctime}`;
} else {
return `Datastore ${datastore} ${what} ${type}/${id}`;
}
return what;
}
return `Datastore ${what} ${id}`;
},
constructor: function() {
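The relaxed pattern also accepts worker IDs without a trailing hex timestamp. Two hypothetical IDs and how the capture groups bind:

    // 'store1_vm_100_5F0D2E40' -> res[1]='store1', res[2]='vm', res[3]='100',
    //                             res[5]='5F0D2E40' (hex epoch, rendered as UTC time)
    // 'store1_vm_100'          -> res[5] === undefined, so the time-less branch applies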
@@ -62,11 +97,14 @@ Ext.define('PBS.Utils', {
prune: (type, id) => {
return PBS.Utils.render_datastore_worker_id(id, gettext('Prune'));
},
verify: (type, id) => {
return PBS.Utils.render_datastore_worker_id(id, gettext('Verify'));
},
backup: (type, id) => {
return PBS.Utils.render_datastore_worker_id(id, gettext('Backup'));
},
reader: (type, id) => {
return PBS.Utils.render_datastore_time_worker_id(id, gettext('Read objects'));
return PBS.Utils.render_datastore_worker_id(id, gettext('Read objects'));
},
});
}

www/ZFSList.js (new file, 136 lines)

@@ -0,0 +1,136 @@
Ext.define('PBS.admin.ZFSList', {
extend: 'Ext.grid.Panel',
xtype: 'pbsZFSList',
stateful: true,
stateId: 'grid-node-zfs',
controller: {
xclass: 'Ext.app.ViewController',
openCreateWindow: function() {
let me = this;
Ext.create('PBS.window.CreateZFS', {
nodename: me.nodename,
listeners: {
destroy: function() { me.reload(); },
}
}).show();
},
openDetailWindow: function() {
let me = this;
let view = me.getView();
let selection = view.getSelection();
if (!selection || selection.length < 1) return;
let rec = selection[0];
let zpool = rec.get('name');
Ext.create('Proxmox.window.ZFSDetail', {
zpool,
nodename: view.nodename,
}).show();
},
reload: function() {
let me = this;
let view = me.getView();
let store = view.getStore();
store.load();
store.sort();
},
init: function(view) {
let me = this;
if (!view.nodename) {
throw "no nodename given";
}
let url = `/api2/json/nodes/${view.nodename}/disks/zfs`;
view.getStore().getProxy().setUrl(url);
Proxmox.Utils.monStoreErrors(view, view.getStore(), true);
me.reload();
},
},
columns: [
{
text: gettext('Name'),
dataIndex: 'name',
flex: 1
},
{
header: gettext('Size'),
renderer: Proxmox.Utils.format_size,
dataIndex: 'size'
},
{
header: gettext('Free'),
renderer: Proxmox.Utils.format_size,
dataIndex: 'free'
},
{
header: gettext('Allocated'),
renderer: Proxmox.Utils.format_size,
dataIndex: 'alloc'
},
{
header: gettext('Fragmentation'),
renderer: function(value) {
return value.toString() + '%';
},
dataIndex: 'frag'
},
{
header: gettext('Health'),
renderer: Proxmox.Utils.render_zfs_health,
dataIndex: 'health'
},
{
header: gettext('Deduplication'),
hidden: true,
renderer: function(value) {
return value.toFixed(2).toString() + 'x';
},
dataIndex: 'dedup'
}
],
rootVisible: false,
useArrows: true,
tbar: [
{
text: gettext('Reload'),
iconCls: 'fa fa-refresh',
handler: 'reload',
},
{
text: gettext('Create') + ': ZFS',
handler: 'openCreateWindow',
},
{
text: gettext('Detail'),
xtype: 'proxmoxButton',
disabled: true,
handler: 'openDetailWindow',
}
],
listeners: {
itemdblclick: 'openDetailWindow',
},
store: {
fields: ['name', 'size', 'free', 'alloc', 'dedup', 'frag', 'health'],
proxy: {
type: 'proxmox',
},
sorters: 'name'
},
});
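The controller throws in init when no nodename is given; the MainView change above supplies it when adding content panels. A minimal usage sketch, mirroring that change:

    contentpanel.add({ xtype: 'pbsZFSList', nodename: 'localhost', border: false });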

www/button/TaskButton.js (new file, 92 lines)

@@ -0,0 +1,92 @@
Ext.define('PBS.TaskButton', {
extend: 'Ext.button.Button',
alias: 'widget.pbsTaskButton',
config: {
badgeText: '0',
badgeCls: '',
},
iconCls: 'fa fa-list',
userCls: 'pmx-has-badge',
text: gettext('Tasks'),
setText: function(value) {
let me = this;
me.realText = value;
let badgeText = me.getBadgeText();
let badgeCls = me.getBadgeCls();
let text = `${value} <span class="pmx-button-badge ${badgeCls}">${badgeText}</span>`;
return me.callParent([text]);
},
getText: function() {
let me = this;
return me.realText;
},
setBadgeText: function(value) {
let me = this;
me.badgeText = value.toString();
return me.setText(me.getText());
},
setBadgeCls: function(value) {
let me = this;
let res = me.callParent([value]);
let badgeText = me.getBadgeText();
me.setBadgeText(badgeText);
return res;
},
handler: function() {
let me = this;
if (me.grid.isVisible()) {
me.grid.setVisible(false);
} else {
me.grid.showBy(me, 'tr-br');
}
},
initComponent: function() {
let me = this;
me.grid = Ext.create({
xtype: 'pbsRunningTasks',
title: '',
hideHeaders: false,
floating: true,
width: 600,
bbar: [
'->',
{
xtype: 'button',
text: gettext('Show All Tasks'),
handler: function() {
var mainview = me.up('mainview');
mainview.getController().redirectTo('pbsServerAdministration:tasks');
me.grid.hide();
},
},
],
listeners: {
'taskopened': function() {
me.grid.hide();
},
},
});
me.callParent();
me.mon(me.grid.getStore().rstore, 'load', function(store, records, success) {
if (!success) return;
let count = records.length;
let text = count > 99 ? '99+' : count.toString();
let cls = count > 0 ? 'active': '';
me.setBadgeText(text);
me.setBadgeCls(cls);
});
},
});
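The overridden setText keeps the plain label in realText and renders the badge into the button markup, so badge updates never clobber the label. With hypothetical values:

    // badgeText '2' and badgeCls 'active' render the inner HTML roughly as:
    //   Tasks <span class="pmx-button-badge active">2</span>
    // the rstore load listener above caps the displayed count at '99+'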


@@ -25,7 +25,7 @@ Ext.define('PBS.DataStoreConfig', {
extend: 'Ext.grid.GridPanel',
alias: 'widget.pbsDataStoreConfig',
title: gettext('Data Store Configuration'),
title: gettext('Datastore Configuration'),
controller: {
xclass: 'Ext.app.ViewController',
@@ -58,6 +58,27 @@ Ext.define('PBS.DataStoreConfig', {
}).show();
},
onVerify: function() {
var view = this.getView();
let rec = view.selModel.getSelection()[0];
if (!(rec && rec.data)) return;
let data = rec.data;
Proxmox.Utils.API2Request({
url: `/admin/datastore/${data.name}/verify`,
method: 'POST',
failure: function(response) {
Ext.Msg.alert(gettext('Error'), response.htmlStatus);
},
success: function(response, options) {
Ext.create('Proxmox.window.TaskViewer', {
upid: response.result.data,
}).show();
},
});
},
garbageCollect: function() {
let me = this;
let view = me.getView();
@@ -115,6 +136,12 @@ Ext.define('PBS.DataStoreConfig', {
},
// remove_btn
'-',
{
xtype: 'proxmoxButton',
text: gettext('Verify'),
disabled: true,
handler: 'onVerify',
},
{
xtype: 'proxmoxButton',
text: gettext('Start GC'),


@@ -190,3 +190,21 @@ p.logs {
visibility: hidden;
width: 5px;
}
.pmx-has-badge .x-btn-inner {
padding: 0 0 0 5px;
min-width: 24px;
}
.pmx-button-badge {
display: inline-block;
font-weight: bold;
border-radius: 4px;
padding: 2px 3px;
min-width: 24px;
line-height: 1em;
}
.pmx-button-badge.active {
background-color: #464d4d;
}


@@ -2,7 +2,19 @@ Ext.define('pbs-datastore-statistics', {
extend: 'Ext.data.Model',
fields: [
'store', 'total', 'used', 'avail', 'estimated-full-date', 'history',
'store', 'total', 'used', 'avail', 'estimated-full-date',
{
name: 'history',
convert: function(values) {
let last = null;
return values.map(v => {
if (v !== undefined && v !== null) {
last = v;
}
return last;
});
}
},
{
name: 'usage',
calculate: function(data) {
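The convert function forward-fills gaps so the usage sparkline does not break on missing RRD samples. A hypothetical series:

    // input:           [0.5, null, null, 0.7, undefined, 0.9]
    // after convert(): [0.5, 0.5, 0.5, 0.7, 0.7, 0.9]
    // leading gaps stay null until the first real datapoint arrives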


@@ -56,10 +56,16 @@ Ext.define('PBS.LongestTasks', {
type: 'diff',
autoDestroy: true,
autoDestroyRstore: true,
sorters: {
sorters: [
{
property: 'duration',
direction: 'DESC',
},
{
property: 'upid',
direction: 'ASC',
},
],
rstore: {
storeid: 'proxmox-tasks-dash',
type: 'store',


@@ -18,6 +18,8 @@ Ext.define('PBS.RunningTasks', {
upid: record.data.upid,
endtime: record.data.endtime,
}).show();
view.fireEvent('taskopened', view, record.data.upid);
},
openTaskItemDblClick: function(grid, record) {
@@ -54,20 +56,8 @@ Ext.define('PBS.RunningTasks', {
store: {
type: 'diff',
autoDestroy: true,
autoDestroyRstore: true,
sorters: 'starttime',
rstore: {
type: 'update',
autoStart: true,
interval: 3000,
storeid: 'pbs-running-tasks-dash',
model: 'proxmox-tasks',
proxy: {
type: 'proxmox',
// maybe separate api call?
url: '/api2/json/nodes/localhost/tasks?running=1'
},
},
rstore: PBS.data.RunningTasksStore,
},
columns: [


@@ -9,12 +9,27 @@ Ext.define('PBS.TaskSummary', {
render_count: function(value, md, record, rowindex, colindex) {
let cls = 'question';
let color = 'faded';
switch (colindex) {
case 1: cls = "times-circle critical"; break;
case 2: cls = "exclamation-circle warning"; break;
case 3: cls = "check-circle good"; break;
case 1:
cls = "times-circle";
color = "critical";
break;
case 2:
cls = "exclamation-circle";
color = "warning";
break;
case 3:
cls = "check-circle";
color = "good";
break;
default: break;
}
if (value < 1) {
color = "faded";
}
cls += " " + color;
return `<i class="fa fa-${cls}"></i> ${value}`;
},
},
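Splitting the icon class from the color class lets zero counts render faded in every column. Two hypothetical calls:

    // render_count(3, md, rec, row, 1) -> '<i class="fa fa-times-circle critical"></i> 3'
    // render_count(0, md, rec, row, 3) -> '<i class="fa fa-check-circle faded"></i> 0'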


@@ -0,0 +1,21 @@
Ext.define('PBS.data.RunningTasksStore', {
extend: 'Proxmox.data.UpdateStore',
singleton: true,
constructor: function(config) {
let me = this;
config = config || {};
Ext.apply(config, {
interval: 3000,
storeid: 'pbs-running-tasks-dash',
model: 'proxmox-tasks',
proxy: {
type: 'proxmox',
// maybe separate api call?
url: '/api2/json/nodes/localhost/tasks?running=1&limit=100',
},
});
me.callParent([config]);
},
});
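Since the store is a singleton, one poller now feeds every consumer: the running-tasks grid attaches it as its rstore (see the RunningTasks.js diff above), and the MainView starts the updates once. Both sides, as they appear in the diffs:

    // consumer side, inside a diff store:
    //   rstore: PBS.data.RunningTasksStore,
    // producer side, in MainView's init:
    //   PBS.data.RunningTasksStore.startUpdate();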


@@ -16,7 +16,7 @@ Ext.define('PBS.form.DataStoreSelector', {
listConfig: {
columns: [
{
header: gettext('DataStore'),
header: gettext('Datastore'),
sortable: true,
dataIndex: 'store',
renderer: Ext.String.htmlEncode,


@@ -46,8 +46,9 @@ Ext.define('PBS.window.BackupFileDownloader', {
let me = this;
let combo = me.lookup('file');
let rec = combo.getStore().findRecord('filename', value, 0, false, true, true);
let canDownload = !rec.data.encrypted;
let canDownload = rec.data['crypt-mode'] !== 'encrypt';
me.lookup('encryptedHint').setVisible(!canDownload);
me.lookup('signedHint').setVisible(rec.data['crypt-mode'] === 'sign-only');
me.lookup('downloadBtn').setDisabled(!canDownload);
},
@@ -88,7 +89,7 @@ Ext.define('PBS.window.BackupFileDownloader', {
emptyText: gettext('No file selected'),
fieldLabel: gettext('File'),
store: {
fields: ['filename', 'size', 'encrypted',],
fields: ['filename', 'size', 'crypt-mode',],
idProperty: ['filename'],
},
listConfig: {
@@ -107,12 +108,25 @@ Ext.define('PBS.window.BackupFileDownloader', {
},
{
text: gettext('Encrypted'),
dataIndex: 'encrypted',
renderer: Proxmox.Utils.format_boolean,
dataIndex: 'crypt-mode',
renderer: function(value) {
let mode = -1;
if (value !== undefined) {
mode = PBS.Utils.cryptmap.indexOf(value);
}
return PBS.Utils.cryptText[mode] || Proxmox.Utils.unknownText;
}
},
],
},
},
{
xtype: 'displayfield',
userCls: 'pmx-hint',
reference: 'signedHint',
hidden: true,
value: gettext('Note: Signatures of signed files will not be verified on the server. Please use the client to do this.'),
},
{
xtype: 'displayfield',
userCls: 'pmx-hint',


@@ -39,7 +39,7 @@ Ext.define('PBS.window.CreateDirectory', {
{
xtype: 'proxmoxcheckbox',
name: 'add-datastore',
fieldLabel: gettext('Add Data Store'),
fieldLabel: gettext('Add as Datastore'),
value: '1',
},
],


@@ -21,8 +21,7 @@ Ext.define('PBS.DataStoreEdit', {
return {};
},
items: [
{
items: {
xtype: 'tabpanel',
bodyPadding: 10,
items: [
@@ -50,7 +49,6 @@ Ext.define('PBS.DataStoreEdit', {
emptyText: gettext('An absolute path'),
},
],
column2: [
{
xtype: 'proxmoxtextfield',
@@ -69,7 +67,6 @@ Ext.define('PBS.DataStoreEdit', {
},
},
],
columnB: [
{
xtype: 'textfield',
@@ -113,7 +110,6 @@ Ext.define('PBS.DataStoreEdit', {
allowBlank: true,
},
],
column2: [
{
xtype: 'proxmoxintegerfield',
@@ -146,9 +142,7 @@ Ext.define('PBS.DataStoreEdit', {
allowBlank: true,
},
],
}
]
}
},
],
},
});


@@ -52,7 +52,7 @@ Ext.define("PBS.window.FileBrowser", {
extend: "Ext.window.Window",
width: 800,
height: 400,
height: 600,
modal: true,
@@ -142,8 +142,13 @@ Ext.define("PBS.window.FileBrowser", {
'backup-type': view['backup-type'],
'backup-time': view['backup-time'],
});
store.load();
store.getRoot().expand();
store.load(() => {
let root = store.getRoot();
root.expand(); // always expand invisible root node
if (root.childNodes.length === 1) {
root.firstChild.expand();
}
});
},
control: {

www/window/ZFSCreate.js (new file, 95 lines)

@@ -0,0 +1,95 @@
Ext.define('PBS.window.CreateZFS', {
extend: 'Proxmox.window.Edit',
xtype: 'pbsCreateZFS',
subject: 'ZFS',
showProgress: true,
onlineHelp: 'chapter_zfs',
width: 800,
url: '/nodes/localhost/disks/zfs',
method: 'POST',
items: [
{
xtype: 'inputpanel',
onGetValues: function(values) {
return values;
},
column1: [
{
xtype: 'proxmoxtextfield',
name: 'name',
fieldLabel: gettext('Name'),
minLength: 3,
allowBlank: false,
},
{
xtype: 'proxmoxcheckbox',
name: 'add-datastore',
fieldLabel: gettext('Add as Datastore'),
value: '1'
}
],
column2: [
{
xtype: 'proxmoxKVComboBox',
fieldLabel: gettext('RAID Level'),
name: 'raidlevel',
value: 'single',
comboItems: [
['single', gettext('Single Disk')],
['mirror', 'Mirror'],
['raid10', 'RAID10'],
['raidz', 'RAIDZ'],
['raidz2', 'RAIDZ2'],
['raidz3', 'RAIDZ3']
]
},
{
xtype: 'proxmoxKVComboBox',
fieldLabel: gettext('Compression'),
name: 'compression',
value: 'on',
comboItems: [
['on', 'on'],
['off', 'off'],
['gzip', 'gzip'],
['lz4', 'lz4'],
['lzjb', 'lzjb'],
['zle', 'zle']
]
},
{
xtype: 'proxmoxintegerfield',
fieldLabel: gettext('ashift'),
minValue: 9,
maxValue: 16,
value: '12',
name: 'ashift'
}
],
columnB: [
{
xtype: 'pmxMultiDiskSelector',
name: 'devices',
nodename: 'localhost',
typeParameter: 'usage-type',
valueField: 'name',
height: 200,
emptyText: gettext('No Disks unused'),
}
]
},
{
xtype: 'displayfield',
padding: '5 0 0 0',
userCls: 'pmx-hint',
value: 'Note: ZFS is not compatible with disks backed by a hardware ' +
'RAID controller. For details see ' +
'<a target="_blank" href="' + Proxmox.Utils.get_help_link('chapter_zfs') + '">the reference documentation</a>.',
},
],
});