Compare commits

...

74 Commits

Author SHA1 Message Date
27b8a3f671 bump version to 1.0.4-1 2020-11-25 08:03:11 +01:00
abf9b6da42 docs: fix renamed commands 2020-11-25 08:03:11 +01:00
0c9209b04c cli: rename command "upload-log" to "snapshot upload-log" 2020-11-25 07:57:39 +01:00
edebd52374 cli: rename command "forget" to "snapshot forget" 2020-11-25 07:57:39 +01:00
61205f00fb cli: rename command "files" to "snapshot files" 2020-11-25 07:57:39 +01:00
a303e00289 fingerprint: add new() method 2020-11-25 07:57:39 +01:00
af9f72e9d8 fingerprint: add bytes() accessor
needed for libproxmox-backup-qemu0

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-25 06:34:34 +01:00
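
A minimal sketch of the resulting API, matching the crypt_config.rs hunk further down; the field layout and both methods are taken from that diff:

    pub struct Fingerprint {
        bytes: [u8; 32],
    }

    impl Fingerprint {
        pub fn new(bytes: [u8; 32]) -> Self {
            Self { bytes }
        }

        // read-only access, needed by external consumers such as
        // libproxmox-backup-qemu0
        pub fn bytes(&self) -> &[u8; 32] {
            &self.bytes
        }
    }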
5176346b30 ui: fix broken gettext use
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-25 00:21:17 +01:00
731eeef25b cli: use new alias feature for "snapshots"
Now maps to "snapshot list".
2020-11-24 13:26:43 +01:00
a65e3e4bc0 client: add 'snapshot notes show/update' command
to show and update snapshot notes from the cli

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-24 11:44:19 +01:00
027eb2bbe6 bump version to 1.0.3-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-24 08:56:18 +01:00
6982a54701 gui: add snapshot/file fingerprint tooltip
display short key ID, like backend's Display trait.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
035c40e638 list_snapshots: return manifest fingerprint
for display in clients.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
79c535955d refactor BackupInfo -> SnapshotListItem helper
before adding more fields to the tuple, let's just create the struct
inside the match arms to improve readability.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
8b7f8d3f3d expose previous backup time in backup env
and use this information to add more information to client backup log
and guide the download manifest decision.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
866c859a1e bump version to 1.0.2-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-24 08:33:20 +01:00
23e4e90540 verification: fix message in notification mail
the errors Vec can contain failed groups as well (e.g., if a group has
no or an invalid owner).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:33:20 +01:00
a4fa3fc241 verification job: log failed dirs
else users have to manually search through a potentially very long task
log to find the entries that are different. This is the same summary
printed at the end of a manual verify task.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:33:20 +01:00
81d10c3b37 cleanup: remove dead code 2020-11-24 08:03:00 +01:00
f1e2904150 paperkey: refactor common code
from formatting functions to main function, and pass along the key data
lines instead of the full string.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:57:21 +01:00
23f9503a31 client: check fingerprint after downloading manifest
this is stricter than the check that happened on manifest load, as it
also fails if the manifest is signed but we don't have a key available.

add some additional output at the start of a backup to indicate whether
a previous manifest is available to base the backup on.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:55:12 +01:00
a0ef68b93c manifest: check fingerprint when loading with key
otherwise loading will run into the signature mismatch which is
technically true, but not the complete picture in this case.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:49:51 +01:00
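
Schematically, the check compares the fingerprint stored in the manifest against the one derived from the supplied key. A hedged sketch (the error text and Option handling are assumptions; both helpers appear in the diffs below):

    // manifest.fingerprint() reads the stored value (absent on older
    // manifests); crypt_config.fingerprint() derives it from the key
    if let Some(stored) = manifest.fingerprint()? {
        if stored != crypt_config.fingerprint() {
            bail!("wrong encryption key: fingerprint mismatch");
        }
    }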
6b127e6ea0 fix #3139: add key fingerprint to manifest
if the manifest is signed/the contained archives/blobs are encrypted.
stored in 'unprotected' area, since there is already a strong binding
between key and manifest via the signature, and this avoids breaking
backwards compatibility for a simple usability improvement.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:45:11 +01:00
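
The 'unprotected' area is the part of the manifest that is not covered by the signature, so older clients can still parse such manifests. A sketch of the idea; the JSON field name here is an assumption:

    // unsigned area: adding a field here cannot break signature
    // verification on existing clients
    manifest.unprotected["key-fingerprint"] = serde_json::to_value(&fingerprint)?;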
5e17dbf2bb cli: cleanup 'key show' - use format_and_print_result_full
We now expose all key derivation functions on the cli, so users can
choose between scrypt or pbkdf2.
2020-11-24 07:32:34 +01:00
dfb04575ad client: add 'key show' command
for (pretty-)printing a keyfile.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:15:29 +01:00
6f2626ae19 client: print key fingerprint and master key
for operations where it makes sense.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:11:26 +01:00
37e60ddcde key: add fingerprint to key config
and set/generate it on
- key creation
- key passphrase change
- key decryption if not already set
- key encryption with master key

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:03:46 +01:00
05cdc05347 crypt config: add fingerprint mechanism
by computing the ID digest of a hash of a static string.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:03:16 +01:00
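
Concretely, per the crypt_config.rs hunks below: the SHA-256 of a static string is embedded as a constant, and the key-dependent digest of that constant becomes the key's fingerprint:

    // openssl::sha::sha256(b"Proxmox Backup Encryption Key Fingerprint"),
    // precomputed and embedded as a constant (see the diff below)
    const FINGERPRINT_INPUT: [u8; 32] = [ /* 32 bytes */ ];

    impl CryptConfig {
        pub fn fingerprint(&self) -> Fingerprint {
            // compute_digest() is keyed, so different keys yield different
            // fingerprints for the same static input
            Fingerprint::new(self.compute_digest(&FINGERPRINT_INPUT))
        }
    }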
6364115b4b OnlineHelpInfo.js problems
Anybody know why I always get the following diff:
2020-11-23 12:57:41 +01:00
2133cd9103 update debian/control 2020-11-23 12:13:58 +01:00
01f84fcce1 ui: datastore content: use our keep field for group pruning
sets some defaults and provides the clear trigger, so less code and
slightly nicer UX.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-21 19:52:03 +01:00
08b3823025 bump dependency on proxmox to 0.7.1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-20 17:38:34 +01:00
968a0ab261 fix systemd-encoded upid strings in http client
since we systemd-encode parts of the upid string, and those can contain
characters that are invalid in urls (e.g. '\'), we have to percent encode
those

add a 'percent_encode_component' helper, so that we can maybe change
the AsciiSet for all uses at the same time

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-19 11:01:19 +01:00
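
One possible shape for the helper, using the percent-encoding crate that is already a dependency; the exact AsciiSet is an assumption:

    use percent_encoding::{utf8_percent_encode, AsciiSet, NON_ALPHANUMERIC};

    // assumption: encode everything except alphanumerics; keeping the set in
    // one constant lets all callers be changed at the same time later
    const COMPONENT_SET: &AsciiSet = NON_ALPHANUMERIC;

    pub fn percent_encode_component(comp: &str) -> String {
        utf8_percent_encode(comp, COMPONENT_SET).to_string()
    }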
21b552848a prune sim: make numberfields more similar to PBS's
by creating a new class that adds a clear trigger and also uses the
clear-trigger image. Code was taken from the one in PBS's prune window,
but we have default values here, so a bit of adapting was necessary. For
example, we don't want to reset to the original value (which might have
been one of the defaults) when clearing, but always to 'null'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-19 09:47:51 +01:00
fd19256470 gc: treat .bad files like regular chunks
Simplify the phase 2 code by treating .bad files just like regular
chunks, with the exception of stat logging.

To facilitate, we need to touch .bad files in phase 1. We only do this
under the condition that 1) the original chunk is missing (as before),
and 2) the original chunk is still referenced somewhere (since the code
lives in the error handler for a failed chunk touch, it only gets called
for chunks we expect to be there, i.e. ones that are referenced).

Untouched they will then be cleaned up after 24 hours (or after the last
longer-running task finishes).

Reason 2) is also a fix for .bad files not being cleaned up at all if
the original is no longer referenced anywhere (e.g. a user deleting all
snapshots after seeing some corrupt chunks appear).

cond_touch_path is introduced to touch arbitrary paths in the chunk
store with the same logic as touching chunks.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-18 14:04:49 +01:00
1ed022576c api: include store in invalid owner errors
since a group might exist in plenty of stores

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:11:24 +01:00
f6aa7b38bf drop now unused BackupInfo::list_backups
all global backup listing now happens via BackupGroup

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:11:21 +01:00
fdfcb74d67 api: filter snapshot counts
unprivileged users should only see the counts related to their part of
the datastore.

while we're at it, switch to a list groups, filter groups, count
snapshots approach (like list_snapshots) to speedup calls to this
endpoint when many unprivileged users share a datastore.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:05:50 +01:00
98afc7b152 api: make expensive parts of datastore status opt-in
used in the PBS GUI, but also for PVE usage queries, which don't need all
the extra expensive information.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:05:47 +01:00
0d08fceeb9 improve group/snapshot listing
by listing groups first, then filtering, then listing group snapshots.

this cuts down the number of openat/getdirents calls for users that just
have a partial view of the datastore.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 10:37:04 +01:00
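
The shape of the change as a hedged sketch; the ownership check is a hypothetical stand-in, while the two listing helpers appear in the datastore.rs diff below:

    // list groups -> filter by ownership -> only then list each group's
    // snapshots, so openat/getdirents never touches groups the user
    // cannot see anyway
    let snapshots: Vec<BackupInfo> = BackupInfo::list_backup_groups(&base_path)?
        .into_iter()
        .filter(|group| user_may_read(&auth_id, group)) // hypothetical check
        .flat_map(|group| group.list_backups(&base_path).unwrap_or_default())
        .collect();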
3c945d73c2 client/http_client: add put method
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 16:59:14 +01:00
58fcbf5ab7 client: expose all-file-systems option
Useful to avoid the need for a long (and possibly changing) list of include-dev
options in certain situations, e.g. nested ZFS file systems. The option is
already implemented and seems to work as expected. The checks for virtual
filesystems are not affected by this option.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-16 16:59:14 +01:00
3a3f31c947 ui: datastores: hide "no datastore" box by default
avoids it showing during the store load: at that point we do not yet know
whether there are any datastores, and a loading mask is already shown.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-16 16:59:14 +01:00
8fc63287df ui: improve comment behaviour for datastore Summary
when we could not load the config (e.g., due to missing permissions),
show the comment from the global datastore list

also show a messagebox for a load error instead of setting
the text of the comment box

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 10:39:34 +01:00
172473e4de ui: DataStoreList: show message when there are no datastores
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 10:39:34 +01:00
76f549debb ui: DataStoreList: remove datastores also from hash
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 10:39:34 +01:00
c9097ff801 pxar: avoid including archive root's exclude patterns in .pxarexclude-cli
The patterns from the archive root's .pxarexclude file are already present in
self.patterns when encode_pxarexclude_cli is called. Pass along the number of
CLI patterns and slice accordingly.

Suggested-By: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 13:05:09 +01:00
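
The fix boils down to slicing; num_cli_patterns is a placeholder name for the count that is passed along:

    // self.patterns holds the CLI patterns first, followed by the ones read
    // from the archive root's .pxarexclude; only the CLI part belongs in the
    // generated .pxarexclude-cli file
    let cli_patterns = &self.patterns[..num_cli_patterns];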
fb01fd3af6 visibility cleanups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:53:50 +01:00
fa4bcbcad0 pxar: only generate .pxarexclude-cli if there were CLI parameters
previously a .pxarexclude entry in the root of the archive caused the file to
be generated as well, because the patterns are read before calling
generate_directory_file_list and within the function it wasn't possible to
distinguish between a pattern coming from the CLI and a pattern coming from
archive/root/.pxarexclude

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:18:08 +01:00
189cdb7427 pxar: include .pxarexclude files in the archive
The documentation states:
.pxarexclude files are treated as regular files and will be included in the
backup archive.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:18:06 +01:00
874bd5454d pxar: fix anchored exclusion at archive root
There is no leading slash in an entry's full_path, causing an anchored
exclude at the root level to fail, e.g. having "/name" as the content of the
file archive/root/.pxarexclude didn't match the file archive/root/name

Fix this by prepending a leading slash before matching.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:18:04 +01:00
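
A sketch of the fix, assuming a pathpatterns-style match list; the surrounding names are approximate:

    // an entry's full_path has no leading slash, so anchored patterns like
    // "/name" never matched; prepend one before testing
    let path = format!("/{}", full_path.to_string_lossy());
    if match_list.matches(path.as_bytes(), Some(file_mode)) == Some(MatchType::Exclude) {
        // skip this entry
    }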
b649887e9a remove unused function
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:15:15 +01:00
8c62c15f56 followup: whitespace cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 11:02:45 +01:00
51ac17b56e api: apt/versions: fix running_kernel string for unknown package case
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-11-12 11:02:20 +01:00
fc5a012068 manager: versions: non-verbose should actually print server pkg info
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 10:28:03 +01:00
5e293f1315 apt: use typed response for get_versions
...and cleanup get_versions for manager CLI.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-12 10:15:32 +01:00
87367decf2 ui: tell ESLint to be strict in check target
the .lint-incremental target, which is implicitly used by the install
target, is still more forgiving to allow faster "change, build, test"
iteration when developing.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 09:54:39 +01:00
f792220dd4 d/control: update for new pin-project dependency
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 09:54:39 +01:00
97030c9407 cleanup clippy leftovers
this used to contain a pointer cast, now it doesn't

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 09:43:38 +01:00
5d1d0f5d6c use pin-project to remove more unsafe blocks
we already have it in our dependency tree, so use it

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 09:43:38 +01:00
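
For context, this is the pattern pin-project replaces hand-written unsafe pin projections with; an illustrative sketch, not code from this repository:

    use pin_project::pin_project;

    #[pin_project]
    struct Wrapper<S> {
        #[pin]
        inner: S,     // structurally pinned; project() yields Pin<&mut S>
        counter: u64, // plain field; project() yields &mut u64
    }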
294466ee61 manager: versions: unify printing
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
c100fe9108 add versions command to proxmox-backup-manager
Add the versions command to proxmox-backup-manager with a similar output
to pveversion [-v]. It prints the packages line by line with only the
package name, followed by the version and, for proxmox-backup and
proxmox-backup-server, some additional information (running kernel,
running version).

In addition it supports the optional output-format parameter which can
be used to print the complete data in either json, json-pretty or text
format. If output-format is specified, the --verbose parameter is
ignored and the detailed list of packages is printed.

With the addition of the versions command, the report is extended as
well.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-11-11 18:30:33 +01:00
e754da3ac2 api: versions: add version also in server package unknown case
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
bc1e52bc38 api: versions: rust fmt cleanups
line length limit is 100

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
6f0073bbb5 api: apt update info: do not serialize extra info if none
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
2decf85d6e add extra_info field to APTUpdateInfo
Add an optional string field to APTUpdateInfo which can be used for
extra information.

This is used for passing running kernel and running version information
in the versions API call together with proxmox-backup and
proxmox-backup-server.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-11-11 16:39:11 +01:00
1d8f849457 api2/node/syslog: use 'real_service_name' here also
for now this only does the 'postfix' -> 'postfix@-' conversion. This
fixes the issue that we only showed the 'postfix' service syslog (which
is rather empty in a default setup) instead of the instance one.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 16:36:42 +01:00
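
Given the commit message, the helper presumably reduces to a single special case; a sketch, with the function made pub in the services.rs hunk below:

    // since postfix 3.1.0-3.1 the 'postfix' unit only manages template
    // subinstances; the default instance is 'postfix@-'
    pub fn real_service_name(service: &str) -> &str {
        if service == "postfix" { "postfix@-" } else { service }
    }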
beb07279b6 log source of encryption key
This patch prints the source of the encryption key when running
operations with proxmox-backup-client.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-11-11 16:35:20 +01:00
8c6854c8fd inform user when using default encryption key
Currently if you generate a default encryption key:
`proxmox-backup-client key create --kdf none`

all backup operations which don't explicitly disable encryption will be
encrypted with this key.

I found it quite surprising that my backups were all encrypted without
me explicitly specifying either a key or an encryption mode.

This patch informs the user when the default key is used (and no
crypt-mode is provided explicitly)

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-11-11 16:35:20 +01:00
57f472d9bb report: use '$' instead of '#' for showing commands
since some files can contain the '#' character for comments (e.g.,
/etc/hosts)

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-11-11 16:19:37 +01:00
94ffca10a2 report: fix grammar error
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-11-11 16:19:33 +01:00
0a274ab0a0 ui: UserView: render name as 'Firstname Lastname'
instead of only the firstname

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 14:09:40 +01:00
c0026563b0 make user properties deletable
by using our usual pattern for the update call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 14:09:40 +01:00
e411924c7c rest: check for disabled token (user)
when authenticating a token, and not just when authenticating a
user/ticket.

Reported-By: Dominik Jäger <d.jaeger@proxmox.com>

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-11 12:21:29 +01:00
56 changed files with 1508 additions and 817 deletions

Cargo.toml

@@ -1,6 +1,6 @@
 [package]
 name = "proxmox-backup"
-version = "1.0.1"
+version = "1.0.4"
 authors = [
     "Dietmar Maurer <dietmar@proxmox.com>",
     "Dominik Csapak <d.csapak@proxmox.com>",
@@ -46,8 +46,9 @@ pam = "0.7"
 pam-sys = "0.5"
 percent-encoding = "2.1"
 pin-utils = "0.1.0"
+pin-project = "0.4"
 pathpatterns = "0.1.2"
-proxmox = { version = "0.7.0", features = [ "sortable-macro", "api-macro", "websocket" ] }
+proxmox = { version = "0.7.2", features = [ "sortable-macro", "api-macro", "websocket" ] }
 #proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
 #proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
 proxmox-fuse = "0.1.0"

debian/changelog

@@ -1,3 +1,47 @@
+rust-proxmox-backup (1.0.4-1) unstable; urgency=medium
+
+  * fingerprint: add bytes() accessor
+
+  * ui: fix broken gettext use
+
+  * cli: move more commands into "snapshot" sub-command
+
+ -- Proxmox Support Team <support@proxmox.com>  Wed, 25 Nov 2020 06:37:41 +0100
+
+rust-proxmox-backup (1.0.3-1) unstable; urgency=medium
+
+  * client: inform user when automatically using the default encryption key
+
+  * ui: UserView: render name as 'Firstname Lastname'
+
+  * proxmox-backup-manager: add versions command
+
+  * pxar: fix anchored exclusion at archive root
+
+  * pxar: include .pxarexclude files in the archive
+
+  * client: expose all-file-systems option
+
+  * api: make expensive parts of datastore status opt-in
+
+  * api: include datastore ID in invalid owner errors
+
+  * garbage collection: treat .bad files like regular chunks to ensure they
+    are removed if not referenced anymore
+
+  * client: fix issues with encoded UPID strings
+
+  * encryption: add fingerprint to key config
+
+  * client: add 'key show' command
+
+  * fix #3139: add key fingerprint to backup snapshot manifest and check it
+    when loading with a key
+
+  * ui: add snapshot/file fingerprint tooltip
+
+ -- Proxmox Support Team <support@proxmox.com>  Tue, 24 Nov 2020 08:55:47 +0100
+
 rust-proxmox-backup (1.0.1-1) unstable; urgency=medium

   * ui: datastore summary: drop 'removed bytes' display

debian/control

@@ -33,11 +33,12 @@ Build-Depends: debhelper (>= 11),
  librust-pam-sys-0.5+default-dev,
  librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
  librust-percent-encoding-2+default-dev (>= 2.1-~~),
+ librust-pin-project-0.4+default-dev,
  librust-pin-utils-0.1+default-dev,
- librust-proxmox-0.7+api-macro-dev,
- librust-proxmox-0.7+default-dev,
- librust-proxmox-0.7+sortable-macro-dev,
- librust-proxmox-0.7+websocket-dev,
+ librust-proxmox-0.7+api-macro-dev (>= 0.7.1-~~),
+ librust-proxmox-0.7+default-dev (>= 0.7.1-~~),
+ librust-proxmox-0.7+sortable-macro-dev (>= 0.7.1-~~),
+ librust-proxmox-0.7+websocket-dev (>= 0.7.1-~~),
  librust-proxmox-fuse-0.1+default-dev,
  librust-pxar-0.6+default-dev (>= 0.6.1-~~),
  librust-pxar-0.6+futures-io-dev (>= 0.6.1-~~),
@@ -78,7 +79,7 @@ Build-Depends: debhelper (>= 11),
  uuid-dev,
  debhelper (>= 12~),
  bash-completion,
- pve-eslint,
+ pve-eslint (>= 7.12.1-1),
  python3-docutils,
  python3-pygments,
  rsync,

debian/debcargo.toml

@@ -14,7 +14,7 @@ section = "admin"
 build_depends = [
     "debhelper (>= 12~)",
     "bash-completion",
-    "pve-eslint",
+    "pve-eslint (>= 7.12.1-1)",
     "python3-docutils",
     "python3-pygments",
     "rsync",

docs/Makefile

@@ -17,6 +17,7 @@ MANUAL_PAGES := \
 PRUNE_SIMULATOR_FILES := \
 	prune-simulator/index.html \
 	prune-simulator/documentation.html \
+	prune-simulator/clear-trigger.png \
 	prune-simulator/prune-simulator.js

 # Sphinx documentation setup

docs/backup-client.rst

@@ -392,11 +392,11 @@ periodic recovery tests to ensure that you can access the data in
 case of problems.

 First, you need to find the snapshot which you want to restore. The snapshot
-command provides a list of all the snapshots on the server:
+list command provides a list of all the snapshots on the server:

 .. code-block:: console

-    # proxmox-backup-client snapshots
+    # proxmox-backup-client snapshot list
     ┌────────────────────────────────┬─────────────┬────────────────────────────────────┐
     │ snapshot                       │ size        │ files                              │
     ╞════════════════════════════════╪═════════════╪════════════════════════════════════╡
@@ -581,7 +581,7 @@ command:

 .. code-block:: console

-    # proxmox-backup-client forget <snapshot>
+    # proxmox-backup-client snapshot forget <snapshot>

 .. caution:: This command removes all archives in this backup

docs/prune-simulator/clear-trigger.png: new binary image (11 KiB), not shown.

docs/prune-simulator/index.html

@@ -33,6 +33,9 @@
       .first-of-month {
         border-right: dashed black 4px;
       }
+      .clear-trigger {
+        background-image: url(./clear-trigger.png);
+      }
     </style>

     <script type="text/javascript" src="extjs/ext-all.js"></script>

docs/prune-simulator/prune-simulator.js

@@ -265,6 +265,34 @@ Ext.onReady(function() {
         },
     });

+    Ext.define('PBS.PruneSimulatorKeepInput', {
+        extend: 'Ext.form.field.Number',
+        alias: 'widget.prunesimulatorKeepInput',
+
+        allowBlank: true,
+        fieldGroup: 'keep',
+        minValue: 1,
+
+        listeners: {
+            afterrender: function(field) {
+                this.triggers.clear.setVisible(field.value !== null);
+            },
+            change: function(field, newValue, oldValue) {
+                this.triggers.clear.setVisible(newValue !== null);
+            },
+        },
+        triggers: {
+            clear: {
+                cls: 'clear-trigger',
+                weight: -1,
+                handler: function() {
+                    this.triggers.clear.setVisible(false);
+                    this.setValue(null);
+                },
+            },
+        },
+    });
+
     Ext.define('PBS.PruneSimulatorPanel', {
         extend: 'Ext.panel.Panel',
         alias: 'widget.prunesimulatorPanel',
@@ -560,58 +588,37 @@ Ext.onReady(function() {
         keepItems: [
             {
-                xtype: 'numberfield',
+                xtype: 'prunesimulatorKeepInput',
                 name: 'keep-last',
-                allowBlank: true,
                 fieldLabel: 'keep-last',
-                minValue: 0,
                 value: 4,
-                fieldGroup: 'keep',
             },
             {
-                xtype: 'numberfield',
+                xtype: 'prunesimulatorKeepInput',
                 name: 'keep-hourly',
-                allowBlank: true,
                 fieldLabel: 'keep-hourly',
-                minValue: 0,
-                value: 0,
-                fieldGroup: 'keep',
             },
             {
-                xtype: 'numberfield',
+                xtype: 'prunesimulatorKeepInput',
                 name: 'keep-daily',
-                allowBlank: true,
                 fieldLabel: 'keep-daily',
-                minValue: 0,
                 value: 5,
-                fieldGroup: 'keep',
             },
             {
-                xtype: 'numberfield',
+                xtype: 'prunesimulatorKeepInput',
                 name: 'keep-weekly',
-                allowBlank: true,
                 fieldLabel: 'keep-weekly',
-                minValue: 0,
                 value: 2,
-                fieldGroup: 'keep',
            },
            {
-                xtype: 'numberfield',
+                xtype: 'prunesimulatorKeepInput',
                 name: 'keep-monthly',
-                allowBlank: true,
                 fieldLabel: 'keep-monthly',
-                minValue: 0,
-                value: 0,
-                fieldGroup: 'keep',
            },
            {
-                xtype: 'numberfield',
+                xtype: 'prunesimulatorKeepInput',
                 name: 'keep-yearly',
-                allowBlank: true,
                 fieldLabel: 'keep-yearly',
-                minValue: 0,
-                value: 0,
-                fieldGroup: 'keep',
             },
         ],

src/api2/access/user.rs

@@ -268,6 +268,21 @@ pub fn read_user(userid: Userid, mut rpcenv: &mut dyn RpcEnvironment) -> Result<
     Ok(user)
 }

+#[api()]
+#[derive(Serialize, Deserialize)]
+#[serde(rename_all="kebab-case")]
+#[allow(non_camel_case_types)]
+pub enum DeletableProperty {
+    /// Delete the comment property.
+    comment,
+    /// Delete the firstname property.
+    firstname,
+    /// Delete the lastname property.
+    lastname,
+    /// Delete the email property.
+    email,
+}
+
 #[api(
     protected: true,
     input: {
@@ -303,6 +318,14 @@ pub fn read_user(userid: Userid, mut rpcenv: &mut dyn RpcEnvironment) -> Result<
             schema: user::EMAIL_SCHEMA,
             optional: true,
         },
+        delete: {
+            description: "List of properties to delete.",
+            type: Array,
+            optional: true,
+            items: {
+                type: DeletableProperty,
+            }
+        },
         digest: {
             optional: true,
             schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
@@ -326,6 +349,7 @@ pub fn update_user(
     firstname: Option<String>,
     lastname: Option<String>,
     email: Option<String>,
+    delete: Option<Vec<DeletableProperty>>,
     digest: Option<String>,
 ) -> Result<(), Error> {
@@ -340,6 +364,17 @@ pub fn update_user(
     let mut data: user::User = config.lookup("user", userid.as_str())?;

+    if let Some(delete) = delete {
+        for delete_prop in delete {
+            match delete_prop {
+                DeletableProperty::comment => data.comment = None,
+                DeletableProperty::firstname => data.firstname = None,
+                DeletableProperty::lastname => data.lastname = None,
+                DeletableProperty::email => data.email = None,
+            }
+        }
+    }
+
     if let Some(comment) = comment {
         let comment = comment.trim().to_string();
         if comment.is_empty() {

src/api2/admin/datastore.rs

@@ -1,4 +1,4 @@
-use std::collections::{HashSet, HashMap};
+use std::collections::HashSet;
 use std::ffi::OsStr;
 use std::os::unix::ffi::OsStrExt;
 use std::sync::{Arc, Mutex};
@@ -125,19 +125,6 @@ fn get_all_snapshot_files(
     Ok((manifest, files))
 }

-fn group_backups(backup_list: Vec<BackupInfo>) -> HashMap<String, Vec<BackupInfo>> {
-
-    let mut group_hash = HashMap::new();
-
-    for info in backup_list {
-        let group_id = info.backup_dir.group().group_path().to_str().unwrap().to_owned();
-        let time_list = group_hash.entry(group_id).or_insert(vec![]);
-        time_list.push(info);
-    }
-
-    group_hash
-}
-
 #[api(
     input: {
         properties: {
@@ -171,45 +158,64 @@ fn list_groups(
     let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
     let datastore = DataStore::lookup_datastore(&store)?;
+    let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;

-    let backup_list = BackupInfo::list_backups(&datastore.base_path())?;
-
-    let group_hash = group_backups(backup_list);
-
-    let mut groups = Vec::new();
-
-    for (_group_id, mut list) in group_hash {
-
-        BackupInfo::sort_list(&mut list, false);
-
-        let info = &list[0];
-
-        let group = info.backup_dir.group();
-
-        let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
-        let owner = match datastore.get_owner(group) {
-            Ok(auth_id) => auth_id,
-            Err(err) => {
-                println!("Failed to get owner of group '{}' - {}", group, err);
-                continue;
-            },
-        };
-        if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
-            continue;
-        }
-
-        let result_item = GroupListItem {
-            backup_type: group.backup_type().to_string(),
-            backup_id: group.backup_id().to_string(),
-            last_backup: info.backup_dir.backup_time(),
-            backup_count: list.len() as u64,
-            files: info.files.clone(),
-            owner: Some(owner),
-        };
-        groups.push(result_item);
-    }
-
-    Ok(groups)
+    let backup_groups = BackupInfo::list_backup_groups(&datastore.base_path())?;
+
+    let group_info = backup_groups
+        .into_iter()
+        .fold(Vec::new(), |mut group_info, group| {
+            let owner = match datastore.get_owner(&group) {
+                Ok(auth_id) => auth_id,
+                Err(err) => {
+                    eprintln!("Failed to get owner of group '{}/{}' - {}",
+                              &store,
+                              group,
+                              err);
+                    return group_info;
+                },
+            };
+            if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
+                return group_info;
+            }
+
+            let snapshots = match group.list_backups(&datastore.base_path()) {
+                Ok(snapshots) => snapshots,
+                Err(_) => {
+                    return group_info;
+                },
+            };
+
+            let backup_count: u64 = snapshots.len() as u64;
+            if backup_count == 0 {
+                return group_info;
+            }
+
+            let last_backup = snapshots
+                .iter()
+                .fold(&snapshots[0], |last, curr| {
+                    if curr.is_finished()
+                        && curr.backup_dir.backup_time() > last.backup_dir.backup_time() {
+                        curr
+                    } else {
+                        last
+                    }
+                })
+                .to_owned();
+
+            group_info.push(GroupListItem {
+                backup_type: group.backup_type().to_string(),
+                backup_id: group.backup_id().to_string(),
+                last_backup: last_backup.backup_dir.backup_time(),
+                owner: Some(owner),
+                backup_count,
+                files: last_backup.files,
+            });
+
+            group_info
+        });
+
+    Ok(group_info)
 }

@@ -357,49 +363,56 @@ pub fn list_snapshots (
     let user_info = CachedUserInfo::new()?;
     let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
+    let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;

     let datastore = DataStore::lookup_datastore(&store)?;

     let base_path = datastore.base_path();

-    let backup_list = BackupInfo::list_backups(&base_path)?;
-
-    let mut snapshots = vec![];
-
-    for info in backup_list {
-        let group = info.backup_dir.group();
-        if let Some(ref backup_type) = backup_type {
-            if backup_type != group.backup_type() { continue; }
-        }
-        if let Some(ref backup_id) = backup_id {
-            if backup_id != group.backup_id() { continue; }
-        }
-
-        let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
-        let owner = match datastore.get_owner(group) {
-            Ok(auth_id) => auth_id,
-            Err(err) => {
-                println!("Failed to get owner of group '{}' - {}", group, err);
-                continue;
-            },
-        };
-        if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
-            continue;
-        }
-
-        let mut size = None;
-
-        let (comment, verification, files) = match get_all_snapshot_files(&datastore, &info) {
+    let groups = match (backup_type, backup_id) {
+        (Some(backup_type), Some(backup_id)) => {
+            let mut groups = Vec::with_capacity(1);
+            groups.push(BackupGroup::new(backup_type, backup_id));
+            groups
+        },
+        (Some(backup_type), None) => {
+            BackupInfo::list_backup_groups(&base_path)?
+                .into_iter()
+                .filter(|group| group.backup_type() == backup_type)
+                .collect()
+        },
+        (None, Some(backup_id)) => {
+            BackupInfo::list_backup_groups(&base_path)?
+                .into_iter()
+                .filter(|group| group.backup_id() == backup_id)
+                .collect()
+        },
+        _ => BackupInfo::list_backup_groups(&base_path)?,
+    };
+
+    let info_to_snapshot_list_item = |group: &BackupGroup, owner, info: BackupInfo| {
+        let backup_type = group.backup_type().to_string();
+        let backup_id = group.backup_id().to_string();
+        let backup_time = info.backup_dir.backup_time();
+
+        match get_all_snapshot_files(&datastore, &info) {
             Ok((manifest, files)) => {
-                size = Some(files.iter().map(|x| x.size.unwrap_or(0)).sum());
                 // extract the first line from notes
                 let comment: Option<String> = manifest.unprotected["notes"]
                     .as_str()
                     .and_then(|notes| notes.lines().next())
                     .map(String::from);

-                let verify = manifest.unprotected["verify_state"].clone();
-                let verify: Option<SnapshotVerifyState> = match serde_json::from_value(verify) {
+                let fingerprint = match manifest.fingerprint() {
+                    Ok(fp) => fp,
+                    Err(err) => {
+                        eprintln!("error parsing fingerprint: '{}'", err);
+                        None
+                    },
+                };
+
+                let verification = manifest.unprotected["verify_state"].clone();
+                let verification: Option<SnapshotVerifyState> = match serde_json::from_value(verification) {
                     Ok(verify) => verify,
                     Err(err) => {
                         eprintln!("error parsing verification state : '{}'", err);
@@ -407,88 +420,114 @@ pub fn list_snapshots (
                         None
                     }
                 };

-                (comment, verify, files)
+                let size = Some(files.iter().map(|x| x.size.unwrap_or(0)).sum());
+
+                SnapshotListItem {
+                    backup_type,
+                    backup_id,
+                    backup_time,
+                    comment,
+                    verification,
+                    fingerprint,
+                    files,
+                    size,
+                    owner,
+                }
             },
             Err(err) => {
                 eprintln!("error during snapshot file listing: '{}'", err);
-                (
-                    None,
-                    None,
-                    info
-                        .files
-                        .iter()
-                        .map(|x| BackupContent {
-                            filename: x.to_string(),
-                            size: None,
-                            crypt_mode: None,
-                        })
-                        .collect()
-                )
+                let files = info
+                    .files
+                    .into_iter()
+                    .map(|x| BackupContent {
+                        filename: x.to_string(),
+                        size: None,
+                        crypt_mode: None,
+                    })
+                    .collect();
+
+                SnapshotListItem {
+                    backup_type,
+                    backup_id,
+                    backup_time,
+                    comment: None,
+                    verification: None,
+                    fingerprint: None,
+                    files,
+                    size: None,
+                    owner,
+                }
             },
-        };
-
-        let result_item = SnapshotListItem {
-            backup_type: group.backup_type().to_string(),
-            backup_id: group.backup_id().to_string(),
-            backup_time: info.backup_dir.backup_time(),
-            comment,
-            verification,
-            files,
-            size,
-            owner: Some(owner),
-        };
-
-        snapshots.push(result_item);
-    }
-
-    Ok(snapshots)
-}
-
-fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> {
-    let base_path = store.base_path();
-    let backup_list = BackupInfo::list_backups(&base_path)?;
-    let mut groups = HashSet::new();
-
-    let mut result = Counts {
-        ct: None,
-        host: None,
-        vm: None,
-        other: None,
-    };
-
-    for info in backup_list {
-        let group = info.backup_dir.group();
-
-        let id = group.backup_id();
-        let backup_type = group.backup_type();
-
-        let mut new_id = false;
-
-        if groups.insert(format!("{}-{}", &backup_type, &id)) {
-            new_id = true;
-        }
-
-        let mut counts = match backup_type {
-            "ct" => result.ct.take().unwrap_or(Default::default()),
-            "host" => result.host.take().unwrap_or(Default::default()),
-            "vm" => result.vm.take().unwrap_or(Default::default()),
-            _ => result.other.take().unwrap_or(Default::default()),
-        };
-
-        counts.snapshots += 1;
-        if new_id {
-            counts.groups +=1;
-        }
-
-        match backup_type {
-            "ct" => result.ct = Some(counts),
-            "host" => result.host = Some(counts),
-            "vm" => result.vm = Some(counts),
-            _ => result.other = Some(counts),
-        }
-    }
-
-    Ok(result)
+        }
+    };
+
+    groups
+        .iter()
+        .try_fold(Vec::new(), |mut snapshots, group| {
+            let owner = match datastore.get_owner(group) {
+                Ok(auth_id) => auth_id,
+                Err(err) => {
+                    eprintln!("Failed to get owner of group '{}/{}' - {}",
+                              &store,
+                              group,
+                              err);
+                    return Ok(snapshots);
+                },
+            };
+
+            if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
+                return Ok(snapshots);
+            }
+
+            let group_backups = group.list_backups(&datastore.base_path())?;
+
+            snapshots.extend(
+                group_backups
+                    .into_iter()
+                    .map(|info| info_to_snapshot_list_item(&group, Some(owner.clone()), info))
+            );
+
+            Ok(snapshots)
+        })
+}
+
+fn get_snapshots_count(store: &DataStore, filter_owner: Option<&Authid>) -> Result<Counts, Error> {
+    let base_path = store.base_path();
+    let groups = BackupInfo::list_backup_groups(&base_path)?;
+
+    groups.iter()
+        .filter(|group| {
+            let owner = match store.get_owner(&group) {
+                Ok(owner) => owner,
+                Err(err) => {
+                    eprintln!("Failed to get owner of group '{}/{}' - {}",
+                              store.name(),
+                              group,
+                              err);
+                    return false;
+                },
+            };
+
+            match filter_owner {
+                Some(filter) => check_backup_owner(&owner, filter).is_ok(),
+                None => true,
+            }
+        })
+        .try_fold(Counts::default(), |mut counts, group| {
+            let snapshot_count = group.list_backups(&base_path)?.len() as u64;
+
+            let type_count = match group.backup_type() {
+                "ct" => counts.ct.get_or_insert(Default::default()),
+                "vm" => counts.vm.get_or_insert(Default::default()),
+                "host" => counts.host.get_or_insert(Default::default()),
+                _ => counts.other.get_or_insert(Default::default()),
+            };
+
+            type_count.groups += 1;
+            type_count.snapshots += snapshot_count;
+
+            Ok(counts)
+        })
 }

@@ -497,7 +536,14 @@ fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> {
             store: {
                 schema: DATASTORE_SCHEMA,
             },
+            verbose: {
+                type: bool,
+                default: false,
+                optional: true,
+                description: "Include additional information like snapshot counts and GC status.",
+            },
         },
     },
     returns: {
         type: DataStoreStatus,
@@ -509,13 +555,30 @@ fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> {
 /// Get datastore status.
 pub fn status(
     store: String,
+    verbose: bool,
     _info: &ApiMethod,
-    _rpcenv: &mut dyn RpcEnvironment,
+    rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<DataStoreStatus, Error> {
     let datastore = DataStore::lookup_datastore(&store)?;
     let storage = crate::tools::disks::disk_usage(&datastore.base_path())?;
-    let counts = get_snapshots_count(&datastore)?;
-    let gc_status = datastore.last_gc_status();
+    let (counts, gc_status) = if verbose {
+        let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
+        let user_info = CachedUserInfo::new()?;
+
+        let store_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
+        let filter_owner = if store_privs & PRIV_DATASTORE_AUDIT != 0 {
+            None
+        } else {
+            Some(&auth_id)
+        };
+
+        let counts = Some(get_snapshots_count(&datastore, filter_owner)?);
+        let gc_status = Some(datastore.last_gc_status());
+
+        (counts, gc_status)
+    } else {
+        (None, None)
+    };

     Ok(DataStoreStatus {
         total: storage.total,
@@ -648,7 +711,7 @@ pub fn verify(
         verify_all_backups(datastore, worker.clone(), worker.upid(), owner, None)?
     };
     if failed_dirs.len() > 0 {
-        worker.log("Failed to verify following snapshots/groups:");
+        worker.log("Failed to verify the following snapshots/groups:");
         for dir in failed_dirs {
             worker.log(format!("\t{}", dir));
         }

src/api2/backup.rs

@@ -311,6 +311,10 @@ pub const BACKUP_API_SUBDIRS: SubdirMap = &[
     (
         "previous", &Router::new()
             .download(&API_METHOD_DOWNLOAD_PREVIOUS)
     ),
+    (
+        "previous_backup_time", &Router::new()
+            .get(&API_METHOD_GET_PREVIOUS_BACKUP_TIME)
+    ),
     (
         "speedtest", &Router::new()
             .upload(&API_METHOD_UPLOAD_SPEEDTEST)
@@ -694,6 +698,28 @@ fn finish_backup (
     Ok(Value::Null)
 }

+#[sortable]
+pub const API_METHOD_GET_PREVIOUS_BACKUP_TIME: ApiMethod = ApiMethod::new(
+    &ApiHandler::Sync(&get_previous_backup_time),
+    &ObjectSchema::new(
+        "Get previous backup time.",
+        &[],
+    )
+);
+
+fn get_previous_backup_time(
+    _param: Value,
+    _info: &ApiMethod,
+    rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Value, Error> {
+
+    let env: &BackupEnvironment = rpcenv.as_ref();
+
+    let backup_time = env.last_backup.as_ref().map(|info| info.backup_dir.backup_time());
+
+    Ok(json!(backup_time))
+}
+
 #[sortable]
 pub const API_METHOD_DOWNLOAD_PREVIOUS: ApiMethod = ApiMethod::new(
     &ApiHandler::AsyncHttp(&download_previous),

src/api2/node.rs

@@ -34,7 +34,7 @@ pub mod subscription;

 pub(crate) mod rrd;

 mod journal;
-mod services;
+pub(crate) mod services;
 mod status;
 mod syslog;
 mod time;

src/api2/node/apt.rs

@@ -261,7 +261,7 @@ fn apt_get_changelog(
     },
 )]
 /// Get package information for important Proxmox Backup Server packages.
-pub fn get_versions() -> Result<Value, Error> {
+pub fn get_versions() -> Result<Vec<APTUpdateInfo>, Error> {
     const PACKAGES: &[&str] = &[
         "ifupdown2",
         "libjs-extjs",
@@ -276,7 +276,7 @@ pub fn get_versions() -> Result<Value, Error> {
         "zfsutils-linux",
     ];

-    fn unknown_package(package: String) -> APTUpdateInfo {
+    fn unknown_package(package: String, extra_info: Option<String>) -> APTUpdateInfo {
         APTUpdateInfo {
             package,
             title: "unknown".into(),
@@ -288,6 +288,7 @@ pub fn get_versions() -> Result<Value, Error> {
             priority: "unknown".into(),
             section: "unknown".into(),
             change_log_url: "unknown".into(),
+            extra_info,
         }
     }

@@ -301,14 +302,28 @@ pub fn get_versions() -> Result<Value, Error> {
         },
         None,
     );

+    let running_kernel = format!(
+        "running kernel: {}",
+        nix::sys::utsname::uname().release().to_owned()
+    );
     if let Some(proxmox_backup) = pbs_packages.iter().find(|pkg| pkg.package == "proxmox-backup") {
-        packages.push(proxmox_backup.to_owned());
+        let mut proxmox_backup = proxmox_backup.clone();
+        proxmox_backup.extra_info = Some(running_kernel);
+        packages.push(proxmox_backup);
     } else {
-        packages.push(unknown_package("proxmox-backup".into()));
+        packages.push(unknown_package("proxmox-backup".into(), Some(running_kernel)));
     }

+    let version = crate::api2::version::PROXMOX_PKG_VERSION;
+    let release = crate::api2::version::PROXMOX_PKG_RELEASE;
+    let daemon_version_info = Some(format!("running version: {}.{}", version, release));
     if let Some(pkg) = pbs_packages.iter().find(|pkg| pkg.package == "proxmox-backup-server") {
-        packages.push(pkg.to_owned());
+        let mut pkg = pkg.clone();
+        pkg.extra_info = daemon_version_info;
+        packages.push(pkg);
+    } else {
+        packages.push(unknown_package("proxmox-backup".into(), daemon_version_info));
     }

     let mut kernel_pkgs: Vec<APTUpdateInfo> = pbs_packages
@@ -334,11 +349,11 @@ pub fn get_versions() -> Result<Value, Error> {
         }

         match pbs_packages.iter().find(|item| &item.package == pkg) {
             Some(apt_pkg) => packages.push(apt_pkg.to_owned()),
-            None => packages.push(unknown_package(pkg.to_string())),
+            None => packages.push(unknown_package(pkg.to_string(), None)),
         }
     }

-    Ok(json!(packages))
+    Ok(packages)
 }

 const SUBDIRS: SubdirMap = &[

src/api2/node/services.rs

@@ -22,7 +22,7 @@ static SERVICE_NAME_LIST: [&str; 7] = [
     "systemd-timesyncd",
 ];

-fn real_service_name(service: &str) -> &str {
+pub fn real_service_name(service: &str) -> &str {

     // since postfix package 3.1.0-3.1 the postfix unit is only here
     // to manage subinstances, of which the default is called "-".

src/api2/node/syslog.rs

@@ -134,12 +134,18 @@ fn get_syslog(
     mut rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Value, Error> {

+    let service = if let Some(service) = param["service"].as_str() {
+        Some(crate::api2::node::services::real_service_name(service))
+    } else {
+        None
+    };
+
     let (count, lines) = dump_journal(
         param["start"].as_u64(),
         param["limit"].as_u64(),
         param["since"].as_str(),
         param["until"].as_str(),
-        param["service"].as_str())?;
+        service)?;

     rpcenv["total"] = Value::from(count);
src/api2/types/mod.rs

@@ -5,7 +5,7 @@ use proxmox::api::{api, schema::*};
 use proxmox::const_regex;
 use proxmox::{IPRE, IPRE_BRACKET, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};

-use crate::backup::{CryptMode, BACKUP_ID_REGEX};
+use crate::backup::{CryptMode, Fingerprint, BACKUP_ID_REGEX};
 use crate::server::UPID;

 #[macro_use]
@@ -484,6 +484,10 @@ pub struct SnapshotVerifyState {
             type: SnapshotVerifyState,
             optional: true,
         },
+        fingerprint: {
+            type: String,
+            optional: true,
+        },
         files: {
             items: {
                 schema: BACKUP_ARCHIVE_NAME_SCHEMA
@@ -508,6 +512,9 @@ pub struct SnapshotListItem {
     /// The result of the last run verify task
     #[serde(skip_serializing_if="Option::is_none")]
     pub verification: Option<SnapshotVerifyState>,
+    /// Fingerprint of encryption key
+    #[serde(skip_serializing_if="Option::is_none")]
+    pub fingerprint: Option<Fingerprint>,
     /// List of contained archive files.
     pub files: Vec<BackupContent>,
     /// Overall snapshot size (sum of all archive sizes).
@@ -692,7 +699,7 @@ pub struct TypeCounts {
         },
     },
 )]
-#[derive(Serialize, Deserialize)]
+#[derive(Serialize, Deserialize, Default)]
 /// Counts of groups/snapshots per BackupType.
 pub struct Counts {
     /// The counts for CT backups
@@ -707,8 +714,14 @@ pub struct Counts {
 #[api(
     properties: {
-        "gc-status": { type: GarbageCollectionStatus, },
-        counts: { type: Counts, }
+        "gc-status": {
+            type: GarbageCollectionStatus,
+            optional: true,
+        },
+        counts: {
+            type: Counts,
+            optional: true,
+        },
     },
 )]
 #[derive(Serialize, Deserialize)]
@@ -722,9 +735,11 @@ pub struct DataStoreStatus {
     /// Available space (bytes).
     pub avail: u64,
     /// Status of last GC
-    pub gc_status: GarbageCollectionStatus,
+    #[serde(skip_serializing_if="Option::is_none")]
+    pub gc_status: Option<GarbageCollectionStatus>,
     /// Group/Snapshot counts
-    pub counts: Counts,
+    #[serde(skip_serializing_if="Option::is_none")]
+    pub counts: Option<Counts>,
 }

@@ -1177,6 +1192,9 @@ pub struct APTUpdateInfo {
     pub section: String,
     /// URL under which the package's changelog can be retrieved
     pub change_log_url: String,
+    /// Custom extra field for additional package information
+    #[serde(skip_serializing_if="Option::is_none")]
+    pub extra_info: Option<String>,
 }

 #[api()]

src/backup/backup_info.rs

@@ -327,26 +327,20 @@ impl BackupInfo {
         Ok(files)
     }

-    pub fn list_backups(base_path: &Path) -> Result<Vec<BackupInfo>, Error> {
+    pub fn list_backup_groups(base_path: &Path) -> Result<Vec<BackupGroup>, Error> {
         let mut list = Vec::new();

         tools::scandir(libc::AT_FDCWD, base_path, &BACKUP_TYPE_REGEX, |l0_fd, backup_type, file_type| {
             if file_type != nix::dir::Type::Directory { return Ok(()); }
-            tools::scandir(l0_fd, backup_type, &BACKUP_ID_REGEX, |l1_fd, backup_id, file_type| {
+            tools::scandir(l0_fd, backup_type, &BACKUP_ID_REGEX, |_, backup_id, file_type| {
                 if file_type != nix::dir::Type::Directory { return Ok(()); }
-                tools::scandir(l1_fd, backup_id, &BACKUP_DATE_REGEX, |l2_fd, backup_time_string, file_type| {
-                    if file_type != nix::dir::Type::Directory { return Ok(()); }
-
-                    let backup_dir = BackupDir::with_rfc3339(backup_type, backup_id, backup_time_string)?;
-                    let files = list_backup_files(l2_fd, backup_time_string)?;

-                    list.push(BackupInfo { backup_dir, files });
+                list.push(BackupGroup::new(backup_type, backup_id));

-                    Ok(())
-                })
+                Ok(())
             })
         })?;

         Ok(list)
     }
src/backup/chunk_store.rs

@@ -154,9 +154,11 @@ impl ChunkStore {
     }

     pub fn cond_touch_chunk(&self, digest: &[u8; 32], fail_if_not_exist: bool) -> Result<bool, Error> {
         let (chunk_path, _digest_str) = self.chunk_path(digest);
+        self.cond_touch_path(&chunk_path, fail_if_not_exist)
+    }

+    pub fn cond_touch_path(&self, path: &Path, fail_if_not_exist: bool) -> Result<bool, Error> {
         const UTIME_NOW: i64 = (1 << 30) - 1;
         const UTIME_OMIT: i64 = (1 << 30) - 2;
@@ -167,7 +169,7 @@ impl ChunkStore {
         use nix::NixPath;

-        let res = chunk_path.with_nix_path(|cstr| unsafe {
+        let res = path.with_nix_path(|cstr| unsafe {
             let tmp = libc::utimensat(-1, cstr.as_ptr(), &times[0], libc::AT_SYMLINK_NOFOLLOW);
             nix::errno::Errno::result(tmp)
         })?;
@@ -177,7 +179,7 @@ impl ChunkStore {
             return Ok(false);
         }

-        bail!("update atime failed for chunk {:?} - {}", chunk_path, err);
+        bail!("update atime failed for chunk/file {:?} - {}", path, err);
         }

         Ok(true)
@@ -328,49 +330,13 @@ impl ChunkStore {
             let lock = self.mutex.lock();

             if let Ok(stat) = fstatat(dirfd, filename, nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW) {
-                if bad {
-                    // filename validity checked in iterator
-                    let orig_filename = std::ffi::CString::new(&filename.to_bytes()[..64])?;
-                    match fstatat(
-                        dirfd,
-                        orig_filename.as_c_str(),
-                        nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW)
-                    {
-                        Ok(_) => {
-                            match unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir) {
-                                Err(err) =>
-                                    crate::task_warn!(
-                                        worker,
-                                        "unlinking corrupt chunk {:?} failed on store '{}' - {}",
-                                        filename,
-                                        self.name,
-                                        err,
-                                    ),
-                                Ok(_) => {
-                                    status.removed_bad += 1;
-                                    status.removed_bytes += stat.st_size as u64;
-                                }
-                            }
-                        },
-                        Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => {
-                            // chunk hasn't been rewritten yet, keep .bad file
-                            status.still_bad += 1;
-                        },
-                        Err(err) => {
-                            // some other error, warn user and keep .bad file around too
-                            status.still_bad += 1;
-                            crate::task_warn!(
-                                worker,
-                                "error during stat on '{:?}' - {}",
-                                orig_filename,
-                                err,
-                            );
-                        }
-                    }
-                } else if stat.st_atime < min_atime {
+                if stat.st_atime < min_atime {
                     //let age = now - stat.st_atime;
                     //println!("UNLINK {}  {:?}", age/(3600*24), filename);
                     if let Err(err) = unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir) {
+                        if bad {
+                            status.still_bad += 1;
+                        }
                         bail!(
                             "unlinking chunk {:?} failed on store '{}' - {}",
                             filename,
@@ -378,13 +344,23 @@ impl ChunkStore {
                             err,
                         );
                     }
-                    status.removed_chunks += 1;
+                    if bad {
+                        status.removed_bad += 1;
+                    } else {
+                        status.removed_chunks += 1;
+                    }
                     status.removed_bytes += stat.st_size as u64;
                 } else if stat.st_atime < oldest_writer {
-                    status.pending_chunks += 1;
+                    if bad {
+                        status.still_bad += 1;
+                    } else {
+                        status.pending_chunks += 1;
+                    }
                     status.pending_bytes += stat.st_size as u64;
                 } else {
-                    status.disk_chunks += 1;
+                    if !bad {
+                        status.disk_chunks += 1;
+                    }
                     status.disk_bytes += stat.st_size as u64;
                 }
             }
View File

@ -7,6 +7,8 @@
//! encryption](https://en.wikipedia.org/wiki/Authenticated_encryption) //! encryption](https://en.wikipedia.org/wiki/Authenticated_encryption)
//! for a short introduction. //! for a short introduction.
use std::fmt;
use std::fmt::Display;
use std::io::Write; use std::io::Write;
use anyhow::{bail, Error}; use anyhow::{bail, Error};
@ -15,8 +17,15 @@ use openssl::pkcs5::pbkdf2_hmac;
use openssl::symm::{decrypt_aead, Cipher, Crypter, Mode}; use openssl::symm::{decrypt_aead, Cipher, Crypter, Mode};
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use crate::tools::format::{as_fingerprint, bytes_as_fingerprint};
use proxmox::api::api; use proxmox::api::api;
// openssl::sha::sha256(b"Proxmox Backup Encryption Key Fingerprint")
const FINGERPRINT_INPUT: [u8; 32] = [ 110, 208, 239, 119, 71, 31, 255, 77,
85, 199, 168, 254, 74, 157, 182, 33,
97, 64, 127, 19, 76, 114, 93, 223,
48, 153, 45, 37, 236, 69, 237, 38, ];
#[api(default: "encrypt")] #[api(default: "encrypt")]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Deserialize, Serialize)] #[derive(Copy, Clone, Debug, Eq, PartialEq, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")] #[serde(rename_all = "kebab-case")]
@ -30,6 +39,30 @@ pub enum CryptMode {
SignOnly, SignOnly,
} }
#[derive(Debug, Eq, PartialEq, Deserialize, Serialize)]
#[serde(transparent)]
/// 32-byte fingerprint, usually calculated with SHA256.
pub struct Fingerprint {
#[serde(with = "bytes_as_fingerprint")]
bytes: [u8; 32],
}
impl Fingerprint {
pub fn new(bytes: [u8; 32]) -> Self {
Self { bytes }
}
pub fn bytes(&self) -> &[u8; 32] {
&self.bytes
}
}
/// Display as short key ID
impl Display for Fingerprint {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", as_fingerprint(&self.bytes[0..8]))
}
}
/// Encryption Configuration with secret key /// Encryption Configuration with secret key
/// ///
/// This structure stores the secret key and provides helpers for /// This structure stores the secret key and provides helpers for
@ -101,6 +134,10 @@ impl CryptConfig {
tag tag
} }
pub fn fingerprint(&self) -> Fingerprint {
Fingerprint::new(self.compute_digest(&FINGERPRINT_INPUT))
}
     pub fn data_crypter(&self, iv: &[u8; 16], mode: Mode) -> Result<Crypter, Error> {
         let mut crypter = openssl::symm::Crypter::new(self.cipher, mode, &self.enc_key, Some(iv))?;
         crypter.aad_update(b"")?; //??
@ -219,7 +256,13 @@ impl CryptConfig {
     ) -> Result<Vec<u8>, Error> {

         let modified = proxmox::tools::time::epoch_i64();
-        let key_config = super::KeyConfig { kdf: None, created, modified, data: self.enc_key.to_vec() };
+        let key_config = super::KeyConfig {
+            kdf: None,
+            created,
+            modified,
+            data: self.enc_key.to_vec(),
+            fingerprint: Some(self.fingerprint()),
+        };
         let data = serde_json::to_string(&key_config)?.as_bytes().to_vec();

         let mut buffer = vec![0u8; rsa.size() as usize];
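The constant above is reproducible: per the inline comment it is simply the SHA-256 of a fixed ASCII string, and a key's fingerprint is that constant run through the key's digest function. A minimal standalone sketch (assuming only the rust-openssl crate; the colon-separated short ID approximates the as_fingerprint helper, which is not shown in this diff):

fn main() {
    // Recompute FINGERPRINT_INPUT from its documented source string.
    let input = openssl::sha::sha256(b"Proxmox Backup Encryption Key Fingerprint");

    // Render the first 8 bytes the way the Display impl above does for keys.
    let short_id = input[0..8]
        .iter()
        .map(|b| format!("{:02x}", b))
        .collect::<Vec<_>>()
        .join(":");
    println!("{}", short_id);
}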

View File

@ -446,6 +446,17 @@ impl DataStore {
                         file_name,
                         err,
                     );

+                    // touch any corresponding .bad files to keep them around: if the chunk
+                    // gets rewritten correctly they are removed automatically, as they are
+                    // when no index file requires the chunk anymore (then we never reach
+                    // this loop)
+                    for i in 0..=9 {
+                        let bad_ext = format!("{}.bad", i);
+                        let mut bad_path = PathBuf::new();
+                        bad_path.push(self.chunk_path(digest).0);
+                        bad_path.set_extension(bad_ext);
+                        self.chunk_store.cond_touch_path(&bad_path, false)?;
+                    }
                 }
             }
             Ok(())
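Chunk paths carry no file extension, so set_extension effectively appends here: for counter 3 the touched path ends in ".3.bad". A tiny sketch of that path arithmetic (datastore path and digest are made up):

use std::path::PathBuf;

fn main() {
    let mut bad_path = PathBuf::from("/datastore/.chunks/0123/0123abcd");
    bad_path.set_extension("3.bad"); // no existing extension, so this appends
    assert_eq!(bad_path, PathBuf::from("/datastore/.chunks/0123/0123abcd.3.bad"));
}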

View File

@ -219,7 +219,6 @@ impl IndexFile for DynamicIndexReader {
         (csum, chunk_end)
     }

-    #[allow(clippy::cast_ptr_alignment)]
     fn chunk_info(&self, pos: usize) -> Option<ChunkReadInfo> {
         if pos >= self.index.len() {
             return None;

View File

@ -2,9 +2,34 @@ use anyhow::{bail, format_err, Context, Error};
 use serde::{Deserialize, Serialize};

+use crate::backup::{CryptConfig, Fingerprint};

+use proxmox::api::api;
 use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
 use proxmox::try_block;

+#[api(default: "scrypt")]
+#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
+#[serde(rename_all = "lowercase")]
+/// Key derivation function for password protected encryption keys.
+pub enum Kdf {
+    /// Do not encrypt the key.
+    None,
+    /// Encrypt the key with a password using SCrypt.
+    Scrypt,
+    /// Encrypt the key with a password using PBKDF2.
+    PBKDF2,
+}
+
+impl Default for Kdf {
+    #[inline]
+    fn default() -> Self {
+        Kdf::Scrypt
+    }
+}

 #[derive(Deserialize, Serialize, Debug)]
 pub enum KeyDerivationConfig {
     Scrypt {
@ -66,6 +91,9 @@ pub struct KeyConfig {
     pub modified: i64,
     #[serde(with = "proxmox::tools::serde::bytes_as_base64")]
     pub data: Vec<u8>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    #[serde(default)]
+    pub fingerprint: Option<Fingerprint>,
 }

 pub fn store_key_config(
@ -103,15 +131,25 @@ pub fn store_key_config(
 pub fn encrypt_key_with_passphrase(
     raw_key: &[u8],
     passphrase: &[u8],
+    kdf: Kdf,
 ) -> Result<KeyConfig, Error> {

     let salt = proxmox::sys::linux::random_data(32)?;

-    let kdf = KeyDerivationConfig::Scrypt {
-        n: 65536,
-        r: 8,
-        p: 1,
-        salt,
+    let kdf = match kdf {
+        Kdf::Scrypt => KeyDerivationConfig::Scrypt {
+            n: 65536,
+            r: 8,
+            p: 1,
+            salt,
+        },
+        Kdf::PBKDF2 => KeyDerivationConfig::PBKDF2 {
+            iter: 65535,
+            salt,
+        },
+        Kdf::None => {
+            bail!("No key derivation function specified");
+        }
     };

     let derived_key = kdf.derive_key(passphrase)?;

@ -142,28 +180,22 @@ pub fn encrypt_key_with_passphrase(
         created,
         modified: created,
         data: enc_data,
+        fingerprint: None,
     })
 }

 pub fn load_and_decrypt_key(
     path: &std::path::Path,
     passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
-) -> Result<([u8;32], i64), Error> {
-    do_load_and_decrypt_key(path, passphrase)
-        .with_context(|| format!("failed to load decryption key from {:?}", path))
-}
-
-fn do_load_and_decrypt_key(
-    path: &std::path::Path,
-    passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
-) -> Result<([u8;32], i64), Error> {
+) -> Result<([u8;32], i64, Fingerprint), Error> {
     decrypt_key(&file_get_contents(&path)?, passphrase)
+        .with_context(|| format!("failed to load decryption key from {:?}", path))
 }

 pub fn decrypt_key(
     mut keydata: &[u8],
     passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
-) -> Result<([u8;32], i64), Error> {
+) -> Result<([u8;32], i64, Fingerprint), Error> {
     let key_config: KeyConfig = serde_json::from_reader(&mut keydata)?;

     let raw_data = key_config.data;

@ -203,5 +235,13 @@ pub fn decrypt_key(
     let mut result = [0u8; 32];
     result.copy_from_slice(&key);

-    Ok((result, created))
+    let fingerprint = match key_config.fingerprint {
+        Some(fingerprint) => fingerprint,
+        None => {
+            let crypt_config = CryptConfig::new(result.clone())?;
+            crypt_config.fingerprint()
+        },
+    };
+
+    Ok((result, created, fingerprint))
 }
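For callers the visible change is the third tuple element; key files written before this change simply get their fingerprint recomputed on load. A sketch of a crate-internal caller (key data and passphrase are invented):

fn print_key_info(keydata: &[u8]) -> Result<(), anyhow::Error> {
    let passphrase = || -> Result<Vec<u8>, anyhow::Error> { Ok(b"secret".to_vec()) };
    let (key, created, fingerprint) = decrypt_key(keydata, &passphrase)?;
    println!("{}-byte key, created {}, fingerprint {}", key.len(), created, fingerprint);
    Ok(())
}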

View File

@ -5,7 +5,7 @@ use std::path::Path;
 use serde_json::{json, Value};
 use ::serde::{Deserialize, Serialize};

-use crate::backup::{BackupDir, CryptMode, CryptConfig};
+use crate::backup::{BackupDir, CryptMode, CryptConfig, Fingerprint};

 pub const MANIFEST_BLOB_NAME: &str = "index.json.blob";
 pub const MANIFEST_LOCK_NAME: &str = ".index.json.lck";

@ -223,12 +223,48 @@ impl BackupManifest {
         if let Some(crypt_config) = crypt_config {
             let sig = self.signature(crypt_config)?;
             manifest["signature"] = proxmox::tools::digest_to_hex(&sig).into();
+            let fingerprint = &crypt_config.fingerprint();
+            manifest["unprotected"]["key-fingerprint"] = serde_json::to_value(fingerprint)?;
         }

         let manifest = serde_json::to_string_pretty(&manifest).unwrap().into();
         Ok(manifest)
     }

+    pub fn fingerprint(&self) -> Result<Option<Fingerprint>, Error> {
+        match &self.unprotected["key-fingerprint"] {
+            Value::Null => Ok(None),
+            value => Ok(Some(serde_json::from_value(value.clone())?))
+        }
+    }
+
+    /// Checks if a BackupManifest and a CryptConfig share a valid fingerprint combination.
+    ///
+    /// An unsigned manifest is valid with any or no CryptConfig.
+    /// A signed manifest is only valid with a matching CryptConfig.
+    pub fn check_fingerprint(&self, crypt_config: Option<&CryptConfig>) -> Result<(), Error> {
+        if let Some(fingerprint) = self.fingerprint()? {
+            match crypt_config {
+                None => bail!(
+                    "missing key - manifest was created with key {}",
+                    fingerprint,
+                ),
+                Some(crypt_config) => {
+                    let config_fp = crypt_config.fingerprint();
+                    if config_fp != fingerprint {
+                        bail!(
+                            "wrong key - manifest's key {} does not match provided key {}",
+                            fingerprint,
+                            config_fp
+                        );
+                    }
+                }
+            }
+        };
+
+        Ok(())
+    }
+
     /// Try to read the manifest. This verifies the signature if there is a crypt_config.
     pub fn from_data(data: &[u8], crypt_config: Option<&CryptConfig>) -> Result<BackupManifest, Error> {
         let json: Value = serde_json::from_slice(data)?;

@ -237,6 +273,19 @@ impl BackupManifest {
         if let Some(ref crypt_config) = crypt_config {
             if let Some(signature) = signature {
                 let expected_signature = proxmox::tools::digest_to_hex(&Self::json_signature(&json, crypt_config)?);
+
+                let fingerprint = &json["unprotected"]["key-fingerprint"];
+                if fingerprint != &Value::Null {
+                    let fingerprint = serde_json::from_value(fingerprint.clone())?;
+                    let config_fp = crypt_config.fingerprint();
+                    if config_fp != fingerprint {
+                        bail!(
+                            "wrong key - unable to verify signature since manifest's key {} does not match provided key {}",
+                            fingerprint,
+                            config_fp
+                        );
+                    }
+                }
                 if signature != expected_signature {
                     bail!("wrong signature in manifest");
                 }
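The check is deliberately asymmetric: a manifest without a recorded fingerprint passes with any key, while a recorded fingerprint demands the matching one. The client commands further down all follow the same call pattern right after downloading a manifest; condensed (crate-internal types):

use std::sync::Arc;

fn verify_manifest_key(
    manifest: &BackupManifest,
    crypt_config: &Option<Arc<CryptConfig>>,
) -> Result<(), anyhow::Error> {
    // Option<Arc<CryptConfig>> -> Option<&CryptConfig>, exactly as the call sites do.
    manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))
}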

View File

@ -53,7 +53,6 @@ use proxmox_backup::backup::{
     ChunkStream,
     CryptConfig,
     CryptMode,
-    DataBlob,
     DynamicIndexReader,
     FixedChunkStream,
     FixedIndexReader,

@ -456,112 +455,6 @@ async fn change_backup_owner(group: String, mut param: Value) -> Result<(), Erro
     Ok(())
 }
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
group: {
type: String,
description: "Backup group.",
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// List backup snapshots.
async fn list_snapshots(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let output_format = get_output_format(&param);
let client = connect(&repo)?;
let group: Option<BackupGroup> = if let Some(path) = param["group"].as_str() {
Some(path.parse()?)
} else {
None
};
let mut data = api_datastore_list_snapshots(&client, repo.store(), group).await?;
record_repository(&repo);
let render_snapshot_path = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
Ok(snapshot.relative_path().to_str().unwrap().to_owned())
};
let render_files = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
let mut filenames = Vec::new();
for file in &item.files {
filenames.push(file.filename.to_string());
}
Ok(tools::format::render_backup_file_list(&filenames[..]))
};
let options = default_table_format_options()
.sortby("backup-type", false)
.sortby("backup-id", false)
.sortby("backup-time", false)
.column(ColumnConfig::new("backup-id").renderer(render_snapshot_path).header("snapshot"))
.column(ColumnConfig::new("size").renderer(tools::format::render_bytes_human_readable))
.column(ColumnConfig::new("files").renderer(render_files))
;
let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOTS;
format_and_print_result_full(&mut data, info, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
}
}
)]
/// Forget (remove) backup snapshots.
async fn forget_snapshots(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let mut client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());
let result = client.delete(&path, Some(json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
}))).await?;
record_repository(&repo);
Ok(result)
}
 #[api(
     input: {
         properties: {

@ -655,58 +548,6 @@ async fn api_version(param: Value) -> Result<(), Error> {
     Ok(())
 }
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// List snapshot files.
async fn list_snapshot_files(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let output_format = get_output_format(&param);
let client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/files", repo.store());
let mut result = client.get(&path, Some(json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
}))).await?;
record_repository(&repo);
let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOT_FILES;
let mut data: Value = result["data"].take();
let options = default_table_format_options();
format_and_print_result_full(&mut data, info, &output_format, &options);
Ok(Value::Null)
}
 #[api(
     input: {
         properties: {

@ -802,7 +643,10 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
     let keydata = match (keyfile, key_fd) {
         (None, None) => None,
         (Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
-        (Some(keyfile), None) => Some(file_get_contents(keyfile)?),
+        (Some(keyfile), None) => {
+            println!("Using encryption key file: {}", keyfile);
+            Some(file_get_contents(keyfile)?)
+        },
         (None, Some(fd)) => {
             let input = unsafe { std::fs::File::from_raw_fd(fd) };
             let mut data = Vec::new();
@ -810,6 +654,7 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
             .map_err(|err| {
                 format_err!("error reading encryption key from fd {}: {}", fd, err)
             })?;
+            println!("Using encryption key from file descriptor");
             Some(data)
         }
     };

@ -817,7 +662,10 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
     Ok(match (keydata, crypt_mode) {
         // no parameters:
         (None, None) => match key::read_optional_default_encryption_key()? {
-            Some(key) => (Some(key), CryptMode::Encrypt),
+            Some(key) => {
+                println!("Encrypting with default encryption key!");
+                (Some(key), CryptMode::Encrypt)
+            },
             None => (None, CryptMode::None),
         },

@ -827,7 +675,10 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
         // just --crypt-mode other than none
         (None, Some(crypt_mode)) => match key::read_optional_default_encryption_key()? {
             None => bail!("--crypt-mode without --keyfile and no default key file available"),
-            Some(key) => (Some(key), crypt_mode),
+            Some(key) => {
+                println!("Encrypting with default encryption key!");
+                (Some(key), crypt_mode)
+            },
         }
// just --keyfile // just --keyfile
@ -865,6 +716,11 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
description: "Path to file.", description: "Path to file.",
} }
}, },
"all-file-systems": {
type: Boolean,
description: "Include all mounted subdirectories.",
optional: true,
},
keyfile: { keyfile: {
schema: KEYFILE_SCHEMA, schema: KEYFILE_SCHEMA,
optional: true, optional: true,
@ -1054,7 +910,8 @@ async fn create_backup(
     let (crypt_config, rsa_encrypted_key) = match keydata {
         None => (None, None),
         Some(key) => {
-            let (key, created) = decrypt_key(&key, &key::get_encryption_key_password)?;
+            let (key, created, fingerprint) = decrypt_key(&key, &key::get_encryption_key_password)?;
+            println!("Encryption key fingerprint: {}", fingerprint);

             let crypt_config = CryptConfig::new(key)?;

@ -1063,6 +920,8 @@ async fn create_backup(
                     let pem_data = file_get_contents(path)?;
                     let rsa = openssl::rsa::Rsa::public_key_from_pem(&pem_data)?;
                     let enc_key = crypt_config.generate_rsa_encoded_key(rsa, created)?;
+                    println!("Master key '{:?}'", path);

                     (Some(Arc::new(crypt_config)), Some(enc_key))
                 }
                 _ => (Some(Arc::new(crypt_config)), None),
@ -1081,8 +940,40 @@ async fn create_backup(
         false
     ).await?;

-    let previous_manifest = if let Ok(previous_manifest) = client.download_previous_manifest().await {
-        Some(Arc::new(previous_manifest))
+    let download_previous_manifest = match client.previous_backup_time().await {
+        Ok(Some(backup_time)) => {
+            println!(
+                "Downloading previous manifest ({})",
+                strftime_local("%c", backup_time)?
+            );
+            true
+        }
+        Ok(None) => {
+            println!("No previous manifest available.");
+            false
+        }
+        Err(_) => {
+            // Fallback for outdated server, TODO remove/bubble up with 2.0
+            true
+        }
+    };
+
+    let previous_manifest = if download_previous_manifest {
+        match client.download_previous_manifest().await {
+            Ok(previous_manifest) => {
+                match previous_manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref)) {
+                    Ok(()) => Some(Arc::new(previous_manifest)),
+                    Err(err) => {
+                        println!("Couldn't re-use previous manifest - {}", err);
+                        None
+                    }
+                }
+            }
+            Err(err) => {
+                println!("Couldn't download previous manifest - {}", err);
+                None
+            }
+        }
     } else {
         None
     };
@ -1365,7 +1256,8 @@ async fn restore(param: Value) -> Result<Value, Error> {
     let crypt_config = match keydata {
         None => None,
         Some(key) => {
-            let (key, _) = decrypt_key(&key, &key::get_encryption_key_password)?;
+            let (key, _, fingerprint) = decrypt_key(&key, &key::get_encryption_key_password)?;
+            println!("Encryption key fingerprint: '{}'", fingerprint);
             Some(Arc::new(CryptConfig::new(key)?))
         }
     };

@ -1381,6 +1273,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
     ).await?;

     let (manifest, backup_index_data) = client.download_manifest().await?;
+    manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;

     let (archive_name, archive_type) = parse_archive_type(archive_name);

@ -1477,81 +1370,6 @@ async fn restore(param: Value) -> Result<Value, Error> {
     Ok(Value::Null)
 }
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Group/Snapshot path.",
},
logfile: {
type: String,
description: "The path to the log file you want to upload.",
},
keyfile: {
schema: KEYFILE_SCHEMA,
optional: true,
},
"keyfd": {
schema: KEYFD_SCHEMA,
optional: true,
},
"crypt-mode": {
type: CryptMode,
optional: true,
},
}
}
)]
/// Upload backup log file.
async fn upload_log(param: Value) -> Result<Value, Error> {
let logfile = tools::required_string_param(&param, "logfile")?;
let repo = extract_repository_from_value(&param)?;
let snapshot = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = snapshot.parse()?;
let mut client = connect(&repo)?;
let (keydata, crypt_mode) = keyfile_parameters(&param)?;
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _created) = decrypt_key(&key, &key::get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
};
let data = file_get_contents(logfile)?;
// fixme: howto sign log?
let blob = match crypt_mode {
CryptMode::None | CryptMode::SignOnly => DataBlob::encode(&data, None, true)?,
CryptMode::Encrypt => DataBlob::encode(&data, crypt_config.as_ref().map(Arc::as_ref), true)?,
};
let raw_data = blob.into_inner();
let path = format!("api2/json/admin/datastore/{}/upload-backup-log", repo.store());
let args = json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
});
let body = hyper::Body::from(raw_data);
client.upload("application/octet-stream", body, &path, Some(args)).await
}
 const API_METHOD_PRUNE: ApiMethod = ApiMethod::new(
     &ApiHandler::Async(&prune),
     &ObjectSchema::new(

@ -1989,26 +1807,9 @@ fn main() {
         .completion_cb("repository", complete_repository)
         .completion_cb("keyfile", tools::complete_file_name);
let upload_log_cmd_def = CliCommand::new(&API_METHOD_UPLOAD_LOG)
.arg_param(&["snapshot", "logfile"])
.completion_cb("snapshot", complete_backup_snapshot)
.completion_cb("logfile", tools::complete_file_name)
.completion_cb("keyfile", tools::complete_file_name)
.completion_cb("repository", complete_repository);
     let list_cmd_def = CliCommand::new(&API_METHOD_LIST_BACKUP_GROUPS)
         .completion_cb("repository", complete_repository);
let snapshots_cmd_def = CliCommand::new(&API_METHOD_LIST_SNAPSHOTS)
.arg_param(&["group"])
.completion_cb("group", complete_backup_group)
.completion_cb("repository", complete_repository);
let forget_cmd_def = CliCommand::new(&API_METHOD_FORGET_SNAPSHOTS)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot);
     let garbage_collect_cmd_def = CliCommand::new(&API_METHOD_START_GARBAGE_COLLECTION)
         .completion_cb("repository", complete_repository);

@ -2019,11 +1820,6 @@ fn main() {
         .completion_cb("archive-name", complete_archive_name)
         .completion_cb("target", tools::complete_file_name);
let files_cmd_def = CliCommand::new(&API_METHOD_LIST_SNAPSHOT_FILES)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot);
     let prune_cmd_def = CliCommand::new(&API_METHOD_PRUNE)
         .arg_param(&["group"])
         .completion_cb("group", complete_backup_group)

@ -2049,16 +1845,13 @@ fn main() {
     let cmd_def = CliCommandMap::new()
         .insert("backup", backup_cmd_def)
-        .insert("upload-log", upload_log_cmd_def)
-        .insert("forget", forget_cmd_def)
         .insert("garbage-collect", garbage_collect_cmd_def)
         .insert("list", list_cmd_def)
         .insert("login", login_cmd_def)
         .insert("logout", logout_cmd_def)
         .insert("prune", prune_cmd_def)
         .insert("restore", restore_cmd_def)
-        .insert("snapshots", snapshots_cmd_def)
-        .insert("files", files_cmd_def)
+        .insert("snapshot", snapshot_mgtm_cli())
         .insert("status", status_cmd_def)
         .insert("key", key::cli())
         .insert("mount", mount_cmd_def())

@ -2068,7 +1861,13 @@ fn main() {
         .insert("task", task_mgmt_cli())
         .insert("version", version_cmd_def)
         .insert("benchmark", benchmark_cmd_def)
-        .insert("change-owner", change_owner_cmd_def);
+        .insert("change-owner", change_owner_cmd_def)
+
+        .alias(&["files"], &["snapshot", "files"])
+        .alias(&["forget"], &["snapshot", "forget"])
+        .alias(&["upload-log"], &["snapshot", "upload-log"])
+        .alias(&["snapshots"], &["snapshot", "list"])
+        ;

     let rpcenv = CliEnvironment::new();
     run_cli_command(cmd_def, rpcenv, Some(|future| {

View File

@ -245,7 +245,7 @@ async fn task_stop(param: Value) -> Result<Value, Error> {
     let mut client = connect()?;

-    let path = format!("api2/json/nodes/localhost/tasks/{}", upid_str);
+    let path = format!("api2/json/nodes/localhost/tasks/{}", tools::percent_encode_component(upid_str));
     let _ = client.delete(&path, None).await?;

     Ok(Value::Null)

@ -363,6 +363,43 @@ async fn report() -> Result<Value, Error> {
     Ok(Value::Null)
 }
#[api(
input: {
properties: {
verbose: {
type: Boolean,
optional: true,
default: false,
description: "Output verbose package information. It is ignored if output-format is specified.",
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
}
}
}
)]
/// List package versions for important Proxmox Backup Server packages.
async fn get_versions(verbose: bool, param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let packages = crate::api2::node::apt::get_versions()?;
let mut packages = json!(if verbose { &packages[..] } else { &packages[1..2] });
let options = default_table_format_options()
.disable_sort()
.noborder(true) // just not helpful for version info, which gets copy-pasted often
.column(ColumnConfig::new("Package"))
.column(ColumnConfig::new("Version"))
.column(ColumnConfig::new("ExtraInfo").header("Extra Info"))
;
let schema = &crate::api2::node::apt::API_RETURN_SCHEMA_GET_VERSIONS;
format_and_print_result_full(&mut packages, schema, &output_format, &options);
Ok(Value::Null)
}
 fn main() {

     proxmox_backup::tools::setup_safe_path_env();
@ -396,6 +433,9 @@ fn main() {
                 )
         .insert("report",
                 CliCommand::new(&API_METHOD_REPORT)
+                )
+        .insert("versions",
+                CliCommand::new(&API_METHOD_GET_VERSIONS)
                 );

View File

@ -151,7 +151,7 @@ pub async fn benchmark(
     let crypt_config = match keyfile {
         None => None,
         Some(path) => {
-            let (key, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
+            let (key, _, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
             let crypt_config = CryptConfig::new(key)?;
             Some(Arc::new(crypt_config))
         }

View File

@ -73,7 +73,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
     let crypt_config = match keydata {
         None => None,
         Some(key) => {
-            let (key, _created) = decrypt_key(&key, &get_encryption_key_password)?;
+            let (key, _created, _fingerprint) = decrypt_key(&key, &get_encryption_key_password)?;
             let crypt_config = CryptConfig::new(key)?;
             Some(Arc::new(crypt_config))
         }

@ -92,6 +92,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
     ).await?;

     let (manifest, _) = client.download_manifest().await?;
+    manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;

     let index = client.download_dynamic_index(&manifest, CATALOG_NAME).await?;

@ -170,7 +171,7 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
     let crypt_config = match keydata {
         None => None,
         Some(key) => {
-            let (key, _created) = decrypt_key(&key, &get_encryption_key_password)?;
+            let (key, _created, _fingerprint) = decrypt_key(&key, &get_encryption_key_password)?;
             let crypt_config = CryptConfig::new(key)?;
             Some(Arc::new(crypt_config))
         }

@ -199,6 +200,7 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
         .open("/tmp")?;

     let (manifest, _) = client.download_manifest().await?;
+    manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;

     let index = client.download_dynamic_index(&manifest, &server_archive_name).await?;
     let most_used = index.find_most_used_chunks(8);

View File

@ -4,14 +4,28 @@ use std::process::{Stdio, Command};

 use anyhow::{bail, format_err, Error};
 use serde::{Deserialize, Serialize};
+use serde_json::Value;

 use proxmox::api::api;
-use proxmox::api::cli::{CliCommand, CliCommandMap};
+use proxmox::api::cli::{
+    ColumnConfig,
+    CliCommand,
+    CliCommandMap,
+    format_and_print_result_full,
+    get_output_format,
+    OUTPUT_FORMAT,
+};
 use proxmox::sys::linux::tty;
 use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};

 use proxmox_backup::backup::{
-    encrypt_key_with_passphrase, load_and_decrypt_key, store_key_config, KeyConfig,
+    encrypt_key_with_passphrase,
+    load_and_decrypt_key,
+    store_key_config,
+    CryptConfig,
+    Kdf,
+    KeyConfig,
+    KeyDerivationConfig,
 };
 use proxmox_backup::tools;
@ -71,27 +85,6 @@ pub fn get_encryption_key_password() -> Result<Vec<u8>, Error> {
     bail!("no password input mechanism available");
 }

-#[api(
-    default: "scrypt",
-)]
-#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
-#[serde(rename_all = "kebab-case")]
-/// Key derivation function for password protected encryption keys.
-pub enum Kdf {
-    /// Do not encrypt the key.
-    None,
-    /// Encrypt the key with a password using SCrypt.
-    Scrypt,
-}
-
-impl Default for Kdf {
-    #[inline]
-    fn default() -> Self {
-        Kdf::Scrypt
-    }
-}
-
 #[api(
     input: {
         properties: {
@ -120,7 +113,10 @@ fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
     let kdf = kdf.unwrap_or_default();

-    let key = proxmox::sys::linux::random_data(32)?;
+    let mut key_array = [0u8; 32];
+    proxmox::sys::linux::fill_with_random_data(&mut key_array)?;
+    let crypt_config = CryptConfig::new(key_array.clone())?;
+    let key = key_array.to_vec();

     match kdf {
         Kdf::None => {

@ -134,10 +130,11 @@ fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
                     created,
                     modified: created,
                     data: key,
+                    fingerprint: Some(crypt_config.fingerprint()),
                 },
             )?;
         }
-        Kdf::Scrypt => {
+        Kdf::Scrypt | Kdf::PBKDF2 => {
             // always read passphrase from tty
             if !tty::stdin_isatty() {
                 bail!("unable to read passphrase - no tty");

@ -145,7 +142,8 @@ fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
             let password = tty::read_and_verify_password("Encryption Key Password: ")?;

-            let key_config = encrypt_key_with_passphrase(&key, &password)?;
+            let mut key_config = encrypt_key_with_passphrase(&key, &password, kdf)?;
+            key_config.fingerprint = Some(crypt_config.fingerprint());

             store_key_config(&path, false, key_config)?;
         }
@ -188,7 +186,7 @@ fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error
         bail!("unable to change passphrase - no tty");
     }

-    let (key, created) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
+    let (key, created, fingerprint) = load_and_decrypt_key(&path, &get_encryption_key_password)?;

     match kdf {
         Kdf::None => {

@ -202,14 +200,16 @@ fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error
                     created, // keep original value
                     modified,
                     data: key.to_vec(),
+                    fingerprint: Some(fingerprint),
                 },
             )?;
         }
-        Kdf::Scrypt => {
+        Kdf::Scrypt | Kdf::PBKDF2 => {
             let password = tty::read_and_verify_password("New Password: ")?;

-            let mut new_key_config = encrypt_key_with_passphrase(&key, &password)?;
+            let mut new_key_config = encrypt_key_with_passphrase(&key, &password, kdf)?;
             new_key_config.created = created; // keep original value
+            new_key_config.fingerprint = Some(fingerprint);

             store_key_config(&path, true, new_key_config)?;
         }

@ -218,6 +218,91 @@ fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error

     Ok(())
 }
#[api(
properties: {
kdf: {
type: Kdf,
},
},
)]
#[derive(Deserialize, Serialize)]
/// Encryption Key Information
struct KeyInfo {
/// Path to key
path: String,
kdf: Kdf,
/// Key creation time
pub created: i64,
/// Key modification time
pub modified: i64,
/// Key fingerprint
pub fingerprint: Option<String>,
}
#[api(
input: {
properties: {
path: {
description: "Key file. Without this the default key's metadata will be shown.",
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
},
},
)]
/// Print the encryption key's metadata.
fn show_key(
path: Option<String>,
param: Value,
) -> Result<(), Error> {
let path = match path {
Some(path) => PathBuf::from(path),
None => {
let path = find_default_encryption_key()?
.ok_or_else(|| {
format_err!("no encryption file provided and no default file found")
})?;
path
}
};
let config: KeyConfig = serde_json::from_slice(&file_get_contents(path.clone())?)?;
let output_format = get_output_format(&param);
let info = KeyInfo {
path: format!("{:?}", path),
kdf: match config.kdf {
Some(KeyDerivationConfig::PBKDF2 { .. }) => Kdf::PBKDF2,
Some(KeyDerivationConfig::Scrypt { .. }) => Kdf::Scrypt,
None => Kdf::None,
},
created: config.created,
modified: config.modified,
fingerprint: match config.fingerprint {
Some(ref fp) => Some(format!("{}", fp)),
None => None,
},
};
let options = proxmox::api::cli::default_table_format_options()
.column(ColumnConfig::new("path"))
.column(ColumnConfig::new("kdf"))
.column(ColumnConfig::new("created").renderer(tools::format::render_epoch))
.column(ColumnConfig::new("modified").renderer(tools::format::render_epoch))
.column(ColumnConfig::new("fingerprint"));
let schema = &KeyInfo::API_SCHEMA;
format_and_print_result_full(&mut serde_json::to_value(info)?, schema, &output_format, &options);
Ok(())
}
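show_key never decrypts anything; it only deserializes the KeyConfig wrapper, so it also works on password-protected key files. The equivalent done by hand (sketch; the path comes from the CLI or the default-key lookup above):

fn show(path: &str) -> Result<(), anyhow::Error> {
    let data = proxmox::tools::fs::file_get_contents(path)?;
    let config: KeyConfig = serde_json::from_slice(&data)?;
    println!("created {}, modified {}, fingerprint {:?}", config.created, config.modified, config.fingerprint);
    Ok(())
}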
 #[api(
     input: {
         properties: {

@ -313,13 +398,47 @@ fn paper_key(
     };

     let data = file_get_contents(&path)?;
-    let data = std::str::from_utf8(&data)?;
+    let data = String::from_utf8(data)?;
let (data, is_private_key) = if data.starts_with("-----BEGIN ENCRYPTED PRIVATE KEY-----\n") {
let lines: Vec<String> = data
.lines()
.map(|s| s.trim_end())
.filter(|s| !s.is_empty())
.map(String::from)
.collect();
if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
bail!("unexpected key format");
}
if lines.len() < 20 {
bail!("unexpected key format");
}
(lines, true)
} else {
match serde_json::from_str::<KeyConfig>(&data) {
Ok(key_config) => {
let lines = serde_json::to_string_pretty(&key_config)?
.lines()
.map(String::from)
.collect();
(lines, false)
},
Err(err) => {
eprintln!("Couldn't parse '{:?}' as KeyConfig - {}", path, err);
bail!("Neither a PEM-formatted private key, nor a PBS key file.");
},
}
};
     let format = output_format.unwrap_or(PaperkeyFormat::Html);

     match format {
-        PaperkeyFormat::Html => paperkey_html(data, subject),
-        PaperkeyFormat::Text => paperkey_text(data, subject),
+        PaperkeyFormat::Html => paperkey_html(&data, subject, is_private_key),
+        PaperkeyFormat::Text => paperkey_text(&data, subject, is_private_key),
     }
 }
@ -337,6 +456,10 @@ pub fn cli() -> CliCommandMap {
         .arg_param(&["path"])
         .completion_cb("path", tools::complete_file_name);

+    let key_show_cmd_def = CliCommand::new(&API_METHOD_SHOW_KEY)
+        .arg_param(&["path"])
+        .completion_cb("path", tools::complete_file_name);
+
     let paper_key_cmd_def = CliCommand::new(&API_METHOD_PAPER_KEY)
         .arg_param(&["path"])
         .completion_cb("path", tools::complete_file_name);

@ -346,10 +469,11 @@ pub fn cli() -> CliCommandMap {
         .insert("create-master-key", key_create_master_key_cmd_def)
         .insert("import-master-pubkey", key_import_master_pubkey_cmd_def)
         .insert("change-passphrase", key_change_passphrase_cmd_def)
+        .insert("show", key_show_cmd_def)
         .insert("paperkey", paper_key_cmd_def)
 }

-fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
+fn paperkey_html(lines: &[String], subject: Option<String>, is_private: bool) -> Result<(), Error> {

     let img_size_pt = 500;
@ -378,21 +502,7 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
         println!("<p>Subject: {}</p>", subject);
     }

-    if data.starts_with("-----BEGIN ENCRYPTED PRIVATE KEY-----\n") {
-        let lines: Vec<String> = data.lines()
-            .map(|s| s.trim_end())
-            .filter(|s| !s.is_empty())
-            .map(String::from)
-            .collect();
-
-        if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
-            bail!("unexpected key format");
-        }
-
-        if lines.len() < 20 {
-            bail!("unexpected key format");
-        }
+    if is_private {

         const BLOCK_SIZE: usize = 20;
         let blocks = (lines.len() + BLOCK_SIZE -1)/BLOCK_SIZE;

@ -413,8 +523,7 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
         println!("</p>");

-        let data = data.join("\n");
-        let qr_code = generate_qr_code("svg", data.as_bytes())?;
+        let qr_code = generate_qr_code("svg", data)?;
         let qr_code = base64::encode_config(&qr_code, base64::STANDARD_NO_PAD);

         println!("<center>");
@ -430,16 +539,13 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
         return Ok(());
     }

-    let key_config: KeyConfig = serde_json::from_str(&data)?;
-    let key_text = serde_json::to_string_pretty(&key_config)?;
-
     println!("<div style=\"page-break-inside: avoid\">");
     println!("<p>");
     println!("-----BEGIN PROXMOX BACKUP KEY-----");

-    for line in key_text.lines() {
+    for line in lines {
         println!("{}", line);
     }

@ -447,7 +553,7 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
     println!("</p>");

-    let qr_code = generate_qr_code("svg", key_text.as_bytes())?;
+    let qr_code = generate_qr_code("svg", lines)?;
     let qr_code = base64::encode_config(&qr_code, base64::STANDARD_NO_PAD);

     println!("<center>");
@ -464,27 +570,13 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
     Ok(())
 }

-fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
+fn paperkey_text(lines: &[String], subject: Option<String>, is_private: bool) -> Result<(), Error> {

     if let Some(subject) = subject {
         println!("Subject: {}\n", subject);
     }

-    if data.starts_with("-----BEGIN ENCRYPTED PRIVATE KEY-----\n") {
-        let lines: Vec<String> = data.lines()
-            .map(|s| s.trim_end())
-            .filter(|s| !s.is_empty())
-            .map(String::from)
-            .collect();
-
-        if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
-            bail!("unexpected key format");
-        }
-
-        if lines.len() < 20 {
-            bail!("unexpected key format");
-        }
+    if is_private {

         const BLOCK_SIZE: usize = 5;
         let blocks = (lines.len() + BLOCK_SIZE -1)/BLOCK_SIZE;
@ -499,8 +591,7 @@ fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
         for l in start..end {
             println!("{:-2}: {}", l, lines[l]);
         }
-        let data = data.join("\n");
-        let qr_code = generate_qr_code("utf8i", data.as_bytes())?;
+        let qr_code = generate_qr_code("utf8i", data)?;
         let qr_code = String::from_utf8(qr_code)
             .map_err(|_| format_err!("Failed to read qr code (got non-utf8 data)"))?;
         println!("{}", qr_code);
@ -510,14 +601,13 @@ fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
         return Ok(());
     }

-    let key_config: KeyConfig = serde_json::from_str(&data)?;
-    let key_text = serde_json::to_string_pretty(&key_config)?;
-
     println!("-----BEGIN PROXMOX BACKUP KEY-----");
-    println!("{}", key_text);
+    for line in lines {
+        println!("{}", line);
+    }
     println!("-----END PROXMOX BACKUP KEY-----");

-    let qr_code = generate_qr_code("utf8i", key_text.as_bytes())?;
+    let qr_code = generate_qr_code("utf8i", &lines)?;
     let qr_code = String::from_utf8(qr_code)
         .map_err(|_| format_err!("Failed to read qr code (got non-utf8 data)"))?;
@ -526,8 +616,7 @@ fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
     Ok(())
 }

-fn generate_qr_code(output_type: &str, data: &[u8]) -> Result<Vec<u8>, Error> {
+fn generate_qr_code(output_type: &str, lines: &[String]) -> Result<Vec<u8>, Error> {
     let mut child = Command::new("qrencode")
         .args(&["-t", output_type, "-m0", "-s1", "-lm", "--output", "-"])
         .stdin(Stdio::piped())

@ -537,7 +626,8 @@ fn generate_qr_code(output_type: &str, data: &[u8]) -> Result<Vec<u8>, Error> {
     {
         let stdin = child.stdin.as_mut()
             .ok_or_else(|| format_err!("Failed to open stdin"))?;
-        stdin.write_all(data)
+        let data = lines.join("\n");
+        stdin.write_all(data.as_bytes())
             .map_err(|_| format_err!("Failed to write to stdin"))?;
     }

View File

@ -8,6 +8,8 @@ mod task;
 pub use task::*;
 mod catalog;
 pub use catalog::*;
+mod snapshot;
+pub use snapshot::*;

 pub mod key;

View File

@ -182,7 +182,9 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
     let crypt_config = match keyfile {
         None => None,
         Some(path) => {
-            let (key, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
+            println!("Encryption key file: '{:?}'", path);
+            let (key, _, fingerprint) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
+            println!("Encryption key fingerprint: '{}'", fingerprint);
             Some(Arc::new(CryptConfig::new(key)?))
         }
     };

@ -212,6 +214,7 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
     ).await?;

     let (manifest, _) = client.download_manifest().await?;
+    manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;

     let file_info = manifest.lookup_file_info(&server_archive_name)?;

View File

@ -0,0 +1,416 @@
use std::sync::Arc;
use anyhow::Error;
use serde_json::{json, Value};
use proxmox::{
api::{api, cli::*},
tools::fs::file_get_contents,
};
use proxmox_backup::{
tools,
api2::types::*,
backup::{
CryptMode,
CryptConfig,
DataBlob,
BackupGroup,
decrypt_key,
}
};
use crate::{
REPO_URL_SCHEMA,
KEYFILE_SCHEMA,
KEYFD_SCHEMA,
BackupDir,
api_datastore_list_snapshots,
complete_backup_snapshot,
complete_backup_group,
complete_repository,
connect,
extract_repository_from_value,
record_repository,
keyfile_parameters,
};
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
group: {
type: String,
description: "Backup group.",
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// List backup snapshots.
async fn list_snapshots(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let output_format = get_output_format(&param);
let client = connect(&repo)?;
let group: Option<BackupGroup> = if let Some(path) = param["group"].as_str() {
Some(path.parse()?)
} else {
None
};
let mut data = api_datastore_list_snapshots(&client, repo.store(), group).await?;
record_repository(&repo);
let render_snapshot_path = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
Ok(snapshot.relative_path().to_str().unwrap().to_owned())
};
let render_files = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
let mut filenames = Vec::new();
for file in &item.files {
filenames.push(file.filename.to_string());
}
Ok(tools::format::render_backup_file_list(&filenames[..]))
};
let options = default_table_format_options()
.sortby("backup-type", false)
.sortby("backup-id", false)
.sortby("backup-time", false)
.column(ColumnConfig::new("backup-id").renderer(render_snapshot_path).header("snapshot"))
.column(ColumnConfig::new("size").renderer(tools::format::render_bytes_human_readable))
.column(ColumnConfig::new("files").renderer(render_files))
;
let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOTS;
format_and_print_result_full(&mut data, info, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// List snapshot files.
async fn list_snapshot_files(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let output_format = get_output_format(&param);
let client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/files", repo.store());
let mut result = client.get(&path, Some(json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
}))).await?;
record_repository(&repo);
let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOT_FILES;
let mut data: Value = result["data"].take();
let options = default_table_format_options();
format_and_print_result_full(&mut data, info, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
}
}
)]
/// Forget (remove) backup snapshots.
async fn forget_snapshots(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let mut client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());
let result = client.delete(&path, Some(json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
}))).await?;
record_repository(&repo);
Ok(result)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Group/Snapshot path.",
},
logfile: {
type: String,
description: "The path to the log file you want to upload.",
},
keyfile: {
schema: KEYFILE_SCHEMA,
optional: true,
},
"keyfd": {
schema: KEYFD_SCHEMA,
optional: true,
},
"crypt-mode": {
type: CryptMode,
optional: true,
},
}
}
)]
/// Upload backup log file.
async fn upload_log(param: Value) -> Result<Value, Error> {
let logfile = tools::required_string_param(&param, "logfile")?;
let repo = extract_repository_from_value(&param)?;
let snapshot = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = snapshot.parse()?;
let mut client = connect(&repo)?;
let (keydata, crypt_mode) = keyfile_parameters(&param)?;
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _created, _) = decrypt_key(&key, &crate::key::get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
};
let data = file_get_contents(logfile)?;
// fixme: howto sign log?
let blob = match crypt_mode {
CryptMode::None | CryptMode::SignOnly => DataBlob::encode(&data, None, true)?,
CryptMode::Encrypt => DataBlob::encode(&data, crypt_config.as_ref().map(Arc::as_ref), true)?,
};
let raw_data = blob.into_inner();
let path = format!("api2/json/admin/datastore/{}/upload-backup-log", repo.store());
let args = json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
});
let body = hyper::Body::from(raw_data);
client.upload("application/octet-stream", body, &path, Some(args)).await
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// Show notes
async fn show_notes(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/notes", repo.store());
let args = json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
});
let output_format = get_output_format(&param);
let mut result = client.get(&path, Some(args)).await?;
let notes = result["data"].take();
if output_format == "text" {
if let Some(notes) = notes.as_str() {
println!("{}", notes);
}
} else {
format_and_print_result(
&json!({
"notes": notes,
}),
&output_format,
);
}
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
notes: {
type: String,
description: "The Notes.",
},
}
}
)]
/// Update Notes
async fn update_notes(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let notes = tools::required_string_param(&param, "notes")?;
let snapshot: BackupDir = path.parse()?;
let mut client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/notes", repo.store());
let args = json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
"notes": notes,
});
client.put(&path, Some(args)).await?;
Ok(Value::Null)
}
fn notes_cli() -> CliCommandMap {
CliCommandMap::new()
.insert(
"show",
CliCommand::new(&API_METHOD_SHOW_NOTES)
.arg_param(&["snapshot"])
.completion_cb("snapshot", complete_backup_snapshot),
)
.insert(
"update",
CliCommand::new(&API_METHOD_UPDATE_NOTES)
.arg_param(&["snapshot", "notes"])
.completion_cb("snapshot", complete_backup_snapshot),
)
}
pub fn snapshot_mgtm_cli() -> CliCommandMap {
CliCommandMap::new()
.insert("notes", notes_cli())
.insert(
"list",
CliCommand::new(&API_METHOD_LIST_SNAPSHOTS)
.arg_param(&["group"])
.completion_cb("group", complete_backup_group)
.completion_cb("repository", complete_repository)
)
.insert(
"files",
CliCommand::new(&API_METHOD_LIST_SNAPSHOT_FILES)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot)
)
.insert(
"forget",
CliCommand::new(&API_METHOD_FORGET_SNAPSHOTS)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot)
)
.insert(
"upload-log",
CliCommand::new(&API_METHOD_UPLOAD_LOG)
.arg_param(&["snapshot", "logfile"])
.completion_cb("snapshot", complete_backup_snapshot)
.completion_cb("logfile", tools::complete_file_name)
.completion_cb("keyfile", tools::complete_file_name)
.completion_cb("repository", complete_repository)
)
}

View File

@ -124,7 +124,7 @@ async fn task_stop(param: Value) -> Result<Value, Error> {
     let mut client = connect(&repo)?;

-    let path = format!("api2/json/nodes/localhost/tasks/{}", upid_str);
+    let path = format!("api2/json/nodes/localhost/tasks/{}", tools::percent_encode_component(upid_str));
     let _ = client.delete(&path, None).await?;

     Ok(Value::Null)

View File

@ -475,6 +475,13 @@ impl BackupWriter {
         Ok(index)
     }

+    /// Retrieve backup time of last backup
+    pub async fn previous_backup_time(&self) -> Result<Option<i64>, Error> {
+        let data = self.h2.get("previous_backup_time", None).await?;
+        serde_json::from_value(data)
+            .map_err(|err| format_err!("Failed to parse backup time value returned by server - {}", err))
+    }
+
     /// Download backup manifest (index.json) of last backup
     pub async fn download_previous_manifest(&self) -> Result<BackupManifest, Error> {
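This gives clients a cheap existence check before the (potentially failing) manifest download. A condensed sketch of the decision create_backup makes above; on an older server that lacks the endpoint, the client simply falls back to attempting the download:

async fn wants_previous_manifest(writer: &BackupWriter) -> bool {
    match writer.previous_backup_time().await {
        Ok(Some(_)) => true,  // server reports an earlier snapshot in this group
        Ok(None) => false,    // first backup, nothing to download
        Err(_) => true,       // endpoint missing (older server): just try anyway
    }
}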

View File

@ -534,6 +534,15 @@ impl HttpClient {
         self.request(req).await
     }

+    pub async fn put(
+        &mut self,
+        path: &str,
+        data: Option<Value>,
+    ) -> Result<Value, Error> {
+        let req = Self::request_builder(&self.server, self.port, "PUT", path, data)?;
+        self.request(req).await
+    }
+
     pub async fn download(
         &mut self,
         path: &str,
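PUT was the verb still missing from the client; the snapshot notes update above is its first user. A sketch of that call pattern (crate-internal types, URL as in the diff):

async fn set_notes(client: &mut HttpClient, store: &str, args: serde_json::Value) -> Result<(), anyhow::Error> {
    let path = format!("api2/json/admin/datastore/{}/notes", store);
    client.put(&path, Some(args)).await?;
    Ok(())
}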

View File

@@ -3,6 +3,7 @@ use std::task::{Context, Poll};
 
 use anyhow::{Error};
 use futures::*;
+use pin_project::pin_project;
 
 use crate::backup::ChunkInfo;
 
@@ -15,7 +16,9 @@ pub trait MergeKnownChunks: Sized {
     fn merge_known_chunks(self) -> MergeKnownChunksQueue<Self>;
 }
 
+#[pin_project]
 pub struct MergeKnownChunksQueue<S> {
+    #[pin]
     input: S,
     buffer: Option<MergedChunkInfo>,
 }
@@ -39,10 +42,10 @@ where
     type Item = Result<MergedChunkInfo, Error>;
 
     fn poll_next(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {
-        let this = unsafe { self.get_unchecked_mut() };
+        let mut this = self.project();
 
         loop {
-            match ready!(unsafe { Pin::new_unchecked(&mut this.input) }.poll_next(cx)) {
+            match ready!(this.input.as_mut().poll_next(cx)) {
                 Some(Err(err)) => return Poll::Ready(Some(Err(err))),
                 None => {
                     if let Some(last) = this.buffer.take() {
@@ -58,13 +61,13 @@ where
                         match last {
                             None => {
-                                this.buffer = Some(MergedChunkInfo::Known(list));
+                                *this.buffer = Some(MergedChunkInfo::Known(list));
                                 // continue
                             }
                             Some(MergedChunkInfo::Known(mut last_list)) => {
                                 last_list.extend_from_slice(&list);
                                 let len = last_list.len();
-                                this.buffer = Some(MergedChunkInfo::Known(last_list));
+                                *this.buffer = Some(MergedChunkInfo::Known(last_list));
 
                                 if len >= 64 {
                                     return Poll::Ready(this.buffer.take().map(Ok));
@@ -72,7 +75,7 @@ where
                                 // continue
                             }
                             Some(MergedChunkInfo::New(_)) => {
-                                this.buffer = Some(MergedChunkInfo::Known(list));
+                                *this.buffer = Some(MergedChunkInfo::Known(list));
                                 return Poll::Ready(last.map(Ok));
                             }
                         }
@@ -80,7 +83,7 @@ where
                 MergedChunkInfo::New(chunk_info) => {
                     let new = MergedChunkInfo::New(chunk_info);
                     if let Some(last) = this.buffer.take() {
-                        this.buffer = Some(new);
+                        *this.buffer = Some(new);
                         return Poll::Ready(Some(Ok(last)));
                     } else {
                         return Poll::Ready(Some(Ok(new)));
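The refactor replaces hand-rolled unsafe pinning with the pin-project crate's generated projection. A minimal self-contained sketch of the pattern with illustrative names: fields marked #[pin] project to Pin<&mut T>, the rest to plain &mut T, so neither get_unchecked_mut nor Pin::new_unchecked is needed:

    use std::pin::Pin;
    use std::task::{Context, Poll};

    use futures::stream::Stream;
    use pin_project::pin_project;

    #[pin_project]
    struct Passthrough<S> {
        #[pin]
        input: S,      // structurally pinned: projected as Pin<&mut S>
        polled: usize, // not pinned: projected as &mut usize
    }

    impl<S: Stream> Stream for Passthrough<S> {
        type Item = S::Item;

        fn poll_next(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {
            let this = self.project();
            *this.polled += 1;       // plain field: assign through &mut
            this.input.poll_next(cx) // pinned field: poll it safely
        }
    }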

View File

@@ -2,6 +2,7 @@ use anyhow::{bail, Error};
 use serde_json::json;
 
 use super::HttpClient;
+use crate::tools;
 
 pub async fn display_task_log(
     client: HttpClient,
@@ -9,7 +10,7 @@ pub async fn display_task_log(
     strip_date: bool,
 ) -> Result<(), Error> {
 
-    let path = format!("api2/json/nodes/localhost/tasks/{}/log", upid_str);
+    let path = format!("api2/json/nodes/localhost/tasks/{}/log", tools::percent_encode_component(upid_str));
 
     let mut start = 1;
     let limit = 500;

View File

@@ -237,7 +237,15 @@ impl<'a, 'b> Archiver<'a, 'b> {
         let old_patterns_count = self.patterns.len();
         self.read_pxar_excludes(dir.as_raw_fd())?;
 
-        let file_list = self.generate_directory_file_list(&mut dir, is_root)?;
+        let mut file_list = self.generate_directory_file_list(&mut dir, is_root)?;
+
+        if is_root && old_patterns_count > 0 {
+            file_list.push(FileListEntry {
+                name: CString::new(".pxarexclude-cli").unwrap(),
+                path: PathBuf::new(),
+                stat: unsafe { std::mem::zeroed() },
+            });
+        }
 
         let dir_fd = dir.as_raw_fd();
 
@@ -247,7 +255,7 @@ impl<'a, 'b> Archiver<'a, 'b> {
             let file_name = file_entry.name.to_bytes();
 
             if is_root && file_name == b".pxarexclude-cli" {
-                self.encode_pxarexclude_cli(encoder, &file_entry.name)?;
+                self.encode_pxarexclude_cli(encoder, &file_entry.name, old_patterns_count)?;
                 continue;
             }
 
@@ -379,8 +387,9 @@ impl<'a, 'b> Archiver<'a, 'b> {
         &mut self,
         encoder: &mut Encoder,
         file_name: &CStr,
+        patterns_count: usize,
     ) -> Result<(), Error> {
-        let content = generate_pxar_excludes_cli(&self.patterns);
+        let content = generate_pxar_excludes_cli(&self.patterns[..patterns_count]);
 
         if let Some(ref mut catalog) = self.catalog {
             catalog.add_file(file_name, content.len() as u64, 0)?;
@@ -404,14 +413,6 @@ impl<'a, 'b> Archiver<'a, 'b> {
 
         let mut file_list = Vec::new();
 
-        if is_root && !self.patterns.is_empty() {
-            file_list.push(FileListEntry {
-                name: CString::new(".pxarexclude-cli").unwrap(),
-                path: PathBuf::new(),
-                stat: unsafe { std::mem::zeroed() },
-            });
-        }
-
         for file in dir.iter() {
             let file = file?;
 
@@ -425,10 +426,6 @@ impl<'a, 'b> Archiver<'a, 'b> {
                 continue;
             }
 
-            if file_name_bytes == b".pxarexclude" {
-                continue;
-            }
-
             let os_file_name = OsStr::from_bytes(file_name_bytes);
             assert_single_path_component(os_file_name)?;
             let full_path = self.path.join(os_file_name);
@@ -443,9 +440,10 @@ impl<'a, 'b> Archiver<'a, 'b> {
                 Err(err) => bail!("stat failed on {:?}: {}", full_path, err),
             };
 
+            let match_path = PathBuf::from("/").join(full_path.clone());
             if self
                 .patterns
-                .matches(full_path.as_os_str().as_bytes(), Some(stat.st_mode as u32))
+                .matches(match_path.as_os_str().as_bytes(), Some(stat.st_mode as u32))
                 == Some(MatchType::Exclude)
             {
                 continue;
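Exclude matching now runs against a path anchored at "/" rather than the bare relative path, so rooted patterns only match from the archive root. A tiny illustration of the anchoring step:

    use std::path::PathBuf;

    fn main() {
        // The archiver's path is relative while descending directories.
        let full_path = PathBuf::from("var/log/syslog");

        // Anchor it at "/" before matching, as the diff above does, so a
        // pattern like "/var/log" cannot match a nested "foo/var/log".
        let match_path = PathBuf::from("/").join(full_path.clone());
        assert_eq!(match_path, PathBuf::from("/var/log/syslog"));
    }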

View File

@@ -81,7 +81,7 @@ const VERIFY_ERR_TEMPLATE: &str = r###"
 
 Job ID:    {{job.id}}
 Datastore: {{job.store}}
 
-Verification failed on these snapshots:
+Verification failed on these snapshots/groups:
 
 {{#each errors}}
   {{this~}}

View File

@@ -20,6 +20,7 @@ fn files() -> Vec<&'static str> {
 fn commands() -> Vec<(&'static str, Vec<&'static str>)> {
     vec![
         // ("<command>", vec![<arg [, arg]>])
+        ("proxmox-backup-manager", vec!["versions", "--verbose"]),
         ("df", vec!["-h"]),
         ("lsblk", vec!["--ascii"]),
         ("zpool", vec!["status"]),
@@ -54,10 +55,10 @@ pub fn generate_report() -> String {
         .map(|file_name| {
             let content = match file_read_optional_string(Path::new(file_name)) {
                 Ok(Some(content)) => content,
-                Ok(None) => String::from("# file does not exists"),
+                Ok(None) => String::from("# file does not exist"),
                 Err(err) => err.to_string(),
             };
-            format!("# cat '{}'\n{}", file_name, content)
+            format!("$ cat '{}'\n{}", file_name, content)
         })
         .collect::<Vec<String>>()
         .join("\n\n");
@@ -73,14 +74,14 @@ pub fn generate_report() -> String {
                 Ok(output) => String::from_utf8_lossy(&output.stdout).to_string(),
                 Err(err) => err.to_string(),
             };
-            format!("# `{} {}`\n{}", command, args.join(" "), output)
+            format!("$ `{} {}`\n{}", command, args.join(" "), output)
         })
         .collect::<Vec<String>>()
         .join("\n\n");
 
     let function_outputs = function_calls()
         .iter()
-        .map(|(desc, function)| format!("# {}\n{}", desc, function()))
+        .map(|(desc, function)| format!("$ {}\n{}", desc, function()))
         .collect::<Vec<String>>()
         .join("\n\n");

View File

@@ -623,6 +623,10 @@ fn check_auth(
             .ok_or_else(|| format_err!("failed to split API token header"))?;
         let tokenid: Authid = tokenid.parse()?;
 
+        if !user_info.is_active_auth_id(&tokenid) {
+            bail!("user account or token disabled or expired.");
+        }
+
         let tokensecret = parts.next()
             .ok_or_else(|| format_err!("failed to split API token header"))?;
         let tokensecret = percent_decode_str(tokensecret)

View File

@@ -69,8 +69,15 @@ pub fn do_verification_job(
             let result = verify_all_backups(datastore, worker.clone(), worker.upid(), None, Some(&filter));
 
             let job_result = match result {
-                Ok(ref errors) if errors.is_empty() => Ok(()),
-                Ok(_) => Err(format_err!("verification failed - please check the log for details")),
+                Ok(ref failed_dirs) if failed_dirs.is_empty() => Ok(()),
+                Ok(ref failed_dirs) => {
+                    worker.log("Failed to verify the following snapshots/groups:");
+                    for dir in failed_dirs {
+                        worker.log(format!("\t{}", dir));
+                    }
+                    Err(format_err!("verification failed - please check the log for details"))
+                },
                 Err(_) => Err(format_err!("verification failed - job aborted")),
             };

View File

@@ -5,16 +5,14 @@ use std::any::Any;
 use std::collections::HashMap;
 use std::hash::BuildHasher;
 use std::fs::File;
-use std::io::{self, BufRead, ErrorKind, Read, Seek, SeekFrom};
+use std::io::{self, BufRead, Read, Seek, SeekFrom};
 use std::os::unix::io::RawFd;
 use std::path::Path;
 
 use anyhow::{bail, format_err, Error};
 use serde_json::Value;
 use openssl::hash::{hash, DigestBytes, MessageDigest};
-use percent_encoding::AsciiSet;
-
-use proxmox::tools::vec;
+use percent_encoding::{utf8_percent_encode, AsciiSet};
 
 pub use proxmox::tools::fd::Fd;
 
@@ -25,45 +23,43 @@ pub mod borrow;
 pub mod cert;
 pub mod daemon;
 pub mod disks;
-pub mod fs;
 pub mod format;
-pub mod lru_cache;
-pub mod runtime;
-pub mod ticket;
-pub mod statistics;
-pub mod systemd;
-pub mod nom;
+pub mod fs;
+pub mod fuse_loop;
+pub mod http;
 pub mod logrotate;
 pub mod loopdev;
-pub mod fuse_loop;
+pub mod lru_cache;
+pub mod nom;
+pub mod runtime;
 pub mod socket;
+pub mod statistics;
 pub mod subscription;
+pub mod systemd;
+pub mod ticket;
+pub mod xattr;
 pub mod zip;
-pub mod http;
 
-mod parallel_handler;
-pub use parallel_handler::*;
+pub mod parallel_handler;
+pub use parallel_handler::ParallelHandler;
 
 mod wrapped_reader_stream;
-pub use wrapped_reader_stream::*;
+pub use wrapped_reader_stream::{AsyncReaderStream, StdChannelStream, WrappedReaderStream};
 
 mod async_channel_writer;
-pub use async_channel_writer::*;
+pub use async_channel_writer::AsyncChannelWriter;
 
 mod std_channel_writer;
-pub use std_channel_writer::*;
+pub use std_channel_writer::StdChannelWriter;
 
-pub mod xattr;
-
 mod process_locker;
-pub use process_locker::*;
+pub use process_locker::{ProcessLocker, ProcessLockExclusiveGuard, ProcessLockSharedGuard};
 
 mod file_logger;
-pub use file_logger::*;
+pub use file_logger::{FileLogger, FileLogOptions};
 
 mod broadcast_future;
-pub use broadcast_future::*;
+pub use broadcast_future::{BroadcastData, BroadcastFuture};
 
 /// The `BufferedRead` trait provides a single function
 /// `buffered_read`. It returns a reference to an internal buffer. The
@@ -75,76 +71,6 @@ pub trait BufferedRead {
     fn buffered_read(&mut self, offset: u64) -> Result<&[u8], Error>;
 }
 
-/// Split a file into equal sized chunks. The last chunk may be
-/// smaller. Note: We cannot implement an `Iterator`, because iterators
-/// cannot return a borrowed buffer ref (we want zero-copy)
-pub fn file_chunker<C, R>(mut file: R, chunk_size: usize, mut chunk_cb: C) -> Result<(), Error>
-where
-    C: FnMut(usize, &[u8]) -> Result<bool, Error>,
-    R: Read,
-{
-    const READ_BUFFER_SIZE: usize = 4 * 1024 * 1024; // 4M
-
-    if chunk_size > READ_BUFFER_SIZE {
-        bail!("chunk size too large!");
-    }
-
-    let mut buf = vec::undefined(READ_BUFFER_SIZE);
-
-    let mut pos = 0;
-    let mut file_pos = 0;
-    loop {
-        let mut eof = false;
-        let mut tmp = &mut buf[..];
-        // try to read large portions, at least chunk_size
-        while pos < chunk_size {
-            match file.read(tmp) {
-                Ok(0) => {
-                    eof = true;
-                    break;
-                }
-                Ok(n) => {
-                    pos += n;
-                    if pos > chunk_size {
-                        break;
-                    }
-                    tmp = &mut tmp[n..];
-                }
-                Err(ref e) if e.kind() == ErrorKind::Interrupted => { /* try again */ }
-                Err(e) => bail!("read chunk failed - {}", e.to_string()),
-            }
-        }
-        let mut start = 0;
-        while start + chunk_size <= pos {
-            if !(chunk_cb)(file_pos, &buf[start..start + chunk_size])? {
-                break;
-            }
-            file_pos += chunk_size;
-            start += chunk_size;
-        }
-        if eof {
-            if start < pos {
-                (chunk_cb)(file_pos, &buf[start..pos])?;
-                //file_pos += pos - start;
-            }
-            break;
-        } else {
-            let rest = pos - start;
-            if rest > 0 {
-                let ptr = buf.as_mut_ptr();
-                unsafe {
-                    std::ptr::copy_nonoverlapping(ptr.add(start), ptr, rest);
-                }
-                pos = rest;
-            } else {
-                pos = 0;
-            }
-        }
-    }
-
-    Ok(())
-}
-
 pub fn json_object_to_query(data: Value) -> Result<String, Error> {
     let mut query = url::form_urlencoded::Serializer::new(String::new());
 
@@ -363,6 +289,11 @@ pub fn extract_cookie(cookie: &str, cookie_name: &str) -> Option<String> {
     None
 }
 
+/// percent encode a url component
+pub fn percent_encode_component(comp: &str) -> String {
+    utf8_percent_encode(comp, percent_encoding::NON_ALPHANUMERIC).to_string()
+}
+
 pub fn join(data: &Vec<String>, sep: char) -> String {
     let mut list = String::new();

View File

@@ -361,6 +361,7 @@ where
             },
             priority: priority_res,
             section: section_res,
+            extra_info: None,
         });
     }
 }
} }

View File

@@ -102,6 +102,40 @@ impl From<u64> for HumanByte {
     }
 }
 
+pub fn as_fingerprint(bytes: &[u8]) -> String {
+    proxmox::tools::digest_to_hex(bytes)
+        .as_bytes()
+        .chunks(2)
+        .map(|v| std::str::from_utf8(v).unwrap())
+        .collect::<Vec<&str>>().join(":")
+}
+
+pub mod bytes_as_fingerprint {
+    use serde::{Deserialize, Serializer, Deserializer};
+
+    pub fn serialize<S>(
+        bytes: &[u8; 32],
+        serializer: S,
+    ) -> Result<S::Ok, S::Error>
+    where
+        S: Serializer,
+    {
+        let s = crate::tools::format::as_fingerprint(bytes);
+        serializer.serialize_str(&s)
+    }
+
+    pub fn deserialize<'de, D>(
+        deserializer: D,
+    ) -> Result<[u8; 32], D::Error>
+    where
+        D: Deserializer<'de>,
+    {
+        let mut s = String::deserialize(deserializer)?;
+        s.retain(|c| c != ':');
+        proxmox::tools::hex_to_digest(&s).map_err(serde::de::Error::custom)
+    }
+}
+
 #[test]
 fn correct_byte_convert() {
     fn convert(b: usize) -> String {
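The fingerprint format is the hex digest split into colon-separated byte pairs, and the GUI's renderKeyID (below) shows its first 23 characters, i.e. exactly the first 8 bytes. A standalone sketch of the formatting, using plain formatting instead of proxmox::tools::digest_to_hex:

    // Minimal re-implementation for illustration only.
    fn as_fingerprint(bytes: &[u8]) -> String {
        bytes.iter()
            .map(|b| format!("{:02x}", b))
            .collect::<Vec<_>>()
            .join(":")
    }

    fn main() {
        let digest = [0xab_u8; 32];
        let fp = as_fingerprint(&digest);
        // 8 bytes -> 16 hex chars + 7 colons = 23 chars, the "key ID" shown in the GUI
        assert_eq!(&fp[..23], "ab:ab:ab:ab:ab:ab:ab:ab");
    }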

View File

@@ -395,7 +395,7 @@ Ext.define('PBS.dashboard.SubscriptionInfo', {
 	    break;
 	case 0:
 	    icon = 'times-circle critical';
-	    message = gettext('<h1>No valid subscription</h1>' + PBS.Utils.noSubKeyHtml);
+	    message = `<h1>${gettext('No valid subscription')}</h1>${PBS.Utils.noSubKeyHtml}`;
 	    break;
 	default:
 	    throw 'invalid subscription status';

View File

@@ -83,7 +83,7 @@ js/proxmox-backup-gui.js: .lint-incremental js OnlineHelpInfo.js ${JSSRC}
 
 .PHONY: check
 check:
-	eslint ${JSSRC}
+	eslint --strict ${JSSRC}
 	touch ".lint-incremental"
 
 .lint-incremental: ${JSSRC}

View File

@@ -75,42 +75,6 @@ const proxmoxOnlineHelpInfo = {
         "link": "/docs/pve-integration.html#pve-integration",
         "title": "`Proxmox VE`_ Integration"
     },
-    "rst-primer": {
-        "link": "/docs/reStructuredText-primer.html#rst-primer",
-        "title": "reStructuredText Primer"
-    },
-    "rst-inline-markup": {
-        "link": "/docs/reStructuredText-primer.html#rst-inline-markup",
-        "title": "Inline markup"
-    },
-    "rst-literal-blocks": {
-        "link": "/docs/reStructuredText-primer.html#rst-literal-blocks",
-        "title": "Literal blocks"
-    },
-    "rst-doctest-blocks": {
-        "link": "/docs/reStructuredText-primer.html#rst-doctest-blocks",
-        "title": "Doctest blocks"
-    },
-    "rst-tables": {
-        "link": "/docs/reStructuredText-primer.html#rst-tables",
-        "title": "Tables"
-    },
-    "rst-field-lists": {
-        "link": "/docs/reStructuredText-primer.html#rst-field-lists",
-        "title": "Field Lists"
-    },
-    "rst-roles-alt": {
-        "link": "/docs/reStructuredText-primer.html#rst-roles-alt",
-        "title": "Roles"
-    },
-    "rst-directives": {
-        "link": "/docs/reStructuredText-primer.html#rst-directives",
-        "title": "Directives"
-    },
-    "html-meta": {
-        "link": "/docs/reStructuredText-primer.html#html-meta",
-        "title": "HTML Metadata"
-    },
     "storage-disk-management": {
         "link": "/docs/storage.html#storage-disk-management",
         "title": "Disk Management"

View File

@@ -161,6 +161,11 @@ Ext.define('PBS.Utils', {
 	return `Datastore ${what} ${id}`;
     },
 
+    // mimics Display trait in backend
+    renderKeyID: function(fingerprint) {
+	return fingerprint.substring(0, 23);
+    },
+
     parse_datastore_worker_id: function(type, id) {
 	let result;
 	let res;

View File

@@ -76,6 +76,17 @@ Ext.define('PBS.config.UserView', {
 	}).show();
     },
 
+    renderName: function(val, cell, rec) {
+	let name = [];
+	if (rec.data.firstname) {
+	    name.push(rec.data.firstname);
+	}
+	if (rec.data.lastname) {
+	    name.push(rec.data.lastname);
+	}
+	return name.join(' ');
+    },
+
     renderUsername: function(userid) {
 	return Ext.String.htmlEncode(userid.match(/^(.+)@([^@]+)$/)[1]);
     },
@@ -181,6 +192,7 @@ Ext.define('PBS.config.UserView', {
 	    width: 150,
 	    sortable: true,
 	    dataIndex: 'firstname',
+	    renderer: 'renderName',
 	},
 	{
 	    header: gettext('Comment'),

View File

@@ -12,6 +12,7 @@ Ext.define('pbs-data-store-snapshots', {
 	'files',
 	'owner',
 	'verification',
+	'fingerprint',
 	{ name: 'size', type: 'int', allowNull: true },
 	{
 	    name: 'crypt-mode',
@@ -182,6 +183,7 @@ Ext.define('PBS.DataStoreContent', {
 	    for (const file of data.files) {
 		file.text = file.filename;
 		file['crypt-mode'] = PBS.Utils.cryptmap.indexOf(file['crypt-mode']);
+		file.fingerprint = data.fingerprint;
 		file.leaf = true;
 		file.matchesFilter = true;
 
@@ -699,7 +701,16 @@ Ext.define('PBS.DataStoreContent', {
 		if (iconCls) {
 		    iconTxt = `<i class="fa fa-fw fa-${iconCls}"></i> `;
 		}
-		return (iconTxt + PBS.Utils.cryptText[v]) || Proxmox.Utils.unknownText;
+		let tip;
+		if (v !== PBS.Utils.cryptmap.indexOf('none') && record.data.fingerprint !== undefined) {
+		    tip = "Key: " + PBS.Utils.renderKeyID(record.data.fingerprint);
+		}
+		let txt = (iconTxt + PBS.Utils.cryptText[v]) || Proxmox.Utils.unknownText;
+		if (record.parentNode.id === 'root' || tip === undefined) {
+		    return txt;
+		} else {
+		    return `<span data-qtip="${tip}">${txt}</span>`;
+		}
 	    },
 	},
 	{

View File

@@ -84,13 +84,18 @@ Ext.define('PBS.datastore.DataStoreList', {
 	for (const [store, panel] of Object.entries(me.datastores)) {
 	    if (!found[store]) {
 		me.remove(panel);
+		delete me.datastores[store];
 	    }
 	}
+
+	let hasDatastores = Object.keys(me.datastores).length > 0;
+	me.getComponent('emptybox').setHidden(hasDatastores);
     },
 
     addSorted: function(data) {
 	let me = this;
-	let i = 0;
+	let i = 1;
 	let datastores = Object
 	    .keys(me.datastores)
 	    .sort((a, b) => a.localeCompare(b));
@@ -115,7 +120,14 @@ Ext.define('PBS.datastore.DataStoreList', {
     initComponent: function() {
 	let me = this;
 
-	me.items = [];
+	me.items = [
+	    {
+		itemId: 'emptybox',
+		hidden: true,
+		xtype: 'box',
+		html: gettext('No Datastores configured'),
+	    },
+	];
 	me.datastores = {};
 	// todo make configurable?
 	me.since = (Date.now()/1000 - 30 * 24*3600).toFixed(0);

View File

@@ -56,7 +56,7 @@ Ext.define('PBS.DataStoreNotes', {
 	    url: me.url,
 	    waitMsgTarget: me,
 	    failure: function(response, opts) {
-		me.update(gettext('Error') + " " + response.htmlStatus);
+		Ext.Msg.alert(gettext('Error'), response.htmlStatus);
 		me.setCollapsed(false);
 	    },
 	    success: function(response, opts) {

View File

@@ -78,7 +78,7 @@ Ext.define('PBS.DataStoreInfo', {
 	let datastore = encodeURIComponent(view.datastore);
 	me.store = Ext.create('Proxmox.data.ObjectStore', {
 	    interval: 5*1000,
-	    url: `/api2/json/admin/datastore/${datastore}/status`,
+	    url: `/api2/json/admin/datastore/${datastore}/status/?verbose=true`,
 	});
 	me.store.on('load', me.onLoad, me);
     },
@@ -264,6 +264,13 @@ Ext.define('PBS.DataStoreSummary', {
 	    me.down('pbsDataStoreInfo').setTitle(`${me.datastore} (${path})`);
 	    me.down('pbsDataStoreNotes').setNotes(response.result.data.comment);
 	},
+	failure: function(response) {
+	    // fallback if e.g. we have no permissions to the config
+	    let rec = Ext.getStore('pbs-datastore-list').findRecord('store', me.datastore);
+	    if (rec) {
+		me.down('pbsDataStoreNotes').setNotes(rec.data.comment || "");
+	    }
+	},
     });
 
     me.query('proxmoxRRDChart').forEach((chart) => {

View File

@@ -14,66 +14,54 @@ Ext.define('PBS.panel.PruneInputPanel', {
     column1: [
 	{
-	    xtype: 'proxmoxintegerfield',
+	    xtype: 'pbsPruneKeepInput',
+	    fieldLabel: gettext('Keep Last'),
 	    name: 'keep-last',
-	    fieldLabel: gettext('keep-last'),
 	    cbind: {
 		deleteEmpty: '{!isCreate}',
 	    },
-	    minValue: 1,
-	    allowBlank: true,
 	},
 	{
-	    xtype: 'proxmoxintegerfield',
+	    xtype: 'pbsPruneKeepInput',
+	    fieldLabel: gettext('Keep Daily'),
 	    name: 'keep-daily',
-	    fieldLabel: gettext('Keep Daily'),
 	    cbind: {
 		deleteEmpty: '{!isCreate}',
 	    },
-	    minValue: 1,
-	    allowBlank: true,
 	},
 	{
-	    xtype: 'proxmoxintegerfield',
+	    xtype: 'pbsPruneKeepInput',
+	    fieldLabel: gettext('Keep Monthly'),
 	    name: 'keep-monthly',
-	    fieldLabel: gettext('Keep Monthly'),
 	    cbind: {
 		deleteEmpty: '{!isCreate}',
 	    },
-	    minValue: 1,
-	    allowBlank: true,
 	},
     ],
     column2: [
 	{
-	    xtype: 'proxmoxintegerfield',
+	    xtype: 'pbsPruneKeepInput',
 	    fieldLabel: gettext('Keep Hourly'),
 	    name: 'keep-hourly',
 	    cbind: {
 		deleteEmpty: '{!isCreate}',
 	    },
-	    minValue: 1,
-	    allowBlank: true,
 	},
 	{
-	    xtype: 'proxmoxintegerfield',
+	    xtype: 'pbsPruneKeepInput',
+	    fieldLabel: gettext('Keep Weekly'),
 	    name: 'keep-weekly',
-	    fieldLabel: gettext('Keep Weekly'),
 	    cbind: {
 		deleteEmpty: '{!isCreate}',
 	    },
-	    minValue: 1,
-	    allowBlank: true,
 	},
 	{
-	    xtype: 'proxmoxintegerfield',
+	    xtype: 'pbsPruneKeepInput',
+	    fieldLabel: gettext('Keep Yearly'),
 	    name: 'keep-yearly',
-	    fieldLabel: gettext('Keep Yearly'),
 	    cbind: {
 		deleteEmpty: '{!isCreate}',
 	    },
-	    minValue: 1,
-	    allowBlank: true,
 	},
     ],

View File

@@ -100,17 +100,26 @@ Ext.define('PBS.window.UserEdit', {
 		xtype: 'proxmoxtextfield',
 		name: 'firstname',
 		fieldLabel: gettext('First Name'),
+		cbind: {
+		    deleteEmpty: '{!isCreate}',
+		},
 	    },
 	    {
 		xtype: 'proxmoxtextfield',
 		name: 'lastname',
 		fieldLabel: gettext('Last Name'),
+		cbind: {
+		    deleteEmpty: '{!isCreate}',
+		},
 	    },
 	    {
 		xtype: 'proxmoxtextfield',
 		name: 'email',
 		fieldLabel: gettext('E-Mail'),
 		vtype: 'proxmoxMail',
+		cbind: {
+		    deleteEmpty: '{!isCreate}',
+		},
 	    },
 	],
 
@@ -119,6 +128,9 @@ Ext.define('PBS.window.UserEdit', {
 		xtype: 'proxmoxtextfield',
 		name: 'comment',
 		fieldLabel: gettext('Comment'),
+		cbind: {
+		    deleteEmpty: '{!isCreate}',
+		},
 	    },
 	],
     },