Compare commits

...

77 Commits

Author SHA1 Message Date
96f35520a0 bump version to 1.0.5-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-25 15:30:06 +01:00
490560e0c6 restore: print to STDERR
else restoring to STDOUT is broken..

Reported-by: Dominic Jäger <d.jaeger@proxmox.com>

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-25 14:38:02 +01:00
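The underlying rule: when an archive can be streamed to STDOUT, every log or progress line has to go to STDERR, or it gets mixed into the restored data. A minimal Rust sketch of that separation (illustrative only, not the actual client code):

    use std::io::{self, Write};

    // Illustrative only: restored bytes go to stdout, status text to stderr.
    fn restore_to_stdout(chunks: &[Vec<u8>]) -> io::Result<()> {
        let mut out = io::stdout();
        for (i, chunk) in chunks.iter().enumerate() {
            // eprintln! writes to stderr, keeping stdout a clean data stream
            eprintln!("restoring chunk {} of {}", i + 1, chunks.len());
            out.write_all(chunk)?;
        }
        out.flush()
    }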
52f53d8280 control: update versions 2020-11-25 10:35:51 +01:00
27b8a3f671 bump version to 1.0.4-1 2020-11-25 08:03:11 +01:00
abf9b6da42 docs: fix renamed commands 2020-11-25 08:03:11 +01:00
0c9209b04c cli: rename command "upload-log" to "snapshot upload-log" 2020-11-25 07:57:39 +01:00
edebd52374 cli: rename command "forget" to "snapshot forget" 2020-11-25 07:57:39 +01:00
61205f00fb cli: rename command "files" to "snapshot files" 2020-11-25 07:57:39 +01:00
a303e00289 fingerprint: add new() method 2020-11-25 07:57:39 +01:00
af9f72e9d8 fingerprint: add bytes() accessor
needed for libproxmox-backup-qemu0

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-25 06:34:34 +01:00
5176346b30 ui: fix broken gettext use
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-25 00:21:17 +01:00
731eeef25b cli: use new alias feature for "snapshots"
Now maps to "snapshot list".
2020-11-24 13:26:43 +01:00
a65e3e4bc0 client: add 'snapshot notes show/update' command
to show and update snapshot notes from the cli

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-24 11:44:19 +01:00
027eb2bbe6 bump version to 1.0.3-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-24 08:56:18 +01:00
6982a54701 gui: add snapshot/file fingerprint tooltip
display the short key ID, like the backend's Display trait.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
035c40e638 list_snapshots: return manifest fingerprint
for display in clients.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
79c535955d refactor BackupInfo -> SnapshotListItem helper
before adding more fields to the tuple, let's just create the struct
inside the match arms to improve readability.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
8b7f8d3f3d expose previous backup time in backup env
and use this information to add more information to client backup log
and guide the download manifest decision.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
866c859a1e bump version to 1.0.2-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-24 08:33:20 +01:00
23e4e90540 verification: fix message in notification mail
the errors Vec can contain failed groups as well (e.g., if a group has
no or an invalid owner).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:33:20 +01:00
a4fa3fc241 verification job: log failed dirs
else users have to manually search through a potentially very long task
log to find the entries that are different.. this is the same summary
printed at the end of a manual verify task.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:33:20 +01:00
81d10c3b37 cleanup: remove dead code 2020-11-24 08:03:00 +01:00
f1e2904150 paperkey: refactor common code
from formatting functions to main function, and pass along the key data
lines instead of the full string.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:57:21 +01:00
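The refactor's shape, as a hedged sketch with hypothetical names: the caller splits the key string into lines once, and every output formatter takes the pre-split slice:

    use std::io::Write;

    // Hypothetical formatter: receives key data already split into lines,
    // so each output format no longer re-splits the full string itself.
    fn format_key_text(out: &mut impl Write, lines: &[String]) -> std::io::Result<()> {
        for (i, line) in lines.iter().enumerate() {
            writeln!(out, "{:2}: {}", i, line)?;
        }
        Ok(())
    }

    // Done once in the main function:
    // let lines: Vec<String> = key_string.lines().map(str::to_owned).collect();
    // format_key_text(&mut std::io::stdout(), &lines)?;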
23f9503a31 client: check fingerprint after downloading manifest
this is stricter than the check that happened on manifest load, as it
also fails if the manifest is signed but we don't have a key available.

add some additional output at the start of a backup to indicate whether
a previous manifest is available to base the backup on.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:55:12 +01:00
a0ef68b93c manifest: check fingerprint when loading with key
otherwise loading will run into a signature mismatch, which is
technically true but not the complete picture in this case.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:49:51 +01:00
6b127e6ea0 fix #3139: add key fingerprint to manifest
if the manifest is signed / the contained archives/blobs are encrypted.
Stored in the 'unprotected' area, since there is already a strong binding
between key and manifest via the signature, and this avoids breaking
backwards compatibility for a simple usability improvement.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:45:11 +01:00
5e17dbf2bb cli: cleanup 'key show' - use format_and_print_result_full
We now expose all key derivation functions on the cli, so users can
choose between scrypt and pbkdf2.
2020-11-24 07:32:34 +01:00
dfb04575ad client: add 'key show' command
for (pretty-)printing a keyfile.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:15:29 +01:00
6f2626ae19 client: print key fingerprint and master key
for operations where it makes sense.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:11:26 +01:00
37e60ddcde key: add fingerprint to key config
and set/generate it on
- key creation
- key passphrase change
- key decryption if not already set
- key encryption with master key

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:03:46 +01:00
05cdc05347 crypt config: add fingerprint mechanism
by computing the ID digest of a hash of a static string.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:03:16 +01:00
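The idea, per the crypt_config.rs diff further down: digest a fixed, publicly known constant (itself the SHA-256 of a static string) in a key-dependent way, so identical keys yield identical fingerprints without exposing key material. A hedged sketch — the real compute_digest() is not shown in this diff and may differ in detail:

    use openssl::sha::sha256;

    // FINGERPRINT_INPUT = sha256(b"Proxmox Backup Encryption Key Fingerprint")
    // Assumption for illustration: the keyed digest mixes the secret key
    // into the hash input.
    fn fingerprint(enc_key: &[u8; 32], fingerprint_input: &[u8; 32]) -> [u8; 32] {
        let mut buf = Vec::with_capacity(64);
        buf.extend_from_slice(fingerprint_input);
        buf.extend_from_slice(enc_key);
        sha256(&buf)
    }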
6364115b4b OnlineHelpInfo.js problems
Anybody know why I always get the following diff:
2020-11-23 12:57:41 +01:00
2133cd9103 update debian/control 2020-11-23 12:13:58 +01:00
01f84fcce1 ui: datastore content: use our keep field for group pruning
sets some defaults and provides the clear trigger, so less code and
slightly nicer UX.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-21 19:52:03 +01:00
08b3823025 bump dependency on proxmox to 0.7.1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-20 17:38:34 +01:00
968a0ab261 fix systemd-encoded upid strings in http client
since we systemd-encode parts of the upid string, and those can contain
characters that are invalid in urls (e.g. '\'), we have to percent encode
those

add a 'percent_encode_component' helper, so that we can maybe change
the AsciiSet for all uses at the same time

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-19 11:01:19 +01:00
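A sketch of such a helper on top of the percent-encoding crate (already in the dependency list below); the exact AsciiSet is an assumption, the point being that '\' from systemd-escaping must always be encoded:

    use percent_encoding::{utf8_percent_encode, AsciiSet, CONTROLS};

    // Assumed set for illustration; the real helper may encode more or
    // fewer characters. '\' appears in systemd-escaped UPID parts and is
    // invalid in URLs, so it must be included.
    const COMPONENT_SET: &AsciiSet = &CONTROLS
        .add(b' ').add(b'"').add(b'#').add(b'%')
        .add(b'/').add(b'?').add(b'\\');

    pub fn percent_encode_component(comp: &str) -> String {
        utf8_percent_encode(comp, COMPONENT_SET).to_string()
    }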
21b552848a prune sim: make numberfields more similar to PBS's
by creating a new class that adds a clear trigger and also uses the
clear-trigger image. Code was taken from the one in PBS's prune window,
but we have default values here, so a bit of adapting was necessary. For
example, we don't want to reset to the original value (which might have
been one of the defaults) when clearing, but always to 'null'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-19 09:47:51 +01:00
fd19256470 gc: treat .bad files like regular chunks
Simplify the phase 2 code by treating .bad files just like regular
chunks, with the exception of stat logging.

To facilitate this, we need to touch .bad files in phase 1. We only do this
under the condition that 1) the original chunk is missing (as before),
and 2) the original chunk is still referenced somewhere (since the code
lives in the error handler for a failed chunk touch, it only gets called
for chunks we expect to be there, i.e. ones that are referenced).

Untouched, they will then be cleaned up after 24 hours (or after the last
longer-running task finishes).

Reason 2) is also a fix for .bad files not being cleaned up at all if
the original is no longer referenced anywhere (e.g. a user deleting all
snapshots after seeing some corrupt chunks appear).

cond_touch_path is introduced to touch arbitrary paths in the chunk
store with the same logic as touching chunks.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-18 14:04:49 +01:00
1ed022576c api: include store in invalid owner errors
since a group might exist in plenty of stores

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:11:24 +01:00
f6aa7b38bf drop now unused BackupInfo::list_backups
all global backup listing now happens via BackupGroup

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:11:21 +01:00
fdfcb74d67 api: filter snapshot counts
unprivileged users should only see the counts related to their part of
the datastore.

while we're at it, switch to a list groups, filter groups, count
snapshots approach (like list_snapshots) to speed up calls to this
endpoint when many unprivileged users share a datastore.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:05:50 +01:00
98afc7b152 api: make expensive parts of datastore status opt-in
used in the PBS GUI, but also for PVE usage queries which don't need all
the extra expensive information..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:05:47 +01:00
0d08fceeb9 improve group/snapshot listing
by listing groups first, then filtering, then listing group snapshots.

this cuts down the number of openat/getdirents calls for users that just
have a partial view of the datastore.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 10:37:04 +01:00
3c945d73c2 client/http_client: add put method
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 16:59:14 +01:00
58fcbf5ab7 client: expose all-file-systems option
Useful to avoid the need for a long (and possibly changing) list of include-dev
options in certain situations, e.g. nested ZFS file systems. The option is
already implemented and seems to work as expected. The checks for virtual
filesystems are not affected by this option.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-16 16:59:14 +01:00
3a3f31c947 ui: datastores: hide "no datastore" box by default
avoids showing it during store load; at that point we do not yet know
whether there are any datastores, and a loading mask is already shown.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-16 16:59:14 +01:00
8fc63287df ui: improve comment behaviour for datastore Summary
when we could not load the config (e.g., missing permissions),
show the comment from the global datastore list

also show a message box for a load error instead of setting
the text of the comment box

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 10:39:34 +01:00
172473e4de ui: DataStoreList: show message when there are no datastores
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 10:39:34 +01:00
76f549debb ui: DataStoreList: remove datastores also from hash
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 10:39:34 +01:00
c9097ff801 pxar: avoid including archive root's exclude patterns in .pxarexclude-cli
The patterns from the archive root's .pxarexclude file are already present in
self.patterns when encode_pxarexclude_cli is called. Pass along the number of
CLI patterns and slice accordingly.

Suggested-By: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 13:05:09 +01:00
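The fix in miniature (hedged sketch, generic names): CLI patterns sit at the front of the combined list, so encoding only that prefix keeps the root .pxarexclude entries out of the generated file:

    // Hedged sketch: `all_patterns` holds CLI patterns first, then those
    // read from archive/root/.pxarexclude; only the prefix is encoded
    // into .pxarexclude-cli.
    fn cli_patterns<T>(all_patterns: &[T], cli_pattern_count: usize) -> &[T] {
        &all_patterns[..cli_pattern_count]
    }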
fb01fd3af6 visibility cleanups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:53:50 +01:00
fa4bcbcad0 pxar: only generate .pxarexclude-cli if there were CLI parameters
previously a .pxarexclude entry in the root of the archive caused the file to
be generated as well, because the patterns are read before calling
generate_directory_file_list and within the function it wasn't possible to
distinguish between a pattern coming from the CLI and a pattern coming from
archive/root/.pxarexclude

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:18:08 +01:00
189cdb7427 pxar: include .pxarexclude files in the archive
The documentation states:
.pxarexclude files are treated as regular files and will be included in the
backup archive.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:18:06 +01:00
874bd5454d pxar: fix anchored exclusion at archive root
There is no leading slash in an entry's full_path, causing an anchored
exclude at the root level to fail, e.g. having "/name" as the content of the
file archive/root/.pxarexclude didn't match the file archive/root/name

Fix this by prepending a leading slash before matching.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:18:04 +01:00
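A minimal sketch of the fix (illustrative, not the actual pxar code): normalize the candidate path to carry a leading slash before handing it to the matcher:

    use std::ffi::{OsStr, OsString};
    use std::os::unix::ffi::{OsStrExt, OsStringExt};

    // Illustrative: anchored patterns like "/name" only match when the
    // candidate path also starts with '/', so prepend one if missing.
    fn normalized_match_path(full_path: &OsStr) -> OsString {
        let mut bytes = full_path.as_bytes().to_vec();
        if !bytes.starts_with(b"/") {
            bytes.insert(0, b'/');
        }
        OsString::from_vec(bytes)
    }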
b649887e9a remove unused function
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:15:15 +01:00
8c62c15f56 followup: whitespace cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 11:02:45 +01:00
51ac17b56e api: apt/versions: fix running_kernel string for unknown package case
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-11-12 11:02:20 +01:00
fc5a012068 manager: versions: non-verbose should actually print server pkg info
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 10:28:03 +01:00
5e293f1315 apt: use typed response for get_versions
...and cleanup get_versions for manager CLI.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-12 10:15:32 +01:00
87367decf2 ui: tell ESLint to be strict in check target
the .lint-incremental target, which is implicitly used by the install
target, is still more forgiving to allow faster "change, build, test"
iteration when developing.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 09:54:39 +01:00
f792220dd4 d/control: update for new pin-project dependency
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 09:54:39 +01:00
97030c9407 cleanup clippy leftovers
this used to contain a pointer cast, now it doesn't

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 09:43:38 +01:00
5d1d0f5d6c use pin-project to remove more unsafe blocks
we already have it in our dependency tree, so use it

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 09:43:38 +01:00
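For context, the pattern pin-project replaces (a generic example, not the types touched by this commit): the attribute generates a safe project() method, so manual Pin::map_unchecked_mut blocks disappear:

    use pin_project::pin_project;
    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    // Generic example: structural pinning of `inner` without unsafe.
    #[pin_project]
    struct Labeled<F> {
        #[pin]
        inner: F,
        label: &'static str,
    }

    impl<F: Future> Future for Labeled<F> {
        type Output = F::Output;
        fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
            let this = self.project(); // safe projection generated by the macro
            this.inner.poll(cx)
        }
    }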
294466ee61 manager: versions: unify printing
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
c100fe9108 add versions command to proxmox-backup-manager
Add the versions command to proxmox-backup-manager with a similar output
to pveversion [-v]. It prints the packages line by line with only the
package name, followed by the version and, for proxmox-backup and
proxmox-backup-server, some additional information (running kernel,
running version).

In addition it supports the optional output-format parameter which can
be used to print the complete data in either json, json-pretty or text
format. If output-format is specified, the --verbose parameter is
ignored and the detailed list of packages is printed.

With the addition of the versions command, the report is extended as
well.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-11-11 18:30:33 +01:00
e754da3ac2 api: versions: add version also in server package unknown case
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
bc1e52bc38 api: versions: rust fmt cleanups
line length limit is 100

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
6f0073bbb5 api: apt update info: do not serialize extra info if none
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
2decf85d6e add extra_info field to APTUpdateInfo
Add an optional string field to APTUpdateInfo which can be used for
extra information.

This is used for passing running kernel and running version information
in the versions API call together with proxmox-backup and
proxmox-backup-server.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-11-11 16:39:11 +01:00
1d8f849457 api2/node/syslog: use 'real_service_name' here also
for now this only does the 'postfix' -> 'postfix@-' conversion; this
fixes the issue that we only showed the 'postfix' service syslog
(which is rather empty in a default setup) instead of the instance one

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 16:36:42 +01:00
beb07279b6 log source of encryption key
This patch prints the source of the encryption key when running
operations with proxmox-backup-client.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-11-11 16:35:20 +01:00
8c6854c8fd inform user when using default encryption key
Currently if you generate a default encryption key:
`proxmox-backup-client key create --kdf none`

all backup operations which don't explicitly disable encryption will be
encrypted with this key.

I found it quite surprising that my backups were all encrypted without
me explicitly specifying either a key or an encryption mode

This patch informs the user when the default key is used (and no
crypt-mode is provided explicitly)

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-11-11 16:35:20 +01:00
57f472d9bb report: use '$' instead of '#' for showing commands
since some files can contain the '#' character in comments (e.g.,
/etc/hosts)

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-11-11 16:19:37 +01:00
94ffca10a2 report: fix grammar error
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-11-11 16:19:33 +01:00
0a274ab0a0 ui: UserView: render name as 'Firstname Lastname'
instead of only the firstname

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 14:09:40 +01:00
c0026563b0 make user properties deletable
by using our usual pattern for the update call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 14:09:40 +01:00
e411924c7c rest: check for disabled token (user)
when authenticating a token, and not just when authenticating a
user/ticket.

Reported-By: Dominik Jäger <d.jaeger@proxmox.com>

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-11 12:21:29 +01:00
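The gist of the check, as a hedged sketch with hypothetical types (the REST-layer code itself is not part of this diff): token authentication now verifies the enable flags of both the token and its owning user:

    // Hypothetical sketch: both the API token and the user owning it must
    // be enabled, mirroring the check already done for ticket logins.
    fn check_auth_enabled(
        auth_id: &str,
        user_enabled: bool,
        token_enabled: Option<bool>, // None for plain user/ticket auth
    ) -> Result<(), String> {
        if !user_enabled {
            return Err(format!("user '{}' is disabled", auth_id));
        }
        if token_enabled == Some(false) {
            return Err(format!("token '{}' is disabled", auth_id));
        }
        Ok(())
    }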
56 changed files with 1514 additions and 817 deletions

View File

@ -1,6 +1,6 @@
[package]
name = "proxmox-backup"
version = "1.0.1"
version = "1.0.5"
authors = [
"Dietmar Maurer <dietmar@proxmox.com>",
"Dominik Csapak <d.csapak@proxmox.com>",
@ -46,8 +46,9 @@ pam = "0.7"
pam-sys = "0.5"
percent-encoding = "2.1"
pin-utils = "0.1.0"
pin-project = "0.4"
pathpatterns = "0.1.2"
proxmox = { version = "0.7.0", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox = { version = "0.7.2", features = [ "sortable-macro", "api-macro", "websocket" ] }
#proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox-fuse = "0.1.0"

debian/changelog vendored (50 changed lines)
View File

@ -1,3 +1,53 @@
rust-proxmox-backup (1.0.5-1) unstable; urgency=medium
* client: restore: print meta information exclusively to standard error
-- Proxmox Support Team <support@proxmox.com> Wed, 25 Nov 2020 15:29:58 +0100
rust-proxmox-backup (1.0.4-1) unstable; urgency=medium
* fingerprint: add bytes() accessor
* ui: fix broken gettext use
* cli: move more commands into "snapshot" sub-command
-- Proxmox Support Team <support@proxmox.com> Wed, 25 Nov 2020 06:37:41 +0100
rust-proxmox-backup (1.0.3-1) unstable; urgency=medium
* client: inform user when automatically using the default encryption key
* ui: UserView: render name as 'Firstname Lastname'
* proxmox-backup-manager: add versions command
* pxar: fix anchored exclusion at archive root
* pxar: include .pxarexclude files in the archive
* client: expose all-file-systems option
* api: make expensive parts of datastore status opt-in
* api: include datastore ID in invalid owner errors
* garbage collection: treat .bad files like regular chunks to ensure they
are removed if not referenced anymore
* client: fix issues with encoded UPID strings
* encryption: add fingerprint to key config
* client: add 'key show' command
* fix #3139: add key fingerprint to backup snapshot manifest and check it
when loading with a key
* ui: add snapshot/file fingerprint tooltip
-- Proxmox Support Team <support@proxmox.com> Tue, 24 Nov 2020 08:55:47 +0100
rust-proxmox-backup (1.0.1-1) unstable; urgency=medium
* ui: datastore summary: drop 'removed bytes' display

debian/control vendored (11 changed lines)
View File

@ -33,11 +33,12 @@ Build-Depends: debhelper (>= 11),
librust-pam-sys-0.5+default-dev,
librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
librust-percent-encoding-2+default-dev (>= 2.1-~~),
librust-pin-project-0.4+default-dev,
librust-pin-utils-0.1+default-dev,
librust-proxmox-0.7+api-macro-dev,
librust-proxmox-0.7+default-dev,
librust-proxmox-0.7+sortable-macro-dev,
librust-proxmox-0.7+websocket-dev,
librust-proxmox-0.7+api-macro-dev (>= 0.7.2-~~),
librust-proxmox-0.7+default-dev (>= 0.7.2-~~),
librust-proxmox-0.7+sortable-macro-dev (>= 0.7.2-~~),
librust-proxmox-0.7+websocket-dev (>= 0.7.2-~~),
librust-proxmox-fuse-0.1+default-dev,
librust-pxar-0.6+default-dev (>= 0.6.1-~~),
librust-pxar-0.6+futures-io-dev (>= 0.6.1-~~),
@ -78,7 +79,7 @@ Build-Depends: debhelper (>= 11),
uuid-dev,
debhelper (>= 12~),
bash-completion,
pve-eslint,
pve-eslint (>= 7.12.1-1),
python3-docutils,
python3-pygments,
rsync,

View File

@ -14,7 +14,7 @@ section = "admin"
build_depends = [
"debhelper (>= 12~)",
"bash-completion",
"pve-eslint",
"pve-eslint (>= 7.12.1-1)",
"python3-docutils",
"python3-pygments",
"rsync",

View File

@ -17,6 +17,7 @@ MANUAL_PAGES := \
PRUNE_SIMULATOR_FILES := \
prune-simulator/index.html \
prune-simulator/documentation.html \
prune-simulator/clear-trigger.png \
prune-simulator/prune-simulator.js
# Sphinx documentation setup

View File

@ -392,11 +392,11 @@ periodic recovery tests to ensure that you can access the data in
case of problems.
First, you need to find the snapshot which you want to restore. The snapshot
command provides a list of all the snapshots on the server:
list command provides a list of all the snapshots on the server:
.. code-block:: console
# proxmox-backup-client snapshots
# proxmox-backup-client snapshot list
┌────────────────────────────────┬─────────────┬────────────────────────────────────┐
│ snapshot │ size │ files │
╞════════════════════════════════╪═════════════╪════════════════════════════════════╡
@ -581,7 +581,7 @@ command:
.. code-block:: console
# proxmox-backup-client forget <snapshot>
# proxmox-backup-client snapshot forget <snapshot>
.. caution:: This command removes all archives in this backup

Binary file not shown (new image, 11 KiB).

View File

@ -33,6 +33,9 @@
.first-of-month {
border-right: dashed black 4px;
}
.clear-trigger {
background-image: url(./clear-trigger.png);
}
</style>
<script type="text/javascript" src="extjs/ext-all.js"></script>

View File

@ -265,6 +265,34 @@ Ext.onReady(function() {
},
});
Ext.define('PBS.PruneSimulatorKeepInput', {
extend: 'Ext.form.field.Number',
alias: 'widget.prunesimulatorKeepInput',
allowBlank: true,
fieldGroup: 'keep',
minValue: 1,
listeners: {
afterrender: function(field) {
this.triggers.clear.setVisible(field.value !== null);
},
change: function(field, newValue, oldValue) {
this.triggers.clear.setVisible(newValue !== null);
},
},
triggers: {
clear: {
cls: 'clear-trigger',
weight: -1,
handler: function() {
this.triggers.clear.setVisible(false);
this.setValue(null);
},
},
},
});
Ext.define('PBS.PruneSimulatorPanel', {
extend: 'Ext.panel.Panel',
alias: 'widget.prunesimulatorPanel',
@ -560,58 +588,37 @@ Ext.onReady(function() {
keepItems: [
{
xtype: 'numberfield',
xtype: 'prunesimulatorKeepInput',
name: 'keep-last',
allowBlank: true,
fieldLabel: 'keep-last',
minValue: 0,
value: 4,
fieldGroup: 'keep',
},
{
xtype: 'numberfield',
xtype: 'prunesimulatorKeepInput',
name: 'keep-hourly',
allowBlank: true,
fieldLabel: 'keep-hourly',
minValue: 0,
value: 0,
fieldGroup: 'keep',
},
{
xtype: 'numberfield',
xtype: 'prunesimulatorKeepInput',
name: 'keep-daily',
allowBlank: true,
fieldLabel: 'keep-daily',
minValue: 0,
value: 5,
fieldGroup: 'keep',
},
{
xtype: 'numberfield',
xtype: 'prunesimulatorKeepInput',
name: 'keep-weekly',
allowBlank: true,
fieldLabel: 'keep-weekly',
minValue: 0,
value: 2,
fieldGroup: 'keep',
},
{
xtype: 'numberfield',
xtype: 'prunesimulatorKeepInput',
name: 'keep-monthly',
allowBlank: true,
fieldLabel: 'keep-monthly',
minValue: 0,
value: 0,
fieldGroup: 'keep',
},
{
xtype: 'numberfield',
xtype: 'prunesimulatorKeepInput',
name: 'keep-yearly',
allowBlank: true,
fieldLabel: 'keep-yearly',
minValue: 0,
value: 0,
fieldGroup: 'keep',
},
],

View File

@ -268,6 +268,21 @@ pub fn read_user(userid: Userid, mut rpcenv: &mut dyn RpcEnvironment) -> Result<
Ok(user)
}
#[api()]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
#[allow(non_camel_case_types)]
pub enum DeletableProperty {
/// Delete the comment property.
comment,
/// Delete the firstname property.
firstname,
/// Delete the lastname property.
lastname,
/// Delete the email property.
email,
}
#[api(
protected: true,
input: {
@ -303,6 +318,14 @@ pub fn read_user(userid: Userid, mut rpcenv: &mut dyn RpcEnvironment) -> Result<
schema: user::EMAIL_SCHEMA,
optional: true,
},
delete: {
description: "List of properties to delete.",
type: Array,
optional: true,
items: {
type: DeletableProperty,
}
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
@ -326,6 +349,7 @@ pub fn update_user(
firstname: Option<String>,
lastname: Option<String>,
email: Option<String>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
) -> Result<(), Error> {
@ -340,6 +364,17 @@ pub fn update_user(
let mut data: user::User = config.lookup("user", userid.as_str())?;
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::comment => data.comment = None,
DeletableProperty::firstname => data.firstname = None,
DeletableProperty::lastname => data.lastname = None,
DeletableProperty::email => data.email = None,
}
}
}
if let Some(comment) = comment {
let comment = comment.trim().to_string();
if comment.is_empty() {

View File

@ -1,4 +1,4 @@
use std::collections::{HashSet, HashMap};
use std::collections::HashSet;
use std::ffi::OsStr;
use std::os::unix::ffi::OsStrExt;
use std::sync::{Arc, Mutex};
@ -125,19 +125,6 @@ fn get_all_snapshot_files(
Ok((manifest, files))
}
fn group_backups(backup_list: Vec<BackupInfo>) -> HashMap<String, Vec<BackupInfo>> {
let mut group_hash = HashMap::new();
for info in backup_list {
let group_id = info.backup_dir.group().group_path().to_str().unwrap().to_owned();
let time_list = group_hash.entry(group_id).or_insert(vec![]);
time_list.push(info);
}
group_hash
}
#[api(
input: {
properties: {
@ -171,45 +158,64 @@ fn list_groups(
let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let datastore = DataStore::lookup_datastore(&store)?;
let backup_list = BackupInfo::list_backups(&datastore.base_path())?;
let group_hash = group_backups(backup_list);
let mut groups = Vec::new();
for (_group_id, mut list) in group_hash {
BackupInfo::sort_list(&mut list, false);
let info = &list[0];
let group = info.backup_dir.group();
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let owner = match datastore.get_owner(group) {
let backup_groups = BackupInfo::list_backup_groups(&datastore.base_path())?;
let group_info = backup_groups
.into_iter()
.fold(Vec::new(), |mut group_info, group| {
let owner = match datastore.get_owner(&group) {
Ok(auth_id) => auth_id,
Err(err) => {
println!("Failed to get owner of group '{}' - {}", group, err);
continue;
eprintln!("Failed to get owner of group '{}/{}' - {}",
&store,
group,
err);
return group_info;
},
};
if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
continue;
return group_info;
}
let result_item = GroupListItem {
let snapshots = match group.list_backups(&datastore.base_path()) {
Ok(snapshots) => snapshots,
Err(_) => {
return group_info;
},
};
let backup_count: u64 = snapshots.len() as u64;
if backup_count == 0 {
return group_info;
}
let last_backup = snapshots
.iter()
.fold(&snapshots[0], |last, curr| {
if curr.is_finished()
&& curr.backup_dir.backup_time() > last.backup_dir.backup_time() {
curr
} else {
last
}
})
.to_owned();
group_info.push(GroupListItem {
backup_type: group.backup_type().to_string(),
backup_id: group.backup_id().to_string(),
last_backup: info.backup_dir.backup_time(),
backup_count: list.len() as u64,
files: info.files.clone(),
last_backup: last_backup.backup_dir.backup_time(),
owner: Some(owner),
};
groups.push(result_item);
}
backup_count,
files: last_backup.files,
});
Ok(groups)
group_info
});
Ok(group_info)
}
#[api(
@ -357,49 +363,56 @@ pub fn list_snapshots (
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let datastore = DataStore::lookup_datastore(&store)?;
let base_path = datastore.base_path();
let backup_list = BackupInfo::list_backups(&base_path)?;
let mut snapshots = vec![];
for info in backup_list {
let group = info.backup_dir.group();
if let Some(ref backup_type) = backup_type {
if backup_type != group.backup_type() { continue; }
}
if let Some(ref backup_id) = backup_id {
if backup_id != group.backup_id() { continue; }
}
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let owner = match datastore.get_owner(group) {
Ok(auth_id) => auth_id,
Err(err) => {
println!("Failed to get owner of group '{}' - {}", group, err);
continue;
let groups = match (backup_type, backup_id) {
(Some(backup_type), Some(backup_id)) => {
let mut groups = Vec::with_capacity(1);
groups.push(BackupGroup::new(backup_type, backup_id));
groups
},
(Some(backup_type), None) => {
BackupInfo::list_backup_groups(&base_path)?
.into_iter()
.filter(|group| group.backup_type() == backup_type)
.collect()
},
(None, Some(backup_id)) => {
BackupInfo::list_backup_groups(&base_path)?
.into_iter()
.filter(|group| group.backup_id() == backup_id)
.collect()
},
_ => BackupInfo::list_backup_groups(&base_path)?,
};
if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
continue;
}
let info_to_snapshot_list_item = |group: &BackupGroup, owner, info: BackupInfo| {
let backup_type = group.backup_type().to_string();
let backup_id = group.backup_id().to_string();
let backup_time = info.backup_dir.backup_time();
let mut size = None;
let (comment, verification, files) = match get_all_snapshot_files(&datastore, &info) {
match get_all_snapshot_files(&datastore, &info) {
Ok((manifest, files)) => {
size = Some(files.iter().map(|x| x.size.unwrap_or(0)).sum());
// extract the first line from notes
let comment: Option<String> = manifest.unprotected["notes"]
.as_str()
.and_then(|notes| notes.lines().next())
.map(String::from);
let verify = manifest.unprotected["verify_state"].clone();
let verify: Option<SnapshotVerifyState> = match serde_json::from_value(verify) {
let fingerprint = match manifest.fingerprint() {
Ok(fp) => fp,
Err(err) => {
eprintln!("error parsing fingerprint: '{}'", err);
None
},
};
let verification = manifest.unprotected["verify_state"].clone();
let verification: Option<SnapshotVerifyState> = match serde_json::from_value(verification) {
Ok(verify) => verify,
Err(err) => {
eprintln!("error parsing verification state : '{}'", err);
@ -407,88 +420,114 @@ pub fn list_snapshots (
}
};
(comment, verify, files)
let size = Some(files.iter().map(|x| x.size.unwrap_or(0)).sum());
SnapshotListItem {
backup_type,
backup_id,
backup_time,
comment,
verification,
fingerprint,
files,
size,
owner,
}
},
Err(err) => {
eprintln!("error during snapshot file listing: '{}'", err);
(
None,
None,
info
let files = info
.files
.iter()
.into_iter()
.map(|x| BackupContent {
filename: x.to_string(),
size: None,
crypt_mode: None,
})
.collect()
)
.collect();
SnapshotListItem {
backup_type,
backup_id,
backup_time,
comment: None,
verification: None,
fingerprint: None,
files,
size: None,
owner,
}
},
}
};
groups
.iter()
.try_fold(Vec::new(), |mut snapshots, group| {
let owner = match datastore.get_owner(group) {
Ok(auth_id) => auth_id,
Err(err) => {
eprintln!("Failed to get owner of group '{}/{}' - {}",
&store,
group,
err);
return Ok(snapshots);
},
};
let result_item = SnapshotListItem {
backup_type: group.backup_type().to_string(),
backup_id: group.backup_id().to_string(),
backup_time: info.backup_dir.backup_time(),
comment,
verification,
files,
size,
owner: Some(owner),
};
snapshots.push(result_item);
if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
return Ok(snapshots);
}
let group_backups = group.list_backups(&datastore.base_path())?;
snapshots.extend(
group_backups
.into_iter()
.map(|info| info_to_snapshot_list_item(&group, Some(owner.clone()), info))
);
Ok(snapshots)
})
}
fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> {
fn get_snapshots_count(store: &DataStore, filter_owner: Option<&Authid>) -> Result<Counts, Error> {
let base_path = store.base_path();
let backup_list = BackupInfo::list_backups(&base_path)?;
let mut groups = HashSet::new();
let groups = BackupInfo::list_backup_groups(&base_path)?;
let mut result = Counts {
ct: None,
host: None,
vm: None,
other: None,
groups.iter()
.filter(|group| {
let owner = match store.get_owner(&group) {
Ok(owner) => owner,
Err(err) => {
eprintln!("Failed to get owner of group '{}/{}' - {}",
store.name(),
group,
err);
return false;
},
};
for info in backup_list {
let group = info.backup_dir.group();
let id = group.backup_id();
let backup_type = group.backup_type();
let mut new_id = false;
if groups.insert(format!("{}-{}", &backup_type, &id)) {
new_id = true;
match filter_owner {
Some(filter) => check_backup_owner(&owner, filter).is_ok(),
None => true,
}
})
.try_fold(Counts::default(), |mut counts, group| {
let snapshot_count = group.list_backups(&base_path)?.len() as u64;
let mut counts = match backup_type {
"ct" => result.ct.take().unwrap_or(Default::default()),
"host" => result.host.take().unwrap_or(Default::default()),
"vm" => result.vm.take().unwrap_or(Default::default()),
_ => result.other.take().unwrap_or(Default::default()),
let type_count = match group.backup_type() {
"ct" => counts.ct.get_or_insert(Default::default()),
"vm" => counts.vm.get_or_insert(Default::default()),
"host" => counts.host.get_or_insert(Default::default()),
_ => counts.other.get_or_insert(Default::default()),
};
counts.snapshots += 1;
if new_id {
counts.groups +=1;
}
type_count.groups += 1;
type_count.snapshots += snapshot_count;
match backup_type {
"ct" => result.ct = Some(counts),
"host" => result.host = Some(counts),
"vm" => result.vm = Some(counts),
_ => result.other = Some(counts),
}
}
Ok(result)
Ok(counts)
})
}
#[api(
@ -497,8 +536,15 @@ fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> {
store: {
schema: DATASTORE_SCHEMA,
},
verbose: {
type: bool,
default: false,
optional: true,
description: "Include additional information like snapshot counts and GC status.",
},
},
},
returns: {
type: DataStoreStatus,
},
@ -509,13 +555,30 @@ fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> {
/// Get datastore status.
pub fn status(
store: String,
verbose: bool,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<DataStoreStatus, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let storage = crate::tools::disks::disk_usage(&datastore.base_path())?;
let counts = get_snapshots_count(&datastore)?;
let gc_status = datastore.last_gc_status();
let (counts, gc_status) = if verbose {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let store_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let filter_owner = if store_privs & PRIV_DATASTORE_AUDIT != 0 {
None
} else {
Some(&auth_id)
};
let counts = Some(get_snapshots_count(&datastore, filter_owner)?);
let gc_status = Some(datastore.last_gc_status());
(counts, gc_status)
} else {
(None, None)
};
Ok(DataStoreStatus {
total: storage.total,
@ -648,7 +711,7 @@ pub fn verify(
verify_all_backups(datastore, worker.clone(), worker.upid(), owner, None)?
};
if failed_dirs.len() > 0 {
worker.log("Failed to verify following snapshots/groups:");
worker.log("Failed to verify the following snapshots/groups:");
for dir in failed_dirs {
worker.log(format!("\t{}", dir));
}

View File

@ -311,6 +311,10 @@ pub const BACKUP_API_SUBDIRS: SubdirMap = &[
"previous", &Router::new()
.download(&API_METHOD_DOWNLOAD_PREVIOUS)
),
(
"previous_backup_time", &Router::new()
.get(&API_METHOD_GET_PREVIOUS_BACKUP_TIME)
),
(
"speedtest", &Router::new()
.upload(&API_METHOD_UPLOAD_SPEEDTEST)
@ -694,6 +698,28 @@ fn finish_backup (
Ok(Value::Null)
}
#[sortable]
pub const API_METHOD_GET_PREVIOUS_BACKUP_TIME: ApiMethod = ApiMethod::new(
&ApiHandler::Sync(&get_previous_backup_time),
&ObjectSchema::new(
"Get previous backup time.",
&[],
)
);
fn get_previous_backup_time(
_param: Value,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let env: &BackupEnvironment = rpcenv.as_ref();
let backup_time = env.last_backup.as_ref().map(|info| info.backup_dir.backup_time());
Ok(json!(backup_time))
}
#[sortable]
pub const API_METHOD_DOWNLOAD_PREVIOUS: ApiMethod = ApiMethod::new(
&ApiHandler::AsyncHttp(&download_previous),

View File

@ -34,7 +34,7 @@ pub mod subscription;
pub(crate) mod rrd;
mod journal;
mod services;
pub(crate) mod services;
mod status;
mod syslog;
mod time;

View File

@ -261,7 +261,7 @@ fn apt_get_changelog(
},
)]
/// Get package information for important Proxmox Backup Server packages.
pub fn get_versions() -> Result<Value, Error> {
pub fn get_versions() -> Result<Vec<APTUpdateInfo>, Error> {
const PACKAGES: &[&str] = &[
"ifupdown2",
"libjs-extjs",
@ -276,7 +276,7 @@ pub fn get_versions() -> Result<Value, Error> {
"zfsutils-linux",
];
fn unknown_package(package: String) -> APTUpdateInfo {
fn unknown_package(package: String, extra_info: Option<String>) -> APTUpdateInfo {
APTUpdateInfo {
package,
title: "unknown".into(),
@ -288,6 +288,7 @@ pub fn get_versions() -> Result<Value, Error> {
priority: "unknown".into(),
section: "unknown".into(),
change_log_url: "unknown".into(),
extra_info,
}
}
@ -301,14 +302,28 @@ pub fn get_versions() -> Result<Value, Error> {
},
None,
);
let running_kernel = format!(
"running kernel: {}",
nix::sys::utsname::uname().release().to_owned()
);
if let Some(proxmox_backup) = pbs_packages.iter().find(|pkg| pkg.package == "proxmox-backup") {
packages.push(proxmox_backup.to_owned());
let mut proxmox_backup = proxmox_backup.clone();
proxmox_backup.extra_info = Some(running_kernel);
packages.push(proxmox_backup);
} else {
packages.push(unknown_package("proxmox-backup".into()));
packages.push(unknown_package("proxmox-backup".into(), Some(running_kernel)));
}
let version = crate::api2::version::PROXMOX_PKG_VERSION;
let release = crate::api2::version::PROXMOX_PKG_RELEASE;
let daemon_version_info = Some(format!("running version: {}.{}", version, release));
if let Some(pkg) = pbs_packages.iter().find(|pkg| pkg.package == "proxmox-backup-server") {
packages.push(pkg.to_owned());
let mut pkg = pkg.clone();
pkg.extra_info = daemon_version_info;
packages.push(pkg);
} else {
packages.push(unknown_package("proxmox-backup".into(), daemon_version_info));
}
let mut kernel_pkgs: Vec<APTUpdateInfo> = pbs_packages
@ -334,11 +349,11 @@ pub fn get_versions() -> Result<Value, Error> {
}
match pbs_packages.iter().find(|item| &item.package == pkg) {
Some(apt_pkg) => packages.push(apt_pkg.to_owned()),
None => packages.push(unknown_package(pkg.to_string())),
None => packages.push(unknown_package(pkg.to_string(), None)),
}
}
Ok(json!(packages))
Ok(packages)
}
const SUBDIRS: SubdirMap = &[

View File

@ -22,7 +22,7 @@ static SERVICE_NAME_LIST: [&str; 7] = [
"systemd-timesyncd",
];
fn real_service_name(service: &str) -> &str {
pub fn real_service_name(service: &str) -> &str {
// since postfix package 3.1.0-3.1 the postfix unit is only here
// to manage subinstances, of which the default is called "-".

View File

@ -134,12 +134,18 @@ fn get_syslog(
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let service = if let Some(service) = param["service"].as_str() {
Some(crate::api2::node::services::real_service_name(service))
} else {
None
};
let (count, lines) = dump_journal(
param["start"].as_u64(),
param["limit"].as_u64(),
param["since"].as_str(),
param["until"].as_str(),
param["service"].as_str())?;
service)?;
rpcenv["total"] = Value::from(count);

View File

@ -5,7 +5,7 @@ use proxmox::api::{api, schema::*};
use proxmox::const_regex;
use proxmox::{IPRE, IPRE_BRACKET, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};
use crate::backup::{CryptMode, BACKUP_ID_REGEX};
use crate::backup::{CryptMode, Fingerprint, BACKUP_ID_REGEX};
use crate::server::UPID;
#[macro_use]
@ -484,6 +484,10 @@ pub struct SnapshotVerifyState {
type: SnapshotVerifyState,
optional: true,
},
fingerprint: {
type: String,
optional: true,
},
files: {
items: {
schema: BACKUP_ARCHIVE_NAME_SCHEMA
@ -508,6 +512,9 @@ pub struct SnapshotListItem {
/// The result of the last run verify task
#[serde(skip_serializing_if="Option::is_none")]
pub verification: Option<SnapshotVerifyState>,
/// Fingerprint of encryption key
#[serde(skip_serializing_if="Option::is_none")]
pub fingerprint: Option<Fingerprint>,
/// List of contained archive files.
pub files: Vec<BackupContent>,
/// Overall snapshot size (sum of all archive sizes).
@ -692,7 +699,7 @@ pub struct TypeCounts {
},
},
)]
#[derive(Serialize, Deserialize)]
#[derive(Serialize, Deserialize, Default)]
/// Counts of groups/snapshots per BackupType.
pub struct Counts {
/// The counts for CT backups
@ -707,8 +714,14 @@ pub struct Counts {
#[api(
properties: {
"gc-status": { type: GarbageCollectionStatus, },
counts: { type: Counts, }
"gc-status": {
type: GarbageCollectionStatus,
optional: true,
},
counts: {
type: Counts,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
@ -722,9 +735,11 @@ pub struct DataStoreStatus {
/// Available space (bytes).
pub avail: u64,
/// Status of last GC
pub gc_status: GarbageCollectionStatus,
#[serde(skip_serializing_if="Option::is_none")]
pub gc_status: Option<GarbageCollectionStatus>,
/// Group/Snapshot counts
pub counts: Counts,
#[serde(skip_serializing_if="Option::is_none")]
pub counts: Option<Counts>,
}
#[api(
@ -1177,6 +1192,9 @@ pub struct APTUpdateInfo {
pub section: String,
/// URL under which the package's changelog can be retrieved
pub change_log_url: String,
/// Custom extra field for additional package information
#[serde(skip_serializing_if="Option::is_none")]
pub extra_info: Option<String>,
}
#[api()]

View File

@ -327,26 +327,20 @@ impl BackupInfo {
Ok(files)
}
pub fn list_backups(base_path: &Path) -> Result<Vec<BackupInfo>, Error> {
pub fn list_backup_groups(base_path: &Path) -> Result<Vec<BackupGroup>, Error> {
let mut list = Vec::new();
tools::scandir(libc::AT_FDCWD, base_path, &BACKUP_TYPE_REGEX, |l0_fd, backup_type, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
tools::scandir(l0_fd, backup_type, &BACKUP_ID_REGEX, |l1_fd, backup_id, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
tools::scandir(l1_fd, backup_id, &BACKUP_DATE_REGEX, |l2_fd, backup_time_string, file_type| {
tools::scandir(l0_fd, backup_type, &BACKUP_ID_REGEX, |_, backup_id, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
let backup_dir = BackupDir::with_rfc3339(backup_type, backup_id, backup_time_string)?;
let files = list_backup_files(l2_fd, backup_time_string)?;
list.push(BackupInfo { backup_dir, files });
list.push(BackupGroup::new(backup_type, backup_id));
Ok(())
})
})
})?;
Ok(list)
}

View File

@ -154,9 +154,11 @@ impl ChunkStore {
}
pub fn cond_touch_chunk(&self, digest: &[u8; 32], fail_if_not_exist: bool) -> Result<bool, Error> {
let (chunk_path, _digest_str) = self.chunk_path(digest);
self.cond_touch_path(&chunk_path, fail_if_not_exist)
}
pub fn cond_touch_path(&self, path: &Path, fail_if_not_exist: bool) -> Result<bool, Error> {
const UTIME_NOW: i64 = (1 << 30) - 1;
const UTIME_OMIT: i64 = (1 << 30) - 2;
@ -167,7 +169,7 @@ impl ChunkStore {
use nix::NixPath;
let res = chunk_path.with_nix_path(|cstr| unsafe {
let res = path.with_nix_path(|cstr| unsafe {
let tmp = libc::utimensat(-1, cstr.as_ptr(), &times[0], libc::AT_SYMLINK_NOFOLLOW);
nix::errno::Errno::result(tmp)
})?;
@ -177,7 +179,7 @@ impl ChunkStore {
return Ok(false);
}
bail!("update atime failed for chunk {:?} - {}", chunk_path, err);
bail!("update atime failed for chunk/file {:?} - {}", path, err);
}
Ok(true)
@ -328,49 +330,13 @@ impl ChunkStore {
let lock = self.mutex.lock();
if let Ok(stat) = fstatat(dirfd, filename, nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW) {
if bad {
// filename validity checked in iterator
let orig_filename = std::ffi::CString::new(&filename.to_bytes()[..64])?;
match fstatat(
dirfd,
orig_filename.as_c_str(),
nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW)
{
Ok(_) => {
match unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir) {
Err(err) =>
crate::task_warn!(
worker,
"unlinking corrupt chunk {:?} failed on store '{}' - {}",
filename,
self.name,
err,
),
Ok(_) => {
status.removed_bad += 1;
status.removed_bytes += stat.st_size as u64;
}
}
},
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => {
// chunk hasn't been rewritten yet, keep .bad file
status.still_bad += 1;
},
Err(err) => {
// some other error, warn user and keep .bad file around too
status.still_bad += 1;
crate::task_warn!(
worker,
"error during stat on '{:?}' - {}",
orig_filename,
err,
);
}
}
} else if stat.st_atime < min_atime {
if stat.st_atime < min_atime {
//let age = now - stat.st_atime;
//println!("UNLINK {} {:?}", age/(3600*24), filename);
if let Err(err) = unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir) {
if bad {
status.still_bad += 1;
}
bail!(
"unlinking chunk {:?} failed on store '{}' - {}",
filename,
@ -378,13 +344,23 @@ impl ChunkStore {
err,
);
}
if bad {
status.removed_bad += 1;
} else {
status.removed_chunks += 1;
}
status.removed_bytes += stat.st_size as u64;
} else if stat.st_atime < oldest_writer {
if bad {
status.still_bad += 1;
} else {
status.pending_chunks += 1;
}
status.pending_bytes += stat.st_size as u64;
} else {
if !bad {
status.disk_chunks += 1;
}
status.disk_bytes += stat.st_size as u64;
}
}

View File

@ -7,6 +7,8 @@
//! encryption](https://en.wikipedia.org/wiki/Authenticated_encryption)
//! for a short introduction.
use std::fmt;
use std::fmt::Display;
use std::io::Write;
use anyhow::{bail, Error};
@ -15,8 +17,15 @@ use openssl::pkcs5::pbkdf2_hmac;
use openssl::symm::{decrypt_aead, Cipher, Crypter, Mode};
use serde::{Deserialize, Serialize};
use crate::tools::format::{as_fingerprint, bytes_as_fingerprint};
use proxmox::api::api;
// openssl::sha::sha256(b"Proxmox Backup Encryption Key Fingerprint")
const FINGERPRINT_INPUT: [u8; 32] = [ 110, 208, 239, 119, 71, 31, 255, 77,
85, 199, 168, 254, 74, 157, 182, 33,
97, 64, 127, 19, 76, 114, 93, 223,
48, 153, 45, 37, 236, 69, 237, 38, ];
#[api(default: "encrypt")]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
@ -30,6 +39,30 @@ pub enum CryptMode {
SignOnly,
}
#[derive(Debug, Eq, PartialEq, Deserialize, Serialize)]
#[serde(transparent)]
/// 32-byte fingerprint, usually calculated with SHA256.
pub struct Fingerprint {
#[serde(with = "bytes_as_fingerprint")]
bytes: [u8; 32],
}
impl Fingerprint {
pub fn new(bytes: [u8; 32]) -> Self {
Self { bytes }
}
pub fn bytes(&self) -> &[u8; 32] {
&self.bytes
}
}
/// Display as short key ID
impl Display for Fingerprint {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", as_fingerprint(&self.bytes[0..8]))
}
}
/// Encryption Configuration with secret key
///
/// This structure stores the secret key and provides helpers for
@ -101,6 +134,10 @@ impl CryptConfig {
tag
}
pub fn fingerprint(&self) -> Fingerprint {
Fingerprint::new(self.compute_digest(&FINGERPRINT_INPUT))
}
pub fn data_crypter(&self, iv: &[u8; 16], mode: Mode) -> Result<Crypter, Error> {
let mut crypter = openssl::symm::Crypter::new(self.cipher, mode, &self.enc_key, Some(iv))?;
crypter.aad_update(b"")?; //??
@ -219,7 +256,13 @@ impl CryptConfig {
) -> Result<Vec<u8>, Error> {
let modified = proxmox::tools::time::epoch_i64();
let key_config = super::KeyConfig { kdf: None, created, modified, data: self.enc_key.to_vec() };
let key_config = super::KeyConfig {
kdf: None,
created,
modified,
data: self.enc_key.to_vec(),
fingerprint: Some(self.fingerprint()),
};
let data = serde_json::to_string(&key_config)?.as_bytes().to_vec();
let mut buffer = vec![0u8; rsa.size() as usize];

View File

@ -446,6 +446,17 @@ impl DataStore {
file_name,
err,
);
// touch any corresponding .bad files to keep them around, meaning if a chunk is
// rewritten correctly they will be removed automatically, as well as if no index
// file requires the chunk anymore (won't get to this loop then)
for i in 0..=9 {
let bad_ext = format!("{}.bad", i);
let mut bad_path = PathBuf::new();
bad_path.push(self.chunk_path(digest).0);
bad_path.set_extension(bad_ext);
self.chunk_store.cond_touch_path(&bad_path, false)?;
}
}
}
Ok(())

View File

@ -219,7 +219,6 @@ impl IndexFile for DynamicIndexReader {
(csum, chunk_end)
}
#[allow(clippy::cast_ptr_alignment)]
fn chunk_info(&self, pos: usize) -> Option<ChunkReadInfo> {
if pos >= self.index.len() {
return None;

View File

@ -2,9 +2,34 @@ use anyhow::{bail, format_err, Context, Error};
use serde::{Deserialize, Serialize};
use crate::backup::{CryptConfig, Fingerprint};
use proxmox::api::api;
use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
use proxmox::try_block;
#[api(default: "scrypt")]
#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
#[serde(rename_all = "lowercase")]
/// Key derivation function for password protected encryption keys.
pub enum Kdf {
/// Do not encrypt the key.
None,
/// Encrypt the key with a password using SCrypt.
Scrypt,
/// Encrypt the key with a password using PBKDF2.
PBKDF2,
}
impl Default for Kdf {
#[inline]
fn default() -> Self {
Kdf::Scrypt
}
}
#[derive(Deserialize, Serialize, Debug)]
pub enum KeyDerivationConfig {
Scrypt {
@ -66,6 +91,9 @@ pub struct KeyConfig {
pub modified: i64,
#[serde(with = "proxmox::tools::serde::bytes_as_base64")]
pub data: Vec<u8>,
#[serde(skip_serializing_if = "Option::is_none")]
#[serde(default)]
pub fingerprint: Option<Fingerprint>,
}
pub fn store_key_config(
@ -103,15 +131,25 @@ pub fn store_key_config(
pub fn encrypt_key_with_passphrase(
raw_key: &[u8],
passphrase: &[u8],
kdf: Kdf,
) -> Result<KeyConfig, Error> {
let salt = proxmox::sys::linux::random_data(32)?;
let kdf = KeyDerivationConfig::Scrypt {
let kdf = match kdf {
Kdf::Scrypt => KeyDerivationConfig::Scrypt {
n: 65536,
r: 8,
p: 1,
salt,
},
Kdf::PBKDF2 => KeyDerivationConfig::PBKDF2 {
iter: 65535,
salt,
},
Kdf::None => {
bail!("No key derivation function specified");
}
};
let derived_key = kdf.derive_key(passphrase)?;
@ -142,28 +180,22 @@ pub fn encrypt_key_with_passphrase(
created,
modified: created,
data: enc_data,
fingerprint: None,
})
}
pub fn load_and_decrypt_key(
path: &std::path::Path,
passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
) -> Result<([u8;32], i64), Error> {
do_load_and_decrypt_key(path, passphrase)
.with_context(|| format!("failed to load decryption key from {:?}", path))
}
fn do_load_and_decrypt_key(
path: &std::path::Path,
passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
) -> Result<([u8;32], i64), Error> {
) -> Result<([u8;32], i64, Fingerprint), Error> {
decrypt_key(&file_get_contents(&path)?, passphrase)
.with_context(|| format!("failed to load decryption key from {:?}", path))
}
pub fn decrypt_key(
mut keydata: &[u8],
passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
) -> Result<([u8;32], i64), Error> {
) -> Result<([u8;32], i64, Fingerprint), Error> {
let key_config: KeyConfig = serde_json::from_reader(&mut keydata)?;
let raw_data = key_config.data;
@ -203,5 +235,13 @@ pub fn decrypt_key(
let mut result = [0u8; 32];
result.copy_from_slice(&key);
Ok((result, created))
let fingerprint = match key_config.fingerprint {
Some(fingerprint) => fingerprint,
None => {
let crypt_config = CryptConfig::new(result.clone())?;
crypt_config.fingerprint()
},
};
Ok((result, created, fingerprint))
}

View File

@ -5,7 +5,7 @@ use std::path::Path;
use serde_json::{json, Value};
use ::serde::{Deserialize, Serialize};
use crate::backup::{BackupDir, CryptMode, CryptConfig};
use crate::backup::{BackupDir, CryptMode, CryptConfig, Fingerprint};
pub const MANIFEST_BLOB_NAME: &str = "index.json.blob";
pub const MANIFEST_LOCK_NAME: &str = ".index.json.lck";
@ -223,12 +223,48 @@ impl BackupManifest {
if let Some(crypt_config) = crypt_config {
let sig = self.signature(crypt_config)?;
manifest["signature"] = proxmox::tools::digest_to_hex(&sig).into();
let fingerprint = &crypt_config.fingerprint();
manifest["unprotected"]["key-fingerprint"] = serde_json::to_value(fingerprint)?;
}
let manifest = serde_json::to_string_pretty(&manifest).unwrap().into();
Ok(manifest)
}
pub fn fingerprint(&self) -> Result<Option<Fingerprint>, Error> {
match &self.unprotected["key-fingerprint"] {
Value::Null => Ok(None),
value => Ok(Some(serde_json::from_value(value.clone())?))
}
}
/// Checks if a BackupManifest and a CryptConfig share a valid fingerprint combination.
///
/// An unsigned manifest is valid with any or no CryptConfig.
/// A signed manifest is only valid with a matching CryptConfig.
pub fn check_fingerprint(&self, crypt_config: Option<&CryptConfig>) -> Result<(), Error> {
if let Some(fingerprint) = self.fingerprint()? {
match crypt_config {
None => bail!(
"missing key - manifest was created with key {}",
fingerprint,
),
Some(crypt_config) => {
let config_fp = crypt_config.fingerprint();
if config_fp != fingerprint {
bail!(
"wrong key - manifest's key {} does not match provided key {}",
fingerprint,
config_fp
);
}
}
}
};
Ok(())
}
/// Try to read the manifest. This verifies the signature if there is a crypt_config.
pub fn from_data(data: &[u8], crypt_config: Option<&CryptConfig>) -> Result<BackupManifest, Error> {
let json: Value = serde_json::from_slice(data)?;
@ -237,6 +273,19 @@ impl BackupManifest {
if let Some(ref crypt_config) = crypt_config {
if let Some(signature) = signature {
let expected_signature = proxmox::tools::digest_to_hex(&Self::json_signature(&json, crypt_config)?);
let fingerprint = &json["unprotected"]["key-fingerprint"];
if fingerprint != &Value::Null {
let fingerprint = serde_json::from_value(fingerprint.clone())?;
let config_fp = crypt_config.fingerprint();
if config_fp != fingerprint {
bail!(
"wrong key - unable to verify signature since manifest's key {} does not match provided key {}",
fingerprint,
config_fp
);
}
}
if signature != expected_signature {
bail!("wrong signature in manifest");
}

View File

@ -53,7 +53,6 @@ use proxmox_backup::backup::{
ChunkStream,
CryptConfig,
CryptMode,
DataBlob,
DynamicIndexReader,
FixedChunkStream,
FixedIndexReader,
@ -456,112 +455,6 @@ async fn change_backup_owner(group: String, mut param: Value) -> Result<(), Erro
Ok(())
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
group: {
type: String,
description: "Backup group.",
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// List backup snapshots.
async fn list_snapshots(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let output_format = get_output_format(&param);
let client = connect(&repo)?;
let group: Option<BackupGroup> = if let Some(path) = param["group"].as_str() {
Some(path.parse()?)
} else {
None
};
let mut data = api_datastore_list_snapshots(&client, repo.store(), group).await?;
record_repository(&repo);
let render_snapshot_path = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
Ok(snapshot.relative_path().to_str().unwrap().to_owned())
};
let render_files = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
let mut filenames = Vec::new();
for file in &item.files {
filenames.push(file.filename.to_string());
}
Ok(tools::format::render_backup_file_list(&filenames[..]))
};
let options = default_table_format_options()
.sortby("backup-type", false)
.sortby("backup-id", false)
.sortby("backup-time", false)
.column(ColumnConfig::new("backup-id").renderer(render_snapshot_path).header("snapshot"))
.column(ColumnConfig::new("size").renderer(tools::format::render_bytes_human_readable))
.column(ColumnConfig::new("files").renderer(render_files))
;
let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOTS;
format_and_print_result_full(&mut data, info, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
}
}
)]
/// Forget (remove) backup snapshots.
async fn forget_snapshots(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let mut client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());
let result = client.delete(&path, Some(json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
}))).await?;
record_repository(&repo);
Ok(result)
}
#[api(
input: {
properties: {
@ -655,58 +548,6 @@ async fn api_version(param: Value) -> Result<(), Error> {
Ok(())
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// List snapshot files.
async fn list_snapshot_files(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let output_format = get_output_format(&param);
let client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/files", repo.store());
let mut result = client.get(&path, Some(json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
}))).await?;
record_repository(&repo);
let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOT_FILES;
let mut data: Value = result["data"].take();
let options = default_table_format_options();
format_and_print_result_full(&mut data, info, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
@ -802,7 +643,10 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
let keydata = match (keyfile, key_fd) {
(None, None) => None,
(Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
(Some(keyfile), None) => Some(file_get_contents(keyfile)?),
(Some(keyfile), None) => {
eprintln!("Using encryption key file: {}", keyfile);
Some(file_get_contents(keyfile)?)
},
(None, Some(fd)) => {
let input = unsafe { std::fs::File::from_raw_fd(fd) };
let mut data = Vec::new();
@ -810,6 +654,7 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
.map_err(|err| {
format_err!("error reading encryption key from fd {}: {}", fd, err)
})?;
eprintln!("Using encryption key from file descriptor");
Some(data)
}
};
@ -817,7 +662,10 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
Ok(match (keydata, crypt_mode) {
// no parameters:
(None, None) => match key::read_optional_default_encryption_key()? {
Some(key) => (Some(key), CryptMode::Encrypt),
Some(key) => {
eprintln!("Encrypting with default encryption key!");
(Some(key), CryptMode::Encrypt)
},
None => (None, CryptMode::None),
},
@ -827,7 +675,10 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
// just --crypt-mode other than none
(None, Some(crypt_mode)) => match key::read_optional_default_encryption_key()? {
None => bail!("--crypt-mode without --keyfile and no default key file available"),
Some(key) => (Some(key), crypt_mode),
Some(key) => {
eprintln!("Encrypting with default encryption key!");
(Some(key), crypt_mode)
},
}
// just --keyfile
@ -865,6 +716,11 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
description: "Path to file.",
}
},
"all-file-systems": {
type: Boolean,
description: "Include all mounted subdirectories.",
optional: true,
},
keyfile: {
schema: KEYFILE_SCHEMA,
optional: true,
@ -1054,7 +910,8 @@ async fn create_backup(
let (crypt_config, rsa_encrypted_key) = match keydata {
None => (None, None),
Some(key) => {
let (key, created) = decrypt_key(&key, &key::get_encryption_key_password)?;
let (key, created, fingerprint) = decrypt_key(&key, &key::get_encryption_key_password)?;
println!("Encryption key fingerprint: {}", fingerprint);
let crypt_config = CryptConfig::new(key)?;
@ -1063,6 +920,8 @@ async fn create_backup(
let pem_data = file_get_contents(path)?;
let rsa = openssl::rsa::Rsa::public_key_from_pem(&pem_data)?;
let enc_key = crypt_config.generate_rsa_encoded_key(rsa, created)?;
println!("Master key '{:?}'", path);
(Some(Arc::new(crypt_config)), Some(enc_key))
}
_ => (Some(Arc::new(crypt_config)), None),
@ -1081,8 +940,40 @@ async fn create_backup(
false
).await?;
let previous_manifest = if let Ok(previous_manifest) = client.download_previous_manifest().await {
Some(Arc::new(previous_manifest))
let download_previous_manifest = match client.previous_backup_time().await {
Ok(Some(backup_time)) => {
println!(
"Downloading previous manifest ({})",
strftime_local("%c", backup_time)?
);
true
}
Ok(None) => {
println!("No previous manifest available.");
false
}
Err(_) => {
// Fallback for outdated server, TODO remove/bubble up with 2.0
true
}
};
let previous_manifest = if download_previous_manifest {
match client.download_previous_manifest().await {
Ok(previous_manifest) => {
match previous_manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref)) {
Ok(()) => Some(Arc::new(previous_manifest)),
Err(err) => {
println!("Couldn't re-use previous manifest - {}", err);
None
}
}
}
Err(err) => {
println!("Couldn't download previous manifest - {}", err);
None
}
}
} else {
None
};
@ -1365,7 +1256,8 @@ async fn restore(param: Value) -> Result<Value, Error> {
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _) = decrypt_key(&key, &key::get_encryption_key_password)?;
let (key, _, fingerprint) = decrypt_key(&key, &key::get_encryption_key_password)?;
eprintln!("Encryption key fingerprint: '{}'", fingerprint);
Some(Arc::new(CryptConfig::new(key)?))
}
};
@ -1381,6 +1273,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
).await?;
let (manifest, backup_index_data) = client.download_manifest().await?;
manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
let (archive_name, archive_type) = parse_archive_type(archive_name);
@ -1477,81 +1370,6 @@ async fn restore(param: Value) -> Result<Value, Error> {
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Group/Snapshot path.",
},
logfile: {
type: String,
description: "The path to the log file you want to upload.",
},
keyfile: {
schema: KEYFILE_SCHEMA,
optional: true,
},
"keyfd": {
schema: KEYFD_SCHEMA,
optional: true,
},
"crypt-mode": {
type: CryptMode,
optional: true,
},
}
}
)]
/// Upload backup log file.
async fn upload_log(param: Value) -> Result<Value, Error> {
let logfile = tools::required_string_param(&param, "logfile")?;
let repo = extract_repository_from_value(&param)?;
let snapshot = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = snapshot.parse()?;
let mut client = connect(&repo)?;
let (keydata, crypt_mode) = keyfile_parameters(&param)?;
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _created) = decrypt_key(&key, &key::get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
};
let data = file_get_contents(logfile)?;
// fixme: how to sign the log?
let blob = match crypt_mode {
CryptMode::None | CryptMode::SignOnly => DataBlob::encode(&data, None, true)?,
CryptMode::Encrypt => DataBlob::encode(&data, crypt_config.as_ref().map(Arc::as_ref), true)?,
};
let raw_data = blob.into_inner();
let path = format!("api2/json/admin/datastore/{}/upload-backup-log", repo.store());
let args = json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
});
let body = hyper::Body::from(raw_data);
client.upload("application/octet-stream", body, &path, Some(args)).await
}
const API_METHOD_PRUNE: ApiMethod = ApiMethod::new(
&ApiHandler::Async(&prune),
&ObjectSchema::new(
@ -1989,26 +1807,9 @@ fn main() {
.completion_cb("repository", complete_repository)
.completion_cb("keyfile", tools::complete_file_name);
let upload_log_cmd_def = CliCommand::new(&API_METHOD_UPLOAD_LOG)
.arg_param(&["snapshot", "logfile"])
.completion_cb("snapshot", complete_backup_snapshot)
.completion_cb("logfile", tools::complete_file_name)
.completion_cb("keyfile", tools::complete_file_name)
.completion_cb("repository", complete_repository);
let list_cmd_def = CliCommand::new(&API_METHOD_LIST_BACKUP_GROUPS)
.completion_cb("repository", complete_repository);
let snapshots_cmd_def = CliCommand::new(&API_METHOD_LIST_SNAPSHOTS)
.arg_param(&["group"])
.completion_cb("group", complete_backup_group)
.completion_cb("repository", complete_repository);
let forget_cmd_def = CliCommand::new(&API_METHOD_FORGET_SNAPSHOTS)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot);
let garbage_collect_cmd_def = CliCommand::new(&API_METHOD_START_GARBAGE_COLLECTION)
.completion_cb("repository", complete_repository);
@ -2019,11 +1820,6 @@ fn main() {
.completion_cb("archive-name", complete_archive_name)
.completion_cb("target", tools::complete_file_name);
let files_cmd_def = CliCommand::new(&API_METHOD_LIST_SNAPSHOT_FILES)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot);
let prune_cmd_def = CliCommand::new(&API_METHOD_PRUNE)
.arg_param(&["group"])
.completion_cb("group", complete_backup_group)
@ -2049,16 +1845,13 @@ fn main() {
let cmd_def = CliCommandMap::new()
.insert("backup", backup_cmd_def)
.insert("upload-log", upload_log_cmd_def)
.insert("forget", forget_cmd_def)
.insert("garbage-collect", garbage_collect_cmd_def)
.insert("list", list_cmd_def)
.insert("login", login_cmd_def)
.insert("logout", logout_cmd_def)
.insert("prune", prune_cmd_def)
.insert("restore", restore_cmd_def)
.insert("snapshots", snapshots_cmd_def)
.insert("files", files_cmd_def)
.insert("snapshot", snapshot_mgtm_cli())
.insert("status", status_cmd_def)
.insert("key", key::cli())
.insert("mount", mount_cmd_def())
@ -2068,7 +1861,13 @@ fn main() {
.insert("task", task_mgmt_cli())
.insert("version", version_cmd_def)
.insert("benchmark", benchmark_cmd_def)
.insert("change-owner", change_owner_cmd_def);
.insert("change-owner", change_owner_cmd_def)
.alias(&["files"], &["snapshot", "files"])
.alias(&["forget"], &["snapshot", "forget"])
.alias(&["upload-log"], &["snapshot", "upload-log"])
.alias(&["snapshots"], &["snapshot", "list"])
;
let rpcenv = CliEnvironment::new();
run_cli_command(cmd_def, rpcenv, Some(|future| {

View File

@ -245,7 +245,7 @@ async fn task_stop(param: Value) -> Result<Value, Error> {
let mut client = connect()?;
let path = format!("api2/json/nodes/localhost/tasks/{}", upid_str);
let path = format!("api2/json/nodes/localhost/tasks/{}", tools::percent_encode_component(upid_str));
let _ = client.delete(&path, None).await?;
Ok(Value::Null)
@ -363,6 +363,43 @@ async fn report() -> Result<Value, Error> {
Ok(Value::Null)
}
#[api(
input: {
properties: {
verbose: {
type: Boolean,
optional: true,
default: false,
description: "Output verbose package information. It is ignored if output-format is specified.",
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
}
}
}
)]
/// List package versions for important Proxmox Backup Server packages.
async fn get_versions(verbose: bool, param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let packages = crate::api2::node::apt::get_versions()?;
let mut packages = json!(if verbose { &packages[..] } else { &packages[1..2] });
let options = default_table_format_options()
.disable_sort()
.noborder(true) // just not helpful for version info, which gets copy-pasted often
.column(ColumnConfig::new("Package"))
.column(ColumnConfig::new("Version"))
.column(ColumnConfig::new("ExtraInfo").header("Extra Info"))
;
let schema = &crate::api2::node::apt::API_RETURN_SCHEMA_GET_VERSIONS;
format_and_print_result_full(&mut packages, schema, &output_format, &options);
Ok(Value::Null)
}
fn main() {
proxmox_backup::tools::setup_safe_path_env();
@ -396,6 +433,9 @@ fn main() {
)
.insert("report",
CliCommand::new(&API_METHOD_REPORT)
)
.insert("versions",
CliCommand::new(&API_METHOD_GET_VERSIONS)
);

View File

@ -151,7 +151,7 @@ pub async fn benchmark(
let crypt_config = match keyfile {
None => None,
Some(path) => {
let (key, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
let (key, _, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}

View File

@ -73,7 +73,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _created) = decrypt_key(&key, &get_encryption_key_password)?;
let (key, _created, _fingerprint) = decrypt_key(&key, &get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
@ -92,6 +92,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
).await?;
let (manifest, _) = client.download_manifest().await?;
manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
let index = client.download_dynamic_index(&manifest, CATALOG_NAME).await?;
@ -170,7 +171,7 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _created) = decrypt_key(&key, &get_encryption_key_password)?;
let (key, _created, _fingerprint) = decrypt_key(&key, &get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
@ -199,6 +200,7 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
.open("/tmp")?;
let (manifest, _) = client.download_manifest().await?;
manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
let index = client.download_dynamic_index(&manifest, &server_archive_name).await?;
let most_used = index.find_most_used_chunks(8);

View File

@ -4,14 +4,28 @@ use std::process::{Stdio, Command};
use anyhow::{bail, format_err, Error};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use proxmox::api::api;
use proxmox::api::cli::{CliCommand, CliCommandMap};
use proxmox::api::cli::{
ColumnConfig,
CliCommand,
CliCommandMap,
format_and_print_result_full,
get_output_format,
OUTPUT_FORMAT,
};
use proxmox::sys::linux::tty;
use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
use proxmox_backup::backup::{
encrypt_key_with_passphrase, load_and_decrypt_key, store_key_config, KeyConfig,
encrypt_key_with_passphrase,
load_and_decrypt_key,
store_key_config,
CryptConfig,
Kdf,
KeyConfig,
KeyDerivationConfig,
};
use proxmox_backup::tools;
@ -71,27 +85,6 @@ pub fn get_encryption_key_password() -> Result<Vec<u8>, Error> {
bail!("no password input mechanism available");
}
#[api(
default: "scrypt",
)]
#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
/// Key derivation function for password protected encryption keys.
pub enum Kdf {
/// Do not encrypt the key.
None,
/// Encrypt the key with a password using SCrypt.
Scrypt,
}
impl Default for Kdf {
#[inline]
fn default() -> Self {
Kdf::Scrypt
}
}
#[api(
input: {
properties: {
@ -120,7 +113,10 @@ fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
let kdf = kdf.unwrap_or_default();
let key = proxmox::sys::linux::random_data(32)?;
let mut key_array = [0u8; 32];
proxmox::sys::linux::fill_with_random_data(&mut key_array)?;
let crypt_config = CryptConfig::new(key_array.clone())?;
let key = key_array.to_vec();
match kdf {
Kdf::None => {
@ -134,10 +130,11 @@ fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
created,
modified: created,
data: key,
fingerprint: Some(crypt_config.fingerprint()),
},
)?;
}
Kdf::Scrypt => {
Kdf::Scrypt | Kdf::PBKDF2 => {
// always read passphrase from tty
if !tty::stdin_isatty() {
bail!("unable to read passphrase - no tty");
@ -145,7 +142,8 @@ fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
let password = tty::read_and_verify_password("Encryption Key Password: ")?;
let key_config = encrypt_key_with_passphrase(&key, &password)?;
let mut key_config = encrypt_key_with_passphrase(&key, &password, kdf)?;
key_config.fingerprint = Some(crypt_config.fingerprint());
store_key_config(&path, false, key_config)?;
}
@ -188,7 +186,7 @@ fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error
bail!("unable to change passphrase - no tty");
}
let (key, created) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
let (key, created, fingerprint) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
match kdf {
Kdf::None => {
@ -202,14 +200,16 @@ fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error
created, // keep original value
modified,
data: key.to_vec(),
fingerprint: Some(fingerprint),
},
)?;
}
Kdf::Scrypt => {
Kdf::Scrypt | Kdf::PBKDF2 => {
let password = tty::read_and_verify_password("New Password: ")?;
let mut new_key_config = encrypt_key_with_passphrase(&key, &password)?;
let mut new_key_config = encrypt_key_with_passphrase(&key, &password, kdf)?;
new_key_config.created = created; // keep original value
new_key_config.fingerprint = Some(fingerprint);
store_key_config(&path, true, new_key_config)?;
}
@ -218,6 +218,91 @@ fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error
Ok(())
}
#[api(
properties: {
kdf: {
type: Kdf,
},
},
)]
#[derive(Deserialize, Serialize)]
/// Encryption Key Information
struct KeyInfo {
/// Path to key
path: String,
kdf: Kdf,
/// Key creation time
pub created: i64,
/// Key modification time
pub modified: i64,
/// Key fingerprint
pub fingerprint: Option<String>,
}
#[api(
input: {
properties: {
path: {
description: "Key file. Without this the default key's metadata will be shown.",
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
},
},
)]
/// Print the encryption key's metadata.
fn show_key(
path: Option<String>,
param: Value,
) -> Result<(), Error> {
let path = match path {
Some(path) => PathBuf::from(path),
None => {
let path = find_default_encryption_key()?
.ok_or_else(|| {
format_err!("no encryption file provided and no default file found")
})?;
path
}
};
let config: KeyConfig = serde_json::from_slice(&file_get_contents(path.clone())?)?;
let output_format = get_output_format(&param);
let info = KeyInfo {
path: format!("{:?}", path),
kdf: match config.kdf {
Some(KeyDerivationConfig::PBKDF2 { .. }) => Kdf::PBKDF2,
Some(KeyDerivationConfig::Scrypt { .. }) => Kdf::Scrypt,
None => Kdf::None,
},
created: config.created,
modified: config.modified,
fingerprint: match config.fingerprint {
Some(ref fp) => Some(format!("{}", fp)),
None => None,
},
};
let options = proxmox::api::cli::default_table_format_options()
.column(ColumnConfig::new("path"))
.column(ColumnConfig::new("kdf"))
.column(ColumnConfig::new("created").renderer(tools::format::render_epoch))
.column(ColumnConfig::new("modified").renderer(tools::format::render_epoch))
.column(ColumnConfig::new("fingerprint"));
let schema = &KeyInfo::API_SCHEMA;
format_and_print_result_full(&mut serde_json::to_value(info)?, schema, &output_format, &options);
Ok(())
}
#[api(
input: {
properties: {
@ -313,13 +398,47 @@ fn paper_key(
};
let data = file_get_contents(&path)?;
let data = std::str::from_utf8(&data)?;
let data = String::from_utf8(data)?;
let (data, is_private_key) = if data.starts_with("-----BEGIN ENCRYPTED PRIVATE KEY-----\n") {
let lines: Vec<String> = data
.lines()
.map(|s| s.trim_end())
.filter(|s| !s.is_empty())
.map(String::from)
.collect();
if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
bail!("unexpected key format");
}
if lines.len() < 20 {
bail!("unexpected key format");
}
(lines, true)
} else {
match serde_json::from_str::<KeyConfig>(&data) {
Ok(key_config) => {
let lines = serde_json::to_string_pretty(&key_config)?
.lines()
.map(String::from)
.collect();
(lines, false)
},
Err(err) => {
eprintln!("Couldn't parse '{:?}' as KeyConfig - {}", path, err);
bail!("Neither a PEM-formatted private key, nor a PBS key file.");
},
}
};
let format = output_format.unwrap_or(PaperkeyFormat::Html);
match format {
PaperkeyFormat::Html => paperkey_html(data, subject),
PaperkeyFormat::Text => paperkey_text(data, subject),
PaperkeyFormat::Html => paperkey_html(&data, subject, is_private_key),
PaperkeyFormat::Text => paperkey_text(&data, subject, is_private_key),
}
}
@ -337,6 +456,10 @@ pub fn cli() -> CliCommandMap {
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
let key_show_cmd_def = CliCommand::new(&API_METHOD_SHOW_KEY)
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
let paper_key_cmd_def = CliCommand::new(&API_METHOD_PAPER_KEY)
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
@ -346,10 +469,11 @@ pub fn cli() -> CliCommandMap {
.insert("create-master-key", key_create_master_key_cmd_def)
.insert("import-master-pubkey", key_import_master_pubkey_cmd_def)
.insert("change-passphrase", key_change_passphrase_cmd_def)
.insert("show", key_show_cmd_def)
.insert("paperkey", paper_key_cmd_def)
}
fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
fn paperkey_html(lines: &[String], subject: Option<String>, is_private: bool) -> Result<(), Error> {
let img_size_pt = 500;
@ -378,21 +502,7 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
println!("<p>Subject: {}</p>", subject);
}
if data.starts_with("-----BEGIN ENCRYPTED PRIVATE KEY-----\n") {
let lines: Vec<String> = data.lines()
.map(|s| s.trim_end())
.filter(|s| !s.is_empty())
.map(String::from)
.collect();
if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
bail!("unexpected key format");
}
if lines.len() < 20 {
bail!("unexpected key format");
}
if is_private {
const BLOCK_SIZE: usize = 20;
let blocks = (lines.len() + BLOCK_SIZE -1)/BLOCK_SIZE;
@ -413,8 +523,7 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
println!("</p>");
let data = data.join("\n");
let qr_code = generate_qr_code("svg", data.as_bytes())?;
let qr_code = generate_qr_code("svg", data)?;
let qr_code = base64::encode_config(&qr_code, base64::STANDARD_NO_PAD);
println!("<center>");
@ -430,16 +539,13 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
return Ok(());
}
let key_config: KeyConfig = serde_json::from_str(&data)?;
let key_text = serde_json::to_string_pretty(&key_config)?;
println!("<div style=\"page-break-inside: avoid\">");
println!("<p>");
println!("-----BEGIN PROXMOX BACKUP KEY-----");
for line in key_text.lines() {
for line in lines {
println!("{}", line);
}
@ -447,7 +553,7 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
println!("</p>");
let qr_code = generate_qr_code("svg", key_text.as_bytes())?;
let qr_code = generate_qr_code("svg", lines)?;
let qr_code = base64::encode_config(&qr_code, base64::STANDARD_NO_PAD);
println!("<center>");
@ -464,27 +570,13 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
Ok(())
}
fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
fn paperkey_text(lines: &[String], subject: Option<String>, is_private: bool) -> Result<(), Error> {
if let Some(subject) = subject {
println!("Subject: {}\n", subject);
}
if data.starts_with("-----BEGIN ENCRYPTED PRIVATE KEY-----\n") {
let lines: Vec<String> = data.lines()
.map(|s| s.trim_end())
.filter(|s| !s.is_empty())
.map(String::from)
.collect();
if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
bail!("unexpected key format");
}
if lines.len() < 20 {
bail!("unexpected key format");
}
if is_private {
const BLOCK_SIZE: usize = 5;
let blocks = (lines.len() + BLOCK_SIZE -1)/BLOCK_SIZE;
@ -499,8 +591,7 @@ fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
for l in start..end {
println!("{:-2}: {}", l, lines[l]);
}
let data = data.join("\n");
let qr_code = generate_qr_code("utf8i", data.as_bytes())?;
let qr_code = generate_qr_code("utf8i", data)?;
let qr_code = String::from_utf8(qr_code)
.map_err(|_| format_err!("Failed to read qr code (got non-utf8 data)"))?;
println!("{}", qr_code);
@ -510,14 +601,13 @@ fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
return Ok(());
}
let key_config: KeyConfig = serde_json::from_str(&data)?;
let key_text = serde_json::to_string_pretty(&key_config)?;
println!("-----BEGIN PROXMOX BACKUP KEY-----");
println!("{}", key_text);
for line in lines {
println!("{}", line);
}
println!("-----END PROXMOX BACKUP KEY-----");
let qr_code = generate_qr_code("utf8i", key_text.as_bytes())?;
let qr_code = generate_qr_code("utf8i", &lines)?;
let qr_code = String::from_utf8(qr_code)
.map_err(|_| format_err!("Failed to read qr code (got non-utf8 data)"))?;
@ -526,8 +616,7 @@ fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
Ok(())
}
fn generate_qr_code(output_type: &str, data: &[u8]) -> Result<Vec<u8>, Error> {
fn generate_qr_code(output_type: &str, lines: &[String]) -> Result<Vec<u8>, Error> {
let mut child = Command::new("qrencode")
.args(&["-t", output_type, "-m0", "-s1", "-lm", "--output", "-"])
.stdin(Stdio::piped())
@ -537,7 +626,8 @@ fn generate_qr_code(output_type: &str, data: &[u8]) -> Result<Vec<u8>, Error> {
{
let stdin = child.stdin.as_mut()
.ok_or_else(|| format_err!("Failed to open stdin"))?;
stdin.write_all(data)
let data = lines.join("\n");
stdin.write_all(data.as_bytes())
.map_err(|_| format_err!("Failed to write to stdin"))?;
}

View File

@ -8,6 +8,8 @@ mod task;
pub use task::*;
mod catalog;
pub use catalog::*;
mod snapshot;
pub use snapshot::*;
pub mod key;

View File

@ -182,7 +182,9 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
let crypt_config = match keyfile {
None => None,
Some(path) => {
let (key, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
println!("Encryption key file: '{:?}'", path);
let (key, _, fingerprint) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
println!("Encryption key fingerprint: '{}'", fingerprint);
Some(Arc::new(CryptConfig::new(key)?))
}
};
@ -212,6 +214,7 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
).await?;
let (manifest, _) = client.download_manifest().await?;
manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
let file_info = manifest.lookup_file_info(&server_archive_name)?;

View File

@ -0,0 +1,416 @@
use std::sync::Arc;
use anyhow::Error;
use serde_json::{json, Value};
use proxmox::{
api::{api, cli::*},
tools::fs::file_get_contents,
};
use proxmox_backup::{
tools,
api2::types::*,
backup::{
CryptMode,
CryptConfig,
DataBlob,
BackupGroup,
decrypt_key,
}
};
use crate::{
REPO_URL_SCHEMA,
KEYFILE_SCHEMA,
KEYFD_SCHEMA,
BackupDir,
api_datastore_list_snapshots,
complete_backup_snapshot,
complete_backup_group,
complete_repository,
connect,
extract_repository_from_value,
record_repository,
keyfile_parameters,
};
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
group: {
type: String,
description: "Backup group.",
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// List backup snapshots.
async fn list_snapshots(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let output_format = get_output_format(&param);
let client = connect(&repo)?;
let group: Option<BackupGroup> = if let Some(path) = param["group"].as_str() {
Some(path.parse()?)
} else {
None
};
let mut data = api_datastore_list_snapshots(&client, repo.store(), group).await?;
record_repository(&repo);
let render_snapshot_path = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
Ok(snapshot.relative_path().to_str().unwrap().to_owned())
};
let render_files = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
let mut filenames = Vec::new();
for file in &item.files {
filenames.push(file.filename.to_string());
}
Ok(tools::format::render_backup_file_list(&filenames[..]))
};
let options = default_table_format_options()
.sortby("backup-type", false)
.sortby("backup-id", false)
.sortby("backup-time", false)
.column(ColumnConfig::new("backup-id").renderer(render_snapshot_path).header("snapshot"))
.column(ColumnConfig::new("size").renderer(tools::format::render_bytes_human_readable))
.column(ColumnConfig::new("files").renderer(render_files))
;
let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOTS;
format_and_print_result_full(&mut data, info, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// List snapshot files.
async fn list_snapshot_files(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let output_format = get_output_format(&param);
let client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/files", repo.store());
let mut result = client.get(&path, Some(json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
}))).await?;
record_repository(&repo);
let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOT_FILES;
let mut data: Value = result["data"].take();
let options = default_table_format_options();
format_and_print_result_full(&mut data, info, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
}
}
)]
/// Forget (remove) backup snapshots.
async fn forget_snapshots(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let mut client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());
let result = client.delete(&path, Some(json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
}))).await?;
record_repository(&repo);
Ok(result)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Group/Snapshot path.",
},
logfile: {
type: String,
description: "The path to the log file you want to upload.",
},
keyfile: {
schema: KEYFILE_SCHEMA,
optional: true,
},
"keyfd": {
schema: KEYFD_SCHEMA,
optional: true,
},
"crypt-mode": {
type: CryptMode,
optional: true,
},
}
}
)]
/// Upload backup log file.
async fn upload_log(param: Value) -> Result<Value, Error> {
let logfile = tools::required_string_param(&param, "logfile")?;
let repo = extract_repository_from_value(&param)?;
let snapshot = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = snapshot.parse()?;
let mut client = connect(&repo)?;
let (keydata, crypt_mode) = keyfile_parameters(&param)?;
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _created, _) = decrypt_key(&key, &crate::key::get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
};
let data = file_get_contents(logfile)?;
// fixme: how to sign the log?
let blob = match crypt_mode {
CryptMode::None | CryptMode::SignOnly => DataBlob::encode(&data, None, true)?,
CryptMode::Encrypt => DataBlob::encode(&data, crypt_config.as_ref().map(Arc::as_ref), true)?,
};
let raw_data = blob.into_inner();
let path = format!("api2/json/admin/datastore/{}/upload-backup-log", repo.store());
let args = json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
});
let body = hyper::Body::from(raw_data);
client.upload("application/octet-stream", body, &path, Some(args)).await
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// Show notes
async fn show_notes(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/notes", repo.store());
let args = json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
});
let output_format = get_output_format(&param);
let mut result = client.get(&path, Some(args)).await?;
let notes = result["data"].take();
if output_format == "text" {
if let Some(notes) = notes.as_str() {
println!("{}", notes);
}
} else {
format_and_print_result(
&json!({
"notes": notes,
}),
&output_format,
);
}
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
notes: {
type: String,
description: "The Notes.",
},
}
}
)]
/// Update notes
async fn update_notes(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let notes = tools::required_string_param(&param, "notes")?;
let snapshot: BackupDir = path.parse()?;
let mut client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/notes", repo.store());
let args = json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
"notes": notes,
});
client.put(&path, Some(args)).await?;
Ok(Value::Null)
}
fn notes_cli() -> CliCommandMap {
CliCommandMap::new()
.insert(
"show",
CliCommand::new(&API_METHOD_SHOW_NOTES)
.arg_param(&["snapshot"])
.completion_cb("snapshot", complete_backup_snapshot),
)
.insert(
"update",
CliCommand::new(&API_METHOD_UPDATE_NOTES)
.arg_param(&["snapshot", "notes"])
.completion_cb("snapshot", complete_backup_snapshot),
)
}
pub fn snapshot_mgtm_cli() -> CliCommandMap {
CliCommandMap::new()
.insert("notes", notes_cli())
.insert(
"list",
CliCommand::new(&API_METHOD_LIST_SNAPSHOTS)
.arg_param(&["group"])
.completion_cb("group", complete_backup_group)
.completion_cb("repository", complete_repository)
)
.insert(
"files",
CliCommand::new(&API_METHOD_LIST_SNAPSHOT_FILES)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot)
)
.insert(
"forget",
CliCommand::new(&API_METHOD_FORGET_SNAPSHOTS)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot)
)
.insert(
"upload-log",
CliCommand::new(&API_METHOD_UPLOAD_LOG)
.arg_param(&["snapshot", "logfile"])
.completion_cb("snapshot", complete_backup_snapshot)
.completion_cb("logfile", tools::complete_file_name)
.completion_cb("keyfile", tools::complete_file_name)
.completion_cb("repository", complete_repository)
)
}

View File

@ -124,7 +124,7 @@ async fn task_stop(param: Value) -> Result<Value, Error> {
let mut client = connect(&repo)?;
let path = format!("api2/json/nodes/localhost/tasks/{}", upid_str);
let path = format!("api2/json/nodes/localhost/tasks/{}", tools::percent_encode_component(upid_str));
let _ = client.delete(&path, None).await?;
Ok(Value::Null)

View File

@ -475,6 +475,13 @@ impl BackupWriter {
Ok(index)
}
/// Retrieve backup time of last backup
pub async fn previous_backup_time(&self) -> Result<Option<i64>, Error> {
let data = self.h2.get("previous_backup_time", None).await?;
serde_json::from_value(data)
.map_err(|err| format_err!("Failed to parse backup time value returned by server - {}", err))
}
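A hedged sketch of the intended call pattern (the writer binding is hypothetical; create_backup above uses the same probe to decide whether downloading the previous manifest is worthwhile):
    // Sketch: a brand-new backup group has no previous backup time,
    // so the manifest download can be skipped entirely.
    if let Some(backup_time) = writer.previous_backup_time().await? {
        println!("previous backup from {}", backup_time);
    }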
/// Download backup manifest (index.json) of last backup
pub async fn download_previous_manifest(&self) -> Result<BackupManifest, Error> {

View File

@ -534,6 +534,15 @@ impl HttpClient {
self.request(req).await
}
pub async fn put(
&mut self,
path: &str,
data: Option<Value>,
) -> Result<Value, Error> {
let req = Self::request_builder(&self.server, self.port, "PUT", path, data)?;
self.request(req).await
}
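For illustration, a sketch of the new PUT helper as the snapshot notes update above uses it (the repo and client bindings are assumed, and the parameter values are example placeholders):
    // Sketch: update snapshot notes via the new put() helper.
    let path = format!("api2/json/admin/datastore/{}/notes", repo.store());
    client.put(&path, Some(json!({
        "backup-type": "vm",  // example values
        "backup-id": "100",
        "backup-time": 1606000000,
        "notes": "example note"
    }))).await?;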
pub async fn download(
&mut self,
path: &str,

View File

@ -3,6 +3,7 @@ use std::task::{Context, Poll};
use anyhow::{Error};
use futures::*;
use pin_project::pin_project;
use crate::backup::ChunkInfo;
@ -15,7 +16,9 @@ pub trait MergeKnownChunks: Sized {
fn merge_known_chunks(self) -> MergeKnownChunksQueue<Self>;
}
#[pin_project]
pub struct MergeKnownChunksQueue<S> {
#[pin]
input: S,
buffer: Option<MergedChunkInfo>,
}
@ -39,10 +42,10 @@ where
type Item = Result<MergedChunkInfo, Error>;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {
let this = unsafe { self.get_unchecked_mut() };
let mut this = self.project();
loop {
match ready!(unsafe { Pin::new_unchecked(&mut this.input) }.poll_next(cx)) {
match ready!(this.input.as_mut().poll_next(cx)) {
Some(Err(err)) => return Poll::Ready(Some(Err(err))),
None => {
if let Some(last) = this.buffer.take() {
@ -58,13 +61,13 @@ where
match last {
None => {
this.buffer = Some(MergedChunkInfo::Known(list));
*this.buffer = Some(MergedChunkInfo::Known(list));
// continue
}
Some(MergedChunkInfo::Known(mut last_list)) => {
last_list.extend_from_slice(&list);
let len = last_list.len();
this.buffer = Some(MergedChunkInfo::Known(last_list));
*this.buffer = Some(MergedChunkInfo::Known(last_list));
if len >= 64 {
return Poll::Ready(this.buffer.take().map(Ok));
@ -72,7 +75,7 @@ where
// continue
}
Some(MergedChunkInfo::New(_)) => {
this.buffer = Some(MergedChunkInfo::Known(list));
*this.buffer = Some(MergedChunkInfo::Known(list));
return Poll::Ready(last.map(Ok));
}
}
@ -80,7 +83,7 @@ where
MergedChunkInfo::New(chunk_info) => {
let new = MergedChunkInfo::New(chunk_info);
if let Some(last) = this.buffer.take() {
this.buffer = Some(new);
*this.buffer = Some(new);
return Poll::Ready(Some(Ok(last)));
} else {
return Poll::Ready(Some(Ok(new)));
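This change swaps manual Pin::new_unchecked handling for pin-project's safe projection; a minimal standalone sketch of the pattern (illustrative only, not part of this patch):
    use pin_project::pin_project;

    #[pin_project]
    struct Queue<S> {
        #[pin]
        input: S,            // projected as Pin<&mut S>
        buffer: Option<u64>, // projected as &mut Option<u64>
    }
    // Inside poll_next: `let mut this = self.project();` then
    // `this.input.as_mut().poll_next(cx)` and `*this.buffer = ...`
    // replace the former unsafe pointer juggling.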

View File

@ -2,6 +2,7 @@ use anyhow::{bail, Error};
use serde_json::json;
use super::HttpClient;
use crate::tools;
pub async fn display_task_log(
client: HttpClient,
@ -9,7 +10,7 @@ pub async fn display_task_log(
strip_date: bool,
) -> Result<(), Error> {
let path = format!("api2/json/nodes/localhost/tasks/{}/log", upid_str);
let path = format!("api2/json/nodes/localhost/tasks/{}/log", tools::percent_encode_component(upid_str));
let mut start = 1;
let limit = 500;

View File

@ -237,7 +237,15 @@ impl<'a, 'b> Archiver<'a, 'b> {
let old_patterns_count = self.patterns.len();
self.read_pxar_excludes(dir.as_raw_fd())?;
let file_list = self.generate_directory_file_list(&mut dir, is_root)?;
let mut file_list = self.generate_directory_file_list(&mut dir, is_root)?;
if is_root && old_patterns_count > 0 {
file_list.push(FileListEntry {
name: CString::new(".pxarexclude-cli").unwrap(),
path: PathBuf::new(),
stat: unsafe { std::mem::zeroed() },
});
}
let dir_fd = dir.as_raw_fd();
@ -247,7 +255,7 @@ impl<'a, 'b> Archiver<'a, 'b> {
let file_name = file_entry.name.to_bytes();
if is_root && file_name == b".pxarexclude-cli" {
self.encode_pxarexclude_cli(encoder, &file_entry.name)?;
self.encode_pxarexclude_cli(encoder, &file_entry.name, old_patterns_count)?;
continue;
}
@ -379,8 +387,9 @@ impl<'a, 'b> Archiver<'a, 'b> {
&mut self,
encoder: &mut Encoder,
file_name: &CStr,
patterns_count: usize,
) -> Result<(), Error> {
let content = generate_pxar_excludes_cli(&self.patterns);
let content = generate_pxar_excludes_cli(&self.patterns[..patterns_count]);
if let Some(ref mut catalog) = self.catalog {
catalog.add_file(file_name, content.len() as u64, 0)?;
@ -404,14 +413,6 @@ impl<'a, 'b> Archiver<'a, 'b> {
let mut file_list = Vec::new();
if is_root && !self.patterns.is_empty() {
file_list.push(FileListEntry {
name: CString::new(".pxarexclude-cli").unwrap(),
path: PathBuf::new(),
stat: unsafe { std::mem::zeroed() },
});
}
for file in dir.iter() {
let file = file?;
@ -425,10 +426,6 @@ impl<'a, 'b> Archiver<'a, 'b> {
continue;
}
if file_name_bytes == b".pxarexclude" {
continue;
}
let os_file_name = OsStr::from_bytes(file_name_bytes);
assert_single_path_component(os_file_name)?;
let full_path = self.path.join(os_file_name);
@ -443,9 +440,10 @@ impl<'a, 'b> Archiver<'a, 'b> {
Err(err) => bail!("stat failed on {:?}: {}", full_path, err),
};
let match_path = PathBuf::from("/").join(full_path.clone());
if self
.patterns
.matches(full_path.as_os_str().as_bytes(), Some(stat.st_mode as u32))
.matches(match_path.as_os_str().as_bytes(), Some(stat.st_mode as u32))
== Some(MatchType::Exclude)
{
continue;
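The new match_path anchors pattern matching at the filesystem root; a small sketch of the effect (paths are made-up examples):
    // Sketch: a root-anchored exclude such as "/var/tmp" can only match
    // an absolute candidate, hence prefixing the relative archive path
    // with "/" before matching.
    let full_path = std::path::PathBuf::from("var/tmp");
    let match_path = std::path::PathBuf::from("/").join(&full_path);
    assert_eq!(match_path, std::path::PathBuf::from("/var/tmp"));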

View File

@ -81,7 +81,7 @@ const VERIFY_ERR_TEMPLATE: &str = r###"
Job ID: {{job.id}}
Datastore: {{job.store}}
Verification failed on these snapshots:
Verification failed on these snapshots/groups:
{{#each errors}}
{{this~}}

View File

@ -20,6 +20,7 @@ fn files() -> Vec<&'static str> {
fn commands() -> Vec<(&'static str, Vec<&'static str>)> {
vec![
// ("<command>", vec![<arg [, arg]>])
("proxmox-backup-manager", vec!["versions", "--verbose"]),
("df", vec!["-h"]),
("lsblk", vec!["--ascii"]),
("zpool", vec!["status"]),
@ -54,10 +55,10 @@ pub fn generate_report() -> String {
.map(|file_name| {
let content = match file_read_optional_string(Path::new(file_name)) {
Ok(Some(content)) => content,
Ok(None) => String::from("# file does not exists"),
Ok(None) => String::from("# file does not exist"),
Err(err) => err.to_string(),
};
format!("# cat '{}'\n{}", file_name, content)
format!("$ cat '{}'\n{}", file_name, content)
})
.collect::<Vec<String>>()
.join("\n\n");
@ -73,14 +74,14 @@ pub fn generate_report() -> String {
Ok(output) => String::from_utf8_lossy(&output.stdout).to_string(),
Err(err) => err.to_string(),
};
format!("# `{} {}`\n{}", command, args.join(" "), output)
format!("$ `{} {}`\n{}", command, args.join(" "), output)
})
.collect::<Vec<String>>()
.join("\n\n");
let function_outputs = function_calls()
.iter()
.map(|(desc, function)| format!("# {}\n{}", desc, function()))
.map(|(desc, function)| format!("$ {}\n{}", desc, function()))
.collect::<Vec<String>>()
.join("\n\n");

View File

@ -623,6 +623,10 @@ fn check_auth(
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokenid: Authid = tokenid.parse()?;
if !user_info.is_active_auth_id(&tokenid) {
bail!("user account or token disabled or expired.");
}
let tokensecret = parts.next()
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokensecret = percent_decode_str(tokensecret)

View File

@ -69,8 +69,15 @@ pub fn do_verification_job(
let result = verify_all_backups(datastore, worker.clone(), worker.upid(), None, Some(&filter));
let job_result = match result {
Ok(ref errors) if errors.is_empty() => Ok(()),
Ok(_) => Err(format_err!("verification failed - please check the log for details")),
Ok(ref failed_dirs) if failed_dirs.is_empty() => Ok(()),
Ok(ref failed_dirs) => {
worker.log("Failed to verify the following snapshots/groups:");
for dir in failed_dirs {
worker.log(format!("\t{}", dir));
}
Err(format_err!("verification failed - please check the log for details"))
},
Err(_) => Err(format_err!("verification failed - job aborted")),
};

View File

@ -5,16 +5,14 @@ use std::any::Any;
use std::collections::HashMap;
use std::hash::BuildHasher;
use std::fs::File;
use std::io::{self, BufRead, ErrorKind, Read, Seek, SeekFrom};
use std::io::{self, BufRead, Read, Seek, SeekFrom};
use std::os::unix::io::RawFd;
use std::path::Path;
use anyhow::{bail, format_err, Error};
use serde_json::Value;
use openssl::hash::{hash, DigestBytes, MessageDigest};
use percent_encoding::AsciiSet;
use proxmox::tools::vec;
use percent_encoding::{utf8_percent_encode, AsciiSet};
pub use proxmox::tools::fd::Fd;
@ -25,45 +23,43 @@ pub mod borrow;
pub mod cert;
pub mod daemon;
pub mod disks;
pub mod fs;
pub mod format;
pub mod lru_cache;
pub mod runtime;
pub mod ticket;
pub mod statistics;
pub mod systemd;
pub mod nom;
pub mod fs;
pub mod fuse_loop;
pub mod http;
pub mod logrotate;
pub mod loopdev;
pub mod fuse_loop;
pub mod lru_cache;
pub mod nom;
pub mod runtime;
pub mod socket;
pub mod statistics;
pub mod subscription;
pub mod systemd;
pub mod ticket;
pub mod xattr;
pub mod zip;
pub mod http;
mod parallel_handler;
pub use parallel_handler::*;
pub mod parallel_handler;
pub use parallel_handler::ParallelHandler;
mod wrapped_reader_stream;
pub use wrapped_reader_stream::*;
pub use wrapped_reader_stream::{AsyncReaderStream, StdChannelStream, WrappedReaderStream};
mod async_channel_writer;
pub use async_channel_writer::*;
pub use async_channel_writer::AsyncChannelWriter;
mod std_channel_writer;
pub use std_channel_writer::*;
pub mod xattr;
pub use std_channel_writer::StdChannelWriter;
mod process_locker;
pub use process_locker::*;
pub use process_locker::{ProcessLocker, ProcessLockExclusiveGuard, ProcessLockSharedGuard};
mod file_logger;
pub use file_logger::*;
pub use file_logger::{FileLogger, FileLogOptions};
mod broadcast_future;
pub use broadcast_future::*;
pub use broadcast_future::{BroadcastData, BroadcastFuture};
/// The `BufferedRead` trait provides a single function
/// `buffered_read`. It returns a reference to an internal buffer. The
@ -75,76 +71,6 @@ pub trait BufferedRead {
fn buffered_read(&mut self, offset: u64) -> Result<&[u8], Error>;
}
/// Split a file into equal sized chunks. The last chunk may be
/// smaller. Note: We cannot implement an `Iterator`, because iterators
/// cannot return a borrowed buffer ref (we want zero-copy)
pub fn file_chunker<C, R>(mut file: R, chunk_size: usize, mut chunk_cb: C) -> Result<(), Error>
where
C: FnMut(usize, &[u8]) -> Result<bool, Error>,
R: Read,
{
const READ_BUFFER_SIZE: usize = 4 * 1024 * 1024; // 4M
if chunk_size > READ_BUFFER_SIZE {
bail!("chunk size too large!");
}
let mut buf = vec::undefined(READ_BUFFER_SIZE);
let mut pos = 0;
let mut file_pos = 0;
loop {
let mut eof = false;
let mut tmp = &mut buf[..];
// try to read large portions, at least chunk_size
while pos < chunk_size {
match file.read(tmp) {
Ok(0) => {
eof = true;
break;
}
Ok(n) => {
pos += n;
if pos > chunk_size {
break;
}
tmp = &mut tmp[n..];
}
Err(ref e) if e.kind() == ErrorKind::Interrupted => { /* try again */ }
Err(e) => bail!("read chunk failed - {}", e.to_string()),
}
}
let mut start = 0;
while start + chunk_size <= pos {
if !(chunk_cb)(file_pos, &buf[start..start + chunk_size])? {
break;
}
file_pos += chunk_size;
start += chunk_size;
}
if eof {
if start < pos {
(chunk_cb)(file_pos, &buf[start..pos])?;
//file_pos += pos - start;
}
break;
} else {
let rest = pos - start;
if rest > 0 {
let ptr = buf.as_mut_ptr();
unsafe {
std::ptr::copy_nonoverlapping(ptr.add(start), ptr, rest);
}
pos = rest;
} else {
pos = 0;
}
}
}
Ok(())
}
pub fn json_object_to_query(data: Value) -> Result<String, Error> {
let mut query = url::form_urlencoded::Serializer::new(String::new());
@ -363,6 +289,11 @@ pub fn extract_cookie(cookie: &str, cookie_name: &str) -> Option<String> {
None
}
/// percent encode a url component
pub fn percent_encode_component(comp: &str) -> String {
utf8_percent_encode(comp, percent_encoding::NON_ALPHANUMERIC).to_string()
}
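A quick usage check (the input string is an arbitrary example):
    // Sketch: every non-alphanumeric byte is percent-encoded, so task
    // UPIDs containing ':' can be embedded safely in URL paths.
    assert_eq!(percent_encode_component("a:b c"), "a%3Ab%20c");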
pub fn join(data: &Vec<String>, sep: char) -> String {
let mut list = String::new();

View File

@ -361,6 +361,7 @@ where
},
priority: priority_res,
section: section_res,
extra_info: None,
});
}
}

View File

@ -102,6 +102,40 @@ impl From<u64> for HumanByte {
}
}
pub fn as_fingerprint(bytes: &[u8]) -> String {
proxmox::tools::digest_to_hex(bytes)
.as_bytes()
.chunks(2)
.map(|v| std::str::from_utf8(v).unwrap())
.collect::<Vec<&str>>().join(":")
}
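For illustration, the output shape on an all-zero stand-in digest:
    // Sketch: a 32-byte digest renders as 32 colon-separated hex pairs.
    let fp = as_fingerprint(&[0u8; 32]);
    assert!(fp.starts_with("00:00:00:"));
    assert_eq!(fp.len(), 32 * 2 + 31); // 64 hex chars + 31 separators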
pub mod bytes_as_fingerprint {
use serde::{Deserialize, Serializer, Deserializer};
pub fn serialize<S>(
bytes: &[u8; 32],
serializer: S,
) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let s = crate::tools::format::as_fingerprint(bytes);
serializer.serialize_str(&s)
}
pub fn deserialize<'de, D>(
deserializer: D,
) -> Result<[u8; 32], D::Error>
where
D: Deserializer<'de>,
{
let mut s = String::deserialize(deserializer)?;
s.retain(|c| c != ':');
proxmox::tools::hex_to_digest(&s).map_err(serde::de::Error::custom)
}
}
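A hedged sketch of wiring the module up via serde's with attribute (the struct here is a hypothetical stand-in):
    // Sketch: serialize a raw digest as "aa:bb:..." and accept the
    // colon-separated form on input (deserialize strips the ':').
    #[derive(serde::Serialize, serde::Deserialize)]
    struct Fp {
        #[serde(with = "bytes_as_fingerprint")]
        bytes: [u8; 32],
    }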
#[test]
fn correct_byte_convert() {
fn convert(b: usize) -> String {

View File

@ -395,7 +395,7 @@ Ext.define('PBS.dashboard.SubscriptionInfo', {
break;
case 0:
icon = 'times-circle critical';
message = gettext('<h1>No valid subscription</h1>' + PBS.Utils.noSubKeyHtml);
message = `<h1>${gettext('No valid subscription')}</h1>${PBS.Utils.noSubKeyHtml}`;
break;
default:
throw 'invalid subscription status';

View File

@ -83,7 +83,7 @@ js/proxmox-backup-gui.js: .lint-incremental js OnlineHelpInfo.js ${JSSRC}
.PHONY: check
check:
eslint ${JSSRC}
eslint --strict ${JSSRC}
touch ".lint-incremental"
.lint-incremental: ${JSSRC}

View File

@ -75,42 +75,6 @@ const proxmoxOnlineHelpInfo = {
"link": "/docs/pve-integration.html#pve-integration",
"title": "`Proxmox VE`_ Integration"
},
"rst-primer": {
"link": "/docs/reStructuredText-primer.html#rst-primer",
"title": "reStructuredText Primer"
},
"rst-inline-markup": {
"link": "/docs/reStructuredText-primer.html#rst-inline-markup",
"title": "Inline markup"
},
"rst-literal-blocks": {
"link": "/docs/reStructuredText-primer.html#rst-literal-blocks",
"title": "Literal blocks"
},
"rst-doctest-blocks": {
"link": "/docs/reStructuredText-primer.html#rst-doctest-blocks",
"title": "Doctest blocks"
},
"rst-tables": {
"link": "/docs/reStructuredText-primer.html#rst-tables",
"title": "Tables"
},
"rst-field-lists": {
"link": "/docs/reStructuredText-primer.html#rst-field-lists",
"title": "Field Lists"
},
"rst-roles-alt": {
"link": "/docs/reStructuredText-primer.html#rst-roles-alt",
"title": "Roles"
},
"rst-directives": {
"link": "/docs/reStructuredText-primer.html#rst-directives",
"title": "Directives"
},
"html-meta": {
"link": "/docs/reStructuredText-primer.html#html-meta",
"title": "HTML Metadata"
},
"storage-disk-management": {
"link": "/docs/storage.html#storage-disk-management",
"title": "Disk Management"

View File

@ -161,6 +161,11 @@ Ext.define('PBS.Utils', {
return `Datastore ${what} ${id}`;
},
// mimics Display trait in backend
renderKeyID: function(fingerprint) {
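// 23 chars = first 8 bytes of the colon-separated fingerprint,
// e.g. "aa:bb:cc:dd:ee:ff:00:11"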
return fingerprint.substring(0, 23);
},
parse_datastore_worker_id: function(type, id) {
let result;
let res;

View File

@ -76,6 +76,17 @@ Ext.define('PBS.config.UserView', {
}).show();
},
renderName: function(val, cell, rec) {
let name = [];
if (rec.data.firstname) {
name.push(rec.data.firstname);
}
if (rec.data.lastname) {
name.push(rec.data.lastname);
}
return name.join(' ');
},
renderUsername: function(userid) {
return Ext.String.htmlEncode(userid.match(/^(.+)@([^@]+)$/)[1]);
},
@ -181,6 +192,7 @@ Ext.define('PBS.config.UserView', {
width: 150,
sortable: true,
dataIndex: 'firstname',
renderer: 'renderName',
},
{
header: gettext('Comment'),

View File

@ -12,6 +12,7 @@ Ext.define('pbs-data-store-snapshots', {
'files',
'owner',
'verification',
'fingerprint',
{ name: 'size', type: 'int', allowNull: true },
{
name: 'crypt-mode',
@ -182,6 +183,7 @@ Ext.define('PBS.DataStoreContent', {
for (const file of data.files) {
file.text = file.filename;
file['crypt-mode'] = PBS.Utils.cryptmap.indexOf(file['crypt-mode']);
file.fingerprint = data.fingerprint;
file.leaf = true;
file.matchesFilter = true;
@ -699,7 +701,16 @@ Ext.define('PBS.DataStoreContent', {
if (iconCls) {
iconTxt = `<i class="fa fa-fw fa-${iconCls}"></i> `;
}
return (iconTxt + PBS.Utils.cryptText[v]) || Proxmox.Utils.unknownText;
let tip;
if (v !== PBS.Utils.cryptmap.indexOf('none') && record.data.fingerprint !== undefined) {
tip = "Key: " + PBS.Utils.renderKeyID(record.data.fingerprint);
}
let txt = (iconTxt + PBS.Utils.cryptText[v]) || Proxmox.Utils.unknownText;
if (record.parentNode.id === 'root' || tip === undefined) {
return txt;
} else {
return `<span data-qtip="${tip}">${txt}</span>`;
}
},
},
{

View File

@ -84,13 +84,18 @@ Ext.define('PBS.datastore.DataStoreList', {
for (const [store, panel] of Object.entries(me.datastores)) {
if (!found[store]) {
me.remove(panel);
delete me.datastores[store];
}
}
let hasDatastores = Object.keys(me.datastores).length > 0;
me.getComponent('emptybox').setHidden(hasDatastores);
},
addSorted: function(data) {
let me = this;
let i = 0;
let i = 1;
let datastores = Object
.keys(me.datastores)
.sort((a, b) => a.localeCompare(b));
@ -115,7 +120,14 @@ Ext.define('PBS.datastore.DataStoreList', {
initComponent: function() {
let me = this;
me.items = [];
me.items = [
{
itemId: 'emptybox',
hidden: true,
xtype: 'box',
html: gettext('No Datastores configured'),
},
];
me.datastores = {};
// todo make configurable?
me.since = (Date.now()/1000 - 30 * 24*3600).toFixed(0);

View File

@ -56,7 +56,7 @@ Ext.define('PBS.DataStoreNotes', {
url: me.url,
waitMsgTarget: me,
failure: function(response, opts) {
me.update(gettext('Error') + " " + response.htmlStatus);
Ext.Msg.alert(gettext('Error'), response.htmlStatus);
me.setCollapsed(false);
},
success: function(response, opts) {

View File

@ -78,7 +78,7 @@ Ext.define('PBS.DataStoreInfo', {
let datastore = encodeURIComponent(view.datastore);
me.store = Ext.create('Proxmox.data.ObjectStore', {
interval: 5*1000,
url: `/api2/json/admin/datastore/${datastore}/status`,
url: `/api2/json/admin/datastore/${datastore}/status/?verbose=true`,
});
me.store.on('load', me.onLoad, me);
},
@ -264,6 +264,13 @@ Ext.define('PBS.DataStoreSummary', {
me.down('pbsDataStoreInfo').setTitle(`${me.datastore} (${path})`);
me.down('pbsDataStoreNotes').setNotes(response.result.data.comment);
},
failure: function(response) {
// fallback if e.g. we have no permissions to the config
let rec = Ext.getStore('pbs-datastore-list').findRecord('store', me.datastore);
if (rec) {
me.down('pbsDataStoreNotes').setNotes(rec.data.comment || "");
}
},
});
me.query('proxmoxRRDChart').forEach((chart) => {

View File

@ -14,66 +14,54 @@ Ext.define('PBS.panel.PruneInputPanel', {
column1: [
{
xtype: 'proxmoxintegerfield',
fieldLabel: gettext('Keep Last'),
xtype: 'pbsPruneKeepInput',
name: 'keep-last',
fieldLabel: gettext('keep-last'),
cbind: {
deleteEmpty: '{!isCreate}',
},
minValue: 1,
allowBlank: true,
},
{
xtype: 'proxmoxintegerfield',
fieldLabel: gettext('Keep Daily'),
xtype: 'pbsPruneKeepInput',
name: 'keep-daily',
fieldLabel: gettext('Keep Daily'),
cbind: {
deleteEmpty: '{!isCreate}',
},
minValue: 1,
allowBlank: true,
},
{
xtype: 'proxmoxintegerfield',
fieldLabel: gettext('Keep Monthly'),
xtype: 'pbsPruneKeepInput',
name: 'keep-monthly',
fieldLabel: gettext('Keep Monthly'),
cbind: {
deleteEmpty: '{!isCreate}',
},
minValue: 1,
allowBlank: true,
},
],
column2: [
{
xtype: 'proxmoxintegerfield',
xtype: 'pbsPruneKeepInput',
fieldLabel: gettext('Keep Hourly'),
name: 'keep-hourly',
cbind: {
deleteEmpty: '{!isCreate}',
},
minValue: 1,
allowBlank: true,
},
{
xtype: 'proxmoxintegerfield',
fieldLabel: gettext('Keep Weekly'),
xtype: 'pbsPruneKeepInput',
name: 'keep-weekly',
fieldLabel: gettext('Keep Weekly'),
cbind: {
deleteEmpty: '{!isCreate}',
},
minValue: 1,
allowBlank: true,
},
{
xtype: 'proxmoxintegerfield',
fieldLabel: gettext('Keep Yearly'),
xtype: 'pbsPruneKeepInput',
name: 'keep-yearly',
fieldLabel: gettext('Keep Yearly'),
cbind: {
deleteEmpty: '{!isCreate}',
},
minValue: 1,
allowBlank: true,
},
],

View File

@ -100,17 +100,26 @@ Ext.define('PBS.window.UserEdit', {
xtype: 'proxmoxtextfield',
name: 'firstname',
fieldLabel: gettext('First Name'),
cbind: {
deleteEmpty: '{!isCreate}',
},
},
{
xtype: 'proxmoxtextfield',
name: 'lastname',
fieldLabel: gettext('Last Name'),
cbind: {
deleteEmpty: '{!isCreate}',
},
},
{
xtype: 'proxmoxtextfield',
name: 'email',
fieldLabel: gettext('E-Mail'),
vtype: 'proxmoxMail',
cbind: {
deleteEmpty: '{!isCreate}',
},
},
],
@ -119,6 +128,9 @@ Ext.define('PBS.window.UserEdit', {
xtype: 'proxmoxtextfield',
name: 'comment',
fieldLabel: gettext('Comment'),
cbind: {
deleteEmpty: '{!isCreate}',
},
},
],
},