Compare commits

...

263 Commits

Author SHA1 Message Date
2d87f2fb73 bump version to 1.0.6-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-11 14:19:28 +01:00
4c81273274 debian: just install whole images directory
fixes build for recently added tape icon (and includes it for real)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-11 14:19:28 +01:00
73b8f6793e tape: add svg icon 2020-12-11 13:02:23 +01:00
663ef85992 tape: use WorkerTask for erase and rewind 2020-12-11 11:19:33 +01:00
e92c75815b tape: split inventory api
inventory: sync, list labels with uuids,
update_inventory: WorkerTask, updates database
2020-12-11 10:42:29 +01:00
6dbad5b4b5 tape: run label commands as WorkerTask (threads) 2020-12-11 09:10:22 +01:00
bff7e3f3e4 tape: implement barcode-label-media 2020-12-11 07:50:19 +01:00
83abc7497d tape: implement inventory command 2020-12-11 07:39:28 +01:00
8bc5eebeb8 depend on package mt-st
We do not use the mt utility directly, but the package also provides
a udev helper to correctly initialize tape drives (stinit). Also,
the mt utility is helpful for debugging tape issues.
2020-12-11 06:38:45 +01:00
1433b96ba0 control.in: fix indentation
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-12-11 06:31:30 +01:00
be1a8c94ae fix build: add missing file 2020-12-10 13:40:20 +01:00
4606f34353 tape: implement read-label command 2020-12-10 13:20:39 +01:00
7bb720cb4d tape: implement label command 2020-12-10 12:30:27 +01:00
c4d8542ec1 tape: add media pool handling 2020-12-10 11:41:35 +01:00
9700d5374a tape: add media pool cli 2020-12-10 11:13:12 +01:00
05e90d6463 tape: add media pool config api 2020-12-10 10:52:27 +01:00
55118ca18e tape: correctly sort drive api subdir 2020-12-10 10:09:12 +01:00
f70d8091d3 tape: implement option changer-drive-id 2020-12-10 09:09:06 +01:00
a3c709ef21 tape: cli cleanup - avoid api redefinition 2020-12-10 08:35:11 +01:00
4917f1e2d4 tape: implement delete property for drive update command 2020-12-10 08:25:46 +01:00
93829fc680 tape: cleanup load-slot api 2020-12-10 08:04:55 +01:00
5605ca5619 tape: cli cleanup - rename scan-for-* into scan 2020-12-10 07:58:45 +01:00
e49f0c03d9 tape: implement load-media command 2020-12-10 07:52:56 +01:00
0098b712a5 tape: implement eject 2020-12-09 17:50:48 +01:00
5fb694e8c0 tape: implement rewind 2020-12-09 17:43:38 +01:00
583a68a446 tape: implement erase media 2020-12-09 17:35:31 +01:00
e6604cf391 tape: add command line interface proxmox-tape 2020-12-09 13:00:20 +01:00
43cfb3c35a tape: do not remove changer while still used 2020-12-09 12:55:54 +01:00
8a16c571d2 tape: add changer property to drive create api 2020-12-09 12:55:10 +01:00
314652a499 tape: set protected flag for configuration change api methods 2020-12-09 12:02:55 +01:00
6b68e5d597 client: move connect_to_localhost into client module 2020-12-09 11:59:50 +01:00
cafd51bf42 tape: add media state database 2020-12-09 11:21:56 +01:00
eaff09f483 update control file 2020-12-09 11:21:56 +01:00
9b93c62044 remove unused descriptions from api macros
these are now a hard error in the api macro

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-12-09 10:55:18 +01:00
5d90860688 tape: expose basic tape/changer functionality at api2/tape/ 2020-12-08 15:42:50 +01:00
5ba83ed099 tape: check digest on config update 2020-12-08 11:24:38 +01:00
50bf10ad56 tape: add changer configuration API 2020-12-08 09:04:56 +01:00
16d444c979 tape: add tape drive configuration API 2020-12-07 13:04:32 +01:00
fa9c9be737 tape: add tape device driver 2020-12-07 08:29:22 +01:00
2e7014e31d tape: add BlockedReader/BlockedWriter streams
This is the basic format used to write data to tapes.
2020-12-06 12:09:55 +01:00
a84050c1f0 tape: add BlockHeader impl 2020-12-06 10:26:24 +01:00
7c9835465e tape: add helpers to emulate tape read/write behavior 2020-12-06 09:41:16 +01:00
ec00200411 fix bug #3189: fix change_password permission checks, run protected 2020-12-05 16:20:29 +01:00
956e5fec1f depend on mtx (tape changer control)
A very small package with no additional dependencies.
2020-12-05 14:54:12 +01:00
b107fdb99a tape: add tape changer support using 'mtx' command 2020-12-05 14:54:12 +01:00
7320e9ff4b tape: add media inventory 2020-12-05 12:54:15 +01:00
c4d2d54a6d tape: define useful constants 2020-12-05 12:20:46 +01:00
1142350e8d tape: add media pool config 2020-12-05 11:59:38 +01:00
d735b31345 tape: add tape read trait 2020-12-05 10:54:38 +01:00
e211fee562 tape: add tape write trait 2020-12-05 10:51:34 +01:00
8c15560b68 tape: add file format definitions 2020-12-05 10:45:08 +01:00
327e93711f commit missing file: tape api type definitions 2020-12-04 16:00:52 +01:00
a076571470 tape support: add drive configuration 2020-12-04 15:42:32 +01:00
ff50c07ebf start experimental tape management GUI
You need to set the environment TEST_TAPE_GUI=1 to enable this.
The current GUI is only a placeholder.
2020-12-04 12:50:08 +01:00
179145dc24 backup/datastore: move manifest locking to /run
this fixes the issue that on some filesystems, you cannot recursively
remove a directory when you hold a lock on a file inside (e.g. nfs/cifs)

it is not really backwards compatible (so during an upgrade, there
could be two daemons holding the lock), but since the locking was
broken before (see previous patch) it should not really matter
(also it seems very unlikely that someone will trigger this)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-12-03 09:56:42 +01:00
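A hedged sketch of the idea (the path layout here is illustrative, not the exact one used in the codebase): keep the lock file on tmpfs under /run, keyed by datastore and snapshot, so holding it never pins a file inside the snapshot directory itself:

    use std::path::PathBuf;

    /// Sketch: place manifest locks on tmpfs instead of inside the
    /// snapshot directory, so the directory can be removed on NFS/CIFS
    /// even while the lock is held.
    fn manifest_lock_path(datastore: &str, backup_dir: &str) -> PathBuf {
        PathBuf::from("/run/proxmox-backup/locks")
            .join(datastore)
            .join(format!("{}.index.json.lck", backup_dir))
    }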
6bd0a00c46 backup/datastore: really lock manifest on delete
'lock_manifest' returns a Result<File, Error>, so we always got the result,
even when we did not get the lock, but we acted as if we had it.

bubble the locking error up

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-12-02 14:37:05 +01:00
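The pitfall, as a minimal sketch (names hypothetical, not the actual patch): a Result<File, Error> is itself a value, so binding it looks like holding a lock even when locking failed; the error has to be propagated with `?` before proceeding:

    use std::{fs::File, io};

    /// Stand-in for the real helper: tries to lock the manifest and
    /// returns the lock file on success.
    fn lock_manifest(path: &str) -> io::Result<File> {
        File::open(path)
    }

    fn delete_snapshot(path: &str) -> io::Result<()> {
        // BUG: `let _guard = lock_manifest(path);` binds the Result and
        // silently ignores a locking failure. Fixed version bubbles it up:
        let _guard = lock_manifest(path)?;
        // ... remove the snapshot while the guard is held ...
        Ok(())
    }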
f6e28f4e62 client/pull: log how many groups to pull were found
if no groups were found, the task log was very confusing as it
contained no real information about why nothing was synced, e.g.:

 Starting datastore sync job 'remote:datastore:local-datastore:s-79412799-e6ee'
 Sync datastore 'local-datastore' from 'remote/datastore'
 sync job 'remote:datastore:local-datastore:s-79412799-e6ee' end
 TASK OK

this patch simply logs how many groups were found and are about to be synced:

 Starting datastore sync job 'remote:datastore:local-datastore:s-79412799-e6ee'
 Sync datastore 'local-datastore' from 'remote/datastore'
 found 0 groups to sync
 sync job 'remote:datastore:local-datastore:s-79412799-e6ee' end
 TASK OK

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-12-02 07:22:50 +01:00
37f1b7dd8d docs: add more thoughts about chunk size 2020-12-01 10:28:06 +01:00
60e6ee46de doc: add some thoughts about large chunk sizes 2020-12-01 08:47:15 +01:00
2260f065d4 cleanup: use extra file for StoreProgress 2020-12-01 06:34:33 +01:00
6eff8dec4f cleanup: remove unnecessary StoreProgress clone() 2020-12-01 06:29:11 +01:00
7e25b9aaaa verify: use same progress as pull
percentage of verified groups, interpolating based on snapshot count
within the group. in most cases, this will also be closer to 'real'
progress since added snapshots (those which will be verified) in active
backup groups will be roughly evenly distributed, while the number of total
snapshots per group will be heavily skewed towards those groups which
have existed the longest, even though most of those old snapshots will
only be re-verified very infrequently.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:22:55 +01:00
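A small sketch of the interpolation (function and parameter names are illustrative): progress advances by whole groups, plus the fraction of snapshots finished within the current group:

    /// Percentage of verified work, interpolated within the current group.
    fn percentage(done_groups: usize, total_groups: usize,
                  done_snapshots: usize, group_snapshots: usize) -> f64 {
        if total_groups == 0 { return 100.0; }
        let within = if group_snapshots == 0 {
            0.0
        } else {
            done_snapshots as f64 / group_snapshots as f64
        };
        (done_groups as f64 + within) / total_groups as f64 * 100.0
    }

    fn main() {
        // 3 of 10 groups done, 5 of 20 snapshots verified in the 4th group
        println!("{:.2}%", percentage(3, 10, 5, 20)); // 32.50%
    }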
f867ef9c4a progress: add format variants
for iterating over a single group, or iterating just on the group level

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:22:12 +01:00
fc8920e35d pull: factor out interpolated progress
and add group/snapshot count info.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:13:11 +01:00
7f3b0f67e7 remove BackupGroup::list_groups
BackupInfo::list_backup_groups is identical code-wise, and makes more
sense as entry point for listing groups.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:09:44 +01:00
844660036b gc: don't limit index listing to same filesystem
WalkDir does not follow symlinks by default anyway, and this behaviour
is not documented anywhere. e.g., if a sysadmin mounts 'extra storage'
for some backup group or type (not knowing that only metadata is stored
in those directories), GC will ignore all the indices contained within
and happily garbage collect their chunks..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:07:09 +01:00
efcac39d34 gc: remove duplicate variable
list_images already returns absolute paths, we don't need to prepend
anything.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:06:51 +01:00
cb4b721cb0 gc: log index files found outside of expected scheme
for safety reasons, GC finds and marks all index files below the
datastore base path. as a result of regular operations, only index files
within the expected scheme of <TYPE>/<ID>/<TIMESTAMP> should exist.

add a small check + warning if the index list contains index files
outside of this expected scheme, so that an admin with shell access can
investigate.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:06:17 +01:00
7956877f14 gc: shorten progress messages
we have messages starting the phases anyway, and limit the number of
progress updates so that context remains available at all times.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:04:13 +01:00
2241c6795f d/control bump
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 14:28:02 +01:00
43e60ceb41 file logger: remove test.log after test as well
and a doc formatting fixup

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 14:13:21 +01:00
b760d8a23f derive PartialEq for Userid
the manual implementation is equivalent

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 14:10:17 +01:00
2c1592263d tiny clippy hint
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 14:03:43 +01:00
616533823c don't enforce Vec and String in tools::join
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 13:56:59 +01:00
913dddea85 minor cleanup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 13:56:21 +01:00
3530430365 tools avoid unnecessary copying of parameters/properties
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 13:53:49 +01:00
a4ba60be8f minor cleanups
whitespace, formatting and superfluous lifetime annotations

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 13:47:31 +01:00
99e98f605c network helpers: fix fd leak in get_network_interfaces
This one always leaked.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
935ee97b17 use fd_change_cloexec helper
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
6b9bfd7fe9 minor cleanup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
dd519bbad1 pxar: stricter file descriptor guards
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
35fe981c7d client: use tools::pipe instead of nix
nix::unistd::pipe returns unguarded RawFds which should be
avoided

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
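The gist of a "guarded" descriptor, sketched with the libc crate (an illustration of the pattern, not the project's actual type): wrap the raw fd in an owner whose Drop closes it, so it cannot leak on early returns:

    use std::os::unix::io::RawFd;

    /// Owned fd guard: the descriptor is closed when the guard is dropped.
    struct Fd(RawFd);

    impl Drop for Fd {
        fn drop(&mut self) {
            unsafe { libc::close(self.0); } // nothing useful to do on error
        }
    }

    /// pipe() that returns guarded ends instead of bare RawFds.
    fn pipe() -> std::io::Result<(Fd, Fd)> {
        let mut fds = [0 as RawFd; 2];
        if unsafe { libc::pipe(fds.as_mut_ptr()) } != 0 {
            return Err(std::io::Error::last_os_error());
        }
        Ok((Fd(fds[0]), Fd(fds[1])))
    }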
b6570abe79 changes for proxmox 0.8
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
54813c650e bump proxmox dep to 0.8.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
781106f8c5 ui: fix usage of findRecord
findRecord does not match exactly, but only at the beginning, and
case-insensitively, by default. Change all calls to be case sensitive
and an exact match (we never want the default behaviour, afaics).

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-27 07:20:32 +01:00
96f35520a0 bump version to 1.0.5-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-25 15:30:06 +01:00
490560e0c6 restore: print to STDERR
else restoring to STDOUT is broken..

Reported-by: Dominic Jäger <d.jaeger@proxmox.com>

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-25 14:38:02 +01:00
52f53d8280 control: update versions 2020-11-25 10:35:51 +01:00
27b8a3f671 bump version to 1.0.4-1 2020-11-25 08:03:11 +01:00
abf9b6da42 docs: fix renamed commands 2020-11-25 08:03:11 +01:00
0c9209b04c cli: rename command "upload-log" to "snapshot upload-log" 2020-11-25 07:57:39 +01:00
edebd52374 cli: rename command "forget" to "snapshot forget" 2020-11-25 07:57:39 +01:00
61205f00fb cli: rename command "files" to "snapshot files" 2020-11-25 07:57:39 +01:00
a303e00289 fingerprint: add new() method 2020-11-25 07:57:39 +01:00
af9f72e9d8 fingerprint: add bytes() accessor
needed for libproxmox-backup-qemu0

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-25 06:34:34 +01:00
5176346b30 ui: fix broken gettext use
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-25 00:21:17 +01:00
731eeef25b cli: use new alias feature for "snapshots"
Now maps to "snapshot list".
2020-11-24 13:26:43 +01:00
a65e3e4bc0 client: add 'snapshot notes show/update' command
to show and update snapshot notes from the cli

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-24 11:44:19 +01:00
027eb2bbe6 bump version to 1.0.3-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-24 08:56:18 +01:00
6982a54701 gui: add snapshot/file fingerprint tooltip
display short key ID, like backend's Display trait.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
035c40e638 list_snapshots: return manifest fingerprint
for display in clients.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
79c535955d refactor BackupInfo -> SnapshotListItem helper
before adding more fields to the tuple, let's just create the struct
inside the match arms to improve readability.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
8b7f8d3f3d expose previous backup time in backup env
and use this information to add more detail to the client backup log
and guide the download manifest decision.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
866c859a1e bump version to 1.0.2-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-24 08:33:20 +01:00
23e4e90540 verification: fix message in notification mail
the errors Vec can contain failed groups as well (e.g., if a group has
no or an invalid owner).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:33:20 +01:00
a4fa3fc241 verification job: log failed dirs
else users have to manually search through a potentially very long task
log to find the entries that are different.. this is the same summary
printed at the end of a manual verify task.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:33:20 +01:00
81d10c3b37 cleanup: remove dead code 2020-11-24 08:03:00 +01:00
f1e2904150 paperkey: refactor common code
from formatting functions to main function, and pass along the key data
lines instead of the full string.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:57:21 +01:00
23f9503a31 client: check fingerprint after downloading manifest
this is stricter than the check that happened on manifest load, as it
also fails if the manifest is signed but we don't have a key available.

add some additional output at the start of a backup to indicate whether
a previous manifest is available to base the backup on.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:55:12 +01:00
a0ef68b93c manifest: check fingerprint when loading with key
otherwise loading will run into the signature mismatch which is
technically true, but not the complete picture in this case.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:49:51 +01:00
6b127e6ea0 fix #3139: add key fingerprint to manifest
if the manifest is signed/the contained archives/blobs are encrypted.
stored in 'unprotected' area, since there is already a strong binding
between key and manifest via the signature, and this avoids breaking
backwards compatibility for a simple usability improvement.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:45:11 +01:00
5e17dbf2bb cli: cleanup 'key show' - use format_and_print_result_full
We now expose all key derivation functions on the cli, so users can
choose between scrypt or pbkdf2.
2020-11-24 07:32:34 +01:00
dfb04575ad client: add 'key show' command
for (pretty-)printing a keyfile.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:15:29 +01:00
6f2626ae19 client: print key fingerprint and master key
for operations where it makes sense.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:11:26 +01:00
37e60ddcde key: add fingerprint to key config
and set/generate it on
- key creation
- key passphrase change
- key decryption if not already set
- key encryption with master key

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:03:46 +01:00
05cdc05347 crypt config: add fingerprint mechanism
by computing the ID digest of a hash of a static string.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:03:16 +01:00
6364115b4b OnlineHelpInfo.js problems
Anybody know why I always get the following diff:
2020-11-23 12:57:41 +01:00
2133cd9103 update debian/control 2020-11-23 12:13:58 +01:00
01f84fcce1 ui: datastore content: use our keep field for group pruning
sets some defaults and provides the clear trigger, so less code and
slightly nicer UX.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-21 19:52:03 +01:00
08b3823025 bump dependency on proxmox to 0.7.1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-20 17:38:34 +01:00
968a0ab261 fix systemd-encoded upid strings in http client
since we systemd-encode parts of the upid string, and those can contain
characters that are invalid in urls (e.g. '\'), we have to percent encode
those

add a 'percent_encode_component' helper, so that we can maybe change
the AsciiSet for all uses at the same time

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-19 11:01:19 +01:00
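A sketch of such a helper on top of the percent-encoding crate (already in the dependency list); the AsciiSet the real helper settles on may differ:

    use percent_encoding::{percent_encode, NON_ALPHANUMERIC};

    /// Percent-encode a single path component of a URL.
    fn percent_encode_component(comp: &str) -> String {
        percent_encode(comp.as_bytes(), NON_ALPHANUMERIC).to_string()
    }

    fn main() {
        // systemd-escaped UPID parts can contain '\', invalid in URLs
        assert_eq!(percent_encode_component(r"foo\x2ddaily"), "foo%5Cx2ddaily");
    }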
21b552848a prune sim: make numberfields more similar to PBS's
by creating a new class that adds a clear trigger and also uses the
clear-trigger image. Code was taken from the one in PBS's prune window,
but we have default values here, so a bit of adapting was necessary. For
example, we don't want to reset to the original value (which might have
been one of the defaults) when clearing, but always to 'null'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-19 09:47:51 +01:00
fd19256470 gc: treat .bad files like regular chunks
Simplify the phase 2 code by treating .bad files just like regular
chunks, with the exception of stat logging.

To facilitate this, we need to touch .bad files in phase 1. We only do this
under the condition that 1) the original chunk is missing (as before),
and 2) the original chunk is still referenced somewhere (since the code
lives in the error handler for a failed chunk touch, it only gets called
for chunks we expect to be there, i.e. ones that are referenced).

Untouched they will then be cleaned up after 24 hours (or after the last
longer-running task finishes).

Reason 2) is also a fix for .bad files not being cleaned up at all if
the original is no longer referenced anywhere (e.g. a user deleting all
snapshots after seeing some corrupt chunks appear).

cond_touch_path is introduced to touch arbitrary paths in the chunk
store with the same logic as touching chunks.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-18 14:04:49 +01:00
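The phase-2 side of this, reduced to a hedged sketch (the counters and atime cutoff are simplified stand-ins for the real chunk store logic): .bad files go through the same sweep as chunks and differ only in the accounting:

    use std::{fs, io, path::Path, time::SystemTime};

    /// Remove everything in `dir` not accessed since `min_atime`;
    /// `.bad` files are treated like chunks, only counted separately.
    fn sweep_unused(dir: &Path, min_atime: SystemTime,
                    removed_chunks: &mut u64, removed_bad: &mut u64) -> io::Result<()> {
        for entry in fs::read_dir(dir)? {
            let entry = entry?;
            if entry.metadata()?.accessed()? < min_atime {
                let is_bad = entry.file_name().to_string_lossy().ends_with(".bad");
                fs::remove_file(entry.path())?;
                if is_bad { *removed_bad += 1 } else { *removed_chunks += 1 }
            }
        }
        Ok(())
    }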
1ed022576c api: include store in invalid owner errors
since a group might exist in plenty of stores

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:11:24 +01:00
f6aa7b38bf drop now unused BackupInfo::list_backups
all global backup listing now happens via BackupGroup

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:11:21 +01:00
fdfcb74d67 api: filter snapshot counts
unprivileged users should only see the counts related to their part of
the datastore.

while we're at it, switch to a list groups, filter groups, count
snapshots approach (like list_snapshots) to speed up calls to this
endpoint when many unprivileged users share a datastore.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:05:50 +01:00
98afc7b152 api: make expensive parts of datastore status opt-in
used in the PBS GUI, but also for PVE usage queries which don't need all
the extra expensive information..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:05:47 +01:00
0d08fceeb9 improve group/snapshot listing
by listing groups first, then filtering, then listing group snapshots.

this cuts down the number of openat/getdirents calls for users that just
have a partial view of the datastore.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 10:37:04 +01:00
3c945d73c2 client/http_client: add put method
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 16:59:14 +01:00
58fcbf5ab7 client: expose all-file-systems option
Useful to avoid the need for a long (and possibly changing) list of include-dev
options in certain situations, e.g. nested ZFS file systems. The option is
already implemented and seems to work as expected. The checks for virtual
filesystems are not affected by this option.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-16 16:59:14 +01:00
3a3f31c947 ui: datastores: hide "no datastore" box by default
avoids showing it during store load; at that point we do not know yet
whether there are any datastores, and a loading mask is already shown.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-16 16:59:14 +01:00
8fc63287df ui: improve comment behaviour for datastore Summary
when we could not load the config (e.g. missing permissions)
show the comment from the global datastore-list

also show a messagebox for a load error instead of setting
the text of the comment box

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 10:39:34 +01:00
172473e4de ui: DataStoreList: show message when there are no datastores
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 10:39:34 +01:00
76f549debb ui: DataStoreList: remove datastores also from hash
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 10:39:34 +01:00
c9097ff801 pxar: avoid including archive root's exclude patterns in .pxarexclude-cli
The patterns from the archive root's .pxarexclude file are already present in
self.patterns when encode_pxarexclude_cli is called. Pass along the number of
CLI patterns and slice accordingly.

Suggested-By: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 13:05:09 +01:00
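In sketch form (names hypothetical): since the CLI patterns are known to occupy the front of the combined list, the generated file content is built from that prefix only:

    /// Serialize only the first `num_cli_patterns` entries, i.e. the
    /// patterns that actually came from the command line.
    fn pxarexclude_cli_content(patterns: &[String], num_cli_patterns: usize) -> Vec<u8> {
        let mut content = Vec::new();
        for pattern in &patterns[..num_cli_patterns] {
            content.extend_from_slice(pattern.as_bytes());
            content.push(b'\n');
        }
        content
    }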
fb01fd3af6 visibility cleanups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:53:50 +01:00
fa4bcbcad0 pxar: only generate .pxarexclude-cli if there were CLI parameters
previously a .pxarexclude entry in the root of the archive caused the file to
be generated as well, because the patterns are read before calling
generate_directory_file_list and within the function it wasn't possible to
distinguish between a pattern coming from the CLI and a pattern coming from
archive/root/.pxarexclude

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:18:08 +01:00
189cdb7427 pxar: include .pxarexclude files in the archive
The documentation states:
.pxarexclude files are treated as regular files and will be included in the
backup archive.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:18:06 +01:00
874bd5454d pxar: fix anchored exclusion at archive root
There is no leading slash in an entry's full_path, causing an anchored
exclude at the root level to fail, e.g. having "/name" as the content of the
file archive/root/.pxarexclude didn't match the file archive/root/name

Fix this by prepending a leading slash before matching.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:18:04 +01:00
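The fix boils down to something like this sketch: make the candidate path absolute before handing it to the pattern matcher:

    use std::borrow::Cow;

    /// Ensure a leading slash so anchored patterns like "/name" can match.
    fn anchored(full_path: &str) -> Cow<'_, str> {
        if full_path.starts_with('/') {
            Cow::from(full_path)
        } else {
            Cow::from(format!("/{}", full_path))
        }
    }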
b649887e9a remove unused function
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:15:15 +01:00
8c62c15f56 followup: whitespace cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 11:02:45 +01:00
51ac17b56e api: apt/versions: fix running_kernel string for unknown package case
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-11-12 11:02:20 +01:00
fc5a012068 manager: versions: non-verbose should actually print server pkg info
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 10:28:03 +01:00
5e293f1315 apt: use typed response for get_versions
...and cleanup get_versions for manager CLI.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-12 10:15:32 +01:00
87367decf2 ui: tell ESLint to be strict in check target
the .lint-incremental target, which is implicitly used by the install
target, is still more forgiving to allow faster "change, build, test"
iteration when developing.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 09:54:39 +01:00
f792220dd4 d/control: update for new pin-project dependency
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 09:54:39 +01:00
97030c9407 cleanup clippy leftovers
this used to contain a pointer cast, now it doesn't

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 09:43:38 +01:00
5d1d0f5d6c use pin-project to remove more unsafe blocks
we already have it in our dependency tree, so use it

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 09:43:38 +01:00
294466ee61 manager: versions: unify printing
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
c100fe9108 add versions command to proxmox-backup-manager
Add the versions command to proxmox-backup-manager with a similar output
to pveversion [-v]. It prints the packages line by line with only the
package name, followed by the version and, for proxmox-backup and
proxmox-backup-server, some additional information (running kernel,
running version).

In addition it supports the optional output-format parameter which can
be used to print the complete data in either json, json-pretty or text
format. If output-format is specified, the --verbose parameter is
ignored and the detailed list of packages is printed.

With the addition of the versions command, the report is extended as
well.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-11-11 18:30:33 +01:00
e754da3ac2 api: versions: add version also in server package unknown case
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
bc1e52bc38 api: versions: rust fmt cleanups
line length limit is 100

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
6f0073bbb5 api: apt update info: do not serialize extra info if none
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
2decf85d6e add extra_info field to APTUpdateInfo
Add an optional string field to APTUpdateInfo which can be used for
extra information.

This is used for passing running kernel and running version information
in the versions API call together with proxmox-backup and
proxmox-backup-server.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-11-11 16:39:11 +01:00
1d8f849457 api2/node/syslog: use 'real_service_name' here also
for now this only does the 'postfix' -> 'postfix@-' conversion,
fixes the issue that we only showed the 'postfix' service syslog
(which is rather empty in a default setup) instead of the instance one

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 16:36:42 +01:00
beb07279b6 log source of encryption key
This patch prints the source of the encryption key when running
operations with proxmox-backup-client.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-11-11 16:35:20 +01:00
8c6854c8fd inform user when using default encryption key
Currently if you generate a default encryption key:
`proxmox-backup-client key create --kdf none`

all backup operations which don't explicitly disable encryption will be
encrypted with this key.

I found it quite surprising that my backups were all encrypted without
me explicitly specifying either a key or an encryption mode

This patch informs the user when the default key is used (and no
crypt-mode is provided explicitly)

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-11-11 16:35:20 +01:00
57f472d9bb report: use '$' instead of '#' for showing commands
since some files can contain the '#' character for comments (e.g.,
/etc/hosts)

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-11-11 16:19:37 +01:00
94ffca10a2 report: fix grammar error
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-11-11 16:19:33 +01:00
0a274ab0a0 ui: UserView: render name as 'Firstname Lastname'
instead of only the firstname

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 14:09:40 +01:00
c0026563b0 make user properties deletable
by using our usual pattern for the update call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 14:09:40 +01:00
e411924c7c rest: check for disabled token (user)
when authenticating a token, and not just when authenticating a
user/ticket.

Reported-By: Dominik Jäger <d.jaeger@proxmox.com>

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-11 12:21:29 +01:00
709c15abaa bump version to 1.0.1-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 10:21:30 +01:00
b404e4d930 d/control: check in new dependencies to generated control
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 10:21:30 +01:00
f507580c3f docs: faq: fix first releases
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 10:14:01 +01:00
291b786076 docs: fix prune retention example
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 10:14:01 +01:00
06c9059dac daemon: rename method, endless loop, bail on exec error
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 10:14:01 +01:00
d7c6ad60dd daemon: add hack for sd_notify
sd_notify is not synchronous, iow. it only waits until the message
reaches the queue not until it is processed by systemd

when the process that sent such a message exits before systemd could
process it, it cannot be associated with the correct pid

so in case of reloading, we send a message with 'MAINPID=<newpid>'
to signal that it will change. if the old process now exits before
systemd knows this, it will not accept the 'READY=1' message from the
child, since it rejects the MAINPID change

since there is no (AFAICS) library interface to check the unit status,
we use 'systemctl is-active <SERVICE_NAME>' to check the state until
it is not 'reloading' anymore.

on newer systemd versions, there is 'sd_notify_barrier' which would
allow us to wait for systemd to have all messages from the current
pid to be processed before acknowledging to the child, but on buster
the systemd version is too old...

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 09:43:00 +01:00
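The polling part of the workaround, as a minimal sketch:

    use std::process::Command;
    use std::{thread, time::Duration};

    /// Block until systemd reports the unit is no longer 'reloading'.
    fn wait_until_reload_processed(unit: &str) -> std::io::Result<()> {
        loop {
            let output = Command::new("systemctl")
                .args(&["is-active", unit])
                .output()?;
            if String::from_utf8_lossy(&output.stdout).trim() != "reloading" {
                return Ok(());
            }
            thread::sleep(Duration::from_millis(100));
        }
    }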
0a0ba0785b prune sim: avoid colon to separate keep desc from count
hack for space issues for monthly keeps and >9 counts

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 08:20:13 +01:00
6ed79592f2 prune sim: make backup schedule a form, bind update button to its validity
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 08:11:46 +01:00
4c75ee3471 prune sim: do not use unnecessary variable, declare inline 2020-11-11 08:11:16 +01:00
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 08:11:16 +01:00
6f997da8cd prune sim: set min-height for calendar day cells 2020-11-11 08:10:43 +01:00
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 08:10:43 +01:00
03e40aa4ee ui: datastore add: set default schedule
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 07:49:01 +01:00
be1d6cbcc6 ui: shorten automatic ID length a bit
Without hyphens, we had 20 hex digits, so ~80 bit which is probably overkill.
Use 12 (13 with hyphen), this is still 48 bit.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 07:40:23 +01:00
ffaca016ad ui: datastore summary: drop removed bytes display
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 07:27:21 +01:00
71f82a98d7 d/control: add missing dependencies for non ISO installations
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 07:26:05 +01:00
deef6fbc0c cargo: extend authors list
this was mostly selected by executing

and adding those with more than a handful of commits, so no hard
feelings here, this was definitely also a team effort to get stuff
polished!

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 14:47:48 +01:00
4ac529141f bump version to 1.0.0-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 14:47:48 +01:00
a108a2e967 ui: drop debug beta code
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 14:47:48 +01:00
ff7a29104c postinst: fix version check for remote.cfg cleanup
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-10 14:35:37 +01:00
240b2ffb9b ui: improve activeTab selection from fragment and state
handle invalid fragments for tabs, as well as not-yet-rendered tab panels

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-10 14:21:54 +01:00
a86e703661 tools::runtime: pin_mut instead of unsafe block
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-10 14:18:45 +01:00
1ecf4e6d20 async_io: require Unpin for EitherStream and HyperAccept
We use it with Unpin types and this way we get rid of a lot
of `unsafe` blocks.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-10 14:18:45 +01:00
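The payoff of the Unpin bound in sketch form: Pin::new is only available for Unpin targets, so previously unsafe pin constructions become safe calls:

    use std::pin::Pin;

    /// With `S: Unpin`, pinning a plain mutable reference needs no unsafe;
    /// without the bound this would require `Pin::new_unchecked`.
    fn pinned<S: Unpin>(stream: &mut S) -> Pin<&mut S> {
        Pin::new(stream)
    }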
9f9a661b1a verify: cleanup logging order/messages
otherwise we end up printing warnings before the start message..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-10 14:11:36 +01:00
1b1cab8321 verify: log/warn on invalid owner
in order to trigger a notification/make the problem more visible than
just in syslog.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-10 14:11:36 +01:00
f4f9a503de ui: add missing panel help buttons
add missing help buttons (question mark, top right) so that we are
consistent and each panel has it.

I chose the IMHO most fitting sections.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-11-10 13:53:21 +01:00
a4971d5f90 docs: add ref for sysadmin host admin section
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-11-10 13:53:21 +01:00
477ebe6b78 docs: user management: avoid some inconsistencies
The space between '--' and 'path' in two of the commands was wrong. The other
changes make the names of the store and token consistent with the rest of the
section and should improve readability.

Also add the Datastore.Verify permission in the output of the command:
proxmox-backup-manager user permissions john@pbs --path /datastore/store1
A DatastoreAdmin now has this permission and that's what john@pbs is in the
example.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-10 13:47:52 +01:00
38efbfc148 ui: app: fix fixme
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 13:38:30 +01:00
10052ea644 remote.cfg: rename userid to 'auth-id'
and fixup config file on upgrades accordingly

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-10 13:25:24 +01:00
b57619ea29 ui: datastores sync: future proof and move local store column in front
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 13:22:54 +01:00
445b0043b2 ui: show (local)datastore column only in global sync/verifyview
it's rather hacky, but our cbind mixin does not support columns (yet).
if it does sometime in the future, we could use that instead

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-10 13:14:47 +01:00
8b62cbe752 docs: update package repositories
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 13:14:04 +01:00
81f99362d9 docs: installation: don't mention ext3 as an option anymore
Support for ext3 was removed by commit 0abf0d3683b74421eca24ba61d1d4e100d35211a
in pve-installer.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-10 13:13:44 +01:00
414c23facb fix #3060: improve get_owner error handling
log invalid owners to the system log, and continue with the next group just as
if permission checks fail for the following operations:
- verify store with limited permissions
- list store groups
- list store snapshots

all other call sites either handle it correctly already (sync/pull), or
operate on a single group/snapshot and can bubble up the error.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-10 12:58:44 +01:00
c5608cf86c encryption: add best practice for storing master key
Further clarify that the paperkey should be a last resort
recovery option, after a password manager and usb drive.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-11-10 12:51:30 +01:00
5d08c750ef HttpsConnector: include destination on connect errors
for more useful log output
old:
Nov 10 11:50:51 foo pvestatd[3378]: proxmox-backup-client failed: Error: error trying to connect: tcp connect error: No route to host (os error 113)
new:
Nov 10 11:55:21 foo pvestatd[3378]: proxmox-backup-client failed: Error: error trying to connect: error connecting to https://thebackuphost:8007/ - tcp connect error: No route to host (os error 113)

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-10 11:58:19 +01:00
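A hedged sketch of the wrapping (assuming anyhow-style errors; the helper name is made up): attach the destination to the low-level connect error so it shows up in the log line:

    use anyhow::{format_err, Error};

    /// Wrap a connect error with the destination it was aimed at.
    fn with_destination<T>(result: Result<T, std::io::Error>, target: &str) -> Result<T, Error> {
        result.map_err(|err| format_err!("error connecting to {} - {}", target, err))
    }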
f3fde36beb client: error context when building HttpClient
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-10 11:58:19 +01:00
0c83e8891e ui: fix task description 2020-11-10 11:53:39 +01:00
133de2dd1f ui: add/fix help buttons
added a few more help buttons where appropriate:

* GC and Prune schedule windows
* Create Directory window
* API Tokens, link directly to token section
* verify jobs window

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-11-10 11:51:03 +01:00
c8219747f0 ui: add all online help refs found in docs
recommit the onlinehelp after the scanrefs script has been adapted and
the docs are up to date

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-11-10 11:50:56 +01:00
0247f794e9 docs: add network management reference
needed in order for the help button in the network edit window to work.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-11-10 11:50:17 +01:00
710f787c41 docs: add maintenance chapter prefix to verification ref
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-11-10 11:50:12 +01:00
d8916a326c scanrefs: only scan docs, not JS files
This is a temporary hack until we find a sensible way to scan the
proxmox-widget-toolkit JS files as well.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-11-10 11:50:09 +01:00
924d6d4072 prune sim: show count for rule
and rename 'all zero' to 'keep-all' to make it consistent with the prune dialog
in PBS.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-10 11:47:37 +01:00
984ac33d5c ui: subscription: usage chart: render date as ISO 8601
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 11:46:22 +01:00
0a4dfd63c9 ui: usage graph: show axis and set maximum
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 11:46:05 +01:00
a6e746f652 ui: datastore list summary: add more padding between elements
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 11:46:05 +01:00
30f73fa2e0 fix bug #3060: continue sync if we cannot acquire the group lock 2020-11-10 11:29:36 +01:00
9f0ee346e9 ui: Datastores Summary: change layout and chart
changes the layout to look a little bit more like the statistics panel
we have for ceph in pve, while changing to the UsageChart and adding
some more datastore infos (from last garbage collect)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-10 10:43:07 +01:00
48d6dede4a ui: refactor calculate_dedup_factor
so that we can reuse this

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-10 10:43:07 +01:00
8432e4a655 ui: add panel/UsageChart
heavily inspired by pveRunningChart, but without the dynamic adding
of data, and specific to the usage of datastores

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-10 10:43:07 +01:00
b35eb0a175 api2/status/datastore-usage: add gc-status and history start and delta
so that we can show more info and calculate the points in time for the
history

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-10 10:43:07 +01:00
c3a1b34ed3 ui: subscription: add more button icons, small UX fix
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 10:42:45 +01:00
bb26843cd6 ui/docs: add get help onlineHelp
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 10:35:35 +01:00
ee0ab12dd0 ui: move disks/directory stuff to tab panel
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 10:15:44 +01:00
d5f7755467 docs: online help scanner: also include help tool links
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 10:15:08 +01:00
5c64e83b1e ui: datastore: set onlineHelp for changing group owner
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 09:53:05 +01:00
0f6f99b4ec ui: prune: set onlineHelp
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 09:51:30 +01:00
f668862ae0 ui: prune: add clear-trigger to keep fields
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 09:51:20 +01:00
c960d2b501 bail if mount point already exists for directories
similar to what we do for zfs. By bailing before partitioning, the disk is
still considered unused after a failed attempt.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-10 09:25:58 +01:00
f5d9f2534b mount zpools created via API under /mnt/datastore
as we do for other file systems

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-10 09:25:58 +01:00
9a3ddcea33 ui: utils: eslint format fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 09:24:35 +01:00
030464d3a9 docs: s/DataStore/Datastore/
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 09:24:13 +01:00
3f30b32c2e ui: prune: show count for rule
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-10 09:24:13 +01:00
5eafe6aabc ui: prune: show which rule keeps backup
and adjust layout so the description fits.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-10 09:24:13 +01:00
2c9f274efa ui: add help tool to user and remote config 2020-11-10 09:23:22 +01:00
31112c79ac ui: add help tool to datastore panel 2020-11-10 09:15:12 +01:00
d89f91b538 ui: acl editor: disallow path editing for datastore permission views
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 08:19:17 +01:00
a6310ec294 ui: fix widget height in dashboard 2020-11-10 08:12:35 +01:00
98d9323534 ui: add link to www.proxmox.com for subscription plans 2020-11-10 08:07:49 +01:00
09f1f28800 ui: ACL view: fix path filtering
and add some comments about actual behavior of those config
properties..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 07:33:20 +01:00
e1da9ca4bb ui: datastore dashboard: use gauge for usage, rework layout a bit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 19:26:48 +01:00
625c7bfc0b ui: task summary: enable grid mouse track over
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 19:25:43 +01:00
d9503950e3 ui: task summary: add pointer cursor if clickable
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 18:09:05 +01:00
376e927980 ui: datastore summary: increase usage graph height
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 17:55:59 +01:00
5204cbcf0f ui: datastore summary: add line chart icon to full-estimation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 17:48:53 +01:00
e373dcc564 ui: datastore/content: improve action button layout
Fix font-size to 14px to improve font-awesome rendering, add some
slight margin between the buttons so that they are not glued
together, add a slight text-shadow on mouse over.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 17:45:08 +01:00
137a6ebcad apt: allow changelog retrieval from enterprise repo
If a package is or will be installed from the enterprise repo, retrieve
the changelog from there as well (securely via HTTPS and authenticated
with the subscription key).

Extends the get_string method to take additional headers, in this case
used for 'Authorization'. Hyper does not have built-in basic auth
support AFAICT but it's simple enough to just build the header manually.

Take the opportunity and also set the User-Agent sensibly for GET
requests, just like for POST.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-09 17:28:58 +01:00
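Building the header value by hand is indeed simple; a sketch (assuming the base64 crate):

    /// Manual HTTP basic-auth value for an 'Authorization' header,
    /// e.g. "Basic dXNlcjpwYXNz" for user/pass.
    fn basic_auth_value(user: &str, password: &str) -> String {
        format!("Basic {}", base64::encode(format!("{}:{}", user, password)))
    }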
ed1329ecf7 ui: make Datastore clickable again
by showing the previously added pbsDataStores panel

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
2371c1e371 ui: add Panels necessary for Datastores Overview
a panel for a single datastore that gets updated from an external caller
shows the usage, estimated full date, history and task summary grid

a panel that dynamically generates the panel above for each datastore

and a tabpanel that includes the panel above, as well as a global
syncview, verifyview and aclview

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
63c07d950c ui: TaskSummary: handle less defined parameters of tasks
this makes it a little easier to provide good data, without
hardcoding all types in the source object

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
a3cdb19e33 ui: TaskSummary: add subPanelModal and datastore parameters
in preparation for the per-datastore grid

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
4623cd6497 ui: TaskSummary: move state/types/titles out of the controller
it seems that under certain circumstances, extjs does not initialize
or remove the content from objects in controllers

move them to the view, where they always exist

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
ab81bb13ad ui: make Sync/VerifyView and Edit usable without datastore
we want to use this panel again for a 'global' overview, without
any datastore preselected, so we have to handle that, and
add a datastore selector in the edit window

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
616650a198 ui: Utils: add parse_datastore_worker_id
to parse the datastore out of a worker_id
for this we need some regexes that are the same as in the backend

for now we only parse out the datastore, but we can extend this
in the future to parse relevant info (e.g. remote for syncs,
id/type for backups)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
78763d21b1 ui: refactor render_size_usage to Utils
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
f2d6324958 ui: refactor render_estimate
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
6e880f19cc api2/node/tasks: add check_job_store and use it
to easily check the store of a worker_id
this fixes the issue that one could not filter by type 'syncjob' and
datastore simultaneously

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
64623f329e ui: recommit onlinehelp
now that the last commit fixed the title generation

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 16:36:00 +01:00
407f3fb994 scanrefs: remove term prefix from title
It can happen that a title is defined as a term in the following way:
:term:`My title`

This patch checks for it and strips the leading part and the trailing backtick.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-11-09 16:35:29 +01:00
0eb0c4bd63 proxy: fix log message for auth log rotation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 16:34:03 +01:00
82422c115a ui: admin/summary: add versions button/window
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 16:33:22 +01:00
ed2beb334d api: node/apt: add versions call
very basic, based on the API/concepts of the PVE one.

Still missing: adding an extra_info string option to APTUpdateInfo
and passing along the running kernel/PBS version there.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 16:31:56 +01:00
f3b4820d06 www: show more ACLs in datastore panel
since just the ACLs defined on the exact datastore path don't give
anywhere near a complete picture of who has access to it.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-09 15:19:15 +01:00
8f7cd96df4 installation: minor wording fix
very minor but worthwhile edits

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-11-09 15:18:44 +01:00
4accbc5853 backup-client: encryption: discuss paperkey command
adds a paragraph to the encryption section about
encoding the master key into a qr code for printing

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-11-09 15:18:44 +01:00
2791318ff1 fix bug #3121: forbid removing used remotes 2020-11-09 12:48:29 +01:00
47208b4147 pxar: log when skipping mount points
Clippy complains about the number of parameters we have for
create_archive and it really does need to be made somewhat
less awkward and more usable. For now we just log to stderr
as we previously did. Added todo-comments for this.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-09 12:43:16 +01:00
b783591fb5 ui: datastore content: ensure action column is wide enough
with the "change owner" action added we now need more than the
default of 100 px, so increase to 120 px for now.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 12:31:14 +01:00
9dd6175808 ui: token selector: use same layout as auth id selector
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 12:24:54 +01:00
5e8b97178e ui: auth/token selector: tell ExtJS we injected data into the store
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 12:21:02 +01:00
38260cddf5 tools apt: include package name in filter data
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 08:55:08 +01:00
189 changed files with 11924 additions and 1427 deletions

Cargo.toml

@@ -1,7 +1,16 @@
[package]
name = "proxmox-backup"
version = "0.9.7"
authors = ["Dietmar Maurer <dietmar@proxmox.com>"]
version = "1.0.6"
authors = [
"Dietmar Maurer <dietmar@proxmox.com>",
"Dominik Csapak <d.csapak@proxmox.com>",
"Christian Ebner <c.ebner@proxmox.com>",
"Fabian Grünbichler <f.gruenbichler@proxmox.com>",
"Stefan Reiter <s.reiter@proxmox.com>",
"Thomas Lamprecht <t.lamprecht@proxmox.com>",
"Wolfgang Bumiller <w.bumiller@proxmox.com>",
"Proxmox Support Team <support@proxmox.com>",
]
edition = "2018"
license = "AGPL-3"
description = "Proxmox Backup"
@@ -37,8 +46,9 @@ pam = "0.7"
pam-sys = "0.5"
percent-encoding = "2.1"
pin-utils = "0.1.0"
pin-project = "0.4"
pathpatterns = "0.1.2"
proxmox = { version = "0.7.0", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox = { version = "0.8.1", features = [ "sortable-macro", "api-macro", "websocket" ] }
#proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox-fuse = "0.1.0"

README.rst

@@ -53,3 +53,83 @@ Setup:
Note: 2. may be skipped if you already added the PVE or PBS package repository

You are now able to build using the Makefile or cargo itself.

Design Notes
============

Here are some random thoughts about the software design (unless I find a
better place).

Large chunk sizes
-----------------

It is important to note that large chunk sizes are crucial for performance.
We have a multi-user system, where different people can do different
operations on a datastore at the same time, and most operations involve
reading a series of chunks.

So what is the maximal theoretical speed we can get when reading a series of
chunks? Reading a chunk sequence needs the following steps:

- seek to the first chunk's start location
- read the chunk data
- seek to the next chunk's start location
- read the chunk data
- ...

Let's use the following disk performance metrics:

:AST: Average Seek Time (seconds)
:MRS: Maximum sequential Read Speed (bytes/second)
:ACS: Average Chunk Size (bytes)

The maximum performance you can get is::

  MAX(ACS) = ACS / (AST + ACS/MRS)

Please note that chunk data is likely to be arranged sequentially on disk,
but this is sort of a best-case assumption.

For a typical rotational disk, we assume the following values::

  AST: 10ms
  MRS: 170MB/s

  MAX(4MB)  = 115.37 MB/s
  MAX(1MB)  =  61.85 MB/s
  MAX(64KB) =   6.02 MB/s
  MAX(4KB)  =   0.39 MB/s
  MAX(1KB)  =   0.10 MB/s

Modern SSDs are much faster; let's assume the following::

  max IOPS: 20000 => AST = 0.00005
  MRS: 500MB/s

  MAX(4MB)  = 474 MB/s
  MAX(1MB)  = 465 MB/s
  MAX(64KB) = 354 MB/s
  MAX(4KB)  =  67 MB/s
  MAX(1KB)  =  18 MB/s

Also, the average chunk size directly relates to the number of chunks
produced by a backup::

  CHUNK_COUNT = BACKUP_SIZE / ACS

Here are some statistics from my developer workstation::

  Disk Usage:   65 GB
  Directories:  58971
  Files:        726314
  Files < 64KB: 617541

As you can see, there are really many small files. If we did file-level
deduplication, i.e. generated one chunk per file, we would end up with more
than 700000 chunks.

Instead, our current algorithm only produces large chunks with an average
chunk size of 4MB. With the above data, this results in about 15000 chunks
(a factor of 50 fewer).
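As a quick way to play with the formula, here is a small Rust sketch that evaluates MAX(ACS) for a given seek time and sequential read speed (exact figures depend on the unit convention, e.g. MB vs. MiB, so they may deviate slightly from the rounded numbers above):

    /// Throughput model: MAX(ACS) = ACS / (AST + ACS / MRS)
    fn max_read_speed(acs: f64, ast: f64, mrs: f64) -> f64 {
        acs / (ast + acs / mrs)
    }

    fn main() {
        let mb = 1e6; // bytes per MB
        let (ast, mrs) = (0.010, 170.0 * mb); // rotational disk: 10ms, 170MB/s
        for &acs in &[4.0 * mb, 1.0 * mb, 64e3, 4e3, 1e3] {
            println!("MAX({:>9.0} B) = {:6.2} MB/s",
                     acs, max_read_speed(acs, ast, mrs) / mb);
        }
    }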

debian/changelog

@ -1,3 +1,119 @@
rust-proxmox-backup (1.0.6-1) unstable; urgency=medium
* stricter handling of file-descriptors, fixes some cases where some could
leak
* ui: fix various usages of the findRecord emthod, ensuring it matches exact
* garbage collection: improve task log format
* verification: improve progress log, make it similar to what's logged on
pull (sync)
* datastore: move manifest locking to /run. This avoids issues with
filesystems which cannot natively handle removing in-use files ("delete on
last close"), and create a virtual, internal, replacement file to work
around that. This is done, for example, by NFS or CIFS (samba).
-- Proxmox Support Team <support@proxmox.com> Fri, 11 Dec 2020 12:51:33 +0100
rust-proxmox-backup (1.0.5-1) unstable; urgency=medium
* client: restore: print meta information exclusively to standard error
-- Proxmox Support Team <support@proxmox.com> Wed, 25 Nov 2020 15:29:58 +0100
rust-proxmox-backup (1.0.4-1) unstable; urgency=medium
* fingerprint: add bytes() accessor
* ui: fix broken gettext use
* cli: move more commands into "snapshot" sub-command
-- Proxmox Support Team <support@proxmox.com> Wed, 25 Nov 2020 06:37:41 +0100
rust-proxmox-backup (1.0.3-1) unstable; urgency=medium
* client: inform user when automatically using the default encryption key
* ui: UserView: render name as 'Firstname Lastname'
* proxmox-backup-manager: add versions command
* pxar: fix anchored exclusion at archive root
* pxar: include .pxarexclude files in the archive
* client: expose all-file-systems option
* api: make expensive parts of datastore status opt-in
* api: include datastore ID in invalid owner errors
* garbage collection: treat .bad files like regular chunks to ensure they
are removed if not referenced anymore
* client: fix issues with encoded UPID strings
* encryption: add fingerprint to key config
* client: add 'key show' command
* fix #3139: add key fingerprint to backup snapshot manifest and check it
when loading with a key
* ui: add snapshot/file fingerprint tooltip
-- Proxmox Support Team <support@proxmox.com> Tue, 24 Nov 2020 08:55:47 +0100
rust-proxmox-backup (1.0.1-1) unstable; urgency=medium
* ui: datastore summary: drop 'removed bytes' display
* ui: datastore add: set default schedule
* prune sim: make backup schedule a form, bind update button to its validity
* daemon: add workaround for race in reloading and systemd 'ready' notification
-- Proxmox Support Team <support@proxmox.com> Wed, 11 Nov 2020 10:18:12 +0100
rust-proxmox-backup (1.0.0-1) unstable; urgency=medium
* fix #3121: forbid removing used remotes
* docs: backup-client: encryption: discuss paperkey command
* pxar: log when skipping mount points
* ui: show also parent ACLs which affect a datastore in its panel
* api: node/apt: add versions call
* ui: make Datastore a selectable panel again. Show a datastore summary
list, and provide unfiltered access to all sync and verify jobs.
* ui: add help tool-button to various panels
* ui: set various onlineHelp buttons
* zfs: mount new zpools created via API under /mnt/datastore/<id>
* ui: move disks/directory views to its own tab panel
* fix #3060: continue sync if we cannot acquire the group lock
* HttpsConnector: include destination on connect errors
* fix #3060: improve get_owner error handling
* remote.cfg: rename userid to 'auth-id'
* verify: log/warn on invalid owner
-- Proxmox Support Team <support@proxmox.com> Tue, 10 Nov 2020 14:36:13 +0100
rust-proxmox-backup (0.9.7-1) unstable; urgency=medium
* ui: add remote store selector

debian/control

@ -33,11 +33,12 @@ Build-Depends: debhelper (>= 11),
librust-pam-sys-0.5+default-dev,
librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
librust-percent-encoding-2+default-dev (>= 2.1-~~),
librust-pin-project-0.4+default-dev,
librust-pin-utils-0.1+default-dev,
librust-proxmox-0.7+api-macro-dev,
librust-proxmox-0.7+default-dev,
librust-proxmox-0.7+sortable-macro-dev,
librust-proxmox-0.7+websocket-dev,
librust-proxmox-0.8+api-macro-dev (>= 0.8.1-~~),
librust-proxmox-0.8+default-dev (>= 0.8.1-~~),
librust-proxmox-0.8+sortable-macro-dev (>= 0.8.1-~~),
librust-proxmox-0.8+websocket-dev (>= 0.8.1-~~),
librust-proxmox-fuse-0.1+default-dev,
librust-pxar-0.6+default-dev (>= 0.6.1-~~),
librust-pxar-0.6+futures-io-dev (>= 0.6.1-~~),
@ -78,7 +79,7 @@ Build-Depends: debhelper (>= 11),
uuid-dev,
debhelper (>= 12~),
bash-completion,
pve-eslint,
pve-eslint (>= 7.12.1-1),
python3-docutils,
python3-pygments,
rsync,
@ -104,15 +105,20 @@ Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1),
libzstd1 (>= 1.3.8),
lvm2,
openssh-server,
pbs-i18n,
postfix | mail-transport-agent,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.3-6),
pve-xtermjs (>= 4.7.0-1),
smartmontools,
mtx,
mt-st,
${misc:Depends},
${shlibs:Depends},
Recommends: zfsutils-linux,
ifupdown2,
Description: Proxmox Backup Server daemon with tools and GUI
This package contains the Proxmox Backup Server daemons and related
tools. This includes a web-based graphical user interface.

debian/control.in

@ -4,15 +4,20 @@ Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1),
libzstd1 (>= 1.3.8),
lvm2,
openssh-server,
pbs-i18n,
postfix | mail-transport-agent,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.3-6),
pve-xtermjs (>= 4.7.0-1),
smartmontools,
mtx,
mt-st,
${misc:Depends},
${shlibs:Depends},
Recommends: zfsutils-linux,
ifupdown2,
Description: Proxmox Backup Server daemon with tools and GUI
This package contains the Proxmox Backup Server daemons and related
tools. This includes a web-based graphical user interface.


@ -14,7 +14,7 @@ section = "admin"
build_depends = [
"debhelper (>= 12~)",
"bash-completion",
"pve-eslint",
"pve-eslint (>= 7.12.1-1)",
"python3-docutils",
"python3-pygments",
"rsync",


@ -1,2 +1,2 @@
proxmox-backup-server: package-installs-apt-sources etc/apt/sources.list.d/pbstest-beta.list
proxmox-backup-server: package-installs-apt-sources etc/apt/sources.list.d/pbs-enterprise.list
proxmox-backup-server: systemd-service-file-refers-to-unusual-wantedby-target lib/systemd/system/proxmox-backup-banner.service getty.target

debian/postinst

@ -28,6 +28,15 @@ case "$1" in
if dpkg --compare-versions "$2" 'le' '0.9.5-1'; then
chown --quiet backup:backup /var/log/proxmox-backup/api/auth.log || true
fi
if dpkg --compare-versions "$2" 'le' '0.9.7-1'; then
if [ -e /etc/proxmox-backup/remote.cfg ]; then
echo "NOTE: Switching over remote.cfg to new field names.."
flock -w 30 /etc/proxmox-backup/.remote.lck \
sed -i \
-e 's/^\s\+userid /\tauth-id /g' \
/etc/proxmox-backup/remote.cfg || true
fi
fi
fi
# FIXME: Remove in future version once we're sure no broken entries remain in anyone's files
if grep -q -e ':termproxy::[^@]\+: ' /var/log/proxmox-backup/tasks/active; then


@ -3,7 +3,7 @@ etc/proxmox-backup.service /lib/systemd/system/
etc/proxmox-backup-banner.service /lib/systemd/system/
etc/proxmox-backup-daily-update.service /lib/systemd/system/
etc/proxmox-backup-daily-update.timer /lib/systemd/system/
etc/pbstest-beta.list /etc/apt/sources.list.d/
etc/pbs-enterprise.list /etc/apt/sources.list.d/
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-api
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-proxy
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-banner
@ -11,8 +11,7 @@ usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-daily-update
usr/sbin/proxmox-backup-manager
usr/share/javascript/proxmox-backup/index.hbs
usr/share/javascript/proxmox-backup/css/ext6-pbs.css
usr/share/javascript/proxmox-backup/images/logo-128.png
usr/share/javascript/proxmox-backup/images/proxmox_logo.png
usr/share/javascript/proxmox-backup/images
usr/share/javascript/proxmox-backup/js/proxmox-backup-gui.js
usr/share/man/man1/proxmox-backup-manager.1
usr/share/man/man1/proxmox-backup-proxy.1


@ -0,0 +1 @@
rm_conffile /etc/apt/sources.list.d/pbstest-beta.list 1.0.0~ proxmox-backup-server


@ -17,6 +17,7 @@ MANUAL_PAGES := \
PRUNE_SIMULATOR_FILES := \
prune-simulator/index.html \
prune-simulator/documentation.html \
prune-simulator/clear-trigger.png \
prune-simulator/prune-simulator.js
# Sphinx documentation setup


@ -44,7 +44,7 @@ def scan_extjs_files(wwwdir="../www"): # a bit rough i know, but we can optimize
js_files.append(os.path.join(root, filename))
for js_file in js_files:
fd = open(js_file).read()
allmatch = re.findall("onlineHelp:\s*[\'\"](.*?)[\'\"]", fd, re.M)
allmatch = re.findall("(?:onlineHelp:|get_help_tool\s*\()\s*[\'\"](.*?)[\'\"]", fd, re.M)
for match in allmatch:
anchor = match
anchor = re.sub('_', '-', anchor) # normalize labels
@ -73,7 +73,9 @@ class ReflabelMapper(Builder):
'link': '/docs/index.html',
'title': 'Proxmox Backup Server Documentation Index',
}
self.env.used_anchors = scan_extjs_files()
# Disabled until we find a sensible way to scan proxmox-widget-toolkit
# as well
#self.env.used_anchors = scan_extjs_files()
if not os.path.isdir(self.outdir):
os.mkdir(self.outdir)
@ -93,6 +95,9 @@ class ReflabelMapper(Builder):
logger.info('traversing section {}'.format(title.astext()))
ref_name = getattr(title, 'rawsource', title.astext())
if (ref_name[:7] == ':term:`'):
ref_name = ref_name[7:-1]
self.env.online_help[labelid] = {'link': '', 'title': ''}
self.env.online_help[labelid]['link'] = "/docs/" + os.path.basename(filename_html) + "#{}".format(labelid)
self.env.online_help[labelid]['title'] = ref_name
@ -112,15 +117,18 @@ class ReflabelMapper(Builder):
def validate_anchors(self):
#pprint(self.env.online_help)
to_remove = []
for anchor in self.env.used_anchors:
if anchor not in self.env.online_help:
logger.info("[-] anchor {} is missing from onlinehelp!".format(anchor))
for anchor in self.env.online_help:
if anchor not in self.env.used_anchors and anchor != 'pbs_documentation_index':
logger.info("[*] anchor {} not used! deleting...".format(anchor))
to_remove.append(anchor)
for anchor in to_remove:
self.env.online_help.pop(anchor, None)
# Disabled until we find a sensible way to scan proxmox-widget-toolkit
# as well
#for anchor in self.env.used_anchors:
# if anchor not in self.env.online_help:
# logger.info("[-] anchor {} is missing from onlinehelp!".format(anchor))
#for anchor in self.env.online_help:
# if anchor not in self.env.used_anchors and anchor != 'pbs_documentation_index':
# logger.info("[*] anchor {} not used! deleting...".format(anchor))
# to_remove.append(anchor)
#for anchor in to_remove:
# self.env.online_help.pop(anchor, None)
return
def finish(self):


@ -365,9 +365,22 @@ To set up a master key:
backed up. It can happen, for example, that you back up an entire system, using
a key on that system. If the system then becomes inaccessible for any reason
and needs to be restored, this will not be possible as the encryption key will be
lost along with the broken system. In preparation for the worst case scenario,
you should consider keeping a paper copy of this key locked away in
a safe place.
lost along with the broken system.
It is recommended that you keep your master key safe, but easily accessible,
to allow for quick disaster recovery. For this reason, the best place to store it
is in your password manager, where it is immediately recoverable. As a backup to
this, you should also save the key to a USB drive and store that in a secure
place. This way, it is detached from any system, but is still easy to recover
from, in case of emergency. Finally, in preparation for the worst case scenario,
you should also consider keeping a paper copy of your master key locked away in
a safe place. The ``paperkey`` subcommand can be used to create a QR encoded
version of your master key. The following command sends the output of the
``paperkey`` command to a text file, for easy printing.
.. code-block:: console
proxmox-backup-client key paperkey --output-format text > qrkey.txt
Restoring Data
@ -379,11 +392,11 @@ periodic recovery tests to ensure that you can access the data in
case of problems.
First, you need to find the snapshot which you want to restore. The snapshot
command provides a list of all the snapshots on the server:
list command provides a list of all the snapshots on the server:
.. code-block:: console
# proxmox-backup-client snapshots
# proxmox-backup-client snapshot list
┌────────────────────────────────┬─────────────┬────────────────────────────────────┐
│ snapshot │ size │ files │
╞════════════════════════════════╪═════════════╪════════════════════════════════════╡
@ -568,7 +581,7 @@ command:
.. code-block:: console
# proxmox-backup-client forget <snapshot>
# proxmox-backup-client snapshot forget <snapshot>
.. caution:: This command removes all archives in this backup


@ -27,7 +27,7 @@ How long will my Proxmox Backup Server version be supported?
+-----------------------+--------------------+---------------+------------+--------------------+
|Proxmox Backup Version | Debian Version | First Release | Debian EOL | Proxmox Backup EOL |
+=======================+====================+===============+============+====================+
|Proxmox Backup 1.x | Debian 10 (Buster) | tba | tba | tba |
|Proxmox Backup 1.x | Debian 10 (Buster) | 2020-11 | tba | tba |
+-----------------------+--------------------+---------------+------------+--------------------+


@ -132,5 +132,5 @@ top panel to view:
collection <garbage-collection>` operations, and run garbage collection
manually
* **Sync Jobs**: Create, manage and run :ref:`syncjobs` from remote servers
* **Verify Jobs**: Create, manage and run :ref:`verification` jobs on the
* **Verify Jobs**: Create, manage and run :ref:`maintenance_verification` jobs on the
datastore


@ -37,16 +37,15 @@ Download the ISO from |DOWNLOADS|.
It includes the following:
* The `Proxmox Backup`_ server installer, which partitions the local
disk(s) with ext4, ext3, xfs or ZFS, and installs the operating
system
disk(s) with ext4, xfs or ZFS, and installs the operating system
* Complete operating system (Debian Linux, 64-bit)
* Our Linux kernel with ZFS support
* Proxmox Linux kernel with ZFS support
* Complete tool-set to administer backups and all necessary resources
* Web based GUI management interface
* Web based management interface
.. note:: During the installation process, the complete server
is used by default and all existing data is removed.


@ -127,8 +127,7 @@ language.
-- `The Rust Programming Language <https://doc.rust-lang.org/book/ch00-00-introduction.html>`_
.. todo:: further explain the software stack
.. _get_help:
Getting Help
------------


@ -77,6 +77,42 @@ edit the interval at which pruning takes place.
:alt: Prune and garbage collection options
Retention Settings Example
^^^^^^^^^^^^^^^^^^^^^^^^^^
The backup frequency and retention of old backups may depend on how often data
changes, and how important an older state may be, for a specific workload.
When backups act as a company's document archive, there may also be legal
requirements for how long backup snapshots must be kept.
For this example, we assume that you are doing daily backups, have a retention
period of 10 years, and that the interval between the backups you keep grows
gradually with their age.
- **keep-last:** ``3`` - even with only daily backups, an admin may want to
create an extra one just before or after a big upgrade. Setting keep-last
ensures these are kept.
- **keep-hourly:** not set - not relevant for daily backups; extra manual
backups are already covered by keep-last.
- **keep-daily:** ``13`` - together with keep-last, which covers at least one
day, this ensures that you have at least two weeks of backups.
- **keep-weekly:** ``8`` - ensures that you have at least two full months of
weekly backups.
- **keep-monthly:** ``11`` - together with the previous keep settings, this
ensures that you have at least a year of monthly backups.
- **keep-yearly:** ``9`` - this is for the long-term archive. As the previous
options already cover the current year, you would set this to nine for the
remaining years, giving you a total of at least 10 years of coverage (see the
sketch below).
We recommend that you use a higher retention period than is minimally required
by your environment; you can always reduce it if you find it is unnecessarily
high, but you cannot recreate backup snapshots from the past.
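As a rough cross-check of this example, a small hypothetical sketch that adds
up the approximate time span each keep option contributes, assuming one backup
per day:
.. code-block:: rust

fn main() {
    // Rough coverage contributed by each option; counts mirror the example.
    let days = 3.0          // keep-last: 3
        + 13.0              // keep-daily: 13
        + 8.0 * 7.0         // keep-weekly: 8
        + 11.0 * 30.0       // keep-monthly: 11
        + 9.0 * 365.0;      // keep-yearly: 9
    println!("approximate coverage: {:.1} years", days / 365.0); // ~10.1
}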
.. _maintenance_gc:
Garbage Collection
@ -93,7 +129,7 @@ GC** from the top panel. From here, you can edit the schedule at which garbage
collection runs and manually start the operation.
.. _verification:
.. _maintenance_verification:
Verification
------------


@ -1,3 +1,5 @@
.. _sysadmin_network_configuration:
Network Management
==================


@ -26,11 +26,8 @@ update``.
.. FIXME for 7.0: change security update suite to bullseye-security
In addition, you need a package repository from Proxmox to get Proxmox Backup updates.
During the Proxmox Backup beta phase, only one repository (pbstest) will be
available. Once released, an Enterprise repository for production use and a
no-subscription repository will be provided.
In addition, you need a package repository from Proxmox to get Proxmox Backup
updates.
SecureApt
~~~~~~~~~
@ -72,68 +69,63 @@ Here, the output should be:
f3f6c5a3a67baf38ad178e5ff1ee270c /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
.. comment
`Proxmox Backup`_ Enterprise Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
`Proxmox Backup`_ Enterprise Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This will be the default, stable, and recommended repository. It is available for
all `Proxmox Backup`_ subscription users. It contains the most stable packages,
and is suitable for production use. The ``pbs-enterprise`` repository is
enabled by default:
This will be the default, stable, and recommended repository. It is available for
all `Proxmox Backup`_ subscription users. It contains the most stable packages,
and is suitable for production use. The ``pbs-enterprise`` repository is
enabled by default:
.. note:: During the Proxmox Backup beta phase only one repository (pbstest)
will be available.
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list.d/pbs-enterprise.list``
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list.d/pbs-enterprise.list``
deb https://enterprise.proxmox.com/debian/pbs buster pbs-enterprise
deb https://enterprise.proxmox.com/debian/pbs buster pbs-enterprise
To never miss important security fixes, the superuser (``root@pam`` user) is
notified via email about new packages as soon as they are available. The
change-log and details of each package can be viewed in the GUI (if available).
To never miss important security fixes, the superuser (``root@pam`` user) is
notified via email about new packages as soon as they are available. The
change-log and details of each package can be viewed in the GUI (if available).
Please note that you need a valid subscription key to access this
repository. More information regarding subscription levels and pricing can be
found at https://www.proxmox.com/en/proxmox-backup/pricing.
Please note that you need a valid subscription key to access this
repository. More information regarding subscription levels and pricing can be
found at https://www.proxmox.com/en/proxmox-backup-server/pricing
.. note:: You can disable this repository by commenting out the above
line using a `#` (at the start of the line). This prevents error
messages if you do not have a subscription key. Please configure the
``pbs-no-subscription`` repository in that case.
.. note:: You can disable this repository by commenting out the above line
using a `#` (at the start of the line). This prevents error messages if you do
not have a subscription key. Please configure the ``pbs-no-subscription``
repository in that case.
`Proxmox Backup`_ No-Subscription Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
`Proxmox Backup`_ No-Subscription Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As the name suggests, you do not need a subscription key to access
this repository. It can be used for testing and non-production
use. It is not recommended to use it on production servers, because these
packages are not always heavily tested and validated.
As the name suggests, you do not need a subscription key to access
this repository. It can be used for testing and non-production
use. It is not recommended to use it on production servers, because these
packages are not always heavily tested and validated.
We recommend to configure this repository in ``/etc/apt/sources.list``.
We recommend to configure this repository in ``/etc/apt/sources.list``.
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list``
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list``
deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib
deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib
# PBS pbs-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pbs buster pbs-no-subscription
# PBS pbs-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pbs buster pbs-no-subscription
# security updates
deb http://security.debian.org/debian-security buster/updates main contrib
# security updates
deb http://security.debian.org/debian-security buster/updates main contrib
`Proxmox Backup`_ Beta Repository
`Proxmox Backup`_ Test Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
During the public beta, there is a repository called ``pbstest``. This one
contains the latest packages and is heavily used by developers to test new
features.
This repository contains the latest packages and is heavily used by developers
to test new features.
.. .. warning:: the ``pbstest`` repository should (as the name implies)
only be used to test new features or bug fixes.
@ -145,7 +137,3 @@ You can access this repository by adding the following line to
:caption: sources.list entry for ``pbstest``
deb http://download.proxmox.com/debian/pbs buster pbstest
If you installed Proxmox Backup Server from the official beta ISO, you should
have this repository already configured in
``/etc/apt/sources.list.d/pbstest-beta.list``

docs/prune-simulator/clear-trigger.png: binary image added (11 KiB)


@ -13,6 +13,7 @@
.cal-day {
vertical-align: top;
width: 150px;
height: 75px; /* this is like min-height when used in tables */
border: #939393 1px solid;
color: #454545;
}
@ -32,6 +33,9 @@
.first-of-month {
border-right: dashed black 4px;
}
.clear-trigger {
background-image: url(./clear-trigger.png);
}
</style>
<script type="text/javascript" src="extjs/ext-all.js"></script>


@ -152,7 +152,12 @@ Ext.onReady(function() {
dataIndex: 'mark',
renderer: function(value, metaData, record) {
if (record.data.mark === 'keep') {
return 'keep (' + record.data.keepName + ')';
if (record.data.keepCount) {
return 'keep (' + record.data.keepName +
': ' + record.data.keepCount + ')';
} else {
return 'keep (' + record.data.keepName + ')';
}
} else {
return value;
}
@ -213,7 +218,11 @@ Ext.onReady(function() {
if (backup.data.mark === 'remove') {
html += `<span class="strikethrough">${text}</span>`;
} else {
text += ` (${backup.data.keepName})`;
if (backup.data.keepCount) {
text += ` (${backup.data.keepName} ${backup.data.keepCount})`;
} else {
text += ` (${backup.data.keepName})`;
}
if (me.useColors) {
let bgColor = COLORS[backup.data.keepName];
let textColor = TEXT_COLORS[backup.data.keepName];
@ -256,6 +265,34 @@ Ext.onReady(function() {
},
});
Ext.define('PBS.PruneSimulatorKeepInput', {
extend: 'Ext.form.field.Number',
alias: 'widget.prunesimulatorKeepInput',
allowBlank: true,
fieldGroup: 'keep',
minValue: 1,
listeners: {
afterrender: function(field) {
this.triggers.clear.setVisible(field.value !== null);
},
change: function(field, newValue, oldValue) {
this.triggers.clear.setVisible(newValue !== null);
},
},
triggers: {
clear: {
cls: 'clear-trigger',
weight: -1,
handler: function() {
this.triggers.clear.setVisible(false);
this.setValue(null);
},
},
},
});
Ext.define('PBS.PruneSimulatorPanel', {
extend: 'Ext.panel.Panel',
alias: 'widget.prunesimulatorPanel',
@ -470,6 +507,7 @@ Ext.onReady(function() {
newlyIncludedCount++;
backup.mark = 'keep';
backup.keepName = keepName;
backup.keepCount = newlyIncludedCount;
} else {
backup.mark = 'remove';
}
@ -488,7 +526,7 @@ Ext.onReady(function() {
Number(keepParams['keep-yearly']) === 0) {
backups.forEach(function(backup) {
backup.mark = 'keep';
backup.keepName = 'all zero';
backup.keepName = 'keep-all';
});
return;
@ -550,58 +588,37 @@ Ext.onReady(function() {
keepItems: [
{
xtype: 'numberfield',
xtype: 'prunesimulatorKeepInput',
name: 'keep-last',
allowBlank: true,
fieldLabel: 'keep-last',
minValue: 0,
value: 4,
fieldGroup: 'keep',
},
{
xtype: 'numberfield',
xtype: 'prunesimulatorKeepInput',
name: 'keep-hourly',
allowBlank: true,
fieldLabel: 'keep-hourly',
minValue: 0,
value: 0,
fieldGroup: 'keep',
},
{
xtype: 'numberfield',
xtype: 'prunesimulatorKeepInput',
name: 'keep-daily',
allowBlank: true,
fieldLabel: 'keep-daily',
minValue: 0,
value: 5,
fieldGroup: 'keep',
},
{
xtype: 'numberfield',
xtype: 'prunesimulatorKeepInput',
name: 'keep-weekly',
allowBlank: true,
fieldLabel: 'keep-weekly',
minValue: 0,
value: 2,
fieldGroup: 'keep',
},
{
xtype: 'numberfield',
xtype: 'prunesimulatorKeepInput',
name: 'keep-monthly',
allowBlank: true,
fieldLabel: 'keep-monthly',
minValue: 0,
value: 0,
fieldGroup: 'keep',
},
{
xtype: 'numberfield',
xtype: 'prunesimulatorKeepInput',
name: 'keep-yearly',
allowBlank: true,
fieldLabel: 'keep-yearly',
minValue: 0,
value: 0,
fieldGroup: 'keep',
},
],
@ -613,42 +630,6 @@ Ext.onReady(function() {
sorters: { property: 'backuptime', direction: 'DESC' },
});
let scheduleItems = [
{
xtype: 'prunesimulatorDayOfWeekSelector',
name: 'schedule-weekdays',
fieldLabel: 'Day of week',
value: ['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun'],
allowBlank: false,
multiSelect: true,
padding: '0 0 0 10',
},
{
xtype: 'prunesimulatorCalendarEvent',
name: 'schedule-time',
allowBlank: false,
value: '0/6:00',
fieldLabel: 'Backup schedule',
padding: '0 0 0 10',
},
{
xtype: 'numberfield',
name: 'numberOfWeeks',
allowBlank: false,
fieldLabel: 'Number of weeks',
minValue: 1,
value: 15,
maxValue: 260, // five years
padding: '0 0 0 10',
},
{
xtype: 'button',
name: 'schedule-button',
text: 'Update Schedule',
handler: 'reloadFull',
},
];
me.items = [
{
xtype: 'panel',
@ -684,6 +665,7 @@ Ext.onReady(function() {
},
{ xtype: "panel", width: 1, border: 1 },
{
xtype: 'form',
layout: 'anchor',
flex: 1,
border: false,
@ -692,7 +674,42 @@ Ext.onReady(function() {
labelWidth: 120,
},
bodyPadding: 10,
items: scheduleItems,
items: [
{
xtype: 'prunesimulatorDayOfWeekSelector',
name: 'schedule-weekdays',
fieldLabel: 'Day of week',
value: ['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun'],
allowBlank: false,
multiSelect: true,
padding: '0 0 0 10',
},
{
xtype: 'prunesimulatorCalendarEvent',
name: 'schedule-time',
allowBlank: false,
value: '0/6:00',
fieldLabel: 'Backup schedule',
padding: '0 0 0 10',
},
{
xtype: 'numberfield',
name: 'numberOfWeeks',
allowBlank: false,
fieldLabel: 'Number of weeks',
minValue: 1,
value: 15,
maxValue: 260, // five years
padding: '0 0 0 10',
},
{
xtype: 'button',
name: 'schedule-button',
text: 'Update Schedule',
formBind: true,
handler: 'reloadFull',
},
],
},
],
},


@ -1,6 +1,8 @@
Storage
=======
.. _storage_disk_management:
Disk Management
---------------
@ -57,7 +59,7 @@ create a datastore at the location ``/mnt/datastore/store1``:
You can also create a ``zpool`` with various raid levels from **Administration
-> Disks -> Zpool** in the web interface, or by using ``zpool create``. The command
below creates a mirrored ``zpool`` using two disks (``sdb`` & ``sdc``) and
mounts it on the root directory (default):
mounts it under ``/mnt/datastore/zpool1``:
.. code-block:: console
@ -85,7 +87,7 @@ display S.M.A.R.T. attributes from the web interface or by using the command:
.. _datastore_intro:
:term:`DataStore`
:term:`Datastore`
-----------------
A datastore refers to a location at which backups are stored. The current
@ -121,6 +123,8 @@ number of backups to keep in that store. :ref:`backup-pruning` and
periodically based on a configured schedule (see :ref:`calendar-events`) per datastore.
.. _storage_datastore_create:
Creating a Datastore
^^^^^^^^^^^^^^^^^^^^
.. image:: images/screenshots/pbs-gui-datastore-create-general.png


@ -1,3 +1,5 @@
.. _sysadmin_host_administration:
Host System Administration
==========================


@ -230,11 +230,11 @@ You can list the ACLs of each user/token using the following command:
.. code-block:: console
# proxmox-backup-manager acl list
┌──────────┬──────────────────┬───────────┬────────────────┐
│ ugid │ path │ propagate │ roleid │
╞══════════╪══════════════════╪═══════════╪════════════════╡
│ john@pbs │ /datastore/disk1 │ 1 │ DatastoreAdmin │
└──────────┴──────────────────┴───────────┴────────────────┘
┌──────────┬──────────────────┬───────────┬────────────────┐
│ ugid │ path │ propagate │ roleid │
╞══════════╪══════════════════╪═══════════╪════════════════╡
│ john@pbs │ /datastore/store1 │ 1 │ DatastoreAdmin │
└──────────┴──────────────────┴───────────┴────────────────┘
A single user/token can be assigned multiple permission sets for different datastores.
@ -267,7 +267,7 @@ you can use the ``proxmox-backup-manager user permission`` command:
.. code-block:: console
# proxmox-backup-manager user permissions john@pbs -- path /datastore/store1
# proxmox-backup-manager user permissions john@pbs --path /datastore/store1
Privileges with (*) have the propagate flag set
Path: /datastore/store1
@ -276,9 +276,10 @@ you can use the ``proxmox-backup-manager user permission`` command:
- Datastore.Modify (*)
- Datastore.Prune (*)
- Datastore.Read (*)
- Datastore.Verify (*)
# proxmox-backup-manager acl update /datastore/store1 DatastoreBackup --auth-id 'john@pbs!client1'
# proxmox-backup-manager user permissions 'john@pbs!test' -- path /datastore/store1
# proxmox-backup-manager user permissions 'john@pbs!client1' --path /datastore/store1
Privileges with (*) have the propagate flag set
Path: /datastore/store1


@ -9,7 +9,7 @@ DYNAMIC_UNITS := \
proxmox-backup.service \
proxmox-backup-proxy.service
all: $(UNITS) $(DYNAMIC_UNITS) pbstest-beta.list
all: $(UNITS) $(DYNAMIC_UNITS) pbs-enterprise.list
clean:
rm -f $(DYNAMIC_UNITS)

etc/pbs-enterprise.list (new file)

@ -0,0 +1 @@
deb https://enterprise.proxmox.com/debian/pbs buster pbs-enterprise


@ -1 +0,0 @@
deb http://download.proxmox.com/debian/pbs buster pbstest


@ -9,6 +9,7 @@ pub mod types;
pub mod version;
pub mod ping;
pub mod pull;
pub mod tape;
mod helpers;
use proxmox::api::router::SubdirMap;
@ -27,6 +28,7 @@ pub const SUBDIRS: SubdirMap = &[
("pull", &pull::ROUTER),
("reader", &reader::ROUTER),
("status", &status::ROUTER),
("tape", &tape::ROUTER),
("version", &version::ROUTER),
];


@ -181,6 +181,7 @@ fn create_ticket(
}
#[api(
protected: true,
input: {
properties: {
userid: {
@ -195,7 +196,6 @@ fn create_ticket(
description: "Anybody is allowed to change there own password. In addition, users with 'Permissions:Modify' privilege may change any password.",
permission: &Permission::Anybody,
},
)]
/// Change user password
///
@ -215,7 +215,7 @@ fn change_password(
let mut allowed = userid == current_user;
if userid == "root@pam" { allowed = true; }
if current_user == "root@pam" { allowed = true; }
if !allowed {
let user_info = CachedUserInfo::new()?;


@ -249,10 +249,7 @@ pub fn create_user(password: Option<String>, param: Value) -> Result<(), Error>
},
},
},
returns: {
description: "The user configuration (with config digest).",
type: user::User,
},
returns: { type: user::User },
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false),
@ -268,6 +265,21 @@ pub fn read_user(userid: Userid, mut rpcenv: &mut dyn RpcEnvironment) -> Result<
Ok(user)
}
#[api()]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
#[allow(non_camel_case_types)]
pub enum DeletableProperty {
/// Delete the comment property.
comment,
/// Delete the firstname property.
firstname,
/// Delete the lastname property.
lastname,
/// Delete the email property.
email,
}
#[api(
protected: true,
input: {
@ -303,6 +315,14 @@ pub fn read_user(userid: Userid, mut rpcenv: &mut dyn RpcEnvironment) -> Result<
schema: user::EMAIL_SCHEMA,
optional: true,
},
delete: {
description: "List of properties to delete.",
type: Array,
optional: true,
items: {
type: DeletableProperty,
}
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
@ -326,6 +346,7 @@ pub fn update_user(
firstname: Option<String>,
lastname: Option<String>,
email: Option<String>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
) -> Result<(), Error> {
@ -340,6 +361,17 @@ pub fn update_user(
let mut data: user::User = config.lookup("user", userid.as_str())?;
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::comment => data.comment = None,
DeletableProperty::firstname => data.firstname = None,
DeletableProperty::lastname => data.lastname = None,
DeletableProperty::email => data.email = None,
}
}
}
if let Some(comment) = comment {
let comment = comment.trim().to_string();
if comment.is_empty() {
@ -433,10 +465,7 @@ pub fn delete_user(userid: Userid, digest: Option<String>) -> Result<(), Error>
},
},
},
returns: {
description: "Get API token metadata (with config digest).",
type: user::ApiToken,
},
returns: { type: user::ApiToken },
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false),


@ -1,4 +1,4 @@
use std::collections::{HashSet, HashMap};
use std::collections::HashSet;
use std::ffi::OsStr;
use std::os::unix::ffi::OsStrExt;
use std::sync::{Arc, Mutex};
@ -125,19 +125,6 @@ fn get_all_snapshot_files(
Ok((manifest, files))
}
fn group_backups(backup_list: Vec<BackupInfo>) -> HashMap<String, Vec<BackupInfo>> {
let mut group_hash = HashMap::new();
for info in backup_list {
let group_id = info.backup_dir.group().group_path().to_str().unwrap().to_owned();
let time_list = group_hash.entry(group_id).or_insert(vec![]);
time_list.push(info);
}
group_hash
}
#[api(
input: {
properties: {
@ -171,39 +158,64 @@ fn list_groups(
let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let datastore = DataStore::lookup_datastore(&store)?;
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let backup_list = BackupInfo::list_backups(&datastore.base_path())?;
let backup_groups = BackupInfo::list_backup_groups(&datastore.base_path())?;
let group_hash = group_backups(backup_list);
let group_info = backup_groups
.into_iter()
.fold(Vec::new(), |mut group_info, group| {
let owner = match datastore.get_owner(&group) {
Ok(auth_id) => auth_id,
Err(err) => {
eprintln!("Failed to get owner of group '{}/{}' - {}",
&store,
group,
err);
return group_info;
},
};
if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
return group_info;
}
let mut groups = Vec::new();
let snapshots = match group.list_backups(&datastore.base_path()) {
Ok(snapshots) => snapshots,
Err(_) => {
return group_info;
},
};
for (_group_id, mut list) in group_hash {
let backup_count: u64 = snapshots.len() as u64;
if backup_count == 0 {
return group_info;
}
BackupInfo::sort_list(&mut list, false);
let last_backup = snapshots
.iter()
.fold(&snapshots[0], |last, curr| {
if curr.is_finished()
&& curr.backup_dir.backup_time() > last.backup_dir.backup_time() {
curr
} else {
last
}
})
.to_owned();
let info = &list[0];
group_info.push(GroupListItem {
backup_type: group.backup_type().to_string(),
backup_id: group.backup_id().to_string(),
last_backup: last_backup.backup_dir.backup_time(),
owner: Some(owner),
backup_count,
files: last_backup.files,
});
let group = info.backup_dir.group();
group_info
});
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let owner = datastore.get_owner(group)?;
if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
continue;
}
let result_item = GroupListItem {
backup_type: group.backup_type().to_string(),
backup_id: group.backup_id().to_string(),
last_backup: info.backup_dir.backup_time(),
backup_count: list.len() as u64,
files: info.files.clone(),
owner: Some(owner),
};
groups.push(result_item);
}
Ok(groups)
Ok(group_info)
}
#[api(
@ -351,43 +363,56 @@ pub fn list_snapshots (
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let datastore = DataStore::lookup_datastore(&store)?;
let base_path = datastore.base_path();
let backup_list = BackupInfo::list_backups(&base_path)?;
let groups = match (backup_type, backup_id) {
(Some(backup_type), Some(backup_id)) => {
let mut groups = Vec::with_capacity(1);
groups.push(BackupGroup::new(backup_type, backup_id));
groups
},
(Some(backup_type), None) => {
BackupInfo::list_backup_groups(&base_path)?
.into_iter()
.filter(|group| group.backup_type() == backup_type)
.collect()
},
(None, Some(backup_id)) => {
BackupInfo::list_backup_groups(&base_path)?
.into_iter()
.filter(|group| group.backup_id() == backup_id)
.collect()
},
_ => BackupInfo::list_backup_groups(&base_path)?,
};
let mut snapshots = vec![];
let info_to_snapshot_list_item = |group: &BackupGroup, owner, info: BackupInfo| {
let backup_type = group.backup_type().to_string();
let backup_id = group.backup_id().to_string();
let backup_time = info.backup_dir.backup_time();
for info in backup_list {
let group = info.backup_dir.group();
if let Some(ref backup_type) = backup_type {
if backup_type != group.backup_type() { continue; }
}
if let Some(ref backup_id) = backup_id {
if backup_id != group.backup_id() { continue; }
}
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let owner = datastore.get_owner(group)?;
if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
continue;
}
let mut size = None;
let (comment, verification, files) = match get_all_snapshot_files(&datastore, &info) {
match get_all_snapshot_files(&datastore, &info) {
Ok((manifest, files)) => {
size = Some(files.iter().map(|x| x.size.unwrap_or(0)).sum());
// extract the first line from notes
let comment: Option<String> = manifest.unprotected["notes"]
.as_str()
.and_then(|notes| notes.lines().next())
.map(String::from);
let verify = manifest.unprotected["verify_state"].clone();
let verify: Option<SnapshotVerifyState> = match serde_json::from_value(verify) {
let fingerprint = match manifest.fingerprint() {
Ok(fp) => fp,
Err(err) => {
eprintln!("error parsing fingerprint: '{}'", err);
None
},
};
let verification = manifest.unprotected["verify_state"].clone();
let verification: Option<SnapshotVerifyState> = match serde_json::from_value(verification) {
Ok(verify) => verify,
Err(err) => {
eprintln!("error parsing verification state : '{}'", err);
@ -395,88 +420,114 @@ pub fn list_snapshots (
}
};
(comment, verify, files)
let size = Some(files.iter().map(|x| x.size.unwrap_or(0)).sum());
SnapshotListItem {
backup_type,
backup_id,
backup_time,
comment,
verification,
fingerprint,
files,
size,
owner,
}
},
Err(err) => {
eprintln!("error during snapshot file listing: '{}'", err);
(
None,
None,
info
let files = info
.files
.iter()
.into_iter()
.map(|x| BackupContent {
filename: x.to_string(),
size: None,
crypt_mode: None,
})
.collect()
)
.collect();
SnapshotListItem {
backup_type,
backup_id,
backup_time,
comment: None,
verification: None,
fingerprint: None,
files,
size: None,
owner,
}
},
};
let result_item = SnapshotListItem {
backup_type: group.backup_type().to_string(),
backup_id: group.backup_id().to_string(),
backup_time: info.backup_dir.backup_time(),
comment,
verification,
files,
size,
owner: Some(owner),
};
snapshots.push(result_item);
}
Ok(snapshots)
}
fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> {
let base_path = store.base_path();
let backup_list = BackupInfo::list_backups(&base_path)?;
let mut groups = HashSet::new();
let mut result = Counts {
ct: None,
host: None,
vm: None,
other: None,
}
};
for info in backup_list {
let group = info.backup_dir.group();
groups
.iter()
.try_fold(Vec::new(), |mut snapshots, group| {
let owner = match datastore.get_owner(group) {
Ok(auth_id) => auth_id,
Err(err) => {
eprintln!("Failed to get owner of group '{}/{}' - {}",
&store,
group,
err);
return Ok(snapshots);
},
};
let id = group.backup_id();
let backup_type = group.backup_type();
if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
return Ok(snapshots);
}
let mut new_id = false;
let group_backups = group.list_backups(&datastore.base_path())?;
if groups.insert(format!("{}-{}", &backup_type, &id)) {
new_id = true;
}
snapshots.extend(
group_backups
.into_iter()
.map(|info| info_to_snapshot_list_item(&group, Some(owner.clone()), info))
);
let mut counts = match backup_type {
"ct" => result.ct.take().unwrap_or(Default::default()),
"host" => result.host.take().unwrap_or(Default::default()),
"vm" => result.vm.take().unwrap_or(Default::default()),
_ => result.other.take().unwrap_or(Default::default()),
};
Ok(snapshots)
})
}
counts.snapshots += 1;
if new_id {
counts.groups +=1;
}
fn get_snapshots_count(store: &DataStore, filter_owner: Option<&Authid>) -> Result<Counts, Error> {
let base_path = store.base_path();
let groups = BackupInfo::list_backup_groups(&base_path)?;
match backup_type {
"ct" => result.ct = Some(counts),
"host" => result.host = Some(counts),
"vm" => result.vm = Some(counts),
_ => result.other = Some(counts),
}
}
groups.iter()
.filter(|group| {
let owner = match store.get_owner(&group) {
Ok(owner) => owner,
Err(err) => {
eprintln!("Failed to get owner of group '{}/{}' - {}",
store.name(),
group,
err);
return false;
},
};
Ok(result)
match filter_owner {
Some(filter) => check_backup_owner(&owner, filter).is_ok(),
None => true,
}
})
.try_fold(Counts::default(), |mut counts, group| {
let snapshot_count = group.list_backups(&base_path)?.len() as u64;
let type_count = match group.backup_type() {
"ct" => counts.ct.get_or_insert(Default::default()),
"vm" => counts.vm.get_or_insert(Default::default()),
"host" => counts.host.get_or_insert(Default::default()),
_ => counts.other.get_or_insert(Default::default()),
};
type_count.groups += 1;
type_count.snapshots += snapshot_count;
Ok(counts)
})
}
#[api(
@ -485,7 +536,14 @@ fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> {
store: {
schema: DATASTORE_SCHEMA,
},
verbose: {
type: bool,
default: false,
optional: true,
description: "Include additional information like snapshot counts and GC status.",
},
},
},
returns: {
type: DataStoreStatus,
@ -497,13 +555,30 @@ fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> {
/// Get datastore status.
pub fn status(
store: String,
verbose: bool,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<DataStoreStatus, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let storage = crate::tools::disks::disk_usage(&datastore.base_path())?;
let counts = get_snapshots_count(&datastore)?;
let gc_status = datastore.last_gc_status();
let (counts, gc_status) = if verbose {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let store_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let filter_owner = if store_privs & PRIV_DATASTORE_AUDIT != 0 {
None
} else {
Some(&auth_id)
};
let counts = Some(get_snapshots_count(&datastore, filter_owner)?);
let gc_status = Some(datastore.last_gc_status());
(counts, gc_status)
} else {
(None, None)
};
Ok(DataStoreStatus {
total: storage.total,
@ -612,12 +687,12 @@ pub fn verify(
}
res
} else if let Some(backup_group) = backup_group {
let (_count, failed_dirs) = verify_backup_group(
let failed_dirs = verify_backup_group(
datastore,
&backup_group,
verified_chunks,
corrupt_chunks,
None,
&mut StoreProgress::new(1),
worker.clone(),
worker.upid(),
None,
@ -636,7 +711,7 @@ pub fn verify(
verify_all_backups(datastore, worker.clone(), worker.upid(), owner, None)?
};
if failed_dirs.len() > 0 {
worker.log("Failed to verify following snapshots:");
worker.log("Failed to verify the following snapshots/groups:");
for dir in failed_dirs {
worker.log(format!("\t{}", dir));
}
@ -900,10 +975,7 @@ pub fn garbage_collection_status(
returns: {
description: "List the accessible datastores.",
type: Array,
items: {
description: "Datastore name and description.",
type: DataStoreListItem,
},
items: { type: DataStoreListItem },
},
access: {
permission: &Permission::Anybody,


@ -311,6 +311,10 @@ pub const BACKUP_API_SUBDIRS: SubdirMap = &[
"previous", &Router::new()
.download(&API_METHOD_DOWNLOAD_PREVIOUS)
),
(
"previous_backup_time", &Router::new()
.get(&API_METHOD_GET_PREVIOUS_BACKUP_TIME)
),
(
"speedtest", &Router::new()
.upload(&API_METHOD_UPLOAD_SPEEDTEST)
@ -694,6 +698,28 @@ fn finish_backup (
Ok(Value::Null)
}
#[sortable]
pub const API_METHOD_GET_PREVIOUS_BACKUP_TIME: ApiMethod = ApiMethod::new(
&ApiHandler::Sync(&get_previous_backup_time),
&ObjectSchema::new(
"Get previous backup time.",
&[],
)
);
fn get_previous_backup_time(
_param: Value,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let env: &BackupEnvironment = rpcenv.as_ref();
let backup_time = env.last_backup.as_ref().map(|info| info.backup_dir.backup_time());
Ok(json!(backup_time))
}
#[sortable]
pub const API_METHOD_DOWNLOAD_PREVIOUS: ApiMethod = ApiMethod::new(
&ApiHandler::AsyncHttp(&download_previous),


@ -5,9 +5,15 @@ pub mod datastore;
pub mod remote;
pub mod sync;
pub mod verify;
pub mod drive;
pub mod changer;
pub mod media_pool;
const SUBDIRS: SubdirMap = &[
("changer", &changer::ROUTER),
("datastore", &datastore::ROUTER),
("drive", &drive::ROUTER),
("media-pool", &media_pool::ROUTER),
("remote", &remote::ROUTER),
("sync", &sync::ROUTER),
("verify", &verify::ROUTER)

src/api2/config/changer.rs (new file)

@ -0,0 +1,243 @@
use anyhow::{bail, Error};
use serde_json::Value;
use proxmox::api::{api, Router, RpcEnvironment};
use crate::{
config,
api2::types::{
PROXMOX_CONFIG_DIGEST_SCHEMA,
CHANGER_ID_SCHEMA,
LINUX_DRIVE_PATH_SCHEMA,
DriveListEntry,
ScsiTapeChanger,
LinuxTapeDrive,
},
tape::{
linux_tape_changer_list,
check_drive_path,
lookup_drive,
},
};
#[api(
protected: true,
input: {
properties: {
name: {
schema: CHANGER_ID_SCHEMA,
},
path: {
schema: LINUX_DRIVE_PATH_SCHEMA,
},
},
},
)]
/// Create a new changer device
pub fn create_changer(
name: String,
path: String,
) -> Result<(), Error> {
let _lock = config::drive::lock()?;
let (mut config, _digest) = config::drive::config()?;
let linux_changers = linux_tape_changer_list();
check_drive_path(&linux_changers, &path)?;
if config.sections.get(&name).is_some() {
bail!("Entry '{}' already exists", name);
}
let item = ScsiTapeChanger {
name: name.clone(),
path,
};
config.set_data(&name, "changer", &item)?;
config::drive::save_config(&config)?;
Ok(())
}
#[api(
input: {
properties: {
name: {
schema: CHANGER_ID_SCHEMA,
},
},
},
returns: {
type: ScsiTapeChanger,
},
)]
/// Get tape changer configuration
pub fn get_config(
name: String,
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<ScsiTapeChanger, Error> {
let (config, digest) = config::drive::config()?;
let data: ScsiTapeChanger = config.lookup("changer", &name)?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(data)
}
#[api(
input: {
properties: {},
},
returns: {
description: "The list of configured changers (with config digest).",
type: Array,
items: {
type: DriveListEntry,
},
},
)]
/// List changers
pub fn list_changers(
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<DriveListEntry>, Error> {
let (config, digest) = config::drive::config()?;
let linux_changers = linux_tape_changer_list();
let changer_list: Vec<ScsiTapeChanger> = config.convert_to_typed_array("changer")?;
let mut list = Vec::new();
for changer in changer_list {
let mut entry = DriveListEntry {
name: changer.name,
path: changer.path.clone(),
changer: None,
vendor: None,
model: None,
serial: None,
};
if let Some(info) = lookup_drive(&linux_changers, &changer.path) {
entry.vendor = Some(info.vendor.clone());
entry.model = Some(info.model.clone());
entry.serial = Some(info.serial.clone());
}
list.push(entry);
}
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
#[api(
protected: true,
input: {
properties: {
name: {
schema: CHANGER_ID_SCHEMA,
},
path: {
schema: LINUX_DRIVE_PATH_SCHEMA,
optional: true,
},
digest: {
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
optional: true,
},
},
},
)]
/// Update a tape changer configuration
pub fn update_changer(
name: String,
path: Option<String>,
digest: Option<String>,
_param: Value,
) -> Result<(), Error> {
let _lock = config::drive::lock()?;
let (mut config, expected_digest) = config::drive::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let mut data: ScsiTapeChanger = config.lookup("changer", &name)?;
if let Some(path) = path {
let changers = linux_tape_changer_list();
check_drive_path(&changers, &path)?;
data.path = path;
}
config.set_data(&name, "changer", &data)?;
config::drive::save_config(&config)?;
Ok(())
}
#[api(
protected: true,
input: {
properties: {
name: {
schema: CHANGER_ID_SCHEMA,
},
},
},
)]
/// Delete a tape changer configuration
pub fn delete_changer(name: String, _param: Value) -> Result<(), Error> {
let _lock = config::drive::lock()?;
let (mut config, _digest) = config::drive::config()?;
match config.sections.get(&name) {
Some((section_type, _)) => {
if section_type != "changer" {
bail!("Entry '{}' exists, but is not a changer device", name);
}
config.sections.remove(&name);
},
None => bail!("Delete changer '{}' failed - no such entry", name),
}
let drive_list: Vec<LinuxTapeDrive> = config.convert_to_typed_array("linux")?;
for drive in drive_list {
if let Some(changer) = drive.changer {
if changer == name {
bail!("Delete changer '{}' failed - used by drive '{}'", name, drive.name);
}
}
}
config::drive::save_config(&config)?;
Ok(())
}
const ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_GET_CONFIG)
.put(&API_METHOD_UPDATE_CHANGER)
.delete(&API_METHOD_DELETE_CHANGER);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_CHANGERS)
.post(&API_METHOD_CREATE_CHANGER)
.match_all("name", &ITEM_ROUTER);


@ -151,10 +151,7 @@ pub fn create_datastore(param: Value) -> Result<(), Error> {
},
},
},
returns: {
description: "The datastore configuration (with config digest).",
type: datastore::DataStoreConfig,
},
returns: { type: datastore::DataStoreConfig },
access: {
permission: &Permission::Privilege(&["datastore", "{name}"], PRIV_DATASTORE_AUDIT, false),
},

src/api2/config/drive.rs (new file)

@ -0,0 +1,297 @@
use anyhow::{bail, Error};
use ::serde::{Deserialize, Serialize};
use serde_json::Value;
use proxmox::api::{api, Router, RpcEnvironment};
use crate::{
config,
api2::types::{
PROXMOX_CONFIG_DIGEST_SCHEMA,
DRIVE_ID_SCHEMA,
CHANGER_ID_SCHEMA,
CHANGER_DRIVE_ID_SCHEMA,
LINUX_DRIVE_PATH_SCHEMA,
DriveListEntry,
LinuxTapeDrive,
ScsiTapeChanger,
},
tape::{
linux_tape_device_list,
check_drive_path,
lookup_drive,
},
};
#[api(
protected: true,
input: {
properties: {
name: {
schema: DRIVE_ID_SCHEMA,
},
path: {
schema: LINUX_DRIVE_PATH_SCHEMA,
},
changer: {
schema: CHANGER_ID_SCHEMA,
optional: true,
},
"changer-drive-id": {
schema: CHANGER_DRIVE_ID_SCHEMA,
optional: true,
},
},
},
)]
/// Create a new drive
pub fn create_drive(param: Value) -> Result<(), Error> {
let _lock = config::drive::lock()?;
let (mut config, _digest) = config::drive::config()?;
let item: LinuxTapeDrive = serde_json::from_value(param)?;
let linux_drives = linux_tape_device_list();
check_drive_path(&linux_drives, &item.path)?;
if config.sections.get(&item.name).is_some() {
bail!("Entry '{}' already exists", item.name);
}
config.set_data(&item.name, "linux", &item)?;
config::drive::save_config(&config)?;
Ok(())
}
#[api(
input: {
properties: {
name: {
schema: DRIVE_ID_SCHEMA,
},
},
},
returns: {
type: LinuxTapeDrive,
},
)]
/// Get drive configuration
pub fn get_config(
name: String,
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<LinuxTapeDrive, Error> {
let (config, digest) = config::drive::config()?;
let data: LinuxTapeDrive = config.lookup("linux", &name)?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(data)
}
#[api(
input: {
properties: {},
},
returns: {
description: "The list of configured remotes (with config digest).",
type: Array,
items: {
type: DriveListEntry,
},
},
)]
/// List drives
pub fn list_drives(
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<DriveListEntry>, Error> {
let (config, digest) = config::drive::config()?;
let linux_drives = linux_tape_device_list();
let drive_list: Vec<LinuxTapeDrive> = config.convert_to_typed_array("linux")?;
let mut list = Vec::new();
for drive in drive_list {
let mut entry = DriveListEntry {
name: drive.name,
path: drive.path.clone(),
changer: drive.changer,
vendor: None,
model: None,
serial: None,
};
if let Some(info) = lookup_drive(&linux_drives, &drive.path) {
entry.vendor = Some(info.vendor.clone());
entry.model = Some(info.model.clone());
entry.serial = Some(info.serial.clone());
}
list.push(entry);
}
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
#[api()]
#[derive(Serialize, Deserialize)]
#[allow(non_camel_case_types)]
#[serde(rename_all = "kebab-case")]
/// Deletable property name
pub enum DeletableProperty {
/// Delete the changer property.
changer,
/// Delete the changer-drive-id property.
changer_drive_id,
}
#[api(
protected: true,
input: {
properties: {
name: {
schema: DRIVE_ID_SCHEMA,
},
path: {
schema: LINUX_DRIVE_PATH_SCHEMA,
optional: true,
},
changer: {
schema: CHANGER_ID_SCHEMA,
optional: true,
},
"changer-drive-id": {
schema: CHANGER_DRIVE_ID_SCHEMA,
optional: true,
},
delete: {
description: "List of properties to delete.",
type: Array,
optional: true,
items: {
type: DeletableProperty,
}
},
digest: {
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
optional: true,
},
},
},
)]
/// Update a drive configuration
pub fn update_drive(
name: String,
path: Option<String>,
changer: Option<String>,
changer_drive_id: Option<u64>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
_param: Value,
) -> Result<(), Error> {
let _lock = config::drive::lock()?;
let (mut config, expected_digest) = config::drive::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let mut data: LinuxTapeDrive = config.lookup("linux", &name)?;
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::changer => {
data.changer = None;
data.changer_drive_id = None;
},
DeletableProperty::changer_drive_id => { data.changer_drive_id = None; },
}
}
}
if let Some(path) = path {
let linux_drives = linux_tape_device_list();
check_drive_path(&linux_drives, &path)?;
data.path = path;
}
if let Some(changer) = changer {
let _: ScsiTapeChanger = config.lookup("changer", &changer)?;
data.changer = Some(changer);
}
if let Some(changer_drive_id) = changer_drive_id {
if changer_drive_id == 0 {
data.changer_drive_id = None;
} else {
if data.changer.is_none() {
bail!("Option 'changer-drive-id' requires option 'changer'.");
}
data.changer_drive_id = Some(changer_drive_id);
}
}
config.set_data(&name, "linux", &data)?;
config::drive::save_config(&config)?;
Ok(())
}
#[api(
protected: true,
input: {
properties: {
name: {
schema: DRIVE_ID_SCHEMA,
},
},
},
)]
/// Delete a drive configuration
pub fn delete_drive(name: String, _param: Value) -> Result<(), Error> {
let _lock = config::drive::lock()?;
let (mut config, _digest) = config::drive::config()?;
match config.sections.get(&name) {
Some((section_type, _)) => {
if section_type != "linux" {
bail!("Entry '{}' exists, but is not a linux tape drive", name);
}
config.sections.remove(&name);
},
None => bail!("Delete drive '{}' failed - no such drive", name),
}
config::drive::save_config(&config)?;
Ok(())
}
const ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_GET_CONFIG)
.put(&API_METHOD_UPDATE_DRIVE)
.delete(&API_METHOD_DELETE_DRIVE);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_DRIVES)
.post(&API_METHOD_CREATE_DRIVE)
.match_all("name", &ITEM_ROUTER);


@ -0,0 +1,252 @@
use anyhow::{bail, Error};
use ::serde::{Deserialize, Serialize};
use proxmox::{
api::{
api,
Router,
RpcEnvironment,
},
};
use crate::{
api2::types::{
DRIVE_ID_SCHEMA,
MEDIA_POOL_NAME_SCHEMA,
MEDIA_SET_NAMING_TEMPLATE_SCHEMA,
MEDIA_SET_ALLOCATION_POLICY_SCHEMA,
MEDIA_RETENTION_POLICY_SCHEMA,
MediaPoolConfig,
},
config::{
self,
drive::{
check_drive_exists,
},
},
};
#[api(
input: {
properties: {
name: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
drive: {
schema: DRIVE_ID_SCHEMA,
},
allocation: {
schema: MEDIA_SET_ALLOCATION_POLICY_SCHEMA,
optional: true,
},
retention: {
schema: MEDIA_RETENTION_POLICY_SCHEMA,
optional: true,
},
template: {
schema: MEDIA_SET_NAMING_TEMPLATE_SCHEMA,
optional: true,
},
},
},
)]
/// Create a new media pool
pub fn create_pool(
name: String,
drive: String,
allocation: Option<String>,
retention: Option<String>,
template: Option<String>,
) -> Result<(), Error> {
let _lock = config::media_pool::lock()?;
let (mut config, _digest) = config::media_pool::config()?;
if config.sections.get(&name).is_some() {
bail!("Media pool '{}' already exists", name);
}
let (drive_config, _) = config::drive::config()?;
check_drive_exists(&drive_config, &drive)?;
let item = MediaPoolConfig {
name: name.clone(),
drive,
allocation,
retention,
template,
};
config.set_data(&name, "pool", &item)?;
config::media_pool::save_config(&config)?;
Ok(())
}
#[api(
returns: {
description: "The list of configured media pools (with config digest).",
type: Array,
items: {
type: MediaPoolConfig,
},
},
)]
/// List media pools
pub fn list_pools(
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<MediaPoolConfig>, Error> {
let (config, digest) = config::media_pool::config()?;
let list = config.convert_to_typed_array("pool")?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
#[api(
input: {
properties: {
name: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
},
},
returns: {
type: MediaPoolConfig,
},
)]
/// Get media pool configuration
pub fn get_config(name: String) -> Result<MediaPoolConfig, Error> {
let (config, _digest) = config::media_pool::config()?;
let data: MediaPoolConfig = config.lookup("pool", &name)?;
Ok(data)
}
#[api()]
#[derive(Serialize, Deserialize)]
#[allow(non_camel_case_types)]
/// Deletable property name
pub enum DeletableProperty {
/// Delete media set allocation policy.
allocation,
/// Delete pool retention policy
retention,
/// Delete media set naming template
template,
}
#[api(
input: {
properties: {
name: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
drive: {
schema: DRIVE_ID_SCHEMA,
optional: true,
},
allocation: {
schema: MEDIA_SET_ALLOCATION_POLICY_SCHEMA,
optional: true,
},
retention: {
schema: MEDIA_RETENTION_POLICY_SCHEMA,
optional: true,
},
template: {
schema: MEDIA_SET_NAMING_TEMPLATE_SCHEMA,
optional: true,
},
delete: {
description: "List of properties to delete.",
type: Array,
optional: true,
items: {
type: DeletableProperty,
}
},
},
},
)]
/// Update media pool settings
pub fn update_pool(
name: String,
drive: Option<String>,
allocation: Option<String>,
retention: Option<String>,
template: Option<String>,
delete: Option<Vec<DeletableProperty>>,
) -> Result<(), Error> {
let _lock = config::media_pool::lock()?;
let (mut config, _digest) = config::media_pool::config()?;
let mut data: MediaPoolConfig = config.lookup("pool", &name)?;
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::allocation => { data.allocation = None; },
DeletableProperty::retention => { data.retention = None; },
DeletableProperty::template => { data.template = None; },
}
}
}
if let Some(drive) = drive { data.drive = drive; }
if allocation.is_some() { data.allocation = allocation; }
if retention.is_some() { data.retention = retention; }
if template.is_some() { data.template = template; }
config.set_data(&name, "pool", &data)?;
config::media_pool::save_config(&config)?;
Ok(())
}
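// Usage sketch (hypothetical path and values; where this router is mounted
// in the config API tree is an assumption):
//
//   PUT .../config/media-pool/daily
//   { "allocation": "always", "delete": ["retention"] }
//
// sets a new allocation policy and drops the retention policy in one call.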
#[api(
input: {
properties: {
name: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
},
},
)]
/// Delete a media pool configuration
pub fn delete_pool(name: String) -> Result<(), Error> {
let _lock = config::media_pool::lock()?;
let (mut config, _digest) = config::media_pool::config()?;
match config.sections.get(&name) {
Some(_) => { config.sections.remove(&name); },
None => bail!("delete pool '{}' failed - no such pool", name),
}
config::media_pool::save_config(&config)?;
Ok(())
}
const ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_GET_CONFIG)
.put(&API_METHOD_UPDATE_POOL)
.delete(&API_METHOD_DELETE_POOL);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_POOLS)
.post(&API_METHOD_CREATE_POOL)
.match_all("name", &ITEM_ROUTER);

View File

@ -19,10 +19,7 @@ use crate::config::acl::{PRIV_REMOTE_AUDIT, PRIV_REMOTE_MODIFY};
returns: {
description: "The list of configured remotes (with config digest).",
type: Array,
items: {
type: remote::Remote,
description: "Remote configuration (without password).",
},
items: { type: remote::Remote },
},
access: {
description: "List configured remotes filtered by Remote.Audit privileges",
@ -78,7 +75,7 @@ pub fn list_remotes(
optional: true,
default: 8007,
},
userid: {
"auth-id": {
type: Authid,
},
password: {
@ -124,10 +121,7 @@ pub fn create_remote(password: String, param: Value) -> Result<(), Error> {
},
},
},
returns: {
description: "The remote configuration (with config digest).",
type: remote::Remote,
},
returns: { type: remote::Remote },
access: {
permission: &Permission::Privilege(&["remote", "{name}"], PRIV_REMOTE_AUDIT, false),
}
@ -178,7 +172,7 @@ pub enum DeletableProperty {
type: u16,
optional: true,
},
userid: {
"auth-id": {
optional: true,
type: Authid,
},
@ -214,7 +208,7 @@ pub fn update_remote(
comment: Option<String>,
host: Option<String>,
port: Option<u16>,
userid: Option<Authid>,
auth_id: Option<Authid>,
password: Option<String>,
fingerprint: Option<String>,
delete: Option<Vec<DeletableProperty>>,
@ -252,7 +246,7 @@ pub fn update_remote(
}
if let Some(host) = host { data.host = host; }
if port.is_some() { data.port = port; }
if let Some(userid) = userid { data.userid = userid; }
if let Some(auth_id) = auth_id { data.auth_id = auth_id; }
if let Some(password) = password { data.password = password; }
if let Some(fingerprint) = fingerprint { data.fingerprint = Some(fingerprint); }
@ -284,6 +278,17 @@ pub fn update_remote(
/// Remove a remote from the configuration file.
pub fn delete_remote(name: String, digest: Option<String>) -> Result<(), Error> {
use crate::config::sync::{self, SyncJobConfig};
let (sync_jobs, _) = sync::config()?;
let job_list: Vec<SyncJobConfig> = sync_jobs.convert_to_typed_array("sync")?;
for job in job_list {
if job.remote == name {
bail!("remote '{}' is used by sync job '{}' (datastore '{}')", name, job.id, job.store);
}
}
let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = remote::config()?;
@ -312,7 +317,7 @@ pub async fn remote_client(remote: remote::Remote) -> Result<HttpClient, Error>
let client = HttpClient::new(
&remote.host,
remote.port.unwrap_or(8007),
&remote.userid,
&remote.auth_id,
options)?;
let _auth_info = client.login() // make sure we can auth
.await
@ -336,10 +341,7 @@ pub async fn remote_client(remote: remote::Remote) -> Result<HttpClient, Error>
returns: {
description: "List the accessible datastores.",
type: Array,
items: {
description: "Datastore name and description.",
type: DataStoreListItem,
},
items: { type: DataStoreListItem },
},
)]
/// List datastores of a remote.cfg entry

View File

@ -182,10 +182,7 @@ pub fn create_sync_job(
},
},
},
returns: {
description: "The sync job configuration.",
type: sync::SyncJobConfig,
},
returns: { type: sync::SyncJobConfig },
access: {
description: "Limited to sync job entries where user has Datastore.Audit on target datastore, and Remote.Audit on source remote.",
permission: &Permission::Anybody,

View File

@ -127,10 +127,7 @@ pub fn create_verification_job(
},
},
},
returns: {
description: "The verification job configuration.",
type: verify::VerificationJobConfig,
},
returns: { type: verify::VerificationJobConfig },
access: {
permission: &Permission::Anybody,
description: "Requires Datastore.Audit or Datastore.Verify on job's datastore.",

View File

@ -6,7 +6,6 @@ use futures::future::{FutureExt, TryFutureExt};
use hyper::body::Body;
use hyper::http::request::Parts;
use hyper::upgrade::Upgraded;
use nix::fcntl::{fcntl, FcntlArg, FdFlag};
use serde_json::{json, Value};
use tokio::io::{AsyncBufReadExt, BufReader};
@ -34,7 +33,7 @@ pub mod subscription;
pub(crate) mod rrd;
mod journal;
mod services;
pub(crate) mod services;
mod status;
mod syslog;
mod time;
@ -145,18 +144,10 @@ async fn termproxy(
move |worker| async move {
// move inside the worker so that it survives and does not close the port
// remove CLOEXEC from listener so that we can reuse it in termproxy
let fd = listener.as_raw_fd();
let mut flags = match fcntl(fd, FcntlArg::F_GETFD) {
Ok(bits) => FdFlag::from_bits_truncate(bits),
Err(err) => bail!("could not get fd: {}", err),
};
flags.remove(FdFlag::FD_CLOEXEC);
if let Err(err) = fcntl(fd, FcntlArg::F_SETFD(flags)) {
bail!("could not set fd: {}", err);
}
tools::fd_change_cloexec(listener.as_raw_fd(), false)?;
let mut arguments: Vec<&str> = Vec::new();
let fd_string = fd.to_string();
let fd_string = listener.as_raw_fd().to_string();
arguments.push(&fd_string);
arguments.extend_from_slice(&[
"--path",

View File

@ -1,12 +1,13 @@
use anyhow::{Error, bail, format_err};
use serde_json::{json, Value};
use std::collections::HashMap;
use proxmox::list_subdirs_api_method;
use proxmox::api::{api, RpcEnvironment, RpcEnvironmentType, Permission};
use proxmox::api::router::{Router, SubdirMap};
use crate::server::WorkerTask;
use crate::tools::{apt, http};
use crate::tools::{apt, http, subscription};
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::api2::types::{Authid, APTUpdateInfo, NODE_SCHEMA, UPID_SCHEMA};
@ -202,9 +203,34 @@ fn apt_get_changelog(
let changelog_url = &pkg_info[0].change_log_url;
// FIXME: use 'apt-get changelog' for proxmox packages as well, once repo supports it
if changelog_url.starts_with("http://download.proxmox.com/") {
let changelog = crate::tools::runtime::block_on(http::get_string(changelog_url))
let changelog = crate::tools::runtime::block_on(http::get_string(changelog_url, None))
.map_err(|err| format_err!("Error downloading changelog from '{}': {}", changelog_url, err))?;
return Ok(json!(changelog));
} else if changelog_url.starts_with("https://enterprise.proxmox.com/") {
let sub = match subscription::read_subscription()? {
Some(sub) => sub,
None => bail!("cannot retrieve changelog from enterprise repo: no subscription info found")
};
let (key, id) = match sub.key {
Some(key) => {
match sub.serverid {
Some(id) => (key, id),
None =>
bail!("cannot retrieve changelog from enterprise repo: no server id found")
}
},
None => bail!("cannot retrieve changelog from enterprise repo: no subscription key found")
};
let mut auth_header = HashMap::new();
auth_header.insert("Authorization".to_owned(),
format!("Basic {}", base64::encode(format!("{}:{}", key, id))));
let changelog = crate::tools::runtime::block_on(http::get_string(changelog_url, Some(&auth_header)))
.map_err(|err| format_err!("Error downloading changelog from '{}': {}", changelog_url, err))?;
return Ok(json!(changelog));
} else {
let mut command = std::process::Command::new("apt-get");
command.arg("changelog");
@ -215,12 +241,128 @@ fn apt_get_changelog(
}
}
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
},
},
returns: {
description: "List of more relevant packages.",
type: Array,
items: {
type: APTUpdateInfo,
},
},
access: {
permission: &Permission::Privilege(&[], PRIV_SYS_AUDIT, false),
},
)]
/// Get package information for important Proxmox Backup Server packages.
pub fn get_versions() -> Result<Vec<APTUpdateInfo>, Error> {
const PACKAGES: &[&str] = &[
"ifupdown2",
"libjs-extjs",
"proxmox-backup",
"proxmox-backup-docs",
"proxmox-backup-client",
"proxmox-backup-server",
"proxmox-mini-journalreader",
"proxmox-widget-toolkit",
"pve-xtermjs",
"smartmontools",
"zfsutils-linux",
];
fn unknown_package(package: String, extra_info: Option<String>) -> APTUpdateInfo {
APTUpdateInfo {
package,
title: "unknown".into(),
arch: "unknown".into(),
description: "unknown".into(),
version: "unknown".into(),
old_version: "unknown".into(),
origin: "unknown".into(),
priority: "unknown".into(),
section: "unknown".into(),
change_log_url: "unknown".into(),
extra_info,
}
}
let is_kernel = |name: &str| name.starts_with("pve-kernel-");
let mut packages: Vec<APTUpdateInfo> = Vec::new();
let pbs_packages = apt::list_installed_apt_packages(
|filter| {
filter.installed_version == Some(filter.active_version)
&& (is_kernel(filter.package) || PACKAGES.contains(&filter.package))
},
None,
);
let running_kernel = format!(
"running kernel: {}",
nix::sys::utsname::uname().release().to_owned()
);
if let Some(proxmox_backup) = pbs_packages.iter().find(|pkg| pkg.package == "proxmox-backup") {
let mut proxmox_backup = proxmox_backup.clone();
proxmox_backup.extra_info = Some(running_kernel);
packages.push(proxmox_backup);
} else {
packages.push(unknown_package("proxmox-backup".into(), Some(running_kernel)));
}
let version = crate::api2::version::PROXMOX_PKG_VERSION;
let release = crate::api2::version::PROXMOX_PKG_RELEASE;
let daemon_version_info = Some(format!("running version: {}.{}", version, release));
if let Some(pkg) = pbs_packages.iter().find(|pkg| pkg.package == "proxmox-backup-server") {
let mut pkg = pkg.clone();
pkg.extra_info = daemon_version_info;
packages.push(pkg);
} else {
packages.push(unknown_package("proxmox-backup".into(), daemon_version_info));
}
let mut kernel_pkgs: Vec<APTUpdateInfo> = pbs_packages
.iter()
.filter(|pkg| is_kernel(&pkg.package))
.cloned()
.collect();
// make sure the cache mutex gets dropped before the next call to list_installed_apt_packages
{
let cache = apt_pkg_native::Cache::get_singleton();
kernel_pkgs.sort_by(|left, right| {
cache
.compare_versions(&left.old_version, &right.old_version)
.reverse()
});
}
packages.append(&mut kernel_pkgs);
// add entry for all packages we're interested in, even if not installed
for pkg in PACKAGES.iter() {
if pkg == &"proxmox-backup" || pkg == &"proxmox-backup-server" {
continue;
}
match pbs_packages.iter().find(|item| &item.package == pkg) {
Some(apt_pkg) => packages.push(apt_pkg.to_owned()),
None => packages.push(unknown_package(pkg.to_string(), None)),
}
}
Ok(packages)
}
const SUBDIRS: SubdirMap = &[
("changelog", &Router::new().get(&API_METHOD_APT_GET_CHANGELOG)),
("update", &Router::new()
.get(&API_METHOD_APT_UPDATE_AVAILABLE)
.post(&API_METHOD_APT_UPDATE_DATABASE)
),
("versions", &Router::new().get(&API_METHOD_GET_VERSIONS)),
];
pub const ROUTER: Router = Router::new()

View File

@ -142,6 +142,18 @@ pub fn create_datastore_disk(
bail!("disk '{}' is already in use.", disk);
}
let mount_point = format!("/mnt/datastore/{}", &name);
// check if the default path already exists and bail if it does
let default_path = std::path::PathBuf::from(&mount_point);
match std::fs::metadata(&default_path) {
Err(_) => {}, // path does not exist
Ok(_) => {
bail!("path {:?} already exists", default_path);
}
}
let upid_str = WorkerTask::new_thread(
"dircreate", Some(name.clone()), auth_id, to_stdout, move |worker|
{
@ -160,7 +172,7 @@ pub fn create_datastore_disk(
let uuid = get_fs_uuid(&partition)?;
let uuid_path = format!("/dev/disk/by-uuid/{}", uuid);
let (mount_unit_name, mount_point) = create_datastore_mount_unit(&name, filesystem, &uuid_path)?;
let mount_unit_name = create_datastore_mount_unit(&name, &mount_point, filesystem, &uuid_path)?;
systemd::reload_daemon()?;
systemd::enable_unit(&mount_unit_name)?;
@ -243,11 +255,11 @@ pub const ROUTER: Router = Router::new()
fn create_datastore_mount_unit(
datastore_name: &str,
mount_point: &str,
fs_type: FileSystemType,
what: &str,
) -> Result<(String, String), Error> {
) -> Result<String, Error> {
let mount_point = format!("/mnt/datastore/{}", datastore_name);
let mut mount_unit_name = systemd::escape_unit(&mount_point, true);
mount_unit_name.push_str(".mount");
@ -265,7 +277,7 @@ fn create_datastore_mount_unit(
let mount = SystemdMountSection {
What: what.to_string(),
Where: mount_point.clone(),
Where: mount_point.to_string(),
Type: Some(fs_type.to_string()),
Options: Some(String::from("defaults")),
..Default::default()
@ -278,5 +290,5 @@ fn create_datastore_mount_unit(
systemd::config::save_systemd_mount(&mount_unit_path, &config)?;
Ok((mount_unit_name, mount_point))
Ok(mount_unit_name)
}

View File

@ -243,7 +243,7 @@ pub fn zpool_details(
permission: &Permission::Privilege(&["system", "disks"], PRIV_SYS_MODIFY, false),
},
)]
/// Create a new ZFS pool.
/// Create a new ZFS pool. Will be mounted under '/mnt/datastore/<name>'.
pub fn create_zpool(
name: String,
devices: String,
@ -303,10 +303,11 @@ pub fn create_zpool(
bail!("{:?} needs at least {} disks.", raidlevel, min_disks);
}
let mount_point = format!("/mnt/datastore/{}", &name);
// check if the default path already exists and bail if it does
// otherwise we get an error on mounting
let mut default_path = std::path::PathBuf::from("/");
default_path.push(&name);
// otherwise 'zpool create' aborts after partitioning, but before creating the pool
let default_path = std::path::PathBuf::from(&mount_point);
match std::fs::metadata(&default_path) {
Err(_) => {}, // path does not exist
@ -322,7 +323,7 @@ pub fn create_zpool(
let mut command = std::process::Command::new("zpool");
command.args(&["create", "-o", &format!("ashift={}", ashift), &name]);
command.args(&["create", "-o", &format!("ashift={}", ashift), "-m", &mount_point, &name]);
match raidlevel {
ZfsRaidLevel::Single => {
@ -371,7 +372,6 @@ pub fn create_zpool(
}
if add_datastore {
let mount_point = format!("/{}", name);
crate::api2::config::datastore::create_datastore(json!({ "name": name, "path": mount_point }))?
}

View File

@ -102,10 +102,7 @@ pub fn list_network_devices(
},
},
},
returns: {
description: "The network interface configuration (with config digest).",
type: Interface,
},
returns: { type: Interface },
access: {
permission: &Permission::Privilege(&["system", "network", "interfaces", "{name}"], PRIV_SYS_AUDIT, false),
},
@ -135,7 +132,6 @@ pub fn read_interface(iface: String) -> Result<Value, Error> {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
},
"type": {
description: "Interface type.",
type: NetworkInterfaceType,
optional: true,
},
@ -388,7 +384,6 @@ pub enum DeletableProperty {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
},
"type": {
description: "Interface type. If specified, need to match the current type.",
type: NetworkInterfaceType,
optional: true,
},

View File

@ -22,7 +22,7 @@ static SERVICE_NAME_LIST: [&str; 7] = [
"systemd-timesyncd",
];
fn real_service_name(service: &str) -> &str {
pub fn real_service_name(service: &str) -> &str {
// since postfix package 3.1.0-3.1 the postfix unit is only here
// to manage subinstances, of which the default is called "-".

View File

@ -73,10 +73,7 @@ pub fn check_subscription(
},
},
},
returns: {
description: "Subscription status.",
type: SubscriptionInfo,
},
returns: { type: SubscriptionInfo },
access: {
permission: &Permission::Anybody,
},

View File

@ -134,12 +134,18 @@ fn get_syslog(
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let service = if let Some(service) = param["service"].as_str() {
Some(crate::api2::node::services::real_service_name(service))
} else {
None
};
let (count, lines) = dump_journal(
param["start"].as_u64(),
param["limit"].as_u64(),
param["since"].as_str(),
param["until"].as_str(),
param["service"].as_str())?;
service)?;
rpcenv["total"] = Value::from(count);

View File

@ -71,6 +71,36 @@ fn check_job_privs(auth_id: &Authid, user_info: &CachedUserInfo, upid: &UPID) ->
bail!("not a scheduled job task");
}
// get the store out of the worker_id
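// (illustrative worker_id layouts, matching the checks below:
//  verif*:          "<store>:..." (regex group 1) or plain "<store>"
//  syncjob:         "<remote>:<remote-store>:<local-store>" - group 3 is the local store
//  prune/backup/gc: "<store>" or "<store>:<backup-group>")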
fn check_job_store(upid: &UPID, store: &str) -> bool {
match (upid.worker_type.as_str(), &upid.worker_id) {
(workertype, Some(workerid)) if workertype.starts_with("verif") => {
if let Some(captures) = VERIFICATION_JOB_WORKER_ID_REGEX.captures(&workerid) {
if let Some(jobstore) = captures.get(1) {
return store == jobstore.as_str();
}
} else {
return workerid == store;
}
}
("syncjob", Some(workerid)) => {
if let Some(captures) = SYNC_JOB_WORKER_ID_REGEX.captures(&workerid) {
if let Some(local_store) = captures.get(3) {
return store == local_store.as_str();
}
}
}
("prune", Some(workerid))
| ("backup", Some(workerid))
| ("garbage_collection", Some(workerid)) => {
return workerid == store || workerid.starts_with(&format!("{}:", store));
}
_ => {}
};
false
}
fn check_task_access(auth_id: &Authid, upid: &UPID) -> Result<(), Error> {
let task_auth_id = &upid.auth_id;
if auth_id == task_auth_id
@ -136,7 +166,6 @@ fn check_task_access(auth_id: &Authid, upid: &UPID) -> Result<(), Error> {
},
user: {
type: Userid,
description: "The user who started the task.",
},
tokenid: {
type: Tokenname,
@ -455,21 +484,8 @@ pub fn list_tasks(
}
if let Some(store) = store {
// Note: useful to select all tasks spawned by proxmox-backup-client
let worker_id = match &info.upid.worker_id {
Some(w) => w,
None => return None, // skip
};
if info.upid.worker_type == "backup" || info.upid.worker_type == "restore" ||
info.upid.worker_type == "prune"
{
let prefix = format!("{}:", store);
if !worker_id.starts_with(&prefix) { return None; }
} else if info.upid.worker_type == "garbage_collection" {
if worker_id != store { return None; }
} else {
return None; // skip
if !check_job_store(&info.upid, store) {
return None;
}
}

View File

@ -50,7 +50,7 @@ pub async fn get_pull_parameters(
let (remote_config, _digest) = remote::config()?;
let remote: remote::Remote = remote_config.lookup("remote", remote)?;
let src_repo = BackupRepository::new(Some(remote.userid.clone()), Some(remote.host.clone()), remote.port, remote_store.to_string());
let src_repo = BackupRepository::new(Some(remote.auth_id.clone()), Some(remote.host.clone()), remote.port, remote_store.to_string());
let client = crate::api2::config::remote::remote_client(remote).await?;

View File

@ -103,6 +103,7 @@ fn datastore_status(
"total": status.total,
"used": status.used,
"avail": status.avail,
"gc-status": datastore.last_gc_status(),
});
let rrd_dir = format!("datastore/{}", store);
@ -152,6 +153,8 @@ fn datastore_status(
}
}
entry["history-start"] = start.into();
entry["history-delta"] = reso.into();
entry["history"] = history.into();
// we skip the calculation for datastores with not enough data

163
src/api2/tape/changer.rs Normal file
View File

@ -0,0 +1,163 @@
use std::path::Path;
use anyhow::Error;
use serde_json::Value;
use proxmox::api::{api, Router, SubdirMap};
use proxmox::list_subdirs_api_method;
use crate::{
config,
api2::types::{
CHANGER_ID_SCHEMA,
ScsiTapeChanger,
TapeDeviceInfo,
MtxStatusEntry,
MtxEntryKind,
},
tape::{
TAPE_STATUS_DIR,
ElementStatus,
OnlineStatusMap,
Inventory,
MediaStateDatabase,
linux_tape_changer_list,
mtx_status,
mtx_status_to_online_set,
mtx_transfer,
},
};
#[api(
input: {
properties: {
name: {
schema: CHANGER_ID_SCHEMA,
},
},
},
returns: {
description: "A status entry for each drive and slot.",
type: Array,
items: {
type: MtxStatusEntry,
},
},
)]
/// Get tape changer status
pub fn get_status(name: String) -> Result<Vec<MtxStatusEntry>, Error> {
let (config, _digest) = config::drive::config()?;
let data: ScsiTapeChanger = config.lookup("changer", &name)?;
let status = mtx_status(&data.path)?;
let state_path = Path::new(TAPE_STATUS_DIR);
let inventory = Inventory::load(state_path)?;
let mut map = OnlineStatusMap::new(&config)?;
let online_set = mtx_status_to_online_set(&status, &inventory);
map.update_online_status(&name, online_set)?;
let mut state_db = MediaStateDatabase::load(state_path)?;
state_db.update_online_status(&map)?;
let mut list = Vec::new();
for (id, drive_status) in status.drives.iter().enumerate() {
let entry = MtxStatusEntry {
entry_kind: MtxEntryKind::Drive,
entry_id: id as u64,
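// None => empty, Some("") => full but without volume tag,
// Some(tag) => full with the given volume tag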
changer_id: match &drive_status.status {
ElementStatus::Empty => None,
ElementStatus::Full => Some(String::new()),
ElementStatus::VolumeTag(tag) => Some(tag.to_string()),
},
loaded_slot: drive_status.loaded_slot,
};
list.push(entry);
}
for (id, slot_status) in status.slots.iter().enumerate() {
let entry = MtxStatusEntry {
entry_kind: MtxEntryKind::Slot,
entry_id: id as u64 + 1,
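// slot numbering is 1-based (drive numbering above starts at 0)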
changer_id: match &slot_status {
ElementStatus::Empty => None,
ElementStatus::Full => Some(String::new()),
ElementStatus::VolumeTag(tag) => Some(tag.to_string()),
},
loaded_slot: None,
};
list.push(entry);
}
Ok(list)
}
#[api(
input: {
properties: {
name: {
schema: CHANGER_ID_SCHEMA,
},
from: {
description: "Source slot number",
minimum: 1,
},
to: {
description: "Destination slot number",
minimum: 1,
},
},
},
)]
/// Transfer media from one slot to another
pub fn transfer(
name: String,
from: u64,
to: u64,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
let data: ScsiTapeChanger = config.lookup("changer", &name)?;
mtx_transfer(&data.path, from, to)?;
Ok(())
}
#[api(
input: {
properties: {},
},
returns: {
description: "The list of autodetected tape changers.",
type: Array,
items: {
type: TapeDeviceInfo,
},
},
)]
/// Scan for SCSI tape changers
pub fn scan_changers(_param: Value) -> Result<Vec<TapeDeviceInfo>, Error> {
let list = linux_tape_changer_list();
Ok(list)
}
const SUBDIRS: SubdirMap = &[
(
"scan",
&Router::new()
.get(&API_METHOD_SCAN_CHANGERS)
),
];
pub const ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))
.subdirs(SUBDIRS);

816
src/api2/tape/drive.rs Normal file
View File

@ -0,0 +1,816 @@
use std::path::Path;
use std::sync::Arc;
use anyhow::{bail, Error};
use serde_json::Value;
use proxmox::{
sortable,
identity,
list_subdirs_api_method,
tools::Uuid,
sys::error::SysError,
api::{
api,
RpcEnvironment,
Router,
SubdirMap,
},
};
use crate::{
config::{
self,
drive::check_drive_exists,
},
api2::types::{
UPID_SCHEMA,
DRIVE_ID_SCHEMA,
MEDIA_LABEL_SCHEMA,
MEDIA_POOL_NAME_SCHEMA,
Authid,
LinuxTapeDrive,
ScsiTapeChanger,
TapeDeviceInfo,
MediaLabelInfoFlat,
LabelUuidMap,
},
server::WorkerTask,
tape::{
TAPE_STATUS_DIR,
TapeDriver,
MediaChange,
Inventory,
MediaStateDatabase,
MediaId,
mtx_load,
mtx_unload,
linux_tape_device_list,
open_drive,
media_changer,
update_changer_online_status,
file_formats::{
DriveLabel,
MediaSetLabel,
},
},
};
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
slot: {
description: "Source slot number",
minimum: 1,
},
},
},
)]
/// Load media via changer from slot
pub fn load_slot(
drive: String,
slot: u64,
_param: Value,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
let drive_config: LinuxTapeDrive = config.lookup("linux", &drive)?;
let changer: ScsiTapeChanger = match drive_config.changer {
Some(ref changer) => config.lookup("changer", changer)?,
None => bail!("drive '{}' has no associated changer", drive),
};
let drivenum = drive_config.changer_drive_id.unwrap_or(0);
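// mtx numbers the drives of a changer starting at 0; 'changer-drive-id'
// selects which drive of a multi-drive library receives the tape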
mtx_load(&changer.path, slot, drivenum)
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
"changer-id": {
schema: MEDIA_LABEL_SCHEMA,
},
},
},
)]
/// Load media with specified label
///
/// Issue a media load request to the associated changer device.
pub fn load_media(drive: String, changer_id: String) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
let (mut changer, _) = media_changer(&config, &drive, false)?;
changer.load_media(&changer_id)?;
Ok(())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
slot: {
description: "Target slot number. If omitted, defaults to the slot that the drive was loaded from.",
minimum: 1,
optional: true,
},
},
},
)]
/// Unload media via changer
pub fn unload(
drive: String,
slot: Option<u64>,
_param: Value,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
let mut drive_config: LinuxTapeDrive = config.lookup("linux", &drive)?;
let changer: ScsiTapeChanger = match drive_config.changer {
Some(ref changer) => config.lookup("changer", changer)?,
None => bail!("drive '{}' has no associated changer", drive),
};
let drivenum: u64 = 0;
if let Some(slot) = slot {
mtx_unload(&changer.path, slot, drivenum)
} else {
drive_config.unload_media()
}
}
#[api(
input: {
properties: {},
},
returns: {
description: "The list of autodetected tape drives.",
type: Array,
items: {
type: TapeDeviceInfo,
},
},
)]
/// Scan tape drives
pub fn scan_drives(_param: Value) -> Result<Vec<TapeDeviceInfo>, Error> {
let list = linux_tape_device_list();
Ok(list)
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
fast: {
description: "Use fast erase.",
type: bool,
optional: true,
default: true,
},
},
},
returns: {
schema: UPID_SCHEMA,
},
)]
/// Erase media
pub fn erase_media(
drive: String,
fast: Option<bool>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let (config, _digest) = config::drive::config()?;
check_drive_exists(&config, &drive)?; // early check before starting worker
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let upid_str = WorkerTask::new_thread(
"erase-media",
Some(drive.clone()),
auth_id,
true,
move |_worker| {
let mut drive = open_drive(&config, &drive)?;
drive.erase_media(fast.unwrap_or(true))?;
Ok(())
}
)?;
Ok(upid_str.into())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
},
},
returns: {
schema: UPID_SCHEMA,
},
)]
/// Rewind tape
pub fn rewind(
drive: String,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let (config, _digest) = config::drive::config()?;
check_drive_exists(&config, &drive)?; // early check before starting worker
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let upid_str = WorkerTask::new_thread(
"rewind-media",
Some(drive.clone()),
auth_id,
true,
move |_worker| {
let mut drive = open_drive(&config, &drive)?;
drive.rewind()?;
Ok(())
}
)?;
Ok(upid_str.into())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
},
},
)]
/// Eject/Unload drive media
pub fn eject_media(drive: String) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
let (mut changer, _) = media_changer(&config, &drive, false)?;
if !changer.eject_on_unload() {
let mut drive = open_drive(&config, &drive)?;
drive.eject_media()?;
}
changer.unload_media()?;
Ok(())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
"changer-id": {
schema: MEDIA_LABEL_SCHEMA,
},
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
optional: true,
},
},
},
returns: {
schema: UPID_SCHEMA,
},
)]
/// Label media
///
/// Write a new media label to the media in 'drive'. The media is
/// assigned to the specified 'pool', or else to the free media pool.
///
/// Note: The media needs to be empty (you may want to erase it first).
pub fn label_media(
drive: String,
pool: Option<String>,
changer_id: String,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
if let Some(ref pool) = pool {
let (pool_config, _digest) = config::media_pool::config()?;
if pool_config.sections.get(pool).is_none() {
bail!("no such pool ('{}')", pool);
}
}
let (config, _digest) = config::drive::config()?;
let upid_str = WorkerTask::new_thread(
"label-media",
Some(drive.clone()),
auth_id,
true,
move |worker| {
let mut drive = open_drive(&config, &drive)?;
drive.rewind()?;
match drive.read_next_file() {
Ok(Some(_file)) => bail!("media is not empty (erase first)"),
Ok(None) => { /* EOF mark at BOT, assume tape is empty */ },
Err(err) => {
if err.is_errno(nix::errno::Errno::ENOSPC) || err.is_errno(nix::errno::Errno::EIO) {
/* assume tape is empty */
} else {
bail!("media read error - {}", err);
}
}
}
let ctime = proxmox::tools::time::epoch_i64();
let label = DriveLabel {
changer_id: changer_id.to_string(),
uuid: Uuid::generate(),
ctime,
};
write_media_label(worker, &mut drive, label, pool)
}
)?;
Ok(upid_str.into())
}
fn write_media_label(
worker: Arc<WorkerTask>,
drive: &mut Box<dyn TapeDriver>,
label: DriveLabel,
pool: Option<String>,
) -> Result<(), Error> {
drive.label_tape(&label)?;
let mut media_set_label = None;
if let Some(ref pool) = pool {
// assign media to pool by writing special media set label
worker.log(format!("Label media '{}' for pool '{}'", label.changer_id, pool));
let set = MediaSetLabel::with_data(&pool, [0u8; 16].into(), 0, label.ctime);
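// the all-zero media set uuid reserves the media for this pool without
// assigning it to a real media set yet (see MediaLabelInfoFlat)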
drive.write_media_set_label(&set)?;
media_set_label = Some(set);
} else {
worker.log(format!("Label media '{}' (no pool assignment)", label.changer_id));
}
let media_id = MediaId { label, media_set_label };
let mut inventory = Inventory::load(Path::new(TAPE_STATUS_DIR))?;
inventory.store(media_id.clone())?;
drive.rewind()?;
match drive.read_label() {
Ok(Some(info)) => {
if info.label.uuid != media_id.label.uuid {
bail!("verify label failed - got wrong label uuid");
}
if let Some(ref pool) = pool {
match info.media_set_label {
Some((set, _)) => {
if set.uuid != [0u8; 16].into() {
bail!("verify media set label failed - got wrong set uuid");
}
if &set.pool != pool {
bail!("verify media set label failed - got wrong pool");
}
}
None => {
bail!("verify media set label failed (missing set label)");
}
}
}
},
Ok(None) => bail!("verify label failed (got empty media)"),
Err(err) => bail!("verify label failed - {}", err),
};
drive.rewind()?;
Ok(())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
},
},
returns: {
type: MediaLabelInfoFlat,
},
)]
/// Read media label
pub fn read_label(drive: String) -> Result<MediaLabelInfoFlat, Error> {
let (config, _digest) = config::drive::config()?;
let mut drive = open_drive(&config, &drive)?;
let info = drive.read_label()?;
let info = match info {
Some(info) => {
let mut flat = MediaLabelInfoFlat {
uuid: info.label.uuid.to_string(),
changer_id: info.label.changer_id.clone(),
ctime: info.label.ctime,
media_set_ctime: None,
media_set_uuid: None,
pool: None,
seq_nr: None,
};
if let Some((set, _)) = info.media_set_label {
flat.pool = Some(set.pool.clone());
flat.seq_nr = Some(set.seq_nr);
flat.media_set_uuid = Some(set.uuid.to_string());
flat.media_set_ctime = Some(set.ctime);
}
flat
}
None => {
bail!("Media is empty (no label).");
}
};
Ok(info)
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
},
},
returns: {
description: "The list of media labels with associated media Uuid (if any).",
type: Array,
items: {
type: LabelUuidMap,
},
},
)]
/// List known media labels (Changer Inventory)
///
/// Note: Only useful for drives with an associated changer device.
///
/// This method queries the changer to get a list of media labels.
///
/// Note: This updates the media online status.
pub fn inventory(
drive: String,
) -> Result<Vec<LabelUuidMap>, Error> {
let (config, _digest) = config::drive::config()?;
let (changer, changer_name) = media_changer(&config, &drive, false)?;
let changer_id_list = changer.list_media_changer_ids()?;
let state_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(state_path)?;
let mut state_db = MediaStateDatabase::load(state_path)?;
update_changer_online_status(&config, &mut inventory, &mut state_db, &changer_name, &changer_id_list)?;
let mut list = Vec::new();
for changer_id in changer_id_list.iter() {
if changer_id.starts_with("CLN") {
// skip cleaning unit
continue;
}
let changer_id = changer_id.to_string();
if let Some(media_id) = inventory.find_media_by_changer_id(&changer_id) {
list.push(LabelUuidMap { changer_id, uuid: Some(media_id.label.uuid.to_string()) });
} else {
list.push(LabelUuidMap { changer_id, uuid: None });
}
}
Ok(list)
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
"read-all-labels": {
description: "Load all tapes and try read labels (even if already inventoried)",
type: bool,
optional: true,
},
},
},
returns: {
schema: UPID_SCHEMA,
},
)]
/// Update inventory
///
/// Note: Only useful for drives with an associated changer device.
///
/// This method queries the changer to get a list of media labels. It
/// then loads any unknown media into the drive, reads the label, and
/// stores the result in the media database.
///
/// Note: This updates the media online status.
pub fn update_inventory(
drive: String,
read_all_labels: Option<bool>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let (config, _digest) = config::drive::config()?;
check_drive_exists(&config, &drive)?; // early check before starting worker
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let upid_str = WorkerTask::new_thread(
"inventory-update",
Some(drive.clone()),
auth_id,
true,
move |worker| {
let (mut changer, changer_name) = media_changer(&config, &drive, false)?;
let changer_id_list = changer.list_media_changer_ids()?;
if changer_id_list.is_empty() {
worker.log(format!("changer device does not list any media labels"));
}
let state_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(state_path)?;
let mut state_db = MediaStateDatabase::load(state_path)?;
update_changer_online_status(&config, &mut inventory, &mut state_db, &changer_name, &changer_id_list)?;
for changer_id in changer_id_list.iter() {
if changer_id.starts_with("CLN") {
worker.log(format!("skip cleaning unit '{}'", changer_id));
continue;
}
let changer_id = changer_id.to_string();
if !read_all_labels.unwrap_or(false) {
if let Some(_) = inventory.find_media_by_changer_id(&changer_id) {
worker.log(format!("media '{}' already inventoried", changer_id));
continue;
}
}
if let Err(err) = changer.load_media(&changer_id) {
worker.warn(format!("unable to load media '{}' - {}", changer_id, err));
continue;
}
let mut drive = open_drive(&config, &drive)?;
match drive.read_label() {
Err(err) => {
worker.warn(format!("unable to read label form media '{}' - {}", changer_id, err));
}
Ok(None) => {
worker.log(format!("media '{}' is empty", changer_id));
}
Ok(Some(info)) => {
if changer_id != info.label.changer_id {
worker.warn(format!("label changer ID missmatch ({} != {})", changer_id, info.label.changer_id));
continue;
}
worker.log(format!("inventorize media '{}' with uuid '{}'", changer_id, info.label.uuid));
inventory.store(info.into())?;
}
}
}
Ok(())
}
)?;
Ok(upid_str.into())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
optional: true,
},
},
},
returns: {
schema: UPID_SCHEMA,
},
)]
/// Label media with barcodes from changer device
pub fn barcode_label_media(
drive: String,
pool: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
if let Some(ref pool) = pool {
let (pool_config, _digest) = config::media_pool::config()?;
if pool_config.sections.get(pool).is_none() {
bail!("no such pool ('{}')", pool);
}
}
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let upid_str = WorkerTask::new_thread(
"barcode-label-media",
Some(drive.clone()),
auth_id,
true,
move |worker| {
barcode_label_media_worker(worker, drive, pool)
}
)?;
Ok(upid_str.into())
}
fn barcode_label_media_worker(
worker: Arc<WorkerTask>,
drive: String,
pool: Option<String>,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
let (mut changer, changer_name) = media_changer(&config, &drive, false)?;
let changer_id_list = changer.list_media_changer_ids()?;
let state_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(state_path)?;
let mut state_db = MediaStateDatabase::load(state_path)?;
update_changer_online_status(&config, &mut inventory, &mut state_db, &changer_name, &changer_id_list)?;
if changer_id_list.is_empty() {
bail!("changer device does not list any media labels");
}
for changer_id in changer_id_list {
if changer_id.starts_with("CLN") { continue; }
inventory.reload()?;
if inventory.find_media_by_changer_id(&changer_id).is_some() {
worker.log(format!("media '{}' already inventoried (already labeled)", changer_id));
continue;
}
worker.log(format!("checking/loading media '{}'", changer_id));
if let Err(err) = changer.load_media(&changer_id) {
worker.warn(format!("unable to load media '{}' - {}", changer_id, err));
continue;
}
let mut drive = open_drive(&config, &drive)?;
drive.rewind()?;
match drive.read_next_file() {
Ok(Some(_file)) => {
worker.log(format!("media '{}' is not empty (erase first)", changer_id));
continue;
}
Ok(None) => { /* EOF mark at BOT, assume tape is empty */ },
Err(err) => {
if err.is_errno(nix::errno::Errno::ENOSPC) || err.is_errno(nix::errno::Errno::EIO) {
/* assume tape is empty */
} else {
worker.warn(format!("media '{}' read error (maybe not empty - erase first)", changer_id));
continue;
}
}
}
let ctime = proxmox::tools::time::epoch_i64();
let label = DriveLabel {
changer_id: changer_id.to_string(),
uuid: Uuid::generate(),
ctime,
};
write_media_label(worker.clone(), &mut drive, label, pool.clone())?
}
Ok(())
}
#[sortable]
pub const SUBDIRS: SubdirMap = &sorted!([
(
"barcode-label-media",
&Router::new()
.put(&API_METHOD_BARCODE_LABEL_MEDIA)
),
(
"eject-media",
&Router::new()
.put(&API_METHOD_EJECT_MEDIA)
),
(
"erase-media",
&Router::new()
.put(&API_METHOD_ERASE_MEDIA)
),
(
"inventory",
&Router::new()
.get(&API_METHOD_INVENTORY)
.put(&API_METHOD_UPDATE_INVENTORY)
),
(
"label-media",
&Router::new()
.put(&API_METHOD_LABEL_MEDIA)
),
(
"load-slot",
&Router::new()
.put(&API_METHOD_LOAD_SLOT)
),
(
"read-label",
&Router::new()
.get(&API_METHOD_READ_LABEL)
),
(
"rewind",
&Router::new()
.put(&API_METHOD_REWIND)
),
(
"scan",
&Router::new()
.get(&API_METHOD_SCAN_DRIVES)
),
(
"unload",
&Router::new()
.put(&API_METHOD_UNLOAD)
),
]);
pub const ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))
.subdirs(SUBDIRS);

15
src/api2/tape/mod.rs Normal file
View File

@ -0,0 +1,15 @@
use proxmox::api::router::SubdirMap;
use proxmox::api::Router;
use proxmox::list_subdirs_api_method;
pub mod drive;
pub mod changer;
pub const SUBDIRS: SubdirMap = &[
("changer", &changer::ROUTER),
("drive", &drive::ROUTER),
];
pub const ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))
.subdirs(SUBDIRS);
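// Note: per the commit log this router is exposed at api2/tape/, so the
// methods above end up under .../tape/drive/... and .../tape/changer/...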

View File

@ -5,7 +5,7 @@ use proxmox::api::{api, schema::*};
use proxmox::const_regex;
use proxmox::{IPRE, IPRE_BRACKET, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};
use crate::backup::{CryptMode, BACKUP_ID_REGEX};
use crate::backup::{CryptMode, Fingerprint, BACKUP_ID_REGEX};
use crate::server::UPID;
#[macro_use]
@ -20,6 +20,9 @@ pub use userid::Userid;
pub use userid::Authid;
pub use userid::{PROXMOX_TOKEN_ID_SCHEMA, PROXMOX_TOKEN_NAME_SCHEMA, PROXMOX_GROUP_ID_SCHEMA};
mod tape;
pub use tape::*;
// File names: may not contain slashes, may not start with "."
pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {
if name.starts_with('.') {
@ -484,6 +487,10 @@ pub struct SnapshotVerifyState {
type: SnapshotVerifyState,
optional: true,
},
fingerprint: {
type: String,
optional: true,
},
files: {
items: {
schema: BACKUP_ARCHIVE_NAME_SCHEMA
@ -508,6 +515,9 @@ pub struct SnapshotListItem {
/// The result of the last run verify task
#[serde(skip_serializing_if="Option::is_none")]
pub verification: Option<SnapshotVerifyState>,
/// Fingerprint of encryption key
#[serde(skip_serializing_if="Option::is_none")]
pub fingerprint: Option<Fingerprint>,
/// List of contained archive files.
pub files: Vec<BackupContent>,
/// Overall snapshot size (sum of all archive sizes).
@ -692,7 +702,7 @@ pub struct TypeCounts {
},
},
)]
#[derive(Serialize, Deserialize)]
#[derive(Serialize, Deserialize, Default)]
/// Counts of groups/snapshots per BackupType.
pub struct Counts {
/// The counts for CT backups
@ -707,8 +717,14 @@ pub struct Counts {
#[api(
properties: {
"gc-status": { type: GarbageCollectionStatus, },
counts: { type: Counts, }
"gc-status": {
type: GarbageCollectionStatus,
optional: true,
},
counts: {
type: Counts,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
@ -722,9 +738,11 @@ pub struct DataStoreStatus {
/// Available space (bytes).
pub avail: u64,
/// Status of last GC
pub gc_status: GarbageCollectionStatus,
#[serde(skip_serializing_if="Option::is_none")]
pub gc_status: Option<GarbageCollectionStatus>,
/// Group/Snapshot counts
pub counts: Counts,
#[serde(skip_serializing_if="Option::is_none")]
pub counts: Option<Counts>,
}
#[api(
@ -1153,7 +1171,7 @@ pub enum RRDTimeFrameResolution {
}
#[api()]
#[derive(Debug, Serialize, Deserialize)]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "PascalCase")]
/// Describes a package for which an update is available.
pub struct APTUpdateInfo {
@ -1177,6 +1195,9 @@ pub struct APTUpdateInfo {
pub section: String,
/// URL under which the package's changelog can be retrieved
pub change_log_url: String,
/// Custom extra field for additional package information
#[serde(skip_serializing_if="Option::is_none")]
pub extra_info: Option<String>,
}
#[api()]

View File

@ -0,0 +1,39 @@
use ::serde::{Deserialize, Serialize};
use proxmox::api::api;
#[api()]
#[derive(Debug,Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Kind of device
pub enum DeviceKind {
/// Tape changer (Autoloader, Robot)
Changer,
/// Normal SCSI tape device
Tape,
}
#[api(
properties: {
kind: {
type: DeviceKind,
},
},
)]
#[derive(Debug,Serialize,Deserialize)]
/// Tape device information
pub struct TapeDeviceInfo {
pub kind: DeviceKind,
/// Path to the linux device node
pub path: String,
/// Serial number (autodetected)
pub serial: String,
/// Vendor (autodetected)
pub vendor: String,
/// Model (autodetected)
pub model: String,
/// Device major number
pub major: u32,
/// Device minor number
pub minor: u32,
}

View File

@ -0,0 +1,169 @@
//! Types for tape drive API
use serde::{Deserialize, Serialize};
use proxmox::api::{
api,
schema::{Schema, IntegerSchema, StringSchema},
};
use crate::api2::types::PROXMOX_SAFE_ID_FORMAT;
pub const DRIVE_ID_SCHEMA: Schema = StringSchema::new("Drive Identifier.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const CHANGER_ID_SCHEMA: Schema = StringSchema::new("Tape Changer Identifier.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const LINUX_DRIVE_PATH_SCHEMA: Schema = StringSchema::new(
"The path to a LINUX non-rewinding SCSI tape device (i.e. '/dev/nst0')")
.schema();
pub const SCSI_CHANGER_PATH_SCHEMA: Schema = StringSchema::new(
"Path to Linux generic SCSI device (i.e. '/dev/sg4')")
.schema();
pub const MEDIA_LABEL_SCHEMA: Schema = StringSchema::new("Media Label/Barcode.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const CHANGER_DRIVE_ID_SCHEMA: Schema = IntegerSchema::new(
"Associated changer drive number (requires option changer)")
.minimum(0)
.maximum(8)
.default(0)
.schema();
#[api(
properties: {
name: {
schema: DRIVE_ID_SCHEMA,
}
}
)]
#[derive(Serialize,Deserialize)]
/// Simulate tape drives (only for test and debug)
#[serde(rename_all = "kebab-case")]
pub struct VirtualTapeDrive {
pub name: String,
/// Path to directory
pub path: String,
/// Virtual tape size
#[serde(skip_serializing_if="Option::is_none")]
pub max_size: Option<usize>,
}
#[api(
properties: {
name: {
schema: DRIVE_ID_SCHEMA,
},
path: {
schema: LINUX_DRIVE_PATH_SCHEMA,
},
changer: {
schema: CHANGER_ID_SCHEMA,
optional: true,
},
"changer-drive-id": {
schema: CHANGER_DRIVE_ID_SCHEMA,
optional: true,
},
}
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Linux SCSI tape drive
pub struct LinuxTapeDrive {
pub name: String,
pub path: String,
#[serde(skip_serializing_if="Option::is_none")]
pub changer: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub changer_drive_id: Option<u64>,
}
#[api(
properties: {
name: {
schema: CHANGER_ID_SCHEMA,
},
path: {
schema: SCSI_CHANGER_PATH_SCHEMA,
},
}
)]
#[derive(Serialize,Deserialize)]
/// SCSI tape changer
pub struct ScsiTapeChanger {
pub name: String,
pub path: String,
}
#[api()]
#[derive(Serialize,Deserialize)]
/// Drive list entry
pub struct DriveListEntry {
/// Drive name
pub name: String,
/// Path to the linux device node
pub path: String,
/// Associated changer device
#[serde(skip_serializing_if="Option::is_none")]
pub changer: Option<String>,
/// Vendor (autodetected)
#[serde(skip_serializing_if="Option::is_none")]
pub vendor: Option<String>,
/// Model (autodetected)
#[serde(skip_serializing_if="Option::is_none")]
pub model: Option<String>,
/// Serial number (autodetected)
#[serde(skip_serializing_if="Option::is_none")]
pub serial: Option<String>,
}
#[api()]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "lowercase")]
/// Mtx Entry Kind
pub enum MtxEntryKind {
/// Drive
Drive,
/// Slot
Slot,
}
#[api(
properties: {
"entry-kind": {
type: MtxEntryKind,
},
"changer-id": {
schema: MEDIA_LABEL_SCHEMA,
optional: true,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Mtx Status Entry
pub struct MtxStatusEntry {
pub entry_kind: MtxEntryKind,
/// The ID of the slot or drive
pub entry_id: u64,
/// The media label (volume tag) if the slot/drive is full
#[serde(skip_serializing_if="Option::is_none")]
pub changer_id: Option<String>,
/// The slot the drive was loaded from
#[serde(skip_serializing_if="Option::is_none")]
pub loaded_slot: Option<u64>,
}

View File

@ -0,0 +1,96 @@
use ::serde::{Deserialize, Serialize};
use proxmox::api::api;
use super::{
MediaStatus,
};
#[api()]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "lowercase")]
/// Media location
pub enum MediaLocationKind {
/// Ready for use (inside tape library)
Online,
/// Locally available, but needs to be mounted (insert into tape drive)
Offline,
/// Media is inside a Vault
Vault,
}
#[api(
properties: {
location: {
type: MediaLocationKind,
},
status: {
type: MediaStatus,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Media list entry
pub struct MediaListEntry {
/// Media changer ID
pub changer_id: String,
/// Media Uuid
pub uuid: String,
pub location: MediaLocationKind,
/// Media location hint (vault name, changer name)
pub location_hint: Option<String>,
pub status: MediaStatus,
/// Expired flag
pub expired: bool,
/// Media set name
#[serde(skip_serializing_if="Option::is_none")]
pub media_set_name: Option<String>,
/// Media set uuid
#[serde(skip_serializing_if="Option::is_none")]
pub media_set_uuid: Option<String>,
/// Media set seq_nr
#[serde(skip_serializing_if="Option::is_none")]
pub seq_nr: Option<u64>,
/// Media Pool
#[serde(skip_serializing_if="Option::is_none")]
pub pool: Option<String>,
}
#[api()]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Media label info
pub struct MediaLabelInfoFlat {
/// Unique ID
pub uuid: String,
/// Media Changer ID or Barcode
pub changer_id: String,
/// Creation time stamp
pub ctime: i64,
// All MediaSet properties are optional here
/// MediaSet Pool
#[serde(skip_serializing_if="Option::is_none")]
pub pool: Option<String>,
/// MediaSet Uuid. We use the all-zero Uuid to reserve an empty media for a specific pool
#[serde(skip_serializing_if="Option::is_none")]
pub media_set_uuid: Option<String>,
/// MediaSet media sequence number
#[serde(skip_serializing_if="Option::is_none")]
pub seq_nr: Option<u64>,
/// MediaSet Creation time stamp
#[serde(skip_serializing_if="Option::is_none")]
pub media_set_ctime: Option<i64>,
}
#[api()]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Label with optional Uuid
pub struct LabelUuidMap {
/// Changer ID (label)
pub changer_id: String,
/// Associated Uuid (if any)
pub uuid: Option<String>,
}

View File

@ -0,0 +1,154 @@
//! Types for tape media pool API
//!
//! Note: Both MediaSetPolicy and RetentionPolicy are complex enums,
//! so we cannot use them directly for the API. Instead, we represent
//! them as String.
use anyhow::Error;
use std::str::FromStr;
use serde::{Deserialize, Serialize};
use proxmox::api::{
api,
schema::{Schema, StringSchema, ApiStringFormat},
};
use crate::{
tools::systemd::time::{
CalendarEvent,
TimeSpan,
parse_time_span,
parse_calendar_event,
},
api2::types::{
DRIVE_ID_SCHEMA,
PROXMOX_SAFE_ID_FORMAT,
SINGLE_LINE_COMMENT_FORMAT,
},
};
pub const MEDIA_POOL_NAME_SCHEMA: Schema = StringSchema::new("Media pool name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(2)
.max_length(32)
.schema();
pub const MEDIA_SET_NAMING_TEMPLATE_SCHEMA: Schema = StringSchema::new(
"Media set naming template.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
pub const MEDIA_SET_ALLOCATION_POLICY_FORMAT: ApiStringFormat =
ApiStringFormat::VerifyFn(|s| { MediaSetPolicy::from_str(s)?; Ok(()) });
pub const MEDIA_SET_ALLOCATION_POLICY_SCHEMA: Schema = StringSchema::new(
"Media set allocation policy.")
.format(&MEDIA_SET_ALLOCATION_POLICY_FORMAT)
.schema();
/// Media set allocation policy
pub enum MediaSetPolicy {
/// Try to use the current media set
ContinueCurrent,
/// Each backup job creates a new media set
AlwaysCreate,
/// Create a new set when the specified CalendarEvent triggers
CreateAt(CalendarEvent),
}
impl std::str::FromStr for MediaSetPolicy {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
if s == "continue" {
return Ok(MediaSetPolicy::ContinueCurrent);
}
if s == "always" {
return Ok(MediaSetPolicy::AlwaysCreate);
}
let event = parse_calendar_event(s)?;
Ok(MediaSetPolicy::CreateAt(event))
}
}
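// A minimal parsing sketch; the calendar-event string is a hypothetical
// example, its exact syntax is defined by parse_calendar_event():
//
//   let p: MediaSetPolicy = "continue".parse()?;  // ContinueCurrent
//   let p: MediaSetPolicy = "always".parse()?;    // AlwaysCreate
//   let p: MediaSetPolicy = "sat 00:00".parse()?; // CreateAt(<calendar event>)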
pub const MEDIA_RETENTION_POLICY_FORMAT: ApiStringFormat =
ApiStringFormat::VerifyFn(|s| { RetentionPolicy::from_str(s)?; Ok(()) });
pub const MEDIA_RETENTION_POLICY_SCHEMA: Schema = StringSchema::new(
"Media retention policy.")
.format(&MEDIA_RETENTION_POLICY_FORMAT)
.schema();
/// Media retention Policy
pub enum RetentionPolicy {
/// Always overwrite media
OverwriteAlways,
/// Protect data for the timespan specified
ProtectFor(TimeSpan),
/// Never overwrite data
KeepForever,
}
impl std::str::FromStr for RetentionPolicy {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
if s == "overwrite" {
return Ok(RetentionPolicy::OverwriteAlways);
}
if s == "keep" {
return Ok(RetentionPolicy::KeepForever);
}
let time_span = parse_time_span(s)?;
Ok(RetentionPolicy::ProtectFor(time_span))
}
}
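// Analogous usage sketch; the timespan string is a hypothetical example,
// its exact syntax is defined by parse_time_span():
//
//   let p: RetentionPolicy = "overwrite".parse()?; // OverwriteAlways
//   let p: RetentionPolicy = "keep".parse()?;      // KeepForever
//   let p: RetentionPolicy = "365 days".parse()?;  // ProtectFor(<timespan>)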
#[api(
properties: {
name: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
drive: {
schema: DRIVE_ID_SCHEMA,
},
allocation: {
schema: MEDIA_SET_ALLOCATION_POLICY_SCHEMA,
optional: true,
},
retention: {
schema: MEDIA_RETENTION_POLICY_SCHEMA,
optional: true,
},
template: {
schema: MEDIA_SET_NAMING_TEMPLATE_SCHEMA,
optional: true,
},
}
)]
#[derive(Serialize,Deserialize)]
/// Media pool configuration
pub struct MediaPoolConfig {
/// The pool name
pub name: String,
/// The associated drive
pub drive: String,
/// Media Set allocation policy
#[serde(skip_serializing_if="Option::is_none")]
pub allocation: Option<String>,
/// Media retention policy
#[serde(skip_serializing_if="Option::is_none")]
pub retention: Option<String>,
/// Media set naming template (default "%id%")
///
/// The template is UTF8 text, and can include strftime time
/// format specifications.
#[serde(skip_serializing_if="Option::is_none")]
pub template: Option<String>,
}

View File

@ -0,0 +1,21 @@
use ::serde::{Deserialize, Serialize};
use proxmox::api::api;
#[api()]
#[derive(Debug, PartialEq, Copy, Clone, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Media Status
pub enum MediaStatus {
/// Media is ready to be written
Writable,
/// Media is full (contains data)
Full,
/// Media is marked as unknown, needs rescan
Unknown,
/// Media is marked as damaged
Damaged,
/// Media is marked as retired
Retired,
}

View File

@ -0,0 +1,16 @@
//! Types for tape backup API
mod device;
pub use device::*;
mod drive;
pub use drive::*;
mod media_pool;
pub use media_pool::*;
mod media_status;
pub use media_status::*;
mod media;
pub use media::*;

View File

@ -419,12 +419,10 @@ impl<'a> TryFrom<&'a str> for &'a TokennameRef {
}
/// A complete user id consisting of a user name and a realm
#[derive(Clone, Debug, Hash)]
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct Userid {
data: String,
name_len: usize,
//name: Username,
//realm: Realm,
}
impl Userid {
@ -460,14 +458,6 @@ lazy_static! {
pub static ref ROOT_USERID: Userid = Userid::new("root@pam".to_string(), 4);
}
impl Eq for Userid {}
impl PartialEq for Userid {
fn eq(&self, rhs: &Self) -> bool {
self.data == rhs.data && self.name_len == rhs.name_len
}
}
impl From<Authid> for Userid {
fn from(authid: Authid) -> Self {
authid.user

View File

@ -247,6 +247,9 @@ pub use prune::*;
mod datastore;
pub use datastore::*;
mod store_progress;
pub use store_progress::*;
mod verify;
pub use verify::*;

View File

@ -145,20 +145,6 @@ impl BackupGroup {
Ok(last)
}
pub fn list_groups(base_path: &Path) -> Result<Vec<BackupGroup>, Error> {
let mut list = Vec::new();
tools::scandir(libc::AT_FDCWD, base_path, &BACKUP_TYPE_REGEX, |l0_fd, backup_type, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
tools::scandir(l0_fd, backup_type, &BACKUP_ID_REGEX, |_l1_fd, backup_id, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
list.push(BackupGroup::new(backup_type, backup_id));
Ok(())
})
})?;
Ok(list)
}
}
impl std::fmt::Display for BackupGroup {
@ -327,26 +313,20 @@ impl BackupInfo {
Ok(files)
}
pub fn list_backups(base_path: &Path) -> Result<Vec<BackupInfo>, Error> {
pub fn list_backup_groups(base_path: &Path) -> Result<Vec<BackupGroup>, Error> {
let mut list = Vec::new();
tools::scandir(libc::AT_FDCWD, base_path, &BACKUP_TYPE_REGEX, |l0_fd, backup_type, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
tools::scandir(l0_fd, backup_type, &BACKUP_ID_REGEX, |l1_fd, backup_id, file_type| {
tools::scandir(l0_fd, backup_type, &BACKUP_ID_REGEX, |_, backup_id, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
tools::scandir(l1_fd, backup_id, &BACKUP_DATE_REGEX, |l2_fd, backup_time_string, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
let backup_dir = BackupDir::with_rfc3339(backup_type, backup_id, backup_time_string)?;
list.push(BackupGroup::new(backup_type, backup_id));
let files = list_backup_files(l2_fd, backup_time_string)?;
list.push(BackupInfo { backup_dir, files });
Ok(())
})
Ok(())
})
})?;
Ok(list)
}

View File

@ -154,9 +154,11 @@ impl ChunkStore {
}
pub fn cond_touch_chunk(&self, digest: &[u8; 32], fail_if_not_exist: bool) -> Result<bool, Error> {
let (chunk_path, _digest_str) = self.chunk_path(digest);
self.cond_touch_path(&chunk_path, fail_if_not_exist)
}
pub fn cond_touch_path(&self, path: &Path, fail_if_not_exist: bool) -> Result<bool, Error> {
const UTIME_NOW: i64 = (1 << 30) - 1;
const UTIME_OMIT: i64 = (1 << 30) - 2;
@ -167,7 +169,7 @@ impl ChunkStore {
use nix::NixPath;
let res = chunk_path.with_nix_path(|cstr| unsafe {
let res = path.with_nix_path(|cstr| unsafe {
let tmp = libc::utimensat(-1, cstr.as_ptr(), &times[0], libc::AT_SYMLINK_NOFOLLOW);
nix::errno::Errno::result(tmp)
})?;
@ -177,7 +179,7 @@ impl ChunkStore {
return Ok(false);
}
bail!("update atime failed for chunk {:?} - {}", chunk_path, err);
bail!("update atime failed for chunk/file {:?} - {}", path, err);
}
Ok(true)
@ -299,7 +301,7 @@ impl ChunkStore {
last_percentage = percentage;
crate::task_log!(
worker,
"percentage done: phase2 {}% (processed {} chunks)",
"processed {}% ({} chunks)",
percentage,
chunk_count,
);
@ -328,49 +330,13 @@ impl ChunkStore {
let lock = self.mutex.lock();
if let Ok(stat) = fstatat(dirfd, filename, nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW) {
if bad {
// filename validity checked in iterator
let orig_filename = std::ffi::CString::new(&filename.to_bytes()[..64])?;
match fstatat(
dirfd,
orig_filename.as_c_str(),
nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW)
{
Ok(_) => {
match unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir) {
Err(err) =>
crate::task_warn!(
worker,
"unlinking corrupt chunk {:?} failed on store '{}' - {}",
filename,
self.name,
err,
),
Ok(_) => {
status.removed_bad += 1;
status.removed_bytes += stat.st_size as u64;
}
}
},
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => {
// chunk hasn't been rewritten yet, keep .bad file
status.still_bad += 1;
},
Err(err) => {
// some other error, warn user and keep .bad file around too
status.still_bad += 1;
crate::task_warn!(
worker,
"error during stat on '{:?}' - {}",
orig_filename,
err,
);
}
}
} else if stat.st_atime < min_atime {
if stat.st_atime < min_atime {
//let age = now - stat.st_atime;
//println!("UNLINK {} {:?}", age/(3600*24), filename);
if let Err(err) = unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir) {
if bad {
status.still_bad += 1;
}
bail!(
"unlinking chunk {:?} failed on store '{}' - {}",
filename,
@ -378,13 +344,23 @@ impl ChunkStore {
err,
);
}
status.removed_chunks += 1;
if bad {
status.removed_bad += 1;
} else {
status.removed_chunks += 1;
}
status.removed_bytes += stat.st_size as u64;
} else if stat.st_atime < oldest_writer {
status.pending_chunks += 1;
if bad {
status.still_bad += 1;
} else {
status.pending_chunks += 1;
}
status.pending_bytes += stat.st_size as u64;
} else {
status.disk_chunks += 1;
if !bad {
status.disk_chunks += 1;
}
status.disk_bytes += stat.st_size as u64;
}
}

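The rewrite folds `.bad`-file handling into the normal atime sweep: the atime bucket decides removed vs. pending vs. kept, and the `bad` flag only selects which counter is bumped. A compact standalone restatement of that accounting (successful-unlink path only, byte counters omitted):

```rust
#[derive(Default, Debug)]
struct GcStatus {
    removed_chunks: u64,
    removed_bad: u64,
    pending_chunks: u64,
    still_bad: u64,
    disk_chunks: u64,
}

// atime < min_atime     -> removed (removed_bad for .bad files)
// atime < oldest_writer -> pending (still_bad for .bad files)
// otherwise             -> on disk (.bad files are not counted here)
fn account(s: &mut GcStatus, bad: bool, atime: i64, min_atime: i64, oldest_writer: i64) {
    if atime < min_atime {
        if bad { s.removed_bad += 1 } else { s.removed_chunks += 1 }
    } else if atime < oldest_writer {
        if bad { s.still_bad += 1 } else { s.pending_chunks += 1 }
    } else if !bad {
        s.disk_chunks += 1
    }
}

fn main() {
    let mut s = GcStatus::default();
    account(&mut s, false, 10, 100, 200); // old regular chunk -> removed
    account(&mut s, true, 10, 100, 200);  // old bad chunk -> removed_bad
    account(&mut s, true, 150, 100, 200); // recent bad chunk -> still_bad
    println!("{:?}", s);
}
```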
View File

@ -7,6 +7,8 @@
//! encryption](https://en.wikipedia.org/wiki/Authenticated_encryption)
//! for a short introduction.
use std::fmt;
use std::fmt::Display;
use std::io::Write;
use anyhow::{bail, Error};
@ -15,8 +17,15 @@ use openssl::pkcs5::pbkdf2_hmac;
use openssl::symm::{decrypt_aead, Cipher, Crypter, Mode};
use serde::{Deserialize, Serialize};
use crate::tools::format::{as_fingerprint, bytes_as_fingerprint};
use proxmox::api::api;
// openssl::sha::sha256(b"Proxmox Backup Encryption Key Fingerprint")
const FINGERPRINT_INPUT: [u8; 32] = [ 110, 208, 239, 119, 71, 31, 255, 77,
85, 199, 168, 254, 74, 157, 182, 33,
97, 64, 127, 19, 76, 114, 93, 223,
48, 153, 45, 37, 236, 69, 237, 38, ];
#[api(default: "encrypt")]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
@ -30,6 +39,30 @@ pub enum CryptMode {
SignOnly,
}
#[derive(Debug, Eq, PartialEq, Deserialize, Serialize)]
#[serde(transparent)]
/// 32-byte fingerprint, usually calculated with SHA256.
pub struct Fingerprint {
#[serde(with = "bytes_as_fingerprint")]
bytes: [u8; 32],
}
impl Fingerprint {
pub fn new(bytes: [u8; 32]) -> Self {
Self { bytes }
}
pub fn bytes(&self) -> &[u8; 32] {
&self.bytes
}
}
/// Display as short key ID
impl Display for Fingerprint {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", as_fingerprint(&self.bytes[0..8]))
}
}
/// Encryption Configuration with secret key
///
/// This structure stores the secret key and provides helpers for
@ -101,6 +134,10 @@ impl CryptConfig {
tag
}
pub fn fingerprint(&self) -> Fingerprint {
Fingerprint::new(self.compute_digest(&FINGERPRINT_INPUT))
}
pub fn data_crypter(&self, iv: &[u8; 16], mode: Mode) -> Result<Crypter, Error> {
let mut crypter = openssl::symm::Crypter::new(self.cipher, mode, &self.enc_key, Some(iv))?;
crypter.aad_update(b"")?; //??
@ -219,7 +256,13 @@ impl CryptConfig {
) -> Result<Vec<u8>, Error> {
let modified = proxmox::tools::time::epoch_i64();
let key_config = super::KeyConfig { kdf: None, created, modified, data: self.enc_key.to_vec() };
let key_config = super::KeyConfig {
kdf: None,
created,
modified,
data: self.enc_key.to_vec(),
fingerprint: Some(self.fingerprint()),
};
let data = serde_json::to_string(&key_config)?.as_bytes().to_vec();
let mut buffer = vec![0u8; rsa.size() as usize];

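A `Fingerprint` therefore displays as a short key ID built from its first 8 bytes. A standalone sketch, assuming `as_fingerprint` renders colon-separated lowercase hex pairs (the byte values below are just example data):

```rust
// Local stand-in for tools::format::as_fingerprint (assumed behavior).
fn as_fingerprint(bytes: &[u8]) -> String {
    bytes
        .iter()
        .map(|b| format!("{:02x}", b))
        .collect::<Vec<_>>()
        .join(":")
}

fn main() {
    let fp: [u8; 32] = [
        110, 208, 239, 119, 71, 31, 255, 77, 85, 199, 168, 254, 74, 157, 182, 33,
        97, 64, 127, 19, 76, 114, 93, 223, 48, 153, 45, 37, 236, 69, 237, 38,
    ];
    // Display uses only the first 8 bytes as a short key ID.
    println!("{}", as_fingerprint(&fp[0..8])); // -> 6e:d0:ef:77:47:1f:ff:4d
}
```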
View File

@ -3,6 +3,7 @@ use std::io::{self, Write};
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex};
use std::convert::TryFrom;
use std::str::FromStr;
use std::time::Duration;
use std::fs::File;
@ -243,7 +244,7 @@ impl DataStore {
let (_guard, _manifest_guard);
if !force {
_guard = lock_dir_noblock(&full_path, "snapshot", "possibly running or in use")?;
_manifest_guard = self.lock_manifest(backup_dir);
_manifest_guard = self.lock_manifest(backup_dir)?;
}
log::info!("removing backup snapshot {:?}", full_path);
@ -256,6 +257,12 @@ impl DataStore {
)
})?;
// the manifest does not exist anymore, so we do not need to keep the lock
if let Ok(path) = self.manifest_lock_path(backup_dir) {
// ignore errors
let _ = std::fs::remove_file(path);
}
Ok(())
}
@ -379,7 +386,7 @@ impl DataStore {
use walkdir::WalkDir;
let walker = WalkDir::new(&base).same_file_system(true).into_iter();
let walker = WalkDir::new(&base).into_iter();
// make sure we skip .chunks (and other hidden files to keep it simple)
fn is_hidden(entry: &walkdir::DirEntry) -> bool {
@ -446,6 +453,17 @@ impl DataStore {
file_name,
err,
);
// touch any corresponding .bad files so they are kept around: if the chunk gets
// rewritten correctly they will be removed automatically, and likewise if no index
// file references the chunk anymore (this loop is not reached in that case)
for i in 0..=9 {
let bad_ext = format!("{}.bad", i);
let mut bad_path = PathBuf::new();
bad_path.push(self.chunk_path(digest).0);
bad_path.set_extension(bad_ext);
self.chunk_store.cond_touch_path(&bad_path, false)?;
}
}
}
Ok(())
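The `0..=9` loop above covers the fixed range of `.bad` suffixes a chunk can accumulate. A std-only sketch of the path construction, with a hypothetical chunk path:

```rust
use std::path::PathBuf;

// Hypothetical digest: for chunk ".../0123abcd" this touches
// ".../0123abcd.0.bad" through ".../0123abcd.9.bad".
fn main() {
    let chunk_path = PathBuf::from("/datastore/.chunks/0123/0123abcd");
    for i in 0..=9 {
        let mut bad_path = chunk_path.clone();
        bad_path.set_extension(format!("{}.bad", i));
        println!("{}", bad_path.display());
    }
}
```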
@ -463,30 +481,40 @@ impl DataStore {
let mut done = 0;
let mut last_percentage: usize = 0;
let mut strange_paths_count: u64 = 0;
for img in image_list {
worker.check_abort()?;
tools::fail_on_shutdown()?;
let path = self.chunk_store.relative_path(&img);
match std::fs::File::open(&path) {
if let Some(backup_dir_path) = img.parent() {
let backup_dir_path = backup_dir_path.strip_prefix(self.base_path())?;
if let Some(backup_dir_str) = backup_dir_path.to_str() {
if BackupDir::from_str(backup_dir_str).is_err() {
strange_paths_count += 1;
}
}
}
match std::fs::File::open(&img) {
Ok(file) => {
if let Ok(archive_type) = archive_type(&img) {
if archive_type == ArchiveType::FixedIndex {
let index = FixedIndexReader::new(file).map_err(|e| {
format_err!("can't read index '{}' - {}", path.to_string_lossy(), e)
format_err!("can't read index '{}' - {}", img.to_string_lossy(), e)
})?;
self.index_mark_used_chunks(index, &img, status, worker)?;
} else if archive_type == ArchiveType::DynamicIndex {
let index = DynamicIndexReader::new(file).map_err(|e| {
format_err!("can't read index '{}' - {}", path.to_string_lossy(), e)
format_err!("can't read index '{}' - {}", img.to_string_lossy(), e)
})?;
self.index_mark_used_chunks(index, &img, status, worker)?;
}
}
}
Err(err) if err.kind() == io::ErrorKind::NotFound => (), // ignore vanished files
Err(err) => bail!("can't open index {} - {}", path.to_string_lossy(), err),
Err(err) => bail!("can't open index {} - {}", img.to_string_lossy(), err),
}
done += 1;
@ -494,7 +522,7 @@ impl DataStore {
if percentage > last_percentage {
crate::task_log!(
worker,
"percentage done: phase1 {}% ({} of {} index files)",
"marked {}% ({} of {} index files)",
percentage,
done,
image_count,
@ -503,6 +531,15 @@ impl DataStore {
}
}
if strange_paths_count > 0 {
crate::task_log!(
worker,
"found (and marked) {} index files outside of expected directory scheme",
strange_paths_count,
);
}
Ok(())
}
@ -667,13 +704,32 @@ impl DataStore {
))
}
/// Returns the filename to lock a manifest
///
/// Also creates the basedir. The lockfile is located in
/// '/run/proxmox-backup/locks/{datastore}/{type}/{id}/{timestamp}.index.json.lck'
fn manifest_lock_path(
&self,
backup_dir: &BackupDir,
) -> Result<String, Error> {
let mut path = format!(
"/run/proxmox-backup/locks/{}/{}/{}",
self.name(),
backup_dir.group().backup_type(),
backup_dir.group().backup_id(),
);
std::fs::create_dir_all(&path)?;
use std::fmt::Write;
write!(path, "/{}{}", backup_dir.backup_time_string(), &MANIFEST_LOCK_NAME)?;
Ok(path)
}
fn lock_manifest(
&self,
backup_dir: &BackupDir,
) -> Result<File, Error> {
let mut path = self.base_path();
path.push(backup_dir.relative_path());
path.push(&MANIFEST_LOCK_NAME);
let path = self.manifest_lock_path(backup_dir)?;
// update_manifest should never take a long time, so if someone else has
// the lock we can simply block a bit and should get it soon
@ -728,3 +784,4 @@ impl DataStore {
self.verify_new
}
}

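A hedged sketch of the lock-path scheme documented on `manifest_lock_path`, with hypothetical example values standing in for the datastore name and `BackupDir` components:

```rust
fn main() {
    let (datastore, backup_type, backup_id) = ("store1", "vm", "100");
    let timestamp = "2020-12-11T13:02:23Z"; // backup_time_string()
    let path = format!(
        "/run/proxmox-backup/locks/{}/{}/{}/{}{}",
        datastore, backup_type, backup_id, timestamp, ".index.json.lck" // MANIFEST_LOCK_NAME
    );
    assert_eq!(
        path,
        "/run/proxmox-backup/locks/store1/vm/100/2020-12-11T13:02:23Z.index.json.lck"
    );
}
```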
View File

@ -219,7 +219,6 @@ impl IndexFile for DynamicIndexReader {
(csum, chunk_end)
}
#[allow(clippy::cast_ptr_alignment)]
fn chunk_info(&self, pos: usize) -> Option<ChunkReadInfo> {
if pos >= self.index.len() {
return None;

View File

@ -3,7 +3,7 @@ use std::io::{Seek, SeekFrom};
use super::chunk_stat::*;
use super::chunk_store::*;
use super::{IndexFile, ChunkReadInfo};
use super::{ChunkReadInfo, IndexFile};
use crate::tools;
use std::fs::File;
@ -69,8 +69,7 @@ impl FixedIndexReader {
let header_size = std::mem::size_of::<FixedIndexHeader>();
let rawfd = file.as_raw_fd();
let stat = match nix::sys::stat::fstat(rawfd) {
let stat = match nix::sys::stat::fstat(file.as_raw_fd()) {
Ok(stat) => stat,
Err(err) => bail!("fstat failed - {}", err),
};
@ -94,7 +93,6 @@ impl FixedIndexReader {
let index_length = ((size + chunk_size - 1) / chunk_size) as usize;
let index_size = index_length * 32;
let expected_index_size = (stat.st_size as usize) - header_size;
if index_size != expected_index_size {
bail!(
@ -150,7 +148,7 @@ impl FixedIndexReader {
println!("ChunkSize: {}", self.chunk_size);
let mut ctime_str = self.ctime.to_string();
if let Ok(s) = proxmox::tools::time::strftime_local("%c",self.ctime) {
if let Ok(s) = proxmox::tools::time::strftime_local("%c", self.ctime) {
ctime_str = s;
}
@ -215,7 +213,7 @@ impl IndexFile for FixedIndexReader {
Some((
(offset / self.chunk_size as u64) as usize,
offset & (self.chunk_size - 1) as u64 // fast modulo, valid for 2^x chunk_size
offset & (self.chunk_size - 1) as u64, // fast modulo, valid for 2^x chunk_size
))
}
}

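The `& (chunk_size - 1)` trick above relies on the chunk size being a power of two. A self-contained check of the identity:

```rust
// For any power-of-two n, x & (n - 1) == x % n.
fn main() {
    let chunk_size: u64 = 4096;
    for offset in [0u64, 1, 4095, 4096, 123_456_789].iter().copied() {
        assert_eq!(offset & (chunk_size - 1), offset % chunk_size);
    }
    println!("fast modulo holds for power-of-two chunk sizes");
}
```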
View File

@ -2,9 +2,34 @@ use anyhow::{bail, format_err, Context, Error};
use serde::{Deserialize, Serialize};
use crate::backup::{CryptConfig, Fingerprint};
use proxmox::api::api;
use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
use proxmox::try_block;
#[api(default: "scrypt")]
#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
#[serde(rename_all = "lowercase")]
/// Key derivation function for password protected encryption keys.
pub enum Kdf {
/// Do not encrypt the key.
None,
/// Encrypt the key with a password using SCrypt.
Scrypt,
/// Encrypt the key with a password using PBKDF2.
PBKDF2,
}
impl Default for Kdf {
#[inline]
fn default() -> Self {
Kdf::Scrypt
}
}
#[derive(Deserialize, Serialize, Debug)]
pub enum KeyDerivationConfig {
Scrypt {
@ -66,6 +91,9 @@ pub struct KeyConfig {
pub modified: i64,
#[serde(with = "proxmox::tools::serde::bytes_as_base64")]
pub data: Vec<u8>,
#[serde(skip_serializing_if = "Option::is_none")]
#[serde(default)]
pub fingerprint: Option<Fingerprint>,
}
pub fn store_key_config(
@ -103,15 +131,25 @@ pub fn store_key_config(
pub fn encrypt_key_with_passphrase(
raw_key: &[u8],
passphrase: &[u8],
kdf: Kdf,
) -> Result<KeyConfig, Error> {
let salt = proxmox::sys::linux::random_data(32)?;
let kdf = KeyDerivationConfig::Scrypt {
n: 65536,
r: 8,
p: 1,
salt,
let kdf = match kdf {
Kdf::Scrypt => KeyDerivationConfig::Scrypt {
n: 65536,
r: 8,
p: 1,
salt,
},
Kdf::PBKDF2 => KeyDerivationConfig::PBKDF2 {
iter: 65535,
salt,
},
Kdf::None => {
bail!("No key derivation function specified");
}
};
let derived_key = kdf.derive_key(passphrase)?;
@ -142,28 +180,22 @@ pub fn encrypt_key_with_passphrase(
created,
modified: created,
data: enc_data,
fingerprint: None,
})
}
pub fn load_and_decrypt_key(
path: &std::path::Path,
passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
) -> Result<([u8;32], i64), Error> {
do_load_and_decrypt_key(path, passphrase)
.with_context(|| format!("failed to load decryption key from {:?}", path))
}
fn do_load_and_decrypt_key(
path: &std::path::Path,
passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
) -> Result<([u8;32], i64), Error> {
) -> Result<([u8;32], i64, Fingerprint), Error> {
decrypt_key(&file_get_contents(&path)?, passphrase)
.with_context(|| format!("failed to load decryption key from {:?}", path))
}
pub fn decrypt_key(
mut keydata: &[u8],
passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
) -> Result<([u8;32], i64), Error> {
) -> Result<([u8;32], i64, Fingerprint), Error> {
let key_config: KeyConfig = serde_json::from_reader(&mut keydata)?;
let raw_data = key_config.data;
@ -203,5 +235,13 @@ pub fn decrypt_key(
let mut result = [0u8; 32];
result.copy_from_slice(&key);
Ok((result, created))
let fingerprint = match key_config.fingerprint {
Some(fingerprint) => fingerprint,
None => {
let crypt_config = CryptConfig::new(result.clone())?;
crypt_config.fingerprint()
},
};
Ok((result, created, fingerprint))
}

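A hedged sketch of the new PBKDF2 branch using the `openssl` crate already imported above. The iteration count mirrors the diff (65535) and the real code draws a random 32-byte salt via `proxmox::sys::linux::random_data(32)`; SHA-256 as the PRF is an assumption here:

```rust
use openssl::hash::MessageDigest;
use openssl::pkcs5::pbkdf2_hmac;

fn main() -> Result<(), openssl::error::ErrorStack> {
    let passphrase = b"correct horse battery staple"; // example only
    let salt = [0u8; 32]; // real code uses a random 32-byte salt
    let mut derived_key = [0u8; 32];
    pbkdf2_hmac(passphrase, &salt, 65535, MessageDigest::sha256(), &mut derived_key)?;
    println!("{:02x?}", &derived_key[..8]);
    Ok(())
}
```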
View File

@ -5,7 +5,7 @@ use std::path::Path;
use serde_json::{json, Value};
use ::serde::{Deserialize, Serialize};
use crate::backup::{BackupDir, CryptMode, CryptConfig};
use crate::backup::{BackupDir, CryptMode, CryptConfig, Fingerprint};
pub const MANIFEST_BLOB_NAME: &str = "index.json.blob";
pub const MANIFEST_LOCK_NAME: &str = ".index.json.lck";
@ -223,12 +223,48 @@ impl BackupManifest {
if let Some(crypt_config) = crypt_config {
let sig = self.signature(crypt_config)?;
manifest["signature"] = proxmox::tools::digest_to_hex(&sig).into();
let fingerprint = &crypt_config.fingerprint();
manifest["unprotected"]["key-fingerprint"] = serde_json::to_value(fingerprint)?;
}
let manifest = serde_json::to_string_pretty(&manifest).unwrap().into();
Ok(manifest)
}
pub fn fingerprint(&self) -> Result<Option<Fingerprint>, Error> {
match &self.unprotected["key-fingerprint"] {
Value::Null => Ok(None),
value => Ok(Some(serde_json::from_value(value.clone())?))
}
}
/// Checks if a BackupManifest and a CryptConfig share a valid fingerprint combination.
///
/// An unsigned manifest is valid with any or no CryptConfig.
/// A signed manifest is only valid with a matching CryptConfig.
pub fn check_fingerprint(&self, crypt_config: Option<&CryptConfig>) -> Result<(), Error> {
if let Some(fingerprint) = self.fingerprint()? {
match crypt_config {
None => bail!(
"missing key - manifest was created with key {}",
fingerprint,
),
Some(crypt_config) => {
let config_fp = crypt_config.fingerprint();
if config_fp != fingerprint {
bail!(
"wrong key - manifest's key {} does not match provided key {}",
fingerprint,
config_fp
);
}
}
}
};
Ok(())
}
/// Try to read the manifest. This verifies the signature if there is a crypt_config.
pub fn from_data(data: &[u8], crypt_config: Option<&CryptConfig>) -> Result<BackupManifest, Error> {
let json: Value = serde_json::from_slice(data)?;
@ -237,6 +273,19 @@ impl BackupManifest {
if let Some(ref crypt_config) = crypt_config {
if let Some(signature) = signature {
let expected_signature = proxmox::tools::digest_to_hex(&Self::json_signature(&json, crypt_config)?);
let fingerprint = &json["unprotected"]["key-fingerprint"];
if fingerprint != &Value::Null {
let fingerprint = serde_json::from_value(fingerprint.clone())?;
let config_fp = crypt_config.fingerprint();
if config_fp != fingerprint {
bail!(
"wrong key - unable to verify signature since manifest's key {} does not match provided key {}",
fingerprint,
config_fp
);
}
}
if signature != expected_signature {
bail!("wrong signature in manifest");
}

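`check_fingerprint` and the verification inside `from_data` implement the same validity matrix. A compact standalone restatement, with strings standing in for `Fingerprint`:

```rust
fn check(manifest_fp: Option<&str>, config_fp: Option<&str>) -> Result<(), String> {
    match (manifest_fp, config_fp) {
        (None, _) => Ok(()), // unsigned manifest: valid with any or no key
        (Some(m), Some(c)) if m == c => Ok(()),
        (Some(m), Some(c)) => Err(format!(
            "wrong key - manifest's key {} does not match provided key {}", m, c
        )),
        (Some(m), None) => Err(format!("missing key - manifest was created with key {}", m)),
    }
}

fn main() {
    assert!(check(None, None).is_ok());
    assert!(check(None, Some("aa:bb")).is_ok());
    assert!(check(Some("aa:bb"), Some("aa:bb")).is_ok());
    assert!(check(Some("aa:bb"), Some("cc:dd")).is_err());
    assert!(check(Some("aa:bb"), None).is_err());
}
```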
View File

@ -0,0 +1,64 @@
#[derive(Debug, Default)]
/// Tracker for progress of operations iterating over `Datastore` contents.
pub struct StoreProgress {
/// Completed groups
pub done_groups: u64,
/// Total groups
pub total_groups: u64,
/// Completed snapshots within current group
pub done_snapshots: u64,
/// Total snapshots in current group
pub group_snapshots: u64,
}
impl StoreProgress {
pub fn new(total_groups: u64) -> Self {
StoreProgress {
total_groups,
.. Default::default()
}
}
/// Calculates an interpolated relative progress based on current counters.
pub fn percentage(&self) -> f64 {
let per_groups = (self.done_groups as f64) / (self.total_groups as f64);
if self.group_snapshots == 0 {
per_groups
} else {
let per_snapshots = (self.done_snapshots as f64) / (self.group_snapshots as f64);
per_groups + (1.0 / self.total_groups as f64) * per_snapshots
}
}
}
impl std::fmt::Display for StoreProgress {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
if self.group_snapshots == 0 {
write!(
f,
"{:.2}% ({} of {} groups)",
self.percentage() * 100.0,
self.done_groups,
self.total_groups,
)
} else if self.total_groups == 1 {
write!(
f,
"{:.2}% ({} of {} snapshots)",
self.percentage() * 100.0,
self.done_snapshots,
self.group_snapshots,
)
} else {
write!(
f,
"{:.2}% ({} of {} groups, {} of {} group snapshots)",
self.percentage() * 100.0,
self.done_groups,
self.total_groups,
self.done_snapshots,
self.group_snapshots,
)
}
}
}

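A worked example of the interpolation above: with 4 groups total, one group finished and 5 of 10 snapshots done in the current group, progress is 1/4 + (1/4)·(5/10) = 37.5%:

```rust
fn main() {
    let (done_groups, total_groups) = (1u64, 4u64);
    let (done_snapshots, group_snapshots) = (5u64, 10u64);
    let per_groups = done_groups as f64 / total_groups as f64;
    let per_snapshots = done_snapshots as f64 / group_snapshots as f64;
    let progress = per_groups + (1.0 / total_groups as f64) * per_snapshots;
    assert!((progress - 0.375).abs() < 1e-9);
    println!("{:.2}%", progress * 100.0); // 37.50%
}
```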
View File

@ -10,6 +10,7 @@ use crate::{
api2::types::*,
backup::{
DataStore,
StoreProgress,
DataBlob,
BackupGroup,
BackupDir,
@ -425,11 +426,11 @@ pub fn verify_backup_group(
group: &BackupGroup,
verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
progress: Option<(usize, usize)>, // (done, snapshot_count)
progress: &mut StoreProgress,
worker: Arc<dyn TaskState + Send + Sync>,
upid: &UPID,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
) -> Result<(usize, Vec<String>), Error> {
) -> Result<Vec<String>, Error> {
let mut errors = Vec::new();
let mut list = match group.list_backups(&datastore.base_path()) {
@ -442,19 +443,17 @@ pub fn verify_backup_group(
group,
err,
);
return Ok((0, errors));
return Ok(errors);
}
};
task_log!(worker, "verify group {}:{}", datastore.name(), group);
let snapshot_count = list.len();
task_log!(worker, "verify group {}:{} ({} snapshots)", datastore.name(), group, snapshot_count);
let (done, snapshot_count) = progress.unwrap_or((0, list.len()));
progress.group_snapshots = snapshot_count as u64;
let mut count = 0;
BackupInfo::sort_list(&mut list, false); // newest first
for info in list {
count += 1;
for (pos, info) in list.into_iter().enumerate() {
if !verify_backup_dir(
datastore.clone(),
&info.backup_dir,
@ -466,20 +465,15 @@ pub fn verify_backup_group(
)? {
errors.push(info.backup_dir.to_string());
}
if snapshot_count != 0 {
let pos = done + count;
let percentage = ((pos as f64) * 100.0)/(snapshot_count as f64);
task_log!(
worker,
"percentage done: {:.2}% ({} of {} snapshots)",
percentage,
pos,
snapshot_count,
);
}
progress.done_snapshots = pos as u64 + 1;
task_log!(
worker,
"percentage done: {}",
progress
);
}
Ok((count, errors))
Ok(errors)
}
/// Verify all (owned) backups inside a datastore
@ -498,32 +492,42 @@ pub fn verify_all_backups(
) -> Result<Vec<String>, Error> {
let mut errors = Vec::new();
task_log!(worker, "verify datastore {}", datastore.name());
if let Some(owner) = &owner {
task_log!(
worker,
"verify datastore {} - limiting to backups owned by {}",
datastore.name(),
owner
);
task_log!(worker, "limiting to backups owned by {}", owner);
}
let filter_by_owner = |group: &BackupGroup| {
if let Some(owner) = &owner {
match datastore.get_owner(group) {
Ok(ref group_owner) => {
group_owner == owner
|| (group_owner.is_token()
&& !owner.is_token()
&& group_owner.user() == owner.user())
},
Err(_) => false,
}
} else {
true
match (datastore.get_owner(group), &owner) {
(Ok(ref group_owner), Some(owner)) => {
group_owner == owner
|| (group_owner.is_token()
&& !owner.is_token()
&& group_owner.user() == owner.user())
},
(Ok(_), None) => true,
(Err(err), Some(_)) => {
// intentionally not in task log
// the task user might not be allowed to see this group!
println!("Failed to get owner of group '{}' - {}", group, err);
false
},
(Err(err), None) => {
// we don't filter by owner, but we want to log the error
task_log!(
worker,
"Failed to get owner of group '{} - {}",
group,
err,
);
errors.push(group.to_string());
true
},
}
};
let mut list = match BackupGroup::list_groups(&datastore.base_path()) {
let mut list = match BackupInfo::list_backup_groups(&datastore.base_path()) {
Ok(list) => list
.into_iter()
.filter(|group| !(group.backup_type() == "host" && group.backup_id() == "benchmark"))
@ -532,8 +536,7 @@ pub fn verify_all_backups(
Err(err) => {
task_log!(
worker,
"verify datastore {} - unable to list backups: {}",
datastore.name(),
"unable to list backups: {}",
err,
);
return Ok(errors);
@ -542,34 +545,33 @@ pub fn verify_all_backups(
list.sort_unstable();
let mut snapshot_count = 0;
for group in list.iter() {
snapshot_count += group.list_backups(&datastore.base_path())?.len();
}
// start with 16384 chunks (up to 65GB)
let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16)));
// start with 64 chunks since we assume there are few corrupt ones
let corrupt_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64)));
task_log!(worker, "verify datastore {} ({} snapshots)", datastore.name(), snapshot_count);
let group_count = list.len();
task_log!(worker, "found {} groups", group_count);
let mut done = 0;
for group in list {
let (count, mut group_errors) = verify_backup_group(
let mut progress = StoreProgress::new(group_count as u64);
for (pos, group) in list.into_iter().enumerate() {
progress.done_groups = pos as u64;
progress.done_snapshots = 0;
progress.group_snapshots = 0;
let mut group_errors = verify_backup_group(
datastore.clone(),
&group,
verified_chunks.clone(),
corrupt_chunks.clone(),
Some((done, snapshot_count)),
&mut progress,
worker.clone(),
upid,
filter,
)?;
errors.append(&mut group_errors);
done += count;
}
Ok(errors)

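The token rule in `filter_by_owner` means a group owned by an API token still matches when the filter owner is the plain user behind that token, but not the other way around. A standalone sketch with strings standing in for `Authid`, assuming the `user@realm!token` notation:

```rust
fn is_token(id: &str) -> bool {
    id.contains('!')
}

fn user(id: &str) -> &str {
    id.split('!').next().unwrap()
}

fn owner_matches(group_owner: &str, filter: &str) -> bool {
    group_owner == filter
        || (is_token(group_owner) && !is_token(filter) && user(group_owner) == user(filter))
}

fn main() {
    assert!(owner_matches("alice@pbs", "alice@pbs"));
    assert!(owner_matches("alice@pbs!token1", "alice@pbs")); // token of the same user
    assert!(!owner_matches("alice@pbs", "alice@pbs!token1")); // not symmetric
    assert!(!owner_matches("bob@pbs!t", "alice@pbs"));
}
```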
View File

@ -38,6 +38,7 @@ async fn run() -> Result<(), Error> {
proxmox_backup::rrd::create_rrdb_dir()?;
proxmox_backup::server::jobstate::create_jobstate_dir()?;
proxmox_backup::tape::create_tape_status_dir()?;
if let Err(err) = generate_auth_key() {
bail!("unable to generate auth key - {}", err);
@ -76,6 +77,7 @@ async fn run() -> Result<(), Error> {
})
)
},
"proxmox-backup.service",
);
server::write_pid(buildcfg::PROXMOX_BACKUP_API_PID_FN)?;

View File

@ -53,7 +53,6 @@ use proxmox_backup::backup::{
ChunkStream,
CryptConfig,
CryptMode,
DataBlob,
DynamicIndexReader,
FixedChunkStream,
FixedIndexReader,
@ -193,8 +192,12 @@ pub fn complete_repository(_arg: &str, _param: &HashMap<String, String>) -> Vec<
result
}
fn connect(server: &str, port: u16, auth_id: &Authid) -> Result<HttpClient, Error> {
fn connect(repo: &BackupRepository) -> Result<HttpClient, Error> {
connect_do(repo.host(), repo.port(), repo.auth_id())
.map_err(|err| format_err!("error building client for repository {} - {}", repo, err))
}
fn connect_do(server: &str, port: u16, auth_id: &Authid) -> Result<HttpClient, Error> {
let fingerprint = std::env::var(ENV_VAR_PBS_FINGERPRINT).ok();
use std::env::VarError::*;
@ -366,7 +369,7 @@ async fn list_backup_groups(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/groups", repo.store());
@ -435,7 +438,7 @@ async fn change_backup_owner(group: String, mut param: Value) -> Result<(), Erro
let repo = extract_repository_from_value(&param)?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let mut client = connect(&repo)?;
param.as_object_mut().unwrap().remove("repository");
@ -452,112 +455,6 @@ async fn change_backup_owner(group: String, mut param: Value) -> Result<(), Erro
Ok(())
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
group: {
type: String,
description: "Backup group.",
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// List backup snapshots.
async fn list_snapshots(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let output_format = get_output_format(&param);
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let group: Option<BackupGroup> = if let Some(path) = param["group"].as_str() {
Some(path.parse()?)
} else {
None
};
let mut data = api_datastore_list_snapshots(&client, repo.store(), group).await?;
record_repository(&repo);
let render_snapshot_path = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
Ok(snapshot.relative_path().to_str().unwrap().to_owned())
};
let render_files = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
let mut filenames = Vec::new();
for file in &item.files {
filenames.push(file.filename.to_string());
}
Ok(tools::format::render_backup_file_list(&filenames[..]))
};
let options = default_table_format_options()
.sortby("backup-type", false)
.sortby("backup-id", false)
.sortby("backup-time", false)
.column(ColumnConfig::new("backup-id").renderer(render_snapshot_path).header("snapshot"))
.column(ColumnConfig::new("size").renderer(tools::format::render_bytes_human_readable))
.column(ColumnConfig::new("files").renderer(render_files))
;
let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOTS;
format_and_print_result_full(&mut data, info, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
}
}
)]
/// Forget (remove) backup snapshots.
async fn forget_snapshots(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());
let result = client.delete(&path, Some(json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
}))).await?;
record_repository(&repo);
Ok(result)
}
#[api(
input: {
properties: {
@ -573,7 +470,7 @@ async fn api_login(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let client = connect(&repo)?;
client.login().await?;
record_repository(&repo);
@ -630,7 +527,7 @@ async fn api_version(param: Value) -> Result<(), Error> {
let repo = extract_repository_from_value(&param);
if let Ok(repo) = repo {
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let client = connect(&repo)?;
match client.get("api2/json/version", None).await {
Ok(mut result) => version_info["server"] = result["data"].take(),
@ -651,58 +548,6 @@ async fn api_version(param: Value) -> Result<(), Error> {
Ok(())
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// List snapshot files.
async fn list_snapshot_files(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let output_format = get_output_format(&param);
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/files", repo.store());
let mut result = client.get(&path, Some(json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
}))).await?;
record_repository(&repo);
let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOT_FILES;
let mut data: Value = result["data"].take();
let options = default_table_format_options();
format_and_print_result_full(&mut data, info, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
@ -724,7 +569,7 @@ async fn start_garbage_collection(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let mut client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/gc", repo.store());
@ -798,7 +643,10 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
let keydata = match (keyfile, key_fd) {
(None, None) => None,
(Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
(Some(keyfile), None) => Some(file_get_contents(keyfile)?),
(Some(keyfile), None) => {
eprintln!("Using encryption key file: {}", keyfile);
Some(file_get_contents(keyfile)?)
},
(None, Some(fd)) => {
let input = unsafe { std::fs::File::from_raw_fd(fd) };
let mut data = Vec::new();
@ -806,6 +654,7 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
.map_err(|err| {
format_err!("error reading encryption key from fd {}: {}", fd, err)
})?;
eprintln!("Using encryption key from file descriptor");
Some(data)
}
};
@ -813,7 +662,10 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
Ok(match (keydata, crypt_mode) {
// no parameters:
(None, None) => match key::read_optional_default_encryption_key()? {
Some(key) => (Some(key), CryptMode::Encrypt),
Some(key) => {
eprintln!("Encrypting with default encryption key!");
(Some(key), CryptMode::Encrypt)
},
None => (None, CryptMode::None),
},
@ -823,7 +675,10 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
// just --crypt-mode other than none
(None, Some(crypt_mode)) => match key::read_optional_default_encryption_key()? {
None => bail!("--crypt-mode without --keyfile and no default key file available"),
Some(key) => (Some(key), crypt_mode),
Some(key) => {
eprintln!("Encrypting with default encryption key!");
(Some(key), crypt_mode)
},
}
// just --keyfile
@ -861,6 +716,11 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
description: "Path to file.",
}
},
"all-file-systems": {
type: Boolean,
description: "Include all mounted subdirectories.",
optional: true,
},
keyfile: {
schema: KEYFILE_SCHEMA,
optional: true,
@ -1036,7 +896,7 @@ async fn create_backup(
let backup_time = backup_time_opt.unwrap_or_else(|| epoch_i64());
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let client = connect(&repo)?;
record_repository(&repo);
println!("Starting backup: {}/{}/{}", backup_type, backup_id, BackupDir::backup_time_to_string(backup_time)?);
@ -1050,7 +910,8 @@ async fn create_backup(
let (crypt_config, rsa_encrypted_key) = match keydata {
None => (None, None),
Some(key) => {
let (key, created) = decrypt_key(&key, &key::get_encryption_key_password)?;
let (key, created, fingerprint) = decrypt_key(&key, &key::get_encryption_key_password)?;
println!("Encryption key fingerprint: {}", fingerprint);
let crypt_config = CryptConfig::new(key)?;
@ -1059,6 +920,8 @@ async fn create_backup(
let pem_data = file_get_contents(path)?;
let rsa = openssl::rsa::Rsa::public_key_from_pem(&pem_data)?;
let enc_key = crypt_config.generate_rsa_encoded_key(rsa, created)?;
println!("Master key '{:?}'", path);
(Some(Arc::new(crypt_config)), Some(enc_key))
}
_ => (Some(Arc::new(crypt_config)), None),
@ -1077,8 +940,40 @@ async fn create_backup(
false
).await?;
let previous_manifest = if let Ok(previous_manifest) = client.download_previous_manifest().await {
Some(Arc::new(previous_manifest))
let download_previous_manifest = match client.previous_backup_time().await {
Ok(Some(backup_time)) => {
println!(
"Downloading previous manifest ({})",
strftime_local("%c", backup_time)?
);
true
}
Ok(None) => {
println!("No previous manifest available.");
false
}
Err(_) => {
// Fallback for outdated server, TODO remove/bubble up with 2.0
true
}
};
let previous_manifest = if download_previous_manifest {
match client.download_previous_manifest().await {
Ok(previous_manifest) => {
match previous_manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref)) {
Ok(()) => Some(Arc::new(previous_manifest)),
Err(err) => {
println!("Couldn't re-use previous manifest - {}", err);
None
}
}
}
Err(err) => {
println!("Couldn't download previous manifest - {}", err);
None
}
}
} else {
None
};
@ -1339,7 +1234,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
let archive_name = tools::required_string_param(&param, "archive-name")?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let client = connect(&repo)?;
record_repository(&repo);
@ -1361,7 +1256,8 @@ async fn restore(param: Value) -> Result<Value, Error> {
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _) = decrypt_key(&key, &key::get_encryption_key_password)?;
let (key, _, fingerprint) = decrypt_key(&key, &key::get_encryption_key_password)?;
eprintln!("Encryption key fingerprint: '{}'", fingerprint);
Some(Arc::new(CryptConfig::new(key)?))
}
};
@ -1377,6 +1273,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
).await?;
let (manifest, backup_index_data) = client.download_manifest().await?;
manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
let (archive_name, archive_type) = parse_archive_type(archive_name);
@ -1473,81 +1370,6 @@ async fn restore(param: Value) -> Result<Value, Error> {
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Group/Snapshot path.",
},
logfile: {
type: String,
description: "The path to the log file you want to upload.",
},
keyfile: {
schema: KEYFILE_SCHEMA,
optional: true,
},
"keyfd": {
schema: KEYFD_SCHEMA,
optional: true,
},
"crypt-mode": {
type: CryptMode,
optional: true,
},
}
}
)]
/// Upload backup log file.
async fn upload_log(param: Value) -> Result<Value, Error> {
let logfile = tools::required_string_param(&param, "logfile")?;
let repo = extract_repository_from_value(&param)?;
let snapshot = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = snapshot.parse()?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let (keydata, crypt_mode) = keyfile_parameters(&param)?;
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _created) = decrypt_key(&key, &key::get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
};
let data = file_get_contents(logfile)?;
// fixme: howto sign log?
let blob = match crypt_mode {
CryptMode::None | CryptMode::SignOnly => DataBlob::encode(&data, None, true)?,
CryptMode::Encrypt => DataBlob::encode(&data, crypt_config.as_ref().map(Arc::as_ref), true)?,
};
let raw_data = blob.into_inner();
let path = format!("api2/json/admin/datastore/{}/upload-backup-log", repo.store());
let args = json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
});
let body = hyper::Body::from(raw_data);
client.upload("application/octet-stream", body, &path, Some(args)).await
}
const API_METHOD_PRUNE: ApiMethod = ApiMethod::new(
&ApiHandler::Async(&prune),
&ObjectSchema::new(
@ -1583,7 +1405,7 @@ fn prune<'a>(
async fn prune_async(mut param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let mut client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/prune", repo.store());
@ -1669,7 +1491,7 @@ async fn status(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/status", repo.store());
@ -1985,26 +1807,9 @@ fn main() {
.completion_cb("repository", complete_repository)
.completion_cb("keyfile", tools::complete_file_name);
let upload_log_cmd_def = CliCommand::new(&API_METHOD_UPLOAD_LOG)
.arg_param(&["snapshot", "logfile"])
.completion_cb("snapshot", complete_backup_snapshot)
.completion_cb("logfile", tools::complete_file_name)
.completion_cb("keyfile", tools::complete_file_name)
.completion_cb("repository", complete_repository);
let list_cmd_def = CliCommand::new(&API_METHOD_LIST_BACKUP_GROUPS)
.completion_cb("repository", complete_repository);
let snapshots_cmd_def = CliCommand::new(&API_METHOD_LIST_SNAPSHOTS)
.arg_param(&["group"])
.completion_cb("group", complete_backup_group)
.completion_cb("repository", complete_repository);
let forget_cmd_def = CliCommand::new(&API_METHOD_FORGET_SNAPSHOTS)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot);
let garbage_collect_cmd_def = CliCommand::new(&API_METHOD_START_GARBAGE_COLLECTION)
.completion_cb("repository", complete_repository);
@ -2015,11 +1820,6 @@ fn main() {
.completion_cb("archive-name", complete_archive_name)
.completion_cb("target", tools::complete_file_name);
let files_cmd_def = CliCommand::new(&API_METHOD_LIST_SNAPSHOT_FILES)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot);
let prune_cmd_def = CliCommand::new(&API_METHOD_PRUNE)
.arg_param(&["group"])
.completion_cb("group", complete_backup_group)
@ -2045,16 +1845,13 @@ fn main() {
let cmd_def = CliCommandMap::new()
.insert("backup", backup_cmd_def)
.insert("upload-log", upload_log_cmd_def)
.insert("forget", forget_cmd_def)
.insert("garbage-collect", garbage_collect_cmd_def)
.insert("list", list_cmd_def)
.insert("login", login_cmd_def)
.insert("logout", logout_cmd_def)
.insert("prune", prune_cmd_def)
.insert("restore", restore_cmd_def)
.insert("snapshots", snapshots_cmd_def)
.insert("files", files_cmd_def)
.insert("snapshot", snapshot_mgtm_cli())
.insert("status", status_cmd_def)
.insert("key", key::cli())
.insert("mount", mount_cmd_def())
@ -2064,7 +1861,13 @@ fn main() {
.insert("task", task_mgmt_cli())
.insert("version", version_cmd_def)
.insert("benchmark", benchmark_cmd_def)
.insert("change-owner", change_owner_cmd_def);
.insert("change-owner", change_owner_cmd_def)
.alias(&["files"], &["snapshot", "files"])
.alias(&["forget"], &["snapshot", "forget"])
.alias(&["upload-log"], &["snapshot", "upload-log"])
.alias(&["snapshots"], &["snapshot", "list"])
;
let rpcenv = CliEnvironment::new();
run_cli_command(cmd_def, rpcenv, Some(|future| {

View File

@ -10,8 +10,6 @@ use proxmox_backup::tools;
use proxmox_backup::config;
use proxmox_backup::api2::{self, types::* };
use proxmox_backup::client::*;
use proxmox_backup::tools::ticket::Ticket;
use proxmox_backup::auth_helpers::*;
mod proxmox_backup_manager;
use proxmox_backup_manager::*;
@ -51,27 +49,6 @@ pub async fn wait_for_local_worker(upid_str: &str) -> Result<(), Error> {
Ok(())
}
fn connect() -> Result<HttpClient, Error> {
let uid = nix::unistd::Uid::current();
let mut options = HttpClientOptions::new()
.prefix(Some("proxmox-backup".to_string()))
.verify_cert(false); // not required for connection to localhost
let client = if uid.is_root() {
let ticket = Ticket::new("PBS", Userid::root_userid())?
.sign(private_auth_key(), None)?;
options = options.password(Some(ticket));
HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
} else {
options = options.ticket_cache(true).interactive(true);
HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
};
Ok(client)
}
#[api(
input: {
properties: {
@ -92,7 +69,7 @@ async fn start_garbage_collection(param: Value) -> Result<Value, Error> {
let store = tools::required_string_param(&param, "store")?;
let mut client = connect()?;
let mut client = connect_to_localhost()?;
let path = format!("api2/json/admin/datastore/{}/gc", store);
@ -123,7 +100,7 @@ async fn garbage_collection_status(param: Value) -> Result<Value, Error> {
let store = tools::required_string_param(&param, "store")?;
let client = connect()?;
let client = connect_to_localhost()?;
let path = format!("api2/json/admin/datastore/{}/gc", store);
@ -183,7 +160,7 @@ async fn task_list(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let client = connect()?;
let client = connect_to_localhost()?;
let limit = param["limit"].as_u64().unwrap_or(50) as usize;
let running = !param["all"].as_bool().unwrap_or(false);
@ -222,7 +199,7 @@ async fn task_log(param: Value) -> Result<Value, Error> {
let upid = tools::required_string_param(&param, "upid")?;
let client = connect()?;
let client = connect_to_localhost()?;
display_task_log(client, upid, true).await?;
@ -243,9 +220,9 @@ async fn task_stop(param: Value) -> Result<Value, Error> {
let upid_str = tools::required_string_param(&param, "upid")?;
let mut client = connect()?;
let mut client = connect_to_localhost()?;
let path = format!("api2/json/nodes/localhost/tasks/{}", upid_str);
let path = format!("api2/json/nodes/localhost/tasks/{}", tools::percent_encode_component(upid_str));
let _ = client.delete(&path, None).await?;
Ok(Value::Null)
@ -302,7 +279,7 @@ async fn pull_datastore(
let output_format = get_output_format(&param);
let mut client = connect()?;
let mut client = connect_to_localhost()?;
let mut args = json!({
"store": local_store,
@ -342,7 +319,7 @@ async fn verify(
let output_format = get_output_format(&param);
let mut client = connect()?;
let mut client = connect_to_localhost()?;
let args = json!({});
@ -363,6 +340,43 @@ async fn report() -> Result<Value, Error> {
Ok(Value::Null)
}
#[api(
input: {
properties: {
verbose: {
type: Boolean,
optional: true,
default: false,
description: "Output verbose package information. It is ignored if output-format is specified.",
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
}
}
}
)]
/// List package versions for important Proxmox Backup Server packages.
async fn get_versions(verbose: bool, param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let packages = crate::api2::node::apt::get_versions()?;
let mut packages = json!(if verbose { &packages[..] } else { &packages[1..2] });
let options = default_table_format_options()
.disable_sort()
.noborder(true) // just not helpful for version info, which often gets copy-pasted
.column(ColumnConfig::new("Package"))
.column(ColumnConfig::new("Version"))
.column(ColumnConfig::new("ExtraInfo").header("Extra Info"))
;
let schema = &crate::api2::node::apt::API_RETURN_SCHEMA_GET_VERSIONS;
format_and_print_result_full(&mut packages, schema, &output_format, &options);
Ok(Value::Null)
}
fn main() {
proxmox_backup::tools::setup_safe_path_env();
@ -396,6 +410,9 @@ fn main() {
)
.insert("report",
CliCommand::new(&API_METHOD_REPORT)
)
.insert("versions",
CliCommand::new(&API_METHOD_GET_VERSIONS)
);

View File

@ -133,6 +133,7 @@ async fn run() -> Result<(), Error> {
.map(|_| ())
)
},
"proxmox-backup-proxy.service",
);
server::write_pid(buildcfg::PROXMOX_BACKUP_PROXY_PID_FN)?;
@ -592,9 +593,9 @@ async fn schedule_task_log_rotate() {
.ok_or_else(|| format_err!("could not get API auth log file names"))?;
if logrotate.rotate(max_size, None, Some(max_files))? {
worker.log(format!("API access log was rotated"));
worker.log(format!("API authentication log was rotated"));
} else {
worker.log(format!("API access log was not rotated"));
worker.log(format!("API authentication log was not rotated"));
}
Ok(())

src/bin/proxmox-tape.rs (new file, 471 lines)
View File

@ -0,0 +1,471 @@
use anyhow::{format_err, Error};
use serde_json::{json, Value};
use proxmox::{
api::{
api,
cli::*,
ApiHandler,
RpcEnvironment,
section_config::SectionConfigData,
},
};
use proxmox_backup::{
tools::format::render_epoch,
server::{
UPID,
worker_is_active_local,
},
api2::{
self,
types::{
DRIVE_ID_SCHEMA,
MEDIA_LABEL_SCHEMA,
MEDIA_POOL_NAME_SCHEMA,
},
},
config::{
self,
drive::complete_drive_name,
media_pool::complete_pool_name,
},
tape::{
complete_media_changer_id,
},
};
mod proxmox_tape;
use proxmox_tape::*;
// Note: local workers should print logs to stdout, so there is no need
// to fetch/display logs. We just wait for the worker to finish.
pub async fn wait_for_local_worker(upid_str: &str) -> Result<(), Error> {
let upid: UPID = upid_str.parse()?;
let sleep_duration = core::time::Duration::new(0, 100_000_000);
loop {
if worker_is_active_local(&upid) {
tokio::time::delay_for(sleep_duration).await;
} else {
break;
}
}
Ok(())
}
fn lookup_drive_name(
param: &Value,
config: &SectionConfigData,
) -> Result<String, Error> {
let drive = param["drive"]
.as_str()
.map(String::from)
.or_else(|| std::env::var("PROXMOX_TAPE_DRIVE").ok())
.or_else(|| {
let mut drive_names = Vec::new();
for (name, (section_type, _)) in config.sections.iter() {
if !(section_type == "linux" || section_type == "virtual") { continue; }
drive_names.push(name);
}
if drive_names.len() == 1 {
Some(drive_names[0].to_owned())
} else {
None
}
})
.ok_or_else(|| format_err!("unable to get (default) drive name"))?;
Ok(drive)
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
optional: true,
},
fast: {
description: "Use fast erase.",
type: bool,
optional: true,
default: true,
},
},
},
)]
/// Erase media
async fn erase_media(
mut param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
param["drive"] = lookup_drive_name(&param, &config)?.into();
let info = &api2::tape::drive::API_METHOD_ERASE_MEDIA;
let result = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
wait_for_local_worker(result.as_str().unwrap()).await?;
Ok(())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
optional: true,
},
},
},
)]
/// Rewind tape
async fn rewind(
mut param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
param["drive"] = lookup_drive_name(&param, &config)?.into();
let info = &api2::tape::drive::API_METHOD_REWIND;
let result = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
wait_for_local_worker(result.as_str().unwrap()).await?;
Ok(())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
optional: true,
},
},
},
)]
/// Eject/Unload drive media
fn eject_media(
mut param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
param["drive"] = lookup_drive_name(&param, &config)?.into();
let info = &api2::tape::drive::API_METHOD_EJECT_MEDIA;
match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
Ok(())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
optional: true,
},
"changer-id": {
schema: MEDIA_LABEL_SCHEMA,
},
},
},
)]
/// Load media
fn load_media(
mut param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
param["drive"] = lookup_drive_name(&param, &config)?.into();
let info = &api2::tape::drive::API_METHOD_LOAD_MEDIA;
match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
Ok(())
}
#[api(
input: {
properties: {
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
optional: true,
},
drive: {
schema: DRIVE_ID_SCHEMA,
optional: true,
},
"changer-id": {
schema: MEDIA_LABEL_SCHEMA,
},
},
},
)]
/// Label media
async fn label_media(
mut param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
param["drive"] = lookup_drive_name(&param, &config)?.into();
let info = &api2::tape::drive::API_METHOD_LABEL_MEDIA;
let result = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
wait_for_local_worker(result.as_str().unwrap()).await?;
Ok(())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
},
},
)]
/// Read media label
fn read_label(
mut param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
param["drive"] = lookup_drive_name(&param, &config)?.into();
let output_format = get_output_format(&param);
let info = &api2::tape::drive::API_METHOD_READ_LABEL;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("changer-id"))
.column(ColumnConfig::new("uuid"))
.column(ColumnConfig::new("ctime").renderer(render_epoch))
.column(ColumnConfig::new("pool"))
.column(ColumnConfig::new("media-set-uuid"))
.column(ColumnConfig::new("media-set-ctime").renderer(render_epoch))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
drive: {
schema: DRIVE_ID_SCHEMA,
optional: true,
},
"read-labels": {
description: "Load unknown tapes and try read labels",
type: bool,
optional: true,
},
"read-all-labels": {
description: "Load all tapes and try read labels (even if already inventoried)",
type: bool,
optional: true,
},
},
},
)]
/// List (and update) media labels (Changer Inventory)
async fn inventory(
read_labels: Option<bool>,
read_all_labels: Option<bool>,
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let (config, _digest) = config::drive::config()?;
let drive = lookup_drive_name(&param, &config)?;
let do_read = read_labels.unwrap_or(false) || read_all_labels.unwrap_or(false);
if do_read {
let mut param = json!({
"drive": &drive,
});
if let Some(true) = read_all_labels {
param["read-all-labels"] = true.into();
}
let info = &api2::tape::drive::API_METHOD_UPDATE_INVENTORY;
let result = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
wait_for_local_worker(result.as_str().unwrap()).await?;
}
let info = &api2::tape::drive::API_METHOD_INVENTORY;
let param = json!({ "drive": &drive });
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("changer-id"))
.column(ColumnConfig::new("uuid"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}
#[api(
input: {
properties: {
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
optional: true,
},
drive: {
schema: DRIVE_ID_SCHEMA,
optional: true,
},
},
},
)]
/// Label media with barcodes from changer device
async fn barcode_label_media(
mut param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
param["drive"] = lookup_drive_name(&param, &config)?.into();
let info = &api2::tape::drive::API_METHOD_BARCODE_LABEL_MEDIA;
let result = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
wait_for_local_worker(result.as_str().unwrap()).await?;
Ok(())
}
fn main() {
let cmd_def = CliCommandMap::new()
.insert(
"barcode-label",
CliCommand::new(&API_METHOD_BARCODE_LABEL_MEDIA)
.completion_cb("drive", complete_drive_name)
.completion_cb("pool", complete_pool_name)
)
.insert(
"rewind",
CliCommand::new(&API_METHOD_REWIND)
.completion_cb("drive", complete_drive_name)
)
.insert(
"erase",
CliCommand::new(&API_METHOD_ERASE_MEDIA)
.completion_cb("drive", complete_drive_name)
)
.insert(
"eject",
CliCommand::new(&API_METHOD_EJECT_MEDIA)
.completion_cb("drive", complete_drive_name)
)
.insert(
"inventory",
CliCommand::new(&API_METHOD_INVENTORY)
.completion_cb("drive", complete_drive_name)
)
.insert(
"read-label",
CliCommand::new(&API_METHOD_READ_LABEL)
.completion_cb("drive", complete_drive_name)
)
.insert(
"label",
CliCommand::new(&API_METHOD_LABEL_MEDIA)
.completion_cb("drive", complete_drive_name)
.completion_cb("pool", complete_pool_name)
)
.insert("changer", changer_commands())
.insert("drive", drive_commands())
.insert("pool", pool_commands())
.insert(
"load-media",
CliCommand::new(&API_METHOD_LOAD_MEDIA)
.arg_param(&["changer-id"])
.completion_cb("drive", complete_drive_name)
.completion_cb("changer-id", complete_media_changer_id)
)
;
let mut rpcenv = CliEnvironment::new();
rpcenv.set_auth_id(Some(String::from("root@pam")));
proxmox_backup::tools::runtime::main(run_async_cli_command(cmd_def, rpcenv));
}

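`lookup_drive_name` above resolves the drive in a fixed order: explicit parameter, then the `PROXMOX_TAPE_DRIVE` environment variable, then the sole configured drive, otherwise an error. A standalone sketch of that chain (asserts assume the variable is unset):

```rust
fn resolve_drive(param: Option<&str>, configured: &[&str]) -> Result<String, String> {
    param
        .map(String::from)
        .or_else(|| std::env::var("PROXMOX_TAPE_DRIVE").ok())
        .or_else(|| match configured {
            [only] => Some((*only).to_string()), // exactly one drive configured
            _ => None,
        })
        .ok_or_else(|| "unable to get (default) drive name".to_string())
}

fn main() {
    assert_eq!(resolve_drive(Some("drive0"), &["a", "b"]).unwrap(), "drive0");
    assert_eq!(resolve_drive(None, &["only"]).unwrap(), "only");
    assert!(resolve_drive(None, &["a", "b"]).is_err());
}
```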
View File

@ -151,7 +151,7 @@ pub async fn benchmark(
let crypt_config = match keyfile {
None => None,
Some(path) => {
let (key, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
let (key, _, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
@ -225,7 +225,7 @@ async fn test_upload_speed(
let backup_time = proxmox::tools::time::epoch_i64();
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let client = connect(&repo)?;
record_repository(&repo);
if verbose { eprintln!("Connecting to backup server"); }

View File

@ -73,13 +73,13 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _created) = decrypt_key(&key, &get_encryption_key_password)?;
let (key, _created, _fingerprint) = decrypt_key(&key, &get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
};
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let client = connect(&repo)?;
let client = BackupReader::start(
client,
@ -92,6 +92,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
).await?;
let (manifest, _) = client.download_manifest().await?;
manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
let index = client.download_dynamic_index(&manifest, CATALOG_NAME).await?;
@ -153,7 +154,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
/// Shell to interactively inspect and restore snapshots.
async fn catalog_shell(param: Value) -> Result<(), Error> {
let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let client = connect(&repo)?;
let path = tools::required_string_param(&param, "snapshot")?;
let archive_name = tools::required_string_param(&param, "archive-name")?;
@ -170,7 +171,7 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _created) = decrypt_key(&key, &get_encryption_key_password)?;
let (key, _created, _fingerprint) = decrypt_key(&key, &get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
@ -199,6 +200,7 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
.open("/tmp")?;
let (manifest, _) = client.download_manifest().await?;
manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
let index = client.download_dynamic_index(&manifest, &server_archive_name).await?;
let most_used = index.find_most_used_chunks(8);

View File

@ -4,14 +4,28 @@ use std::process::{Stdio, Command};
use anyhow::{bail, format_err, Error};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use proxmox::api::api;
use proxmox::api::cli::{CliCommand, CliCommandMap};
use proxmox::api::cli::{
ColumnConfig,
CliCommand,
CliCommandMap,
format_and_print_result_full,
get_output_format,
OUTPUT_FORMAT,
};
use proxmox::sys::linux::tty;
use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
use proxmox_backup::backup::{
encrypt_key_with_passphrase, load_and_decrypt_key, store_key_config, KeyConfig,
encrypt_key_with_passphrase,
load_and_decrypt_key,
store_key_config,
CryptConfig,
Kdf,
KeyConfig,
KeyDerivationConfig,
};
use proxmox_backup::tools;
@ -71,27 +85,6 @@ pub fn get_encryption_key_password() -> Result<Vec<u8>, Error> {
bail!("no password input mechanism available");
}
#[api(
default: "scrypt",
)]
#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
/// Key derivation function for password protected encryption keys.
pub enum Kdf {
/// Do not encrypt the key.
None,
/// Encrypt the key with a password using SCrypt.
Scrypt,
}
impl Default for Kdf {
#[inline]
fn default() -> Self {
Kdf::Scrypt
}
}
#[api(
input: {
properties: {
@ -120,7 +113,10 @@ fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
let kdf = kdf.unwrap_or_default();
let key = proxmox::sys::linux::random_data(32)?;
let mut key_array = [0u8; 32];
proxmox::sys::linux::fill_with_random_data(&mut key_array)?;
let crypt_config = CryptConfig::new(key_array.clone())?;
let key = key_array.to_vec();
match kdf {
Kdf::None => {
@ -134,10 +130,11 @@ fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
created,
modified: created,
data: key,
fingerprint: Some(crypt_config.fingerprint()),
},
)?;
}
Kdf::Scrypt => {
Kdf::Scrypt | Kdf::PBKDF2 => {
// always read passphrase from tty
if !tty::stdin_isatty() {
bail!("unable to read passphrase - no tty");
@ -145,7 +142,8 @@ fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
let password = tty::read_and_verify_password("Encryption Key Password: ")?;
let key_config = encrypt_key_with_passphrase(&key, &password)?;
let mut key_config = encrypt_key_with_passphrase(&key, &password, kdf)?;
key_config.fingerprint = Some(crypt_config.fingerprint());
store_key_config(&path, false, key_config)?;
}
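The create path now derives a fingerprint from the freshly generated key via CryptConfig and stores it alongside the key, so later operations can check that a manifest was written with the matching key. A minimal standalone sketch of such a fingerprint (hedged: this simply hashes the raw key bytes with the sha2 crate; the repository's CryptConfig::fingerprint() may derive it differently):

    use sha2::{Digest, Sha256};

    /// Illustrative fingerprint: SHA-256 of the key bytes, hex pairs joined by colons.
    fn key_fingerprint(key: &[u8; 32]) -> String {
        Sha256::digest(key)
            .iter()
            .map(|b| format!("{:02x}", b))
            .collect::<Vec<_>>()
            .join(":")
    }

    fn main() {
        let key = [0u8; 32];
        println!("fingerprint: {}", key_fingerprint(&key));
    }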
@ -188,7 +186,7 @@ fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error
bail!("unable to change passphrase - no tty");
}
let (key, created) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
let (key, created, fingerprint) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
match kdf {
Kdf::None => {
@ -202,14 +200,16 @@ fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error
created, // keep original value
modified,
data: key.to_vec(),
fingerprint: Some(fingerprint),
},
)?;
}
Kdf::Scrypt => {
Kdf::Scrypt | Kdf::PBKDF2 => {
let password = tty::read_and_verify_password("New Password: ")?;
let mut new_key_config = encrypt_key_with_passphrase(&key, &password)?;
let mut new_key_config = encrypt_key_with_passphrase(&key, &password, kdf)?;
new_key_config.created = created; // keep original value
new_key_config.fingerprint = Some(fingerprint);
store_key_config(&path, true, new_key_config)?;
}
@ -218,6 +218,91 @@ fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error
Ok(())
}
#[api(
properties: {
kdf: {
type: Kdf,
},
},
)]
#[derive(Deserialize, Serialize)]
/// Encryption Key Information
struct KeyInfo {
/// Path to key
path: String,
kdf: Kdf,
/// Key creation time
pub created: i64,
/// Key modification time
pub modified: i64,
/// Key fingerprint
pub fingerprint: Option<String>,
}
#[api(
input: {
properties: {
path: {
description: "Key file. Without this the default key's metadata will be shown.",
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
},
},
)]
/// Print the encryption key's metadata.
fn show_key(
path: Option<String>,
param: Value,
) -> Result<(), Error> {
let path = match path {
Some(path) => PathBuf::from(path),
None => {
let path = find_default_encryption_key()?
.ok_or_else(|| {
format_err!("no encryption file provided and no default file found")
})?;
path
}
};
let config: KeyConfig = serde_json::from_slice(&file_get_contents(path.clone())?)?;
let output_format = get_output_format(&param);
let info = KeyInfo {
path: format!("{:?}", path),
kdf: match config.kdf {
Some(KeyDerivationConfig::PBKDF2 { .. }) => Kdf::PBKDF2,
Some(KeyDerivationConfig::Scrypt { .. }) => Kdf::Scrypt,
None => Kdf::None,
},
created: config.created,
modified: config.modified,
fingerprint: match config.fingerprint {
Some(ref fp) => Some(format!("{}", fp)),
None => None,
},
};
let options = proxmox::api::cli::default_table_format_options()
.column(ColumnConfig::new("path"))
.column(ColumnConfig::new("kdf"))
.column(ColumnConfig::new("created").renderer(tools::format::render_epoch))
.column(ColumnConfig::new("modified").renderer(tools::format::render_epoch))
.column(ColumnConfig::new("fingerprint"));
let schema = &KeyInfo::API_SCHEMA;
format_and_print_result_full(&mut serde_json::to_value(info)?, schema, &output_format, &options);
Ok(())
}
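Hedged usage note: once registered through the cli() map below, the new subcommand is reached as `proxmox-backup-client key show [path]` and prints path, kdf, creation/modification times, and fingerprint as a table, or as JSON with `--output-format json`.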
#[api(
input: {
properties: {
@ -287,7 +372,6 @@ fn create_master_key() -> Result<(), Error> {
},
"output-format": {
type: PaperkeyFormat,
description: "Output format. Text or Html.",
optional: true,
},
},
@ -313,13 +397,47 @@ fn paper_key(
};
let data = file_get_contents(&path)?;
let data = std::str::from_utf8(&data)?;
let data = String::from_utf8(data)?;
let (data, is_private_key) = if data.starts_with("-----BEGIN ENCRYPTED PRIVATE KEY-----\n") {
let lines: Vec<String> = data
.lines()
.map(|s| s.trim_end())
.filter(|s| !s.is_empty())
.map(String::from)
.collect();
if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
bail!("unexpected key format");
}
if lines.len() < 20 {
bail!("unexpected key format");
}
(lines, true)
} else {
match serde_json::from_str::<KeyConfig>(&data) {
Ok(key_config) => {
let lines = serde_json::to_string_pretty(&key_config)?
.lines()
.map(String::from)
.collect();
(lines, false)
},
Err(err) => {
eprintln!("Couldn't parse '{:?}' as KeyConfig - {}", path, err);
bail!("Neither a PEM-formatted private key, nor a PBS key file.");
},
}
};
let format = output_format.unwrap_or(PaperkeyFormat::Html);
match format {
PaperkeyFormat::Html => paperkey_html(data, subject),
PaperkeyFormat::Text => paperkey_text(data, subject),
PaperkeyFormat::Html => paperkey_html(&data, subject, is_private_key),
PaperkeyFormat::Text => paperkey_text(&data, subject, is_private_key),
}
}
@ -337,6 +455,10 @@ pub fn cli() -> CliCommandMap {
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
let key_show_cmd_def = CliCommand::new(&API_METHOD_SHOW_KEY)
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
let paper_key_cmd_def = CliCommand::new(&API_METHOD_PAPER_KEY)
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
@ -346,10 +468,11 @@ pub fn cli() -> CliCommandMap {
.insert("create-master-key", key_create_master_key_cmd_def)
.insert("import-master-pubkey", key_import_master_pubkey_cmd_def)
.insert("change-passphrase", key_change_passphrase_cmd_def)
.insert("show", key_show_cmd_def)
.insert("paperkey", paper_key_cmd_def)
}
fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
fn paperkey_html(lines: &[String], subject: Option<String>, is_private: bool) -> Result<(), Error> {
let img_size_pt = 500;
@ -378,21 +501,7 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
println!("<p>Subject: {}</p>", subject);
}
if data.starts_with("-----BEGIN ENCRYPTED PRIVATE KEY-----\n") {
let lines: Vec<String> = data.lines()
.map(|s| s.trim_end())
.filter(|s| !s.is_empty())
.map(String::from)
.collect();
if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
bail!("unexpected key format");
}
if lines.len() < 20 {
bail!("unexpected key format");
}
if is_private {
const BLOCK_SIZE: usize = 20;
let blocks = (lines.len() + BLOCK_SIZE -1)/BLOCK_SIZE;
@ -413,8 +522,7 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
println!("</p>");
let data = data.join("\n");
let qr_code = generate_qr_code("svg", data.as_bytes())?;
let qr_code = generate_qr_code("svg", data)?;
let qr_code = base64::encode_config(&qr_code, base64::STANDARD_NO_PAD);
println!("<center>");
@ -430,16 +538,13 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
return Ok(());
}
let key_config: KeyConfig = serde_json::from_str(&data)?;
let key_text = serde_json::to_string_pretty(&key_config)?;
println!("<div style=\"page-break-inside: avoid\">");
println!("<p>");
println!("-----BEGIN PROXMOX BACKUP KEY-----");
for line in key_text.lines() {
for line in lines {
println!("{}", line);
}
@ -447,7 +552,7 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
println!("</p>");
let qr_code = generate_qr_code("svg", key_text.as_bytes())?;
let qr_code = generate_qr_code("svg", lines)?;
let qr_code = base64::encode_config(&qr_code, base64::STANDARD_NO_PAD);
println!("<center>");
@ -464,27 +569,13 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
Ok(())
}
fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
fn paperkey_text(lines: &[String], subject: Option<String>, is_private: bool) -> Result<(), Error> {
if let Some(subject) = subject {
println!("Subject: {}\n", subject);
}
if data.starts_with("-----BEGIN ENCRYPTED PRIVATE KEY-----\n") {
let lines: Vec<String> = data.lines()
.map(|s| s.trim_end())
.filter(|s| !s.is_empty())
.map(String::from)
.collect();
if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
bail!("unexpected key format");
}
if lines.len() < 20 {
bail!("unexpected key format");
}
if is_private {
const BLOCK_SIZE: usize = 5;
let blocks = (lines.len() + BLOCK_SIZE -1)/BLOCK_SIZE;
@ -499,8 +590,7 @@ fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
for l in start..end {
println!("{:-2}: {}", l, lines[l]);
}
let data = data.join("\n");
let qr_code = generate_qr_code("utf8i", data.as_bytes())?;
let qr_code = generate_qr_code("utf8i", data)?;
let qr_code = String::from_utf8(qr_code)
.map_err(|_| format_err!("Failed to read qr code (got non-utf8 data)"))?;
println!("{}", qr_code);
@ -510,14 +600,13 @@ fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
return Ok(());
}
let key_config: KeyConfig = serde_json::from_str(&data)?;
let key_text = serde_json::to_string_pretty(&key_config)?;
println!("-----BEGIN PROXMOX BACKUP KEY-----");
println!("{}", key_text);
for line in lines {
println!("{}", line);
}
println!("-----END PROXMOX BACKUP KEY-----");
let qr_code = generate_qr_code("utf8i", key_text.as_bytes())?;
let qr_code = generate_qr_code("utf8i", &lines)?;
let qr_code = String::from_utf8(qr_code)
.map_err(|_| format_err!("Failed to read qr code (got non-utf8 data)"))?;
@ -526,8 +615,7 @@ fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
Ok(())
}
fn generate_qr_code(output_type: &str, data: &[u8]) -> Result<Vec<u8>, Error> {
fn generate_qr_code(output_type: &str, lines: &[String]) -> Result<Vec<u8>, Error> {
let mut child = Command::new("qrencode")
.args(&["-t", output_type, "-m0", "-s1", "-lm", "--output", "-"])
.stdin(Stdio::piped())
@ -537,7 +625,8 @@ fn generate_qr_code(output_type: &str, data: &[u8]) -> Result<Vec<u8>, Error> {
{
let stdin = child.stdin.as_mut()
.ok_or_else(|| format_err!("Failed to open stdin"))?;
stdin.write_all(data)
let data = lines.join("\n");
stdin.write_all(data.as_bytes())
.map_err(|_| format_err!("Failed to write to stdin"))?;
}
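Both paper-key renderers split the key lines into fixed-size blocks, one QR code per block, using ceiling division. A self-contained sketch of the same arithmetic (hedged: the helper name and block size are illustrative):

    /// Split lines into blocks of at most `block_size`, as the renderers above do.
    fn chunk_lines(lines: &[String], block_size: usize) -> Vec<&[String]> {
        lines.chunks(block_size).collect()
    }

    fn main() {
        let lines: Vec<String> = (0..45).map(|i| format!("line {}", i)).collect();
        // Ceiling division: 45 lines with BLOCK_SIZE 20 gives 3 blocks (20 + 20 + 5).
        let blocks = (lines.len() + 20 - 1) / 20;
        assert_eq!(blocks, 3);
        assert_eq!(chunk_lines(&lines, 20).len(), blocks);
    }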


@ -8,6 +8,8 @@ mod task;
pub use task::*;
mod catalog;
pub use catalog::*;
mod snapshot;
pub use snapshot::*;
pub mod key;


@ -1,22 +1,21 @@
use std::path::PathBuf;
use std::sync::Arc;
use std::os::unix::io::RawFd;
use std::path::Path;
use std::ffi::OsStr;
use std::collections::HashMap;
use std::ffi::OsStr;
use std::hash::BuildHasher;
use std::os::unix::io::AsRawFd;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use anyhow::{bail, format_err, Error};
use futures::future::FutureExt;
use futures::select;
use futures::stream::{StreamExt, TryStreamExt};
use nix::unistd::{fork, ForkResult};
use serde_json::Value;
use tokio::signal::unix::{signal, SignalKind};
use nix::unistd::{fork, ForkResult, pipe};
use futures::select;
use futures::future::FutureExt;
use futures::stream::{StreamExt, TryStreamExt};
use proxmox::{sortable, identity};
use proxmox::api::{ApiHandler, ApiMethod, RpcEnvironment, schema::*, cli::*};
use proxmox::tools::fd::Fd;
use proxmox_backup::tools;
use proxmox_backup::backup::{
@ -143,27 +142,27 @@ fn mount(
// Process should be daemonized.
// Make sure to fork before the async runtime is instantiated to avoid trouble.
let pipe = pipe()?;
let (pr, pw) = proxmox_backup::tools::pipe()?;
match unsafe { fork() } {
Ok(ForkResult::Parent { .. }) => {
nix::unistd::close(pipe.1).unwrap();
drop(pw);
// Blocks the parent process until we are ready to go in the child
let _res = nix::unistd::read(pipe.0, &mut [0]).unwrap();
let _res = nix::unistd::read(pr.as_raw_fd(), &mut [0]).unwrap();
Ok(Value::Null)
}
Ok(ForkResult::Child) => {
nix::unistd::close(pipe.0).unwrap();
drop(pr);
nix::unistd::setsid().unwrap();
proxmox_backup::tools::runtime::main(mount_do(param, Some(pipe.1)))
proxmox_backup::tools::runtime::main(mount_do(param, Some(pw)))
}
Err(_) => bail!("failed to daemonize process"),
}
}
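The hunk above swaps raw RawFd pipe ends for the proxmox Fd wrapper, so the unused end is closed by simply dropping it, and the write end stays alive until the child has signalled readiness. A standalone sketch of the same daemonize-then-signal handshake (hedged: it uses the libc crate directly to stay version-independent; the diff itself uses nix plus the Fd wrapper, whose Drop impl closes the descriptor):

    use libc::{c_void, close, fork, pipe, read, write};

    fn main() {
        let mut fds = [0i32; 2];
        unsafe {
            if pipe(fds.as_mut_ptr()) != 0 {
                panic!("pipe failed");
            }
            let (pr, pw) = (fds[0], fds[1]);
            match fork() {
                -1 => panic!("fork failed"),
                0 => {
                    // Child: close the read end, do the daemon setup, then signal readiness.
                    close(pr);
                    // ... setsid(), start the async runtime, perform the mount ...
                    let ready = 0u8;
                    write(pw, &ready as *const u8 as *const c_void, 1);
                    close(pw);
                }
                _ => {
                    // Parent: close the write end and block until the child writes a byte.
                    close(pw);
                    let mut buf = 0u8;
                    read(pr, &mut buf as *mut u8 as *mut c_void, 1);
                    close(pr);
                }
            }
        }
    }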
async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
async fn mount_do(param: Value, pipe: Option<Fd>) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let archive_name = tools::required_string_param(&param, "archive-name")?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let client = connect(&repo)?;
let target = param["target"].as_str();
@ -182,7 +181,9 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
let crypt_config = match keyfile {
None => None,
Some(path) => {
let (key, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
println!("Encryption key file: '{:?}'", path);
let (key, _, fingerprint) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
println!("Encryption key fingerprint: '{}'", fingerprint);
Some(Arc::new(CryptConfig::new(key)?))
}
};
@ -212,6 +213,7 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
).await?;
let (manifest, _) = client.download_manifest().await?;
manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
let file_info = manifest.lookup_file_info(&server_archive_name)?;
@ -232,8 +234,8 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
}
// Signal the parent process that we are done with the setup and it can
// terminate.
nix::unistd::write(pipe, &[0u8])?;
nix::unistd::close(pipe).unwrap();
nix::unistd::write(pipe.as_raw_fd(), &[0u8])?;
let _: Fd = pipe;
}
Ok(())


@ -0,0 +1,416 @@
use std::sync::Arc;
use anyhow::Error;
use serde_json::{json, Value};
use proxmox::{
api::{api, cli::*},
tools::fs::file_get_contents,
};
use proxmox_backup::{
tools,
api2::types::*,
backup::{
CryptMode,
CryptConfig,
DataBlob,
BackupGroup,
decrypt_key,
}
};
use crate::{
REPO_URL_SCHEMA,
KEYFILE_SCHEMA,
KEYFD_SCHEMA,
BackupDir,
api_datastore_list_snapshots,
complete_backup_snapshot,
complete_backup_group,
complete_repository,
connect,
extract_repository_from_value,
record_repository,
keyfile_parameters,
};
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
group: {
type: String,
description: "Backup group.",
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// List backup snapshots.
async fn list_snapshots(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let output_format = get_output_format(&param);
let client = connect(&repo)?;
let group: Option<BackupGroup> = if let Some(path) = param["group"].as_str() {
Some(path.parse()?)
} else {
None
};
let mut data = api_datastore_list_snapshots(&client, repo.store(), group).await?;
record_repository(&repo);
let render_snapshot_path = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
Ok(snapshot.relative_path().to_str().unwrap().to_owned())
};
let render_files = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
let mut filenames = Vec::new();
for file in &item.files {
filenames.push(file.filename.to_string());
}
Ok(tools::format::render_backup_file_list(&filenames[..]))
};
let options = default_table_format_options()
.sortby("backup-type", false)
.sortby("backup-id", false)
.sortby("backup-time", false)
.column(ColumnConfig::new("backup-id").renderer(render_snapshot_path).header("snapshot"))
.column(ColumnConfig::new("size").renderer(tools::format::render_bytes_human_readable))
.column(ColumnConfig::new("files").renderer(render_files))
;
let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOTS;
format_and_print_result_full(&mut data, info, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// List snapshot files.
async fn list_snapshot_files(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let output_format = get_output_format(&param);
let client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/files", repo.store());
let mut result = client.get(&path, Some(json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
}))).await?;
record_repository(&repo);
let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOT_FILES;
let mut data: Value = result["data"].take();
let options = default_table_format_options();
format_and_print_result_full(&mut data, info, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
}
}
)]
/// Forget (remove) backup snapshots.
async fn forget_snapshots(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let mut client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());
let result = client.delete(&path, Some(json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
}))).await?;
record_repository(&repo);
Ok(result)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Group/Snapshot path.",
},
logfile: {
type: String,
description: "The path to the log file you want to upload.",
},
keyfile: {
schema: KEYFILE_SCHEMA,
optional: true,
},
"keyfd": {
schema: KEYFD_SCHEMA,
optional: true,
},
"crypt-mode": {
type: CryptMode,
optional: true,
},
}
}
)]
/// Upload backup log file.
async fn upload_log(param: Value) -> Result<Value, Error> {
let logfile = tools::required_string_param(&param, "logfile")?;
let repo = extract_repository_from_value(&param)?;
let snapshot = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = snapshot.parse()?;
let mut client = connect(&repo)?;
let (keydata, crypt_mode) = keyfile_parameters(&param)?;
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _created, _) = decrypt_key(&key, &crate::key::get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
};
let data = file_get_contents(logfile)?;
// fixme: how to sign the log?
let blob = match crypt_mode {
CryptMode::None | CryptMode::SignOnly => DataBlob::encode(&data, None, true)?,
CryptMode::Encrypt => DataBlob::encode(&data, crypt_config.as_ref().map(Arc::as_ref), true)?,
};
let raw_data = blob.into_inner();
let path = format!("api2/json/admin/datastore/{}/upload-backup-log", repo.store());
let args = json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
});
let body = hyper::Body::from(raw_data);
client.upload("application/octet-stream", body, &path, Some(args)).await
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// Show notes
async fn show_notes(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/notes", repo.store());
let args = json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
});
let output_format = get_output_format(&param);
let mut result = client.get(&path, Some(args)).await?;
let notes = result["data"].take();
if output_format == "text" {
if let Some(notes) = notes.as_str() {
println!("{}", notes);
}
} else {
format_and_print_result(
&json!({
"notes": notes,
}),
&output_format,
);
}
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
notes: {
type: String,
description: "The Notes.",
},
}
}
)]
/// Update Notes
async fn update_notes(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let notes = tools::required_string_param(&param, "notes")?;
let snapshot: BackupDir = path.parse()?;
let mut client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/notes", repo.store());
let args = json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
"notes": notes,
});
client.put(&path, Some(args)).await?;
Ok(Value::Null)
}
fn notes_cli() -> CliCommandMap {
CliCommandMap::new()
.insert(
"show",
CliCommand::new(&API_METHOD_SHOW_NOTES)
.arg_param(&["snapshot"])
.completion_cb("snapshot", complete_backup_snapshot),
)
.insert(
"update",
CliCommand::new(&API_METHOD_UPDATE_NOTES)
.arg_param(&["snapshot", "notes"])
.completion_cb("snapshot", complete_backup_snapshot),
)
}
pub fn snapshot_mgtm_cli() -> CliCommandMap {
CliCommandMap::new()
.insert("notes", notes_cli())
.insert(
"list",
CliCommand::new(&API_METHOD_LIST_SNAPSHOTS)
.arg_param(&["group"])
.completion_cb("group", complete_backup_group)
.completion_cb("repository", complete_repository)
)
.insert(
"files",
CliCommand::new(&API_METHOD_LIST_SNAPSHOT_FILES)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot)
)
.insert(
"forget",
CliCommand::new(&API_METHOD_FORGET_SNAPSHOTS)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot)
)
.insert(
"upload-log",
CliCommand::new(&API_METHOD_UPLOAD_LOG)
.arg_param(&["snapshot", "logfile"])
.completion_cb("snapshot", complete_backup_snapshot)
.completion_cb("logfile", tools::complete_file_name)
.completion_cb("keyfile", tools::complete_file_name)
.completion_cb("repository", complete_repository)
)
}
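Hedged usage note: once registered in the client binary, these surface as `proxmox-backup-client snapshot list|files|forget|upload-log|notes show|notes update ...`; the top-level registration in main.rs is not part of this diff.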


@ -48,7 +48,7 @@ async fn task_list(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let client = connect(&repo)?;
let limit = param["limit"].as_u64().unwrap_or(50) as usize;
let running = !param["all"].as_bool().unwrap_or(false);
@ -96,7 +96,7 @@ async fn task_log(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let upid = tools::required_string_param(&param, "upid")?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let client = connect(&repo)?;
display_task_log(client, upid, true).await?;
@ -122,9 +122,9 @@ async fn task_stop(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let upid_str = tools::required_string_param(&param, "upid")?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let mut client = connect(&repo)?;
let path = format!("api2/json/nodes/localhost/tasks/{}", upid_str);
let path = format!("api2/json/nodes/localhost/tasks/{}", tools::percent_encode_component(upid_str));
let _ = client.delete(&path, None).await?;
Ok(Value::Null)
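The percent-encoding fix matters because a UPID is itself a colon-separated string that may embed arbitrary worker IDs; encoding the component keeps those bytes from being misread as URL path separators or query syntax. The same encoding is applied to the task-log path later in this diff.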


@ -0,0 +1,219 @@
use anyhow::{Error};
use serde_json::Value;
use proxmox::{
api::{
api,
cli::*,
RpcEnvironment,
ApiHandler,
},
};
use proxmox_backup::{
api2::{
self,
types::{
CHANGER_ID_SCHEMA,
},
},
tape::{
complete_changer_path,
},
config::{
drive::{
complete_drive_name,
complete_changer_name,
}
},
};
pub fn changer_commands() -> CommandLineInterface {
let cmd_def = CliCommandMap::new()
.insert("scan", CliCommand::new(&API_METHOD_SCAN_FOR_CHANGERS))
.insert("list", CliCommand::new(&API_METHOD_LIST_CHANGERS))
.insert("config",
CliCommand::new(&API_METHOD_GET_CONFIG)
.arg_param(&["name"])
.completion_cb("name", complete_changer_name)
)
.insert(
"remove",
CliCommand::new(&api2::config::changer::API_METHOD_DELETE_CHANGER)
.arg_param(&["name"])
.completion_cb("name", complete_changer_name)
)
.insert(
"create",
CliCommand::new(&api2::config::changer::API_METHOD_CREATE_CHANGER)
.arg_param(&["name"])
.completion_cb("name", complete_drive_name)
.completion_cb("path", complete_changer_path)
)
.insert(
"update",
CliCommand::new(&api2::config::changer::API_METHOD_UPDATE_CHANGER)
.arg_param(&["name"])
.completion_cb("name", complete_changer_name)
.completion_cb("path", complete_changer_path)
)
.insert("status",
CliCommand::new(&API_METHOD_GET_STATUS)
.arg_param(&["name"])
.completion_cb("name", complete_changer_name)
)
.insert("transfer",
CliCommand::new(&api2::tape::changer::API_METHOD_TRANSFER)
.arg_param(&["name"])
.completion_cb("name", complete_changer_name)
)
;
cmd_def.into()
}
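Hedged usage note: wired into the proxmox-tape binary, this yields subcommands such as `proxmox-tape changer scan`, `proxmox-tape changer status <name>`, and `proxmox-tape changer transfer <name> ...`; the binary's top-level command map is not shown in this diff.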
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
},
},
)]
/// List changers
fn list_changers(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let info = &api2::config::changer::API_METHOD_LIST_CHANGERS;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("name"))
.column(ColumnConfig::new("path"))
.column(ColumnConfig::new("vendor"))
.column(ColumnConfig::new("model"))
.column(ColumnConfig::new("serial"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
},
},
)]
/// Scan for SCSI tape changers
fn scan_for_changers(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let info = &api2::tape::changer::API_METHOD_SCAN_CHANGERS;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("path"))
.column(ColumnConfig::new("vendor"))
.column(ColumnConfig::new("model"))
.column(ColumnConfig::new("serial"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
name: {
schema: CHANGER_ID_SCHEMA,
},
},
},
)]
/// Get tape changer configuration
fn get_config(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let info = &api2::config::changer::API_METHOD_GET_CONFIG;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("name"))
.column(ColumnConfig::new("path"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
name: {
schema: CHANGER_ID_SCHEMA,
},
},
},
)]
/// Get tape changer status
fn get_status(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let info = &api2::tape::changer::API_METHOD_GET_STATUS;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("entry-kind"))
.column(ColumnConfig::new("entry-id"))
.column(ColumnConfig::new("changer-id"))
.column(ColumnConfig::new("loaded-slot"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}


@ -0,0 +1,188 @@
use anyhow::Error;
use serde_json::Value;
use proxmox::{
api::{
api,
cli::*,
RpcEnvironment,
ApiHandler,
},
};
use proxmox_backup::{
api2::{
self,
types::{
DRIVE_ID_SCHEMA,
},
},
tape::{
complete_drive_path,
},
config::drive::{
complete_drive_name,
complete_changer_name,
complete_linux_drive_name,
},
};
pub fn drive_commands() -> CommandLineInterface {
let cmd_def = CliCommandMap::new()
.insert("scan", CliCommand::new(&API_METHOD_SCAN_FOR_DRIVES))
.insert("list", CliCommand::new(&API_METHOD_LIST_DRIVES))
.insert("config",
CliCommand::new(&API_METHOD_GET_CONFIG)
.arg_param(&["name"])
.completion_cb("name", complete_linux_drive_name)
)
.insert(
"remove",
CliCommand::new(&api2::config::drive::API_METHOD_DELETE_DRIVE)
.arg_param(&["name"])
.completion_cb("name", complete_linux_drive_name)
)
.insert(
"create",
CliCommand::new(&api2::config::drive::API_METHOD_CREATE_DRIVE)
.arg_param(&["name"])
.completion_cb("name", complete_drive_name)
.completion_cb("path", complete_drive_path)
.completion_cb("changer", complete_changer_name)
)
.insert(
"update",
CliCommand::new(&api2::config::drive::API_METHOD_UPDATE_DRIVE)
.arg_param(&["name"])
.completion_cb("name", complete_linux_drive_name)
.completion_cb("path", complete_drive_path)
.completion_cb("changer", complete_changer_name)
)
.insert(
"load",
CliCommand::new(&api2::tape::drive::API_METHOD_LOAD_SLOT)
.arg_param(&["drive"])
.completion_cb("drive", complete_linux_drive_name)
)
.insert(
"unload",
CliCommand::new(&api2::tape::drive::API_METHOD_UNLOAD)
.arg_param(&["drive"])
.completion_cb("drive", complete_linux_drive_name)
)
;
cmd_def.into()
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
},
},
)]
/// List drives
fn list_drives(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let info = &api2::config::drive::API_METHOD_LIST_DRIVES;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("name"))
.column(ColumnConfig::new("path"))
.column(ColumnConfig::new("changer"))
.column(ColumnConfig::new("vendor"))
.column(ColumnConfig::new("model"))
.column(ColumnConfig::new("serial"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
},
}
)]
/// Scan for drives
fn scan_for_drives(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let info = &api2::tape::drive::API_METHOD_SCAN_DRIVES;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("path"))
.column(ColumnConfig::new("vendor"))
.column(ColumnConfig::new("model"))
.column(ColumnConfig::new("serial"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
name: {
schema: DRIVE_ID_SCHEMA,
},
},
},
)]
/// Get tape drive configuration
fn get_config(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let info = &api2::config::drive::API_METHOD_GET_CONFIG;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("name"))
.column(ColumnConfig::new("path"))
.column(ColumnConfig::new("changer"))
.column(ColumnConfig::new("changer-drive-id"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}


@ -0,0 +1,8 @@
mod changer;
pub use changer::*;
mod drive;
pub use drive::*;
mod pool;
pub use pool::*;


@ -0,0 +1,137 @@
use anyhow::{Error};
use serde_json::Value;
use proxmox::{
api::{
api,
cli::*,
RpcEnvironment,
ApiHandler,
},
};
use proxmox_backup::{
api2::{
self,
types::{
MEDIA_POOL_NAME_SCHEMA,
},
},
config::{
drive::{
complete_drive_name,
},
media_pool::{
complete_pool_name,
},
},
};
pub fn pool_commands() -> CommandLineInterface {
let cmd_def = CliCommandMap::new()
.insert("list", CliCommand::new(&API_METHOD_LIST_POOLS))
.insert("config",
CliCommand::new(&API_METHOD_GET_CONFIG)
.arg_param(&["name"])
.completion_cb("name", complete_pool_name)
)
.insert(
"remove",
CliCommand::new(&api2::config::media_pool::API_METHOD_DELETE_POOL)
.arg_param(&["name"])
.completion_cb("name", complete_pool_name)
)
.insert(
"create",
CliCommand::new(&api2::config::media_pool::API_METHOD_CREATE_POOL)
.arg_param(&["name"])
.completion_cb("name", complete_pool_name)
.completion_cb("drive", complete_drive_name)
)
.insert(
"update",
CliCommand::new(&api2::config::media_pool::API_METHOD_UPDATE_POOL)
.arg_param(&["name"])
.completion_cb("name", complete_pool_name)
.completion_cb("drive", complete_drive_name)
)
;
cmd_def.into()
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
},
},
)]
/// List media pool
fn list_pools(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let info = &api2::config::media_pool::API_METHOD_LIST_POOLS;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("name"))
.column(ColumnConfig::new("drive"))
.column(ColumnConfig::new("allocation"))
.column(ColumnConfig::new("retention"))
.column(ColumnConfig::new("template"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
name: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
},
},
)]
/// Get media pool configuration
fn get_config(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let info = &api2::config::media_pool::API_METHOD_GET_CONFIG;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("name"))
.column(ColumnConfig::new("drive"))
.column(ColumnConfig::new("allocation"))
.column(ColumnConfig::new("retention"))
.column(ColumnConfig::new("template"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}


@ -3,6 +3,16 @@
//! This library implements the client side to access the backups
//! server using https.
use anyhow::Error;
use crate::{
api2::types::{Userid, Authid},
tools::ticket::Ticket,
auth_helpers::private_auth_key,
};
mod merge_known_chunks;
pub mod pipe_to_stream;
@ -31,3 +41,27 @@ mod backup_specification;
pub use backup_specification::*;
pub mod pull;
/// Connect to localhost:8007 as root@pam
///
/// This automatically creates a ticket if run as the 'root' user.
pub fn connect_to_localhost() -> Result<HttpClient, Error> {
let uid = nix::unistd::Uid::current();
let mut options = HttpClientOptions::new()
.prefix(Some("proxmox-backup".to_string()))
.verify_cert(false); // not required for connection to localhost
let client = if uid.is_root() {
let ticket = Ticket::new("PBS", Userid::root_userid())?
.sign(private_auth_key(), None)?;
options = options.password(Some(ticket));
HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
} else {
options = options.ticket_cache(true).interactive(true);
HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
};
Ok(client)
}
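A hedged usage sketch (repo-internal, so not self-contained; it assumes HttpClient::get takes (&mut self, path, Option<Value>), matching the put wrapper added further down):

    // e.g. from a CLI handler inside this crate:
    async fn print_version() -> Result<(), anyhow::Error> {
        let mut client = proxmox_backup::client::connect_to_localhost()?;
        let version = client.get("api2/json/version", None).await?;
        println!("{}", version);
        Ok(())
    }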


@ -475,6 +475,13 @@ impl BackupWriter {
Ok(index)
}
/// Retrieve backup time of last backup
pub async fn previous_backup_time(&self) -> Result<Option<i64>, Error> {
let data = self.h2.get("previous_backup_time", None).await?;
serde_json::from_value(data)
.map_err(|err| format_err!("Failed to parse backup time value returned by server - {}", err))
}
/// Download backup manifest (index.json) of last backup
pub async fn download_previous_manifest(&self) -> Result<BackupManifest, Error> {


@ -534,6 +534,15 @@ impl HttpClient {
self.request(req).await
}
pub async fn put(
&mut self,
path: &str,
data: Option<Value>,
) -> Result<Value, Error> {
let req = Self::request_builder(&self.server, self.port, "PUT", path, data)?;
self.request(req).await
}
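This new helper mirrors the existing get/post/delete wrappers and is what the snapshot `notes update` command above relies on for its PUT to the datastore notes endpoint.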
pub async fn download(
&mut self,
path: &str,


@ -3,6 +3,7 @@ use std::task::{Context, Poll};
use anyhow::{Error};
use futures::*;
use pin_project::pin_project;
use crate::backup::ChunkInfo;
@ -15,7 +16,9 @@ pub trait MergeKnownChunks: Sized {
fn merge_known_chunks(self) -> MergeKnownChunksQueue<Self>;
}
#[pin_project]
pub struct MergeKnownChunksQueue<S> {
#[pin]
input: S,
buffer: Option<MergedChunkInfo>,
}
@ -39,10 +42,10 @@ where
type Item = Result<MergedChunkInfo, Error>;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {
let this = unsafe { self.get_unchecked_mut() };
let mut this = self.project();
loop {
match ready!(unsafe { Pin::new_unchecked(&mut this.input) }.poll_next(cx)) {
match ready!(this.input.as_mut().poll_next(cx)) {
Some(Err(err)) => return Poll::Ready(Some(Err(err))),
None => {
if let Some(last) = this.buffer.take() {
@ -58,13 +61,13 @@ where
match last {
None => {
this.buffer = Some(MergedChunkInfo::Known(list));
*this.buffer = Some(MergedChunkInfo::Known(list));
// continue
}
Some(MergedChunkInfo::Known(mut last_list)) => {
last_list.extend_from_slice(&list);
let len = last_list.len();
this.buffer = Some(MergedChunkInfo::Known(last_list));
*this.buffer = Some(MergedChunkInfo::Known(last_list));
if len >= 64 {
return Poll::Ready(this.buffer.take().map(Ok));
@ -72,7 +75,7 @@ where
// continue
}
Some(MergedChunkInfo::New(_)) => {
this.buffer = Some(MergedChunkInfo::Known(list));
*this.buffer = Some(MergedChunkInfo::Known(list));
return Poll::Ready(last.map(Ok));
}
}
@ -80,7 +83,7 @@ where
MergedChunkInfo::New(chunk_info) => {
let new = MergedChunkInfo::New(chunk_info);
if let Some(last) = this.buffer.take() {
this.buffer = Some(new);
*this.buffer = Some(new);
return Poll::Ready(Some(Ok(last)));
} else {
return Poll::Ready(Some(Ok(new)));
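This refactor replaces the manual unsafe pin projection with the pin_project crate: #[pin_project] generates a safe project() method, and the pinned/unpinned split is declared per field. A standalone sketch of the pattern (hedged: a toy stream, not the real MergeKnownChunksQueue):

    use std::pin::Pin;
    use std::task::{Context, Poll};

    use futures::Stream;
    use pin_project::pin_project;

    // Toy stream that re-emits every input item once, buffering the duplicate.
    #[pin_project]
    struct Doubler<S> {
        #[pin]
        input: S,            // structurally pinned: projected as Pin<&mut S>
        buffer: Option<u32>, // plain field: projected as &mut Option<u32>
    }

    impl<S: Stream<Item = u32>> Stream for Doubler<S> {
        type Item = u32;

        fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<u32>> {
            let mut this = self.project(); // safe projection, no unsafe needed
            if let Some(v) = this.buffer.take() {
                return Poll::Ready(Some(v));
            }
            match this.input.as_mut().poll_next(cx) {
                Poll::Ready(Some(v)) => {
                    *this.buffer = Some(v); // stash the duplicate, like *this.buffer above
                    Poll::Ready(Some(v))
                }
                other => other,
            }
        }
    }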


@ -395,7 +395,7 @@ pub async fn pull_group(
tgt_store: Arc<DataStore>,
group: &BackupGroup,
delete: bool,
progress: Option<(usize, usize)>, // (groups_done, group_count)
progress: &mut StoreProgress,
) -> Result<(), Error> {
let path = format!("api2/json/admin/datastore/{}/snapshots", src_repo.store());
@ -418,18 +418,10 @@ pub async fn pull_group(
let mut remote_snapshots = std::collections::HashSet::new();
let (per_start, per_group) = if let Some((groups_done, group_count)) = progress {
let per_start = (groups_done as f64)/(group_count as f64);
let per_group = 1.0/(group_count as f64);
(per_start, per_group)
} else {
(0.0, 1.0)
};
// start with 16384 chunks (up to 65GB)
let downloaded_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*64)));
let snapshot_count = list.len();
progress.group_snapshots = list.len() as u64;
for (pos, item) in list.into_iter().enumerate() {
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
@ -469,9 +461,8 @@ pub async fn pull_group(
let result = pull_snapshot_from(worker, reader, tgt_store.clone(), &snapshot, downloaded_chunks.clone()).await;
let percentage = (pos as f64)/(snapshot_count as f64);
let percentage = per_start + percentage*per_group;
worker.log(format!("percentage done: {:.2}%", percentage*100.0));
progress.done_snapshots = pos as u64 + 1;
worker.log(format!("percentage done: {}", progress));
result?; // stop on error
}
@ -507,6 +498,8 @@ pub async fn pull_store(
let mut list: Vec<GroupListItem> = serde_json::from_value(result["data"].take())?;
worker.log(format!("found {} groups to sync", list.len()));
list.sort_unstable_by(|a, b| {
let type_order = a.backup_type.cmp(&b.backup_type);
if type_order == std::cmp::Ordering::Equal {
@ -523,12 +516,25 @@ pub async fn pull_store(
new_groups.insert(BackupGroup::new(&item.backup_type, &item.backup_id));
}
let group_count = list.len();
let mut progress = StoreProgress::new(list.len() as u64);
for (done, item) in list.into_iter().enumerate() {
progress.done_groups = done as u64;
progress.done_snapshots = 0;
progress.group_snapshots = 0;
for (groups_done, item) in list.into_iter().enumerate() {
let group = BackupGroup::new(&item.backup_type, &item.backup_id);
let (owner, _lock_guard) = tgt_store.create_locked_backup_group(&group, &auth_id)?;
let (owner, _lock_guard) = match tgt_store.create_locked_backup_group(&group, &auth_id) {
Ok(result) => result,
Err(err) => {
worker.log(format!("sync group {}/{} failed - group lock failed: {}",
item.backup_type, item.backup_id, err));
errors = true; // do not stop here, instead continue
continue;
}
};
// permission check
if auth_id != owner { // only the owner is allowed to create additional snapshots
worker.log(format!("sync group {}/{} failed - owner check failed ({} != {})",
@ -542,7 +548,7 @@ pub async fn pull_store(
tgt_store.clone(),
&group,
delete,
Some((groups_done, group_count)),
&mut progress,
).await {
worker.log(format!(
"sync group {}/{} failed - {}",
@ -556,7 +562,7 @@ pub async fn pull_store(
if delete {
let result: Result<(), Error> = proxmox::try_block!({
let local_groups = BackupGroup::list_groups(&tgt_store.base_path())?;
let local_groups = BackupInfo::list_backup_groups(&tgt_store.base_path())?;
for local_group in local_groups {
if new_groups.contains(&local_group) { continue; }
worker.log(format!("delete vanished group '{}/{}'", local_group.backup_type(), local_group.backup_id()));


@ -2,6 +2,7 @@ use anyhow::{bail, Error};
use serde_json::json;
use super::HttpClient;
use crate::tools;
pub async fn display_task_log(
client: HttpClient,
@ -9,7 +10,7 @@ pub async fn display_task_log(
strip_date: bool,
) -> Result<(), Error> {
let path = format!("api2/json/nodes/localhost/tasks/{}/log", upid_str);
let path = format!("api2/json/nodes/localhost/tasks/{}/log", tools::percent_encode_component(upid_str));
let mut start = 1;
let limit = 500;


@ -24,6 +24,8 @@ pub mod sync;
pub mod token_shadow;
pub mod user;
pub mod verify;
pub mod drive;
pub mod media_pool;
/// Check configuration directory permissions
///

src/config/drive.rs (new file, 145 lines)

@ -0,0 +1,145 @@
use std::collections::HashMap;
use anyhow::{bail, Error};
use lazy_static::lazy_static;
use proxmox::{
api::{
schema::*,
section_config::{
SectionConfig,
SectionConfigData,
SectionConfigPlugin,
},
},
tools::fs::{
open_file_locked,
replace_file,
CreateOptions,
},
};
use crate::{
api2::types::{
DRIVE_ID_SCHEMA,
VirtualTapeDrive,
LinuxTapeDrive,
ScsiTapeChanger,
},
};
lazy_static! {
pub static ref CONFIG: SectionConfig = init();
}
fn init() -> SectionConfig {
let mut config = SectionConfig::new(&DRIVE_ID_SCHEMA);
let obj_schema = match VirtualTapeDrive::API_SCHEMA {
Schema::Object(ref obj_schema) => obj_schema,
_ => unreachable!(),
};
let plugin = SectionConfigPlugin::new("virtual".to_string(), Some("name".to_string()), obj_schema);
config.register_plugin(plugin);
let obj_schema = match LinuxTapeDrive::API_SCHEMA {
Schema::Object(ref obj_schema) => obj_schema,
_ => unreachable!(),
};
let plugin = SectionConfigPlugin::new("linux".to_string(), Some("name".to_string()), obj_schema);
config.register_plugin(plugin);
let obj_schema = match ScsiTapeChanger::API_SCHEMA {
Schema::Object(ref obj_schema) => obj_schema,
_ => unreachable!(),
};
let plugin = SectionConfigPlugin::new("changer".to_string(), Some("name".to_string()), obj_schema);
config.register_plugin(plugin);
config
}
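With three plugins registered against the same schema, a single /etc/proxmox-backup/tape.cfg can hold virtual drives, Linux tape drives, and changers side by side. A hedged sketch of the section-config syntax this parses (section type, then the section ID, then indented key/value pairs; all values below are illustrative):

    virtual: vtape0
        path /var/lib/vtape

    linux: drive0
        path /dev/tape/by-id/scsi-EXAMPLE-nst
        changer sl3

    changer: sl3
        path /dev/tape/by-id/scsi-EXAMPLE-changer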
pub const DRIVE_CFG_FILENAME: &str = "/etc/proxmox-backup/tape.cfg";
pub const DRIVE_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.tape.lck";
pub fn lock() -> Result<std::fs::File, Error> {
open_file_locked(DRIVE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)
}
pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
let content = proxmox::tools::fs::file_read_optional_string(DRIVE_CFG_FILENAME)?;
let content = content.unwrap_or(String::from(""));
let digest = openssl::sha::sha256(content.as_bytes());
let data = CONFIG.parse(DRIVE_CFG_FILENAME, &content)?;
Ok((data, digest))
}
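Returning the SHA-256 digest alongside the parsed data enables optimistic concurrency: an update handler can require the caller to echo the digest it read and refuse to write if the file changed in between. The media-pool config below follows the same pattern.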
pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
let raw = CONFIG.write(DRIVE_CFG_FILENAME, &config)?;
let backup_user = crate::backup::backup_user()?;
let mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
// set the correct owner/group/permissions while saving file
// owner(rw) = root, group(r)= backup
let options = CreateOptions::new()
.perm(mode)
.owner(nix::unistd::ROOT)
.group(backup_user.gid);
replace_file(DRIVE_CFG_FILENAME, raw.as_bytes(), options)?;
Ok(())
}
pub fn check_drive_exists(config: &SectionConfigData, drive: &str) -> Result<(), Error> {
match config.sections.get(drive) {
Some((section_type, _)) => {
if !(section_type == "linux" || section_type == "virtual") {
bail!("Entry '{}' exists, but is not a tape drive", drive);
}
}
None => bail!("Drive '{}' does not exist", drive),
}
Ok(())
}
// shell completion helper
/// List all drive names
pub fn complete_drive_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter()
.map(|(id, _)| id.to_string())
.collect(),
Err(_) => return vec![],
}
}
/// List Linux tape drives
pub fn complete_linux_drive_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter()
.filter(|(_id, (section_type, _))| {
section_type == "linux"
})
.map(|(id, _)| id.to_string())
.collect(),
Err(_) => return vec![],
}
}
/// List Scsi tape changer names
pub fn complete_changer_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter()
.filter(|(_id, (section_type, _))| {
section_type == "changer"
})
.map(|(id, _)| id.to_string())
.collect(),
Err(_) => return vec![],
}
}

src/config/media_pool.rs (new file, 88 lines)

@ -0,0 +1,88 @@
use std::collections::HashMap;
use anyhow::Error;
use lazy_static::lazy_static;
use proxmox::{
api::{
schema::*,
section_config::{
SectionConfig,
SectionConfigData,
SectionConfigPlugin,
}
},
tools::fs::{
open_file_locked,
replace_file,
CreateOptions,
},
};
use crate::{
api2::types::{
MEDIA_POOL_NAME_SCHEMA,
MediaPoolConfig,
},
};
lazy_static! {
static ref CONFIG: SectionConfig = init();
}
fn init() -> SectionConfig {
let mut config = SectionConfig::new(&MEDIA_POOL_NAME_SCHEMA);
let obj_schema = match MediaPoolConfig::API_SCHEMA {
Schema::Object(ref obj_schema) => obj_schema,
_ => unreachable!(),
};
let plugin = SectionConfigPlugin::new("pool".to_string(), Some("name".to_string()), obj_schema);
config.register_plugin(plugin);
config
}
pub const MEDIA_POOL_CFG_FILENAME: &'static str = "/etc/proxmox-backup/media-pool.cfg";
pub const MEDIA_POOL_CFG_LOCKFILE: &'static str = "/etc/proxmox-backup/.media-pool.lck";
pub fn lock() -> Result<std::fs::File, Error> {
open_file_locked(MEDIA_POOL_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)
}
pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
let content = proxmox::tools::fs::file_read_optional_string(MEDIA_POOL_CFG_FILENAME)?;
let content = content.unwrap_or(String::from(""));
let digest = openssl::sha::sha256(content.as_bytes());
let data = CONFIG.parse(MEDIA_POOL_CFG_FILENAME, &content)?;
Ok((data, digest))
}
pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
let raw = CONFIG.write(MEDIA_POOL_CFG_FILENAME, &config)?;
let backup_user = crate::backup::backup_user()?;
let mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
// set the correct owner/group/permissions while saving file
// owner(rw) = root, group(r)= backup
let options = CreateOptions::new()
.perm(mode)
.owner(nix::unistd::ROOT)
.group(backup_user.gid);
replace_file(MEDIA_POOL_CFG_FILENAME, raw.as_bytes(), options)?;
Ok(())
}
// shell completion helper
/// List existing pool names
pub fn complete_pool_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter().map(|(id, _)| id.to_string()).collect(),
Err(_) => return vec![],
}
}

Some files were not shown because too many files have changed in this diff.