Compare commits


162 Commits

Author SHA1 Message Date
2d87f2fb73 bump version to 1.0.6-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-11 14:19:28 +01:00
4c81273274 debian: just install whole images directory
fixes build for recently added tape icon (and includes it for real)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-11 14:19:28 +01:00
73b8f6793e tape: add svg icon 2020-12-11 13:02:23 +01:00
663ef85992 tape: use WorkerTask for erase and rewind 2020-12-11 11:19:33 +01:00
e92c75815b tape: split inventory api
inventory: sync, list labels with uuids,
update_inventory: WorkerTask, updates database
2020-12-11 10:42:29 +01:00
6dbad5b4b5 tape: run label commands as WorkerTask (threads) 2020-12-11 09:10:22 +01:00
bff7e3f3e4 tape: implement barcode-label-media 2020-12-11 07:50:19 +01:00
83abc7497d tape: implement inventory command 2020-12-11 07:39:28 +01:00
8bc5eebeb8 depend on package mt-st
We do not use the mt utility directly, but the package also provides
a udev helper to correctly initialize tape drives (stinit). Also,
the mt utility is helpful for debugging tape issues.
2020-12-11 06:38:45 +01:00
1433b96ba0 control.in: fix indentation
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-12-11 06:31:30 +01:00
be1a8c94ae fix build: add missing file 2020-12-10 13:40:20 +01:00
4606f34353 tape: implement read-label command 2020-12-10 13:20:39 +01:00
7bb720cb4d tape: implement label command 2020-12-10 12:30:27 +01:00
c4d8542ec1 tape: add media pool handling 2020-12-10 11:41:35 +01:00
9700d5374a tape: add media pool cli 2020-12-10 11:13:12 +01:00
05e90d6463 tape: add media pool config api 2020-12-10 10:52:27 +01:00
55118ca18e tape: correctly sort drive api subdir 2020-12-10 10:09:12 +01:00
f70d8091d3 tape: implement option changer-drive-id 2020-12-10 09:09:06 +01:00
a3c709ef21 tape: cli cleanup - avoid api redefinition 2020-12-10 08:35:11 +01:00
4917f1e2d4 tape: implement delete property for drive update command 2020-12-10 08:25:46 +01:00
93829fc680 tape: cleanup load-slot api 2020-12-10 08:04:55 +01:00
5605ca5619 tape: cli cleanup - rename scan-for-* into scan 2020-12-10 07:58:45 +01:00
e49f0c03d9 tape: implement load-media command 2020-12-10 07:52:56 +01:00
0098b712a5 tape: implement eject 2020-12-09 17:50:48 +01:00
5fb694e8c0 tape: implement rewind 2020-12-09 17:43:38 +01:00
583a68a446 tape: implement erase media 2020-12-09 17:35:31 +01:00
e6604cf391 tape: add command line interface proxmox-tape 2020-12-09 13:00:20 +01:00
43cfb3c35a tape: do not remove changer while still used 2020-12-09 12:55:54 +01:00
8a16c571d2 tape: add changer property to drive create api 2020-12-09 12:55:10 +01:00
314652a499 tape: set protected flag for configuration change api methods 2020-12-09 12:02:55 +01:00
6b68e5d597 client: move connect_to_localhost into client module 2020-12-09 11:59:50 +01:00
cafd51bf42 tape: add media state database 2020-12-09 11:21:56 +01:00
eaff09f483 update control file 2020-12-09 11:21:56 +01:00
9b93c62044 remove unused descriptions from api macros
these are now a hard error in the api macro

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-12-09 10:55:18 +01:00
5d90860688 tape: expose basic tape/changer functionality at api2/tape/ 2020-12-08 15:42:50 +01:00
5ba83ed099 tape: check digest on config update 2020-12-08 11:24:38 +01:00
50bf10ad56 tape: add changer configuration API 2020-12-08 09:04:56 +01:00
16d444c979 tape: add tape drive configuration API 2020-12-07 13:04:32 +01:00
fa9c9be737 tape: add tape device driver 2020-12-07 08:29:22 +01:00
2e7014e31d tape: add BlockedReader/BlockedWriter streams
This is the basic format used to write data to tapes.
2020-12-06 12:09:55 +01:00
a84050c1f0 tape: add BlockHeader impl 2020-12-06 10:26:24 +01:00
7c9835465e tape: add helpers to emulate tape read/write behavior 2020-12-06 09:41:16 +01:00
ec00200411 fix bug #3189: fix change_password permission checks, run protected 2020-12-05 16:20:29 +01:00
956e5fec1f depend on mtx (tape changer control)
A very small package with no additional dependencies.
2020-12-05 14:54:12 +01:00
b107fdb99a tape: add tape changer support using 'mtx' command 2020-12-05 14:54:12 +01:00
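
To make the changer control concrete: mtx is a small CLI driven entirely
through its -f <device> argument (status/load/unload). A minimal sketch,
not code from this tree, of invoking it from Rust:

    use std::process::Command;

    /// Hypothetical helper: query changer status via the `mtx` CLI
    /// (`mtx -f <device> status` lists slots and loaded drives).
    fn mtx_status(changer_device: &str) -> std::io::Result<String> {
        let output = Command::new("mtx")
            .args(&["-f", changer_device, "status"])
            .output()?;
        if !output.status.success() {
            return Err(std::io::Error::new(
                std::io::ErrorKind::Other,
                String::from_utf8_lossy(&output.stderr).into_owned(),
            ));
        }
        Ok(String::from_utf8_lossy(&output.stdout).into_owned())
    }
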
7320e9ff4b tape: add media inventory 2020-12-05 12:54:15 +01:00
c4d2d54a6d tape: define useful constants 2020-12-05 12:20:46 +01:00
1142350e8d tape: add media pool config 2020-12-05 11:59:38 +01:00
d735b31345 tape: add tape read trait 2020-12-05 10:54:38 +01:00
e211fee562 tape: add tape write trait 2020-12-05 10:51:34 +01:00
8c15560b68 tape: add file format definitions 2020-12-05 10:45:08 +01:00
327e93711f commit missing file: tape api type definitions 2020-12-04 16:00:52 +01:00
a076571470 tape support: add drive configuration 2020-12-04 15:42:32 +01:00
ff50c07ebf start experimental tape management GUI
You need to set the environment variable TEST_TAPE_GUI=1 to enable this.
The current GUI is only a placeholder.
2020-12-04 12:50:08 +01:00
179145dc24 backup/datastore: move manifest locking to /run
this fixes the issue that on some filesystems, you cannot recursively
remove a directory when you hold a lock on a file inside (e.g. nfs/cifs)

it is not really backwards compatible (so during an upgrade, two
daemons could hold the lock), but since the locking was broken before
(see previous patch) it should not really matter (it also seems very
unlikely that someone will trigger this)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-12-03 09:56:42 +01:00
6bd0a00c46 backup/datastore: really lock manifest on delete
'lock_manifest' returns a Result<File, Error>, so we always got the result,
even when we did not get the lock, but we acted as if we had.

bubble the locking error up

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-12-02 14:37:05 +01:00
f6e28f4e62 client/pull: log how many groups to pull were found
if no groups were found, the task log was very confusing, as it
contained no real information about why nothing was synced, e.g.:

 Starting datastore sync job 'remote:datastore:local-datastore:s-79412799-e6ee'
 Sync datastore 'local-datastore' from 'remote/datastore'
 sync job 'remote:datastore:local-datastore:s-79412799-e6ee' end
 TASK OK

this patch simply logs how many groups were found and are about to be synced:

 Starting datastore sync job 'remote:datastore:local-datastore:s-79412799-e6ee'
 Sync datastore 'local-datastore' from 'remote/datastore'
 found 0 groups to sync
 sync job 'remote:datastore:local-datastore:s-79412799-e6ee' end
 TASK OK

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-12-02 07:22:50 +01:00
37f1b7dd8d docs: add more thoughts about chunk size 2020-12-01 10:28:06 +01:00
60e6ee46de doc: add some thoughts about large chunk sizes 2020-12-01 08:47:15 +01:00
2260f065d4 cleanup: use extra file for StoreProgress 2020-12-01 06:34:33 +01:00
6eff8dec4f cleanup: remove unnecessary StoreProgress clone() 2020-12-01 06:29:11 +01:00
7e25b9aaaa verify: use same progress as pull
percentage of verified groups, interpolating based on snapshot count
within the group. in most cases, this will also be closer to the 'real'
progress, since added snapshots (those which will be verified) in active
backup groups will be roughly evenly distributed, while the number of total
snapshots per group will be heavily skewed towards those groups which
have existed the longest, even though most of those old snapshots will
only be re-verified very infrequently.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:22:55 +01:00
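
The interpolation described above reduces to a small formula; a minimal
sketch with hypothetical names, not the actual implementation:

    /// Overall progress: finished groups plus the position within the
    /// currently processed group, normalized by the group count.
    fn interpolated_progress(
        done_groups: u64,
        done_snapshots_in_group: u64,
        snapshots_in_group: u64,
        total_groups: u64,
    ) -> f64 {
        if total_groups == 0 {
            return 1.0;
        }
        let within_group = if snapshots_in_group == 0 {
            0.0
        } else {
            done_snapshots_in_group as f64 / snapshots_in_group as f64
        };
        (done_groups as f64 + within_group) / total_groups as f64
    }

    // e.g. 2 of 4 groups done, 5 of 10 snapshots in the third group:
    // (2.0 + 0.5) / 4.0 = 0.625, i.e. 62.5%
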
f867ef9c4a progress: add format variants
for iterating over a single group, or iterating just on the group level

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:22:12 +01:00
fc8920e35d pull: factor out interpolated progress
and add group/snapshot count info.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:13:11 +01:00
7f3b0f67e7 remove BackupGroup::list_groups
BackupInfo::list_backup_groups is identical code-wise, and makes more
sense as entry point for listing groups.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:09:44 +01:00
844660036b gc: don't limit index listing to same filesystem
WalkDir does not follow symlinks by default anyway, and this behaviour
is not documented anywhere. e.g., if a sysadmin mounts 'extra storage'
for some backup group or type (not knowing that only metadata is stored
in those directories), GC will ignore all the indices contained within
and happily garbage collect their chunks..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:07:09 +01:00
efcac39d34 gc: remove duplicate variable
list_images already returns absolute paths, we don't need to prepend
anything.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:06:51 +01:00
cb4b721cb0 gc: log index files found outside of expected scheme
for safety reasons, GC finds and marks all index files below the
datastore base path. as a result of regular operations, only index files
within the expected scheme of <TYPE>/<ID>/<TIMESTAMP> should exist.

add a small check + warning if the index list contains index files
outside of this expected scheme, so that an admin with shell access can
investigate.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:06:17 +01:00
7956877f14 gc: shorten progress messages
we have messages starting the phases anyway, and limit the number of
progress updates so that context remains available at all times.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:04:13 +01:00
2241c6795f d/control bump
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 14:28:02 +01:00
43e60ceb41 file logger: remove test.log after test as well
and a doc formatting fixup

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 14:13:21 +01:00
b760d8a23f derive PartialEq for Userid
the manual implementation is equivalent

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 14:10:17 +01:00
2c1592263d tiny clippy hint
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 14:03:43 +01:00
616533823c don't enforce Vec and String in tools::join
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 13:56:59 +01:00
913dddea85 minor cleanup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 13:56:21 +01:00
3530430365 tools: avoid unnecessary copying of parameters/properties 2020-11-30 13:53:49 +01:00
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 13:53:49 +01:00
a4ba60be8f minor cleanups
whitespace, formatting and superfluous lifetime annotations

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 13:47:31 +01:00
99e98f605c network helpers: fix fd leak in get_network_interfaces
This one always leaked.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
935ee97b17 use fd_change_cloexec helper
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
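
For illustration, a sketch of what such a helper does, assuming the nix
crate's fcntl wrappers (the real helper lives elsewhere in the stack):

    use std::os::unix::io::RawFd;

    use nix::fcntl::{fcntl, FcntlArg, FdFlag};

    /// Toggle the close-on-exec flag on an already-open descriptor.
    fn fd_change_cloexec(fd: RawFd, on: bool) -> nix::Result<()> {
        let flags = if on { FdFlag::FD_CLOEXEC } else { FdFlag::empty() };
        fcntl(fd, FcntlArg::F_SETFD(flags))?;
        Ok(())
    }
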
6b9bfd7fe9 minor cleanup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
dd519bbad1 pxar: stricter file descriptor guards
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
35fe981c7d client: use tools::pipe instead of nix
nix::unistd::pipe returns unguarded RawFds which should be
avoided

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
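
The point of the change, sketched with a hypothetical guard type and the
nix API of that era (pipe() returning a RawFd pair): wrap each descriptor
so it is closed on drop and cannot leak on an early return:

    use std::os::unix::io::RawFd;

    /// Minimal fd guard: closes the descriptor when dropped.
    struct Fd(RawFd);

    impl Drop for Fd {
        fn drop(&mut self) {
            // Nothing useful to do on a failed close here; ignore it.
            let _ = nix::unistd::close(self.0);
        }
    }

    fn guarded_pipe() -> nix::Result<(Fd, Fd)> {
        let (read_end, write_end) = nix::unistd::pipe()?;
        Ok((Fd(read_end), Fd(write_end)))
    }
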
b6570abe79 changes for proxmox 0.8
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
54813c650e bump proxmox dep to 0.8.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
781106f8c5 ui: fix usage of findRecord
findRecord does not match exactly, but only at the beginning and
case-insensitively, by default. Change all calls to be case-sensitive
and an exact match (we never want the default behaviour, afaics).

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-27 07:20:32 +01:00
96f35520a0 bump version to 1.0.5-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-25 15:30:06 +01:00
490560e0c6 restore: print to STDERR
else restoring to STDOUT is broken..

Reported-by: Dominic Jäger <d.jaeger@proxmox.com>

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-25 14:38:02 +01:00
52f53d8280 control: update versions 2020-11-25 10:35:51 +01:00
27b8a3f671 bump version to 1.0.4-1 2020-11-25 08:03:11 +01:00
abf9b6da42 docs: fix renamed commands 2020-11-25 08:03:11 +01:00
0c9209b04c cli: rename command "upload-log" to "snapshot upload-log" 2020-11-25 07:57:39 +01:00
edebd52374 cli: rename command "forget" to "snapshot forget" 2020-11-25 07:57:39 +01:00
61205f00fb cli: rename command "files" to "snapshot files" 2020-11-25 07:57:39 +01:00
a303e00289 fingerprint: add new() method 2020-11-25 07:57:39 +01:00
af9f72e9d8 fingerprint: add bytes() accessor
needed for libproxmox-backup-qemu0

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-25 06:34:34 +01:00
5176346b30 ui: fix broken gettext use
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-25 00:21:17 +01:00
731eeef25b cli: use new alias feature for "snapshots"
Now maps to "snapshot list".
2020-11-24 13:26:43 +01:00
a65e3e4bc0 client: add 'snapshot notes show/update' command
to show and update snapshot notes from the cli

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-24 11:44:19 +01:00
027eb2bbe6 bump version to 1.0.3-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-24 08:56:18 +01:00
6982a54701 gui: add snapshot/file fingerprint tooltip
display short key ID, like backend's Display trait.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
035c40e638 list_snapshots: return manifest fingerprint
for display in clients.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
79c535955d refactor BackupInfo -> SnapshotListItem helper
before adding more fields to the tuple, let's just create the struct
inside the match arms to improve readability.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
8b7f8d3f3d expose previous backup time in backup env
and use this information to add more information to client backup log
and guide the download manifest decision.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
866c859a1e bump version to 1.0.2-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-24 08:33:20 +01:00
23e4e90540 verification: fix message in notification mail
the errors Vec can contain failed groups as well (e.g., if a group has
no or an invalid owner).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:33:20 +01:00
a4fa3fc241 verification job: log failed dirs
else users have to manually search through a potentially very long task
log to find the entries that are different.. this is the same summary
printed at the end of a manual verify task.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:33:20 +01:00
81d10c3b37 cleanup: remove dead code 2020-11-24 08:03:00 +01:00
f1e2904150 paperkey: refactor common code
from formatting functions to main function, and pass along the key data
lines instead of the full string.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:57:21 +01:00
23f9503a31 client: check fingerprint after downloading manifest
this is stricter than the check that happened on manifest load, as it
also fails if the manifest is signed but we don't have a key available.

add some additional output at the start of a backup to indicate whether
a previous manifest is available to base the backup on.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:55:12 +01:00
a0ef68b93c manifest: check fingerprint when loading with key
otherwise loading will run into the signature mismatch which is
technically true, but not the complete picture in this case.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:49:51 +01:00
6b127e6ea0 fix #3139: add key fingerprint to manifest
if the manifest is signed/the contained archives/blobs are encrypted.
stored in 'unprotected' area, since there is already a strong binding
between key and manifest via the signature, and this avoids breaking
backwards compatibility for a simple usability improvement.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:45:11 +01:00
5e17dbf2bb cli: cleanup 'key show' - use format_and_print_result_full
We now expose all key derivation functions on the cli, so users can
choose between scrypt or pbkdf2.
2020-11-24 07:32:34 +01:00
dfb04575ad client: add 'key show' command
for (pretty-)printing a keyfile.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:15:29 +01:00
6f2626ae19 client: print key fingerprint and master key
for operations where it makes sense.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:11:26 +01:00
37e60ddcde key: add fingerprint to key config
and set/generate it on
- key creation
- key passphrase change
- key decryption if not already set
- key encryption with master key

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:03:46 +01:00
05cdc05347 crypt config: add fingerprint mechanism
by computing the ID digest of a hash of a static string.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:03:16 +01:00
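
A minimal sketch of the idea, where encrypt_static stands in for encrypting
a static string with the key material (the exact input and digest used by
the real code are assumptions here):

    use openssl::sha::sha256;

    /// Derive a key identifier from what the key does to a fixed input:
    /// same key => same fingerprint, without revealing the key itself.
    fn key_fingerprint(encrypt_static: impl Fn(&[u8]) -> Vec<u8>) -> [u8; 32] {
        const FINGERPRINT_INPUT: &[u8] = b"fingerprint input";
        sha256(&encrypt_static(FINGERPRINT_INPUT))
    }
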
6364115b4b OnlineHelpInfo.js problems
Anybody know why I always get the following diff:
2020-11-23 12:57:41 +01:00
2133cd9103 update debian/control 2020-11-23 12:13:58 +01:00
01f84fcce1 ui: datastore content: use our keep field for group pruning
sets some defaults and provides the clear trigger, so less code and
slightly nicer UX.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-21 19:52:03 +01:00
08b3823025 bump dependency on proxmox to 0.7.1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-20 17:38:34 +01:00
968a0ab261 fix systemd-encoded upid strings in http client
since we systemd-encode parts of the upid string, and those can contain
characters that are invalid in urls (e.g. '\'), we have to percent encode
those

add a 'percent_encode_component' helper, so that we can maybe change
the AsciiSet for all uses at the same time

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-19 11:01:19 +01:00
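
A sketch of such a helper using the percent-encoding crate (already a
dependency); the exact AsciiSet chosen by the real helper is an assumption:

    use percent_encoding::{utf8_percent_encode, NON_ALPHANUMERIC};

    /// Encode everything that is not alphanumeric, so characters like '\'
    /// in a systemd-encoded UPID are safe inside a URL path component.
    fn percent_encode_component(comp: &str) -> String {
        utf8_percent_encode(comp, NON_ALPHANUMERIC).to_string()
    }
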
21b552848a prune sim: make numberfields more similar to PBS's
by creating a new class that adds a clear trigger and also uses the
clear-trigger image. Code was taken from the one in PBS's prune window,
but we have default values here, so a bit of adapting was necessary. For
example, we don't want to reset to the original value (which might have
been one of the defaults) when clearing, but always to 'null'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-19 09:47:51 +01:00
fd19256470 gc: treat .bad files like regular chunks
Simplify the phase 2 code by treating .bad files just like regular
chunks, with the exception of stat logging.

To facilitate this, we need to touch .bad files in phase 1. We only do this
under the condition that 1) the original chunk is missing (as before),
and 2) the original chunk is still referenced somewhere (since the code
lives in the error handler for a failed chunk touch, it only gets called
for chunks we expect to be there, i.e. ones that are referenced).

Untouched they will then be cleaned up after 24 hours (or after the last
longer-running task finishes).

Reason 2) is also a fix for .bad files not being cleaned up at all if
the original is no longer referenced anywhere (e.g. a user deleting all
snapshots after seeing some corrupt chunks appear).

cond_touch_path is introduced to touch arbitrary paths in the chunk
store with the same logic as touching chunks.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-18 14:04:49 +01:00
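
A sketch of the cond_touch_path idea using today's std file-times API (the
real code goes through utimensat): update the timestamps if the path exists
and report whether it was present at all:

    use std::fs::{File, FileTimes};
    use std::path::Path;
    use std::time::SystemTime;

    fn cond_touch_path(path: &Path) -> std::io::Result<bool> {
        let file = match File::options().write(true).open(path) {
            Ok(file) => file,
            Err(err) if err.kind() == std::io::ErrorKind::NotFound => {
                return Ok(false); // missing is not an error, just report it
            },
            Err(err) => return Err(err),
        };
        let now = SystemTime::now();
        file.set_times(FileTimes::new().set_accessed(now).set_modified(now))?;
        Ok(true)
    }
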
1ed022576c api: include store in invalid owner errors
since a group might exist in plenty of stores

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:11:24 +01:00
f6aa7b38bf drop now unused BackupInfo::list_backups
all global backup listing now happens via BackupGroup

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:11:21 +01:00
fdfcb74d67 api: filter snapshot counts
unprivileged users should only see the counts related to their part of
the datastore.

while we're at it, switch to a list-groups, filter-groups, count-snapshots
approach (like list_snapshots) to speed up calls to this endpoint when many
unprivileged users share a datastore.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:05:50 +01:00
98afc7b152 api: make expensive parts of datastore status opt-in
used in the PBS GUI, but also for PVE usage queries which don't need all
the extra expensive information..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:05:47 +01:00
0d08fceeb9 improve group/snapshot listing
by listing groups first, then filtering, then listing group snapshots.

this cuts down the number of openat/getdirents calls for users that just
have a partial view of the datastore.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 10:37:04 +01:00
3c945d73c2 client/http_client: add put method
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 16:59:14 +01:00
58fcbf5ab7 client: expose all-file-systems option
Useful to avoid the need for a long (and possibly changing) list of include-dev
options in certain situations, e.g. nested ZFS file systems. The option is
already implemented and seems to work as expected. The checks for virtual
filesystems are not affected by this option.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-16 16:59:14 +01:00
3a3f31c947 ui: datastores: hide "no datastore" box by default
avoids showing it during store load; at that point we do not know yet
whether there are no datastores, and we already have a loading mask.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-16 16:59:14 +01:00
8fc63287df ui: improve comment behaviour for datastore Summary
when we could not load the config (e.g., missing permissions), show
the comment from the global datastore list

also show a message box for a load error instead of setting the text
of the comment box

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 10:39:34 +01:00
172473e4de ui: DataStoreList: show message when there are no datastores
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 10:39:34 +01:00
76f549debb ui: DataStoreList: remove datastores also from hash
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 10:39:34 +01:00
c9097ff801 pxar: avoid including archive root's exclude patterns in .pxarexclude-cli
The patterns from the archive root's .pxarexclude file are already present in
self.patterns when encode_pxarexclude_cli is called. Pass along the number of
CLI patterns and slice accordingly.

Suggested-By: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 13:05:09 +01:00
fb01fd3af6 visibility cleanups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:53:50 +01:00
fa4bcbcad0 pxar: only generate .pxarexclude-cli if there were CLI parameters
previously, a .pxarexclude entry in the root of the archive caused the file
to be generated as well, because the patterns are read before calling
generate_directory_file_list, and within the function it wasn't possible to
distinguish between a pattern coming from the CLI and a pattern coming from
archive/root/.pxarexclude

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:18:08 +01:00
189cdb7427 pxar: include .pxarexclude files in the archive
The documentation states:
.pxarexclude files are treated as regular files and will be included in the
backup archive.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:18:06 +01:00
874bd5454d pxar: fix anchored exclusion at archive root
There is no leading slash in an entry's full_path, causing an anchored
exclude at the root level to fail; e.g., having "/name" as the content of
the file archive/root/.pxarexclude didn't match the file archive/root/name.

Fix this by prepending a leading slash before matching.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:18:04 +01:00
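
The fix boils down to one normalization step; a sketch with is_excluded
standing in for the real pathpatterns-based matcher:

    /// Match against "/<relative path>" so anchored patterns like "/name"
    /// apply at the archive root.
    fn matches_anchored(full_path: &str, is_excluded: impl Fn(&str) -> bool) -> bool {
        // full_path carries no leading slash (e.g. "name" for
        // archive/root/name), so prepend one before matching.
        let absolute = format!("/{}", full_path);
        is_excluded(&absolute)
    }
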
b649887e9a remove unused function
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:15:15 +01:00
8c62c15f56 followup: whitespace cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 11:02:45 +01:00
51ac17b56e api: apt/versions: fix running_kernel string for unknown package case
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-11-12 11:02:20 +01:00
fc5a012068 manager: versions: non-verbose should actually print server pkg info
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 10:28:03 +01:00
5e293f1315 apt: use typed response for get_versions
...and cleanup get_versions for manager CLI.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-12 10:15:32 +01:00
87367decf2 ui: tell ESLint to be strict in check target
the .lint-incremental target, which is implicitly used by the install
target, is still more forgiving to allow faster "change, build, test"
iteration when developing.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 09:54:39 +01:00
f792220dd4 d/control: update for new pin-project dependency
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 09:54:39 +01:00
97030c9407 cleanup clippy leftovers
this used to contain a pointer cast, now it doesn't

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 09:43:38 +01:00
5d1d0f5d6c use pin-project to remove more unsafe blocks
we already have it in our dependency tree, so use it

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 09:43:38 +01:00
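
A generic example of the pattern (not code from this tree): pin-project
derives the pin projection that would otherwise need a hand-written unsafe
block:

    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    use pin_project::pin_project;

    #[pin_project]
    struct Wrapped<F> {
        #[pin]
        inner: F,
    }

    impl<F: Future> Future for Wrapped<F> {
        type Output = F::Output;

        fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<F::Output> {
            // project() yields Pin<&mut F> without any unsafe code.
            let this = self.project();
            this.inner.poll(cx)
        }
    }
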
294466ee61 manager: versions: unify printing
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
c100fe9108 add versions command to proxmox-backup-manager
Add the versions command to proxmox-backup-manager with a similar output
to pveversion [-v]. It prints the packages line by line with only the
package name, followed by the version and, for proxmox-backup and
proxmox-backup-server, some additional information (running kernel,
running version).

In addition it supports the optional output-format parameter which can
be used to print the complete data in either json, json-pretty or text
format. If output-format is specified, the --verbose parameter is
ignored and the detailed list of packages is printed.

With the addition of the versions command, the report is extended as
well.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-11-11 18:30:33 +01:00
e754da3ac2 api: versions: add version also in server package unknown case
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
bc1e52bc38 api: versions: rust fmt cleanups
line length limit is 100

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
6f0073bbb5 api: apt update info: do not serialize extra info if none
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
2decf85d6e add extra_info field to APTUpdateInfo
Add an optional string field to APTUpdateInfo which can be used for
extra information.

This is used for passing running kernel and running version information
in the versions API call together with proxmox-backup and
proxmox-backup-server.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-11-11 16:39:11 +01:00
1d8f849457 api2/node/syslog: use 'real_service_name' here also
for now this only does the 'postfix' -> 'postfix@-' conversion,
fixes the issue that we only showed the 'postfix' service syslog
(which is rather empty in a default setup) instead of the instance one

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 16:36:42 +01:00
beb07279b6 log source of encryption key
This patch prints the source of the encryption key when running
operations with proxmox-backup-client.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-11-11 16:35:20 +01:00
8c6854c8fd inform user when using default encryption key
Currently if you generate a default encryption key:
`proxmox-backup-client key create --kdf none`

all backup operations which don't explicitly disable encryption will be
encrypted with this key.

I found it quite surprising that my backups were all encrypted without
me explicitly specifying either a key or an encryption mode.

This patch informs the user when the default key is used (and no
crypt-mode is provided explicitly)

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-11-11 16:35:20 +01:00
57f472d9bb report: use '$' instead of '#' for showing commands
since some files can contain the '#' character for comments (e.g.,
/etc/hosts)

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-11-11 16:19:37 +01:00
94ffca10a2 report: fix grammar error
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-11-11 16:19:33 +01:00
0a274ab0a0 ui: UserView: render name as 'Firstname Lastname'
instead of only the firstname

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 14:09:40 +01:00
c0026563b0 make user properties deletable
by using our usual pattern for the update call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 14:09:40 +01:00
e411924c7c rest: check for disabled token (user)
when authenticating a token, and not just when authenticating a
user/ticket.

Reported-By: Dominik Jäger <d.jaeger@proxmox.com>

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-11 12:21:29 +01:00
132 changed files with 10113 additions and 1065 deletions

Cargo.toml

@@ -1,6 +1,6 @@
 [package]
 name = "proxmox-backup"
-version = "1.0.1"
+version = "1.0.6"
 authors = [
     "Dietmar Maurer <dietmar@proxmox.com>",
     "Dominik Csapak <d.csapak@proxmox.com>",
@@ -46,8 +46,9 @@ pam = "0.7"
 pam-sys = "0.5"
 percent-encoding = "2.1"
 pin-utils = "0.1.0"
+pin-project = "0.4"
 pathpatterns = "0.1.2"
-proxmox = { version = "0.7.0", features = [ "sortable-macro", "api-macro", "websocket" ] }
+proxmox = { version = "0.8.1", features = [ "sortable-macro", "api-macro", "websocket" ] }
 #proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
 #proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
 proxmox-fuse = "0.1.0"

README.rst

@@ -53,3 +53,83 @@ Setup:
   Note: 2. may be skipped if you already added the PVE or PBS package repository

 You are now able to build using the Makefile or cargo itself.
+
+
+Design Notes
+============
+
+Here are some random thoughts about the software design (unless I find a
+better place).
+
+
+Large chunk sizes
+-----------------
+
+It is important to notice that large chunk sizes are crucial for
+performance. We have a multi-user system, where different people can do
+different operations on a datastore at the same time, and most operations
+involve reading a series of chunks.
+
+So what is the maximal theoretical speed we can get when reading a
+series of chunks? Reading a chunk sequence needs the following steps:
+
+- seek to the first chunk's start location
+- read the chunk data
+- seek to the next chunk's start location
+- read the chunk data
+- ...
+
+Let us use the following disk performance metrics:
+
+:AST: Average Seek Time (second)
+:MRS: Maximum sequential Read Speed (bytes/second)
+:ACS: Average Chunk Size (bytes)
+
+The maximum performance you can get is::
+
+  MAX(ACS) = ACS / (AST + ACS/MRS)
+
+Please note that chunk data is likely to be arranged sequentially on disk,
+but this is sort of a best-case assumption.
+
+For a typical rotational disk, we assume the following values::
+
+  AST: 10ms
+  MRS: 170MB/s
+
+  MAX(4MB)  = 115.37 MB/s
+  MAX(1MB)  =  61.85 MB/s
+  MAX(64KB) =   6.02 MB/s
+  MAX(4KB)  =   0.39 MB/s
+  MAX(1KB)  =   0.10 MB/s
+
+Modern SSDs are much faster; let us assume the following::
+
+  max IOPS: 20000 => AST = 0.00005
+  MRS: 500MB/s
+
+  MAX(4MB)  = 474 MB/s
+  MAX(1MB)  = 465 MB/s
+  MAX(64KB) = 354 MB/s
+  MAX(4KB)  =  67 MB/s
+  MAX(1KB)  =  18 MB/s
+
+Also, the average chunk size directly relates to the number of chunks
+produced by a backup::
+
+  CHUNK_COUNT = BACKUP_SIZE / ACS
+
+Here are some statistics from my developer workstation::
+
+  Disk Usage:       65 GB
+  Directories:   58971
+  Files:        726314
+  Files < 64KB: 617541
+
+As you can see, there are really many small files. If we did file-level
+deduplication, i.e. generated one chunk per file, we would end up with
+more than 700000 chunks.
+
+Instead, our current algorithm only produces large chunks with an
+average chunk size of 4MB. With the above data, this produces about 15000
+chunks (a factor of 50 fewer chunks).
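
The model above is easy to evaluate; a small sketch that reproduces the
rotational-disk figures (up to the rounding used in the text):

    /// MAX(ACS) = ACS / (AST + ACS/MRS), all values in bytes and seconds.
    fn max_read_speed(acs: f64, ast: f64, mrs: f64) -> f64 {
        acs / (ast + acs / mrs)
    }

    fn main() {
        // Rotational disk assumptions from the note: AST 10ms, MRS 170MB/s.
        let (ast, mrs) = (0.010, 170e6);
        for &acs in &[4e6, 1e6, 64e3, 4e3, 1e3] {
            println!("MAX({:>7} B) = {:6.2} MB/s", acs, max_read_speed(acs, ast, mrs) / 1e6);
        }
    }
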

debian/changelog

@@ -1,3 +1,72 @@
+rust-proxmox-backup (1.0.6-1) unstable; urgency=medium
+
+  * stricter handling of file descriptors, fixes some cases where some could
+    leak
+
+  * ui: fix various usages of the findRecord method, ensuring it matches
+    exactly
+
+  * garbage collection: improve task log format
+
+  * verification: improve progress log, make it similar to what's logged on
+    pull (sync)
+
+  * datastore: move manifest locking to /run. This avoids issues with
+    filesystems which cannot natively handle removing in-use files ("delete
+    on last close") and instead create a virtual, internal replacement file
+    to work around that. This is done, for example, by NFS or CIFS (samba).
+
+ -- Proxmox Support Team <support@proxmox.com>  Fri, 11 Dec 2020 12:51:33 +0100
+
+rust-proxmox-backup (1.0.5-1) unstable; urgency=medium
+
+  * client: restore: print meta information exclusively to standard error
+
+ -- Proxmox Support Team <support@proxmox.com>  Wed, 25 Nov 2020 15:29:58 +0100
+
+rust-proxmox-backup (1.0.4-1) unstable; urgency=medium
+
+  * fingerprint: add bytes() accessor
+
+  * ui: fix broken gettext use
+
+  * cli: move more commands into "snapshot" sub-command
+
+ -- Proxmox Support Team <support@proxmox.com>  Wed, 25 Nov 2020 06:37:41 +0100
+
+rust-proxmox-backup (1.0.3-1) unstable; urgency=medium
+
+  * client: inform user when automatically using the default encryption key
+
+  * ui: UserView: render name as 'Firstname Lastname'
+
+  * proxmox-backup-manager: add versions command
+
+  * pxar: fix anchored exclusion at archive root
+
+  * pxar: include .pxarexclude files in the archive
+
+  * client: expose all-file-systems option
+
+  * api: make expensive parts of datastore status opt-in
+
+  * api: include datastore ID in invalid owner errors
+
+  * garbage collection: treat .bad files like regular chunks to ensure they
+    are removed if no longer referenced
+
+  * client: fix issues with encoded UPID strings
+
+  * encryption: add fingerprint to key config
+
+  * client: add 'key show' command
+
+  * fix #3139: add key fingerprint to backup snapshot manifest and check it
+    when loading with a key
+
+  * ui: add snapshot/file fingerprint tooltip
+
+ -- Proxmox Support Team <support@proxmox.com>  Tue, 24 Nov 2020 08:55:47 +0100
+
 rust-proxmox-backup (1.0.1-1) unstable; urgency=medium

 * ui: datastore summary: drop 'removed bytes' display

debian/control

@@ -33,11 +33,12 @@ Build-Depends: debhelper (>= 11),
 librust-pam-sys-0.5+default-dev,
 librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
 librust-percent-encoding-2+default-dev (>= 2.1-~~),
+librust-pin-project-0.4+default-dev,
 librust-pin-utils-0.1+default-dev,
-librust-proxmox-0.7+api-macro-dev,
-librust-proxmox-0.7+default-dev,
-librust-proxmox-0.7+sortable-macro-dev,
-librust-proxmox-0.7+websocket-dev,
+librust-proxmox-0.8+api-macro-dev (>= 0.8.1-~~),
+librust-proxmox-0.8+default-dev (>= 0.8.1-~~),
+librust-proxmox-0.8+sortable-macro-dev (>= 0.8.1-~~),
+librust-proxmox-0.8+websocket-dev (>= 0.8.1-~~),
 librust-proxmox-fuse-0.1+default-dev,
 librust-pxar-0.6+default-dev (>= 0.6.1-~~),
 librust-pxar-0.6+futures-io-dev (>= 0.6.1-~~),
@@ -78,7 +79,7 @@ Build-Depends: debhelper (>= 11),
 uuid-dev,
 debhelper (>= 12~),
 bash-completion,
-pve-eslint,
+pve-eslint (>= 7.12.1-1),
 python3-docutils,
 python3-pygments,
 rsync,
@@ -112,6 +113,8 @@ Depends: fonts-font-awesome,
 proxmox-widget-toolkit (>= 2.3-6),
 pve-xtermjs (>= 4.7.0-1),
 smartmontools,
+mtx,
+mt-st,
 ${misc:Depends},
 ${shlibs:Depends},
 Recommends: zfsutils-linux,

debian/control.in

@@ -12,6 +12,8 @@ Depends: fonts-font-awesome,
 proxmox-widget-toolkit (>= 2.3-6),
 pve-xtermjs (>= 4.7.0-1),
 smartmontools,
+mtx,
+mt-st,
 ${misc:Depends},
 ${shlibs:Depends},
 Recommends: zfsutils-linux,

debian/debcargo.toml

@@ -14,7 +14,7 @@ section = "admin"
 build_depends = [
     "debhelper (>= 12~)",
     "bash-completion",
-    "pve-eslint",
+    "pve-eslint (>= 7.12.1-1)",
     "python3-docutils",
     "python3-pygments",
     "rsync",

debian/proxmox-backup-server.install

@@ -11,8 +11,7 @@ usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-daily-update
 usr/sbin/proxmox-backup-manager
 usr/share/javascript/proxmox-backup/index.hbs
 usr/share/javascript/proxmox-backup/css/ext6-pbs.css
-usr/share/javascript/proxmox-backup/images/logo-128.png
-usr/share/javascript/proxmox-backup/images/proxmox_logo.png
+usr/share/javascript/proxmox-backup/images
 usr/share/man/man1/proxmox-backup-manager.1
 usr/share/man/man1/proxmox-backup-proxy.1

docs/Makefile

@@ -17,6 +17,7 @@ MANUAL_PAGES := \
 PRUNE_SIMULATOR_FILES := \
 	prune-simulator/index.html \
 	prune-simulator/documentation.html \
+	prune-simulator/clear-trigger.png \
 	prune-simulator/prune-simulator.js

 # Sphinx documentation setup

docs/backup-client.rst

@@ -392,11 +392,11 @@ periodic recovery tests to ensure that you can access the data in
 case of problems.

 First, you need to find the snapshot which you want to restore. The snapshot
-command provides a list of all the snapshots on the server:
+list command provides a list of all the snapshots on the server:

 .. code-block:: console

-    # proxmox-backup-client snapshots
+    # proxmox-backup-client snapshot list
     ┌────────────────────────────────┬─────────────┬────────────────────────────────────┐
     │ snapshot                       │ size        │ files                              │
     ╞════════════════════════════════╪═════════════╪════════════════════════════════════╡
@@ -581,7 +581,7 @@ command:

 .. code-block:: console

-    # proxmox-backup-client forget <snapshot>
+    # proxmox-backup-client snapshot forget <snapshot>

 .. caution:: This command removes all archives in this backup

docs/prune-simulator/clear-trigger.png (new binary file: image, 11 KiB; not shown)

docs/prune-simulator/index.html

@@ -33,6 +33,9 @@
     .first-of-month {
         border-right: dashed black 4px;
     }
+    .clear-trigger {
+        background-image: url(./clear-trigger.png);
+    }
     </style>
     <script type="text/javascript" src="extjs/ext-all.js"></script>

docs/prune-simulator/prune-simulator.js

@@ -265,6 +265,34 @@ Ext.onReady(function() {
     },
 });

+Ext.define('PBS.PruneSimulatorKeepInput', {
+    extend: 'Ext.form.field.Number',
+    alias: 'widget.prunesimulatorKeepInput',
+
+    allowBlank: true,
+    fieldGroup: 'keep',
+    minValue: 1,
+
+    listeners: {
+        afterrender: function(field) {
+            this.triggers.clear.setVisible(field.value !== null);
+        },
+        change: function(field, newValue, oldValue) {
+            this.triggers.clear.setVisible(newValue !== null);
+        },
+    },
+    triggers: {
+        clear: {
+            cls: 'clear-trigger',
+            weight: -1,
+            handler: function() {
+                this.triggers.clear.setVisible(false);
+                this.setValue(null);
+            },
+        },
+    },
+});
+
 Ext.define('PBS.PruneSimulatorPanel', {
     extend: 'Ext.panel.Panel',
     alias: 'widget.prunesimulatorPanel',
@@ -560,58 +588,37 @@ Ext.onReady(function() {
     keepItems: [
         {
-            xtype: 'numberfield',
+            xtype: 'prunesimulatorKeepInput',
             name: 'keep-last',
-            allowBlank: true,
             fieldLabel: 'keep-last',
-            minValue: 0,
             value: 4,
-            fieldGroup: 'keep',
         },
         {
-            xtype: 'numberfield',
+            xtype: 'prunesimulatorKeepInput',
             name: 'keep-hourly',
-            allowBlank: true,
             fieldLabel: 'keep-hourly',
-            minValue: 0,
-            value: 0,
-            fieldGroup: 'keep',
         },
         {
-            xtype: 'numberfield',
+            xtype: 'prunesimulatorKeepInput',
             name: 'keep-daily',
-            allowBlank: true,
             fieldLabel: 'keep-daily',
-            minValue: 0,
             value: 5,
-            fieldGroup: 'keep',
         },
         {
-            xtype: 'numberfield',
+            xtype: 'prunesimulatorKeepInput',
             name: 'keep-weekly',
-            allowBlank: true,
             fieldLabel: 'keep-weekly',
-            minValue: 0,
             value: 2,
-            fieldGroup: 'keep',
         },
         {
-            xtype: 'numberfield',
+            xtype: 'prunesimulatorKeepInput',
             name: 'keep-monthly',
-            allowBlank: true,
             fieldLabel: 'keep-monthly',
-            minValue: 0,
-            value: 0,
-            fieldGroup: 'keep',
         },
         {
-            xtype: 'numberfield',
+            xtype: 'prunesimulatorKeepInput',
             name: 'keep-yearly',
-            allowBlank: true,
             fieldLabel: 'keep-yearly',
-            minValue: 0,
-            value: 0,
-            fieldGroup: 'keep',
         },
     ],

src/api2.rs

@@ -9,6 +9,7 @@ pub mod types;
 pub mod version;
 pub mod ping;
 pub mod pull;
+pub mod tape;

 mod helpers;

 use proxmox::api::router::SubdirMap;
@@ -27,6 +28,7 @@ pub const SUBDIRS: SubdirMap = &[
     ("pull", &pull::ROUTER),
     ("reader", &reader::ROUTER),
     ("status", &status::ROUTER),
+    ("tape", &tape::ROUTER),
     ("version", &version::ROUTER),
 ];

src/api2/access.rs

@@ -181,6 +181,7 @@ fn create_ticket(
 }

 #[api(
+    protected: true,
     input: {
         properties: {
             userid: {
@@ -195,7 +196,6 @@ fn create_ticket(
     description: "Anybody is allowed to change there own password. In addition, users with 'Permissions:Modify' privilege may change any password.",
     permission: &Permission::Anybody,
     },
-
 )]
 /// Change user password
 ///
@@ -215,7 +215,7 @@ fn change_password(
     let mut allowed = userid == current_user;

-    if userid == "root@pam" { allowed = true; }
+    if current_user == "root@pam" { allowed = true; }

     if !allowed {
         let user_info = CachedUserInfo::new()?;

src/api2/access/user.rs

@@ -249,10 +249,7 @@ pub fn create_user(password: Option<String>, param: Value) -> Result<(), Error>
             },
         },
     },
-    returns: {
-        description: "The user configuration (with config digest).",
-        type: user::User,
-    },
+    returns: { type: user::User },
     access: {
         permission: &Permission::Or(&[
             &Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false),
@@ -268,6 +265,21 @@ pub fn read_user(userid: Userid, mut rpcenv: &mut dyn RpcEnvironment) -> Result<
     Ok(user)
 }

+#[api()]
+#[derive(Serialize, Deserialize)]
+#[serde(rename_all="kebab-case")]
+#[allow(non_camel_case_types)]
+pub enum DeletableProperty {
+    /// Delete the comment property.
+    comment,
+    /// Delete the firstname property.
+    firstname,
+    /// Delete the lastname property.
+    lastname,
+    /// Delete the email property.
+    email,
+}
+
 #[api(
     protected: true,
     input: {
@@ -303,6 +315,14 @@ pub fn read_user(userid: Userid, mut rpcenv: &mut dyn RpcEnvironment) -> Result<
             schema: user::EMAIL_SCHEMA,
             optional: true,
         },
+        delete: {
+            description: "List of properties to delete.",
+            type: Array,
+            optional: true,
+            items: {
+                type: DeletableProperty,
+            }
+        },
         digest: {
             optional: true,
             schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
@@ -326,6 +346,7 @@ pub fn update_user(
     firstname: Option<String>,
     lastname: Option<String>,
     email: Option<String>,
+    delete: Option<Vec<DeletableProperty>>,
     digest: Option<String>,
 ) -> Result<(), Error> {
@@ -340,6 +361,17 @@ pub fn update_user(
     let mut data: user::User = config.lookup("user", userid.as_str())?;

+    if let Some(delete) = delete {
+        for delete_prop in delete {
+            match delete_prop {
+                DeletableProperty::comment => data.comment = None,
+                DeletableProperty::firstname => data.firstname = None,
+                DeletableProperty::lastname => data.lastname = None,
+                DeletableProperty::email => data.email = None,
+            }
+        }
+    }
+
     if let Some(comment) = comment {
         let comment = comment.trim().to_string();
         if comment.is_empty() {
@@ -433,10 +465,7 @@ pub fn delete_user(userid: Userid, digest: Option<String>) -> Result<(), Error>
         },
     },
 },
-    returns: {
-        description: "Get API token metadata (with config digest).",
-        type: user::ApiToken,
-    },
+    returns: { type: user::ApiToken },
     access: {
         permission: &Permission::Or(&[
             &Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false),

src/api2/admin/datastore.rs

@ -1,4 +1,4 @@
use std::collections::{HashSet, HashMap}; use std::collections::HashSet;
use std::ffi::OsStr; use std::ffi::OsStr;
use std::os::unix::ffi::OsStrExt; use std::os::unix::ffi::OsStrExt;
use std::sync::{Arc, Mutex}; use std::sync::{Arc, Mutex};
@ -125,19 +125,6 @@ fn get_all_snapshot_files(
Ok((manifest, files)) Ok((manifest, files))
} }
fn group_backups(backup_list: Vec<BackupInfo>) -> HashMap<String, Vec<BackupInfo>> {
let mut group_hash = HashMap::new();
for info in backup_list {
let group_id = info.backup_dir.group().group_path().to_str().unwrap().to_owned();
let time_list = group_hash.entry(group_id).or_insert(vec![]);
time_list.push(info);
}
group_hash
}
#[api( #[api(
input: { input: {
properties: { properties: {
@ -171,45 +158,64 @@ fn list_groups(
let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]); let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let backup_list = BackupInfo::list_backups(&datastore.base_path())?; let backup_groups = BackupInfo::list_backup_groups(&datastore.base_path())?;
let group_hash = group_backups(backup_list); let group_info = backup_groups
.into_iter()
.fold(Vec::new(), |mut group_info, group| {
let owner = match datastore.get_owner(&group) {
Ok(auth_id) => auth_id,
Err(err) => {
eprintln!("Failed to get owner of group '{}/{}' - {}",
&store,
group,
err);
return group_info;
},
};
if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
return group_info;
}
let mut groups = Vec::new(); let snapshots = match group.list_backups(&datastore.base_path()) {
Ok(snapshots) => snapshots,
Err(_) => {
return group_info;
},
};
for (_group_id, mut list) in group_hash { let backup_count: u64 = snapshots.len() as u64;
if backup_count == 0 {
return group_info;
}
BackupInfo::sort_list(&mut list, false); let last_backup = snapshots
.iter()
.fold(&snapshots[0], |last, curr| {
if curr.is_finished()
&& curr.backup_dir.backup_time() > last.backup_dir.backup_time() {
curr
} else {
last
}
})
.to_owned();
let info = &list[0]; group_info.push(GroupListItem {
backup_type: group.backup_type().to_string(),
backup_id: group.backup_id().to_string(),
last_backup: last_backup.backup_dir.backup_time(),
owner: Some(owner),
backup_count,
files: last_backup.files,
});
let group = info.backup_dir.group(); group_info
});
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0; Ok(group_info)
let owner = match datastore.get_owner(group) {
Ok(auth_id) => auth_id,
Err(err) => {
println!("Failed to get owner of group '{}' - {}", group, err);
continue;
},
};
if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
continue;
}
let result_item = GroupListItem {
backup_type: group.backup_type().to_string(),
backup_id: group.backup_id().to_string(),
last_backup: info.backup_dir.backup_time(),
backup_count: list.len() as u64,
files: info.files.clone(),
owner: Some(owner),
};
groups.push(result_item);
}
Ok(groups)
} }
#[api( #[api(
@ -357,49 +363,56 @@ pub fn list_snapshots (
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]); let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
let base_path = datastore.base_path(); let base_path = datastore.base_path();
let backup_list = BackupInfo::list_backups(&base_path)?; let groups = match (backup_type, backup_id) {
(Some(backup_type), Some(backup_id)) => {
let mut groups = Vec::with_capacity(1);
groups.push(BackupGroup::new(backup_type, backup_id));
groups
},
(Some(backup_type), None) => {
BackupInfo::list_backup_groups(&base_path)?
.into_iter()
.filter(|group| group.backup_type() == backup_type)
.collect()
},
(None, Some(backup_id)) => {
BackupInfo::list_backup_groups(&base_path)?
.into_iter()
.filter(|group| group.backup_id() == backup_id)
.collect()
},
_ => BackupInfo::list_backup_groups(&base_path)?,
};
let mut snapshots = vec![]; let info_to_snapshot_list_item = |group: &BackupGroup, owner, info: BackupInfo| {
let backup_type = group.backup_type().to_string();
let backup_id = group.backup_id().to_string();
let backup_time = info.backup_dir.backup_time();
for info in backup_list { match get_all_snapshot_files(&datastore, &info) {
let group = info.backup_dir.group();
if let Some(ref backup_type) = backup_type {
if backup_type != group.backup_type() { continue; }
}
if let Some(ref backup_id) = backup_id {
if backup_id != group.backup_id() { continue; }
}
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let owner = match datastore.get_owner(group) {
Ok(auth_id) => auth_id,
Err(err) => {
println!("Failed to get owner of group '{}' - {}", group, err);
continue;
},
};
if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
continue;
}
let mut size = None;
let (comment, verification, files) = match get_all_snapshot_files(&datastore, &info) {
Ok((manifest, files)) => { Ok((manifest, files)) => {
size = Some(files.iter().map(|x| x.size.unwrap_or(0)).sum());
     // extract the first line from notes
     let comment: Option<String> = manifest.unprotected["notes"]
         .as_str()
         .and_then(|notes| notes.lines().next())
         .map(String::from);

-    let verify = manifest.unprotected["verify_state"].clone();
-    let verify: Option<SnapshotVerifyState> = match serde_json::from_value(verify) {
+    let fingerprint = match manifest.fingerprint() {
+        Ok(fp) => fp,
+        Err(err) => {
+            eprintln!("error parsing fingerprint: '{}'", err);
+            None
+        },
+    };
+
+    let verification = manifest.unprotected["verify_state"].clone();
+    let verification: Option<SnapshotVerifyState> = match serde_json::from_value(verification) {
         Ok(verify) => verify,
         Err(err) => {
             eprintln!("error parsing verification state : '{}'", err);
@@ -407,88 +420,114 @@ pub fn list_snapshots (
         }
     };

-    (comment, verify, files)
+    let size = Some(files.iter().map(|x| x.size.unwrap_or(0)).sum());
+
+    SnapshotListItem {
+        backup_type,
+        backup_id,
+        backup_time,
+        comment,
+        verification,
+        fingerprint,
+        files,
+        size,
+        owner,
+    }
 },
 Err(err) => {
     eprintln!("error during snapshot file listing: '{}'", err);
-    (
-        None,
-        None,
-        info
-            .files
-            .iter()
-            .map(|x| BackupContent {
-                filename: x.to_string(),
-                size: None,
-                crypt_mode: None,
-            })
-            .collect()
-    )
+    let files = info
+        .files
+        .into_iter()
+        .map(|x| BackupContent {
+            filename: x.to_string(),
+            size: None,
+            crypt_mode: None,
+        })
+        .collect();
+
+    SnapshotListItem {
+        backup_type,
+        backup_id,
+        backup_time,
+        comment: None,
+        verification: None,
+        fingerprint: None,
+        files,
+        size: None,
+        owner,
+    }
 },
-};
-
-let result_item = SnapshotListItem {
-    backup_type: group.backup_type().to_string(),
-    backup_id: group.backup_id().to_string(),
-    backup_time: info.backup_dir.backup_time(),
-    comment,
-    verification,
-    files,
-    size,
-    owner: Some(owner),
-};
-
-snapshots.push(result_item);
-}
-
-Ok(snapshots)
-}
-
-fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> {
-    let base_path = store.base_path();
-    let backup_list = BackupInfo::list_backups(&base_path)?;
-    let mut groups = HashSet::new();
-    let mut result = Counts {
-        ct: None,
-        host: None,
-        vm: None,
-        other: None,
-    };
-
-    for info in backup_list {
-        let group = info.backup_dir.group();
-
-        let id = group.backup_id();
-        let backup_type = group.backup_type();
-
-        let mut new_id = false;
-        if groups.insert(format!("{}-{}", &backup_type, &id)) {
-            new_id = true;
-        }
-
-        let mut counts = match backup_type {
-            "ct" => result.ct.take().unwrap_or(Default::default()),
-            "host" => result.host.take().unwrap_or(Default::default()),
-            "vm" => result.vm.take().unwrap_or(Default::default()),
-            _ => result.other.take().unwrap_or(Default::default()),
-        };
-
-        counts.snapshots += 1;
-        if new_id {
-            counts.groups += 1;
-        }
-
-        match backup_type {
-            "ct" => result.ct = Some(counts),
-            "host" => result.host = Some(counts),
-            "vm" => result.vm = Some(counts),
-            _ => result.other = Some(counts),
-        }
-    }
-
-    Ok(result)
+}
+};
+
+groups
+    .iter()
+    .try_fold(Vec::new(), |mut snapshots, group| {
+        let owner = match datastore.get_owner(group) {
+            Ok(auth_id) => auth_id,
+            Err(err) => {
+                eprintln!("Failed to get owner of group '{}/{}' - {}",
+                          &store,
+                          group,
+                          err);
+                return Ok(snapshots);
+            },
+        };
+
+        if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
+            return Ok(snapshots);
+        }
+
+        let group_backups = group.list_backups(&datastore.base_path())?;
+
+        snapshots.extend(
+            group_backups
+                .into_iter()
+                .map(|info| info_to_snapshot_list_item(&group, Some(owner.clone()), info))
+        );
+
+        Ok(snapshots)
+    })
+}
+
+fn get_snapshots_count(store: &DataStore, filter_owner: Option<&Authid>) -> Result<Counts, Error> {
+    let base_path = store.base_path();
+    let groups = BackupInfo::list_backup_groups(&base_path)?;
+
+    groups.iter()
+        .filter(|group| {
+            let owner = match store.get_owner(&group) {
+                Ok(owner) => owner,
+                Err(err) => {
+                    eprintln!("Failed to get owner of group '{}/{}' - {}",
+                              store.name(),
+                              group,
+                              err);
+                    return false;
+                },
+            };
+
+            match filter_owner {
+                Some(filter) => check_backup_owner(&owner, filter).is_ok(),
+                None => true,
+            }
+        })
+        .try_fold(Counts::default(), |mut counts, group| {
+            let snapshot_count = group.list_backups(&base_path)?.len() as u64;
+
+            let type_count = match group.backup_type() {
+                "ct" => counts.ct.get_or_insert(Default::default()),
+                "vm" => counts.vm.get_or_insert(Default::default()),
+                "host" => counts.host.get_or_insert(Default::default()),
+                _ => counts.other.get_or_insert(Default::default()),
+            };
+
+            type_count.groups += 1;
+            type_count.snapshots += snapshot_count;
+
+            Ok(counts)
+        })
 }
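
The per-type counting above leans on Option::get_or_insert from the standard library, which inserts the default only on first use and hands back a mutable reference. A minimal, self-contained sketch of that pattern:

    fn main() {
        // get_or_insert(0) creates the counter the first time a type is
        // seen, then keeps returning a mutable reference to the same value.
        let mut slot: Option<u64> = None;
        *slot.get_or_insert(0) += 1;
        *slot.get_or_insert(0) += 1;
        assert_eq!(slot, Some(2));
    }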
 #[api(
@@ -497,7 +536,14 @@ fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> {
         store: {
             schema: DATASTORE_SCHEMA,
         },
+        verbose: {
+            type: bool,
+            default: false,
+            optional: true,
+            description: "Include additional information like snapshot counts and GC status.",
+        },
     },
 },
 returns: {
     type: DataStoreStatus,
@@ -509,13 +555,30 @@ fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> {
 /// Get datastore status.
 pub fn status(
     store: String,
+    verbose: bool,
     _info: &ApiMethod,
-    _rpcenv: &mut dyn RpcEnvironment,
+    rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<DataStoreStatus, Error> {

     let datastore = DataStore::lookup_datastore(&store)?;
     let storage = crate::tools::disks::disk_usage(&datastore.base_path())?;
-    let counts = get_snapshots_count(&datastore)?;
-    let gc_status = datastore.last_gc_status();
+
+    let (counts, gc_status) = if verbose {
+        let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
+        let user_info = CachedUserInfo::new()?;
+
+        let store_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
+        let filter_owner = if store_privs & PRIV_DATASTORE_AUDIT != 0 {
+            None
+        } else {
+            Some(&auth_id)
+        };
+
+        let counts = Some(get_snapshots_count(&datastore, filter_owner)?);
+        let gc_status = Some(datastore.last_gc_status());
+
+        (counts, gc_status)
+    } else {
+        (None, None)
+    };

     Ok(DataStoreStatus {
         total: storage.total,
@@ -624,12 +687,12 @@ pub fn verify(
             }
             res
         } else if let Some(backup_group) = backup_group {
-            let (_count, failed_dirs) = verify_backup_group(
+            let failed_dirs = verify_backup_group(
                 datastore,
                 &backup_group,
                 verified_chunks,
                 corrupt_chunks,
-                None,
+                &mut StoreProgress::new(1),
                 worker.clone(),
                 worker.upid(),
                 None,
@@ -648,7 +711,7 @@ pub fn verify(
             verify_all_backups(datastore, worker.clone(), worker.upid(), owner, None)?
         };
         if failed_dirs.len() > 0 {
-            worker.log("Failed to verify following snapshots/groups:");
+            worker.log("Failed to verify the following snapshots/groups:");
             for dir in failed_dirs {
                 worker.log(format!("\t{}", dir));
             }
@@ -912,10 +975,7 @@ pub fn garbage_collection_status(
     returns: {
         description: "List the accessible datastores.",
         type: Array,
-        items: {
-            description: "Datastore name and description.",
-            type: DataStoreListItem,
-        },
+        items: { type: DataStoreListItem },
     },
     access: {
         permission: &Permission::Anybody,


@@ -311,6 +311,10 @@ pub const BACKUP_API_SUBDIRS: SubdirMap = &[
         "previous", &Router::new()
             .download(&API_METHOD_DOWNLOAD_PREVIOUS)
     ),
+    (
+        "previous_backup_time", &Router::new()
+            .get(&API_METHOD_GET_PREVIOUS_BACKUP_TIME)
+    ),
     (
         "speedtest", &Router::new()
             .upload(&API_METHOD_UPLOAD_SPEEDTEST)
@@ -694,6 +698,28 @@ fn finish_backup (
     Ok(Value::Null)
 }

+#[sortable]
+pub const API_METHOD_GET_PREVIOUS_BACKUP_TIME: ApiMethod = ApiMethod::new(
+    &ApiHandler::Sync(&get_previous_backup_time),
+    &ObjectSchema::new(
+        "Get previous backup time.",
+        &[],
+    )
+);
+
+fn get_previous_backup_time(
+    _param: Value,
+    _info: &ApiMethod,
+    rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Value, Error> {
+
+    let env: &BackupEnvironment = rpcenv.as_ref();
+
+    let backup_time = env.last_backup.as_ref().map(|info| info.backup_dir.backup_time());
+
+    Ok(json!(backup_time))
+}
+
 #[sortable]
 pub const API_METHOD_DOWNLOAD_PREVIOUS: ApiMethod = ApiMethod::new(
     &ApiHandler::AsyncHttp(&download_previous),
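
The new endpoint returns the previous backup time as a bare JSON value, which is null when the environment has no previous snapshot. A client-side sketch of decoding it (obtaining `response` is left out; the endpoint name is the one registered above):

    // response: serde_json::Value from GET .../previous_backup_time
    let previous: Option<i64> = serde_json::from_value(response)?;
    match previous {
        Some(t) => println!("previous backup at epoch {}", t),
        None => println!("no previous backup in this group"),
    }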


@@ -5,9 +5,15 @@ pub mod datastore;
 pub mod remote;
 pub mod sync;
 pub mod verify;
+pub mod drive;
+pub mod changer;
+pub mod media_pool;

 const SUBDIRS: SubdirMap = &[
+    ("changer", &changer::ROUTER),
     ("datastore", &datastore::ROUTER),
+    ("drive", &drive::ROUTER),
+    ("media-pool", &media_pool::ROUTER),
     ("remote", &remote::ROUTER),
     ("sync", &sync::ROUTER),
     ("verify", &verify::ROUTER)

src/api2/config/changer.rs (new file)
@@ -0,0 +1,243 @@
use anyhow::{bail, Error};
use serde_json::Value;
use proxmox::api::{api, Router, RpcEnvironment};
use crate::{
config,
api2::types::{
PROXMOX_CONFIG_DIGEST_SCHEMA,
CHANGER_ID_SCHEMA,
LINUX_DRIVE_PATH_SCHEMA,
DriveListEntry,
ScsiTapeChanger,
LinuxTapeDrive,
},
tape::{
linux_tape_changer_list,
check_drive_path,
lookup_drive,
},
};
#[api(
protected: true,
input: {
properties: {
name: {
schema: CHANGER_ID_SCHEMA,
},
path: {
schema: LINUX_DRIVE_PATH_SCHEMA,
},
},
},
)]
/// Create a new changer device
pub fn create_changer(
name: String,
path: String,
) -> Result<(), Error> {
let _lock = config::drive::lock()?;
let (mut config, _digest) = config::drive::config()?;
let linux_changers = linux_tape_changer_list();
check_drive_path(&linux_changers, &path)?;
if config.sections.get(&name).is_some() {
bail!("Entry '{}' already exists", name);
}
let item = ScsiTapeChanger {
name: name.clone(),
path,
};
config.set_data(&name, "changer", &item)?;
config::drive::save_config(&config)?;
Ok(())
}
#[api(
input: {
properties: {
name: {
schema: CHANGER_ID_SCHEMA,
},
},
},
returns: {
type: ScsiTapeChanger,
},
)]
/// Get tape changer configuration
pub fn get_config(
name: String,
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<ScsiTapeChanger, Error> {
let (config, digest) = config::drive::config()?;
let data: ScsiTapeChanger = config.lookup("changer", &name)?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(data)
}
#[api(
input: {
properties: {},
},
returns: {
description: "The list of configured changers (with config digest).",
type: Array,
items: {
type: DriveListEntry,
},
},
)]
/// List changers
pub fn list_changers(
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<DriveListEntry>, Error> {
let (config, digest) = config::drive::config()?;
let linux_changers = linux_tape_changer_list();
let changer_list: Vec<ScsiTapeChanger> = config.convert_to_typed_array("changer")?;
let mut list = Vec::new();
for changer in changer_list {
let mut entry = DriveListEntry {
name: changer.name,
path: changer.path.clone(),
changer: None,
vendor: None,
model: None,
serial: None,
};
if let Some(info) = lookup_drive(&linux_changers, &changer.path) {
entry.vendor = Some(info.vendor.clone());
entry.model = Some(info.model.clone());
entry.serial = Some(info.serial.clone());
}
list.push(entry);
}
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
#[api(
protected: true,
input: {
properties: {
name: {
schema: CHANGER_ID_SCHEMA,
},
path: {
schema: LINUX_DRIVE_PATH_SCHEMA,
optional: true,
},
digest: {
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
optional: true,
},
},
},
)]
/// Update a tape changer configuration
pub fn update_changer(
name: String,
path: Option<String>,
digest: Option<String>,
_param: Value,
) -> Result<(), Error> {
let _lock = config::drive::lock()?;
let (mut config, expected_digest) = config::drive::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let mut data: ScsiTapeChanger = config.lookup("changer", &name)?;
if let Some(path) = path {
let changers = linux_tape_changer_list();
check_drive_path(&changers, &path)?;
data.path = path;
}
config.set_data(&name, "changer", &data)?;
config::drive::save_config(&config)?;
Ok(())
}
#[api(
protected: true,
input: {
properties: {
name: {
schema: CHANGER_ID_SCHEMA,
},
},
},
)]
/// Delete a tape changer configuration
pub fn delete_changer(name: String, _param: Value) -> Result<(), Error> {
let _lock = config::drive::lock()?;
let (mut config, _digest) = config::drive::config()?;
match config.sections.get(&name) {
Some((section_type, _)) => {
if section_type != "changer" {
bail!("Entry '{}' exists, but is not a changer device", name);
}
config.sections.remove(&name);
},
None => bail!("Delete changer '{}' failed - no such entry", name),
}
let drive_list: Vec<LinuxTapeDrive> = config.convert_to_typed_array("linux")?;
for drive in drive_list {
if let Some(changer) = drive.changer {
if changer == name {
bail!("Delete changer '{}' failed - used by drive '{}'", name, drive.name);
}
}
}
config::drive::save_config(&config)?;
Ok(())
}
const ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_GET_CONFIG)
.put(&API_METHOD_UPDATE_CHANGER)
.delete(&API_METHOD_DELETE_CHANGER);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_CHANGERS)
.post(&API_METHOD_CREATE_CHANGER)
.match_all("name", &ITEM_ROUTER);


@@ -151,10 +151,7 @@ pub fn create_datastore(param: Value) -> Result<(), Error> {
             },
         },
     },
-    returns: {
-        description: "The datastore configuration (with config digest).",
-        type: datastore::DataStoreConfig,
-    },
+    returns: { type: datastore::DataStoreConfig },
     access: {
         permission: &Permission::Privilege(&["datastore", "{name}"], PRIV_DATASTORE_AUDIT, false),
     },

src/api2/config/drive.rs (new file)
@@ -0,0 +1,297 @@
use anyhow::{bail, Error};
use ::serde::{Deserialize, Serialize};
use serde_json::Value;
use proxmox::api::{api, Router, RpcEnvironment};
use crate::{
config,
api2::types::{
PROXMOX_CONFIG_DIGEST_SCHEMA,
DRIVE_ID_SCHEMA,
CHANGER_ID_SCHEMA,
CHANGER_DRIVE_ID_SCHEMA,
LINUX_DRIVE_PATH_SCHEMA,
DriveListEntry,
LinuxTapeDrive,
ScsiTapeChanger,
},
tape::{
linux_tape_device_list,
check_drive_path,
lookup_drive,
},
};
#[api(
protected: true,
input: {
properties: {
name: {
schema: DRIVE_ID_SCHEMA,
},
path: {
schema: LINUX_DRIVE_PATH_SCHEMA,
},
changer: {
schema: CHANGER_ID_SCHEMA,
optional: true,
},
"changer-drive-id": {
schema: CHANGER_DRIVE_ID_SCHEMA,
optional: true,
},
},
},
)]
/// Create a new drive
pub fn create_drive(param: Value) -> Result<(), Error> {
let _lock = config::drive::lock()?;
let (mut config, _digest) = config::drive::config()?;
let item: LinuxTapeDrive = serde_json::from_value(param)?;
let linux_drives = linux_tape_device_list();
check_drive_path(&linux_drives, &item.path)?;
if config.sections.get(&item.name).is_some() {
bail!("Entry '{}' already exists", item.name);
}
config.set_data(&item.name, "linux", &item)?;
config::drive::save_config(&config)?;
Ok(())
}
#[api(
input: {
properties: {
name: {
schema: DRIVE_ID_SCHEMA,
},
},
},
returns: {
type: LinuxTapeDrive,
},
)]
/// Get drive configuration
pub fn get_config(
name: String,
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<LinuxTapeDrive, Error> {
let (config, digest) = config::drive::config()?;
let data: LinuxTapeDrive = config.lookup("linux", &name)?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(data)
}
#[api(
input: {
properties: {},
},
returns: {
description: "The list of configured remotes (with config digest).",
type: Array,
items: {
type: DriveListEntry,
},
},
)]
/// List drives
pub fn list_drives(
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<DriveListEntry>, Error> {
let (config, digest) = config::drive::config()?;
let linux_drives = linux_tape_device_list();
let drive_list: Vec<LinuxTapeDrive> = config.convert_to_typed_array("linux")?;
let mut list = Vec::new();
for drive in drive_list {
let mut entry = DriveListEntry {
name: drive.name,
path: drive.path.clone(),
changer: drive.changer,
vendor: None,
model: None,
serial: None,
};
if let Some(info) = lookup_drive(&linux_drives, &drive.path) {
entry.vendor = Some(info.vendor.clone());
entry.model = Some(info.model.clone());
entry.serial = Some(info.serial.clone());
}
list.push(entry);
}
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
#[api()]
#[derive(Serialize, Deserialize)]
#[allow(non_camel_case_types)]
#[serde(rename_all = "kebab-case")]
/// Deletable property name
pub enum DeletableProperty {
/// Delete the changer property.
changer,
/// Delete the changer-drive-id property.
changer_drive_id,
}
#[api(
protected: true,
input: {
properties: {
name: {
schema: DRIVE_ID_SCHEMA,
},
path: {
schema: LINUX_DRIVE_PATH_SCHEMA,
optional: true,
},
changer: {
schema: CHANGER_ID_SCHEMA,
optional: true,
},
"changer-drive-id": {
schema: CHANGER_DRIVE_ID_SCHEMA,
optional: true,
},
delete: {
description: "List of properties to delete.",
type: Array,
optional: true,
items: {
type: DeletableProperty,
}
},
digest: {
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
optional: true,
},
},
},
)]
/// Update a drive configuration
pub fn update_drive(
name: String,
path: Option<String>,
changer: Option<String>,
changer_drive_id: Option<u64>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
_param: Value,
) -> Result<(), Error> {
let _lock = config::drive::lock()?;
let (mut config, expected_digest) = config::drive::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let mut data: LinuxTapeDrive = config.lookup("linux", &name)?;
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::changer => {
data.changer = None;
data.changer_drive_id = None;
},
DeletableProperty::changer_drive_id => { data.changer_drive_id = None; },
}
}
}
if let Some(path) = path {
let linux_drives = linux_tape_device_list();
check_drive_path(&linux_drives, &path)?;
data.path = path;
}
if let Some(changer) = changer {
let _: ScsiTapeChanger = config.lookup("changer", &changer)?;
data.changer = Some(changer);
}
if let Some(changer_drive_id) = changer_drive_id {
if changer_drive_id == 0 {
data.changer_drive_id = None;
} else {
if data.changer.is_none() {
bail!("Option 'changer-drive-id' requires option 'changer'.");
}
data.changer_drive_id = Some(changer_drive_id);
}
}
config.set_data(&name, "linux", &data)?;
config::drive::save_config(&config)?;
Ok(())
}
#[api(
protected: true,
input: {
properties: {
name: {
schema: DRIVE_ID_SCHEMA,
},
},
},
)]
/// Delete a drive configuration
pub fn delete_drive(name: String, _param: Value) -> Result<(), Error> {
let _lock = config::drive::lock()?;
let (mut config, _digest) = config::drive::config()?;
match config.sections.get(&name) {
Some((section_type, _)) => {
if section_type != "linux" {
bail!("Entry '{}' exists, but is not a linux tape drive", name);
}
config.sections.remove(&name);
},
None => bail!("Delete drive '{}' failed - no such drive", name),
}
config::drive::save_config(&config)?;
Ok(())
}
const ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_GET_CONFIG)
.put(&API_METHOD_UPDATE_DRIVE)
.delete(&API_METHOD_DELETE_DRIVE);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_DRIVES)
.post(&API_METHOD_CREATE_DRIVE)
.match_all("name", &ITEM_ROUTER);


@@ -0,0 +1,252 @@
use anyhow::{bail, Error};
use ::serde::{Deserialize, Serialize};
use proxmox::{
api::{
api,
Router,
RpcEnvironment,
},
};
use crate::{
api2::types::{
DRIVE_ID_SCHEMA,
MEDIA_POOL_NAME_SCHEMA,
MEDIA_SET_NAMING_TEMPLATE_SCHEMA,
MEDIA_SET_ALLOCATION_POLICY_SCHEMA,
MEDIA_RETENTION_POLICY_SCHEMA,
MediaPoolConfig,
},
config::{
self,
drive::{
check_drive_exists,
},
},
};
#[api(
input: {
properties: {
name: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
drive: {
schema: DRIVE_ID_SCHEMA,
},
allocation: {
schema: MEDIA_SET_ALLOCATION_POLICY_SCHEMA,
optional: true,
},
retention: {
schema: MEDIA_RETENTION_POLICY_SCHEMA,
optional: true,
},
template: {
schema: MEDIA_SET_NAMING_TEMPLATE_SCHEMA,
optional: true,
},
},
},
)]
/// Create a new media pool
pub fn create_pool(
name: String,
drive: String,
allocation: Option<String>,
retention: Option<String>,
template: Option<String>,
) -> Result<(), Error> {
let _lock = config::media_pool::lock()?;
let (mut config, _digest) = config::media_pool::config()?;
if config.sections.get(&name).is_some() {
bail!("Media pool '{}' already exists", name);
}
let (drive_config, _) = config::drive::config()?;
check_drive_exists(&drive_config, &drive)?;
let item = MediaPoolConfig {
name: name.clone(),
drive,
allocation,
retention,
template,
};
config.set_data(&name, "pool", &item)?;
config::media_pool::save_config(&config)?;
Ok(())
}
#[api(
returns: {
description: "The list of configured media pools (with config digest).",
type: Array,
items: {
type: MediaPoolConfig,
},
},
)]
/// List media pools
pub fn list_pools(
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<MediaPoolConfig>, Error> {
let (config, digest) = config::media_pool::config()?;
let list = config.convert_to_typed_array("pool")?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
#[api(
input: {
properties: {
name: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
},
},
returns: {
type: MediaPoolConfig,
},
)]
/// Get media pool configuration
pub fn get_config(name: String) -> Result<MediaPoolConfig, Error> {
let (config, _digest) = config::media_pool::config()?;
let data: MediaPoolConfig = config.lookup("pool", &name)?;
Ok(data)
}
#[api()]
#[derive(Serialize, Deserialize)]
#[allow(non_camel_case_types)]
/// Deletable property name
pub enum DeletableProperty {
/// Delete media set allocation policy.
allocation,
/// Delete pool retention policy
retention,
/// Delete media set naming template
template,
}
#[api(
input: {
properties: {
name: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
drive: {
schema: DRIVE_ID_SCHEMA,
optional: true,
},
allocation: {
schema: MEDIA_SET_ALLOCATION_POLICY_SCHEMA,
optional: true,
},
retention: {
schema: MEDIA_RETENTION_POLICY_SCHEMA,
optional: true,
},
template: {
schema: MEDIA_SET_NAMING_TEMPLATE_SCHEMA,
optional: true,
},
delete: {
description: "List of properties to delete.",
type: Array,
optional: true,
items: {
type: DeletableProperty,
}
},
},
},
)]
/// Update media pool settings
pub fn update_pool(
name: String,
drive: Option<String>,
allocation: Option<String>,
retention: Option<String>,
template: Option<String>,
delete: Option<Vec<DeletableProperty>>,
) -> Result<(), Error> {
let _lock = config::media_pool::lock()?;
let (mut config, _digest) = config::media_pool::config()?;
let mut data: MediaPoolConfig = config.lookup("pool", &name)?;
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::allocation => { data.allocation = None; },
DeletableProperty::retention => { data.retention = None; },
DeletableProperty::template => { data.template = None; },
}
}
}
if let Some(drive) = drive { data.drive = drive; }
if allocation.is_some() { data.allocation = allocation; }
if retention.is_some() { data.retention = retention; }
if template.is_some() { data.template = template; }
config.set_data(&name, "pool", &data)?;
config::media_pool::save_config(&config)?;
Ok(())
}
#[api(
input: {
properties: {
name: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
},
},
)]
/// Delete a media pool configuration
pub fn delete_pool(name: String) -> Result<(), Error> {
let _lock = config::media_pool::lock()?;
let (mut config, _digest) = config::media_pool::config()?;
match config.sections.get(&name) {
Some(_) => { config.sections.remove(&name); },
None => bail!("delete pool '{}' failed - no such pool", name),
}
config::media_pool::save_config(&config)?;
Ok(())
}
const ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_GET_CONFIG)
.put(&API_METHOD_UPDATE_POOL)
.delete(&API_METHOD_DELETE_POOL);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_POOLS)
.post(&API_METHOD_CREATE_POOL)
.match_all("name", &ITEM_ROUTER);


@@ -19,10 +19,7 @@ use crate::config::acl::{PRIV_REMOTE_AUDIT, PRIV_REMOTE_MODIFY};
     returns: {
         description: "The list of configured remotes (with config digest).",
         type: Array,
-        items: {
-            type: remote::Remote,
-            description: "Remote configuration (without password).",
-        },
+        items: { type: remote::Remote },
     },
     access: {
         description: "List configured remotes filtered by Remote.Audit privileges",
@@ -124,10 +121,7 @@ pub fn create_remote(password: String, param: Value) -> Result<(), Error> {
             },
         },
     },
-    returns: {
-        description: "The remote configuration (with config digest).",
-        type: remote::Remote,
-    },
+    returns: { type: remote::Remote },
     access: {
         permission: &Permission::Privilege(&["remote", "{name}"], PRIV_REMOTE_AUDIT, false),
     }
@@ -347,10 +341,7 @@ pub async fn remote_client(remote: remote::Remote) -> Result<HttpClient, Error>
     returns: {
         description: "List the accessible datastores.",
         type: Array,
-        items: {
-            description: "Datastore name and description.",
-            type: DataStoreListItem,
-        },
+        items: { type: DataStoreListItem },
     },
 )]
 /// List datastores of a remote.cfg entry


@@ -182,10 +182,7 @@ pub fn create_sync_job(
             },
         },
     },
-    returns: {
-        description: "The sync job configuration.",
-        type: sync::SyncJobConfig,
-    },
+    returns: { type: sync::SyncJobConfig },
     access: {
         description: "Limited to sync job entries where user has Datastore.Audit on target datastore, and Remote.Audit on source remote.",
         permission: &Permission::Anybody,


@@ -127,10 +127,7 @@ pub fn create_verification_job(
             },
         },
     },
-    returns: {
-        description: "The verification job configuration.",
-        type: verify::VerificationJobConfig,
-    },
+    returns: { type: verify::VerificationJobConfig },
     access: {
         permission: &Permission::Anybody,
         description: "Requires Datastore.Audit or Datastore.Verify on job's datastore.",


@@ -6,7 +6,6 @@ use futures::future::{FutureExt, TryFutureExt};
 use hyper::body::Body;
 use hyper::http::request::Parts;
 use hyper::upgrade::Upgraded;
-use nix::fcntl::{fcntl, FcntlArg, FdFlag};
 use serde_json::{json, Value};
 use tokio::io::{AsyncBufReadExt, BufReader};
@@ -34,7 +33,7 @@ pub mod subscription;
 pub(crate) mod rrd;

 mod journal;
-mod services;
+pub(crate) mod services;
 mod status;
 mod syslog;
 mod time;
@@ -145,18 +144,10 @@ async fn termproxy(
         move |worker| async move {
             // move inside the worker so that it survives and does not close the port
             // remove CLOEXEC from listener so that we can reuse it in termproxy
-            let fd = listener.as_raw_fd();
-            let mut flags = match fcntl(fd, FcntlArg::F_GETFD) {
-                Ok(bits) => FdFlag::from_bits_truncate(bits),
-                Err(err) => bail!("could not get fd: {}", err),
-            };
-            flags.remove(FdFlag::FD_CLOEXEC);
-            if let Err(err) = fcntl(fd, FcntlArg::F_SETFD(flags)) {
-                bail!("could not set fd: {}", err);
-            }
+            tools::fd_change_cloexec(listener.as_raw_fd(), false)?;

             let mut arguments: Vec<&str> = Vec::new();
-            let fd_string = fd.to_string();
+            let fd_string = listener.as_raw_fd().to_string();
             arguments.push(&fd_string);
             arguments.extend_from_slice(&[
                 "--path",


@@ -261,7 +261,7 @@ fn apt_get_changelog(
     },
 )]
 /// Get package information for important Proxmox Backup Server packages.
-pub fn get_versions() -> Result<Value, Error> {
+pub fn get_versions() -> Result<Vec<APTUpdateInfo>, Error> {
     const PACKAGES: &[&str] = &[
         "ifupdown2",
         "libjs-extjs",
@@ -276,7 +276,7 @@ pub fn get_versions() -> Result<Value, Error> {
         "zfsutils-linux",
     ];

-    fn unknown_package(package: String) -> APTUpdateInfo {
+    fn unknown_package(package: String, extra_info: Option<String>) -> APTUpdateInfo {
         APTUpdateInfo {
             package,
             title: "unknown".into(),
@@ -288,6 +288,7 @@ pub fn get_versions() -> Result<Value, Error> {
             priority: "unknown".into(),
             section: "unknown".into(),
             change_log_url: "unknown".into(),
+            extra_info,
         }
     }
@@ -301,14 +302,28 @@ pub fn get_versions() -> Result<Value, Error> {
         },
         None,
     );
+
+    let running_kernel = format!(
+        "running kernel: {}",
+        nix::sys::utsname::uname().release().to_owned()
+    );
+
     if let Some(proxmox_backup) = pbs_packages.iter().find(|pkg| pkg.package == "proxmox-backup") {
-        packages.push(proxmox_backup.to_owned());
+        let mut proxmox_backup = proxmox_backup.clone();
+        proxmox_backup.extra_info = Some(running_kernel);
+        packages.push(proxmox_backup);
     } else {
-        packages.push(unknown_package("proxmox-backup".into()));
+        packages.push(unknown_package("proxmox-backup".into(), Some(running_kernel)));
     }
+
+    let version = crate::api2::version::PROXMOX_PKG_VERSION;
+    let release = crate::api2::version::PROXMOX_PKG_RELEASE;
+    let daemon_version_info = Some(format!("running version: {}.{}", version, release));
+
     if let Some(pkg) = pbs_packages.iter().find(|pkg| pkg.package == "proxmox-backup-server") {
-        packages.push(pkg.to_owned());
+        let mut pkg = pkg.clone();
+        pkg.extra_info = daemon_version_info;
+        packages.push(pkg);
+    } else {
+        packages.push(unknown_package("proxmox-backup".into(), daemon_version_info));
     }

     let mut kernel_pkgs: Vec<APTUpdateInfo> = pbs_packages
@@ -334,11 +349,11 @@ pub fn get_versions() -> Result<Value, Error> {
         }
         match pbs_packages.iter().find(|item| &item.package == pkg) {
             Some(apt_pkg) => packages.push(apt_pkg.to_owned()),
-            None => packages.push(unknown_package(pkg.to_string())),
+            None => packages.push(unknown_package(pkg.to_string(), None)),
         }
     }

-    Ok(json!(packages))
+    Ok(packages)
 }

 const SUBDIRS: SubdirMap = &[
const SUBDIRS: SubdirMap = &[ const SUBDIRS: SubdirMap = &[


@@ -102,10 +102,7 @@ pub fn list_network_devices(
             },
         },
     },
-    returns: {
-        description: "The network interface configuration (with config digest).",
-        type: Interface,
-    },
+    returns: { type: Interface },
     access: {
         permission: &Permission::Privilege(&["system", "network", "interfaces", "{name}"], PRIV_SYS_AUDIT, false),
     },
@@ -135,7 +132,6 @@ pub fn read_interface(iface: String) -> Result<Value, Error> {
             schema: NETWORK_INTERFACE_NAME_SCHEMA,
         },
         "type": {
-            description: "Interface type.",
             type: NetworkInterfaceType,
             optional: true,
         },
@@ -388,7 +384,6 @@ pub enum DeletableProperty {
             schema: NETWORK_INTERFACE_NAME_SCHEMA,
         },
         "type": {
-            description: "Interface type. If specified, need to match the current type.",
             type: NetworkInterfaceType,
             optional: true,
         },


@@ -22,7 +22,7 @@ static SERVICE_NAME_LIST: [&str; 7] = [
     "systemd-timesyncd",
 ];

-fn real_service_name(service: &str) -> &str {
+pub fn real_service_name(service: &str) -> &str {

     // since postfix package 3.1.0-3.1 the postfix unit is only here
     // to manage subinstances, of which the default is called "-".


@@ -73,10 +73,7 @@ pub fn check_subscription(
             },
         },
     },
-    returns: {
-        description: "Subscription status.",
-        type: SubscriptionInfo,
-    },
+    returns: { type: SubscriptionInfo },
     access: {
         permission: &Permission::Anybody,
     },


@@ -134,12 +134,18 @@ fn get_syslog(
     mut rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Value, Error> {

+    let service = if let Some(service) = param["service"].as_str() {
+        Some(crate::api2::node::services::real_service_name(service))
+    } else {
+        None
+    };
+
     let (count, lines) = dump_journal(
         param["start"].as_u64(),
         param["limit"].as_u64(),
         param["since"].as_str(),
         param["until"].as_str(),
-        param["service"].as_str())?;
+        service)?;

     rpcenv["total"] = Value::from(count);


@@ -166,7 +166,6 @@ fn check_task_access(auth_id: &Authid, upid: &UPID) -> Result<(), Error> {
     },
     user: {
         type: Userid,
-        description: "The user who started the task.",
     },
     tokenid: {
         type: Tokenname,

src/api2/tape/changer.rs (new file)
@@ -0,0 +1,163 @@
use std::path::Path;
use anyhow::Error;
use serde_json::Value;
use proxmox::api::{api, Router, SubdirMap};
use proxmox::list_subdirs_api_method;
use crate::{
config,
api2::types::{
CHANGER_ID_SCHEMA,
ScsiTapeChanger,
TapeDeviceInfo,
MtxStatusEntry,
MtxEntryKind,
},
tape::{
TAPE_STATUS_DIR,
ElementStatus,
OnlineStatusMap,
Inventory,
MediaStateDatabase,
linux_tape_changer_list,
mtx_status,
mtx_status_to_online_set,
mtx_transfer,
},
};
#[api(
input: {
properties: {
name: {
schema: CHANGER_ID_SCHEMA,
},
},
},
returns: {
description: "A status entry for each drive and slot.",
type: Array,
items: {
type: MtxStatusEntry,
},
},
)]
/// Get tape changer status
pub fn get_status(name: String) -> Result<Vec<MtxStatusEntry>, Error> {
let (config, _digest) = config::drive::config()?;
let data: ScsiTapeChanger = config.lookup("changer", &name)?;
let status = mtx_status(&data.path)?;
let state_path = Path::new(TAPE_STATUS_DIR);
let inventory = Inventory::load(state_path)?;
let mut map = OnlineStatusMap::new(&config)?;
let online_set = mtx_status_to_online_set(&status, &inventory);
map.update_online_status(&name, online_set)?;
let mut state_db = MediaStateDatabase::load(state_path)?;
state_db.update_online_status(&map)?;
let mut list = Vec::new();
for (id, drive_status) in status.drives.iter().enumerate() {
let entry = MtxStatusEntry {
entry_kind: MtxEntryKind::Drive,
entry_id: id as u64,
changer_id: match &drive_status.status {
ElementStatus::Empty => None,
ElementStatus::Full => Some(String::new()),
ElementStatus::VolumeTag(tag) => Some(tag.to_string()),
},
loaded_slot: drive_status.loaded_slot,
};
list.push(entry);
}
for (id, slot_status) in status.slots.iter().enumerate() {
let entry = MtxStatusEntry {
entry_kind: MtxEntryKind::Slot,
entry_id: id as u64 + 1,
changer_id: match &slot_status {
ElementStatus::Empty => None,
ElementStatus::Full => Some(String::new()),
ElementStatus::VolumeTag(tag) => Some(tag.to_string()),
},
loaded_slot: None,
};
list.push(entry);
}
Ok(list)
}
#[api(
input: {
properties: {
name: {
schema: CHANGER_ID_SCHEMA,
},
from: {
description: "Source slot number",
minimum: 1,
},
to: {
description: "Destination slot number",
minimum: 1,
},
},
},
)]
/// Transfers media from one slot to another
pub fn transfer(
name: String,
from: u64,
to: u64,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
let data: ScsiTapeChanger = config.lookup("changer", &name)?;
mtx_transfer(&data.path, from, to)?;
Ok(())
}
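
Slot numbers are 1-based, matching the `minimum: 1` constraints in the schema above; a hypothetical transfer between two storage slots:

    // move the cartridge sitting in slot 2 of "changer0" into slot 5
    transfer("changer0".to_string(), 2, 5)?;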
#[api(
input: {
properties: {},
},
returns: {
description: "The list of autodetected tape changers.",
type: Array,
items: {
type: TapeDeviceInfo,
},
},
)]
/// Scan for SCSI tape changers
pub fn scan_changers(_param: Value) -> Result<Vec<TapeDeviceInfo>, Error> {
let list = linux_tape_changer_list();
Ok(list)
}
const SUBDIRS: SubdirMap = &[
(
"scan",
&Router::new()
.get(&API_METHOD_SCAN_CHANGERS)
),
];
pub const ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))
.subdirs(SUBDIRS);
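
Both scan endpoints are plain synchronous handlers, so they can also be called directly, for example from a test; a small sketch listing autodetected changers (fields as defined by TapeDeviceInfo in the tape types module):

    let changers = scan_changers(serde_json::Value::Null)?;
    for dev in changers {
        println!("{} {} {} (serial {})", dev.path, dev.vendor, dev.model, dev.serial);
    }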

src/api2/tape/drive.rs (new file)
@@ -0,0 +1,816 @@
use std::path::Path;
use std::sync::Arc;
use anyhow::{bail, Error};
use serde_json::Value;
use proxmox::{
sortable,
identity,
list_subdirs_api_method,
tools::Uuid,
sys::error::SysError,
api::{
api,
RpcEnvironment,
Router,
SubdirMap,
},
};
use crate::{
config::{
self,
drive::check_drive_exists,
},
api2::types::{
UPID_SCHEMA,
DRIVE_ID_SCHEMA,
MEDIA_LABEL_SCHEMA,
MEDIA_POOL_NAME_SCHEMA,
Authid,
LinuxTapeDrive,
ScsiTapeChanger,
TapeDeviceInfo,
MediaLabelInfoFlat,
LabelUuidMap,
},
server::WorkerTask,
tape::{
TAPE_STATUS_DIR,
TapeDriver,
MediaChange,
Inventory,
MediaStateDatabase,
MediaId,
mtx_load,
mtx_unload,
linux_tape_device_list,
open_drive,
media_changer,
update_changer_online_status,
file_formats::{
DriveLabel,
MediaSetLabel,
},
},
};
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
slot: {
description: "Source slot number",
minimum: 1,
},
},
},
)]
/// Load media via changer from slot
pub fn load_slot(
drive: String,
slot: u64,
_param: Value,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
let drive_config: LinuxTapeDrive = config.lookup("linux", &drive)?;
let changer: ScsiTapeChanger = match drive_config.changer {
Some(ref changer) => config.lookup("changer", changer)?,
None => bail!("drive '{}' has no associated changer", drive),
};
let drivenum = drive_config.changer_drive_id.unwrap_or(0);
mtx_load(&changer.path, slot, drivenum)
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
"changer-id": {
schema: MEDIA_LABEL_SCHEMA,
},
},
},
)]
/// Load media with specified label
///
/// Issue a media load request to the associated changer device.
pub fn load_media(drive: String, changer_id: String) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
let (mut changer, _) = media_changer(&config, &drive, false)?;
changer.load_media(&changer_id)?;
Ok(())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
slot: {
description: "Target slot number. If omitted, defaults to the slot that the drive was loaded from.",
minimum: 1,
optional: true,
},
},
},
)]
/// Unload media via changer
pub fn unload(
drive: String,
slot: Option<u64>,
_param: Value,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
let mut drive_config: LinuxTapeDrive = config.lookup("linux", &drive)?;
let changer: ScsiTapeChanger = match drive_config.changer {
Some(ref changer) => config.lookup("changer", changer)?,
None => bail!("drive '{}' has no associated changer", drive),
};
let drivenum: u64 = 0;
if let Some(slot) = slot {
mtx_unload(&changer.path, slot, drivenum)
} else {
drive_config.unload_media()
}
}
#[api(
input: {
properties: {},
},
returns: {
description: "The list of autodetected tape drives.",
type: Array,
items: {
type: TapeDeviceInfo,
},
},
)]
/// Scan tape drives
pub fn scan_drives(_param: Value) -> Result<Vec<TapeDeviceInfo>, Error> {
let list = linux_tape_device_list();
Ok(list)
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
fast: {
description: "Use fast erase.",
type: bool,
optional: true,
default: true,
},
},
},
returns: {
schema: UPID_SCHEMA,
},
)]
/// Erase media
pub fn erase_media(
drive: String,
fast: Option<bool>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let (config, _digest) = config::drive::config()?;
check_drive_exists(&config, &drive)?; // early check before starting worker
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let upid_str = WorkerTask::new_thread(
"erase-media",
Some(drive.clone()),
auth_id,
true,
move |_worker| {
let mut drive = open_drive(&config, &drive)?;
drive.erase_media(fast.unwrap_or(true))?;
Ok(())
}
)?;
Ok(upid_str.into())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
},
},
returns: {
schema: UPID_SCHEMA,
},
)]
/// Rewind tape
pub fn rewind(
drive: String,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let (config, _digest) = config::drive::config()?;
check_drive_exists(&config, &drive)?; // early check before starting worker
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let upid_str = WorkerTask::new_thread(
"rewind-media",
Some(drive.clone()),
auth_id,
true,
move |_worker| {
let mut drive = open_drive(&config, &drive)?;
drive.rewind()?;
Ok(())
}
)?;
Ok(upid_str.into())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
},
},
)]
/// Eject/Unload drive media
pub fn eject_media(drive: String) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
let (mut changer, _) = media_changer(&config, &drive, false)?;
if !changer.eject_on_unload() {
let mut drive = open_drive(&config, &drive)?;
drive.eject_media()?;
}
changer.unload_media()?;
Ok(())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
"changer-id": {
schema: MEDIA_LABEL_SCHEMA,
},
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
optional: true,
},
},
},
returns: {
schema: UPID_SCHEMA,
},
)]
/// Label media
///
/// Write a new media label to the media in 'drive'. The media is
/// assigned to the specified 'pool', or else to the free media pool.
///
/// Note: The media needs to be empty (you may want to erase it first).
pub fn label_media(
drive: String,
pool: Option<String>,
changer_id: String,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
if let Some(ref pool) = pool {
let (pool_config, _digest) = config::media_pool::config()?;
if pool_config.sections.get(pool).is_none() {
bail!("no such pool ('{}')", pool);
}
}
let (config, _digest) = config::drive::config()?;
let upid_str = WorkerTask::new_thread(
"label-media",
Some(drive.clone()),
auth_id,
true,
move |worker| {
let mut drive = open_drive(&config, &drive)?;
drive.rewind()?;
match drive.read_next_file() {
Ok(Some(_file)) => bail!("media is not empty (erase first)"),
Ok(None) => { /* EOF mark at BOT, assume tape is empty */ },
Err(err) => {
if err.is_errno(nix::errno::Errno::ENOSPC) || err.is_errno(nix::errno::Errno::EIO) {
/* assume tape is empty */
} else {
bail!("media read error - {}", err);
}
}
}
let ctime = proxmox::tools::time::epoch_i64();
let label = DriveLabel {
changer_id: changer_id.to_string(),
uuid: Uuid::generate(),
ctime,
};
write_media_label(worker, &mut drive, label, pool)
}
)?;
Ok(upid_str.into())
}
fn write_media_label(
worker: Arc<WorkerTask>,
drive: &mut Box<dyn TapeDriver>,
label: DriveLabel,
pool: Option<String>,
) -> Result<(), Error> {
drive.label_tape(&label)?;
let mut media_set_label = None;
if let Some(ref pool) = pool {
// assign media to pool by writing special media set label
worker.log(format!("Label media '{}' for pool '{}'", label.changer_id, pool));
let set = MediaSetLabel::with_data(&pool, [0u8; 16].into(), 0, label.ctime);
drive.write_media_set_label(&set)?;
media_set_label = Some(set);
} else {
worker.log(format!("Label media '{}' (no pool assignment)", label.changer_id));
}
let media_id = MediaId { label, media_set_label };
let mut inventory = Inventory::load(Path::new(TAPE_STATUS_DIR))?;
inventory.store(media_id.clone())?;
drive.rewind()?;
match drive.read_label() {
Ok(Some(info)) => {
if info.label.uuid != media_id.label.uuid {
bail!("verify label failed - got wrong label uuid");
}
if let Some(ref pool) = pool {
match info.media_set_label {
Some((set, _)) => {
if set.uuid != [0u8; 16].into() {
bail!("verify media set label failed - got wrong set uuid");
}
if &set.pool != pool {
bail!("verify media set label failed - got wrong pool");
}
}
None => {
bail!("verify media set label failed (missing set label)");
}
}
}
},
Ok(None) => bail!("verify label failed (got empty media)"),
Err(err) => bail!("verify label failed - {}", err),
};
drive.rewind()?;
Ok(())
}
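
Pool assignment at label time is recorded with an all-zero media-set uuid, which the verification above treats as "assigned to a pool, but not yet part of a real media set". A minimal sketch of that marker check:

    fn is_unassigned_marker(set: &MediaSetLabel) -> bool {
        // all-zero uuid == pool assignment without a concrete media set
        set.uuid == [0u8; 16].into()
    }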
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
},
},
returns: {
type: MediaLabelInfoFlat,
},
)]
/// Read media label
pub fn read_label(drive: String) -> Result<MediaLabelInfoFlat, Error> {
let (config, _digest) = config::drive::config()?;
let mut drive = open_drive(&config, &drive)?;
let info = drive.read_label()?;
let info = match info {
Some(info) => {
let mut flat = MediaLabelInfoFlat {
uuid: info.label.uuid.to_string(),
changer_id: info.label.changer_id.clone(),
ctime: info.label.ctime,
media_set_ctime: None,
media_set_uuid: None,
pool: None,
seq_nr: None,
};
if let Some((set, _)) = info.media_set_label {
flat.pool = Some(set.pool.clone());
flat.seq_nr = Some(set.seq_nr);
flat.media_set_uuid = Some(set.uuid.to_string());
flat.media_set_ctime = Some(set.ctime);
}
flat
}
None => {
bail!("Media is empty (no label).");
}
};
Ok(info)
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
},
},
returns: {
description: "The list of media labels with associated media Uuid (if any).",
type: Array,
items: {
type: LabelUuidMap,
},
},
)]
/// List known media labels (Changer Inventory)
///
/// Note: Only useful for drives with associated changer device.
///
/// This method queries the changer to get a list of media labels.
///
/// Note: This updates the media online status.
pub fn inventory(
drive: String,
) -> Result<Vec<LabelUuidMap>, Error> {
let (config, _digest) = config::drive::config()?;
let (changer, changer_name) = media_changer(&config, &drive, false)?;
let changer_id_list = changer.list_media_changer_ids()?;
let state_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(state_path)?;
let mut state_db = MediaStateDatabase::load(state_path)?;
update_changer_online_status(&config, &mut inventory, &mut state_db, &changer_name, &changer_id_list)?;
let mut list = Vec::new();
for changer_id in changer_id_list.iter() {
if changer_id.starts_with("CLN") {
// skip cleaning unit
continue;
}
let changer_id = changer_id.to_string();
if let Some(media_id) = inventory.find_media_by_changer_id(&changer_id) {
list.push(LabelUuidMap { changer_id, uuid: Some(media_id.label.uuid.to_string()) });
} else {
list.push(LabelUuidMap { changer_id, uuid: None });
}
}
Ok(list)
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
"read-all-labels": {
description: "Load all tapes and try read labels (even if already inventoried)",
type: bool,
optional: true,
},
},
},
returns: {
schema: UPID_SCHEMA,
},
)]
/// Update inventory
///
/// Note: Only useful for drives with associated changer device.
///
/// This method queries the changer to get a list of media labels. It
/// then loads any unknown media into the drive, reads the label, and
/// stores the result in the media database.
///
/// Note: This updates the media online status.
pub fn update_inventory(
drive: String,
read_all_labels: Option<bool>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let (config, _digest) = config::drive::config()?;
check_drive_exists(&config, &drive)?; // early check before starting worker
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let upid_str = WorkerTask::new_thread(
"inventory-update",
Some(drive.clone()),
auth_id,
true,
move |worker| {
let (mut changer, changer_name) = media_changer(&config, &drive, false)?;
let changer_id_list = changer.list_media_changer_ids()?;
if changer_id_list.is_empty() {
worker.log(format!("changer device does not list any media labels"));
}
let state_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(state_path)?;
let mut state_db = MediaStateDatabase::load(state_path)?;
update_changer_online_status(&config, &mut inventory, &mut state_db, &changer_name, &changer_id_list)?;
for changer_id in changer_id_list.iter() {
if changer_id.starts_with("CLN") {
worker.log(format!("skip cleaning unit '{}'", changer_id));
continue;
}
let changer_id = changer_id.to_string();
if !read_all_labels.unwrap_or(false) {
if let Some(_) = inventory.find_media_by_changer_id(&changer_id) {
worker.log(format!("media '{}' already inventoried", changer_id));
continue;
}
}
if let Err(err) = changer.load_media(&changer_id) {
worker.warn(format!("unable to load media '{}' - {}", changer_id, err));
continue;
}
let mut drive = open_drive(&config, &drive)?;
match drive.read_label() {
Err(err) => {
worker.warn(format!("unable to read label form media '{}' - {}", changer_id, err));
}
Ok(None) => {
worker.log(format!("media '{}' is empty", changer_id));
}
Ok(Some(info)) => {
if changer_id != info.label.changer_id {
worker.warn(format!("label changer ID missmatch ({} != {})", changer_id, info.label.changer_id));
continue;
}
worker.log(format!("inventorize media '{}' with uuid '{}'", changer_id, info.label.uuid));
inventory.store(info.into())?;
}
}
}
Ok(())
}
)?;
Ok(upid_str.into())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
},
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
optional: true,
},
},
},
returns: {
schema: UPID_SCHEMA,
},
)]
/// Label media with barcodes from changer device
pub fn barcode_label_media(
drive: String,
pool: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
if let Some(ref pool) = pool {
let (pool_config, _digest) = config::media_pool::config()?;
if pool_config.sections.get(pool).is_none() {
bail!("no such pool ('{}')", pool);
}
}
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let upid_str = WorkerTask::new_thread(
"barcode-label-media",
Some(drive.clone()),
auth_id,
true,
move |worker| {
barcode_label_media_worker(worker, drive, pool)
}
)?;
Ok(upid_str.into())
}
fn barcode_label_media_worker(
worker: Arc<WorkerTask>,
drive: String,
pool: Option<String>,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
let (mut changer, changer_name) = media_changer(&config, &drive, false)?;
let changer_id_list = changer.list_media_changer_ids()?;
let state_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(state_path)?;
let mut state_db = MediaStateDatabase::load(state_path)?;
update_changer_online_status(&config, &mut inventory, &mut state_db, &changer_name, &changer_id_list)?;
if changer_id_list.is_empty() {
bail!("changer device does not list any media labels");
}
for changer_id in changer_id_list {
if changer_id.starts_with("CLN") { continue; }
inventory.reload()?;
if inventory.find_media_by_changer_id(&changer_id).is_some() {
worker.log(format!("media '{}' already inventoried (already labeled)", changer_id));
continue;
}
worker.log(format!("checking/loading media '{}'", changer_id));
if let Err(err) = changer.load_media(&changer_id) {
worker.warn(format!("unable to load media '{}' - {}", changer_id, err));
continue;
}
let mut drive = open_drive(&config, &drive)?;
drive.rewind()?;
match drive.read_next_file() {
Ok(Some(_file)) => {
worker.log(format!("media '{}' is not empty (erase first)", changer_id));
continue;
}
Ok(None) => { /* EOF mark at BOT, assume tape is empty */ },
Err(err) => {
if err.is_errno(nix::errno::Errno::ENOSPC) || err.is_errno(nix::errno::Errno::EIO) {
/* assume tape is empty */
} else {
worker.warn(format!("media '{}' read error (maybe not empty - erase first)", changer_id));
continue;
}
}
}
let ctime = proxmox::tools::time::epoch_i64();
let label = DriveLabel {
changer_id: changer_id.to_string(),
uuid: Uuid::generate(),
ctime,
};
write_media_label(worker.clone(), &mut drive, label, pool.clone())?
}
Ok(())
}
#[sortable]
pub const SUBDIRS: SubdirMap = &sorted!([
(
"barcode-label-media",
&Router::new()
.put(&API_METHOD_BARCODE_LABEL_MEDIA)
),
(
"eject-media",
&Router::new()
.put(&API_METHOD_EJECT_MEDIA)
),
(
"erase-media",
&Router::new()
.put(&API_METHOD_ERASE_MEDIA)
),
(
"inventory",
&Router::new()
.get(&API_METHOD_INVENTORY)
.put(&API_METHOD_UPDATE_INVENTORY)
),
(
"label-media",
&Router::new()
.put(&API_METHOD_LABEL_MEDIA)
),
(
"load-slot",
&Router::new()
.put(&API_METHOD_LOAD_SLOT)
),
(
"read-label",
&Router::new()
.get(&API_METHOD_READ_LABEL)
),
(
"rewind",
&Router::new()
.put(&API_METHOD_REWIND)
),
(
"scan",
&Router::new()
.get(&API_METHOD_SCAN_DRIVES)
),
(
"unload",
&Router::new()
.put(&API_METHOD_UNLOAD)
),
]);
pub const ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))
.subdirs(SUBDIRS);

src/api2/tape/mod.rs (new file)
@@ -0,0 +1,15 @@
use proxmox::api::router::SubdirMap;
use proxmox::api::Router;
use proxmox::list_subdirs_api_method;
pub mod drive;
pub mod changer;
pub const SUBDIRS: SubdirMap = &[
("changer", &changer::ROUTER),
("drive", &drive::ROUTER),
];
pub const ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))
.subdirs(SUBDIRS);


@@ -5,7 +5,7 @@ use proxmox::api::{api, schema::*};
 use proxmox::const_regex;
 use proxmox::{IPRE, IPRE_BRACKET, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};

-use crate::backup::{CryptMode, BACKUP_ID_REGEX};
+use crate::backup::{CryptMode, Fingerprint, BACKUP_ID_REGEX};
 use crate::server::UPID;

 #[macro_use]
@@ -20,6 +20,9 @@ pub use userid::Userid;
 pub use userid::Authid;
 pub use userid::{PROXMOX_TOKEN_ID_SCHEMA, PROXMOX_TOKEN_NAME_SCHEMA, PROXMOX_GROUP_ID_SCHEMA};

+mod tape;
+pub use tape::*;
+
 // File names: may not contain slashes, may not start with "."
 pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {
     if name.starts_with('.') {
@@ -484,6 +487,10 @@ pub struct SnapshotVerifyState {
             type: SnapshotVerifyState,
             optional: true,
         },
+        fingerprint: {
+            type: String,
+            optional: true,
+        },
         files: {
             items: {
                 schema: BACKUP_ARCHIVE_NAME_SCHEMA
@@ -508,6 +515,9 @@ pub struct SnapshotListItem {
     /// The result of the last run verify task
     #[serde(skip_serializing_if="Option::is_none")]
     pub verification: Option<SnapshotVerifyState>,
+    /// Fingerprint of encryption key
+    #[serde(skip_serializing_if="Option::is_none")]
+    pub fingerprint: Option<Fingerprint>,
     /// List of contained archive files.
     pub files: Vec<BackupContent>,
     /// Overall snapshot size (sum of all archive sizes).
@@ -692,7 +702,7 @@ pub struct TypeCounts {
         },
     },
 )]
-#[derive(Serialize, Deserialize)]
+#[derive(Serialize, Deserialize, Default)]
 /// Counts of groups/snapshots per BackupType.
 pub struct Counts {
     /// The counts for CT backups
@@ -707,8 +717,14 @@ pub struct Counts {

 #[api(
     properties: {
-        "gc-status": { type: GarbageCollectionStatus, },
-        counts: { type: Counts, }
+        "gc-status": {
+            type: GarbageCollectionStatus,
+            optional: true,
+        },
+        counts: {
+            type: Counts,
+            optional: true,
+        },
     },
 )]
 #[derive(Serialize, Deserialize)]
@@ -722,9 +738,11 @@ pub struct DataStoreStatus {
     /// Available space (bytes).
     pub avail: u64,
     /// Status of last GC
-    pub gc_status: GarbageCollectionStatus,
+    #[serde(skip_serializing_if="Option::is_none")]
+    pub gc_status: Option<GarbageCollectionStatus>,
     /// Group/Snapshot counts
-    pub counts: Counts,
+    #[serde(skip_serializing_if="Option::is_none")]
+    pub counts: Option<Counts>,
 }

 #[api(
@@ -1177,6 +1195,9 @@ pub struct APTUpdateInfo {
     pub section: String,
     /// URL under which the package's changelog can be retrieved
     pub change_log_url: String,
+    /// Custom extra field for additional package information
+    #[serde(skip_serializing_if="Option::is_none")]
+    pub extra_info: Option<String>,
 }

 #[api()]

@@ -0,0 +1,39 @@
use ::serde::{Deserialize, Serialize};
use proxmox::api::api;
#[api()]
#[derive(Debug,Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Kind of device
pub enum DeviceKind {
/// Tape changer (Autoloader, Robot)
Changer,
/// Normal SCSI tape device
Tape,
}
#[api(
properties: {
kind: {
type: DeviceKind,
},
},
)]
#[derive(Debug,Serialize,Deserialize)]
/// Tape device information
pub struct TapeDeviceInfo {
pub kind: DeviceKind,
/// Path to the linux device node
pub path: String,
/// Serial number (autodetected)
pub serial: String,
/// Vendor (autodetected)
pub vendor: String,
/// Model (autodetected)
pub model: String,
/// Device major number
pub major: u32,
/// Device minor number
pub minor: u32,
}


@@ -0,0 +1,169 @@
//! Types for tape drive API
use serde::{Deserialize, Serialize};
use proxmox::api::{
api,
schema::{Schema, IntegerSchema, StringSchema},
};
use crate::api2::types::PROXMOX_SAFE_ID_FORMAT;
pub const DRIVE_ID_SCHEMA: Schema = StringSchema::new("Drive Identifier.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const CHANGER_ID_SCHEMA: Schema = StringSchema::new("Tape Changer Identifier.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const LINUX_DRIVE_PATH_SCHEMA: Schema = StringSchema::new(
"The path to a LINUX non-rewinding SCSI tape device (i.e. '/dev/nst0')")
.schema();
pub const SCSI_CHANGER_PATH_SCHEMA: Schema = StringSchema::new(
"Path to Linux generic SCSI device (i.e. '/dev/sg4')")
.schema();
pub const MEDIA_LABEL_SCHEMA: Schema = StringSchema::new("Media Label/Barcode.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const CHANGER_DRIVE_ID_SCHEMA: Schema = IntegerSchema::new(
"Associated changer drive number (requires option changer)")
.minimum(0)
.maximum(8)
.default(0)
.schema();
#[api(
properties: {
name: {
schema: DRIVE_ID_SCHEMA,
}
}
)]
#[derive(Serialize,Deserialize)]
/// Simulate tape drives (only for test and debug)
#[serde(rename_all = "kebab-case")]
pub struct VirtualTapeDrive {
pub name: String,
/// Path to directory
pub path: String,
/// Virtual tape size
#[serde(skip_serializing_if="Option::is_none")]
pub max_size: Option<usize>,
}
#[api(
properties: {
name: {
schema: DRIVE_ID_SCHEMA,
},
path: {
schema: LINUX_DRIVE_PATH_SCHEMA,
},
changer: {
schema: CHANGER_ID_SCHEMA,
optional: true,
},
"changer-drive-id": {
schema: CHANGER_DRIVE_ID_SCHEMA,
optional: true,
},
}
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Linux SCSI tape drive
pub struct LinuxTapeDrive {
pub name: String,
pub path: String,
#[serde(skip_serializing_if="Option::is_none")]
pub changer: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub changer_drive_id: Option<u64>,
}
#[api(
properties: {
name: {
schema: CHANGER_ID_SCHEMA,
},
path: {
schema: SCSI_CHANGER_PATH_SCHEMA,
},
}
)]
#[derive(Serialize,Deserialize)]
/// SCSI tape changer
pub struct ScsiTapeChanger {
pub name: String,
pub path: String,
}
#[api()]
#[derive(Serialize,Deserialize)]
/// Drive list entry
pub struct DriveListEntry {
/// Drive name
pub name: String,
/// Path to the linux device node
pub path: String,
/// Associated changer device
#[serde(skip_serializing_if="Option::is_none")]
pub changer: Option<String>,
/// Vendor (autodetected)
#[serde(skip_serializing_if="Option::is_none")]
pub vendor: Option<String>,
/// Model (autodetected)
#[serde(skip_serializing_if="Option::is_none")]
pub model: Option<String>,
/// Serial number (autodetected)
#[serde(skip_serializing_if="Option::is_none")]
pub serial: Option<String>,
}
#[api()]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "lowercase")]
/// Mtx Entry Kind
pub enum MtxEntryKind {
/// Drive
Drive,
/// Slot
Slot,
}
#[api(
properties: {
"entry-kind": {
type: MtxEntryKind,
},
"changer-id": {
schema: MEDIA_LABEL_SCHEMA,
optional: true,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Mtx Status Entry
pub struct MtxStatusEntry {
pub entry_kind: MtxEntryKind,
/// The ID of the slot or drive
pub entry_id: u64,
/// The media label (volume tag) if the slot/drive is full
#[serde(skip_serializing_if="Option::is_none")]
pub changer_id: Option<String>,
/// The slot the drive was loaded from
#[serde(skip_serializing_if="Option::is_none")]
pub loaded_slot: Option<u64>,
}
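A hedged sketch of how an MtxStatusEntry for a loaded drive looks on the wire; all values are invented, and serde_json is assumed:

// rename_all = "kebab-case" plus the lowercase entry kind yield:
// {"entry-kind":"drive","entry-id":0,"changer-id":"tape1","loaded-slot":3}
let entry = MtxStatusEntry {
    entry_kind: MtxEntryKind::Drive,
    entry_id: 0,
    changer_id: Some("tape1".to_string()),
    loaded_slot: Some(3),
};
println!("{}", serde_json::to_string(&entry)?);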


@ -0,0 +1,96 @@
use ::serde::{Deserialize, Serialize};
use proxmox::api::api;
use super::{
MediaStatus,
};
#[api()]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "lowercase")]
/// Media location
pub enum MediaLocationKind {
/// Ready for use (inside tape library)
Online,
    /// Locally available, but needs to be mounted (insert into tape drive)
    Offline,
/// Media is inside a Vault
Vault,
}
#[api(
properties: {
location: {
type: MediaLocationKind,
},
status: {
type: MediaStatus,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Media list entry
pub struct MediaListEntry {
/// Media changer ID
pub changer_id: String,
/// Media Uuid
pub uuid: String,
pub location: MediaLocationKind,
/// Media location hint (vault name, changer name)
pub location_hint: Option<String>,
pub status: MediaStatus,
/// Expired flag
pub expired: bool,
/// Media set name
#[serde(skip_serializing_if="Option::is_none")]
pub media_set_name: Option<String>,
/// Media set uuid
#[serde(skip_serializing_if="Option::is_none")]
pub media_set_uuid: Option<String>,
/// Media set seq_nr
#[serde(skip_serializing_if="Option::is_none")]
pub seq_nr: Option<u64>,
/// Media Pool
#[serde(skip_serializing_if="Option::is_none")]
pub pool: Option<String>,
}
#[api()]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Media label info
pub struct MediaLabelInfoFlat {
/// Unique ID
pub uuid: String,
/// Media Changer ID or Barcode
pub changer_id: String,
/// Creation time stamp
pub ctime: i64,
// All MediaSet properties are optional here
/// MediaSet Pool
#[serde(skip_serializing_if="Option::is_none")]
pub pool: Option<String>,
/// MediaSet Uuid. We use the all-zero Uuid to reserve an empty media for a specific pool
#[serde(skip_serializing_if="Option::is_none")]
pub media_set_uuid: Option<String>,
/// MediaSet media sequence number
#[serde(skip_serializing_if="Option::is_none")]
pub seq_nr: Option<u64>,
/// MediaSet Creation time stamp
#[serde(skip_serializing_if="Option::is_none")]
pub media_set_ctime: Option<i64>,
}
#[api()]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Label with optional Uuid
pub struct LabelUuidMap {
/// Changer ID (label)
pub changer_id: String,
/// Associated Uuid (if any)
pub uuid: Option<String>,
}


@ -0,0 +1,154 @@
//! Types for tape media pool API
//!
//! Note: Both MediaSetPolicy and RetentionPolicy are complex enums,
//! so we cannot use them directly for the API. Instead, we represent
//! them as String.
use anyhow::Error;
use std::str::FromStr;
use serde::{Deserialize, Serialize};
use proxmox::api::{
api,
schema::{Schema, StringSchema, ApiStringFormat},
};
use crate::{
tools::systemd::time::{
CalendarEvent,
TimeSpan,
parse_time_span,
parse_calendar_event,
},
api2::types::{
DRIVE_ID_SCHEMA,
PROXMOX_SAFE_ID_FORMAT,
SINGLE_LINE_COMMENT_FORMAT,
},
};
pub const MEDIA_POOL_NAME_SCHEMA: Schema = StringSchema::new("Media pool name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(2)
.max_length(32)
.schema();
pub const MEDIA_SET_NAMING_TEMPLATE_SCHEMA: Schema = StringSchema::new(
"Media set naming template.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
pub const MEDIA_SET_ALLOCATION_POLICY_FORMAT: ApiStringFormat =
ApiStringFormat::VerifyFn(|s| { MediaSetPolicy::from_str(s)?; Ok(()) });
pub const MEDIA_SET_ALLOCATION_POLICY_SCHEMA: Schema = StringSchema::new(
"Media set allocation policy.")
.format(&MEDIA_SET_ALLOCATION_POLICY_FORMAT)
.schema();
/// Media set allocation policy
pub enum MediaSetPolicy {
/// Try to use the current media set
ContinueCurrent,
/// Each backup job creates a new media set
AlwaysCreate,
/// Create a new set when the specified CalendarEvent triggers
CreateAt(CalendarEvent),
}
impl std::str::FromStr for MediaSetPolicy {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
if s == "continue" {
return Ok(MediaSetPolicy::ContinueCurrent);
}
if s == "always" {
return Ok(MediaSetPolicy::AlwaysCreate);
}
let event = parse_calendar_event(s)?;
Ok(MediaSetPolicy::CreateAt(event))
}
}
pub const MEDIA_RETENTION_POLICY_FORMAT: ApiStringFormat =
ApiStringFormat::VerifyFn(|s| { RetentionPolicy::from_str(s)?; Ok(()) });
pub const MEDIA_RETENTION_POLICY_SCHEMA: Schema = StringSchema::new(
"Media retention policy.")
.format(&MEDIA_RETENTION_POLICY_FORMAT)
.schema();
/// Media retention Policy
pub enum RetentionPolicy {
/// Always overwrite media
OverwriteAlways,
/// Protect data for the timespan specified
ProtectFor(TimeSpan),
/// Never overwrite data
KeepForever,
}
impl std::str::FromStr for RetentionPolicy {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
if s == "overwrite" {
return Ok(RetentionPolicy::OverwriteAlways);
}
if s == "keep" {
return Ok(RetentionPolicy::KeepForever);
}
let time_span = parse_time_span(s)?;
Ok(RetentionPolicy::ProtectFor(time_span))
}
}
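Taken together, the two FromStr impls accept a small string language; a sketch of the accepted inputs, assuming parse_calendar_event and parse_time_span take systemd-style values such as "daily" and "4 weeks":

// inside a function returning anyhow::Result<()>
let p: MediaSetPolicy = "continue".parse()?;  // MediaSetPolicy::ContinueCurrent
let p: MediaSetPolicy = "daily".parse()?;     // MediaSetPolicy::CreateAt(..)
let r: RetentionPolicy = "keep".parse()?;     // RetentionPolicy::KeepForever
let r: RetentionPolicy = "4 weeks".parse()?;  // RetentionPolicy::ProtectFor(..)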
#[api(
properties: {
name: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
drive: {
schema: DRIVE_ID_SCHEMA,
},
allocation: {
schema: MEDIA_SET_ALLOCATION_POLICY_SCHEMA,
optional: true,
},
retention: {
schema: MEDIA_RETENTION_POLICY_SCHEMA,
optional: true,
},
template: {
schema: MEDIA_SET_NAMING_TEMPLATE_SCHEMA,
optional: true,
},
}
)]
#[derive(Serialize,Deserialize)]
/// Media pool configuration
pub struct MediaPoolConfig {
/// The pool name
pub name: String,
/// The associated drive
pub drive: String,
/// Media Set allocation policy
#[serde(skip_serializing_if="Option::is_none")]
pub allocation: Option<String>,
/// Media retention policy
#[serde(skip_serializing_if="Option::is_none")]
pub retention: Option<String>,
/// Media set naming template (default "%id%")
///
/// The template is UTF8 text, and can include strftime time
/// format specifications.
#[serde(skip_serializing_if="Option::is_none")]
pub template: Option<String>,
}
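A sketch of a complete pool built from the pieces above (every value invented): a pool that starts a new media set daily and protects written media for four weeks.

let pool = MediaPoolConfig {
    name: "daily-pool".to_string(),
    drive: "drive0".to_string(),
    allocation: Some("daily".to_string()),
    retention: Some("4 weeks".to_string()),
    template: None, // falls back to the default "%id%"
};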


@ -0,0 +1,21 @@
use ::serde::{Deserialize, Serialize};
use proxmox::api::api;
#[api()]
#[derive(Debug, PartialEq, Copy, Clone, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Media status
pub enum MediaStatus {
/// Media is ready to be written
Writable,
/// Media is full (contains data)
Full,
/// Media is marked as unknown, needs rescan
Unknown,
/// Media is marked as damaged
Damaged,
/// Media is marked as retired
Retired,
}


@ -0,0 +1,16 @@
//! Types for tape backup API
mod device;
pub use device::*;
mod drive;
pub use drive::*;
mod media_pool;
pub use media_pool::*;
mod media_status;
pub use media_status::*;
mod media;
pub use media::*;


@ -419,12 +419,10 @@ impl<'a> TryFrom<&'a str> for &'a TokennameRef {
}

/// A complete user id consisting of a user name and a realm
-#[derive(Clone, Debug, Hash)]
+#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct Userid {
    data: String,
    name_len: usize,
-   //name: Username,
-   //realm: Realm,
}
impl Userid {
@ -460,14 +458,6 @@ lazy_static! {
pub static ref ROOT_USERID: Userid = Userid::new("root@pam".to_string(), 4);
}

-impl Eq for Userid {}
-
-impl PartialEq for Userid {
-    fn eq(&self, rhs: &Self) -> bool {
-        self.data == rhs.data && self.name_len == rhs.name_len
-    }
-}
-
impl From<Authid> for Userid {
    fn from(authid: Authid) -> Self {
        authid.user


@ -247,6 +247,9 @@ pub use prune::*;
mod datastore;
pub use datastore::*;

+mod store_progress;
+pub use store_progress::*;

mod verify;
pub use verify::*;


@ -145,20 +145,6 @@ impl BackupGroup {
    Ok(last)
}

-pub fn list_groups(base_path: &Path) -> Result<Vec<BackupGroup>, Error> {
-    let mut list = Vec::new();
-
-    tools::scandir(libc::AT_FDCWD, base_path, &BACKUP_TYPE_REGEX, |l0_fd, backup_type, file_type| {
-        if file_type != nix::dir::Type::Directory { return Ok(()); }
-        tools::scandir(l0_fd, backup_type, &BACKUP_ID_REGEX, |_l1_fd, backup_id, file_type| {
-            if file_type != nix::dir::Type::Directory { return Ok(()); }
-            list.push(BackupGroup::new(backup_type, backup_id));
-            Ok(())
-        })
-    })?;
-
-    Ok(list)
-}
}

impl std::fmt::Display for BackupGroup {
@ -327,26 +313,20 @@ impl BackupInfo {
    Ok(files)
}

-pub fn list_backups(base_path: &Path) -> Result<Vec<BackupInfo>, Error> {
+pub fn list_backup_groups(base_path: &Path) -> Result<Vec<BackupGroup>, Error> {
    let mut list = Vec::new();

    tools::scandir(libc::AT_FDCWD, base_path, &BACKUP_TYPE_REGEX, |l0_fd, backup_type, file_type| {
        if file_type != nix::dir::Type::Directory { return Ok(()); }
-       tools::scandir(l0_fd, backup_type, &BACKUP_ID_REGEX, |l1_fd, backup_id, file_type| {
+       tools::scandir(l0_fd, backup_type, &BACKUP_ID_REGEX, |_, backup_id, file_type| {
            if file_type != nix::dir::Type::Directory { return Ok(()); }
-           tools::scandir(l1_fd, backup_id, &BACKUP_DATE_REGEX, |l2_fd, backup_time_string, file_type| {
-               if file_type != nix::dir::Type::Directory { return Ok(()); }
-
-               let backup_dir = BackupDir::with_rfc3339(backup_type, backup_id, backup_time_string)?;
-               let files = list_backup_files(l2_fd, backup_time_string)?;
-
-               list.push(BackupInfo { backup_dir, files });
-
-               Ok(())
-           })
+           list.push(BackupGroup::new(backup_type, backup_id));
+           Ok(())
        })
    })?;

    Ok(list)
}


@ -154,9 +154,11 @@ impl ChunkStore {
}

pub fn cond_touch_chunk(&self, digest: &[u8; 32], fail_if_not_exist: bool) -> Result<bool, Error> {
    let (chunk_path, _digest_str) = self.chunk_path(digest);
+   self.cond_touch_path(&chunk_path, fail_if_not_exist)
+}
+
+pub fn cond_touch_path(&self, path: &Path, fail_if_not_exist: bool) -> Result<bool, Error> {
    const UTIME_NOW: i64 = (1 << 30) - 1;
    const UTIME_OMIT: i64 = (1 << 30) - 2;
@ -167,7 +169,7 @@ impl ChunkStore {
    use nix::NixPath;

-   let res = chunk_path.with_nix_path(|cstr| unsafe {
+   let res = path.with_nix_path(|cstr| unsafe {
        let tmp = libc::utimensat(-1, cstr.as_ptr(), &times[0], libc::AT_SYMLINK_NOFOLLOW);
        nix::errno::Errno::result(tmp)
    })?;
@ -177,7 +179,7 @@ impl ChunkStore {
        return Ok(false);
    }

-   bail!("update atime failed for chunk {:?} - {}", chunk_path, err);
+   bail!("update atime failed for chunk/file {:?} - {}", path, err);
}

Ok(true)
@ -299,7 +301,7 @@ impl ChunkStore {
    last_percentage = percentage;
    crate::task_log!(
        worker,
-       "percentage done: phase2 {}% (processed {} chunks)",
+       "processed {}% ({} chunks)",
        percentage,
        chunk_count,
    );
@ -328,49 +330,13 @@ impl ChunkStore {
let lock = self.mutex.lock();

if let Ok(stat) = fstatat(dirfd, filename, nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW) {
-   if bad {
-       // filename validity checked in iterator
-       let orig_filename = std::ffi::CString::new(&filename.to_bytes()[..64])?;
-       match fstatat(
-           dirfd,
-           orig_filename.as_c_str(),
-           nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW)
-       {
-           Ok(_) => {
-               match unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir) {
-                   Err(err) =>
-                       crate::task_warn!(
-                           worker,
-                           "unlinking corrupt chunk {:?} failed on store '{}' - {}",
-                           filename,
-                           self.name,
-                           err,
-                       ),
-                   Ok(_) => {
-                       status.removed_bad += 1;
-                       status.removed_bytes += stat.st_size as u64;
-                   }
-               }
-           },
-           Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => {
-               // chunk hasn't been rewritten yet, keep .bad file
-               status.still_bad += 1;
-           },
-           Err(err) => {
-               // some other error, warn user and keep .bad file around too
-               status.still_bad += 1;
-               crate::task_warn!(
-                   worker,
-                   "error during stat on '{:?}' - {}",
-                   orig_filename,
-                   err,
-               );
-           }
-       }
-   } else if stat.st_atime < min_atime {
+   if stat.st_atime < min_atime {
        //let age = now - stat.st_atime;
        //println!("UNLINK {} {:?}", age/(3600*24), filename);
        if let Err(err) = unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir) {
+           if bad {
+               status.still_bad += 1;
+           }
            bail!(
                "unlinking chunk {:?} failed on store '{}' - {}",
                filename,
@ -378,13 +344,23 @@ impl ChunkStore {
                err,
            );
        }
-       status.removed_chunks += 1;
+       if bad {
+           status.removed_bad += 1;
+       } else {
+           status.removed_chunks += 1;
+       }
        status.removed_bytes += stat.st_size as u64;
    } else if stat.st_atime < oldest_writer {
-       status.pending_chunks += 1;
+       if bad {
+           status.still_bad += 1;
+       } else {
+           status.pending_chunks += 1;
+       }
        status.pending_bytes += stat.st_size as u64;
    } else {
-       status.disk_chunks += 1;
+       if !bad {
+           status.disk_chunks += 1;
+       }
        status.disk_bytes += stat.st_size as u64;
    }
}


@ -7,6 +7,8 @@
//! encryption](https://en.wikipedia.org/wiki/Authenticated_encryption)
//! for a short introduction.

+use std::fmt;
+use std::fmt::Display;
use std::io::Write;

use anyhow::{bail, Error};
@ -15,8 +17,15 @@ use openssl::pkcs5::pbkdf2_hmac;
use openssl::symm::{decrypt_aead, Cipher, Crypter, Mode};
use serde::{Deserialize, Serialize};

+use crate::tools::format::{as_fingerprint, bytes_as_fingerprint};
+
use proxmox::api::api;

+// openssl::sha::sha256(b"Proxmox Backup Encryption Key Fingerprint")
+const FINGERPRINT_INPUT: [u8; 32] = [ 110, 208, 239, 119,  71,  31, 255,  77,
+                                       85, 199, 168, 254,  74, 157, 182,  33,
+                                       97,  64, 127,  19,  76, 114,  93, 223,
+                                       48, 153,  45,  37, 236,  69, 237,  38, ];
+
#[api(default: "encrypt")]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
@ -30,6 +39,30 @@ pub enum CryptMode {
    SignOnly,
}
#[derive(Debug, Eq, PartialEq, Deserialize, Serialize)]
#[serde(transparent)]
/// 32-byte fingerprint, usually calculated with SHA256.
pub struct Fingerprint {
#[serde(with = "bytes_as_fingerprint")]
bytes: [u8; 32],
}
impl Fingerprint {
pub fn new(bytes: [u8; 32]) -> Self {
Self { bytes }
}
pub fn bytes(&self) -> &[u8; 32] {
&self.bytes
}
}
/// Display as short key ID
impl Display for Fingerprint {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", as_fingerprint(&self.bytes[0..8]))
}
}
/// Encryption Configuration with secret key
///
/// This structure stores the secret key and provides helpers for
@ -101,6 +134,10 @@ impl CryptConfig {
    tag
}
pub fn fingerprint(&self) -> Fingerprint {
Fingerprint::new(self.compute_digest(&FINGERPRINT_INPUT))
}
pub fn data_crypter(&self, iv: &[u8; 16], mode: Mode) -> Result<Crypter, Error> {
    let mut crypter = openssl::symm::Crypter::new(self.cipher, mode, &self.enc_key, Some(iv))?;
    crypter.aad_update(b"")?; //??
@ -219,7 +256,13 @@ impl CryptConfig {
) -> Result<Vec<u8>, Error> {

    let modified = proxmox::tools::time::epoch_i64();
-   let key_config = super::KeyConfig { kdf: None, created, modified, data: self.enc_key.to_vec() };
+   let key_config = super::KeyConfig {
+       kdf: None,
+       created,
+       modified,
+       data: self.enc_key.to_vec(),
+       fingerprint: Some(self.fingerprint()),
+   };
    let data = serde_json::to_string(&key_config)?.as_bytes().to_vec();

    let mut buffer = vec![0u8; rsa.size() as usize];
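A sketch of how the new fingerprint surfaces (the key below is a dummy): the value is derived from the fixed FINGERPRINT_INPUT and the loaded key via the compute_digest call above, so equal keys always yield equal fingerprints, and Display abbreviates to the first 8 bytes.

let key = [1u8; 32]; // dummy key, for illustration only
let config = CryptConfig::new(key)?;
let fp: Fingerprint = config.fingerprint();
println!("Encryption key fingerprint: {}", fp);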


@ -3,6 +3,7 @@ use std::io::{self, Write};
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex};
use std::convert::TryFrom;
+use std::str::FromStr;
use std::time::Duration;
use std::fs::File;
@ -243,7 +244,7 @@ impl DataStore {
let (_guard, _manifest_guard);
if !force {
    _guard = lock_dir_noblock(&full_path, "snapshot", "possibly running or in use")?;
-   _manifest_guard = self.lock_manifest(backup_dir);
+   _manifest_guard = self.lock_manifest(backup_dir)?;
}

log::info!("removing backup snapshot {:?}", full_path);
@ -256,6 +257,12 @@ impl DataStore {
    )
})?;
// the manifest does not exist anymore, we do not need to keep the lock
if let Ok(path) = self.manifest_lock_path(backup_dir) {
// ignore errors
let _ = std::fs::remove_file(path);
}
Ok(())
}
@ -379,7 +386,7 @@ impl DataStore {
use walkdir::WalkDir;

-let walker = WalkDir::new(&base).same_file_system(true).into_iter();
+let walker = WalkDir::new(&base).into_iter();

// make sure we skip .chunks (and other hidden files to keep it simple)
fn is_hidden(entry: &walkdir::DirEntry) -> bool {
@ -446,6 +453,17 @@ impl DataStore {
        file_name,
        err,
    );
// touch any corresponding .bad files to keep them around; they are removed
// automatically once the chunk has been rewritten correctly, or once no index
// file references the chunk anymore (in which case this loop is never reached)
for i in 0..=9 {
let bad_ext = format!("{}.bad", i);
let mut bad_path = PathBuf::new();
bad_path.push(self.chunk_path(digest).0);
bad_path.set_extension(bad_ext);
self.chunk_store.cond_touch_path(&bad_path, false)?;
}
    }
}

Ok(())
@ -463,30 +481,40 @@ impl DataStore {
let mut done = 0;
let mut last_percentage: usize = 0;
+let mut strange_paths_count: u64 = 0;

for img in image_list {

    worker.check_abort()?;
    tools::fail_on_shutdown()?;

-   let path = self.chunk_store.relative_path(&img);
-   match std::fs::File::open(&path) {
+   if let Some(backup_dir_path) = img.parent() {
+       let backup_dir_path = backup_dir_path.strip_prefix(self.base_path())?;
+       if let Some(backup_dir_str) = backup_dir_path.to_str() {
+           if BackupDir::from_str(backup_dir_str).is_err() {
+               strange_paths_count += 1;
+           }
+       }
+   }
+
+   match std::fs::File::open(&img) {
        Ok(file) => {
            if let Ok(archive_type) = archive_type(&img) {
                if archive_type == ArchiveType::FixedIndex {
                    let index = FixedIndexReader::new(file).map_err(|e| {
-                       format_err!("can't read index '{}' - {}", path.to_string_lossy(), e)
+                       format_err!("can't read index '{}' - {}", img.to_string_lossy(), e)
                    })?;
                    self.index_mark_used_chunks(index, &img, status, worker)?;
                } else if archive_type == ArchiveType::DynamicIndex {
                    let index = DynamicIndexReader::new(file).map_err(|e| {
-                       format_err!("can't read index '{}' - {}", path.to_string_lossy(), e)
+                       format_err!("can't read index '{}' - {}", img.to_string_lossy(), e)
                    })?;
                    self.index_mark_used_chunks(index, &img, status, worker)?;
                }
            }
        }
        Err(err) if err.kind() == io::ErrorKind::NotFound => (), // ignore vanished files
-       Err(err) => bail!("can't open index {} - {}", path.to_string_lossy(), err),
+       Err(err) => bail!("can't open index {} - {}", img.to_string_lossy(), err),
    }

    done += 1;
@ -494,7 +522,7 @@ impl DataStore {
if percentage > last_percentage {
    crate::task_log!(
        worker,
-       "percentage done: phase1 {}% ({} of {} index files)",
+       "marked {}% ({} of {} index files)",
        percentage,
        done,
        image_count,
@ -503,6 +531,15 @@ impl DataStore {
    }
}
if strange_paths_count > 0 {
crate::task_log!(
worker,
"found (and marked) {} index files outside of expected directory scheme",
strange_paths_count,
);
}
Ok(())
}
@ -667,13 +704,32 @@ impl DataStore {
    ))
}
/// Returns the filename to lock a manifest
///
/// Also creates the basedir. The lockfile is located in
/// '/run/proxmox-backup/locks/{datastore}/{type}/{id}/{timestamp}.index.json.lck'
fn manifest_lock_path(
&self,
backup_dir: &BackupDir,
) -> Result<String, Error> {
let mut path = format!(
"/run/proxmox-backup/locks/{}/{}/{}",
self.name(),
backup_dir.group().backup_type(),
backup_dir.group().backup_id(),
);
std::fs::create_dir_all(&path)?;
use std::fmt::Write;
write!(path, "/{}{}", backup_dir.backup_time_string(), &MANIFEST_LOCK_NAME)?;
Ok(path)
}
fn lock_manifest(
    &self,
    backup_dir: &BackupDir,
) -> Result<File, Error> {
-   let mut path = self.base_path();
-   path.push(backup_dir.relative_path());
-   path.push(&MANIFEST_LOCK_NAME);
+   let path = self.manifest_lock_path(backup_dir)?;

    // update_manifest should never take a long time, so if someone else has
    // the lock we can simply block a bit and should get it soon
@ -728,3 +784,4 @@ impl DataStore {
        self.verify_new
    }
}
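To illustrate the lock path scheme from manifest_lock_path, with an invented datastore "store1" and snapshot vm/100/2020-12-11T10:00:00Z (MANIFEST_LOCK_NAME is ".index.json.lck"):

let path = format!(
    "/run/proxmox-backup/locks/{}/{}/{}/{}{}",
    "store1", "vm", "100", "2020-12-11T10:00:00Z", ".index.json.lck",
);
assert_eq!(
    path,
    "/run/proxmox-backup/locks/store1/vm/100/2020-12-11T10:00:00Z.index.json.lck",
);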


@ -219,7 +219,6 @@ impl IndexFile for DynamicIndexReader {
    (csum, chunk_end)
}

-#[allow(clippy::cast_ptr_alignment)]
fn chunk_info(&self, pos: usize) -> Option<ChunkReadInfo> {
    if pos >= self.index.len() {
        return None;


@ -3,7 +3,7 @@ use std::io::{Seek, SeekFrom};
use super::chunk_stat::*;
use super::chunk_store::*;
-use super::{IndexFile, ChunkReadInfo};
+use super::{ChunkReadInfo, IndexFile};
use crate::tools;

use std::fs::File;
@ -69,8 +69,7 @@ impl FixedIndexReader {
let header_size = std::mem::size_of::<FixedIndexHeader>();

-let rawfd = file.as_raw_fd();
-let stat = match nix::sys::stat::fstat(rawfd) {
+let stat = match nix::sys::stat::fstat(file.as_raw_fd()) {
    Ok(stat) => stat,
    Err(err) => bail!("fstat failed - {}", err),
};
@ -94,7 +93,6 @@ impl FixedIndexReader {
let index_length = ((size + chunk_size - 1) / chunk_size) as usize;
let index_size = index_length * 32;
let expected_index_size = (stat.st_size as usize) - header_size;

if index_size != expected_index_size {
    bail!(
@ -150,7 +148,7 @@ impl FixedIndexReader {
println!("ChunkSize: {}", self.chunk_size); println!("ChunkSize: {}", self.chunk_size);
let mut ctime_str = self.ctime.to_string(); let mut ctime_str = self.ctime.to_string();
if let Ok(s) = proxmox::tools::time::strftime_local("%c",self.ctime) { if let Ok(s) = proxmox::tools::time::strftime_local("%c", self.ctime) {
ctime_str = s; ctime_str = s;
} }
@ -215,7 +213,7 @@ impl IndexFile for FixedIndexReader {
    Some((
        (offset / self.chunk_size as u64) as usize,
-       offset & (self.chunk_size - 1) as u64 // fast modulo, valid for 2^x chunk_size
+       offset & (self.chunk_size - 1) as u64, // fast modulo, valid for 2^x chunk_size
    ))
}
}


@ -2,9 +2,34 @@ use anyhow::{bail, format_err, Context, Error};
use serde::{Deserialize, Serialize};

+use crate::backup::{CryptConfig, Fingerprint};
+
+use proxmox::api::api;
use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
use proxmox::try_block;
#[api(default: "scrypt")]
#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
#[serde(rename_all = "lowercase")]
/// Key derivation function for password protected encryption keys.
pub enum Kdf {
/// Do not encrypt the key.
None,
/// Encrypt the key with a password using SCrypt.
Scrypt,
/// Encrypt the key with a password using PBKDF2.
PBKDF2,
}
impl Default for Kdf {
#[inline]
fn default() -> Self {
Kdf::Scrypt
}
}
#[derive(Deserialize, Serialize, Debug)]
pub enum KeyDerivationConfig {
    Scrypt {
@ -66,6 +91,9 @@ pub struct KeyConfig {
    pub modified: i64,
    #[serde(with = "proxmox::tools::serde::bytes_as_base64")]
    pub data: Vec<u8>,
#[serde(skip_serializing_if = "Option::is_none")]
#[serde(default)]
pub fingerprint: Option<Fingerprint>,
}

pub fn store_key_config(
@ -103,15 +131,25 @@ pub fn store_key_config(
pub fn encrypt_key_with_passphrase(
    raw_key: &[u8],
    passphrase: &[u8],
+   kdf: Kdf,
) -> Result<KeyConfig, Error> {

    let salt = proxmox::sys::linux::random_data(32)?;

-   let kdf = KeyDerivationConfig::Scrypt {
-       n: 65536,
-       r: 8,
-       p: 1,
-       salt,
-   };
+   let kdf = match kdf {
+       Kdf::Scrypt => KeyDerivationConfig::Scrypt {
+           n: 65536,
+           r: 8,
+           p: 1,
+           salt,
+       },
+       Kdf::PBKDF2 => KeyDerivationConfig::PBKDF2 {
+           iter: 65535,
+           salt,
+       },
+       Kdf::None => {
+           bail!("No key derivation function specified");
+       }
+   };

    let derived_key = kdf.derive_key(passphrase)?;
@ -142,28 +180,22 @@ pub fn encrypt_key_with_passphrase(
        created,
        modified: created,
        data: enc_data,
fingerprint: None,
    })
}
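A sketch of the new kdf parameter in use (key and passphrase are dummies):

let raw_key = [0u8; 32]; // dummy key material
let key_config = encrypt_key_with_passphrase(&raw_key, b"hunter2", Kdf::Scrypt)?;
// Kdf::None is rejected here: an unencrypted key needs no passphrase
assert!(encrypt_key_with_passphrase(&raw_key, b"hunter2", Kdf::None).is_err());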
pub fn load_and_decrypt_key(
    path: &std::path::Path,
    passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
-) -> Result<([u8;32], i64), Error> {
-   do_load_and_decrypt_key(path, passphrase)
-       .with_context(|| format!("failed to load decryption key from {:?}", path))
-}
-
-fn do_load_and_decrypt_key(
-   path: &std::path::Path,
-   passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
-) -> Result<([u8;32], i64), Error> {
+) -> Result<([u8;32], i64, Fingerprint), Error> {
    decrypt_key(&file_get_contents(&path)?, passphrase)
+       .with_context(|| format!("failed to load decryption key from {:?}", path))
}
pub fn decrypt_key(
    mut keydata: &[u8],
    passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
-) -> Result<([u8;32], i64), Error> {
+) -> Result<([u8;32], i64, Fingerprint), Error> {
    let key_config: KeyConfig = serde_json::from_reader(&mut keydata)?;

    let raw_data = key_config.data;
@ -203,5 +235,13 @@ pub fn decrypt_key(
    let mut result = [0u8; 32];
    result.copy_from_slice(&key);

-   Ok((result, created))
+   let fingerprint = match key_config.fingerprint {
+       Some(fingerprint) => fingerprint,
+       None => {
+           let crypt_config = CryptConfig::new(result.clone())?;
+           crypt_config.fingerprint()
+       },
+   };
+
+   Ok((result, created, fingerprint))
}
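A sketch of the widened return value (path and passphrase invented); old key files without a stored fingerprint get one computed on the fly, as shown above:

let keydata = std::fs::read("/path/to/encryption-key.json")?; // hypothetical path
let passphrase: &dyn Fn() -> Result<Vec<u8>, Error> = &|| Ok(b"hunter2".to_vec());
let (_key, created, fingerprint) = decrypt_key(&keydata, passphrase)?;
println!("key created {}, fingerprint {}", created, fingerprint);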


@ -5,7 +5,7 @@ use std::path::Path;
use serde_json::{json, Value};
use ::serde::{Deserialize, Serialize};

-use crate::backup::{BackupDir, CryptMode, CryptConfig};
+use crate::backup::{BackupDir, CryptMode, CryptConfig, Fingerprint};

pub const MANIFEST_BLOB_NAME: &str = "index.json.blob";
pub const MANIFEST_LOCK_NAME: &str = ".index.json.lck";
@ -223,12 +223,48 @@ impl BackupManifest {
if let Some(crypt_config) = crypt_config {
    let sig = self.signature(crypt_config)?;
    manifest["signature"] = proxmox::tools::digest_to_hex(&sig).into();
let fingerprint = &crypt_config.fingerprint();
manifest["unprotected"]["key-fingerprint"] = serde_json::to_value(fingerprint)?;
}

let manifest = serde_json::to_string_pretty(&manifest).unwrap().into();
Ok(manifest)
}
pub fn fingerprint(&self) -> Result<Option<Fingerprint>, Error> {
match &self.unprotected["key-fingerprint"] {
Value::Null => Ok(None),
value => Ok(Some(serde_json::from_value(value.clone())?))
}
}
/// Checks if a BackupManifest and a CryptConfig share a valid fingerprint combination.
///
/// An unsigned manifest is valid with any or no CryptConfig.
/// A signed manifest is only valid with a matching CryptConfig.
pub fn check_fingerprint(&self, crypt_config: Option<&CryptConfig>) -> Result<(), Error> {
if let Some(fingerprint) = self.fingerprint()? {
match crypt_config {
None => bail!(
"missing key - manifest was created with key {}",
fingerprint,
),
Some(crypt_config) => {
let config_fp = crypt_config.fingerprint();
if config_fp != fingerprint {
bail!(
"wrong key - manifest's key {} does not match provided key {}",
fingerprint,
config_fp
);
}
}
}
};
Ok(())
}
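In sketch form, the three outcomes (manifest and crypt_config assumed in scope):

manifest.check_fingerprint(None)?;                // Ok for unsigned manifests,
                                                  // "missing key" error otherwise
manifest.check_fingerprint(Some(&crypt_config))?; // Ok only while the stored and
                                                  // provided fingerprints match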
/// Try to read the manifest. This verifies the signature if there is a crypt_config.
pub fn from_data(data: &[u8], crypt_config: Option<&CryptConfig>) -> Result<BackupManifest, Error> {
    let json: Value = serde_json::from_slice(data)?;
@ -237,6 +273,19 @@ impl BackupManifest {
if let Some(ref crypt_config) = crypt_config {
    if let Some(signature) = signature {
        let expected_signature = proxmox::tools::digest_to_hex(&Self::json_signature(&json, crypt_config)?);
let fingerprint = &json["unprotected"]["key-fingerprint"];
if fingerprint != &Value::Null {
let fingerprint = serde_json::from_value(fingerprint.clone())?;
let config_fp = crypt_config.fingerprint();
if config_fp != fingerprint {
bail!(
"wrong key - unable to verify signature since manifest's key {} does not match provided key {}",
fingerprint,
config_fp
);
}
}
        if signature != expected_signature {
            bail!("wrong signature in manifest");
        }


@ -0,0 +1,64 @@
#[derive(Debug, Default)]
/// Tracker for progress of operations iterating over `Datastore` contents.
pub struct StoreProgress {
/// Completed groups
pub done_groups: u64,
/// Total groups
pub total_groups: u64,
/// Completed snapshots within current group
pub done_snapshots: u64,
/// Total snapshots in current group
pub group_snapshots: u64,
}
impl StoreProgress {
pub fn new(total_groups: u64) -> Self {
StoreProgress {
total_groups,
.. Default::default()
}
}
/// Calculates an interpolated relative progress based on current counters.
pub fn percentage(&self) -> f64 {
let per_groups = (self.done_groups as f64) / (self.total_groups as f64);
if self.group_snapshots == 0 {
per_groups
} else {
let per_snapshots = (self.done_snapshots as f64) / (self.group_snapshots as f64);
per_groups + (1.0 / self.total_groups as f64) * per_snapshots
}
}
}
impl std::fmt::Display for StoreProgress {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
if self.group_snapshots == 0 {
write!(
f,
"{:.2}% ({} of {} groups)",
self.percentage() * 100.0,
self.done_groups,
self.total_groups,
)
} else if self.total_groups == 1 {
write!(
f,
"{:.2}% ({} of {} snapshots)",
self.percentage() * 100.0,
self.done_snapshots,
self.group_snapshots,
)
} else {
write!(
f,
"{:.2}% ({} of {} groups, {} of {} group snapshots)",
self.percentage() * 100.0,
self.done_groups,
self.total_groups,
self.done_snapshots,
self.group_snapshots,
)
}
}
}
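A worked example of the interpolation (values invented): with 3 groups total and the second group half done, the group term contributes 1/3 and the snapshot term (1/3) * (2/4), i.e. exactly one half.

let mut progress = StoreProgress::new(3);
progress.done_groups = 1;
progress.group_snapshots = 4;
progress.done_snapshots = 2;
assert!((progress.percentage() - 0.5).abs() < 1e-9);
println!("{}", progress); // "50.00% (1 of 3 groups, 2 of 4 group snapshots)"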


@ -10,6 +10,7 @@ use crate::{
    api2::types::*,
    backup::{
        DataStore,
+       StoreProgress,
        DataBlob,
        BackupGroup,
        BackupDir,
@ -425,11 +426,11 @@ pub fn verify_backup_group(
    group: &BackupGroup,
    verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
    corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
-   progress: Option<(usize, usize)>, // (done, snapshot_count)
+   progress: &mut StoreProgress,
    worker: Arc<dyn TaskState + Send + Sync>,
    upid: &UPID,
    filter: Option<&dyn Fn(&BackupManifest) -> bool>,
-) -> Result<(usize, Vec<String>), Error> {
+) -> Result<Vec<String>, Error> {

    let mut errors = Vec::new();
    let mut list = match group.list_backups(&datastore.base_path()) {
@ -442,19 +443,17 @@ pub fn verify_backup_group(
            group,
            err,
        );
-       return Ok((0, errors));
+       return Ok(errors);
    }
};

-task_log!(worker, "verify group {}:{}", datastore.name(), group);
-
-let (done, snapshot_count) = progress.unwrap_or((0, list.len()));
+let snapshot_count = list.len();
+task_log!(worker, "verify group {}:{} ({} snapshots)", datastore.name(), group, snapshot_count);
+progress.group_snapshots = snapshot_count as u64;

-let mut count = 0;
BackupInfo::sort_list(&mut list, false); // newest first
-for info in list {
-   count += 1;
+for (pos, info) in list.into_iter().enumerate() {
    if !verify_backup_dir(
        datastore.clone(),
        &info.backup_dir,
@ -466,20 +465,15 @@ pub fn verify_backup_group(
    )? {
        errors.push(info.backup_dir.to_string());
    }
-   if snapshot_count != 0 {
-       let pos = done + count;
-       let percentage = ((pos as f64) * 100.0)/(snapshot_count as f64);
-       task_log!(
-           worker,
-           "percentage done: {:.2}% ({} of {} snapshots)",
-           percentage,
-           pos,
-           snapshot_count,
-       );
-   }
+   progress.done_snapshots = pos as u64 + 1;
+   task_log!(
+       worker,
+       "percentage done: {}",
+       progress
+   );
}

-Ok((count, errors))
+Ok(errors)
}
/// Verify all (owned) backups inside a datastore
@ -533,7 +527,7 @@ pub fn verify_all_backups(
    }
};

-let mut list = match BackupGroup::list_groups(&datastore.base_path()) {
+let mut list = match BackupInfo::list_backup_groups(&datastore.base_path()) {
    Ok(list) => list
        .into_iter()
        .filter(|group| !(group.backup_type() == "host" && group.backup_id() == "benchmark"))
@ -551,34 +545,33 @@ pub fn verify_all_backups(
list.sort_unstable();

-let mut snapshot_count = 0;
-for group in list.iter() {
-    snapshot_count += group.list_backups(&datastore.base_path())?.len();
-}
-
// start with 16384 chunks (up to 65GB)
let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16)));

// start with 64 chunks since we assume there are few corrupt ones
let corrupt_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64)));

-task_log!(worker, "found {} snapshots", snapshot_count);
+let group_count = list.len();
+task_log!(worker, "found {} groups", group_count);

-let mut done = 0;
-for group in list {
-    let (count, mut group_errors) = verify_backup_group(
+let mut progress = StoreProgress::new(group_count as u64);
+
+for (pos, group) in list.into_iter().enumerate() {
+    progress.done_groups = pos as u64;
+    progress.done_snapshots = 0;
+    progress.group_snapshots = 0;
+
+    let mut group_errors = verify_backup_group(
        datastore.clone(),
        &group,
        verified_chunks.clone(),
        corrupt_chunks.clone(),
-       Some((done, snapshot_count)),
+       &mut progress,
        worker.clone(),
        upid,
        filter,
    )?;
    errors.append(&mut group_errors);
-
-   done += count;
}

Ok(errors)


@ -38,6 +38,7 @@ async fn run() -> Result<(), Error> {
proxmox_backup::rrd::create_rrdb_dir()?;
proxmox_backup::server::jobstate::create_jobstate_dir()?;
+proxmox_backup::tape::create_tape_status_dir()?;

if let Err(err) = generate_auth_key() {
    bail!("unable to generate auth key - {}", err);


@ -53,7 +53,6 @@ use proxmox_backup::backup::{
    ChunkStream,
    CryptConfig,
    CryptMode,
-   DataBlob,
    DynamicIndexReader,
    FixedChunkStream,
    FixedIndexReader,
@ -456,112 +455,6 @@ async fn change_backup_owner(group: String, mut param: Value) -> Result<(), Erro
    Ok(())
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
group: {
type: String,
description: "Backup group.",
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// List backup snapshots.
async fn list_snapshots(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let output_format = get_output_format(&param);
let client = connect(&repo)?;
let group: Option<BackupGroup> = if let Some(path) = param["group"].as_str() {
Some(path.parse()?)
} else {
None
};
let mut data = api_datastore_list_snapshots(&client, repo.store(), group).await?;
record_repository(&repo);
let render_snapshot_path = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
Ok(snapshot.relative_path().to_str().unwrap().to_owned())
};
let render_files = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
let mut filenames = Vec::new();
for file in &item.files {
filenames.push(file.filename.to_string());
}
Ok(tools::format::render_backup_file_list(&filenames[..]))
};
let options = default_table_format_options()
.sortby("backup-type", false)
.sortby("backup-id", false)
.sortby("backup-time", false)
.column(ColumnConfig::new("backup-id").renderer(render_snapshot_path).header("snapshot"))
.column(ColumnConfig::new("size").renderer(tools::format::render_bytes_human_readable))
.column(ColumnConfig::new("files").renderer(render_files))
;
let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOTS;
format_and_print_result_full(&mut data, info, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
}
}
)]
/// Forget (remove) backup snapshots.
async fn forget_snapshots(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let mut client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());
let result = client.delete(&path, Some(json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
}))).await?;
record_repository(&repo);
Ok(result)
}
#[api(
    input: {
        properties: {
@ -655,58 +548,6 @@ async fn api_version(param: Value) -> Result<(), Error> {
    Ok(())
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// List snapshot files.
async fn list_snapshot_files(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let output_format = get_output_format(&param);
let client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/files", repo.store());
let mut result = client.get(&path, Some(json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
}))).await?;
record_repository(&repo);
let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOT_FILES;
let mut data: Value = result["data"].take();
let options = default_table_format_options();
format_and_print_result_full(&mut data, info, &output_format, &options);
Ok(Value::Null)
}
#[api(
    input: {
        properties: {
@ -802,7 +643,10 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
let keydata = match (keyfile, key_fd) {
    (None, None) => None,
    (Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
-   (Some(keyfile), None) => Some(file_get_contents(keyfile)?),
+   (Some(keyfile), None) => {
+       eprintln!("Using encryption key file: {}", keyfile);
+       Some(file_get_contents(keyfile)?)
+   },
    (None, Some(fd)) => {
        let input = unsafe { std::fs::File::from_raw_fd(fd) };
        let mut data = Vec::new();
@ -810,6 +654,7 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
        .map_err(|err| {
            format_err!("error reading encryption key from fd {}: {}", fd, err)
        })?;
+       eprintln!("Using encryption key from file descriptor");
        Some(data)
    }
};
@ -817,7 +662,10 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
Ok(match (keydata, crypt_mode) {
    // no parameters:
    (None, None) => match key::read_optional_default_encryption_key()? {
-       Some(key) => (Some(key), CryptMode::Encrypt),
+       Some(key) => {
+           eprintln!("Encrypting with default encryption key!");
+           (Some(key), CryptMode::Encrypt)
+       },
        None => (None, CryptMode::None),
    },
@ -827,7 +675,10 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
    // just --crypt-mode other than none
    (None, Some(crypt_mode)) => match key::read_optional_default_encryption_key()? {
        None => bail!("--crypt-mode without --keyfile and no default key file available"),
-       Some(key) => (Some(key), crypt_mode),
+       Some(key) => {
+           eprintln!("Encrypting with default encryption key!");
+           (Some(key), crypt_mode)
+       },
    }

    // just --keyfile
@ -865,6 +716,11 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
description: "Path to file.", description: "Path to file.",
} }
}, },
"all-file-systems": {
type: Boolean,
description: "Include all mounted subdirectories.",
optional: true,
},
    keyfile: {
        schema: KEYFILE_SCHEMA,
        optional: true,
@ -1054,7 +910,8 @@ async fn create_backup(
let (crypt_config, rsa_encrypted_key) = match keydata {
    None => (None, None),
    Some(key) => {
-       let (key, created) = decrypt_key(&key, &key::get_encryption_key_password)?;
+       let (key, created, fingerprint) = decrypt_key(&key, &key::get_encryption_key_password)?;
+       println!("Encryption key fingerprint: {}", fingerprint);

        let crypt_config = CryptConfig::new(key)?;
@ -1063,6 +920,8 @@ async fn create_backup(
        let pem_data = file_get_contents(path)?;
        let rsa = openssl::rsa::Rsa::public_key_from_pem(&pem_data)?;
        let enc_key = crypt_config.generate_rsa_encoded_key(rsa, created)?;
+       println!("Master key '{:?}'", path);

        (Some(Arc::new(crypt_config)), Some(enc_key))
    }
    _ => (Some(Arc::new(crypt_config)), None),
@ -1081,8 +940,40 @@ async fn create_backup(
        false
    ).await?;

-   let previous_manifest = if let Ok(previous_manifest) = client.download_previous_manifest().await {
-       Some(Arc::new(previous_manifest))
+   let download_previous_manifest = match client.previous_backup_time().await {
+       Ok(Some(backup_time)) => {
+           println!(
+               "Downloading previous manifest ({})",
+               strftime_local("%c", backup_time)?
+           );
+           true
+       }
+       Ok(None) => {
+           println!("No previous manifest available.");
+           false
+       }
+       Err(_) => {
+           // Fallback for outdated server, TODO remove/bubble up with 2.0
+           true
+       }
+   };
+
+   let previous_manifest = if download_previous_manifest {
+       match client.download_previous_manifest().await {
+           Ok(previous_manifest) => {
+               match previous_manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref)) {
+                   Ok(()) => Some(Arc::new(previous_manifest)),
+                   Err(err) => {
+                       println!("Couldn't re-use previous manifest - {}", err);
+                       None
+                   }
+               }
+           }
+           Err(err) => {
+               println!("Couldn't download previous manifest - {}", err);
+               None
+           }
+       }
    } else {
        None
    };
@ -1365,7 +1256,8 @@ async fn restore(param: Value) -> Result<Value, Error> {
let crypt_config = match keydata {
    None => None,
    Some(key) => {
-       let (key, _) = decrypt_key(&key, &key::get_encryption_key_password)?;
+       let (key, _, fingerprint) = decrypt_key(&key, &key::get_encryption_key_password)?;
+       eprintln!("Encryption key fingerprint: '{}'", fingerprint);
        Some(Arc::new(CryptConfig::new(key)?))
    }
};
@ -1381,6 +1273,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
).await?;

let (manifest, backup_index_data) = client.download_manifest().await?;
+manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;

let (archive_name, archive_type) = parse_archive_type(archive_name);
@ -1477,81 +1370,6 @@ async fn restore(param: Value) -> Result<Value, Error> {
    Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Group/Snapshot path.",
},
logfile: {
type: String,
description: "The path to the log file you want to upload.",
},
keyfile: {
schema: KEYFILE_SCHEMA,
optional: true,
},
"keyfd": {
schema: KEYFD_SCHEMA,
optional: true,
},
"crypt-mode": {
type: CryptMode,
optional: true,
},
}
}
)]
/// Upload backup log file.
async fn upload_log(param: Value) -> Result<Value, Error> {
let logfile = tools::required_string_param(&param, "logfile")?;
let repo = extract_repository_from_value(&param)?;
let snapshot = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = snapshot.parse()?;
let mut client = connect(&repo)?;
let (keydata, crypt_mode) = keyfile_parameters(&param)?;
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _created) = decrypt_key(&key, &key::get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
};
let data = file_get_contents(logfile)?;
// fixme: howto sign log?
let blob = match crypt_mode {
CryptMode::None | CryptMode::SignOnly => DataBlob::encode(&data, None, true)?,
CryptMode::Encrypt => DataBlob::encode(&data, crypt_config.as_ref().map(Arc::as_ref), true)?,
};
let raw_data = blob.into_inner();
let path = format!("api2/json/admin/datastore/{}/upload-backup-log", repo.store());
let args = json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
});
let body = hyper::Body::from(raw_data);
client.upload("application/octet-stream", body, &path, Some(args)).await
}
const API_METHOD_PRUNE: ApiMethod = ApiMethod::new(
    &ApiHandler::Async(&prune),
    &ObjectSchema::new(
@ -1989,26 +1807,9 @@ fn main() {
.completion_cb("repository", complete_repository) .completion_cb("repository", complete_repository)
.completion_cb("keyfile", tools::complete_file_name); .completion_cb("keyfile", tools::complete_file_name);
let upload_log_cmd_def = CliCommand::new(&API_METHOD_UPLOAD_LOG)
.arg_param(&["snapshot", "logfile"])
.completion_cb("snapshot", complete_backup_snapshot)
.completion_cb("logfile", tools::complete_file_name)
.completion_cb("keyfile", tools::complete_file_name)
.completion_cb("repository", complete_repository);
let list_cmd_def = CliCommand::new(&API_METHOD_LIST_BACKUP_GROUPS)
    .completion_cb("repository", complete_repository);
let snapshots_cmd_def = CliCommand::new(&API_METHOD_LIST_SNAPSHOTS)
.arg_param(&["group"])
.completion_cb("group", complete_backup_group)
.completion_cb("repository", complete_repository);
let forget_cmd_def = CliCommand::new(&API_METHOD_FORGET_SNAPSHOTS)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot);
let garbage_collect_cmd_def = CliCommand::new(&API_METHOD_START_GARBAGE_COLLECTION)
    .completion_cb("repository", complete_repository);
@ -2019,11 +1820,6 @@ fn main() {
.completion_cb("archive-name", complete_archive_name) .completion_cb("archive-name", complete_archive_name)
.completion_cb("target", tools::complete_file_name); .completion_cb("target", tools::complete_file_name);
let files_cmd_def = CliCommand::new(&API_METHOD_LIST_SNAPSHOT_FILES)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot);
let prune_cmd_def = CliCommand::new(&API_METHOD_PRUNE)
    .arg_param(&["group"])
    .completion_cb("group", complete_backup_group)
@ -2049,16 +1845,13 @@ fn main() {
let cmd_def = CliCommandMap::new()
    .insert("backup", backup_cmd_def)
-   .insert("upload-log", upload_log_cmd_def)
-   .insert("forget", forget_cmd_def)
    .insert("garbage-collect", garbage_collect_cmd_def)
    .insert("list", list_cmd_def)
    .insert("login", login_cmd_def)
    .insert("logout", logout_cmd_def)
    .insert("prune", prune_cmd_def)
    .insert("restore", restore_cmd_def)
-   .insert("snapshots", snapshots_cmd_def)
-   .insert("files", files_cmd_def)
+   .insert("snapshot", snapshot_mgtm_cli())
    .insert("status", status_cmd_def)
    .insert("key", key::cli())
    .insert("mount", mount_cmd_def())
@ -2068,7 +1861,13 @@ fn main() {
.insert("task", task_mgmt_cli()) .insert("task", task_mgmt_cli())
.insert("version", version_cmd_def) .insert("version", version_cmd_def)
.insert("benchmark", benchmark_cmd_def) .insert("benchmark", benchmark_cmd_def)
.insert("change-owner", change_owner_cmd_def); .insert("change-owner", change_owner_cmd_def)
.alias(&["files"], &["snapshot", "files"])
.alias(&["forget"], &["snapshot", "forget"])
.alias(&["upload-log"], &["snapshot", "upload-log"])
.alias(&["snapshots"], &["snapshot", "list"])
;
let rpcenv = CliEnvironment::new(); let rpcenv = CliEnvironment::new();
run_cli_command(cmd_def, rpcenv, Some(|future| { run_cli_command(cmd_def, rpcenv, Some(|future| {


@@ -10,8 +10,6 @@ use proxmox_backup::tools;
use proxmox_backup::config;
use proxmox_backup::api2::{self, types::* };
use proxmox_backup::client::*;
use proxmox_backup::tools::ticket::Ticket;
use proxmox_backup::auth_helpers::*;
mod proxmox_backup_manager;
use proxmox_backup_manager::*;
@@ -51,27 +49,6 @@ pub async fn wait_for_local_worker(upid_str: &str) -> Result<(), Error> {
Ok(())
}
fn connect() -> Result<HttpClient, Error> {
let uid = nix::unistd::Uid::current();
let mut options = HttpClientOptions::new()
.prefix(Some("proxmox-backup".to_string()))
.verify_cert(false); // not required for connection to localhost
let client = if uid.is_root() {
let ticket = Ticket::new("PBS", Userid::root_userid())?
.sign(private_auth_key(), None)?;
options = options.password(Some(ticket));
HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
} else {
options = options.ticket_cache(true).interactive(true);
HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
};
Ok(client)
}
#[api(
input: {
properties: {
@@ -92,7 +69,7 @@ async fn start_garbage_collection(param: Value) -> Result<Value, Error> {
let store = tools::required_string_param(&param, "store")?;
let mut client = connect_to_localhost()?;
let path = format!("api2/json/admin/datastore/{}/gc", store);
@@ -123,7 +100,7 @@ async fn garbage_collection_status(param: Value) -> Result<Value, Error> {
let store = tools::required_string_param(&param, "store")?;
let client = connect_to_localhost()?;
let path = format!("api2/json/admin/datastore/{}/gc", store);
@@ -183,7 +160,7 @@ async fn task_list(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let client = connect_to_localhost()?;
let limit = param["limit"].as_u64().unwrap_or(50) as usize;
let running = !param["all"].as_bool().unwrap_or(false);
@@ -222,7 +199,7 @@ async fn task_log(param: Value) -> Result<Value, Error> {
let upid = tools::required_string_param(&param, "upid")?;
let client = connect_to_localhost()?;
display_task_log(client, upid, true).await?;
@@ -243,9 +220,9 @@ async fn task_stop(param: Value) -> Result<Value, Error> {
let upid_str = tools::required_string_param(&param, "upid")?;
let mut client = connect_to_localhost()?;
let path = format!("api2/json/nodes/localhost/tasks/{}", tools::percent_encode_component(upid_str));
let _ = client.delete(&path, None).await?;
Ok(Value::Null)
@@ -302,7 +279,7 @@ async fn pull_datastore(
let output_format = get_output_format(&param);
let mut client = connect_to_localhost()?;
let mut args = json!({
"store": local_store,
@@ -342,7 +319,7 @@ async fn verify(
let output_format = get_output_format(&param);
let mut client = connect_to_localhost()?;
let args = json!({});
@@ -363,6 +340,43 @@ async fn report() -> Result<Value, Error> {
Ok(Value::Null)
}
#[api(
input: {
properties: {
verbose: {
type: Boolean,
optional: true,
default: false,
description: "Output verbose package information. It is ignored if output-format is specified.",
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
}
}
}
)]
/// List package versions for important Proxmox Backup Server packages.
async fn get_versions(verbose: bool, param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let packages = crate::api2::node::apt::get_versions()?;
let mut packages = json!(if verbose { &packages[..] } else { &packages[1..2] });
let options = default_table_format_options()
.disable_sort()
.noborder(true) // just not helpful for version info, which gets copy-pasted often
.column(ColumnConfig::new("Package"))
.column(ColumnConfig::new("Version"))
.column(ColumnConfig::new("ExtraInfo").header("Extra Info"))
;
let schema = &crate::api2::node::apt::API_RETURN_SCHEMA_GET_VERSIONS;
format_and_print_result_full(&mut packages, schema, &output_format, &options);
Ok(Value::Null)
}
fn main() {
proxmox_backup::tools::setup_safe_path_env();
@@ -396,6 +410,9 @@ fn main() {
)
.insert("report",
CliCommand::new(&API_METHOD_REPORT)
)
.insert("versions",
CliCommand::new(&API_METHOD_GET_VERSIONS)
);

src/bin/proxmox-tape.rs (new file, 471 lines)

@@ -0,0 +1,471 @@
use anyhow::{format_err, Error};
use serde_json::{json, Value};
use proxmox::{
api::{
api,
cli::*,
ApiHandler,
RpcEnvironment,
section_config::SectionConfigData,
},
};
use proxmox_backup::{
tools::format::render_epoch,
server::{
UPID,
worker_is_active_local,
},
api2::{
self,
types::{
DRIVE_ID_SCHEMA,
MEDIA_LABEL_SCHEMA,
MEDIA_POOL_NAME_SCHEMA,
},
},
config::{
self,
drive::complete_drive_name,
media_pool::complete_pool_name,
},
tape::{
complete_media_changer_id,
},
};
mod proxmox_tape;
use proxmox_tape::*;
// Note: local workers should print logs to stdout, so there is no need
// to fetch/display logs. We just wait for the worker to finish.
pub async fn wait_for_local_worker(upid_str: &str) -> Result<(), Error> {
let upid: UPID = upid_str.parse()?;
let sleep_duration = core::time::Duration::new(0, 100_000_000);
loop {
if worker_is_active_local(&upid) {
tokio::time::delay_for(sleep_duration).await;
} else {
break;
}
}
Ok(())
}
fn lookup_drive_name(
param: &Value,
config: &SectionConfigData,
) -> Result<String, Error> {
let drive = param["drive"]
.as_str()
.map(String::from)
.or_else(|| std::env::var("PROXMOX_TAPE_DRIVE").ok())
.or_else(|| {
let mut drive_names = Vec::new();
for (name, (section_type, _)) in config.sections.iter() {
if !(section_type == "linux" || section_type == "virtual") { continue; }
drive_names.push(name);
}
if drive_names.len() == 1 {
Some(drive_names[0].to_owned())
} else {
None
}
})
.ok_or_else(|| format_err!("unable to get (default) drive name"))?;
Ok(drive)
}
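// The resolution order above: an explicit "drive" parameter wins, then the
// PROXMOX_TAPE_DRIVE environment variable, then the single configured
// linux/virtual drive if there is exactly one. A hedged sketch (drive name
// hypothetical):
fn resolve_drive_example(config: &SectionConfigData) -> Result<String, Error> {
    std::env::set_var("PROXMOX_TAPE_DRIVE", "drive0");
    // no "drive" key in the parameters, so the environment variable decides
    lookup_drive_name(&json!({}), config) // -> Ok("drive0")
}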
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
optional: true,
},
fast: {
description: "Use fast erase.",
type: bool,
optional: true,
default: true,
},
},
},
)]
/// Erase media
async fn erase_media(
mut param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
param["drive"] = lookup_drive_name(&param, &config)?.into();
let info = &api2::tape::drive::API_METHOD_ERASE_MEDIA;
let result = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
wait_for_local_worker(result.as_str().unwrap()).await?;
Ok(())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
optional: true,
},
},
},
)]
/// Rewind tape
async fn rewind(
mut param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
param["drive"] = lookup_drive_name(&param, &config)?.into();
let info = &api2::tape::drive::API_METHOD_REWIND;
let result = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
wait_for_local_worker(result.as_str().unwrap()).await?;
Ok(())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
optional: true,
},
},
},
)]
/// Eject/Unload drive media
fn eject_media(
mut param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
param["drive"] = lookup_drive_name(&param, &config)?.into();
let info = &api2::tape::drive::API_METHOD_EJECT_MEDIA;
match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
Ok(())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
optional: true,
},
"changer-id": {
schema: MEDIA_LABEL_SCHEMA,
},
},
},
)]
/// Load media
fn load_media(
mut param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
param["drive"] = lookup_drive_name(&param, &config)?.into();
let info = &api2::tape::drive::API_METHOD_LOAD_MEDIA;
match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
Ok(())
}
#[api(
input: {
properties: {
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
optional: true,
},
drive: {
schema: DRIVE_ID_SCHEMA,
optional: true,
},
"changer-id": {
schema: MEDIA_LABEL_SCHEMA,
},
},
},
)]
/// Label media
async fn label_media(
mut param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
param["drive"] = lookup_drive_name(&param, &config)?.into();
let info = &api2::tape::drive::API_METHOD_LABEL_MEDIA;
let result = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
wait_for_local_worker(result.as_str().unwrap()).await?;
Ok(())
}
#[api(
input: {
properties: {
drive: {
schema: DRIVE_ID_SCHEMA,
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
},
},
)]
/// Read media label
fn read_label(
mut param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
param["drive"] = lookup_drive_name(&param, &config)?.into();
let output_format = get_output_format(&param);
let info = &api2::tape::drive::API_METHOD_READ_LABEL;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("changer-id"))
.column(ColumnConfig::new("uuid"))
.column(ColumnConfig::new("ctime").renderer(render_epoch))
.column(ColumnConfig::new("pool"))
.column(ColumnConfig::new("media-set-uuid"))
.column(ColumnConfig::new("media-set-ctime").renderer(render_epoch))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
drive: {
schema: DRIVE_ID_SCHEMA,
optional: true,
},
"read-labels": {
description: "Load unknown tapes and try read labels",
type: bool,
optional: true,
},
"read-all-labels": {
description: "Load all tapes and try read labels (even if already inventoried)",
type: bool,
optional: true,
},
},
},
)]
/// List (and update) media labels (Changer Inventory)
async fn inventory(
read_labels: Option<bool>,
read_all_labels: Option<bool>,
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let (config, _digest) = config::drive::config()?;
let drive = lookup_drive_name(&param, &config)?;
let do_read = read_labels.unwrap_or(false) || read_all_labels.unwrap_or(false);
if do_read {
let mut param = json!({
"drive": &drive,
});
if let Some(true) = read_all_labels {
param["read-all-labels"] = true.into();
}
let info = &api2::tape::drive::API_METHOD_UPDATE_INVENTORY;
let result = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
wait_for_local_worker(result.as_str().unwrap()).await?;
}
let info = &api2::tape::drive::API_METHOD_INVENTORY;
let param = json!({ "drive": &drive });
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("changer-id"))
.column(ColumnConfig::new("uuid"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}
#[api(
input: {
properties: {
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
optional: true,
},
drive: {
schema: DRIVE_ID_SCHEMA,
optional: true,
},
},
},
)]
/// Label media with barcodes from changer device
async fn barcode_label_media(
mut param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
param["drive"] = lookup_drive_name(&param, &config)?.into();
let info = &api2::tape::drive::API_METHOD_BARCODE_LABEL_MEDIA;
let result = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
wait_for_local_worker(result.as_str().unwrap()).await?;
Ok(())
}
fn main() {
let cmd_def = CliCommandMap::new()
.insert(
"barcode-label",
CliCommand::new(&API_METHOD_BARCODE_LABEL_MEDIA)
.completion_cb("drive", complete_drive_name)
.completion_cb("pool", complete_pool_name)
)
.insert(
"rewind",
CliCommand::new(&API_METHOD_REWIND)
.completion_cb("drive", complete_drive_name)
)
.insert(
"erase",
CliCommand::new(&API_METHOD_ERASE_MEDIA)
.completion_cb("drive", complete_drive_name)
)
.insert(
"eject",
CliCommand::new(&API_METHOD_EJECT_MEDIA)
.completion_cb("drive", complete_drive_name)
)
.insert(
"inventory",
CliCommand::new(&API_METHOD_INVENTORY)
.completion_cb("drive", complete_drive_name)
)
.insert(
"read-label",
CliCommand::new(&API_METHOD_READ_LABEL)
.completion_cb("drive", complete_drive_name)
)
.insert(
"label",
CliCommand::new(&API_METHOD_LABEL_MEDIA)
.completion_cb("drive", complete_drive_name)
.completion_cb("pool", complete_pool_name)
)
.insert("changer", changer_commands())
.insert("drive", drive_commands())
.insert("pool", pool_commands())
.insert(
"load-media",
CliCommand::new(&API_METHOD_LOAD_MEDIA)
.arg_param(&["changer-id"])
.completion_cb("drive", complete_drive_name)
.completion_cb("changer-id", complete_media_changer_id)
)
;
let mut rpcenv = CliEnvironment::new();
rpcenv.set_auth_id(Some(String::from("root@pam")));
proxmox_backup::tools::runtime::main(run_async_cli_command(cmd_def, rpcenv));
}

View File

@@ -151,7 +151,7 @@ pub async fn benchmark(
let crypt_config = match keyfile {
None => None,
Some(path) => {
let (key, _, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}


@@ -73,7 +73,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _created, _fingerprint) = decrypt_key(&key, &get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
@@ -92,6 +92,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
).await?;
let (manifest, _) = client.download_manifest().await?;
manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
let index = client.download_dynamic_index(&manifest, CATALOG_NAME).await?;
@@ -170,7 +171,7 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _created, _fingerprint) = decrypt_key(&key, &get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
@@ -199,6 +200,7 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
.open("/tmp")?;
let (manifest, _) = client.download_manifest().await?;
manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
let index = client.download_dynamic_index(&manifest, &server_archive_name).await?;
let most_used = index.find_most_used_chunks(8);


@@ -4,14 +4,28 @@ use std::process::{Stdio, Command};
use anyhow::{bail, format_err, Error};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use proxmox::api::api;
use proxmox::api::cli::{
ColumnConfig,
CliCommand,
CliCommandMap,
format_and_print_result_full,
get_output_format,
OUTPUT_FORMAT,
};
use proxmox::sys::linux::tty;
use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
use proxmox_backup::backup::{
encrypt_key_with_passphrase,
load_and_decrypt_key,
store_key_config,
CryptConfig,
Kdf,
KeyConfig,
KeyDerivationConfig,
};
use proxmox_backup::tools;
@@ -71,27 +85,6 @@ pub fn get_encryption_key_password() -> Result<Vec<u8>, Error> {
bail!("no password input mechanism available"); bail!("no password input mechanism available");
} }
#[api(
default: "scrypt",
)]
#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
/// Key derivation function for password protected encryption keys.
pub enum Kdf {
/// Do not encrypt the key.
None,
/// Encrypt the key with a password using SCrypt.
Scrypt,
}
impl Default for Kdf {
#[inline]
fn default() -> Self {
Kdf::Scrypt
}
}
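// The enum is removed here because it moved into proxmox_backup::backup (see
// the updated import list above), where it presumably also gained the PBKDF2
// variant matched below. A hedged sketch of the relocated definition; the
// authoritative version lives in the backup module:
#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
pub enum Kdf {
    /// Do not encrypt the key.
    None,
    /// Encrypt the key with a password using SCrypt.
    Scrypt,
    /// Encrypt the key with a password using PBKDF2 (assumed new variant).
    PBKDF2,
}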
#[api(
input: {
properties: {
@@ -120,7 +113,10 @@ fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
let kdf = kdf.unwrap_or_default();
let mut key_array = [0u8; 32];
proxmox::sys::linux::fill_with_random_data(&mut key_array)?;
let crypt_config = CryptConfig::new(key_array.clone())?;
let key = key_array.to_vec();
match kdf {
Kdf::None => {
@@ -134,10 +130,11 @@ fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
created,
modified: created,
data: key,
fingerprint: Some(crypt_config.fingerprint()),
},
)?;
}
Kdf::Scrypt | Kdf::PBKDF2 => {
// always read passphrase from tty
if !tty::stdin_isatty() {
bail!("unable to read passphrase - no tty");
@@ -145,7 +142,8 @@ fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
let password = tty::read_and_verify_password("Encryption Key Password: ")?; let password = tty::read_and_verify_password("Encryption Key Password: ")?;
let key_config = encrypt_key_with_passphrase(&key, &password)?; let mut key_config = encrypt_key_with_passphrase(&key, &password, kdf)?;
key_config.fingerprint = Some(crypt_config.fingerprint());
store_key_config(&path, false, key_config)?; store_key_config(&path, false, key_config)?;
} }
@ -188,7 +186,7 @@ fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error
bail!("unable to change passphrase - no tty"); bail!("unable to change passphrase - no tty");
} }
let (key, created) = load_and_decrypt_key(&path, &get_encryption_key_password)?; let (key, created, fingerprint) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
match kdf { match kdf {
Kdf::None => { Kdf::None => {
@ -202,14 +200,16 @@ fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error
created, // keep original value
modified,
data: key.to_vec(),
fingerprint: Some(fingerprint),
},
)?;
}
Kdf::Scrypt | Kdf::PBKDF2 => {
let password = tty::read_and_verify_password("New Password: ")?;
let mut new_key_config = encrypt_key_with_passphrase(&key, &password, kdf)?;
new_key_config.created = created; // keep original value
new_key_config.fingerprint = Some(fingerprint);
store_key_config(&path, true, new_key_config)?; store_key_config(&path, true, new_key_config)?;
} }
@ -218,6 +218,91 @@ fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error
Ok(()) Ok(())
} }
#[api(
properties: {
kdf: {
type: Kdf,
},
},
)]
#[derive(Deserialize, Serialize)]
/// Encryption Key Information
struct KeyInfo {
/// Path to key
path: String,
kdf: Kdf,
/// Key creation time
pub created: i64,
/// Key modification time
pub modified: i64,
/// Key fingerprint
pub fingerprint: Option<String>,
}
#[api(
input: {
properties: {
path: {
description: "Key file. Without this the default key's metadata will be shown.",
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
},
},
)]
/// Print the encryption key's metadata.
fn show_key(
path: Option<String>,
param: Value,
) -> Result<(), Error> {
let path = match path {
Some(path) => PathBuf::from(path),
None => {
let path = find_default_encryption_key()?
.ok_or_else(|| {
format_err!("no encryption file provided and no default file found")
})?;
path
}
};
let config: KeyConfig = serde_json::from_slice(&file_get_contents(path.clone())?)?;
let output_format = get_output_format(&param);
let info = KeyInfo {
path: format!("{:?}", path),
kdf: match config.kdf {
Some(KeyDerivationConfig::PBKDF2 { .. }) => Kdf::PBKDF2,
Some(KeyDerivationConfig::Scrypt { .. }) => Kdf::Scrypt,
None => Kdf::None,
},
created: config.created,
modified: config.modified,
fingerprint: match config.fingerprint {
Some(ref fp) => Some(format!("{}", fp)),
None => None,
},
};
let options = proxmox::api::cli::default_table_format_options()
.column(ColumnConfig::new("path"))
.column(ColumnConfig::new("kdf"))
.column(ColumnConfig::new("created").renderer(tools::format::render_epoch))
.column(ColumnConfig::new("modified").renderer(tools::format::render_epoch))
.column(ColumnConfig::new("fingerprint"));
let schema = &KeyInfo::API_SCHEMA;
format_and_print_result_full(&mut serde_json::to_value(info)?, schema, &output_format, &options);
Ok(())
}
#[api(
input: {
properties: {
@@ -287,7 +372,6 @@ fn create_master_key() -> Result<(), Error> {
},
"output-format": {
type: PaperkeyFormat,
description: "Output format. Text or Html.",
optional: true,
},
},
@@ -313,13 +397,47 @@ fn paper_key(
};
let data = file_get_contents(&path)?;
let data = String::from_utf8(data)?;
let (data, is_private_key) = if data.starts_with("-----BEGIN ENCRYPTED PRIVATE KEY-----\n") {
let lines: Vec<String> = data
.lines()
.map(|s| s.trim_end())
.filter(|s| !s.is_empty())
.map(String::from)
.collect();
if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
bail!("unexpected key format");
}
if lines.len() < 20 {
bail!("unexpected key format");
}
(lines, true)
} else {
match serde_json::from_str::<KeyConfig>(&data) {
Ok(key_config) => {
let lines = serde_json::to_string_pretty(&key_config)?
.lines()
.map(String::from)
.collect();
(lines, false)
},
Err(err) => {
eprintln!("Couldn't parse '{:?}' as KeyConfig - {}", path, err);
bail!("Neither a PEM-formatted private key, nor a PBS key file.");
},
}
};
let format = output_format.unwrap_or(PaperkeyFormat::Html);
match format {
PaperkeyFormat::Html => paperkey_html(&data, subject, is_private_key),
PaperkeyFormat::Text => paperkey_text(&data, subject, is_private_key),
}
}
@@ -337,6 +455,10 @@ pub fn cli() -> CliCommandMap {
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
let key_show_cmd_def = CliCommand::new(&API_METHOD_SHOW_KEY)
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
let paper_key_cmd_def = CliCommand::new(&API_METHOD_PAPER_KEY)
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
@@ -346,10 +468,11 @@ pub fn cli() -> CliCommandMap {
.insert("create-master-key", key_create_master_key_cmd_def) .insert("create-master-key", key_create_master_key_cmd_def)
.insert("import-master-pubkey", key_import_master_pubkey_cmd_def) .insert("import-master-pubkey", key_import_master_pubkey_cmd_def)
.insert("change-passphrase", key_change_passphrase_cmd_def) .insert("change-passphrase", key_change_passphrase_cmd_def)
.insert("show", key_show_cmd_def)
.insert("paperkey", paper_key_cmd_def) .insert("paperkey", paper_key_cmd_def)
} }
fn paperkey_html(lines: &[String], subject: Option<String>, is_private: bool) -> Result<(), Error> {
let img_size_pt = 500;
@@ -378,21 +501,7 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
println!("<p>Subject: {}</p>", subject); println!("<p>Subject: {}</p>", subject);
} }
if data.starts_with("-----BEGIN ENCRYPTED PRIVATE KEY-----\n") { if is_private {
let lines: Vec<String> = data.lines()
.map(|s| s.trim_end())
.filter(|s| !s.is_empty())
.map(String::from)
.collect();
if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
bail!("unexpected key format");
}
if lines.len() < 20 {
bail!("unexpected key format");
}
const BLOCK_SIZE: usize = 20;
let blocks = (lines.len() + BLOCK_SIZE -1)/BLOCK_SIZE;
@@ -413,8 +522,7 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
println!("</p>"); println!("</p>");
let data = data.join("\n"); let qr_code = generate_qr_code("svg", data)?;
let qr_code = generate_qr_code("svg", data.as_bytes())?;
let qr_code = base64::encode_config(&qr_code, base64::STANDARD_NO_PAD); let qr_code = base64::encode_config(&qr_code, base64::STANDARD_NO_PAD);
println!("<center>"); println!("<center>");
@ -430,16 +538,13 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
return Ok(());
}
let key_config: KeyConfig = serde_json::from_str(&data)?;
let key_text = serde_json::to_string_pretty(&key_config)?;
println!("<div style=\"page-break-inside: avoid\">"); println!("<div style=\"page-break-inside: avoid\">");
println!("<p>"); println!("<p>");
println!("-----BEGIN PROXMOX BACKUP KEY-----"); println!("-----BEGIN PROXMOX BACKUP KEY-----");
for line in key_text.lines() { for line in lines {
println!("{}", line); println!("{}", line);
} }
@@ -447,7 +552,7 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
println!("</p>");
let qr_code = generate_qr_code("svg", lines)?;
let qr_code = base64::encode_config(&qr_code, base64::STANDARD_NO_PAD);
println!("<center>");
@@ -464,27 +569,13 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
Ok(())
}
fn paperkey_text(lines: &[String], subject: Option<String>, is_private: bool) -> Result<(), Error> {
if let Some(subject) = subject {
println!("Subject: {}\n", subject);
}
if is_private {
let lines: Vec<String> = data.lines()
.map(|s| s.trim_end())
.filter(|s| !s.is_empty())
.map(String::from)
.collect();
if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
bail!("unexpected key format");
}
if lines.len() < 20 {
bail!("unexpected key format");
}
const BLOCK_SIZE: usize = 5;
let blocks = (lines.len() + BLOCK_SIZE -1)/BLOCK_SIZE;
@@ -499,8 +590,7 @@ fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
for l in start..end {
println!("{:-2}: {}", l, lines[l]);
}
let qr_code = generate_qr_code("utf8i", data)?;
let qr_code = String::from_utf8(qr_code)
.map_err(|_| format_err!("Failed to read qr code (got non-utf8 data)"))?;
println!("{}", qr_code);
@@ -510,14 +600,13 @@ fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
return Ok(());
}
let key_config: KeyConfig = serde_json::from_str(&data)?;
let key_text = serde_json::to_string_pretty(&key_config)?;
println!("-----BEGIN PROXMOX BACKUP KEY-----"); println!("-----BEGIN PROXMOX BACKUP KEY-----");
println!("{}", key_text); for line in lines {
println!("{}", line);
}
println!("-----END PROXMOX BACKUP KEY-----"); println!("-----END PROXMOX BACKUP KEY-----");
let qr_code = generate_qr_code("utf8i", key_text.as_bytes())?; let qr_code = generate_qr_code("utf8i", &lines)?;
let qr_code = String::from_utf8(qr_code)
.map_err(|_| format_err!("Failed to read qr code (got non-utf8 data)"))?;
@@ -526,8 +615,7 @@ fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
Ok(())
}
fn generate_qr_code(output_type: &str, lines: &[String]) -> Result<Vec<u8>, Error> {
let mut child = Command::new("qrencode")
.args(&["-t", output_type, "-m0", "-s1", "-lm", "--output", "-"])
.stdin(Stdio::piped())
@@ -537,7 +625,8 @@ fn generate_qr_code(output_type: &str, data: &[u8]) -> Result<Vec<u8>, Error> {
{
let stdin = child.stdin.as_mut()
.ok_or_else(|| format_err!("Failed to open stdin"))?;
let data = lines.join("\n");
stdin.write_all(data.as_bytes())
.map_err(|_| format_err!("Failed to write to stdin"))?;
}


@@ -8,6 +8,8 @@ mod task;
pub use task::*;
mod catalog;
pub use catalog::*;
mod snapshot;
pub use snapshot::*;
pub mod key;


@@ -1,22 +1,21 @@
use std::path::PathBuf;
use std::sync::Arc;
use std::os::unix::io::RawFd;
use std::path::Path;
use std::ffi::OsStr;
use std::collections::HashMap;
use std::ffi::OsStr;
use std::hash::BuildHasher;
use std::os::unix::io::AsRawFd;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use anyhow::{bail, format_err, Error};
use futures::future::FutureExt;
use futures::select;
use futures::stream::{StreamExt, TryStreamExt};
use nix::unistd::{fork, ForkResult};
use serde_json::Value;
use tokio::signal::unix::{signal, SignalKind};
use nix::unistd::{fork, ForkResult, pipe};
use futures::select;
use futures::future::FutureExt;
use futures::stream::{StreamExt, TryStreamExt};
use proxmox::{sortable, identity};
use proxmox::api::{ApiHandler, ApiMethod, RpcEnvironment, schema::*, cli::*};
use proxmox::tools::fd::Fd;
use proxmox_backup::tools;
use proxmox_backup::backup::{
@@ -143,24 +142,24 @@ fn mount(
// Process should be daemonized.
// Make sure to fork before the async runtime is instantiated to avoid troubles.
let (pr, pw) = proxmox_backup::tools::pipe()?;
match unsafe { fork() } {
Ok(ForkResult::Parent { .. }) => {
drop(pw);
// Blocks the parent process until we are ready to go in the child
let _res = nix::unistd::read(pr.as_raw_fd(), &mut [0]).unwrap();
Ok(Value::Null)
}
Ok(ForkResult::Child) => {
drop(pr);
nix::unistd::setsid().unwrap();
proxmox_backup::tools::runtime::main(mount_do(param, Some(pw)))
}
Err(_) => bail!("failed to daemonize process"),
}
}
async fn mount_do(param: Value, pipe: Option<Fd>) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let archive_name = tools::required_string_param(&param, "archive-name")?;
let client = connect(&repo)?;
@@ -182,7 +181,9 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
let crypt_config = match keyfile {
None => None,
Some(path) => {
println!("Encryption key file: '{:?}'", path);
let (key, _, fingerprint) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
println!("Encryption key fingerprint: '{}'", fingerprint);
Some(Arc::new(CryptConfig::new(key)?))
}
};
@@ -212,6 +213,7 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
).await?;
let (manifest, _) = client.download_manifest().await?;
manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
let file_info = manifest.lookup_file_info(&server_archive_name)?;
@@ -232,8 +234,8 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
}
// Signal the parent process that we are done with the setup and it can
// terminate.
nix::unistd::write(pipe.as_raw_fd(), &[0u8])?;
let _: Fd = pipe;
}
Ok(())
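// The hunk above swaps raw RawFd handling for the proxmox Fd wrapper, so each
// pipe end is closed by drop instead of explicit nix::unistd::close calls; the
// final `let _: Fd = pipe;` just makes that drop point explicit. A condensed,
// hedged sketch of the handshake, assuming tools::pipe() returns an (Fd, Fd) pair:
fn daemonize_handshake() -> Result<(), Error> {
    let (pr, pw) = proxmox_backup::tools::pipe()?; // assumed: (read end, write end)
    match unsafe { fork() } {
        Ok(ForkResult::Parent { .. }) => {
            drop(pw); // parent keeps only the read end
            // blocks until the child writes its "ready" byte
            let _ = nix::unistd::read(pr.as_raw_fd(), &mut [0])?;
            Ok(())
        }
        Ok(ForkResult::Child) => {
            drop(pr); // child keeps only the write end
            nix::unistd::setsid()?;
            // ... perform the long-running setup, then signal readiness:
            nix::unistd::write(pw.as_raw_fd(), &[0u8])?;
            Ok(()) // pw is dropped here, closing the write end
        }
        Err(_) => bail!("failed to daemonize process"),
    }
}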


@@ -0,0 +1,416 @@
use std::sync::Arc;
use anyhow::Error;
use serde_json::{json, Value};
use proxmox::{
api::{api, cli::*},
tools::fs::file_get_contents,
};
use proxmox_backup::{
tools,
api2::types::*,
backup::{
CryptMode,
CryptConfig,
DataBlob,
BackupGroup,
decrypt_key,
}
};
use crate::{
REPO_URL_SCHEMA,
KEYFILE_SCHEMA,
KEYFD_SCHEMA,
BackupDir,
api_datastore_list_snapshots,
complete_backup_snapshot,
complete_backup_group,
complete_repository,
connect,
extract_repository_from_value,
record_repository,
keyfile_parameters,
};
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
group: {
type: String,
description: "Backup group.",
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// List backup snapshots.
async fn list_snapshots(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let output_format = get_output_format(&param);
let client = connect(&repo)?;
let group: Option<BackupGroup> = if let Some(path) = param["group"].as_str() {
Some(path.parse()?)
} else {
None
};
let mut data = api_datastore_list_snapshots(&client, repo.store(), group).await?;
record_repository(&repo);
let render_snapshot_path = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
Ok(snapshot.relative_path().to_str().unwrap().to_owned())
};
let render_files = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
let mut filenames = Vec::new();
for file in &item.files {
filenames.push(file.filename.to_string());
}
Ok(tools::format::render_backup_file_list(&filenames[..]))
};
let options = default_table_format_options()
.sortby("backup-type", false)
.sortby("backup-id", false)
.sortby("backup-time", false)
.column(ColumnConfig::new("backup-id").renderer(render_snapshot_path).header("snapshot"))
.column(ColumnConfig::new("size").renderer(tools::format::render_bytes_human_readable))
.column(ColumnConfig::new("files").renderer(render_files))
;
let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOTS;
format_and_print_result_full(&mut data, info, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// List snapshot files.
async fn list_snapshot_files(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let output_format = get_output_format(&param);
let client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/files", repo.store());
let mut result = client.get(&path, Some(json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
}))).await?;
record_repository(&repo);
let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOT_FILES;
let mut data: Value = result["data"].take();
let options = default_table_format_options();
format_and_print_result_full(&mut data, info, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
}
}
)]
/// Forget (remove) backup snapshots.
async fn forget_snapshots(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let mut client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());
let result = client.delete(&path, Some(json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
}))).await?;
record_repository(&repo);
Ok(result)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Group/Snapshot path.",
},
logfile: {
type: String,
description: "The path to the log file you want to upload.",
},
keyfile: {
schema: KEYFILE_SCHEMA,
optional: true,
},
"keyfd": {
schema: KEYFD_SCHEMA,
optional: true,
},
"crypt-mode": {
type: CryptMode,
optional: true,
},
}
}
)]
/// Upload backup log file.
async fn upload_log(param: Value) -> Result<Value, Error> {
let logfile = tools::required_string_param(&param, "logfile")?;
let repo = extract_repository_from_value(&param)?;
let snapshot = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = snapshot.parse()?;
let mut client = connect(&repo)?;
let (keydata, crypt_mode) = keyfile_parameters(&param)?;
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _created, _) = decrypt_key(&key, &crate::key::get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
};
let data = file_get_contents(logfile)?;
// fixme: howto sign log?
let blob = match crypt_mode {
CryptMode::None | CryptMode::SignOnly => DataBlob::encode(&data, None, true)?,
CryptMode::Encrypt => DataBlob::encode(&data, crypt_config.as_ref().map(Arc::as_ref), true)?,
};
let raw_data = blob.into_inner();
let path = format!("api2/json/admin/datastore/{}/upload-backup-log", repo.store());
let args = json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
});
let body = hyper::Body::from(raw_data);
client.upload("application/octet-stream", body, &path, Some(args)).await
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// Show notes
async fn show_notes(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/notes", repo.store());
let args = json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
});
let output_format = get_output_format(&param);
let mut result = client.get(&path, Some(args)).await?;
let notes = result["data"].take();
if output_format == "text" {
if let Some(notes) = notes.as_str() {
println!("{}", notes);
}
} else {
format_and_print_result(
&json!({
"notes": notes,
}),
&output_format,
);
}
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
notes: {
type: String,
description: "The Notes.",
},
}
}
)]
/// Update Notes
async fn update_notes(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let notes = tools::required_string_param(&param, "notes")?;
let snapshot: BackupDir = path.parse()?;
let mut client = connect(&repo)?;
let path = format!("api2/json/admin/datastore/{}/notes", repo.store());
let args = json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
"notes": notes,
});
client.put(&path, Some(args)).await?;
Ok(Value::Null)
}
fn notes_cli() -> CliCommandMap {
CliCommandMap::new()
.insert(
"show",
CliCommand::new(&API_METHOD_SHOW_NOTES)
.arg_param(&["snapshot"])
.completion_cb("snapshot", complete_backup_snapshot),
)
.insert(
"update",
CliCommand::new(&API_METHOD_UPDATE_NOTES)
.arg_param(&["snapshot", "notes"])
.completion_cb("snapshot", complete_backup_snapshot),
)
}
pub fn snapshot_mgtm_cli() -> CliCommandMap {
CliCommandMap::new()
.insert("notes", notes_cli())
.insert(
"list",
CliCommand::new(&API_METHOD_LIST_SNAPSHOTS)
.arg_param(&["group"])
.completion_cb("group", complete_backup_group)
.completion_cb("repository", complete_repository)
)
.insert(
"files",
CliCommand::new(&API_METHOD_LIST_SNAPSHOT_FILES)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot)
)
.insert(
"forget",
CliCommand::new(&API_METHOD_FORGET_SNAPSHOTS)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot)
)
.insert(
"upload-log",
CliCommand::new(&API_METHOD_UPLOAD_LOG)
.arg_param(&["snapshot", "logfile"])
.completion_cb("snapshot", complete_backup_snapshot)
.completion_cb("logfile", tools::complete_file_name)
.completion_cb("keyfile", tools::complete_file_name)
.completion_cb("repository", complete_repository)
)
}


@@ -124,7 +124,7 @@ async fn task_stop(param: Value) -> Result<Value, Error> {
let mut client = connect(&repo)?;
let path = format!("api2/json/nodes/localhost/tasks/{}", tools::percent_encode_component(upid_str));
let _ = client.delete(&path, None).await?;
Ok(Value::Null)


@@ -0,0 +1,219 @@
use anyhow::{Error};
use serde_json::Value;
use proxmox::{
api::{
api,
cli::*,
RpcEnvironment,
ApiHandler,
},
};
use proxmox_backup::{
api2::{
self,
types::{
CHANGER_ID_SCHEMA,
},
},
tape::{
complete_changer_path,
},
config::{
drive::{
complete_drive_name,
complete_changer_name,
}
},
};
pub fn changer_commands() -> CommandLineInterface {
let cmd_def = CliCommandMap::new()
.insert("scan", CliCommand::new(&API_METHOD_SCAN_FOR_CHANGERS))
.insert("list", CliCommand::new(&API_METHOD_LIST_CHANGERS))
.insert("config",
CliCommand::new(&API_METHOD_GET_CONFIG)
.arg_param(&["name"])
.completion_cb("name", complete_changer_name)
)
.insert(
"remove",
CliCommand::new(&api2::config::changer::API_METHOD_DELETE_CHANGER)
.arg_param(&["name"])
.completion_cb("name", complete_changer_name)
)
.insert(
"create",
CliCommand::new(&api2::config::changer::API_METHOD_CREATE_CHANGER)
.arg_param(&["name"])
.completion_cb("name", complete_drive_name)
.completion_cb("path", complete_changer_path)
)
.insert(
"update",
CliCommand::new(&api2::config::changer::API_METHOD_UPDATE_CHANGER)
.arg_param(&["name"])
.completion_cb("name", complete_changer_name)
.completion_cb("path", complete_changer_path)
)
.insert("status",
CliCommand::new(&API_METHOD_GET_STATUS)
.arg_param(&["name"])
.completion_cb("name", complete_changer_name)
)
.insert("transfer",
CliCommand::new(&api2::tape::changer::API_METHOD_TRANSFER)
.arg_param(&["name"])
.completion_cb("name", complete_changer_name)
)
;
cmd_def.into()
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
},
},
)]
/// List changers
fn list_changers(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let info = &api2::config::changer::API_METHOD_LIST_CHANGERS;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("name"))
.column(ColumnConfig::new("path"))
.column(ColumnConfig::new("vendor"))
.column(ColumnConfig::new("model"))
.column(ColumnConfig::new("serial"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
},
},
)]
/// Scan for SCSI tape changers
fn scan_for_changers(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let info = &api2::tape::changer::API_METHOD_SCAN_CHANGERS;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("path"))
.column(ColumnConfig::new("vendor"))
.column(ColumnConfig::new("model"))
.column(ColumnConfig::new("serial"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
name: {
schema: CHANGER_ID_SCHEMA,
},
},
},
)]
/// Get tape changer configuration
fn get_config(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let info = &api2::config::changer::API_METHOD_GET_CONFIG;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("name"))
.column(ColumnConfig::new("path"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
name: {
schema: CHANGER_ID_SCHEMA,
},
},
},
)]
/// Get tape changer status
fn get_status(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let info = &api2::tape::changer::API_METHOD_GET_STATUS;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("entry-kind"))
.column(ColumnConfig::new("entry-id"))
.column(ColumnConfig::new("changer-id"))
.column(ColumnConfig::new("loaded-slot"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}


@@ -0,0 +1,188 @@
use anyhow::Error;
use serde_json::Value;
use proxmox::{
api::{
api,
cli::*,
RpcEnvironment,
ApiHandler,
},
};
use proxmox_backup::{
api2::{
self,
types::{
DRIVE_ID_SCHEMA,
},
},
tape::{
complete_drive_path,
},
config::drive::{
complete_drive_name,
complete_changer_name,
complete_linux_drive_name,
},
};
pub fn drive_commands() -> CommandLineInterface {
let cmd_def = CliCommandMap::new()
.insert("scan", CliCommand::new(&API_METHOD_SCAN_FOR_DRIVES))
.insert("list", CliCommand::new(&API_METHOD_LIST_DRIVES))
.insert("config",
CliCommand::new(&API_METHOD_GET_CONFIG)
.arg_param(&["name"])
.completion_cb("name", complete_linux_drive_name)
)
.insert(
"remove",
CliCommand::new(&api2::config::drive::API_METHOD_DELETE_DRIVE)
.arg_param(&["name"])
.completion_cb("name", complete_linux_drive_name)
)
.insert(
"create",
CliCommand::new(&api2::config::drive::API_METHOD_CREATE_DRIVE)
.arg_param(&["name"])
.completion_cb("name", complete_drive_name)
.completion_cb("path", complete_drive_path)
.completion_cb("changer", complete_changer_name)
)
.insert(
"update",
CliCommand::new(&api2::config::drive::API_METHOD_UPDATE_DRIVE)
.arg_param(&["name"])
.completion_cb("name", complete_linux_drive_name)
.completion_cb("path", complete_drive_path)
.completion_cb("changer", complete_changer_name)
)
.insert(
"load",
CliCommand::new(&api2::tape::drive::API_METHOD_LOAD_SLOT)
.arg_param(&["drive"])
.completion_cb("drive", complete_linux_drive_name)
)
.insert(
"unload",
CliCommand::new(&api2::tape::drive::API_METHOD_UNLOAD)
.arg_param(&["drive"])
.completion_cb("drive", complete_linux_drive_name)
)
;
cmd_def.into()
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
},
},
)]
/// List drives
fn list_drives(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let info = &api2::config::drive::API_METHOD_LIST_DRIVES;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("name"))
.column(ColumnConfig::new("path"))
.column(ColumnConfig::new("changer"))
.column(ColumnConfig::new("vendor"))
.column(ColumnConfig::new("model"))
.column(ColumnConfig::new("serial"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
},
}
)]
/// Scan for drives
fn scan_for_drives(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let info = &api2::tape::drive::API_METHOD_SCAN_DRIVES;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("path"))
.column(ColumnConfig::new("vendor"))
.column(ColumnConfig::new("model"))
.column(ColumnConfig::new("serial"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
name: {
schema: DRIVE_ID_SCHEMA,
},
},
},
)]
/// Get drive configuration
fn get_config(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let info = &api2::config::drive::API_METHOD_GET_CONFIG;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("name"))
.column(ColumnConfig::new("path"))
.column(ColumnConfig::new("changer"))
.column(ColumnConfig::new("changer-drive-id"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}


@@ -0,0 +1,8 @@
mod changer;
pub use changer::*;
mod drive;
pub use drive::*;
mod pool;
pub use pool::*;


@@ -0,0 +1,137 @@
use anyhow::{Error};
use serde_json::Value;
use proxmox::{
api::{
api,
cli::*,
RpcEnvironment,
ApiHandler,
},
};
use proxmox_backup::{
api2::{
self,
types::{
MEDIA_POOL_NAME_SCHEMA,
},
},
config::{
drive::{
complete_drive_name,
},
media_pool::{
complete_pool_name,
},
},
};
pub fn pool_commands() -> CommandLineInterface {
let cmd_def = CliCommandMap::new()
.insert("list", CliCommand::new(&API_METHOD_LIST_POOLS))
.insert("config",
CliCommand::new(&API_METHOD_GET_CONFIG)
.arg_param(&["name"])
.completion_cb("name", complete_pool_name)
)
.insert(
"remove",
CliCommand::new(&api2::config::media_pool::API_METHOD_DELETE_POOL)
.arg_param(&["name"])
.completion_cb("name", complete_pool_name)
)
.insert(
"create",
CliCommand::new(&api2::config::media_pool::API_METHOD_CREATE_POOL)
.arg_param(&["name"])
.completion_cb("name", complete_pool_name)
.completion_cb("drive", complete_drive_name)
)
.insert(
"update",
CliCommand::new(&api2::config::media_pool::API_METHOD_UPDATE_POOL)
.arg_param(&["name"])
.completion_cb("name", complete_pool_name)
.completion_cb("drive", complete_drive_name)
)
;
cmd_def.into()
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
},
},
)]
/// List media pool
fn list_pools(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let info = &api2::config::media_pool::API_METHOD_LIST_POOLS;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("name"))
.column(ColumnConfig::new("drive"))
.column(ColumnConfig::new("allocation"))
.column(ColumnConfig::new("retention"))
.column(ColumnConfig::new("template"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
name: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
},
},
)]
/// Get media pool configuration
fn get_config(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let output_format = get_output_format(&param);
let info = &api2::config::media_pool::API_METHOD_GET_CONFIG;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("name"))
.column(ColumnConfig::new("drive"))
.column(ColumnConfig::new("allocation"))
.column(ColumnConfig::new("retention"))
.column(ColumnConfig::new("template"))
;
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(())
}
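For orientation, this is roughly how such a command map gets mounted in a tape CLI binary. A minimal sketch, assuming the proxmox cli helpers already imported above; the "pool" prefix and the runtime callback are illustrative, not part of this series:

fn main() {
    // mount the pool subcommands under a "pool" prefix
    let cmd_def = CliCommandMap::new()
        .insert("pool", pool_commands());
    let rpcenv = CliEnvironment::new();
    // hand async handlers to the tokio runtime wrapper
    run_cli_command(cmd_def, rpcenv, Some(|future| {
        proxmox_backup::tools::runtime::main(future)
    }));
}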

View File

@ -3,6 +3,16 @@
//! This library implements the client side to access the backups
//! server using https.
use anyhow::Error;
use crate::{
api2::types::{Userid, Authid},
tools::ticket::Ticket,
auth_helpers::private_auth_key,
};
mod merge_known_chunks;
pub mod pipe_to_stream;
@ -31,3 +41,27 @@ mod backup_specification;
pub use backup_specification::*;
pub mod pull;
/// Connect to localhost:8007 as root@pam
///
/// This automatically creates a ticket if run as 'root' user.
pub fn connect_to_localhost() -> Result<HttpClient, Error> {
let uid = nix::unistd::Uid::current();
let mut options = HttpClientOptions::new()
.prefix(Some("proxmox-backup".to_string()))
.verify_cert(false); // not required for connection to localhost
let client = if uid.is_root() {
let ticket = Ticket::new("PBS", Userid::root_userid())?
.sign(private_auth_key(), None)?;
options = options.password(Some(ticket));
HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
} else {
options = options.ticket_cache(true).interactive(true);
HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
};
Ok(client)
}
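A quick usage sketch for the new helper; the datastore listing endpoint is only an illustrative target:

use anyhow::Error;
use proxmox_backup::client::connect_to_localhost;

async fn list_datastores() -> Result<(), Error> {
    // signs a ticket when running as root, otherwise falls back to the
    // interactive ticket cache (see connect_to_localhost above)
    let mut client = connect_to_localhost()?;
    let result = client.get("api2/json/admin/datastore", None).await?;
    println!("{}", serde_json::to_string_pretty(&result["data"])?);
    Ok(())
}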

View File

@ -475,6 +475,13 @@ impl BackupWriter {
Ok(index)
}
/// Retrieve backup time of last backup
pub async fn previous_backup_time(&self) -> Result<Option<i64>, Error> {
let data = self.h2.get("previous_backup_time", None).await?;
serde_json::from_value(data)
.map_err(|err| format_err!("Failed to parse backup time value returned by server - {}", err))
}
/// Download backup manifest (index.json) of last backup
pub async fn download_previous_manifest(&self) -> Result<BackupManifest, Error> {

View File

@ -534,6 +534,15 @@ impl HttpClient {
self.request(req).await
}
pub async fn put(
&mut self,
path: &str,
data: Option<Value>,
) -> Result<Value, Error> {
let req = Self::request_builder(&self.server, self.port, "PUT", path, data)?;
self.request(req).await
}
pub async fn download(
&mut self,
path: &str,

View File

@ -3,6 +3,7 @@ use std::task::{Context, Poll};
use anyhow::{Error};
use futures::*;
use pin_project::pin_project;
use crate::backup::ChunkInfo;
@ -15,7 +16,9 @@ pub trait MergeKnownChunks: Sized {
fn merge_known_chunks(self) -> MergeKnownChunksQueue<Self>;
}
#[pin_project]
pub struct MergeKnownChunksQueue<S> {
#[pin]
input: S,
buffer: Option<MergedChunkInfo>,
}
@ -39,10 +42,10 @@ where
type Item = Result<MergedChunkInfo, Error>;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {
-let this = unsafe { self.get_unchecked_mut() };
let mut this = self.project();
loop {
-match ready!(unsafe { Pin::new_unchecked(&mut this.input) }.poll_next(cx)) {
match ready!(this.input.as_mut().poll_next(cx)) {
Some(Err(err)) => return Poll::Ready(Some(Err(err))),
None => {
if let Some(last) = this.buffer.take() {
@ -58,13 +61,13 @@ where
match last {
None => {
-this.buffer = Some(MergedChunkInfo::Known(list));
*this.buffer = Some(MergedChunkInfo::Known(list));
// continue
}
Some(MergedChunkInfo::Known(mut last_list)) => {
last_list.extend_from_slice(&list);
let len = last_list.len();
-this.buffer = Some(MergedChunkInfo::Known(last_list));
*this.buffer = Some(MergedChunkInfo::Known(last_list));
if len >= 64 {
return Poll::Ready(this.buffer.take().map(Ok));
@ -72,7 +75,7 @@ where
// continue
}
Some(MergedChunkInfo::New(_)) => {
-this.buffer = Some(MergedChunkInfo::Known(list));
*this.buffer = Some(MergedChunkInfo::Known(list));
return Poll::Ready(last.map(Ok));
}
}
@ -80,7 +83,7 @@ where
MergedChunkInfo::New(chunk_info) => {
let new = MergedChunkInfo::New(chunk_info);
if let Some(last) = this.buffer.take() {
-this.buffer = Some(new);
*this.buffer = Some(new);
return Poll::Ready(Some(Ok(last)));
} else {
return Poll::Ready(Some(Ok(new)));

View File

@ -395,7 +395,7 @@ pub async fn pull_group(
tgt_store: Arc<DataStore>,
group: &BackupGroup,
delete: bool,
-progress: Option<(usize, usize)>, // (groups_done, group_count)
progress: &mut StoreProgress,
) -> Result<(), Error> {
let path = format!("api2/json/admin/datastore/{}/snapshots", src_repo.store());
@ -418,18 +418,10 @@ pub async fn pull_group(
let mut remote_snapshots = std::collections::HashSet::new();
-let (per_start, per_group) = if let Some((groups_done, group_count)) = progress {
-let per_start = (groups_done as f64)/(group_count as f64);
-let per_group = 1.0/(group_count as f64);
-(per_start, per_group)
-} else {
-(0.0, 1.0)
-};
// start with 16384 chunks (up to 65GB)
let downloaded_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*64)));
-let snapshot_count = list.len();
progress.group_snapshots = list.len() as u64;
for (pos, item) in list.into_iter().enumerate() {
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
@ -469,9 +461,8 @@ pub async fn pull_group(
let result = pull_snapshot_from(worker, reader, tgt_store.clone(), &snapshot, downloaded_chunks.clone()).await;
-let percentage = (pos as f64)/(snapshot_count as f64);
-let percentage = per_start + percentage*per_group;
-worker.log(format!("percentage done: {:.2}%", percentage*100.0));
progress.done_snapshots = pos as u64 + 1;
worker.log(format!("percentage done: {}", progress));
result?; // stop on error
} }
@ -507,6 +498,8 @@ pub async fn pull_store(
let mut list: Vec<GroupListItem> = serde_json::from_value(result["data"].take())?;
worker.log(format!("found {} groups to sync", list.len()));
list.sort_unstable_by(|a, b| {
let type_order = a.backup_type.cmp(&b.backup_type);
if type_order == std::cmp::Ordering::Equal {
@ -523,9 +516,13 @@ pub async fn pull_store(
new_groups.insert(BackupGroup::new(&item.backup_type, &item.backup_id));
}
-let group_count = list.len();
-for (groups_done, item) in list.into_iter().enumerate() {
let mut progress = StoreProgress::new(list.len() as u64);
for (done, item) in list.into_iter().enumerate() {
progress.done_groups = done as u64;
progress.done_snapshots = 0;
progress.group_snapshots = 0;
let group = BackupGroup::new(&item.backup_type, &item.backup_id);
let (owner, _lock_guard) = match tgt_store.create_locked_backup_group(&group, &auth_id) {
@ -551,7 +548,7 @@ pub async fn pull_store(
tgt_store.clone(),
&group,
delete,
-Some((groups_done, group_count)),
&mut progress,
).await {
worker.log(format!(
"sync group {}/{} failed - {}",
@ -565,7 +562,7 @@ pub async fn pull_store(
if delete {
let result: Result<(), Error> = proxmox::try_block!({
-let local_groups = BackupGroup::list_groups(&tgt_store.base_path())?;
let local_groups = BackupInfo::list_backup_groups(&tgt_store.base_path())?;
for local_group in local_groups {
if new_groups.contains(&local_group) { continue; }
worker.log(format!("delete vanished group '{}/{}'", local_group.backup_type(), local_group.backup_id()));

View File

@ -2,6 +2,7 @@ use anyhow::{bail, Error};
use serde_json::json;
use super::HttpClient;
use crate::tools;
pub async fn display_task_log(
client: HttpClient,
@ -9,7 +10,7 @@ pub async fn display_task_log(
strip_date: bool,
) -> Result<(), Error> {
-let path = format!("api2/json/nodes/localhost/tasks/{}/log", upid_str);
let path = format!("api2/json/nodes/localhost/tasks/{}/log", tools::percent_encode_component(upid_str));
let mut start = 1;
let limit = 500;

View File

@ -24,6 +24,8 @@ pub mod sync;
pub mod token_shadow;
pub mod user;
pub mod verify;
pub mod drive;
pub mod media_pool;
/// Check configuration directory permissions
///

145
src/config/drive.rs Normal file
View File

@ -0,0 +1,145 @@
use std::collections::HashMap;
use anyhow::{bail, Error};
use lazy_static::lazy_static;
use proxmox::{
api::{
schema::*,
section_config::{
SectionConfig,
SectionConfigData,
SectionConfigPlugin,
},
},
tools::fs::{
open_file_locked,
replace_file,
CreateOptions,
},
};
use crate::{
api2::types::{
DRIVE_ID_SCHEMA,
VirtualTapeDrive,
LinuxTapeDrive,
ScsiTapeChanger,
},
};
lazy_static! {
pub static ref CONFIG: SectionConfig = init();
}
fn init() -> SectionConfig {
let mut config = SectionConfig::new(&DRIVE_ID_SCHEMA);
let obj_schema = match VirtualTapeDrive::API_SCHEMA {
Schema::Object(ref obj_schema) => obj_schema,
_ => unreachable!(),
};
let plugin = SectionConfigPlugin::new("virtual".to_string(), Some("name".to_string()), obj_schema);
config.register_plugin(plugin);
let obj_schema = match LinuxTapeDrive::API_SCHEMA {
Schema::Object(ref obj_schema) => obj_schema,
_ => unreachable!(),
};
let plugin = SectionConfigPlugin::new("linux".to_string(), Some("name".to_string()), obj_schema);
config.register_plugin(plugin);
let obj_schema = match ScsiTapeChanger::API_SCHEMA {
Schema::Object(ref obj_schema) => obj_schema,
_ => unreachable!(),
};
let plugin = SectionConfigPlugin::new("changer".to_string(), Some("name".to_string()), obj_schema);
config.register_plugin(plugin);
config
}
pub const DRIVE_CFG_FILENAME: &str = "/etc/proxmox-backup/tape.cfg";
pub const DRIVE_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.tape.lck";
pub fn lock() -> Result<std::fs::File, Error> {
open_file_locked(DRIVE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)
}
pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
let content = proxmox::tools::fs::file_read_optional_string(DRIVE_CFG_FILENAME)?;
let content = content.unwrap_or(String::from(""));
let digest = openssl::sha::sha256(content.as_bytes());
let data = CONFIG.parse(DRIVE_CFG_FILENAME, &content)?;
Ok((data, digest))
}
pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
let raw = CONFIG.write(DRIVE_CFG_FILENAME, &config)?;
let backup_user = crate::backup::backup_user()?;
let mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
// set the correct owner/group/permissions while saving file
// owner(rw) = root, group(r)= backup
let options = CreateOptions::new()
.perm(mode)
.owner(nix::unistd::ROOT)
.group(backup_user.gid);
replace_file(DRIVE_CFG_FILENAME, raw.as_bytes(), options)?;
Ok(())
}
pub fn check_drive_exists(config: &SectionConfigData, drive: &str) -> Result<(), Error> {
match config.sections.get(drive) {
Some((section_type, _)) => {
if !(section_type == "linux" || section_type == "virtual") {
bail!("Entry '{}' exists, but is not a tape drive", drive);
}
}
None => bail!("Drive '{}' does not exist", drive),
}
Ok(())
}
// shell completion helper
/// List all drive names
pub fn complete_drive_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter()
.map(|(id, _)| id.to_string())
.collect(),
Err(_) => return vec![],
}
}
/// List Linux tape drives
pub fn complete_linux_drive_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter()
.filter(|(_id, (section_type, _))| {
section_type == "linux"
})
.map(|(id, _)| id.to_string())
.collect(),
Err(_) => return vec![],
}
}
/// List SCSI tape changer names
pub fn complete_changer_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter()
.filter(|(_id, (section_type, _))| {
section_type == "changer"
})
.map(|(id, _)| id.to_string())
.collect(),
Err(_) => return vec![],
}
}
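The calling convention the tape config API handlers use for this module looks roughly like this (a sketch; the drive name is illustrative):

use anyhow::Error;
use proxmox_backup::config::drive;

fn update_tape_cfg() -> Result<(), Error> {
    let _lock = drive::lock()?; // hold .tape.lck while mutating
    let (config, _digest) = drive::config()?; // parsed sections plus sha256 of the raw file
    drive::check_drive_exists(&config, "drive0")?; // bails unless a linux/virtual section exists
    // ... mutate `config`, optionally comparing `_digest` against a
    // client-supplied digest to detect concurrent modifications ...
    drive::save_config(&config)?; // atomic replace, owner root, group backup, mode 0640
    Ok(())
}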

88
src/config/media_pool.rs Normal file
View File

@ -0,0 +1,88 @@
use std::collections::HashMap;
use anyhow::Error;
use lazy_static::lazy_static;
use proxmox::{
api::{
schema::*,
section_config::{
SectionConfig,
SectionConfigData,
SectionConfigPlugin,
}
},
tools::fs::{
open_file_locked,
replace_file,
CreateOptions,
},
};
use crate::{
api2::types::{
MEDIA_POOL_NAME_SCHEMA,
MediaPoolConfig,
},
};
lazy_static! {
static ref CONFIG: SectionConfig = init();
}
fn init() -> SectionConfig {
let mut config = SectionConfig::new(&MEDIA_POOL_NAME_SCHEMA);
let obj_schema = match MediaPoolConfig::API_SCHEMA {
Schema::Object(ref obj_schema) => obj_schema,
_ => unreachable!(),
};
let plugin = SectionConfigPlugin::new("pool".to_string(), Some("name".to_string()), obj_schema);
config.register_plugin(plugin);
config
}
pub const MEDIA_POOL_CFG_FILENAME: &'static str = "/etc/proxmox-backup/media-pool.cfg";
pub const MEDIA_POOL_CFG_LOCKFILE: &'static str = "/etc/proxmox-backup/.media-pool.lck";
pub fn lock() -> Result<std::fs::File, Error> {
open_file_locked(MEDIA_POOL_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)
}
pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
let content = proxmox::tools::fs::file_read_optional_string(MEDIA_POOL_CFG_FILENAME)?;
let content = content.unwrap_or(String::from(""));
let digest = openssl::sha::sha256(content.as_bytes());
let data = CONFIG.parse(MEDIA_POOL_CFG_FILENAME, &content)?;
Ok((data, digest))
}
pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
let raw = CONFIG.write(MEDIA_POOL_CFG_FILENAME, &config)?;
let backup_user = crate::backup::backup_user()?;
let mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
// set the correct owner/group/permissions while saving file
// owner(rw) = root, group(r)= backup
let options = CreateOptions::new()
.perm(mode)
.owner(nix::unistd::ROOT)
.group(backup_user.gid);
replace_file(MEDIA_POOL_CFG_FILENAME, raw.as_bytes(), options)?;
Ok(())
}
// shell completion helper
/// List existing pool names
pub fn complete_pool_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter().map(|(id, _)| id.to_string()).collect(),
Err(_) => return vec![],
}
}

View File

@ -1,14 +1,16 @@
use std::path::Path;
use std::process::Command;
use std::collections::HashMap;
use std::os::unix::io::{AsRawFd, FromRawFd};
use anyhow::{Error, bail, format_err};
use lazy_static::lazy_static;
-use nix::sys::socket::{socket, AddressFamily, SockType, SockFlag};
use nix::ioctl_read_bad;
use nix::sys::socket::{socket, AddressFamily, SockType, SockFlag};
use regex::Regex;
use proxmox::*; // for IP macros
use proxmox::tools::fd::Fd;
pub static IPV4_REVERSE_MASK: &[&str] = &[
"0.0.0.0",
@ -133,8 +135,24 @@ pub fn get_network_interfaces() -> Result<HashMap<String, bool>, Error> {
let lines = raw.lines();
-let sock = socket(AddressFamily::Inet, SockType::Datagram, SockFlag::empty(), None)
-.or_else(|_| socket(AddressFamily::Inet6, SockType::Datagram, SockFlag::empty(), None))?;
let sock = unsafe {
Fd::from_raw_fd(
socket(
AddressFamily::Inet,
SockType::Datagram,
SockFlag::empty(),
None,
)
.or_else(|_| {
socket(
AddressFamily::Inet6,
SockType::Datagram,
SockFlag::empty(),
None,
)
})?,
)
};
let mut interface_list = HashMap::new();
@ -146,7 +164,7 @@ pub fn get_network_interfaces() -> Result<HashMap<String, bool>, Error> {
for (i, b) in std::ffi::CString::new(ifname)?.as_bytes_with_nul().iter().enumerate() {
if i < (libc::IFNAMSIZ-1) { req.ifr_name[i] = *b as libc::c_uchar; }
}
-let res = unsafe { get_interface_flags(sock, &mut req)? };
let res = unsafe { get_interface_flags(sock.as_raw_fd(), &mut req)? };
if res != 0 {
bail!("ioctl get_interface_flags for '{}' failed ({})", ifname, res);
}

View File

@ -30,3 +30,5 @@ pub mod auth_helpers;
pub mod auth;
pub mod rrd;
pub mod tape;

View File

@ -237,7 +237,15 @@ impl<'a, 'b> Archiver<'a, 'b> {
let old_patterns_count = self.patterns.len();
self.read_pxar_excludes(dir.as_raw_fd())?;
-let file_list = self.generate_directory_file_list(&mut dir, is_root)?;
let mut file_list = self.generate_directory_file_list(&mut dir, is_root)?;
if is_root && old_patterns_count > 0 {
file_list.push(FileListEntry {
name: CString::new(".pxarexclude-cli").unwrap(),
path: PathBuf::new(),
stat: unsafe { std::mem::zeroed() },
});
}
let dir_fd = dir.as_raw_fd();
@ -247,7 +255,7 @@ impl<'a, 'b> Archiver<'a, 'b> {
let file_name = file_entry.name.to_bytes();
if is_root && file_name == b".pxarexclude-cli" {
-self.encode_pxarexclude_cli(encoder, &file_entry.name)?;
self.encode_pxarexclude_cli(encoder, &file_entry.name, old_patterns_count)?;
continue;
}
@ -379,8 +387,9 @@ impl<'a, 'b> Archiver<'a, 'b> {
&mut self,
encoder: &mut Encoder,
file_name: &CStr,
patterns_count: usize,
) -> Result<(), Error> {
-let content = generate_pxar_excludes_cli(&self.patterns);
let content = generate_pxar_excludes_cli(&self.patterns[..patterns_count]);
if let Some(ref mut catalog) = self.catalog {
catalog.add_file(file_name, content.len() as u64, 0)?;
@ -404,14 +413,6 @@ impl<'a, 'b> Archiver<'a, 'b> {
let mut file_list = Vec::new();
-if is_root && !self.patterns.is_empty() {
-file_list.push(FileListEntry {
-name: CString::new(".pxarexclude-cli").unwrap(),
-path: PathBuf::new(),
-stat: unsafe { std::mem::zeroed() },
-});
-}
for file in dir.iter() {
let file = file?;
@ -425,10 +426,6 @@ impl<'a, 'b> Archiver<'a, 'b> {
continue;
}
-if file_name_bytes == b".pxarexclude" {
-continue;
-}
let os_file_name = OsStr::from_bytes(file_name_bytes);
assert_single_path_component(os_file_name)?;
let full_path = self.path.join(os_file_name);
@ -443,9 +440,10 @@ impl<'a, 'b> Archiver<'a, 'b> {
Err(err) => bail!("stat failed on {:?}: {}", full_path, err),
};
let match_path = PathBuf::from("/").join(full_path.clone());
if self
.patterns
-.matches(full_path.as_os_str().as_bytes(), Some(stat.st_mode as u32))
.matches(match_path.as_os_str().as_bytes(), Some(stat.st_mode as u32))
== Some(MatchType::Exclude)
{
continue;

View File

@ -8,6 +8,7 @@ use nix::fcntl::OFlag;
use nix::sys::stat::{mkdirat, Mode};
use proxmox::sys::error::SysError;
use proxmox::tools::fd::BorrowedFd;
use pxar::Metadata;
use crate::pxar::tools::{assert_single_path_component, perms_from_metadata};
@ -35,7 +36,11 @@ impl PxarDir {
}
}
-fn create_dir(&mut self, parent: RawFd, allow_existing_dirs: bool) -> Result<RawFd, Error> {
fn create_dir(
&mut self,
parent: RawFd,
allow_existing_dirs: bool,
) -> Result<BorrowedFd, Error> {
match mkdirat(
parent,
self.file_name.as_os_str(),
@ -52,7 +57,7 @@ impl PxarDir {
self.open_dir(parent)
}
-fn open_dir(&mut self, parent: RawFd) -> Result<RawFd, Error> {
fn open_dir(&mut self, parent: RawFd) -> Result<BorrowedFd, Error> {
let dir = Dir::openat(
parent,
self.file_name.as_os_str(),
@ -60,14 +65,14 @@ impl PxarDir {
Mode::empty(),
)?;
-let fd = dir.as_raw_fd();
let fd = BorrowedFd::new(&dir);
self.dir = Some(dir);
Ok(fd)
}
-pub fn try_as_raw_fd(&self) -> Option<RawFd> {
-self.dir.as_ref().map(AsRawFd::as_raw_fd)
pub fn try_as_borrowed_fd(&self) -> Option<BorrowedFd> {
self.dir.as_ref().map(BorrowedFd::new)
}
pub fn metadata(&self) -> &Metadata {
@ -119,32 +124,39 @@ impl PxarDirStack {
Ok(out)
}
-pub fn last_dir_fd(&mut self, allow_existing_dirs: bool) -> Result<RawFd, Error> {
pub fn last_dir_fd(&mut self, allow_existing_dirs: bool) -> Result<BorrowedFd, Error> {
// should not be possible given the way we use it:
assert!(!self.dirs.is_empty(), "PxarDirStack underrun");
let dirs_len = self.dirs.len();
-let mut fd = self.dirs[self.created - 1]
-.try_as_raw_fd()
-.ok_or_else(|| format_err!("lost track of directory file descriptors"))?;
-while self.created < self.dirs.len() {
-fd = self.dirs[self.created].create_dir(fd, allow_existing_dirs)?;
let mut fd = self.dirs[self.created - 1]
.try_as_borrowed_fd()
.ok_or_else(|| format_err!("lost track of directory file descriptors"))?
.as_raw_fd();
while self.created < dirs_len {
fd = self.dirs[self.created]
.create_dir(fd, allow_existing_dirs)?
.as_raw_fd();
self.created += 1;
}
-Ok(fd)
self.dirs[self.created - 1]
.try_as_borrowed_fd()
.ok_or_else(|| format_err!("lost track of directory file descriptors"))
}
pub fn create_last_dir(&mut self, allow_existing_dirs: bool) -> Result<(), Error> {
-let _: RawFd = self.last_dir_fd(allow_existing_dirs)?;
let _: BorrowedFd = self.last_dir_fd(allow_existing_dirs)?;
Ok(())
}
-pub fn root_dir_fd(&self) -> Result<RawFd, Error> {
pub fn root_dir_fd(&self) -> Result<BorrowedFd, Error> {
// should not be possible given the way we use it:
assert!(!self.dirs.is_empty(), "PxarDirStack underrun");
self.dirs[0]
-.try_as_raw_fd()
.try_as_borrowed_fd()
.ok_or_else(|| format_err!("lost track of directory file descriptors"))
}
}

View File

@ -277,11 +277,11 @@ impl Extractor {
.map_err(|err| format_err!("unexpected end of directory entry: {}", err))?
.ok_or_else(|| format_err!("broken pxar archive (directory stack underrun)"))?;
-if let Some(fd) = dir.try_as_raw_fd() {
if let Some(fd) = dir.try_as_borrowed_fd() {
metadata::apply(
self.feature_flags,
dir.metadata(),
-fd,
fd.as_raw_fd(),
&CString::new(dir.file_name().as_bytes())?,
&mut self.on_error,
)
@ -298,6 +298,7 @@ impl Extractor {
fn parent_fd(&mut self) -> Result<RawFd, Error> {
self.dir_stack
.last_dir_fd(self.allow_existing_dirs)
.map(|d| d.as_raw_fd())
.map_err(|err| format_err!("failed to get parent directory file descriptor: {}", err))
}
@ -325,7 +326,7 @@ impl Extractor {
let root = self.dir_stack.root_dir_fd()?;
let target = CString::new(link.as_bytes())?;
nix::unistd::linkat(
-Some(root),
Some(root.as_raw_fd()),
target.as_c_str(),
Some(parent),
file_name,

View File

@ -81,7 +81,7 @@ const VERIFY_ERR_TEMPLATE: &str = r###"
Job ID: {{job.id}}
Datastore: {{job.store}}
-Verification failed on these snapshots:
Verification failed on these snapshots/groups:
{{#each errors}}
{{this~}}

View File

@ -4,7 +4,7 @@ use proxmox::try_block;
use crate::{
api2::types::*,
-backup::{compute_prune_info, BackupGroup, DataStore, PruneOptions},
backup::{compute_prune_info, BackupInfo, DataStore, PruneOptions},
server::jobstate::Job,
server::WorkerTask,
task_log,
@ -43,7 +43,7 @@ pub fn do_prune_job(
let base_path = datastore.base_path();
-let groups = BackupGroup::list_groups(&base_path)?;
let groups = BackupInfo::list_backup_groups(&base_path)?;
for group in groups {
let list = group.list_backups(&base_path)?;
let mut prune_info = compute_prune_info(list, &prune_options)?;

View File

@ -20,6 +20,7 @@ fn files() -> Vec<&'static str> {
fn commands() -> Vec<(&'static str, Vec<&'static str>)> {
vec![
// ("<command>", vec![<arg [, arg]>])
("proxmox-backup-manager", vec!["versions", "--verbose"]),
("df", vec!["-h"]),
("lsblk", vec!["--ascii"]),
("zpool", vec!["status"]),
@ -54,10 +55,10 @@ pub fn generate_report() -> String {
.map(|file_name| {
let content = match file_read_optional_string(Path::new(file_name)) {
Ok(Some(content)) => content,
-Ok(None) => String::from("# file does not exists"),
Ok(None) => String::from("# file does not exist"),
Err(err) => err.to_string(),
};
-format!("# cat '{}'\n{}", file_name, content)
format!("$ cat '{}'\n{}", file_name, content)
})
.collect::<Vec<String>>()
.join("\n\n");
@ -73,14 +74,14 @@ pub fn generate_report() -> String {
Ok(output) => String::from_utf8_lossy(&output.stdout).to_string(),
Err(err) => err.to_string(),
};
-format!("# `{} {}`\n{}", command, args.join(" "), output)
format!("$ `{} {}`\n{}", command, args.join(" "), output)
})
.collect::<Vec<String>>()
.join("\n\n");
let function_outputs = function_calls()
.iter()
-.map(|(desc, function)| format!("# {}\n{}", desc, function()))
.map(|(desc, function)| format!("$ {}\n{}", desc, function()))
.collect::<Vec<String>>()
.join("\n\n");

View File

@ -623,6 +623,10 @@ fn check_auth(
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokenid: Authid = tokenid.parse()?;
if !user_info.is_active_auth_id(&tokenid) {
bail!("user account or token disabled or expired.");
}
let tokensecret = parts.next()
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokensecret = percent_decode_str(tokensecret)

View File

@ -69,8 +69,15 @@ pub fn do_verification_job(
let result = verify_all_backups(datastore, worker.clone(), worker.upid(), None, Some(&filter));
let job_result = match result {
-Ok(ref errors) if errors.is_empty() => Ok(()),
-Ok(_) => Err(format_err!("verification failed - please check the log for details")),
Ok(ref failed_dirs) if failed_dirs.is_empty() => Ok(()),
Ok(ref failed_dirs) => {
worker.log("Failed to verify the following snapshots/groups:");
for dir in failed_dirs {
worker.log(format!("\t{}", dir));
}
Err(format_err!("verification failed - please check the log for details"))
},
Err(_) => Err(format_err!("verification failed - job aborted")),
};

View File

@ -347,6 +347,9 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<(), Error> {
let mut finish_list: Vec<TaskListInfo> = read_task_file_from_path(PROXMOX_BACKUP_INDEX_TASK_FN)?;
let had_index_file = !finish_list.is_empty();
// We use filter_map because one negative case wants to *move* the data into `finish_list`,
// clippy doesn't quite catch this!
#[allow(clippy::unnecessary_filter_map)]
let mut active_list: Vec<TaskListInfo> = read_task_file_from_path(PROXMOX_BACKUP_ACTIVE_TASK_FN)?
.into_iter()
.filter_map(|info| {
@ -601,7 +604,7 @@ impl WorkerTask {
path.push(upid.to_string());
let logger_options = FileLogOptions {
-to_stdout: to_stdout,
to_stdout,
exclusive: true,
prefix_time: true,
read: true,

57
src/tape/changer/email.rs Normal file
View File

@ -0,0 +1,57 @@
use anyhow::Error;
use proxmox::tools::email::sendmail;
use super::MediaChange;
/// Send email to a person to request a manual media change
pub struct ChangeMediaEmail {
drive: String,
to: String,
}
impl ChangeMediaEmail {
pub fn new(drive: &str, to: &str) -> Self {
Self {
drive: String::from(drive),
to: String::from(to),
}
}
}
impl MediaChange for ChangeMediaEmail {
fn load_media(&mut self, changer_id: &str) -> Result<(), Error> {
let subject = format!("Load Media '{}' request for drive '{}'", changer_id, self.drive);
let mut text = String::new();
text.push_str("Please insert the requested media into the backup drive.\n\n");
text.push_str(&format!("Drive: {}\n", self.drive));
text.push_str(&format!("Media: {}\n", changer_id));
sendmail(
&[&self.to],
&subject,
Some(&text),
None,
None,
None,
)?;
Ok(())
}
fn unload_media(&mut self) -> Result<(), Error> {
/* ignore ? */
Ok(())
}
fn list_media_changer_ids(&self) -> Result<Vec<String>, Error> {
Ok(Vec::new())
}
}
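Usage is straightforward; a sketch (drive name and address are illustrative, and the module path assumes the re-exports from changer/mod.rs below):

use anyhow::Error;
use proxmox_backup::tape::changer::{ChangeMediaEmail, MediaChange};

fn request_media_change() -> Result<(), Error> {
    let mut changer = ChangeMediaEmail::new("drive0", "tape-operator@example.com");
    changer.load_media("TAPE01L5")?; // sends the "Load Media ... request" mail
    Ok(())
}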

View File

@ -0,0 +1,145 @@
use anyhow::{bail, Error};
use crate::{
tape::changer::{
MediaChange,
MtxStatus,
ElementStatus,
mtx_status,
mtx_load,
mtx_unload,
},
api2::types::{
ScsiTapeChanger,
LinuxTapeDrive,
},
};
fn unload_to_free_slot(drive_name: &str, path: &str, status: &MtxStatus, drivenum: u64) -> Result<(), Error> {
if drivenum >= status.drives.len() as u64 {
bail!("unload drive '{}' got unexpected drive number '{}' - changer only has '{}' drives",
drive_name, drivenum, status.drives.len());
}
let drive_status = &status.drives[drivenum as usize];
if let Some(slot) = drive_status.loaded_slot {
mtx_unload(path, slot, drivenum)
} else {
let mut free_slot = None;
for i in 0..status.slots.len() {
if let ElementStatus::Empty = status.slots[i] {
free_slot = Some((i+1) as u64);
break;
}
}
if let Some(slot) = free_slot {
mtx_unload(path, slot, drivenum)
} else {
bail!("drive '{}' unload failure - no free slot", drive_name);
}
}
}
impl MediaChange for LinuxTapeDrive {
fn load_media(&mut self, changer_id: &str) -> Result<(), Error> {
if changer_id.starts_with("CLN") {
bail!("unable to load media '{}' (seems top be a a cleaning units)", changer_id);
}
let (config, _digest) = crate::config::drive::config()?;
let changer: ScsiTapeChanger = match self.changer {
Some(ref changer) => config.lookup("changer", changer)?,
None => bail!("drive '{}' has no associated changer", self.name),
};
let status = mtx_status(&changer.path)?;
let drivenum = self.changer_drive_id.unwrap_or(0);
// already loaded?
for (i, drive_status) in status.drives.iter().enumerate() {
if let ElementStatus::VolumeTag(ref tag) = drive_status.status {
if *tag == changer_id {
if i as u64 != drivenum {
bail!("unable to load media '{}' - media in wrong drive ({} != {})",
changer_id, i, drivenum);
}
return Ok(())
}
}
if i as u64 == drivenum {
match drive_status.status {
ElementStatus::Empty => { /* OK */ },
_ => unload_to_free_slot(&self.name, &changer.path, &status, drivenum as u64)?,
}
}
}
let mut slot = None;
for (i, element_status) in status.slots.iter().enumerate() {
if let ElementStatus::VolumeTag(tag) = element_status {
if *tag == changer_id {
slot = Some(i+1);
break;
}
}
}
let slot = match slot {
None => bail!("unable to find media '{}' (offline?)", changer_id),
Some(slot) => slot,
};
mtx_load(&changer.path, slot as u64, drivenum as u64)
}
fn unload_media(&mut self) -> Result<(), Error> {
let (config, _digest) = crate::config::drive::config()?;
let changer: ScsiTapeChanger = match self.changer {
Some(ref changer) => config.lookup("changer", changer)?,
None => return Ok(()),
};
let drivenum = self.changer_drive_id.unwrap_or(0);
let status = mtx_status(&changer.path)?;
unload_to_free_slot(&self.name, &changer.path, &status, drivenum)
}
fn eject_on_unload(&self) -> bool {
true
}
fn list_media_changer_ids(&self) -> Result<Vec<String>, Error> {
let (config, _digest) = crate::config::drive::config()?;
let changer: ScsiTapeChanger = match self.changer {
Some(ref changer) => config.lookup("changer", changer)?,
None => return Ok(Vec::new()),
};
let status = mtx_status(&changer.path)?;
let mut list = Vec::new();
for drive_status in status.drives.iter() {
if let ElementStatus::VolumeTag(ref tag) = drive_status.status {
list.push(tag.clone());
}
}
for element_status in status.slots.iter() {
if let ElementStatus::VolumeTag(ref tag) = element_status {
list.push(tag.clone());
}
}
Ok(list)
}
}

35
src/tape/changer/mod.rs Normal file
View File

@ -0,0 +1,35 @@
mod email;
pub use email::*;
mod parse_mtx_status;
pub use parse_mtx_status::*;
mod mtx_wrapper;
pub use mtx_wrapper::*;
mod linux_tape;
pub use linux_tape::*;
use anyhow::Error;
/// Interface to media change devices
pub trait MediaChange {
/// Load media into drive
///
/// This unloads first if the drive is already loaded with another media.
fn load_media(&mut self, changer_id: &str) -> Result<(), Error>;
/// Unload media from drive
///
/// This is a nop on drives without autoloader.
fn unload_media(&mut self) -> Result<(), Error>;
/// Returns true if unload_media automatically ejects drive media
fn eject_on_unload(&self) -> bool {
false
}
/// List media changer IDs (barcodes)
fn list_media_changer_ids(&self) -> Result<Vec<String>, Error>;
}
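A minimal sketch of an implementor, e.g. for a drive where an operator swaps tapes by hand (the type is hypothetical; only the trait above is given):

use anyhow::{bail, Error};

struct ManualChanger;

impl MediaChange for ManualChanger {
    fn load_media(&mut self, changer_id: &str) -> Result<(), Error> {
        bail!("media '{}' must be inserted manually", changer_id);
    }

    fn unload_media(&mut self) -> Result<(), Error> {
        Ok(()) // nop, as documented for drives without an autoloader
    }

    fn list_media_changer_ids(&self) -> Result<Vec<String>, Error> {
        Ok(Vec::new()) // no barcode inventory available
    }

    // eject_on_unload() keeps the default of false
}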

View File

@ -0,0 +1,99 @@
use std::collections::HashSet;
use anyhow::Error;
use proxmox::tools::Uuid;
use crate::{
tools::run_command,
tape::{
Inventory,
changer::{
MtxStatus,
ElementStatus,
parse_mtx_status,
},
},
};
/// Run 'mtx status' and return parsed result.
pub fn mtx_status(path: &str) -> Result<MtxStatus, Error> {
let mut command = std::process::Command::new("mtx");
command.args(&["-f", path, "status"]);
let output = run_command(command, None)?;
let status = parse_mtx_status(&output)?;
Ok(status)
}
/// Run 'mtx load'
pub fn mtx_load(
path: &str,
slot: u64,
drivenum: u64,
) -> Result<(), Error> {
let mut command = std::process::Command::new("mtx");
command.args(&["-f", path, "load", &slot.to_string(), &drivenum.to_string()]);
run_command(command, None)?;
Ok(())
}
/// Run 'mtx unload'
pub fn mtx_unload(
path: &str,
slot: u64,
drivenum: u64,
) -> Result<(), Error> {
let mut command = std::process::Command::new("mtx");
command.args(&["-f", path, "unload", &slot.to_string(), &drivenum.to_string()]);
run_command(command, None)?;
Ok(())
}
/// Run 'mtx transfer'
pub fn mtx_transfer(
path: &str,
from_slot: u64,
to_slot: u64,
) -> Result<(), Error> {
let mut command = std::process::Command::new("mtx");
command.args(&["-f", path, "transfer", &from_slot.to_string(), &to_slot.to_string()]);
run_command(command, None)?;
Ok(())
}
/// Extract the list of online media from MtxStatus
///
/// Returns a HashSet containing all found media Uuid
pub fn mtx_status_to_online_set(status: &MtxStatus, inventory: &Inventory) -> HashSet<Uuid> {
let mut online_set = HashSet::new();
for drive_status in status.drives.iter() {
if let ElementStatus::VolumeTag(ref changer_id) = drive_status.status {
if let Some(media_id) = inventory.find_media_by_changer_id(changer_id) {
online_set.insert(media_id.label.uuid.clone());
}
}
}
for slot_status in status.slots.iter() {
if let ElementStatus::VolumeTag(ref changer_id) = slot_status {
if let Some(media_id) = inventory.find_media_by_changer_id(changer_id) {
online_set.insert(media_id.label.uuid.clone());
}
}
}
online_set
}
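Combined, these wrappers give the online-media view for one changer. A sketch using the functions above (all names are already in scope in this file):

fn online_media(changer_path: &str, inventory: &Inventory) -> Result<HashSet<Uuid>, Error> {
    let status = mtx_status(changer_path)?; // shells out to `mtx -f <path> status`
    Ok(mtx_status_to_online_set(&status, inventory))
}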

View File

@ -0,0 +1,161 @@
use anyhow::Error;
use nom::{
bytes::complete::{take_while, tag},
};
use crate::tools::nom::{
parse_complete, multispace0, multispace1, parse_u64,
parse_failure, parse_error, IResult,
};
pub enum ElementStatus {
Empty,
Full,
VolumeTag(String),
}
pub struct DriveStatus {
pub loaded_slot: Option<u64>,
pub status: ElementStatus,
}
pub struct MtxStatus {
pub drives: Vec<DriveStatus>,
pub slots: Vec<ElementStatus>,
}
// Recognizes one line
fn next_line(i: &str) -> IResult<&str, &str> {
let (i, line) = take_while(|c| (c != '\n'))(i)?;
if i.is_empty() {
Ok((i, line))
} else {
Ok((&i[1..], line))
}
}
fn parse_storage_changer(i: &str) -> IResult<&str, ()> {
let (i, _) = multispace0(i)?;
let (i, _) = tag("Storage Changer")(i)?;
let (i, _) = next_line(i)?; // skip
Ok((i, ()))
}
fn parse_drive_status(i: &str) -> IResult<&str, DriveStatus> {
let mut loaded_slot = None;
if i.starts_with("Empty") {
return Ok((&i[5..], DriveStatus { loaded_slot, status: ElementStatus::Empty }));
}
let (mut i, _) = tag("Full (")(i)?;
if i.starts_with("Storage Element ") {
let n = &i[16..];
let (n, id) = parse_u64(n)?;
loaded_slot = Some(id);
let (n, _) = tag(" Loaded")(n)?;
i = n;
} else {
let (n, _) = take_while(|c| !(c == ')' || c == '\n'))(i)?; // skip to ')'
i = n;
}
let (i, _) = tag(")")(i)?;
if i.starts_with(":VolumeTag = ") {
let i = &i[13..];
let (i, tag) = take_while(|c| !(c == ' ' || c == ':' || c == '\n'))(i)?;
let (i, _) = take_while(|c| c != '\n')(i)?; // skip to eol
return Ok((i, DriveStatus { loaded_slot, status: ElementStatus::VolumeTag(tag.to_string()) }));
}
let (i, _) = take_while(|c| c != '\n')(i)?; // skip
Ok((i, DriveStatus { loaded_slot, status: ElementStatus::Full }))
}
fn parse_slot_status(i: &str) -> IResult<&str, ElementStatus> {
if i.starts_with("Empty") {
return Ok((&i[5..], ElementStatus::Empty));
}
if i.starts_with("Full ") {
let mut n = &i[5..];
if n.starts_with(":VolumeTag=") {
n = &n[11..];
let (n, tag) = take_while(|c| !(c == ' ' || c == ':' || c == '\n'))(n)?;
let (n, _) = take_while(|c| c != '\n')(n)?; // skip to eol
return Ok((n, ElementStatus::VolumeTag(tag.to_string())));
}
let (n, _) = take_while(|c| c != '\n')(n)?; // skip
return Ok((n, ElementStatus::Full));
}
Err(parse_error(i, "unexptected element status"))
}
fn parse_data_transfer_element(i: &str) -> IResult<&str, (u64, DriveStatus)> {
let (i, _) = tag("Data Transfer Element")(i)?;
let (i, _) = multispace1(i)?;
let (i, id) = parse_u64(i)?;
let (i, _) = nom::character::complete::char(':')(i)?;
let (i, element_status) = parse_drive_status(i)?;
let (i, _) = nom::character::complete::newline(i)?;
Ok((i, (id, element_status)))
}
fn parse_storage_element(i: &str) -> IResult<&str, (u64, ElementStatus)> {
let (i, _) = multispace1(i)?;
let (i, _) = tag("Storage Element")(i)?;
let (i, _) = multispace1(i)?;
let (i, id) = parse_u64(i)?;
let (i, _) = nom::character::complete::char(':')(i)?;
let (i, element_status) = parse_slot_status(i)?;
let (i, _) = nom::character::complete::newline(i)?;
Ok((i, (id, element_status)))
}
fn parse_status(i: &str) -> IResult<&str, MtxStatus> {
let (mut i, _) = parse_storage_changer(i)?;
let mut drives = Vec::new();
while let Ok((n, (id, drive_status))) = parse_data_transfer_element(i) {
if id != drives.len() as u64 {
return Err(parse_failure(i, "unexpected drive number"));
}
i = n;
drives.push(drive_status);
}
let mut slots = Vec::new();
while let Ok((n, (id, element_status))) = parse_storage_element(i) {
if id != (slots.len() as u64 + 1) {
return Err(parse_failure(i, "unexpected slot number"));
}
i = n;
slots.push(element_status);
}
let status = MtxStatus { drives, slots };
Ok((i, status))
}
/// Parses the output from 'mtx status'
pub fn parse_mtx_status(i: &str) -> Result<MtxStatus, Error> {
let status = parse_complete("mtx status", i, parse_status)?;
Ok(status)
}
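To see the grammar in action, a small sketch test with hand-written input mimicking typical mtx output (not part of the original commit):

#[test]
fn parse_example_status() -> Result<(), Error> {
    let output = concat!(
        "  Storage Changer /dev/sg3:1 Drives, 2 Slots ( 0 Import/Export )\n",
        "Data Transfer Element 0:Empty\n",
        "      Storage Element 1:Full :VolumeTag=TAPE01L5\n",
        "      Storage Element 2:Empty\n",
    );
    let status = parse_mtx_status(output)?;
    assert_eq!(status.drives.len(), 1);
    assert!(matches!(status.slots[0], ElementStatus::VolumeTag(_)));
    assert!(matches!(status.slots[1], ElementStatus::Empty));
    Ok(())
}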

View File

@ -0,0 +1,232 @@
use std::path::{Path, PathBuf};
use std::collections::HashMap;
use anyhow::{bail, Error};
use crate::{
api2::types::{
DeviceKind,
TapeDeviceInfo,
},
tools::fs::scan_subdir,
};
/// List linux tape changer devices
pub fn linux_tape_changer_list() -> Vec<TapeDeviceInfo> {
lazy_static::lazy_static!{
static ref SCSI_GENERIC_NAME_REGEX: regex::Regex =
regex::Regex::new(r"^sg\d+$").unwrap();
}
let mut list = Vec::new();
let dir_iter = match scan_subdir(
libc::AT_FDCWD,
"/sys/class/scsi_generic",
&SCSI_GENERIC_NAME_REGEX)
{
Err(_) => return list,
Ok(iter) => iter,
};
for item in dir_iter {
let item = match item {
Err(_) => continue,
Ok(item) => item,
};
let name = item.file_name().to_str().unwrap().to_string();
let mut sys_path = PathBuf::from("/sys/class/scsi_generic");
sys_path.push(&name);
let device = match udev::Device::from_syspath(&sys_path) {
Err(_) => continue,
Ok(device) => device,
};
let devnum = match device.devnum() {
None => continue,
Some(devnum) => devnum,
};
let parent = match device.parent() {
None => continue,
Some(parent) => parent,
};
match parent.attribute_value("type") {
Some(type_osstr) => {
if type_osstr != "8" {
continue;
}
}
_ => { continue; }
}
// let mut test_path = sys_path.clone();
// test_path.push("device/scsi_changer");
// if !test_path.exists() { continue; }
let _dev_path = match device.devnode().map(Path::to_owned) {
None => continue,
Some(dev_path) => dev_path,
};
let serial = match device.property_value("ID_SCSI_SERIAL")
.map(std::ffi::OsString::from)
.and_then(|s| if let Ok(s) = s.into_string() { Some(s) } else { None })
{
None => continue,
Some(serial) => serial,
};
let vendor = device.property_value("ID_VENDOR")
.map(std::ffi::OsString::from)
.and_then(|s| if let Ok(s) = s.into_string() { Some(s) } else { None })
.unwrap_or(String::from("unknown"));
let model = device.property_value("ID_MODEL")
.map(std::ffi::OsString::from)
.and_then(|s| if let Ok(s) = s.into_string() { Some(s) } else { None })
.unwrap_or(String::from("unknown"));
let dev_path = format!("/dev/tape/by-id/scsi-{}", serial);
if PathBuf::from(&dev_path).exists() {
list.push(TapeDeviceInfo {
kind: DeviceKind::Changer,
path: dev_path,
serial,
vendor,
model,
major: unsafe { libc::major(devnum) },
minor: unsafe { libc::minor(devnum) },
});
}
}
list
}
/// List linux tape devices (non-rewinding)
pub fn linux_tape_device_list() -> Vec<TapeDeviceInfo> {
lazy_static::lazy_static!{
static ref NST_TAPE_NAME_REGEX: regex::Regex =
regex::Regex::new(r"^nst\d+$").unwrap();
}
let mut list = Vec::new();
let dir_iter = match scan_subdir(
libc::AT_FDCWD,
"/sys/class/scsi_tape",
&NST_TAPE_NAME_REGEX)
{
Err(_) => return list,
Ok(iter) => iter,
};
for item in dir_iter {
let item = match item {
Err(_) => continue,
Ok(item) => item,
};
let name = item.file_name().to_str().unwrap().to_string();
let mut sys_path = PathBuf::from("/sys/class/scsi_tape");
sys_path.push(&name);
let device = match udev::Device::from_syspath(&sys_path) {
Err(_) => continue,
Ok(device) => device,
};
let devnum = match device.devnum() {
None => continue,
Some(devnum) => devnum,
};
let _dev_path = match device.devnode().map(Path::to_owned) {
None => continue,
Some(dev_path) => dev_path,
};
let serial = match device.property_value("ID_SCSI_SERIAL")
.map(std::ffi::OsString::from)
.and_then(|s| if let Ok(s) = s.into_string() { Some(s) } else { None })
{
None => continue,
Some(serial) => serial,
};
let vendor = device.property_value("ID_VENDOR")
.map(std::ffi::OsString::from)
.and_then(|s| if let Ok(s) = s.into_string() { Some(s) } else { None })
.unwrap_or(String::from("unknown"));
let model = device.property_value("ID_MODEL")
.map(std::ffi::OsString::from)
.and_then(|s| if let Ok(s) = s.into_string() { Some(s) } else { None })
.unwrap_or(String::from("unknown"));
let dev_path = format!("/dev/tape/by-id/scsi-{}-nst", serial);
if PathBuf::from(&dev_path).exists() {
list.push(TapeDeviceInfo {
kind: DeviceKind::Tape,
path: dev_path,
serial,
vendor,
model,
major: unsafe { libc::major(devnum) },
minor: unsafe { libc::minor(devnum) },
});
}
}
list
}
/// Look up a tape device in the list by its device path (matched via major/minor numbers)
pub fn lookup_drive<'a>(
drives: &'a[TapeDeviceInfo],
path: &str,
) -> Option<&'a TapeDeviceInfo> {
if let Ok(stat) = nix::sys::stat::stat(path) {
let major = unsafe { libc::major(stat.st_rdev) };
let minor = unsafe { libc::minor(stat.st_rdev) };
drives.iter().find(|d| d.major == major && d.minor == minor)
} else {
None
}
}
/// Make sure path is a linux tape device
pub fn check_drive_path(
drives: &[TapeDeviceInfo],
path: &str,
) -> Result<(), Error> {
if lookup_drive(drives, path).is_none() {
bail!("path '{}' is not a linux (non-rewinding) tape device", path);
}
Ok(())
}
// shell completion helper
/// List changer device paths
pub fn complete_changer_path(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
linux_tape_changer_list().iter().map(|v| v.path.clone()).collect()
}
/// List tape device paths
pub fn complete_drive_path(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
linux_tape_device_list().iter().map(|v| v.path.clone()).collect()
}
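Putting the helpers together, a sketch that lists detected drives and validates a configured path:

fn print_and_check(path: &str) -> Result<(), Error> {
    for drive in linux_tape_device_list() {
        println!("{} ({} {}, serial {})", drive.path, drive.vendor, drive.model, drive.serial);
    }
    // bails unless `path` resolves to a listed non-rewinding tape device
    check_drive_path(&linux_tape_device_list(), path)
}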

View File

@ -0,0 +1,153 @@
//! Linux Magnetic Tape Driver ioctl definitions
//!
//! from: /usr/include/x86_64-linux-gnu/sys/mtio.h
//!
//! also see: man 4 st
#[repr(C)]
pub struct mtop {
pub mt_op: MTCmd, /* Operations defined below. */
pub mt_count: libc::c_int, /* How many of them. */
}
#[repr(i16)]
#[allow(dead_code)] // do not warn about unused command
pub enum MTCmd {
MTRESET = 0, /* +reset drive in case of problems */
MTFSF = 1, /* forward space over FileMark,
* position at first record of next file
*/
MTBSF = 2, /* backward space FileMark (position before FM) */
MTFSR = 3, /* forward space record */
MTBSR = 4, /* backward space record */
MTWEOF = 5, /* write an end-of-file record (mark) */
MTREW = 6, /* rewind */
MTOFFL = 7, /* rewind and put the drive offline (eject?) */
MTNOP = 8, /* no op, set status only (read with MTIOCGET) */
MTRETEN = 9, /* retension tape */
MTBSFM = 10, /* +backward space FileMark, position at FM */
MTFSFM = 11, /* +forward space FileMark, position at FM */
MTEOM = 12, /* goto end of recorded media (for appending files).
* MTEOM positions after the last FM, ready for
* appending another file.
*/
MTERASE = 13, /* erase tape -- be careful! */
MTRAS1 = 14, /* run self test 1 (nondestructive) */
MTRAS2 = 15, /* run self test 2 (destructive) */
MTRAS3 = 16, /* reserved for self test 3 */
MTSETBLK = 20, /* set block length (SCSI) */
MTSETDENSITY = 21, /* set tape density (SCSI) */
MTSEEK = 22, /* seek to block (Tandberg, etc.) */
MTTELL = 23, /* tell block (Tandberg, etc.) */
MTSETDRVBUFFER = 24,/* set the drive buffering according to SCSI-2 */
/* ordinary buffered operation with code 1 */
MTFSS = 25, /* space forward over setmarks */
MTBSS = 26, /* space backward over setmarks */
MTWSM = 27, /* write setmarks */
MTLOCK = 28, /* lock the drive door */
MTUNLOCK = 29, /* unlock the drive door */
MTLOAD = 30, /* execute the SCSI load command */
MTUNLOAD = 31, /* execute the SCSI unload command */
MTCOMPRESSION = 32, /* control compression with SCSI mode page 15 */
MTSETPART = 33, /* Change the active tape partition */
MTMKPART = 34, /* Format the tape with one or two partitions */
MTWEOFI = 35, /* write an end-of-file record (mark) in immediate mode */
}
//#define MTIOCTOP _IOW('m', 1, struct mtop) /* Do a mag tape op. */
nix::ioctl_write_ptr!(mtioctop, b'm', 1, mtop);
// from: /usr/include/x86_64-linux-gnu/sys/mtio.h
#[derive(Default, Debug)]
#[repr(C)]
pub struct mtget {
pub mt_type: libc::c_long, /* Type of magtape device. */
pub mt_resid: libc::c_long, /* Residual count: (not sure)
number of bytes ignored, or
number of files not skipped, or
number of records not skipped. */
/* The following registers are device dependent. */
pub mt_dsreg: libc::c_long, /* Status register. */
pub mt_gstat: libc::c_long, /* Generic (device independent) status. */
pub mt_erreg: libc::c_long, /* Error register. */
/* The next two fields are not always used. */
pub mt_fileno: i32 , /* Number of current file on tape. */
pub mt_blkno: i32, /* Current block number. */
}
//#define MTIOCGET _IOR('m', 2, struct mtget) /* Get tape status. */
nix::ioctl_read!(mtiocget, b'm', 2, mtget);
#[repr(C)]
#[allow(dead_code)]
pub struct mtpos {
pub mt_blkno: libc::c_long, /* current block number */
}
//#define MTIOCPOS _IOR('m', 3, struct mtpos) /* Get tape position.*/
nix::ioctl_read!(mtiocpos, b'm', 3, mtpos);
pub const MT_ST_BLKSIZE_MASK: libc::c_long = 0x0ffffff;
pub const MT_ST_BLKSIZE_SHIFT: usize = 0;
pub const MT_ST_DENSITY_MASK: libc::c_long = 0xff000000;
pub const MT_ST_DENSITY_SHIFT: usize = 24;
pub const MT_TYPE_ISSCSI1: libc::c_long = 0x71; /* Generic ANSI SCSI-1 tape unit. */
pub const MT_TYPE_ISSCSI2: libc::c_long = 0x72; /* Generic ANSI SCSI-2 tape unit. */
// Generic Mag Tape (device independent) status macros for examining mt_gstat -- HP-UX compatible
// from: /usr/include/x86_64-linux-gnu/sys/mtio.h
bitflags::bitflags!{
pub struct GMTStatusFlags: libc::c_long {
const EOF = 0x80000000;
const BOT = 0x40000000;
const EOT = 0x20000000;
const SM = 0x10000000; /* DDS setmark */
const EOD = 0x08000000; /* DDS EOD */
const WR_PROT = 0x04000000;
const ONLINE = 0x01000000;
const D_6250 = 0x00800000;
const D_1600 = 0x00400000;
const D_800 = 0x00200000;
const DRIVE_OPEN = 0x00040000; /* Door open (no tape). */
const IM_REP_EN = 0x00010000; /* Immediate report mode.*/
const END_OF_STREAM = 0b00000001;
}
}
#[repr(i32)]
#[allow(non_camel_case_types, dead_code)]
pub enum SetDrvBufferCmd {
MT_ST_BOOLEANS = 0x10000000,
MT_ST_SETBOOLEANS = 0x30000000,
MT_ST_CLEARBOOLEANS = 0x40000000,
MT_ST_WRITE_THRESHOLD = 0x20000000,
MT_ST_DEF_BLKSIZE = 0x50000000,
MT_ST_DEF_OPTIONS = 0x60000000,
MT_ST_SET_TIMEOUT = 0x70000000,
MT_ST_SET_LONG_TIMEOUT = 0x70100000,
MT_ST_SET_CLN = 0x80000000u32 as i32,
}
bitflags::bitflags!{
pub struct SetDrvBufferOptions: i32 {
const MT_ST_BUFFER_WRITES = 0x1;
const MT_ST_ASYNC_WRITES = 0x2;
const MT_ST_READ_AHEAD = 0x4;
const MT_ST_DEBUGGING = 0x8;
const MT_ST_TWO_FM = 0x10;
const MT_ST_FAST_MTEOM = 0x20;
const MT_ST_AUTO_LOCK = 0x40;
const MT_ST_DEF_WRITES = 0x80;
const MT_ST_CAN_BSR = 0x100;
const MT_ST_NO_BLKLIMS = 0x200;
const MT_ST_CAN_PARTITIONS = 0x400;
const MT_ST_SCSI2LOGICAL = 0x800;
const MT_ST_SYSV = 0x1000;
const MT_ST_NOWAIT = 0x2000;
const MT_ST_SILI = 0x4000;
}
}
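Decoding a raw mt_gstat word with these flags looks like this (a sketch; the constant is an illustrative ONLINE|BOT combination):

fn gstat_sketch() {
    let gmt = GMTStatusFlags::from_bits_truncate(0x4100_0000);
    assert!(gmt.contains(GMTStatusFlags::ONLINE)); // tape loaded
    assert!(gmt.contains(GMTStatusFlags::BOT));    // positioned at beginning-of-tape
    assert!(!gmt.contains(GMTStatusFlags::WR_PROT));
}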

View File

@ -0,0 +1,446 @@
use std::fs::{OpenOptions, File};
use std::os::unix::fs::OpenOptionsExt;
use std::os::unix::io::AsRawFd;
use std::convert::TryFrom;
use anyhow::{bail, format_err, Error};
use nix::fcntl::{fcntl, FcntlArg, OFlag};
use proxmox::sys::error::SysResult;
use proxmox::tools::Uuid;
use crate::{
tape::{
TapeRead,
TapeWrite,
drive::{
LinuxTapeDrive,
TapeDriver,
linux_mtio::*,
},
file_formats::{
PROXMOX_TAPE_BLOCK_SIZE,
MediaSetLabel,
MediaContentHeader,
PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0,
},
helpers::{
BlockedReader,
BlockedWriter,
},
}
};
#[derive(Debug)]
pub enum TapeDensity {
None, // no tape loaded
LTO2,
LTO3,
LTO4,
LTO5,
LTO6,
LTO7,
LTO7M8,
LTO8,
}
impl TryFrom<u8> for TapeDensity {
type Error = Error;
fn try_from(value: u8) -> Result<Self, Self::Error> {
let density = match value {
0x00 => TapeDensity::None,
0x42 => TapeDensity::LTO2,
0x44 => TapeDensity::LTO3,
0x46 => TapeDensity::LTO4,
0x58 => TapeDensity::LTO5,
0x5a => TapeDensity::LTO6,
0x5c => TapeDensity::LTO7,
0x5d => TapeDensity::LTO7M8,
0x5e => TapeDensity::LTO8,
_ => bail!("unknown tape density code 0x{:02x}", value),
};
Ok(density)
}
}
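// Sketch (not part of the original file): decoding the density byte
// reported via MTIOCGET; 0x58 is the LTO-5 code from the table above.
#[test]
fn decode_density_code() {
    assert!(matches!(TapeDensity::try_from(0x58).unwrap(), TapeDensity::LTO5));
    assert!(TapeDensity::try_from(0xff).is_err()); // unknown density code
}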
#[derive(Debug)]
pub struct DriveStatus {
pub blocksize: u32,
pub density: TapeDensity,
pub status: GMTStatusFlags,
pub file_number: i32,
pub block_number: i32,
}
impl DriveStatus {
pub fn tape_is_ready(&self) -> bool {
self.status.contains(GMTStatusFlags::ONLINE) &&
!self.status.contains(GMTStatusFlags::DRIVE_OPEN)
}
}
impl LinuxTapeDrive {
/// This needs to lock the drive
pub fn open(&self) -> Result<LinuxTapeHandle, Error> {
let file = OpenOptions::new()
.read(true)
.write(true)
.custom_flags(libc::O_NONBLOCK)
.open(&self.path)?;
// clear O_NONBLOCK from now on.
let flags = fcntl(file.as_raw_fd(), FcntlArg::F_GETFL)
.into_io_result()?;
let mut flags = OFlag::from_bits_truncate(flags);
flags.remove(OFlag::O_NONBLOCK);
fcntl(file.as_raw_fd(), FcntlArg::F_SETFL(flags))
.into_io_result()?;
if !tape_is_linux_tape_device(&file) {
bail!("file {:?} is not a linux tape device", self.path);
}
let handle = LinuxTapeHandle { drive_name: self.name.clone(), file };
let drive_status = handle.get_drive_status()?;
println!("drive status: {:?}", drive_status);
if !drive_status.tape_is_ready() {
bail!("tape not ready (no tape loaded)");
}
if drive_status.blocksize == 0 {
eprintln!("device is variable block size");
} else {
if drive_status.blocksize != PROXMOX_TAPE_BLOCK_SIZE as u32 {
eprintln!("device is in fixed block size mode with wrong size ({} bytes)", drive_status.blocksize);
eprintln!("trying to set variable block size mode...");
if handle.set_block_size(0).is_err() {
bail!("set variable block size mode failed - device uses wrong blocksize.");
}
} else {
eprintln!("device is in fixed block size mode ({} bytes)", drive_status.blocksize);
}
}
// Only root can set driver options, so we cannot
// handle.set_default_options()?;
Ok(handle)
}
}
pub struct LinuxTapeHandle {
drive_name: String,
file: File,
//_lock: File,
}
impl LinuxTapeHandle {
/// Return the drive name (useful for log and debug)
pub fn drive_name(&self) -> &str {
&self.drive_name
}
/// Set all options we need/want
pub fn set_default_options(&self) -> Result<(), Error> {
let mut opts = SetDrvBufferOptions::empty();
// fixme: man st(4) claims we need to clear this for reliable multivolume
opts.set(SetDrvBufferOptions::MT_ST_BUFFER_WRITES, true);
// fixme: man st(4) claims we need to clear this for reliable multivolume
opts.set(SetDrvBufferOptions::MT_ST_ASYNC_WRITES, true);
opts.set(SetDrvBufferOptions::MT_ST_READ_AHEAD, true);
self.set_drive_buffer_options(opts)
}
/// call MTSETDRVBUFFER to set boolean options
///
/// Note: this uses MT_ST_BOOLEANS, so missing options are cleared!
pub fn set_drive_buffer_options(&self, opts: SetDrvBufferOptions) -> Result<(), Error> {
let cmd = mtop {
mt_op: MTCmd::MTSETDRVBUFFER,
mt_count: (SetDrvBufferCmd::MT_ST_BOOLEANS as i32) | opts.bits(),
};
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| format_err!("MTSETDRVBUFFER options failed - {}", err))?;
Ok(())
}
/// This flushes the driver's buffer as a side effect. Should be
/// used before reading status with MTIOCGET.
fn mtnop(&self) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTNOP, mt_count: 1, };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| format_err!("MTNOP failed - {}", err))?;
Ok(())
}
/// Set tape compression feature
pub fn set_compression(&self, on: bool) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTCOMPRESSION, mt_count: if on { 1 } else { 0 } };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| format_err!("set compression to {} failed - {}", on, err))?;
Ok(())
}
/// Write a single EOF mark
pub fn write_eof_mark(&self) -> Result<(), Error> {
tape_write_eof_mark(&self.file)?;
Ok(())
}
/// Set the drive's block length to the value specified.
///
/// A block length of zero sets the drive to variable block
/// size mode.
pub fn set_block_size(&self, block_length: usize) -> Result<(), Error> {
if block_length > 256*1024*1024 {
bail!("block_length too large (> maximum Linux SCSI block length)");
}
let cmd = mtop { mt_op: MTCmd::MTSETBLK, mt_count: block_length as i32 };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| format_err!("MTSETBLK failed - {}", err))?;
Ok(())
}
/// Get Tape configuration with MTIOCGET ioctl
pub fn get_drive_status(&self) -> Result<DriveStatus, Error> {
self.mtnop()?;
let mut status = mtget::default();
if let Err(err) = unsafe { mtiocget(self.file.as_raw_fd(), &mut status) } {
bail!("MTIOCGET failed - {}", err);
}
println!("{:?}", status);
let gmt = GMTStatusFlags::from_bits_truncate(status.mt_gstat);
let blocksize = if status.mt_type == MT_TYPE_ISSCSI1 || status.mt_type == MT_TYPE_ISSCSI2 {
((status.mt_dsreg & MT_ST_BLKSIZE_MASK) >> MT_ST_BLKSIZE_SHIFT) as u32
} else {
bail!("got unsupported tape type {}", status.mt_type);
};
let density = ((status.mt_dsreg & MT_ST_DENSITY_MASK) >> MT_ST_DENSITY_SHIFT) as u8;
let density = TapeDensity::try_from(density)?;
Ok(DriveStatus {
blocksize,
density,
status: gmt,
file_number: status.mt_fileno,
block_number: status.mt_blkno,
})
}
}
impl TapeDriver for LinuxTapeHandle {
fn sync(&mut self) -> Result<(), Error> {
println!("SYNC/FLUSH TAPE");
// MTWEOF with count 0 => flush
let cmd = mtop { mt_op: MTCmd::MTWEOF, mt_count: 0 };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| proxmox::io_format_err!("MT sync failed - {}", err))?;
Ok(())
}
/// Go to the end of the recorded media (for appending files).
fn move_to_eom(&mut self) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTEOM, mt_count: 1, };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| format_err!("MTEOM failed - {}", err))?;
Ok(())
}
fn rewind(&mut self) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTREW, mt_count: 1, };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| format_err!("tape rewind failed - {}", err))?;
Ok(())
}
fn current_file_number(&mut self) -> Result<usize, Error> {
let mut status = mtget::default();
self.mtnop()?;
if let Err(err) = unsafe { mtiocget(self.file.as_raw_fd(), &mut status) } {
bail!("current_file_number MTIOCGET failed - {}", err);
}
if status.mt_fileno < 0 {
bail!("current_file_number failed (got {})", status.mt_fileno);
}
Ok(status.mt_fileno as usize)
}
fn erase_media(&mut self, fast: bool) -> Result<(), Error> {
self.rewind()?; // important - erase from BOT
let cmd = mtop { mt_op: MTCmd::MTERASE, mt_count: if fast { 0 } else { 1 } };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| format_err!("MTERASE failed - {}", err))?;
Ok(())
}
fn read_next_file<'a>(&'a mut self) -> Result<Option<Box<dyn TapeRead + 'a>>, std::io::Error> {
match BlockedReader::open(&mut self.file)? {
Some(reader) => Ok(Some(Box::new(reader))),
None => Ok(None),
}
}
fn write_file<'a>(&'a mut self) -> Result<Box<dyn TapeWrite + 'a>, std::io::Error> {
let handle = TapeWriterHandle {
writer: BlockedWriter::new(&mut self.file),
};
Ok(Box::new(handle))
}
fn write_media_set_label(&mut self, media_set_label: &MediaSetLabel) -> Result<Uuid, Error> {
let file_number = self.current_file_number()?;
if file_number != 1 {
bail!("write_media_set_label failed - got wrong file number ({} != 1)", file_number);
}
let mut handle = TapeWriterHandle {
writer: BlockedWriter::new(&mut self.file),
};
let raw = serde_json::to_string_pretty(&serde_json::to_value(media_set_label)?)?;
let header = MediaContentHeader::new(PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0, raw.len() as u32);
handle.write_header(&header, raw.as_bytes())?;
handle.finish(false)?;
self.sync()?; // sync data to tape
Ok(Uuid::from(header.uuid))
}
/// Rewind and put the drive off line (Eject media).
fn eject_media(&mut self) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTOFFL, mt_count: 1 };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| format_err!("MTOFFL failed - {}", err))?;
Ok(())
}
}
/// Write a single EOF mark without flushing buffers
fn tape_write_eof_mark(file: &File) -> Result<(), std::io::Error> {
println!("WRITE EOF MARK");
let cmd = mtop { mt_op: MTCmd::MTWEOFI, mt_count: 1 };
unsafe {
mtioctop(file.as_raw_fd(), &cmd)
}.map_err(|err| proxmox::io_format_err!("MTWEOFI failed - {}", err))?;
Ok(())
}
fn tape_is_linux_tape_device(file: &File) -> bool {
let devnum = match nix::sys::stat::fstat(file.as_raw_fd()) {
Ok(stat) => stat.st_rdev,
_ => return false,
};
let major = unsafe { libc::major(devnum) };
let minor = unsafe { libc::minor(devnum) };
if major != 9 { return false; } // The st driver uses major device number 9
if (minor & 128) == 0 {
eprintln!("Detected rewinding tape. Please use non-rewinding tape devices (/dev/nstX).");
return false;
}
true
}
/// Like BlockedWriter, but writes an EOF mark on finish
pub struct TapeWriterHandle<'a> {
writer: BlockedWriter<&'a mut File>,
}
impl TapeWrite for TapeWriterHandle<'_> {
fn write_all(&mut self, data: &[u8]) -> Result<bool, std::io::Error> {
self.writer.write_all(data)
}
fn bytes_written(&self) -> usize {
self.writer.bytes_written()
}
fn finish(&mut self, incomplete: bool) -> Result<bool, std::io::Error> {
println!("FINISH TAPE HANDLE");
let leof = self.writer.finish(incomplete)?;
tape_write_eof_mark(self.writer.writer_ref_mut())?;
Ok(leof)
}
fn logical_end_of_media(&self) -> bool {
self.writer.logical_end_of_media()
}
}
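For reference, get_drive_status() above unpacks mt_dsreg with the standard mask/shift constants from linux/mtio.h (block size in the low 24 bits, density code in the top 8 bits). A self-contained sketch with those values hard-coded, decoding a sample status register:

// Sketch: decoding blocksize and density from mtget.mt_dsreg,
// assuming the mask/shift values defined in linux/mtio.h.
const MT_ST_BLKSIZE_SHIFT: i64 = 0;
const MT_ST_BLKSIZE_MASK: i64 = 0x00ff_ffff;
const MT_ST_DENSITY_SHIFT: i64 = 24;
const MT_ST_DENSITY_MASK: i64 = 0xff00_0000;

fn main() {
    // Example dsreg: density code 0x5a (LTO6), block size 0 (variable).
    let mt_dsreg: i64 = 0x5a00_0000;
    let blocksize = ((mt_dsreg & MT_ST_BLKSIZE_MASK) >> MT_ST_BLKSIZE_SHIFT) as u32;
    let density = ((mt_dsreg & MT_ST_DENSITY_MASK) >> MT_ST_DENSITY_SHIFT) as u8;
    assert_eq!(blocksize, 0);  // 0 => variable block size mode
    assert_eq!(density, 0x5a); // maps to TapeDensity::LTO6 above
}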

src/tape/drive/mod.rs (new file, 299 lines)

@ -0,0 +1,299 @@
mod virtual_tape;
mod linux_mtio;
mod linux_tape;
mod linux_list_drives;
pub use linux_list_drives::*;
use anyhow::{bail, format_err, Error};
use ::serde::{Deserialize, Serialize};
use proxmox::tools::Uuid;
use proxmox::tools::io::ReadExt;
use proxmox::api::section_config::SectionConfigData;
use crate::{
api2::types::{
VirtualTapeDrive,
LinuxTapeDrive,
},
tape::{
TapeWrite,
TapeRead,
file_formats::{
PROXMOX_BACKUP_DRIVE_LABEL_MAGIC_1_0,
PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0,
DriveLabel,
MediaSetLabel,
MediaContentHeader,
},
changer::{
MediaChange,
ChangeMediaEmail,
},
},
};
#[derive(Serialize,Deserialize)]
pub struct MediaLabelInfo {
pub label: DriveLabel,
pub label_uuid: Uuid,
#[serde(skip_serializing_if="Option::is_none")]
pub media_set_label: Option<(MediaSetLabel, Uuid)>
}
/// Tape driver interface
pub trait TapeDriver {
/// Flush all data to the tape
fn sync(&mut self) -> Result<(), Error>;
/// Rewind the tape
fn rewind(&mut self) -> Result<(), Error>;
/// Move to end of recorded data
///
/// We assume this flushes the tape write buffer.
fn move_to_eom(&mut self) -> Result<(), Error>;
/// Current file number
fn current_file_number(&mut self) -> Result<usize, Error>;
/// Completely erase the media
fn erase_media(&mut self, fast: bool) -> Result<(), Error>;
/// Read/Open the next file
fn read_next_file<'a>(&'a mut self) -> Result<Option<Box<dyn TapeRead + 'a>>, std::io::Error>;
/// Write/Append a new file
fn write_file<'a>(&'a mut self) -> Result<Box<dyn TapeWrite + 'a>, std::io::Error>;
/// Write label to tape (erase tape content)
///
/// This returns the MediaContentHeader uuid (not the media uuid).
fn label_tape(&mut self, label: &DriveLabel) -> Result<Uuid, Error> {
self.rewind()?;
self.erase_media(true)?;
let raw = serde_json::to_string_pretty(&serde_json::to_value(&label)?)?;
let header = MediaContentHeader::new(PROXMOX_BACKUP_DRIVE_LABEL_MAGIC_1_0, raw.len() as u32);
let content_uuid = header.content_uuid();
{
let mut writer = self.write_file()?;
writer.write_header(&header, raw.as_bytes())?;
writer.finish(false)?;
}
self.sync()?; // sync data to tape
Ok(content_uuid)
}
/// Write the media set label to tape
///
/// This returns the MediaContentHeader uuid (not the media uuid).
fn write_media_set_label(&mut self, media_set_label: &MediaSetLabel) -> Result<Uuid, Error>;
/// Read the media label
///
/// This tries to read both media labels (label and media_set_label).
fn read_label(&mut self) -> Result<Option<MediaLabelInfo>, Error> {
self.rewind()?;
let (label, label_uuid) = {
let mut reader = match self.read_next_file()? {
None => return Ok(None), // tape is empty
Some(reader) => reader,
};
let header: MediaContentHeader = unsafe { reader.read_le_value()? };
header.check(PROXMOX_BACKUP_DRIVE_LABEL_MAGIC_1_0, 1, 64*1024)?;
let data = reader.read_exact_allocated(header.size as usize)?;
let label: DriveLabel = serde_json::from_slice(&data)
.map_err(|err| format_err!("unable to parse drive label - {}", err))?;
// make sure we read the EOF marker
if reader.skip_to_end()? != 0 {
bail!("got unexpected data after label");
}
(label, Uuid::from(header.uuid))
};
let mut info = MediaLabelInfo { label, label_uuid, media_set_label: None };
// try to read MediaSet label
let mut reader = match self.read_next_file()? {
None => return Ok(Some(info)),
Some(reader) => reader,
};
let header: MediaContentHeader = unsafe { reader.read_le_value()? };
header.check(PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0, 1, 64*1024)?;
let data = reader.read_exact_allocated(header.size as usize)?;
let media_set_label: MediaSetLabel = serde_json::from_slice(&data)
.map_err(|err| format_err!("unable to parse media set label - {}", err))?;
// make sure we read the EOF marker
if reader.skip_to_end()? != 0 {
bail!("got unexpected data after media set label");
}
info.media_set_label = Some((media_set_label, Uuid::from(header.uuid)));
Ok(Some(info))
}
/// Eject media
fn eject_media(&mut self) -> Result<(), Error>;
}
/// Get the media changer (name + MediaChange) associated with a tape drive.
///
/// If allow_email is set, returns a ChangeMediaEmail instance for
/// standalone tape drives (changer name set to "").
pub fn media_changer(
config: &SectionConfigData,
drive: &str,
allow_email: bool,
) -> Result<(Box<dyn MediaChange>, String), Error> {
match config.sections.get(drive) {
Some((section_type_name, config)) => {
match section_type_name.as_ref() {
"virtual" => {
let tape = VirtualTapeDrive::deserialize(config)?;
Ok((Box::new(tape), drive.to_string()))
}
"linux" => {
let tape = LinuxTapeDrive::deserialize(config)?;
match tape.changer {
Some(ref changer_name) => {
let changer_name = changer_name.to_string();
Ok((Box::new(tape), changer_name))
}
None => {
if !allow_email {
bail!("drive '{}' has no changer device", drive);
}
let to = "root@localhost"; // fixme
let changer = ChangeMediaEmail::new(drive, to);
Ok((Box::new(changer), String::new()))
},
}
}
_ => bail!("drive type '{}' not implemented!", section_type_name),
}
}
None => {
bail!("no such drive '{}'", drive);
}
}
}
pub fn open_drive(
config: &SectionConfigData,
drive: &str,
) -> Result<Box<dyn TapeDriver>, Error> {
match config.sections.get(drive) {
Some((section_type_name, config)) => {
match section_type_name.as_ref() {
"virtual" => {
let tape = VirtualTapeDrive::deserialize(config)?;
let handle = tape.open()
.map_err(|err| format_err!("open drive '{}' ({}) failed - {}", drive, tape.path, err))?;
Ok(Box::new(handle))
}
"linux" => {
let tape = LinuxTapeDrive::deserialize(config)?;
let handle = tape.open()
.map_err(|err| format_err!("open drive '{}' ({}) failed - {}", drive, tape.path, err))?;
Ok(Box::new(handle))
}
_ => bail!("drive type '{}' not implemented!", section_type_name),
}
}
None => {
bail!("no such drive '{}'", drive);
}
}
}
/// Requests a specific 'media' to be inserted into 'drive'. Within a
/// loop, this then tries to read the media label and waits until it
/// finds the requested media.
///
/// Returns a handle to the opened drive and the media labels.
pub fn request_and_load_media(
config: &SectionConfigData,
drive: &str,
label: &DriveLabel,
) -> Result<(
Box<dyn TapeDriver>,
MediaLabelInfo,
), Error> {
match config.sections.get(drive) {
Some((section_type_name, config)) => {
match section_type_name.as_ref() {
"virtual" => {
let mut drive = VirtualTapeDrive::deserialize(config)?;
let changer_id = label.changer_id.clone();
drive.load_media(&changer_id)?;
let mut handle = drive.open()?;
if let Ok(Some(info)) = handle.read_label() {
println!("found media label {} ({})", info.label.changer_id, info.label.uuid.to_string());
if info.label.uuid == label.uuid {
return Ok((Box::new(handle), info));
}
}
bail!("read label failed (label all tapes first)");
}
"linux" => {
let tape = LinuxTapeDrive::deserialize(config)?;
let id = label.changer_id.clone();
println!("Please insert media '{}' into drive '{}'", id, drive);
loop {
let mut handle = match tape.open() {
Ok(handle) => handle,
Err(_) => {
eprintln!("tape open failed - trying again in 5 seconds");
std::thread::sleep(std::time::Duration::from_millis(5_000));
continue;
}
};
if let Ok(Some(info)) = handle.read_label() {
println!("found media label {} ({})", info.label.changer_id, info.label.uuid.to_string());
if info.label.uuid == label.uuid {
return Ok((Box::new(handle), info));
}
}
println!("read label failed - trying again in 5 seconds");
std::thread::sleep(std::time::Duration::from_millis(5_000));
}
}
_ => bail!("drive type '{}' not implemented!", section_type_name),
}
}
None => {
bail!("no such drive '{}'", drive);
}
}
}
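The TapeDriver trait above keeps label_tape() and read_label() as default methods composed from the required primitives (rewind, erase_media, write_file, read_next_file), so the Linux and virtual drivers only need to implement the primitives. A toy, self-contained illustration of that pattern (all names below are invented for the sketch, not crate API):

// Toy sketch of the default-method pattern used by TapeDriver.
trait ToyDriver {
    fn rewind(&mut self) -> Result<(), String>;
    fn append(&mut self, data: &[u8]) -> Result<(), String>;

    // Default method: every implementor gets label_tape() for free.
    fn label_tape(&mut self, label: &str) -> Result<(), String> {
        self.rewind()?;
        self.append(label.as_bytes())
    }
}

struct MemDriver { pos: usize, data: Vec<u8> }

impl ToyDriver for MemDriver {
    fn rewind(&mut self) -> Result<(), String> {
        self.pos = 0;
        Ok(())
    }
    fn append(&mut self, data: &[u8]) -> Result<(), String> {
        self.data.truncate(self.pos); // overwrite anything after pos
        self.data.extend_from_slice(data);
        self.pos = self.data.len();
        Ok(())
    }
}

fn main() {
    let mut drive = MemDriver { pos: 0, data: Vec::new() };
    drive.label_tape("tape-01").unwrap();
    assert_eq!(drive.data, b"tape-01");
}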


@ -0,0 +1,424 @@
// Note: This is only for testing and debugging
use std::fs::File;
use std::io;
use anyhow::{bail, format_err, Error};
use serde::{Serialize, Deserialize};
use proxmox::tools::{
Uuid,
fs::{replace_file, CreateOptions},
};
use crate::{
tape::{
TapeWrite,
TapeRead,
changer::MediaChange,
drive::{
VirtualTapeDrive,
TapeDriver,
},
file_formats::{
MediaSetLabel,
MediaContentHeader,
PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0,
},
helpers::{
EmulateTapeReader,
EmulateTapeWriter,
BlockedReader,
BlockedWriter,
},
},
};
impl VirtualTapeDrive {
/// This needs to lock the drive
pub fn open(&self) -> Result<VirtualTapeHandle, Error> {
let mut lock_path = std::path::PathBuf::from(&self.path);
lock_path.push(".drive.lck");
let timeout = std::time::Duration::new(10, 0);
let lock = proxmox::tools::fs::open_file_locked(&lock_path, timeout, true)?;
Ok(VirtualTapeHandle {
_lock: lock,
max_size: self.max_size.unwrap_or(64*1024*1024),
path: std::path::PathBuf::from(&self.path),
})
}
}
#[derive(Serialize,Deserialize)]
struct VirtualTapeStatus {
name: String,
pos: usize,
}
#[derive(Serialize,Deserialize)]
struct VirtualDriveStatus {
current_tape: Option<VirtualTapeStatus>,
}
#[derive(Serialize,Deserialize)]
struct TapeIndex {
files: usize,
}
pub struct VirtualTapeHandle {
path: std::path::PathBuf,
max_size: usize,
_lock: File,
}
impl VirtualTapeHandle {
pub fn insert_tape(&self, _tape_filename: &str) {
unimplemented!();
}
pub fn eject_tape(&self) {
unimplemented!();
}
fn status_file_path(&self) -> std::path::PathBuf {
let mut path = self.path.clone();
path.push("drive-status.json");
path
}
fn tape_index_path(&self, tape_name: &str) -> std::path::PathBuf {
let mut path = self.path.clone();
path.push(format!("tape-{}.json", tape_name));
path
}
fn tape_file_path(&self, tape_name: &str, pos: usize) -> std::path::PathBuf {
let mut path = self.path.clone();
path.push(format!("tapefile-{}-{}.json", pos, tape_name));
path
}
fn load_tape_index(&self, tape_name: &str) -> Result<TapeIndex, Error> {
let path = self.tape_index_path(tape_name);
let raw = proxmox::tools::fs::file_get_contents(&path)?;
if raw.is_empty() {
return Ok(TapeIndex { files: 0 });
}
let data: TapeIndex = serde_json::from_slice(&raw)?;
Ok(data)
}
fn store_tape_index(&self, tape_name: &str, index: &TapeIndex) -> Result<(), Error> {
let path = self.tape_index_path(tape_name);
let raw = serde_json::to_string_pretty(&serde_json::to_value(index)?)?;
let options = CreateOptions::new();
replace_file(&path, raw.as_bytes(), options)?;
Ok(())
}
fn truncate_tape(&self, tape_name: &str, pos: usize) -> Result<usize, Error> {
let mut index = self.load_tape_index(tape_name)?;
if index.files <= pos {
return Ok(index.files)
}
for i in pos..index.files {
let path = self.tape_file_path(tape_name, i);
let _ = std::fs::remove_file(path);
}
index.files = pos;
self.store_tape_index(tape_name, &index)?;
Ok(index.files)
}
fn load_status(&self) -> Result<VirtualDriveStatus, Error> {
let path = self.status_file_path();
let default = serde_json::to_value(VirtualDriveStatus {
current_tape: None,
})?;
let data = proxmox::tools::fs::file_get_json(&path, Some(default))?;
let status: VirtualDriveStatus = serde_json::from_value(data)?;
Ok(status)
}
fn store_status(&self, status: &VirtualDriveStatus) -> Result<(), Error> {
let path = self.status_file_path();
let raw = serde_json::to_string_pretty(&serde_json::to_value(status)?)?;
let options = CreateOptions::new();
replace_file(&path, raw.as_bytes(), options)?;
Ok(())
}
}
impl TapeDriver for VirtualTapeHandle {
fn sync(&mut self) -> Result<(), Error> {
Ok(()) // do nothing for now
}
fn current_file_number(&mut self) -> Result<usize, Error> {
let status = self.load_status()
.map_err(|err| format_err!("current_file_number failed: {}", err.to_string()))?;
match status.current_tape {
Some(VirtualTapeStatus { pos, .. }) => Ok(pos),
None => bail!("current_file_number failed: drive is empty (no tape loaded)."),
}
}
fn read_next_file(&mut self) -> Result<Option<Box<dyn TapeRead>>, io::Error> {
let mut status = self.load_status()
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
match status.current_tape {
Some(VirtualTapeStatus { ref name, ref mut pos }) => {
let index = self.load_tape_index(name)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
if *pos >= index.files {
return Ok(None); // EOM
}
let path = self.tape_file_path(name, *pos);
let file = std::fs::OpenOptions::new()
.read(true)
.open(path)?;
*pos += 1;
self.store_status(&status)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
let reader = Box::new(file);
let reader = Box::new(EmulateTapeReader::new(reader));
match BlockedReader::open(reader)? {
Some(reader) => Ok(Some(Box::new(reader))),
None => Ok(None),
}
}
None => proxmox::io_bail!("drive is empty (no tape loaded)."),
}
}
fn write_file(&mut self) -> Result<Box<dyn TapeWrite>, io::Error> {
let mut status = self.load_status()
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
match status.current_tape {
Some(VirtualTapeStatus { ref name, ref mut pos }) => {
let mut index = self.load_tape_index(name)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
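// Writing at `pos` truncates the tape: any files recorded after
// the current position are removed before appending the new one.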
for i in *pos..index.files {
let path = self.tape_file_path(name, i);
let _ = std::fs::remove_file(path);
}
let mut used_space = 0;
for i in 0..*pos {
let path = self.tape_file_path(name, i);
used_space += path.metadata()?.len() as usize;
}
index.files = *pos + 1;
self.store_tape_index(name, &index)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
let path = self.tape_file_path(name, *pos);
let file = std::fs::OpenOptions::new()
.write(true)
.create(true)
.truncate(true)
.open(path)?;
*pos = index.files;
self.store_status(&status)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
let mut free_space = 0;
if used_space < self.max_size {
free_space = self.max_size - used_space;
}
let writer = Box::new(file);
let writer = Box::new(EmulateTapeWriter::new(writer, free_space));
let writer = Box::new(BlockedWriter::new(writer));
Ok(writer)
}
None => proxmox::io_bail!("drive is empty (no tape loaded)."),
}
}
fn move_to_eom(&mut self) -> Result<(), Error> {
let mut status = self.load_status()?;
match status.current_tape {
Some(VirtualTapeStatus { ref name, ref mut pos }) => {
let index = self.load_tape_index(name)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
*pos = index.files;
self.store_status(&status)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
Ok(())
}
None => bail!("drive is empty (no tape loaded)."),
}
}
fn rewind(&mut self) -> Result<(), Error> {
let mut status = self.load_status()?;
match status.current_tape {
Some(ref mut tape_status) => {
tape_status.pos = 0;
self.store_status(&status)?;
Ok(())
}
None => bail!("drive is empty (no tape loaded)."),
}
}
fn erase_media(&mut self, _fast: bool) -> Result<(), Error> {
let mut status = self.load_status()?;
match status.current_tape {
Some(VirtualTapeStatus { ref name, ref mut pos }) => {
*pos = self.truncate_tape(name, 0)?;
self.store_status(&status)?;
Ok(())
}
None => bail!("drive is empty (no tape loaded)."),
}
}
fn write_media_set_label(&mut self, media_set_label: &MediaSetLabel) -> Result<Uuid, Error> {
let mut status = self.load_status()?;
match status.current_tape {
Some(VirtualTapeStatus { ref name, ref mut pos }) => {
*pos = self.truncate_tape(name, 1)?;
let pos = *pos;
self.store_status(&status)?;
if pos == 0 {
bail!("media is empty (no label).");
}
if pos != 1 {
bail!("write_media_set_label: truncate failed - got wrong pos '{}'", pos);
}
let raw = serde_json::to_string_pretty(&serde_json::to_value(media_set_label)?)?;
let header = MediaContentHeader::new(PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0, raw.len() as u32);
{
let mut writer = self.write_file()?;
writer.write_header(&header, raw.as_bytes())?;
writer.finish(false)?;
}
Ok(Uuid::from(header.uuid))
}
None => bail!("drive is empty (no tape loaded)."),
}
}
fn eject_media(&mut self) -> Result<(), Error> {
let status = VirtualDriveStatus {
current_tape: None,
};
self.store_status(&status)
}
}
impl MediaChange for VirtualTapeHandle {
/// Try to load media
///
/// We automatically create an empty virtual tape here (if it does
/// not exist already)
fn load_media(&mut self, label: &str) -> Result<(), Error> {
let name = format!("tape-{}.json", label);
let mut path = self.path.clone();
path.push(&name);
if !path.exists() {
eprintln!("unable to find tape {} - creating file {:?}", label, path);
let index = TapeIndex { files: 0 };
self.store_tape_index(label, &index)?;
}
let status = VirtualDriveStatus {
current_tape: Some(VirtualTapeStatus {
name: label.to_string(),
pos: 0,
}),
};
self.store_status(&status)
}
fn unload_media(&mut self) -> Result<(), Error> {
self.eject_media()?;
Ok(())
}
fn eject_on_unload(&self) -> bool {
true
}
fn list_media_changer_ids(&self) -> Result<Vec<String>, Error> {
let mut list = Vec::new();
for entry in std::fs::read_dir(&self.path)? {
let entry = entry?;
let path = entry.path();
if path.is_file() && path.extension() == Some(std::ffi::OsStr::new("json")) {
if let Some(name) = path.file_stem() {
if let Some(name) = name.to_str() {
if name.starts_with("tape-") {
list.push(name[5..].to_string());
}
}
}
}
}
Ok(list)
}
}
impl MediaChange for VirtualTapeDrive {
fn load_media(&mut self, changer_id: &str) -> Result<(), Error> {
let mut handle = self.open()?;
handle.load_media(changer_id)
}
fn unload_media(&mut self) -> Result<(), Error> {
let mut handle = self.open()?;
handle.eject_media()?;
Ok(())
}
fn eject_on_unload(&self) -> bool {
true
}
fn list_media_changer_ids(&self) -> Result<Vec<String>, Error> {
let handle = self.open()?;
handle.list_media_changer_ids()
}
}
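The virtual drive persists all state as plain JSON inside its directory: drive-status.json records the loaded tape and position, tape-{name}.json the per-tape file count, and tapefile-{pos}-{name}.json the file contents. A standalone sketch of that serialization, with the struct layout copied from above (requires the serde and serde_json crates):

use serde::{Serialize, Deserialize};

// Mirrors VirtualTapeStatus / VirtualDriveStatus / TapeIndex above.
#[derive(Serialize, Deserialize)]
struct VirtualTapeStatus { name: String, pos: usize }

#[derive(Serialize, Deserialize)]
struct VirtualDriveStatus { current_tape: Option<VirtualTapeStatus> }

#[derive(Serialize, Deserialize)]
struct TapeIndex { files: usize }

fn main() -> Result<(), serde_json::Error> {
    let status = VirtualDriveStatus {
        current_tape: Some(VirtualTapeStatus { name: "tape-01".into(), pos: 2 }),
    };
    // Content of drive-status.json:
    println!("{}", serde_json::to_string_pretty(&status)?);
    // Content of tape-{name}.json (the tape index):
    println!("{}", serde_json::to_string_pretty(&TapeIndex { files: 3 })?);
    Ok(())
}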

src/tape/file_formats.rs (new file, 190 lines)

@ -0,0 +1,190 @@
use anyhow::{bail, Error};
use ::serde::{Deserialize, Serialize};
use endian_trait::Endian;
use bitflags::bitflags;
use proxmox::tools::Uuid;
/// We always use a 256 KiB block size
pub const PROXMOX_TAPE_BLOCK_SIZE: usize = 256*1024;
// openssl::sha::sha256(b"Proxmox Tape Block Header v1.0")[0..8]
pub const PROXMOX_TAPE_BLOCK_HEADER_MAGIC_1_0: [u8; 8] = [220, 189, 175, 202, 235, 160, 165, 40];
// openssl::sha::sha256(b"Proxmox Backup Content Header v1.0")[0..8];
pub const PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0: [u8; 8] = [99, 238, 20, 159, 205, 242, 155, 12];
// openssl::sha::sha256(b"Proxmox Backup Tape Label v1.0")[0..8];
pub const PROXMOX_BACKUP_DRIVE_LABEL_MAGIC_1_0: [u8; 8] = [42, 5, 191, 60, 176, 48, 170, 57];
// openssl::sha::sha256(b"Proxmox Backup MediaSet Label v1.0")[0..8]
pub const PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0: [u8; 8] = [8, 96, 99, 249, 47, 151, 83, 216];
// openssl::sha::sha256(b"Proxmox Backup Chunk Archive v1.0")[0..8]
pub const PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0: [u8; 8] = [62, 173, 167, 95, 49, 76, 6, 110];
// openssl::sha::sha256(b"Proxmox Backup Chunk Archive Entry v1.0")[0..8]
pub const PROXMOX_BACKUP_CHUNK_ARCHIVE_ENTRY_MAGIC_1_0: [u8; 8] = [72, 87, 109, 242, 222, 66, 143, 220];
// openssl::sha::sha256(b"Proxmox Backup Snapshot Archive v1.0")[0..8];
pub const PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0: [u8; 8] = [9, 182, 2, 31, 125, 232, 114, 133];
/// Tape Block Header with data payload
///
/// Note: this struct is large, so never put it on the stack; we use
/// an unsized type to avoid that.
///
/// Tape data blocks are always read/written with a fixed size
/// (PROXMOX_TAPE_BLOCK_SIZE), but they may contain less data, so the
/// header has an additional size field. For streams of blocks, there
/// is a sequence number ('seq_nr') which may be used for additional
/// error checking.
#[repr(C,packed)]
pub struct BlockHeader {
pub magic: [u8; 8],
pub flags: BlockHeaderFlags,
/// size as 3 bytes unsigned, little endian
pub size: [u8; 3],
/// block sequence number
pub seq_nr: u32,
pub payload: [u8],
}
bitflags! {
pub struct BlockHeaderFlags: u8 {
/// Marks the last block in a stream.
const END_OF_STREAM = 0b00000001;
/// Mark multivolume streams (when set in the last block)
const INCOMPLETE = 0b00000010;
}
}
#[derive(Endian)]
#[repr(C,packed)]
pub struct ChunkArchiveEntryHeader {
pub magic: [u8; 8],
pub digest: [u8; 32],
pub size: u64,
}
#[derive(Endian, Copy, Clone, Debug)]
#[repr(C,packed)]
pub struct MediaContentHeader {
pub magic: [u8; 8],
pub content_magic: [u8; 8],
pub uuid: [u8; 16],
pub ctime: i64,
pub size: u32,
pub part_number: u8,
pub reserved_0: u8,
pub reserved_1: u8,
pub reserved_2: u8,
}
impl MediaContentHeader {
pub fn new(content_magic: [u8; 8], size: u32) -> Self {
let uuid = *proxmox::tools::uuid::Uuid::generate()
.into_inner();
Self {
magic: PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0,
content_magic,
uuid,
ctime: proxmox::tools::time::epoch_i64(),
size,
part_number: 0,
reserved_0: 0,
reserved_1: 0,
reserved_2: 0,
}
}
pub fn check(&self, content_magic: [u8; 8], min_size: u32, max_size: u32) -> Result<(), Error> {
if self.magic != PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0 {
bail!("MediaContentHeader: wrong magic");
}
if self.content_magic != content_magic {
bail!("MediaContentHeader: wrong content magic");
}
if self.size < min_size || self.size > max_size {
bail!("MediaContentHeader: got unexpected size");
}
Ok(())
}
pub fn content_uuid(&self) -> Uuid {
Uuid::from(self.uuid)
}
}
#[derive(Serialize,Deserialize,Clone,Debug)]
pub struct DriveLabel {
/// Unique ID
pub uuid: Uuid,
/// Media Changer ID or Barcode
pub changer_id: String,
/// Creation time stamp
pub ctime: i64,
}
#[derive(Serialize,Deserialize,Clone,Debug)]
pub struct MediaSetLabel {
pub pool: String,
/// MediaSet Uuid. We use the all-zero Uuid to reserve empty media for a specific pool
pub uuid: Uuid,
/// MediaSet media sequence number
pub seq_nr: u64,
/// Creation time stamp
pub ctime: i64,
}
impl MediaSetLabel {
pub fn with_data(pool: &str, uuid: Uuid, seq_nr: u64, ctime: i64) -> Self {
Self {
pool: pool.to_string(),
uuid,
seq_nr,
ctime,
}
}
}
impl BlockHeader {
pub const SIZE: usize = PROXMOX_TAPE_BLOCK_SIZE;
/// Allocates a new instance on the heap
pub fn new() -> Box<Self> {
use std::alloc::{alloc_zeroed, Layout};
let mut buffer = unsafe {
let ptr = alloc_zeroed(
Layout::from_size_align(Self::SIZE, std::mem::align_of::<u64>())
.unwrap(),
);
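// The slice length passed to from_raw_parts_mut is the payload
// length: the fixed header fields (magic 8 + flags 1 + size 3 +
// seq_nr 4 = 16 bytes) are subtracted from Self::SIZE.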
Box::from_raw(
std::slice::from_raw_parts_mut(ptr, Self::SIZE - 16)
as *mut [u8] as *mut Self
)
};
buffer.magic = PROXMOX_TAPE_BLOCK_HEADER_MAGIC_1_0;
buffer
}
pub fn set_size(&mut self, size: usize) {
let size = size.to_le_bytes();
self.size.copy_from_slice(&size[..3]);
}
pub fn size(&self) -> usize {
(self.size[0] as usize) + ((self.size[1] as usize)<<8) + ((self.size[2] as usize)<<16)
}
pub fn set_seq_nr(&mut self, seq_nr: u32) {
self.seq_nr = seq_nr.to_le();
}
pub fn seq_nr(&self) -> u32 {
u32::from_le(self.seq_nr)
}
}
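The magic constants above are the first eight bytes of a SHA-256 digest over a version string, and BlockHeader stores its payload size as a 3-byte little-endian integer. Both are easy to verify in isolation; a standalone sketch (the digest check assumes the openssl crate, matching the comments above):

fn main() {
    // Verify one of the magic constants (needs the openssl crate).
    let digest = openssl::sha::sha256(b"Proxmox Tape Block Header v1.0");
    assert_eq!(&digest[0..8], &[220, 189, 175, 202, 235, 160, 165, 40]);

    // The 3-byte little-endian size encoding used by BlockHeader:
    let size: usize = 0x012345;
    let bytes = size.to_le_bytes();
    let mut field = [0u8; 3];
    field.copy_from_slice(&bytes[..3]); // as in set_size()
    let decoded = (field[0] as usize)
        + ((field[1] as usize) << 8)
        + ((field[2] as usize) << 16); // as in size()
    assert_eq!(decoded, size);
}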

Some files were not shown because too many files have changed in this diff.