Compare commits

..

95 Commits

Author SHA1 Message Date
a417c8a93e bump version to 1.0.13-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-02 15:32:27 +02:00
79e58a903e pxar: handle missing GROUP_OBJ ACL entries
Previously, we did not store GROUP_OBJ ACL entries for
directories. This means they were lost, which may
potentially elevate group permissions if they had been
masked via ACLs before, so we also show a warning.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-02 11:10:20 +02:00
9f40e09d0a pxar: fix directory ACL entry creation
Don't override `group_obj` with `None` when handling
`ACL_TYPE_DEFAULT` entries for directories.

Reproducer: /var/log/journal ends up without a `MASK` type
entry making it invalid as it has `USER` and `GROUP`
entries.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-02 10:22:04 +02:00
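To illustrate the invariant behind the reproducer above (a sketch only, with simplified stand-ins rather than the actual pxar ACL types): a POSIX ACL that carries named USER or GROUP entries is only valid if it also carries a MASK entry.

    // Simplified stand-in types, not the actual pxar ACL representation.
    enum AclTag { UserObj, User, GroupObj, Group, Mask, Other }

    // A POSIX ACL with named USER/GROUP entries must also contain a MASK
    // entry; /var/log/journal in the reproducer violated exactly this.
    fn acl_is_valid(entries: &[AclTag]) -> bool {
        let has_named = entries.iter().any(|t| matches!(t, AclTag::User | AclTag::Group));
        let has_mask = entries.iter().any(|t| matches!(t, AclTag::Mask));
        !has_named || has_mask
    }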
553e57f914 server/rest: drop now unused imports
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-01 11:53:13 +02:00
2200a38671 code cleanup: drop extra newlines at EOF
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-01 11:27:07 +02:00
ba39ab20fb server/rest: extract auth to separate module
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:26:28 +02:00
ff8945fd2f proxmox_client_tools: move common key related functions to key_source.rs
Add a new module containing key-related functions and schemata gathered
from all over; the moved code is left unchanged as far as possible.

Requires adapting some 'use' statements across proxmox-backup-client and
putting the XDG helpers quite cozily into proxmox_client_tools/mod.rs

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:09:28 +02:00
4876393562 vsock_client: support authorization header
Pass in an optional auth tag, which will be passed as an Authorization
header on every subsequent call.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:09:28 +02:00
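A minimal sketch of the idea with hyper's request builder (the actual VsockClient plumbing is not shown here; the method, path handling, and the Bearer scheme are illustrative):

    use hyper::{Body, Request};

    // Attach an optional auth tag as an Authorization header; when it is
    // None, the request is built without the header.
    fn build_request(path: &str, auth_tag: Option<&str>) -> Result<Request<Body>, hyper::http::Error> {
        let mut builder = Request::builder().method("GET").uri(path);
        if let Some(tag) = auth_tag {
            builder = builder.header(hyper::header::AUTHORIZATION, format!("Bearer {}", tag));
        }
        builder.body(Body::empty())
    }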
971bc6f94b vsock_client: remove some &mut restrictions and rustfmt
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:09:28 +02:00
cab92acb3c vsock_client: remove wrong comment
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:09:28 +02:00
a1d90719e4 bump pxar dep to 0.10.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-03-31 14:00:20 +02:00
eeff085d9d server/rest: fix type ambiguity
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-29 12:02:30 +02:00
d43c407a00 server/rest: rust format
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-29 08:17:26 +02:00
6bc87d3952 ui: verification job: fix subject of edit window
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 16:57:00 +02:00
04c1c68f31 ui: verify job: fix subject of edit window
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 16:45:45 +02:00
94b17c804a ui: task descriptions: sort alphabetically
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 16:45:23 +02:00
94352256b7 ui: task descriptions: fix casing
Enforce title-case. Affects mostly the new tape related task
description.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:50:51 +02:00
b3bed7e41f docs: tape/pool: add backend/ui setting name for allocation policy
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:40:23 +02:00
a4672dd0b1 ui: tape/pool: set onlineHelp for edit/add window
To let users find the docs' explanation of allocation and retention
policies more easily.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:29:02 +02:00
17bbcb57d7 ui: tape: retention/allocation are Policies, note so
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:28:36 +02:00
843146479a ui: gettext; s/blocksize/block size/
Blocksize is not a word in the English language

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:04:19 +02:00
cf1e117fc7 sgutils2: use enum for ScsiError
This avoids string allocation when we return SenseInfo.
2021-03-27 15:57:48 +01:00
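The shape of that change, roughly (the SenseInfo fields shown here are illustrative):

    // Structured sense data instead of formatting it into a String up front.
    struct SenseInfo { sense_key: u8, asc: u8, ascq: u8 }

    enum ScsiError {
        Error(String),
        Sense(SenseInfo),
    }

    impl From<SenseInfo> for ScsiError {
        fn from(info: SenseInfo) -> Self {
            ScsiError::Sense(info)
        }
    }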
03eac20b87 SgRaw: add do_in_command() 2021-03-27 15:38:08 +01:00
11f5d59396 tape: page-align BlockHeader so that we can use it with SG_IO 2021-03-27 15:36:35 +01:00
6f63c29306 Cargo.toml: fix: set version to 1.0.12 2021-03-26 14:14:12 +01:00
c0e365fd49 bump version to 1.0.12-1 2021-03-26 14:09:30 +01:00
93fb2e0d21 api2/types: add type_text to DATASTORE_MAP_FORMAT
This way we get a better rendering in the api-viewer.
before:
 [<string>, ... ]

after:
 [(<source>=)?<target>, ... ]

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 13:18:10 +01:00
c553407e98 tape: add --scan option for catalog restore 2021-03-25 13:08:34 +01:00
4830de408b tape: avoid writing catalogs for empty backup tasks 2021-03-25 12:50:40 +01:00
7f78528308 OnlineHelpInfo.js: new link for client-repository 2021-03-25 12:26:57 +01:00
2843ba9017 avoid compiler warning 2021-03-25 12:25:23 +01:00
e244b9d03d api2/types: expand DATASTORE_MAP_LIST_SCHEMA description
and give an example

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 12:18:14 +01:00
657c47db35 tape: ui: TapeRestore: make datastore mapping selectable
by adding a custom field (grid) where the user can select
a target datastore for each source datastore on tape

if we have not loaded the content of the media set yet,
we have to load it on window open to get the list of datastores
on the tape

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 12:17:46 +01:00
a32bb86df9 api subscription: drop old hack for api-macro issue
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-25 12:03:33 +01:00
654c56e05d docs: client benchmark: note that tls is only done if repo is set
and remove misleading note about no network involved in tls
speedtest, as normally there is!

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-25 10:33:45 +01:00
589c4dad9e tape: add fsf/bsf to TapeDriver trait 2021-03-25 10:10:16 +01:00
0320deb0a9 proxmox-tape: fix clean api call
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 08:14:13 +01:00
4c4e5c2b1e api2/tape/restore: enable restore mapping of datastores
by changing the 'store' parameter of the restore api call to a
list of mappings (or a single default datastore)

for example giving:
a=b,c=d,e

would restore
datastore 'a' from tape to local datastore 'b'
datastore 'c' from tape to local datastore 'd'
all other datastores to 'e'

this way, only a single datastore can also be restored, by only
giving a single mapping, e.g. 'a=b'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 07:46:12 +01:00
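A small sketch of the lookup semantics described above (the real parsing and lookup live in the DataStoreMap code further down in this diff): an explicit source=target entry wins, everything else falls back to the default target.

    use std::collections::HashMap;

    // "a=b,c=d,e" parsed into explicit mappings plus an optional default
    fn resolve<'a>(
        map: &'a HashMap<String, String>,
        default: Option<&'a str>,
        source: &str,
    ) -> Option<&'a str> {
        map.get(source).map(String::as_str).or(default)
    }

    // resolve(&map, Some("e"), "a") -> Some("b")
    // resolve(&map, Some("e"), "x") -> Some("e")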
924373d2df client/backup_writer: clarify backup and upload size
The text 'had to upload [KMG]iB' implies that this is the size we
actually had to send to the server, while in reality it is the
raw data size before compression.

Count the size of the compressed chunks and print it separately.
Split the average speed into its own line so the lines do not get too long.

Rename 'uploaded' to 'size_dirty' and 'vsize_h' to 'size'.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-24 18:24:56 +01:00
3b60b5098f client/backup_writer: introduce UploadStats struct
instead of using a big anonymous tuple. This way the returned values
are properly named.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-24 18:24:56 +01:00
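Roughly what such a named struct looks like; the field names follow the two commit messages above but are illustrative, not the exact definition:

    use std::time::Duration;

    // Named return values instead of a big anonymous tuple.
    struct UploadStats {
        chunk_count: usize,
        size: u64,            // raw snapshot size (formerly 'vsize_h')
        size_dirty: u64,      // data that actually had to be uploaded (formerly 'uploaded')
        size_compressed: u64, // compressed size of the uploaded chunks
        duration: Duration,
    }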
4abb3edd9f docs: fix horizontal scrolling issues on desktop and mobile
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-24 18:24:39 +01:00
932e69a837 docs: improve navigation coloring on mobile
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-24 18:24:20 +01:00
ef6d49670b client: backup writer: run rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-24 17:12:05 +01:00
52ea00e9df docs: only apply toctree color override to sidebar one
else the TOC on the index page has some white text on a white
background

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-24 17:09:30 +01:00
870681013a tape: fix catalog restore
We need to rewind the tape if fast_catalog_restore() fails ...
2021-03-24 10:09:23 +01:00
c046739461 tape: fix MediaPool regression tests 2021-03-24 09:44:30 +01:00
8b1289f3e4 tape: skip catalog archives in restore 2021-03-24 09:33:39 +01:00
f1d76ecf6c fix #3359: fix blocking writes in async code during pxar create
in commit `asyncify pxar create_archive`, we changed from a
separate thread for creating a pxar to using async code, but the
StdChannelWriter used for both pxar and catalog can block, which
may block the tokio runtime for single (and probably dual) core
environments

this patch adds a wrapper struct for any writer that implements
'std::io::Write' and wraps the write calls with 'block_in_place'
so that if called in a tokio runtime, it knows that this code
potentially blocks

Fixes: 6afb60abf5 ("asyncify pxar create_archive")

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-24 09:00:07 +01:00
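The wrapper idea in a nutshell (a sketch, not the exact type from the patch; block_in_place requires tokio's multi-threaded runtime):

    use std::io::Write;

    // Wraps any std::io::Write and marks each call as potentially blocking,
    // so a tokio worker thread is not stalled silently.
    struct BlockingWrapper<W: Write>(W);

    impl<W: Write> Write for BlockingWrapper<W> {
        fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
            tokio::task::block_in_place(|| self.0.write(buf))
        }
        fn flush(&mut self) -> std::io::Result<()> {
            tokio::task::block_in_place(|| self.0.flush())
        }
    }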
074503f288 tape: implement fast catalog restore 2021-03-24 08:40:34 +01:00
c6f55139f8 tape: impl. MediaCatalog::parse_catalog_header
This is just an optimization, avoiding reading the catalog into memory.

We also expose create_temporary_database_file() now (will be
used for catalog restore).
2021-03-24 06:32:59 +01:00
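The optimization boils down to reading only a small, fixed-size header instead of pulling the whole catalog into memory; a sketch with an illustrative layout (the real magic value and header format differ):

    use std::fs::File;
    use std::io::Read;

    // Read just the first few bytes to identify the catalog, instead of
    // loading the complete file.
    fn read_catalog_magic(path: &str) -> std::io::Result<[u8; 8]> {
        let mut file = File::open(path)?;
        let mut magic = [0u8; 8];
        file.read_exact(&mut magic)?;
        Ok(magic)
    }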
20cc25d749 tape: add TapeDriver::move_to_last_file 2021-03-24 06:32:59 +01:00
30316192b3 tape: improve locking (lock media-sets)
- new helper: lock_media_set()

- MediaPool: lock media set

- Expose Inventory::new() to avoid double loading

- do not lock pool on restore (only lock media-set)

- change pool lock name to ".pool-{name}"
2021-03-24 06:32:59 +01:00
e93263be1e tape: implement MediaCatalog::destroy_unrelated_catalog() helper 2021-03-22 12:03:11 +01:00
2ab2ca9c24 tape: add MediaPool::lock_unassigned_media_pool() helper 2021-03-19 10:13:38 +01:00
54fcb7f5d8 api2/tape/backup: wait indefinitely for lock in scheduled backup jobs
so that a user can schedule multiple backup jobs onto a single
media pool without having to consider timing them apart

this makes sense since we can back up multiple datastores onto
the same media-set but can only specify one datastore per backup job

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 09:04:32 +01:00
4abd4dbe38 api2/tape/backup: include a summary on notification e-mails
for now only contains the list of included snapshots (if any),
as well as the backup duration

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 09:03:52 +01:00
eac1beef3c tape: cleanup PoolWriter - factor out common code 2021-03-19 08:56:14 +01:00
166a48f903 tape: cleanup - split PoolWriter into several files 2021-03-19 08:19:13 +01:00
82775c4764 tape: make sure we only commit/write valid catalogs 2021-03-19 07:50:32 +01:00
88bc9635aa tape: store media_uuid in PoolWriterState
This is mainly a cleanup, avoiding access to the catalog_set just to get the uuid.
2021-03-19 07:33:59 +01:00
1037f2bc2d tape: cleanup - rename CatalogBuilder to CatalogSet 2021-03-19 07:22:54 +01:00
f24cbee77d server/email_notifications: do not double html escape
the default escape handler is handlebars::html_escape, but these are
plain-text emails and we manually escape them for the HTML part, so
set the default escape handler to 'no_escape'

this avoids double HTML escaping of the characters '&"<>' in emails

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 07:02:49 +01:00
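The relevant handlebars call, assuming the handlebars 3.x API:

    use handlebars::{no_escape, Handlebars};

    fn make_renderer() -> Handlebars<'static> {
        let mut hb = Handlebars::new();
        // templates are plain text; HTML escaping for the HTML part is done
        // manually, so drop the default html_escape handler
        hb.register_escape_fn(no_escape);
        hb
    }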
25b4d52dce server/email_notifications: do not panic on template registration
instead print an error and continue, the rendering functions will error
out if one of the templates could not be registered

if we `.unwrap()` here, it can lead to problems if the templates are
not correct, i.e. we could panic while holding a lock, if something holds
a mutex while this is called for the first time

add a test to catch registration issues during package build

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 07:02:17 +01:00
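A sketch of the non-panicking registration (the template names and bodies here are made up for illustration):

    use handlebars::Handlebars;

    fn register_templates(hb: &mut Handlebars) {
        // log and continue instead of .unwrap(); rendering later reports an
        // error if a template is missing
        for &(name, template) in &[("job-ok", "job {{id}} succeeded"), ("job-err", "job {{id}} failed: {{error}}")] {
            if let Err(err) = hb.register_template_string(name, template) {
                eprintln!("unable to register template '{}': {}", name, err);
            }
        }
    }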
2729d134bd tools/systemd/time: implement some Traits for TimeSpan
namely
* From<Duration> (to convert easily from duration to timespan)
* Display (for better formatting)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 07:00:55 +01:00
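What the two trait impls boil down to, shown on a heavily simplified stand-in for TimeSpan:

    use std::fmt;
    use std::time::Duration;

    struct TimeSpan { seconds: u64 } // simplified; the real type has more fields

    impl From<Duration> for TimeSpan {
        fn from(d: Duration) -> Self {
            TimeSpan { seconds: d.as_secs() }
        }
    }

    impl fmt::Display for TimeSpan {
        fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
            write!(f, "{}m {}s", self.seconds / 60, self.seconds % 60)
        }
    }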
32b75d36a8 tape: backup media catalogs 2021-03-19 06:58:46 +01:00
c4430a937d bump version to 1.0.11-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-18 12:36:28 +01:00
237314ad0d tape: improve catalog consistency checks
Try to check if we read the correct catalog by verifying uuid, media_set_uuid
and seq_nr.

Note: this changes the catalog format again.
2021-03-18 08:43:55 +01:00
caf76ec592 tools/subscription: ignore ENOENT for apt auth config removal
deleting a nonexistent file is hardly an error worth mentioning

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-17 20:12:58 +01:00
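The pattern in question, as a sketch (the function name and path handling are illustrative):

    use std::io::ErrorKind;

    fn remove_auth_config(path: &str) -> std::io::Result<()> {
        match std::fs::remove_file(path) {
            Ok(()) => Ok(()),
            // ENOENT: the file is already gone, nothing to report
            Err(err) if err.kind() == ErrorKind::NotFound => Ok(()),
            Err(err) => Err(err),
        }
    }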
0af8c26b74 ui: tape/BackupOverview: insert a datastore level
since we can now back up multiple datastores in the same media-set,
we show the datastores as the first level below that

the final tree structure looks like this:

tapepool A
- media set 1
 - datastore I
  - tape x
   - ct/100
    - ct/100/2020-01-01T00:00:00Z

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-17 13:37:49 +01:00
825dfe7e0d ui: tape/DriveStatus: fix updating pointer+click handler on info widget
we can only do this after it is rendered; the element does not exist
before that

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-17 13:37:39 +01:00
30a0809553 ui: tape/DriveStatus: add erase button
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-17 13:37:17 +01:00
6ee3035523 tape: define magic number for catalog archives 2021-03-17 13:35:23 +01:00
b627ebbf40 tape: improve catalog parser 2021-03-17 11:29:23 +01:00
ef4bdf6b8b tape: proxmox-tape media content - add 'store' attribute 2021-03-17 11:17:54 +01:00
54722acada tape: store datastore name in tape archives and media catalog
So that we can store multiple datastores on a single media set.
Deduplication is now per datastore (not per media set).
2021-03-17 11:08:51 +01:00
0e2bf3aa1d SnapshotReader: add self.datastore_name() helper 2021-03-17 10:16:34 +01:00
365126efa9 tape: PoolWriter - remove unnecessary move_to_eom 2021-03-17 10:16:34 +01:00
03d4c9217d update OnlineHelpInfo.js 2021-03-17 10:16:34 +01:00
8498290848 docs: technically not everything is in rust/js
I mean the whole distro uses quite a bit of C and the like as its base,
so avoid being overly strict here.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-16 20:37:16 +01:00
654db565cb docs: features: mention that there are no client/data limits
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-16 20:37:16 +01:00
51f83548ed docs: drop uncommon spelled out GCM
It does not help users if that is spelled out, and it's not a common
use of GCM; especially in the AES-256 context it's clear what is
meant. The link to Wikipedia stays, so interested people can still
read up on it and others get a better overview due to the text being
more concise.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-16 20:37:16 +01:00
5847a6bdb5 docs: fix linking, avoid over long text in main feature list
The main feature list should provide a short overview of the, well,
main features. While enterprise support *is* a main and important
feature, it's not the place here to describe things like personal
volume/ngo/... offers and the like.

Move parts of it to getting help, which lacked mentioning the
enterprise support too and is a good place to describe the customer
portal.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-16 20:37:16 +01:00
313e5e2047 docs: mention support subscription plans
and change enterprise repository section to present tense.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-03-16 19:24:23 +01:00
7914e62b10 tools/zip: only add zip64 field when necessary
if neither offset nor size exceeds 32 bits, do not add the
zip64 extension field

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 09:13:39 +01:00
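The decision in a nutshell (a sketch; the exact cut-off handling in tools/zip may differ):

    // The zip64 extra field is only required once a value no longer fits
    // into 32 bits.
    fn needs_zip64(size: u64, compressed_size: u64, offset: u64) -> bool {
        let limit = u32::MAX as u64;
        size > limit || compressed_size > limit || offset > limit
    }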
84d3284609 ui: tape/DriveStatus: open task window on click on state
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 09:00:07 +01:00
70fab5b46e ui: tape: convert slot selection on transfer to combogrid
this is much handier than a number field, and the user can instantly
see which one is an import/export slot

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 08:57:48 +01:00
e36135031d ui: tape/Restore: let the user choose an owner
so that the tape backup can be restored as any user, given that the
currently logged-in user has the correct permissions.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 08:55:42 +01:00
5a5ee0326e proxmox-tape: add missing notify-user to 'proxmox-tape restore'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 08:54:38 +01:00
776dabfb2e tape: use MB/s for backup speed (to match drive speed specification) 2021-03-16 08:51:49 +01:00
5c4755ad08 tape: speedup backup by doing read/write in parallel 2021-03-16 08:51:49 +01:00
7c1666289d tools/zip: add missing start_disk field for zip64 extension
it is not optional, even though we give the size explicitly

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-15 12:36:40 +01:00
cded320e92 backup info: run rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-14 19:18:35 +01:00
b31cdec225 update to pxar 0.10
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-03-12 10:48:09 +01:00
591b120d35 fix feature flag logic in pxar create
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-03-12 10:17:51 +01:00
e8913fea12 tape: write_chunk_archive - do not consume partially written chunk at EOT
So that it is re-written to the next tape.
2021-03-12 07:14:50 +01:00
75 changed files with 4126 additions and 1687 deletions

View File

@ -1,6 +1,6 @@
[package] [package]
name = "proxmox-backup" name = "proxmox-backup"
version = "1.0.10" version = "1.0.13"
authors = [ authors = [
"Dietmar Maurer <dietmar@proxmox.com>", "Dietmar Maurer <dietmar@proxmox.com>",
"Dominik Csapak <d.csapak@proxmox.com>", "Dominik Csapak <d.csapak@proxmox.com>",
@ -52,7 +52,7 @@ proxmox = { version = "0.11.0", features = [ "sortable-macro", "api-macro", "web
#proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] } #proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] } #proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox-fuse = "0.1.1" proxmox-fuse = "0.1.1"
pxar = { version = "0.9.0", features = [ "tokio-io" ] } pxar = { version = "0.10.1", features = [ "tokio-io" ] }
#pxar = { path = "../pxar", features = [ "tokio-io" ] } #pxar = { path = "../pxar", features = [ "tokio-io" ] }
regex = "1.2" regex = "1.2"
rustyline = "7" rustyline = "7"

37
debian/changelog vendored
View File

@ -1,3 +1,40 @@
rust-proxmox-backup (1.0.13-1) unstable; urgency=medium
* pxar: improve handling ACL entries on create and restore
-- Proxmox Support Team <support@proxmox.com> Fri, 02 Apr 2021 15:32:01 +0200
rust-proxmox-backup (1.0.12-1) unstable; urgency=medium
* tape: write catalogs to tape (speedup catalog restore)
* tape: add --scan option for catalog restore
* tape: improve locking (lock media-sets)
* tape: ui: enable datastore mappings
* fix #3359: fix blocking writes in async code during pxar create
* api2/tape/backup: wait indefinitely for lock in scheduled backup jobs
* docu improvements
-- Proxmox Support Team <support@proxmox.com> Fri, 26 Mar 2021 14:08:47 +0100
rust-proxmox-backup (1.0.11-1) unstable; urgency=medium
* fix feature flag logic in pxar create
* tools/zip: add missing start_disk field for zip64 extension to improve
compatibility with some strict archive tools
* tape: speedup backup by doing read/write in parallel
* tape: store datastore name in tape archives and media catalog
-- Proxmox Support Team <support@proxmox.com> Thu, 18 Mar 2021 12:36:01 +0100
rust-proxmox-backup (1.0.10-1) unstable; urgency=medium rust-proxmox-backup (1.0.10-1) unstable; urgency=medium
* tape: improve MediaPool allocation by sorting tapes by creation time and * tape: improve MediaPool allocation by sorting tapes by creation time and

4
debian/control vendored
View File

@ -41,8 +41,8 @@ Build-Depends: debhelper (>= 11),
librust-proxmox-0.11+sortable-macro-dev, librust-proxmox-0.11+sortable-macro-dev,
librust-proxmox-0.11+websocket-dev, librust-proxmox-0.11+websocket-dev,
librust-proxmox-fuse-0.1+default-dev (>= 0.1.1-~~), librust-proxmox-fuse-0.1+default-dev (>= 0.1.1-~~),
librust-pxar-0.9+default-dev, librust-pxar-0.10+default-dev (>= 0.10.1-~~),
librust-pxar-0.9+tokio-io-dev, librust-pxar-0.10+tokio-io-dev (>= 0.10.1-~~),
librust-regex-1+default-dev (>= 1.2-~~), librust-regex-1+default-dev (>= 1.2-~~),
librust-rustyline-7+default-dev, librust-rustyline-7+default-dev,
librust-serde-1+default-dev, librust-serde-1+default-dev,

View File

@ -3,6 +3,7 @@ Backup Client Usage
The command line client is called :command:`proxmox-backup-client`. The command line client is called :command:`proxmox-backup-client`.
.. _client_repository:
Repository Locations Repository Locations
-------------------- --------------------
@ -691,8 +692,15 @@ Benchmarking
------------ ------------
The backup client also comes with a benchmarking tool. This tool measures The backup client also comes with a benchmarking tool. This tool measures
various metrics relating to compression and encryption speeds. You can run a various metrics relating to compression and encryption speeds. If a Proxmox
benchmark using the ``benchmark`` subcommand of ``proxmox-backup-client``: Backup repository (remote or local) is specified, the TLS upload speed will get
measured too.
You can run a benchmark using the ``benchmark`` subcommand of
``proxmox-backup-client``:
.. note:: The TLS speed test is only included if a :ref:`backup server
repository is specified <client_repository>`.
.. code-block:: console .. code-block:: console
@ -723,8 +731,7 @@ benchmark using the ``benchmark`` subcommand of ``proxmox-backup-client``:
.. note:: The percentages given in the output table correspond to a .. note:: The percentages given in the output table correspond to a
comparison against a Ryzen 7 2700X. The TLS test connects to the comparison against a Ryzen 7 2700X.
local host, so there is no network involved.
You can also pass the ``--output-format`` parameter to output stats in ``json``, You can also pass the ``--output-format`` parameter to output stats in ``json``,
rather than the default table format. rather than the default table format.

View File

@ -57,6 +57,11 @@ div.sphinxsidebar h3 {
div.sphinxsidebar h1.logo-name { div.sphinxsidebar h1.logo-name {
display: none; display: none;
} }
div.document, div.footer {
width: min(100%, 1320px);
}
@media screen and (max-width: 875px) { @media screen and (max-width: 875px) {
div.sphinxsidebar p.logo { div.sphinxsidebar p.logo {
display: initial; display: initial;
@ -65,9 +70,19 @@ div.sphinxsidebar h1.logo-name {
display: block; display: block;
} }
div.sphinxsidebar span { div.sphinxsidebar span {
color: #AAA; color: #EEE;
} }
ul li.toctree-l1 > a { .sphinxsidebar ul li.toctree-l1 > a, div.sphinxsidebar a {
color: #FFF; color: #FFF;
} }
div.sphinxsidebar {
background-color: #555;
}
div.body {
min-width: 300px;
}
div.footer {
display: block;
margin: 15px auto 0px auto;
}
} }

View File

@ -65,10 +65,10 @@ Main Features
:Compression: The ultra-fast Zstandard_ compression is able to compress :Compression: The ultra-fast Zstandard_ compression is able to compress
several gigabytes of data per second. several gigabytes of data per second.
:Encryption: Backups can be encrypted on the client-side, using AES-256 in :Encryption: Backups can be encrypted on the client-side, using AES-256 GCM_.
Galois/Counter Mode (GCM_). This authenticated encryption (AE_) mode This authenticated encryption (AE_) mode provides very high performance on
provides very high performance on modern hardware. In addition to client-side modern hardware. In addition to client-side encryption, all data is
encryption, all data is transferred via a secure TLS connection. transferred via a secure TLS connection.
:Web interface: Manage the Proxmox Backup Server with the integrated, web-based :Web interface: Manage the Proxmox Backup Server with the integrated, web-based
user interface. user interface.
@ -76,8 +76,16 @@ Main Features
:Open Source: No secrets. Proxmox Backup Server is free and open-source :Open Source: No secrets. Proxmox Backup Server is free and open-source
software. The source code is licensed under AGPL, v3. software. The source code is licensed under AGPL, v3.
:Support: Enterprise support will be available from `Proxmox`_ once the beta :No Limits: Proxmox Backup Server has no artificial limits for backup storage or
phase is over. backup-clients.
:Enterprise Support: Proxmox Server Solutions GmbH offers enterprise support in
form of `Proxmox Backup Server Subscription Plans
<https://www.proxmox.com/en/proxmox-backup-server/pricing>`_. Users at every
subscription level get access to the Proxmox Backup :ref:`Enterprise
Repository <sysadmin_package_repos_enterprise>`. In addition, with a Basic,
Standard or Premium subscription, users have access to the :ref:`Proxmox
Customer Portal <get_help_enterprise_support>`.
Reasons for Data Backup? Reasons for Data Backup?
@ -117,8 +125,8 @@ Proxmox Backup Server consists of multiple components:
* A client CLI tool (`proxmox-backup-client`) to access the server easily from * A client CLI tool (`proxmox-backup-client`) to access the server easily from
any `Linux amd64` environment any `Linux amd64` environment
Aside from the web interface, everything is written in the Rust programming Aside from the web interface, most parts of Proxmox Backup Server are written in
language. the Rust programming language.
"The Rust programming language helps you write faster, more reliable software. "The Rust programming language helps you write faster, more reliable software.
High-level ergonomics and low-level control are often at odds in programming High-level ergonomics and low-level control are often at odds in programming
@ -134,6 +142,17 @@ language.
Getting Help Getting Help
------------ ------------
.. _get_help_enterprise_support:
Enterprise Support
~~~~~~~~~~~~~~~~~~
Users with a `Proxmox Backup Server Basic, Standard or Premium Subscription Plan
<https://www.proxmox.com/en/proxmox-backup-server/pricing>`_ have access to the
Proxmox Customer Portal. The Customer Portal provides support with guaranteed
response times from the Proxmox developers.
For more information or for volume discounts, please contact office@proxmox.com.
Community Support Forum Community Support Forum
~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -69,10 +69,12 @@ Here, the output should be:
f3f6c5a3a67baf38ad178e5ff1ee270c /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg f3f6c5a3a67baf38ad178e5ff1ee270c /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
.. _sysadmin_package_repos_enterprise:
`Proxmox Backup`_ Enterprise Repository `Proxmox Backup`_ Enterprise Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This will be the default, stable, and recommended repository. It is available for This is the stable, recommended repository. It is available for
all `Proxmox Backup`_ subscription users. It contains the most stable packages, all `Proxmox Backup`_ subscription users. It contains the most stable packages,
and is suitable for production use. The ``pbs-enterprise`` repository is and is suitable for production use. The ``pbs-enterprise`` repository is
enabled by default: enabled by default:

View File

@ -411,7 +411,7 @@ one media pool, so a job only uses tapes from that pool.
The pool additionally defines how long backup jobs can append data The pool additionally defines how long backup jobs can append data
to a media set. The following settings are possible: to a media set. The following settings are possible:
- Try to use the current media set. - Try to use the current media set (``continue``).
This setting produces one large media set. While this is very This setting produces one large media set. While this is very
space efficient (deduplication, no unused space), it can lead to space efficient (deduplication, no unused space), it can lead to
@ -433,7 +433,7 @@ one media pool, so a job only uses tapes from that pool.
.. NOTE:: Retention period starts with the existence of a newer .. NOTE:: Retention period starts with the existence of a newer
media set. media set.
- Always create a new media set. - Always create a new media set (``always``).
With this setting, each backup job creates a new media set. This With this setting, each backup job creates a new media set. This
is less space efficient, because the media from the last set is less space efficient, because the media from the last set

View File

@ -32,9 +32,6 @@ use crate::api2::types::{NODE_SCHEMA, SUBSCRIPTION_KEY_SCHEMA, Authid};
pub fn check_subscription( pub fn check_subscription(
force: bool, force: bool,
) -> Result<(), Error> { ) -> Result<(), Error> {
// FIXME: drop once proxmox-api-macro is bumped to >> 5.0.0-1
let _remove_me = API_METHOD_CHECK_SUBSCRIPTION_PARAM_DEFAULT_FORCE;
let info = match subscription::read_subscription() { let info = match subscription::read_subscription() {
Err(err) => bail!("could not read subscription status: {}", err), Err(err) => bail!("could not read subscription status: {}", err),
Ok(Some(info)) => info, Ok(Some(info)) => info,

View File

@ -1,10 +1,11 @@
use std::path::Path; use std::path::Path;
use std::sync::Arc; use std::sync::{Mutex, Arc};
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use serde_json::Value; use serde_json::Value;
use proxmox::{ use proxmox::{
try_block,
api::{ api::{
api, api,
RpcEnvironment, RpcEnvironment,
@ -33,6 +34,7 @@ use crate::{
}, },
server::{ server::{
lookup_user_email, lookup_user_email,
TapeBackupJobSummary,
jobstate::{ jobstate::{
Job, Job,
JobState, JobState,
@ -176,8 +178,15 @@ pub fn do_tape_backup_job(
let (drive_config, _digest) = config::drive::config()?; let (drive_config, _digest) = config::drive::config()?;
// early check/lock before starting worker // for scheduled jobs we acquire the lock later in the worker
let drive_lock = lock_tape_device(&drive_config, &setup.drive)?; let drive_lock = if schedule.is_some() {
None
} else {
Some(lock_tape_device(&drive_config, &setup.drive)?)
};
let notify_user = setup.notify_user.as_ref().unwrap_or_else(|| &Userid::root_userid());
let email = lookup_user_email(notify_user);
let upid_str = WorkerTask::new_thread( let upid_str = WorkerTask::new_thread(
&worker_type, &worker_type,
@ -185,26 +194,40 @@ pub fn do_tape_backup_job(
auth_id.clone(), auth_id.clone(),
false, false,
move |worker| { move |worker| {
let _drive_lock = drive_lock; // keep lock guard
set_tape_device_state(&setup.drive, &worker.upid().to_string())?;
job.start(&worker.upid().to_string())?; job.start(&worker.upid().to_string())?;
let mut drive_lock = drive_lock;
task_log!(worker,"Starting tape backup job '{}'", job_id); let (job_result, summary) = match try_block!({
if let Some(event_str) = schedule { if schedule.is_some() {
task_log!(worker,"task triggered by schedule '{}'", event_str); // for scheduled tape backup jobs, we wait indefinitely for the lock
} task_log!(worker, "waiting for drive lock...");
loop {
if let Ok(lock) = lock_tape_device(&drive_config, &setup.drive) {
drive_lock = Some(lock);
break;
} // ignore errors
let notify_user = setup.notify_user.as_ref().unwrap_or_else(|| &Userid::root_userid()); worker.check_abort()?;
let email = lookup_user_email(notify_user); }
}
set_tape_device_state(&setup.drive, &worker.upid().to_string())?;
let job_result = backup_worker( task_log!(worker,"Starting tape backup job '{}'", job_id);
&worker, if let Some(event_str) = schedule {
datastore, task_log!(worker,"task triggered by schedule '{}'", event_str);
&pool_config, }
&setup,
email.clone(), backup_worker(
); &worker,
datastore,
&pool_config,
&setup,
email.clone(),
)
}) {
Ok(summary) => (Ok(()), summary),
Err(err) => (Err(err), Default::default()),
};
let status = worker.create_state(&job_result); let status = worker.create_state(&job_result);
@ -214,6 +237,7 @@ pub fn do_tape_backup_job(
Some(job.jobname()), Some(job.jobname()),
&setup, &setup,
&job_result, &job_result,
summary,
) { ) {
eprintln!("send tape backup notification failed: {}", err); eprintln!("send tape backup notification failed: {}", err);
} }
@ -340,13 +364,17 @@ pub fn backup(
move |worker| { move |worker| {
let _drive_lock = drive_lock; // keep lock guard let _drive_lock = drive_lock; // keep lock guard
set_tape_device_state(&setup.drive, &worker.upid().to_string())?; set_tape_device_state(&setup.drive, &worker.upid().to_string())?;
let job_result = backup_worker(
let (job_result, summary) = match backup_worker(
&worker, &worker,
datastore, datastore,
&pool_config, &pool_config,
&setup, &setup,
email.clone(), email.clone(),
); ) {
Ok(summary) => (Ok(()), summary),
Err(err) => (Err(err), Default::default()),
};
if let Some(email) = email { if let Some(email) = email {
if let Err(err) = crate::server::send_tape_backup_status( if let Err(err) = crate::server::send_tape_backup_status(
@ -354,6 +382,7 @@ pub fn backup(
None, None,
&setup, &setup,
&job_result, &job_result,
summary,
) { ) {
eprintln!("send tape backup notification failed: {}", err); eprintln!("send tape backup notification failed: {}", err);
} }
@ -374,16 +403,16 @@ fn backup_worker(
pool_config: &MediaPoolConfig, pool_config: &MediaPoolConfig,
setup: &TapeBackupJobSetup, setup: &TapeBackupJobSetup,
email: Option<String>, email: Option<String>,
) -> Result<(), Error> { ) -> Result<TapeBackupJobSummary, Error> {
let status_path = Path::new(TAPE_STATUS_DIR); let status_path = Path::new(TAPE_STATUS_DIR);
let start = std::time::Instant::now();
let _lock = MediaPool::lock(status_path, &pool_config.name)?; let mut summary: TapeBackupJobSummary = Default::default();
task_log!(worker, "update media online status"); task_log!(worker, "update media online status");
let changer_name = update_media_online_status(&setup.drive)?; let changer_name = update_media_online_status(&setup.drive)?;
let pool = MediaPool::with_config(status_path, &pool_config, changer_name)?; let pool = MediaPool::with_config(status_path, &pool_config, changer_name, false)?;
let mut pool_writer = PoolWriter::new(pool, &setup.drive, worker, email)?; let mut pool_writer = PoolWriter::new(pool, &setup.drive, worker, email)?;
@ -402,8 +431,12 @@ fn backup_worker(
task_log!(worker, "latest-only: true (only considering latest snapshots)"); task_log!(worker, "latest-only: true (only considering latest snapshots)");
} }
let datastore_name = datastore.name();
let mut errors = false; let mut errors = false;
let mut need_catalog = false; // avoid writing catalog for empty jobs
for (group_number, group) in group_list.into_iter().enumerate() { for (group_number, group) in group_list.into_iter().enumerate() {
progress.done_groups = group_number as u64; progress.done_groups = group_number as u64;
progress.done_snapshots = 0; progress.done_snapshots = 0;
@ -416,12 +449,18 @@ fn backup_worker(
if latest_only { if latest_only {
progress.group_snapshots = 1; progress.group_snapshots = 1;
if let Some(info) = snapshot_list.pop() { if let Some(info) = snapshot_list.pop() {
if pool_writer.contains_snapshot(&info.backup_dir.to_string()) { if pool_writer.contains_snapshot(datastore_name, &info.backup_dir.to_string()) {
task_log!(worker, "skip snapshot {}", info.backup_dir); task_log!(worker, "skip snapshot {}", info.backup_dir);
continue; continue;
} }
need_catalog = true;
let snapshot_name = info.backup_dir.to_string();
if !backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)? { if !backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)? {
errors = true; errors = true;
} else {
summary.snapshot_list.push(snapshot_name);
} }
progress.done_snapshots = 1; progress.done_snapshots = 1;
task_log!( task_log!(
@ -433,12 +472,18 @@ fn backup_worker(
} else { } else {
progress.group_snapshots = snapshot_list.len() as u64; progress.group_snapshots = snapshot_list.len() as u64;
for (snapshot_number, info) in snapshot_list.into_iter().enumerate() { for (snapshot_number, info) in snapshot_list.into_iter().enumerate() {
if pool_writer.contains_snapshot(&info.backup_dir.to_string()) { if pool_writer.contains_snapshot(datastore_name, &info.backup_dir.to_string()) {
task_log!(worker, "skip snapshot {}", info.backup_dir); task_log!(worker, "skip snapshot {}", info.backup_dir);
continue; continue;
} }
need_catalog = true;
let snapshot_name = info.backup_dir.to_string();
if !backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)? { if !backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)? {
errors = true; errors = true;
} else {
summary.snapshot_list.push(snapshot_name);
} }
progress.done_snapshots = snapshot_number as u64 + 1; progress.done_snapshots = snapshot_number as u64 + 1;
task_log!( task_log!(
@ -452,6 +497,22 @@ fn backup_worker(
pool_writer.commit()?; pool_writer.commit()?;
if need_catalog {
task_log!(worker, "append media catalog");
let uuid = pool_writer.load_writable_media(worker)?;
let done = pool_writer.append_catalog_archive(worker)?;
if !done {
task_log!(worker, "catalog does not fit on tape, writing to next volume");
pool_writer.set_media_status_full(&uuid)?;
pool_writer.load_writable_media(worker)?;
let done = pool_writer.append_catalog_archive(worker)?;
if !done {
bail!("write_catalog_archive failed on second media");
}
}
}
if setup.export_media_set.unwrap_or(false) { if setup.export_media_set.unwrap_or(false) {
pool_writer.export_media_set(worker)?; pool_writer.export_media_set(worker)?;
} else if setup.eject_media.unwrap_or(false) { } else if setup.eject_media.unwrap_or(false) {
@ -462,7 +523,9 @@ fn backup_worker(
bail!("Tape backup finished with some errors. Please check the task log."); bail!("Tape backup finished with some errors. Please check the task log.");
} }
Ok(()) summary.duration = start.elapsed();
Ok(summary)
} }
// Try to update the the media online status // Try to update the the media online status
@ -508,33 +571,48 @@ pub fn backup_snapshot(
} }
}; };
let mut chunk_iter = snapshot_reader.chunk_iterator()?.peekable(); let snapshot_reader = Arc::new(Mutex::new(snapshot_reader));
let (reader_thread, chunk_iter) = pool_writer.spawn_chunk_reader_thread(
datastore.clone(),
snapshot_reader.clone(),
)?;
let mut chunk_iter = chunk_iter.peekable();
loop { loop {
worker.check_abort()?; worker.check_abort()?;
// test is we have remaining chunks // test is we have remaining chunks
if chunk_iter.peek().is_none() { match chunk_iter.peek() {
break; None => break,
Some(Ok(_)) => { /* Ok */ },
Some(Err(err)) => bail!("{}", err),
} }
let uuid = pool_writer.load_writable_media(worker)?; let uuid = pool_writer.load_writable_media(worker)?;
worker.check_abort()?; worker.check_abort()?;
let (leom, _bytes) = pool_writer.append_chunk_archive(worker, &datastore, &mut chunk_iter)?; let (leom, _bytes) = pool_writer.append_chunk_archive(worker, &mut chunk_iter, datastore.name())?;
if leom { if leom {
pool_writer.set_media_status_full(&uuid)?; pool_writer.set_media_status_full(&uuid)?;
} }
} }
if let Err(_) = reader_thread.join() {
bail!("chunk reader thread failed");
}
worker.check_abort()?; worker.check_abort()?;
let uuid = pool_writer.load_writable_media(worker)?; let uuid = pool_writer.load_writable_media(worker)?;
worker.check_abort()?; worker.check_abort()?;
let snapshot_reader = snapshot_reader.lock().unwrap();
let (done, _bytes) = pool_writer.append_snapshot_archive(worker, &snapshot_reader)?; let (done, _bytes) = pool_writer.append_snapshot_archive(worker, &snapshot_reader)?;
if !done { if !done {

View File

@ -48,15 +48,20 @@ use crate::{
MamAttribute, MamAttribute,
LinuxDriveAndMediaStatus, LinuxDriveAndMediaStatus,
}, },
tape::restore::restore_media, tape::restore::{
fast_catalog_restore,
restore_media,
},
}, },
server::WorkerTask, server::WorkerTask,
tape::{ tape::{
TAPE_STATUS_DIR, TAPE_STATUS_DIR,
MediaPool,
Inventory, Inventory,
MediaCatalog, MediaCatalog,
MediaId, MediaId,
lock_media_set,
lock_media_pool,
lock_unassigned_media_pool,
linux_tape_device_list, linux_tape_device_list,
lookup_device_identification, lookup_device_identification,
file_formats::{ file_formats::{
@ -373,10 +378,19 @@ pub fn erase_media(
); );
let status_path = Path::new(TAPE_STATUS_DIR); let status_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(status_path)?; let mut inventory = Inventory::new(status_path);
if let Some(MediaSetLabel { ref pool, ref uuid, ..}) = media_id.media_set_label {
let _pool_lock = lock_media_pool(status_path, pool)?;
let _media_set_lock = lock_media_set(status_path, uuid, None)?;
MediaCatalog::destroy(status_path, &media_id.label.uuid)?;
inventory.remove_media(&media_id.label.uuid)?;
} else {
let _lock = lock_unassigned_media_pool(status_path)?;
MediaCatalog::destroy(status_path, &media_id.label.uuid)?;
inventory.remove_media(&media_id.label.uuid)?;
};
MediaCatalog::destroy(status_path, &media_id.label.uuid)?;
inventory.remove_media(&media_id.label.uuid)?;
handle.erase_media(fast.unwrap_or(true))?; handle.erase_media(fast.unwrap_or(true))?;
} }
} }
@ -548,28 +562,37 @@ fn write_media_label(
drive.label_tape(&label)?; drive.label_tape(&label)?;
let mut media_set_label = None; let status_path = Path::new(TAPE_STATUS_DIR);
if let Some(ref pool) = pool { let media_id = if let Some(ref pool) = pool {
// assign media to pool by writing special media set label // assign media to pool by writing special media set label
worker.log(format!("Label media '{}' for pool '{}'", label.label_text, pool)); worker.log(format!("Label media '{}' for pool '{}'", label.label_text, pool));
let set = MediaSetLabel::with_data(&pool, [0u8; 16].into(), 0, label.ctime, None); let set = MediaSetLabel::with_data(&pool, [0u8; 16].into(), 0, label.ctime, None);
drive.write_media_set_label(&set, None)?; drive.write_media_set_label(&set, None)?;
media_set_label = Some(set);
let media_id = MediaId { label, media_set_label: Some(set) };
// Create the media catalog
MediaCatalog::overwrite(status_path, &media_id, false)?;
let mut inventory = Inventory::new(status_path);
inventory.store(media_id.clone(), false)?;
media_id
} else { } else {
worker.log(format!("Label media '{}' (no pool assignment)", label.label_text)); worker.log(format!("Label media '{}' (no pool assignment)", label.label_text));
}
let media_id = MediaId { label, media_set_label }; let media_id = MediaId { label, media_set_label: None };
let status_path = Path::new(TAPE_STATUS_DIR); // Create the media catalog
MediaCatalog::overwrite(status_path, &media_id, false)?;
// Create the media catalog let mut inventory = Inventory::new(status_path);
MediaCatalog::overwrite(status_path, &media_id, false)?; inventory.store(media_id.clone(), false)?;
let mut inventory = Inventory::load(status_path)?; media_id
inventory.store(media_id.clone(), false)?; };
drive.rewind()?; drive.rewind()?;
@ -705,14 +728,24 @@ pub async fn read_label(
if let Err(err) = drive.set_encryption(encrypt_fingerprint) { if let Err(err) = drive.set_encryption(encrypt_fingerprint) {
// try, but ignore errors. just log to stderr // try, but ignore errors. just log to stderr
eprintln!("uable to load encryption key: {}", err); eprintln!("unable to load encryption key: {}", err);
} }
} }
if let Some(true) = inventorize { if let Some(true) = inventorize {
let state_path = Path::new(TAPE_STATUS_DIR); let state_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(state_path)?; let mut inventory = Inventory::new(state_path);
inventory.store(media_id, false)?;
if let Some(MediaSetLabel { ref pool, ref uuid, ..}) = media_id.media_set_label {
let _pool_lock = lock_media_pool(state_path, pool)?;
let _lock = lock_media_set(state_path, uuid, None)?;
MediaCatalog::destroy_unrelated_catalog(state_path, &media_id)?;
inventory.store(media_id, false)?;
} else {
let _lock = lock_unassigned_media_pool(state_path)?;
MediaCatalog::destroy(state_path, &media_id.label.uuid)?;
inventory.store(media_id, false)?;
};
} }
flat flat
@ -947,7 +980,17 @@ pub fn update_inventory(
continue; continue;
} }
worker.log(format!("inventorize media '{}' with uuid '{}'", label_text, media_id.label.uuid)); worker.log(format!("inventorize media '{}' with uuid '{}'", label_text, media_id.label.uuid));
inventory.store(media_id, false)?;
if let Some(MediaSetLabel { ref pool, ref uuid, ..}) = media_id.media_set_label {
let _pool_lock = lock_media_pool(state_path, pool)?;
let _lock = lock_media_set(state_path, uuid, None)?;
MediaCatalog::destroy_unrelated_catalog(state_path, &media_id)?;
inventory.store(media_id, false)?;
} else {
let _lock = lock_unassigned_media_pool(state_path)?;
MediaCatalog::destroy(state_path, &media_id.label.uuid)?;
inventory.store(media_id, false)?;
};
} }
} }
changer.unload_media(None)?; changer.unload_media(None)?;
@ -1184,6 +1227,11 @@ pub async fn status(drive: String) -> Result<LinuxDriveAndMediaStatus, Error> {
type: bool, type: bool,
optional: true, optional: true,
}, },
scan: {
description: "Re-read the whole tape to reconstruct the catalog instead of restoring saved versions.",
type: bool,
optional: true,
},
verbose: { verbose: {
description: "Verbose mode - log all found chunks.", description: "Verbose mode - log all found chunks.",
type: bool, type: bool,
@ -1202,11 +1250,13 @@ pub async fn status(drive: String) -> Result<LinuxDriveAndMediaStatus, Error> {
pub fn catalog_media( pub fn catalog_media(
drive: String, drive: String,
force: Option<bool>, force: Option<bool>,
scan: Option<bool>,
verbose: Option<bool>, verbose: Option<bool>,
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let verbose = verbose.unwrap_or(false); let verbose = verbose.unwrap_or(false);
let force = force.unwrap_or(false); let force = force.unwrap_or(false);
let scan = scan.unwrap_or(false);
let upid_str = run_drive_worker( let upid_str = run_drive_worker(
rpcenv, rpcenv,
@ -1237,19 +1287,22 @@ pub fn catalog_media(
let status_path = Path::new(TAPE_STATUS_DIR); let status_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(status_path)?; let mut inventory = Inventory::new(status_path);
inventory.store(media_id.clone(), false)?;
let pool = match media_id.media_set_label { let (_media_set_lock, media_set_uuid) = match media_id.media_set_label {
None => { None => {
worker.log("media is empty"); worker.log("media is empty");
let _lock = lock_unassigned_media_pool(status_path)?;
MediaCatalog::destroy(status_path, &media_id.label.uuid)?; MediaCatalog::destroy(status_path, &media_id.label.uuid)?;
inventory.store(media_id.clone(), false)?;
return Ok(()); return Ok(());
} }
Some(ref set) => { Some(ref set) => {
if set.uuid.as_ref() == [0u8;16] { // media is empty if set.uuid.as_ref() == [0u8;16] { // media is empty
worker.log("media is empty"); worker.log("media is empty");
let _lock = lock_unassigned_media_pool(status_path)?;
MediaCatalog::destroy(status_path, &media_id.label.uuid)?; MediaCatalog::destroy(status_path, &media_id.label.uuid)?;
inventory.store(media_id.clone(), false)?;
return Ok(()); return Ok(());
} }
let encrypt_fingerprint = set.encryption_key_fingerprint.clone() let encrypt_fingerprint = set.encryption_key_fingerprint.clone()
@ -1257,16 +1310,36 @@ pub fn catalog_media(
drive.set_encryption(encrypt_fingerprint)?; drive.set_encryption(encrypt_fingerprint)?;
set.pool.clone() let _pool_lock = lock_media_pool(status_path, &set.pool)?;
let media_set_lock = lock_media_set(status_path, &set.uuid, None)?;
MediaCatalog::destroy_unrelated_catalog(status_path, &media_id)?;
inventory.store(media_id.clone(), false)?;
(media_set_lock, &set.uuid)
} }
}; };
let _lock = MediaPool::lock(status_path, &pool)?;
if MediaCatalog::exists(status_path, &media_id.label.uuid) && !force { if MediaCatalog::exists(status_path, &media_id.label.uuid) && !force {
bail!("media catalog exists (please use --force to overwrite)"); bail!("media catalog exists (please use --force to overwrite)");
} }
if !scan {
let media_set = inventory.compute_media_set_members(media_set_uuid)?;
if fast_catalog_restore(&worker, &mut drive, &media_set, &media_id.label.uuid)? {
return Ok(())
}
task_log!(worker, "no catalog found");
}
task_log!(worker, "scanning entire media to reconstruct catalog");
drive.rewind()?;
drive.read_label()?; // skip over labels - we already read them above
restore_media(&worker, &mut drive, &media_id, None, verbose)?; restore_media(&worker, &mut drive, &media_id, None, verbose)?;
Ok(()) Ok(())

View File

@ -122,7 +122,7 @@ pub async fn list_media(
let config: MediaPoolConfig = config.lookup("pool", pool_name)?; let config: MediaPoolConfig = config.lookup("pool", pool_name)?;
let changer_name = None; // assume standalone drive let changer_name = None; // assume standalone drive
let mut pool = MediaPool::with_config(status_path, &config, changer_name)?; let mut pool = MediaPool::with_config(status_path, &config, changer_name, true)?;
let current_time = proxmox::tools::time::epoch_i64(); let current_time = proxmox::tools::time::epoch_i64();
@ -432,29 +432,32 @@ pub fn list_content(
.generate_media_set_name(&set.uuid, template) .generate_media_set_name(&set.uuid, template)
.unwrap_or_else(|_| set.uuid.to_string()); .unwrap_or_else(|_| set.uuid.to_string());
let catalog = MediaCatalog::open(status_path, &media_id.label.uuid, false, false)?; let catalog = MediaCatalog::open(status_path, &media_id, false, false)?;
for snapshot in catalog.snapshot_index().keys() { for (store, content) in catalog.content() {
let backup_dir: BackupDir = snapshot.parse()?; for snapshot in content.snapshot_index.keys() {
let backup_dir: BackupDir = snapshot.parse()?;
if let Some(ref backup_type) = filter.backup_type { if let Some(ref backup_type) = filter.backup_type {
if backup_dir.group().backup_type() != backup_type { continue; } if backup_dir.group().backup_type() != backup_type { continue; }
}
if let Some(ref backup_id) = filter.backup_id {
if backup_dir.group().backup_id() != backup_id { continue; }
}
list.push(MediaContentEntry {
uuid: media_id.label.uuid.clone(),
label_text: media_id.label.label_text.to_string(),
pool: set.pool.clone(),
media_set_name: media_set_name.clone(),
media_set_uuid: set.uuid.clone(),
media_set_ctime: set.ctime,
seq_nr: set.seq_nr,
snapshot: snapshot.to_owned(),
store: store.to_owned(),
backup_time: backup_dir.backup_time(),
});
} }
if let Some(ref backup_id) = filter.backup_id {
if backup_dir.group().backup_id() != backup_id { continue; }
}
list.push(MediaContentEntry {
uuid: media_id.label.uuid.clone(),
label_text: media_id.label.label_text.to_string(),
pool: set.pool.clone(),
media_set_name: media_set_name.clone(),
media_set_uuid: set.uuid.clone(),
media_set_ctime: set.ctime,
seq_nr: set.seq_nr,
snapshot: snapshot.to_owned(),
backup_time: backup_dir.backup_time(),
});
} }
} }

View File

@ -1,6 +1,9 @@
use std::path::Path; use std::path::Path;
use std::ffi::OsStr; use std::ffi::OsStr;
use std::collections::{HashMap, HashSet};
use std::convert::TryFrom; use std::convert::TryFrom;
use std::io::{Seek, SeekFrom};
use std::sync::Arc;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use serde_json::Value; use serde_json::Value;
@ -12,6 +15,7 @@ use proxmox::{
RpcEnvironmentType, RpcEnvironmentType,
Router, Router,
Permission, Permission,
schema::parse_property_string,
section_config::SectionConfigData, section_config::SectionConfigData,
}, },
tools::{ tools::{
@ -26,10 +30,12 @@ use proxmox::{
use crate::{ use crate::{
task_log, task_log,
task_warn,
task::TaskState, task::TaskState,
tools::compute_file_csum, tools::compute_file_csum,
api2::types::{ api2::types::{
DATASTORE_SCHEMA, DATASTORE_MAP_ARRAY_SCHEMA,
DATASTORE_MAP_LIST_SCHEMA,
DRIVE_NAME_SCHEMA, DRIVE_NAME_SCHEMA,
UPID_SCHEMA, UPID_SCHEMA,
Authid, Authid,
@ -40,6 +46,7 @@ use crate::{
cached_user_info::CachedUserInfo, cached_user_info::CachedUserInfo,
acl::{ acl::{
PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_BACKUP,
PRIV_DATASTORE_MODIFY,
PRIV_TAPE_READ, PRIV_TAPE_READ,
}, },
}, },
@ -64,17 +71,24 @@ use crate::{
TAPE_STATUS_DIR, TAPE_STATUS_DIR,
TapeRead, TapeRead,
MediaId, MediaId,
MediaSet,
MediaCatalog, MediaCatalog,
MediaPool,
Inventory, Inventory,
lock_media_set,
file_formats::{ file_formats::{
PROXMOX_BACKUP_MEDIA_LABEL_MAGIC_1_0, PROXMOX_BACKUP_MEDIA_LABEL_MAGIC_1_0,
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0, PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0,
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1,
PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0, PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0,
PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0, PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0,
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0, PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0,
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1,
PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0,
MediaContentHeader, MediaContentHeader,
ChunkArchiveHeader,
ChunkArchiveDecoder, ChunkArchiveDecoder,
SnapshotArchiveHeader,
CatalogArchiveHeader,
}, },
drive::{ drive::{
TapeDriver, TapeDriver,
@ -85,14 +99,75 @@ use crate::{
}, },
}; };
pub const ROUTER: Router = Router::new() pub struct DataStoreMap {
.post(&API_METHOD_RESTORE); map: HashMap<String, Arc<DataStore>>,
default: Option<Arc<DataStore>>,
}
impl TryFrom<String> for DataStoreMap {
type Error = Error;
fn try_from(value: String) -> Result<Self, Error> {
let value = parse_property_string(&value, &DATASTORE_MAP_ARRAY_SCHEMA)?;
let mut mapping: Vec<String> = value
.as_array()
.unwrap()
.iter()
.map(|v| v.as_str().unwrap().to_string())
.collect();
let mut map = HashMap::new();
let mut default = None;
while let Some(mut store) = mapping.pop() {
if let Some(index) = store.find('=') {
let mut target = store.split_off(index);
target.remove(0); // remove '='
let datastore = DataStore::lookup_datastore(&target)?;
map.insert(store, datastore);
} else if default.is_none() {
default = Some(DataStore::lookup_datastore(&store)?);
} else {
bail!("multiple default stores given");
}
}
Ok(Self { map, default })
}
}
impl DataStoreMap {
fn used_datastores<'a>(&self) -> HashSet<&str> {
let mut set = HashSet::new();
for store in self.map.values() {
set.insert(store.name());
}
if let Some(ref store) = self.default {
set.insert(store.name());
}
set
}
fn get_datastore(&self, source: &str) -> Option<&DataStore> {
if let Some(store) = self.map.get(source) {
return Some(&store);
}
if let Some(ref store) = self.default {
return Some(&store);
}
return None;
}
}
pub const ROUTER: Router = Router::new().post(&API_METHOD_RESTORE);
#[api( #[api(
input: { input: {
properties: { properties: {
store: { store: {
schema: DATASTORE_SCHEMA, schema: DATASTORE_MAP_LIST_SCHEMA,
}, },
drive: { drive: {
schema: DRIVE_NAME_SCHEMA, schema: DRIVE_NAME_SCHEMA,
@ -105,6 +180,10 @@ pub const ROUTER: Router = Router::new()
type: Userid, type: Userid,
optional: true, optional: true,
}, },
owner: {
type: Authid,
optional: true,
},
}, },
}, },
returns: { returns: {
@ -123,15 +202,34 @@ pub fn restore(
drive: String, drive: String,
media_set: String, media_set: String,
notify_user: Option<Userid>, notify_user: Option<Userid>,
owner: Option<Authid>,
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?; let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let privs = user_info.lookup_privs(&auth_id, &["datastore", &store]); let store_map = DataStoreMap::try_from(store)
if (privs & PRIV_DATASTORE_BACKUP) == 0 { .map_err(|err| format_err!("cannot parse store mapping: {}", err))?;
bail!("no permissions on /datastore/{}", store); let used_datastores = store_map.used_datastores();
if used_datastores.len() == 0 {
bail!("no datastores given");
}
for store in used_datastores.iter() {
let privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
if (privs & PRIV_DATASTORE_BACKUP) == 0 {
bail!("no permissions on /datastore/{}", store);
}
if let Some(ref owner) = owner {
let correct_owner = owner == &auth_id
|| (owner.is_token() && !auth_id.is_token() && owner.user() == auth_id.user());
// same permission as changing ownership after syncing
if !correct_owner && privs & PRIV_DATASTORE_MODIFY == 0 {
bail!("no permission to restore as '{}'", owner);
}
}
} }
let privs = user_info.lookup_privs(&auth_id, &["tape", "drive", &drive]); let privs = user_info.lookup_privs(&auth_id, &["tape", "drive", &drive]);
@ -139,11 +237,14 @@ pub fn restore(
bail!("no permissions on /tape/drive/{}", drive); bail!("no permissions on /tape/drive/{}", drive);
} }
    let media_set_uuid = media_set.parse()?;

    let status_path = Path::new(TAPE_STATUS_DIR);

    let _lock = lock_media_set(status_path, &media_set_uuid, None)?;

    let inventory = Inventory::load(status_path)?;

    let pool = inventory.lookup_media_set_pool(&media_set_uuid)?;

    let privs = user_info.lookup_privs(&auth_id, &["tape", "pool", &pool]);
@ -151,8 +252,6 @@ pub fn restore(
bail!("no permissions on /tape/pool/{}", pool); bail!("no permissions on /tape/pool/{}", pool);
} }
let datastore = DataStore::lookup_datastore(&store)?;
let (drive_config, _digest) = config::drive::config()?; let (drive_config, _digest) = config::drive::config()?;
// early check/lock before starting worker // early check/lock before starting worker
@ -160,9 +259,14 @@ pub fn restore(
let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI; let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;
let taskid = used_datastores
.iter()
.map(|s| s.to_string())
.collect::<Vec<String>>()
.join(", ");
    let upid_str = WorkerTask::new_thread(
        "tape-restore",
        Some(taskid),
        auth_id.clone(),
        to_stdout,
        move |worker| {
@ -170,8 +274,6 @@ pub fn restore(
set_tape_device_state(&drive, &worker.upid().to_string())?; set_tape_device_state(&drive, &worker.upid().to_string())?;
let _lock = MediaPool::lock(status_path, &pool)?;
let members = inventory.compute_media_set_members(&media_set_uuid)?; let members = inventory.compute_media_set_members(&media_set_uuid)?;
let media_list = members.media_list(); let media_list = members.media_list();
@ -202,7 +304,11 @@ pub fn restore(
task_log!(worker, "Encryption key fingerprint: {}", fingerprint); task_log!(worker, "Encryption key fingerprint: {}", fingerprint);
} }
task_log!(worker, "Pool: {}", pool); task_log!(worker, "Pool: {}", pool);
task_log!(worker, "Datastore: {}", store); task_log!(worker, "Datastore(s):");
store_map
.used_datastores()
.iter()
.for_each(|store| task_log!(worker, "\t{}", store));
task_log!(worker, "Drive: {}", drive); task_log!(worker, "Drive: {}", drive);
task_log!( task_log!(
worker, worker,
@ -219,9 +325,10 @@ pub fn restore(
media_id, media_id,
&drive_config, &drive_config,
&drive, &drive,
&datastore, &store_map,
&auth_id, &auth_id,
&notify_user, &notify_user,
&owner,
)?; )?;
} }
@ -249,11 +356,11 @@ pub fn request_and_restore_media(
media_id: &MediaId, media_id: &MediaId,
drive_config: &SectionConfigData, drive_config: &SectionConfigData,
drive_name: &str, drive_name: &str,
datastore: &DataStore, store_map: &DataStoreMap,
authid: &Authid, authid: &Authid,
notify_user: &Option<Userid>, notify_user: &Option<Userid>,
owner: &Option<Authid>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let media_set_uuid = match media_id.media_set_label { let media_set_uuid = match media_id.media_set_label {
None => bail!("restore_media: no media set - internal error"), None => bail!("restore_media: no media set - internal error"),
Some(ref set) => &set.uuid, Some(ref set) => &set.uuid,
@ -284,7 +391,15 @@ pub fn request_and_restore_media(
} }
} }
    let restore_owner = owner.as_ref().unwrap_or(authid);
restore_media(
worker,
&mut drive,
&info,
Some((&store_map, restore_owner)),
false,
)
} }
/// Restore complete media content and catalog /// Restore complete media content and catalog
@ -294,7 +409,7 @@ pub fn restore_media(
worker: &WorkerTask, worker: &WorkerTask,
drive: &mut Box<dyn TapeDriver>, drive: &mut Box<dyn TapeDriver>,
media_id: &MediaId, media_id: &MediaId,
target: Option<(&DataStore, &Authid)>, target: Option<(&DataStoreMap, &Authid)>,
verbose: bool, verbose: bool,
) -> Result<(), Error> { ) -> Result<(), Error> {
@ -323,11 +438,10 @@ fn restore_archive<'a>(
worker: &WorkerTask, worker: &WorkerTask,
mut reader: Box<dyn 'a + TapeRead>, mut reader: Box<dyn 'a + TapeRead>,
current_file_number: u64, current_file_number: u64,
target: Option<(&DataStore, &Authid)>, target: Option<(&DataStoreMap, &Authid)>,
catalog: &mut MediaCatalog, catalog: &mut MediaCatalog,
verbose: bool, verbose: bool,
) -> Result<(), Error> { ) -> Result<(), Error> {
let header: MediaContentHeader = unsafe { reader.read_le_value()? }; let header: MediaContentHeader = unsafe { reader.read_le_value()? };
if header.magic != PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0 { if header.magic != PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0 {
bail!("missing MediaContentHeader"); bail!("missing MediaContentHeader");
@ -340,67 +454,123 @@ fn restore_archive<'a>(
bail!("unexpected content magic (label)"); bail!("unexpected content magic (label)");
} }
        PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0 => {
            bail!("unexpected snapshot archive version (v1.0)");
        }
        PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1 => {
            let header_data = reader.read_exact_allocated(header.size as usize)?;

            let archive_header: SnapshotArchiveHeader = serde_json::from_slice(&header_data)
                .map_err(|err| format_err!("unable to parse snapshot archive header - {}", err))?;

            let datastore_name = archive_header.store;
            let snapshot = archive_header.snapshot;

            task_log!(worker, "File {}: snapshot archive {}:{}", current_file_number, datastore_name, snapshot);

            let backup_dir: BackupDir = snapshot.parse()?;

            if let Some((store_map, authid)) = target.as_ref() {
                if let Some(datastore) = store_map.get_datastore(&datastore_name) {
                    let (owner, _group_lock) =
                        datastore.create_locked_backup_group(backup_dir.group(), authid)?;
                    if *authid != &owner {
                        // only the owner is allowed to create additional snapshots
                        bail!(
                            "restore '{}' failed - owner check failed ({} != {})",
                            snapshot,
                            authid,
                            owner
                        );
                    }

                    let (rel_path, is_new, _snap_lock) =
                        datastore.create_locked_backup_dir(&backup_dir)?;
                    let mut path = datastore.base_path();
                    path.push(rel_path);

                    if is_new {
                        task_log!(worker, "restore snapshot {}", backup_dir);

                        match restore_snapshot_archive(worker, reader, &path) {
                            Err(err) => {
                                std::fs::remove_dir_all(&path)?;
                                bail!("restore snapshot {} failed - {}", backup_dir, err);
                            }
                            Ok(false) => {
                                std::fs::remove_dir_all(&path)?;
                                task_log!(worker, "skip incomplete snapshot {}", backup_dir);
                            }
                            Ok(true) => {
                                catalog.register_snapshot(
                                    Uuid::from(header.uuid),
                                    current_file_number,
                                    &datastore_name,
                                    &snapshot,
                                )?;
                                catalog.commit_if_large()?;
                            }
                        }
                        return Ok(());
                    }
                } else {
                    task_log!(worker, "skipping...");
                }
            }

            reader.skip_to_end()?; // read all data

            if let Ok(false) = reader.is_incomplete() {
                catalog.register_snapshot(Uuid::from(header.uuid), current_file_number, &datastore_name, &snapshot)?;
                catalog.commit_if_large()?;
            }
        }
        PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0 => {
            bail!("unexpected chunk archive version (v1.0)");
        }
        PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1 => {
let header_data = reader.read_exact_allocated(header.size as usize)?;
let archive_header: ChunkArchiveHeader = serde_json::from_slice(&header_data)
.map_err(|err| format_err!("unable to parse chunk archive header - {}", err))?;
let source_datastore = archive_header.store;
task_log!(worker, "File {}: chunk archive for datastore '{}'", current_file_number, source_datastore);
let datastore = target
.as_ref()
.and_then(|t| t.0.get_datastore(&source_datastore));
if datastore.is_some() || target.is_none() {
if let Some(chunks) = restore_chunk_archive(worker, reader, datastore, verbose)? {
catalog.start_chunk_archive(
Uuid::from(header.uuid),
current_file_number,
&source_datastore,
)?;
for digest in chunks.iter() {
catalog.register_chunk(&digest)?;
}
task_log!(worker, "register {} chunks", chunks.len());
catalog.end_chunk_archive()?;
catalog.commit_if_large()?;
}
return Ok(());
} else if target.is_some() {
task_log!(worker, "skipping...");
}
reader.skip_to_end()?; // read all data
}
PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0 => {
let header_data = reader.read_exact_allocated(header.size as usize)?;
let archive_header: CatalogArchiveHeader = serde_json::from_slice(&header_data)
.map_err(|err| format_err!("unable to parse catalog archive header - {}", err))?;
task_log!(worker, "File {}: skip catalog '{}'", current_file_number, archive_header.uuid);
reader.skip_to_end()?; // read all data
}
_ => bail!("unknown content magic {:?}", header.content_magic),
} }
catalog.commit()?; catalog.commit()?;
@ -617,3 +787,137 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
Ok(()) Ok(())
} }
/// Try to restore media catalogs (from catalog_archives)
pub fn fast_catalog_restore(
worker: &WorkerTask,
drive: &mut Box<dyn TapeDriver>,
media_set: &MediaSet,
uuid: &Uuid, // current media Uuid
) -> Result<bool, Error> {
let status_path = Path::new(TAPE_STATUS_DIR);
let current_file_number = drive.current_file_number()?;
if current_file_number != 2 {
bail!("fast_catalog_restore: wrong media position - internal error");
}
let mut found_catalog = false;
let mut moved_to_eom = false;
loop {
let current_file_number = drive.current_file_number()?;
{ // limit reader scope
let mut reader = match drive.read_next_file()? {
None => {
task_log!(worker, "detected EOT after {} files", current_file_number);
break;
}
Some(reader) => reader,
};
let header: MediaContentHeader = unsafe { reader.read_le_value()? };
if header.magic != PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0 {
bail!("missing MediaContentHeader");
}
if header.content_magic == PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0 {
task_log!(worker, "found catalog at pos {}", current_file_number);
let header_data = reader.read_exact_allocated(header.size as usize)?;
let archive_header: CatalogArchiveHeader = serde_json::from_slice(&header_data)
.map_err(|err| format_err!("unable to parse catalog archive header - {}", err))?;
if &archive_header.media_set_uuid != media_set.uuid() {
task_log!(worker, "skipping unrelated catalog at pos {}", current_file_number);
reader.skip_to_end()?; // read all data
continue;
}
let catalog_uuid = &archive_header.uuid;
let wanted = media_set
.media_list()
.iter()
.find(|e| {
match e {
None => false,
Some(uuid) => uuid == catalog_uuid,
}
})
.is_some();
if !wanted {
task_log!(worker, "skip catalog because media '{}' not inventarized", catalog_uuid);
reader.skip_to_end()?; // read all data
continue;
}
if catalog_uuid == uuid {
// always restore and overwrite catalog
} else {
// only restore if catalog does not exist
if MediaCatalog::exists(status_path, catalog_uuid) {
task_log!(worker, "catalog for media '{}' already exists", catalog_uuid);
reader.skip_to_end()?; // read all data
continue;
}
}
let mut file = MediaCatalog::create_temporary_database_file(status_path, catalog_uuid)?;
std::io::copy(&mut reader, &mut file)?;
file.seek(SeekFrom::Start(0))?;
match MediaCatalog::parse_catalog_header(&mut file)? {
(true, Some(media_uuid), Some(media_set_uuid)) => {
if &media_uuid != catalog_uuid {
task_log!(worker, "catalog uuid missmatch at pos {}", current_file_number);
continue;
}
if media_set_uuid != archive_header.media_set_uuid {
task_log!(worker, "catalog media_set missmatch at pos {}", current_file_number);
continue;
}
MediaCatalog::finish_temporary_database(status_path, &media_uuid, true)?;
if catalog_uuid == uuid {
task_log!(worker, "successfully restored catalog");
found_catalog = true
} else {
task_log!(worker, "successfully restored related catalog {}", media_uuid);
}
}
_ => {
task_warn!(worker, "got incomplete catalog header - skip file");
continue;
}
}
continue;
}
}
if moved_to_eom {
break; // already done - stop
}
moved_to_eom = true;
task_log!(worker, "searching for catalog at EOT (moving to EOT)");
drive.move_to_last_file()?;
let new_file_number = drive.current_file_number()?;
if new_file_number < (current_file_number + 1) {
break; // no new content - stop
}
}
Ok(found_catalog)
}

View File

@ -99,6 +99,8 @@ const_regex!{
pub ZPOOL_NAME_REGEX = r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$"; pub ZPOOL_NAME_REGEX = r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$";
pub UUID_REGEX = r"^[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}$"; pub UUID_REGEX = r"^[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}$";
pub DATASTORE_MAP_REGEX = concat!(r"(:?", PROXMOX_SAFE_ID_REGEX_STR!(), r"=)?", PROXMOX_SAFE_ID_REGEX_STR!());
} }
pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat = pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat =
@ -164,6 +166,9 @@ pub const SUBSCRIPTION_KEY_FORMAT: ApiStringFormat =
pub const BLOCKDEVICE_NAME_FORMAT: ApiStringFormat = pub const BLOCKDEVICE_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&BLOCKDEVICE_NAME_REGEX); ApiStringFormat::Pattern(&BLOCKDEVICE_NAME_REGEX);
pub const DATASTORE_MAP_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DATASTORE_MAP_REGEX);
pub const PASSWORD_SCHEMA: Schema = StringSchema::new("Password.") pub const PASSWORD_SCHEMA: Schema = StringSchema::new("Password.")
.format(&PASSWORD_FORMAT) .format(&PASSWORD_FORMAT)
.min_length(1) .min_length(1)
@ -356,6 +361,25 @@ pub const DATASTORE_SCHEMA: Schema = StringSchema::new("Datastore name.")
.max_length(32) .max_length(32)
.schema(); .schema();
pub const DATASTORE_MAP_SCHEMA: Schema = StringSchema::new("Datastore mapping.")
.format(&DATASTORE_MAP_FORMAT)
.min_length(3)
.max_length(65)
.type_text("(<source>=)?<target>")
.schema();
pub const DATASTORE_MAP_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Datastore mapping list.", &DATASTORE_MAP_SCHEMA)
.schema();
pub const DATASTORE_MAP_LIST_SCHEMA: Schema = StringSchema::new(
"A list of Datastore mappings (or single datastore), comma separated. \
For example 'a=b,e' maps the source datastore 'a' to target 'b' and \
all other sources to the default 'e'. If no default is given, only the \
specified sources are mapped.")
.format(&ApiStringFormat::PropertyString(&DATASTORE_MAP_ARRAY_SCHEMA))
.schema();
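
To make the mapping format described above concrete, here is a small, self-contained sketch. It is not code from this patch: SimpleStoreMap, parse and get are hypothetical stand-ins that use plain strings instead of DataStore handles, purely to show how a property string such as 'a=b,e' resolves source datastores to targets.

use std::collections::HashMap;

/// Simplified stand-in for DataStoreMap: explicit `source=target`
/// pairs plus an optional default target for all other sources.
struct SimpleStoreMap {
    map: HashMap<String, String>,
    default: Option<String>,
}

impl SimpleStoreMap {
    fn parse(list: &str) -> Result<Self, String> {
        let mut map = HashMap::new();
        let mut default = None;
        for entry in list.split(',') {
            match entry.split_once('=') {
                // explicit source=target mapping
                Some((source, target)) => {
                    map.insert(source.to_string(), target.to_string());
                }
                // a bare name is the default target; only one is allowed
                None if default.is_none() => default = Some(entry.to_string()),
                None => return Err("multiple default stores given".into()),
            }
        }
        Ok(Self { map, default })
    }

    /// Explicit mapping wins, otherwise fall back to the default (if any).
    fn get(&self, source: &str) -> Option<&str> {
        self.map
            .get(source)
            .or(self.default.as_ref())
            .map(|s| s.as_str())
    }
}

fn main() {
    let map = SimpleStoreMap::parse("a=b,e").unwrap();
    assert_eq!(map.get("a"), Some("b")); // explicit mapping
    assert_eq!(map.get("x"), Some("e")); // falls back to the default
}

The real DataStoreMap shown earlier follows the same shape, but resolves each target to an Arc<DataStore> via DataStore::lookup_datastore, and get_datastore falls back to the default entry in the same way.
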
pub const MEDIA_SET_UUID_SCHEMA: Schema =
    StringSchema::new("MediaSet Uuid (We use the all-zero Uuid to reserve an empty media for a specific pool).")
.format(&UUID_FORMAT) .format(&UUID_FORMAT)

View File

@ -144,6 +144,8 @@ pub struct MediaContentEntry {
pub seq_nr: u64, pub seq_nr: u64,
/// Media Pool /// Media Pool
pub pool: String, pub pool: String,
/// Datastore Name
pub store: String,
/// Backup snapshot /// Backup snapshot
pub snapshot: String, pub snapshot: String,
/// Snapshot creation time (epoch) /// Snapshot creation time (epoch)

View File

@ -3,17 +3,29 @@ use crate::tools;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use std::os::unix::io::RawFd; use std::os::unix::io::RawFd;
use std::path::{PathBuf, Path}; use std::path::{Path, PathBuf};
use proxmox::const_regex; use proxmox::const_regex;
use super::manifest::MANIFEST_BLOB_NAME; use super::manifest::MANIFEST_BLOB_NAME;
macro_rules! BACKUP_ID_RE {
    () => {
        r"[A-Za-z0-9_][A-Za-z0-9._\-]*"
    };
}
macro_rules! BACKUP_TYPE_RE {
() => {
r"(?:host|vm|ct)"
};
}
macro_rules! BACKUP_TIME_RE {
() => {
r"[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z"
};
}
const_regex!{ const_regex! {
BACKUP_FILE_REGEX = r"^.*\.([fd]idx|blob)$"; BACKUP_FILE_REGEX = r"^.*\.([fd]idx|blob)$";
BACKUP_TYPE_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), r")$"); BACKUP_TYPE_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), r")$");
@ -38,7 +50,6 @@ pub struct BackupGroup {
} }
impl std::cmp::Ord for BackupGroup { impl std::cmp::Ord for BackupGroup {
fn cmp(&self, other: &Self) -> std::cmp::Ordering { fn cmp(&self, other: &Self) -> std::cmp::Ordering {
let type_order = self.backup_type.cmp(&other.backup_type); let type_order = self.backup_type.cmp(&other.backup_type);
if type_order != std::cmp::Ordering::Equal { if type_order != std::cmp::Ordering::Equal {
@ -51,7 +62,7 @@ impl std::cmp::Ord for BackupGroup {
(Ok(id_self), Ok(id_other)) => id_self.cmp(&id_other), (Ok(id_self), Ok(id_other)) => id_self.cmp(&id_other),
(Ok(_), Err(_)) => std::cmp::Ordering::Less, (Ok(_), Err(_)) => std::cmp::Ordering::Less,
(Err(_), Ok(_)) => std::cmp::Ordering::Greater, (Err(_), Ok(_)) => std::cmp::Ordering::Greater,
_ => self.backup_id.cmp(&other.backup_id), _ => self.backup_id.cmp(&other.backup_id),
} }
} }
} }
@ -63,9 +74,11 @@ impl std::cmp::PartialOrd for BackupGroup {
} }
impl BackupGroup { impl BackupGroup {
pub fn new<T: Into<String>, U: Into<String>>(backup_type: T, backup_id: U) -> Self { pub fn new<T: Into<String>, U: Into<String>>(backup_type: T, backup_id: U) -> Self {
Self { backup_type: backup_type.into(), backup_id: backup_id.into() } Self {
backup_type: backup_type.into(),
backup_id: backup_id.into(),
}
} }
pub fn backup_type(&self) -> &str { pub fn backup_type(&self) -> &str {
@ -76,8 +89,7 @@ impl BackupGroup {
&self.backup_id &self.backup_id
} }
pub fn group_path(&self) -> PathBuf { pub fn group_path(&self) -> PathBuf {
let mut relative_path = PathBuf::new(); let mut relative_path = PathBuf::new();
relative_path.push(&self.backup_type); relative_path.push(&self.backup_type);
@ -88,60 +100,82 @@ impl BackupGroup {
} }
pub fn list_backups(&self, base_path: &Path) -> Result<Vec<BackupInfo>, Error> { pub fn list_backups(&self, base_path: &Path) -> Result<Vec<BackupInfo>, Error> {
let mut list = vec![]; let mut list = vec![];
let mut path = base_path.to_owned(); let mut path = base_path.to_owned();
path.push(self.group_path()); path.push(self.group_path());
        tools::scandir(
            libc::AT_FDCWD,
            &path,
            &BACKUP_DATE_REGEX,
            |l2_fd, backup_time, file_type| {
                if file_type != nix::dir::Type::Directory {
                    return Ok(());
                }

                let backup_dir =
                    BackupDir::with_rfc3339(&self.backup_type, &self.backup_id, backup_time)?;
                let files = list_backup_files(l2_fd, backup_time)?;

                list.push(BackupInfo { backup_dir, files });

                Ok(())
            },
        )?;

        Ok(list)
} }
pub fn last_successful_backup(&self, base_path: &Path) -> Result<Option<i64>, Error> { pub fn last_successful_backup(&self, base_path: &Path) -> Result<Option<i64>, Error> {
let mut last = None; let mut last = None;
let mut path = base_path.to_owned(); let mut path = base_path.to_owned();
path.push(self.group_path()); path.push(self.group_path());
        tools::scandir(
            libc::AT_FDCWD,
            &path,
            &BACKUP_DATE_REGEX,
            |l2_fd, backup_time, file_type| {
                if file_type != nix::dir::Type::Directory {
                    return Ok(());
                }

                let mut manifest_path = PathBuf::from(backup_time);
                manifest_path.push(MANIFEST_BLOB_NAME);

                use nix::fcntl::{openat, OFlag};
                match openat(
                    l2_fd,
                    &manifest_path,
                    OFlag::O_RDONLY,
                    nix::sys::stat::Mode::empty(),
                ) {
                    Ok(rawfd) => {
                        /* manifest exists --> assume backup was successful */
                        /* close else this leaks! */
                        nix::unistd::close(rawfd)?;
                    }
                    Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => {
                        return Ok(());
                    }
                    Err(err) => {
                        bail!("last_successful_backup: unexpected error - {}", err);
                    }
                }

                let timestamp = proxmox::tools::time::parse_rfc3339(backup_time)?;
                if let Some(last_timestamp) = last {
                    if timestamp > last_timestamp {
                        last = Some(timestamp);
                    }
                } else {
                    last = Some(timestamp);
                }

                Ok(())
            },
        )?;

        Ok(last)
} }
@ -162,7 +196,8 @@ impl std::str::FromStr for BackupGroup {
/// ///
/// This parses strings like `vm/100`.
fn from_str(path: &str) -> Result<Self, Self::Err> { fn from_str(path: &str) -> Result<Self, Self::Err> {
let cap = GROUP_PATH_REGEX.captures(path) let cap = GROUP_PATH_REGEX
.captures(path)
.ok_or_else(|| format_err!("unable to parse backup group path '{}'", path))?; .ok_or_else(|| format_err!("unable to parse backup group path '{}'", path))?;
Ok(Self { Ok(Self {
@ -182,11 +217,10 @@ pub struct BackupDir {
/// Backup timestamp /// Backup timestamp
backup_time: i64, backup_time: i64,
// backup_time as rfc3339 // backup_time as rfc3339
backup_time_string: String backup_time_string: String,
} }
impl BackupDir { impl BackupDir {
pub fn new<T, U>(backup_type: T, backup_id: U, backup_time: i64) -> Result<Self, Error> pub fn new<T, U>(backup_type: T, backup_id: U, backup_time: i64) -> Result<Self, Error>
where where
T: Into<String>, T: Into<String>,
@ -196,21 +230,33 @@ impl BackupDir {
BackupDir::with_group(group, backup_time) BackupDir::with_group(group, backup_time)
} }
    pub fn with_rfc3339<T, U, V>(
        backup_type: T,
        backup_id: U,
        backup_time_string: V,
    ) -> Result<Self, Error>
where where
T: Into<String>, T: Into<String>,
U: Into<String>, U: Into<String>,
V: Into<String>, V: Into<String>,
{ {
let backup_time_string = backup_time_string.into(); let backup_time_string = backup_time_string.into();
let backup_time = proxmox::tools::time::parse_rfc3339(&backup_time_string)?; let backup_time = proxmox::tools::time::parse_rfc3339(&backup_time_string)?;
let group = BackupGroup::new(backup_type.into(), backup_id.into()); let group = BackupGroup::new(backup_type.into(), backup_id.into());
        Ok(Self {
group,
backup_time,
backup_time_string,
})
} }
pub fn with_group(group: BackupGroup, backup_time: i64) -> Result<Self, Error> { pub fn with_group(group: BackupGroup, backup_time: i64) -> Result<Self, Error> {
let backup_time_string = Self::backup_time_to_string(backup_time)?; let backup_time_string = Self::backup_time_to_string(backup_time)?;
        Ok(Self {
group,
backup_time,
backup_time_string,
})
} }
pub fn group(&self) -> &BackupGroup { pub fn group(&self) -> &BackupGroup {
@ -225,8 +271,7 @@ impl BackupDir {
&self.backup_time_string &self.backup_time_string
} }
pub fn relative_path(&self) -> PathBuf { pub fn relative_path(&self) -> PathBuf {
let mut relative_path = self.group.group_path(); let mut relative_path = self.group.group_path();
relative_path.push(self.backup_time_string.clone()); relative_path.push(self.backup_time_string.clone());
@ -247,7 +292,8 @@ impl std::str::FromStr for BackupDir {
/// ///
    /// This parses strings like `host/elsa/2020-06-15T05:18:33Z`.
fn from_str(path: &str) -> Result<Self, Self::Err> { fn from_str(path: &str) -> Result<Self, Self::Err> {
let cap = SNAPSHOT_PATH_REGEX.captures(path) let cap = SNAPSHOT_PATH_REGEX
.captures(path)
.ok_or_else(|| format_err!("unable to parse backup snapshot path '{}'", path))?; .ok_or_else(|| format_err!("unable to parse backup snapshot path '{}'", path))?;
BackupDir::with_rfc3339( BackupDir::with_rfc3339(
@ -276,7 +322,6 @@ pub struct BackupInfo {
} }
impl BackupInfo { impl BackupInfo {
pub fn new(base_path: &Path, backup_dir: BackupDir) -> Result<BackupInfo, Error> { pub fn new(base_path: &Path, backup_dir: BackupDir) -> Result<BackupInfo, Error> {
let mut path = base_path.to_owned(); let mut path = base_path.to_owned();
path.push(backup_dir.relative_path()); path.push(backup_dir.relative_path());
@ -287,19 +332,24 @@ impl BackupInfo {
} }
/// Finds the latest backup inside a backup group /// Finds the latest backup inside a backup group
    pub fn last_backup(
        base_path: &Path,
        group: &BackupGroup,
        only_finished: bool,
    ) -> Result<Option<BackupInfo>, Error> {
let backups = group.list_backups(base_path)?; let backups = group.list_backups(base_path)?;
        Ok(backups
.into_iter()
.filter(|item| !only_finished || item.is_finished()) .filter(|item| !only_finished || item.is_finished())
.max_by_key(|item| item.backup_dir.backup_time())) .max_by_key(|item| item.backup_dir.backup_time()))
} }
pub fn sort_list(list: &mut Vec<BackupInfo>, ascendending: bool) { pub fn sort_list(list: &mut Vec<BackupInfo>, ascendending: bool) {
        if ascendending {
            // oldest first
            list.sort_unstable_by(|a, b| a.backup_dir.backup_time.cmp(&b.backup_dir.backup_time));
        } else {
            // newest first
            list.sort_unstable_by(|a, b| b.backup_dir.backup_time.cmp(&a.backup_dir.backup_time));
} }
} }
@ -316,31 +366,52 @@ impl BackupInfo {
pub fn list_backup_groups(base_path: &Path) -> Result<Vec<BackupGroup>, Error> { pub fn list_backup_groups(base_path: &Path) -> Result<Vec<BackupGroup>, Error> {
let mut list = Vec::new(); let mut list = Vec::new();
        tools::scandir(
            libc::AT_FDCWD,
            base_path,
            &BACKUP_TYPE_REGEX,
|l0_fd, backup_type, file_type| {
if file_type != nix::dir::Type::Directory {
return Ok(());
}
tools::scandir(
l0_fd,
backup_type,
&BACKUP_ID_REGEX,
|_, backup_id, file_type| {
if file_type != nix::dir::Type::Directory {
return Ok(());
}
                        list.push(BackupGroup::new(backup_type, backup_id));
                        Ok(())
                    },
                )
            },
        )?;

        Ok(list)
    }
pub fn is_finished(&self) -> bool { pub fn is_finished(&self) -> bool {
// backup is considered unfinished if there is no manifest // backup is considered unfinished if there is no manifest
        self.files
.iter()
.any(|name| name == super::MANIFEST_BLOB_NAME)
} }
} }
fn list_backup_files<P: ?Sized + nix::NixPath>(
dirfd: RawFd,
path: &P,
) -> Result<Vec<String>, Error> {
let mut files = vec![]; let mut files = vec![];
tools::scandir(dirfd, path, &BACKUP_FILE_REGEX, |_, filename, file_type| { tools::scandir(dirfd, path, &BACKUP_FILE_REGEX, |_, filename, file_type| {
        if file_type != nix::dir::Type::File {
return Ok(());
}
files.push(filename.to_owned()); files.push(filename.to_owned());
Ok(()) Ok(())
})?; })?;

View File

@ -180,7 +180,7 @@ fn get_tape_handle(param: &Value) -> Result<LinuxTapeHandle, Error> {
/// ///
/// Positioning is done by first rewinding the tape and then spacing /// Positioning is done by first rewinding the tape and then spacing
/// forward over count file marks. /// forward over count file marks.
fn asf(count: i32, param: Value) -> Result<(), Error> { fn asf(count: usize, param: Value) -> Result<(), Error> {
let mut handle = get_tape_handle(&param)?; let mut handle = get_tape_handle(&param)?;
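
As a rough sketch of the positioning described in the doc comment above, the following hypothetical trait is illustration only and not the actual LinuxTapeHandle API driven by this code; it just shows the "rewind, then space forward" order.

use std::io;

// Hypothetical trait, purely to illustrate the positioning sequence from
// the doc comment; not the real tape driver interface.
trait TapePosition {
    fn rewind(&mut self) -> io::Result<()>;
    fn forward_space_count_files(&mut self, count: usize) -> io::Result<()>;
}

// Absolute positioning: start from the beginning of the tape, then skip
// over `count` file marks.
fn asf_sketch<T: TapePosition>(handle: &mut T, count: usize) -> io::Result<()> {
    handle.rewind()?;
    handle.forward_space_count_files(count)
}

Read this way, the parameter change from i32 to usize in this hunk fits the semantics: a negative file count has no meaning for absolute positioning.
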
@ -212,7 +212,7 @@ fn asf(count: i32, param: Value) -> Result<(), Error> {
/// Backward space count files (position before file mark). /// Backward space count files (position before file mark).
/// ///
/// The tape is positioned on the last block of the previous file. /// The tape is positioned on the last block of the previous file.
fn bsf(count: i32, param: Value) -> Result<(), Error> { fn bsf(count: usize, param: Value) -> Result<(), Error> {
let mut handle = get_tape_handle(&param)?; let mut handle = get_tape_handle(&param)?;
@ -478,7 +478,7 @@ fn erase(fast: Option<bool>, param: Value) -> Result<(), Error> {
/// Forward space count files (position after file mark). /// Forward space count files (position after file mark).
/// ///
/// The tape is positioned on the first block of the next file. /// The tape is positioned on the first block of the next file.
fn fsf(count: i32, param: Value) -> Result<(), Error> { fn fsf(count: usize, param: Value) -> Result<(), Error> {
let mut handle = get_tape_handle(&param)?; let mut handle = get_tape_handle(&param)?;

View File

@ -1,7 +1,5 @@
use std::collections::HashSet; use std::collections::HashSet;
use std::convert::TryFrom;
use std::io::{self, Read, Write, Seek, SeekFrom}; use std::io::{self, Read, Write, Seek, SeekFrom};
use std::os::unix::io::{FromRawFd, RawFd};
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use std::pin::Pin; use std::pin::Pin;
use std::sync::{Arc, Mutex}; use std::sync::{Arc, Mutex};
@ -19,7 +17,7 @@ use pathpatterns::{MatchEntry, MatchType, PatternFlag};
use proxmox::{ use proxmox::{
tools::{ tools::{
time::{strftime_local, epoch_i64}, time::{strftime_local, epoch_i64},
fs::{file_get_contents, file_get_json, replace_file, CreateOptions, image_size}, fs::{file_get_json, replace_file, CreateOptions, image_size},
}, },
api::{ api::{
api, api,
@ -32,7 +30,11 @@ use proxmox::{
}; };
use pxar::accessor::{MaybeReady, ReadAt, ReadAtOperation}; use pxar::accessor::{MaybeReady, ReadAt, ReadAtOperation};
use proxmox_backup::tools::{
self,
StdChannelWriter,
TokioWriterAdapter,
};
use proxmox_backup::api2::types::*; use proxmox_backup::api2::types::*;
use proxmox_backup::api2::version; use proxmox_backup::api2::version;
use proxmox_backup::client::*; use proxmox_backup::client::*;
@ -67,8 +69,18 @@ use proxmox_backup::backup::{
mod proxmox_backup_client; mod proxmox_backup_client;
use proxmox_backup_client::*; use proxmox_backup_client::*;
pub mod proxmox_client_tools;
use proxmox_client_tools::{
complete_archive_name, complete_auth_id, complete_backup_group, complete_backup_snapshot,
complete_backup_source, complete_chunk_size, complete_group_or_snapshot,
complete_img_archive_name, complete_pxar_archive_name, complete_repository, connect,
extract_repository_from_value,
key_source::{
crypto_parameters, format_key_source, get_encryption_key_password, KEYFD_SCHEMA,
KEYFILE_SCHEMA, MASTER_PUBKEY_FD_SCHEMA, MASTER_PUBKEY_FILE_SCHEMA,
},
CHUNK_SIZE_SCHEMA, REPO_URL_SCHEMA,
};
fn record_repository(repo: &BackupRepository) { fn record_repository(repo: &BackupRepository) {
@ -162,7 +174,7 @@ async fn backup_directory<P: AsRef<Path>>(
dir_path: P, dir_path: P,
archive_name: &str, archive_name: &str,
chunk_size: Option<usize>, chunk_size: Option<usize>,
catalog: Arc<Mutex<CatalogWriter<crate::tools::StdChannelWriter>>>, catalog: Arc<Mutex<CatalogWriter<TokioWriterAdapter<StdChannelWriter>>>>,
pxar_create_options: proxmox_backup::pxar::PxarCreateOptions, pxar_create_options: proxmox_backup::pxar::PxarCreateOptions,
upload_options: UploadOptions, upload_options: UploadOptions,
) -> Result<BackupStats, Error> { ) -> Result<BackupStats, Error> {
@ -460,7 +472,7 @@ async fn start_garbage_collection(param: Value) -> Result<Value, Error> {
} }
struct CatalogUploadResult { struct CatalogUploadResult {
catalog_writer: Arc<Mutex<CatalogWriter<crate::tools::StdChannelWriter>>>, catalog_writer: Arc<Mutex<CatalogWriter<TokioWriterAdapter<StdChannelWriter>>>>,
result: tokio::sync::oneshot::Receiver<Result<BackupStats, Error>>, result: tokio::sync::oneshot::Receiver<Result<BackupStats, Error>>,
} }
@ -473,7 +485,7 @@ fn spawn_catalog_upload(
let catalog_chunk_size = 512*1024; let catalog_chunk_size = 512*1024;
let catalog_chunk_stream = ChunkStream::new(catalog_stream, Some(catalog_chunk_size)); let catalog_chunk_stream = ChunkStream::new(catalog_stream, Some(catalog_chunk_size));
let catalog_writer = Arc::new(Mutex::new(CatalogWriter::new(crate::tools::StdChannelWriter::new(catalog_tx))?)); let catalog_writer = Arc::new(Mutex::new(CatalogWriter::new(TokioWriterAdapter::new(StdChannelWriter::new(catalog_tx)))?));
let (catalog_result_tx, catalog_result_rx) = tokio::sync::oneshot::channel(); let (catalog_result_tx, catalog_result_rx) = tokio::sync::oneshot::channel();
@ -499,437 +511,6 @@ fn spawn_catalog_upload(
Ok(CatalogUploadResult { catalog_writer, result: catalog_result_rx }) Ok(CatalogUploadResult { catalog_writer, result: catalog_result_rx })
} }
#[derive(Clone, Debug, Eq, PartialEq)]
enum KeySource {
DefaultKey,
Fd,
Path(String),
}
fn format_key_source(source: &KeySource, key_type: &str) -> String {
match source {
KeySource::DefaultKey => format!("Using default {} key..", key_type),
KeySource::Fd => format!("Using {} key from file descriptor..", key_type),
KeySource::Path(path) => format!("Using {} key from '{}'..", key_type, path),
}
}
#[derive(Clone, Debug, Eq, PartialEq)]
struct KeyWithSource {
pub source: KeySource,
pub key: Vec<u8>,
}
impl KeyWithSource {
pub fn from_fd(key: Vec<u8>) -> Self {
Self {
source: KeySource::Fd,
key,
}
}
pub fn from_default(key: Vec<u8>) -> Self {
Self {
source: KeySource::DefaultKey,
key,
}
}
pub fn from_path(path: String, key: Vec<u8>) -> Self {
Self {
source: KeySource::Path(path),
key,
}
}
}
#[derive(Debug, Eq, PartialEq)]
struct CryptoParams {
mode: CryptMode,
enc_key: Option<KeyWithSource>,
// FIXME switch to openssl::rsa::rsa<openssl::pkey::Public> once that is Eq?
master_pubkey: Option<KeyWithSource>,
}
fn crypto_parameters(param: &Value) -> Result<CryptoParams, Error> {
let keyfile = match param.get("keyfile") {
Some(Value::String(keyfile)) => Some(keyfile),
Some(_) => bail!("bad --keyfile parameter type"),
None => None,
};
let key_fd = match param.get("keyfd") {
Some(Value::Number(key_fd)) => Some(
RawFd::try_from(key_fd
.as_i64()
.ok_or_else(|| format_err!("bad key fd: {:?}", key_fd))?
)
.map_err(|err| format_err!("bad key fd: {:?}: {}", key_fd, err))?
),
Some(_) => bail!("bad --keyfd parameter type"),
None => None,
};
let master_pubkey_file = match param.get("master-pubkey-file") {
Some(Value::String(keyfile)) => Some(keyfile),
Some(_) => bail!("bad --master-pubkey-file parameter type"),
None => None,
};
let master_pubkey_fd = match param.get("master-pubkey-fd") {
Some(Value::Number(key_fd)) => Some(
RawFd::try_from(key_fd
.as_i64()
.ok_or_else(|| format_err!("bad master public key fd: {:?}", key_fd))?
)
.map_err(|err| format_err!("bad public master key fd: {:?}: {}", key_fd, err))?
),
Some(_) => bail!("bad --master-pubkey-fd parameter type"),
None => None,
};
let mode: Option<CryptMode> = match param.get("crypt-mode") {
Some(mode) => Some(serde_json::from_value(mode.clone())?),
None => None,
};
let key = match (keyfile, key_fd) {
(None, None) => None,
(Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
(Some(keyfile), None) => Some(KeyWithSource::from_path(
keyfile.clone(),
file_get_contents(keyfile)?,
)),
(None, Some(fd)) => {
let input = unsafe { std::fs::File::from_raw_fd(fd) };
let mut data = Vec::new();
let _len: usize = { input }.read_to_end(&mut data).map_err(|err| {
format_err!("error reading encryption key from fd {}: {}", fd, err)
})?;
Some(KeyWithSource::from_fd(data))
}
};
let master_pubkey = match (master_pubkey_file, master_pubkey_fd) {
(None, None) => None,
(Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
(Some(keyfile), None) => Some(KeyWithSource::from_path(
keyfile.clone(),
file_get_contents(keyfile)?,
)),
(None, Some(fd)) => {
let input = unsafe { std::fs::File::from_raw_fd(fd) };
let mut data = Vec::new();
let _len: usize = { input }
.read_to_end(&mut data)
.map_err(|err| format_err!("error reading master key from fd {}: {}", fd, err))?;
Some(KeyWithSource::from_fd(data))
}
};
let res = match mode {
// no crypt mode, enable encryption if keys are available
None => match (key, master_pubkey) {
// only default keys if available
(None, None) => match key::read_optional_default_encryption_key()? {
None => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
enc_key => {
let master_pubkey = key::read_optional_default_master_pubkey()?;
CryptoParams {
mode: CryptMode::Encrypt,
enc_key,
master_pubkey,
}
},
},
// explicit master key, default enc key needed
(None, master_pubkey) => match key::read_optional_default_encryption_key()? {
None => bail!("--master-pubkey-file/--master-pubkey-fd specified, but no key available"),
enc_key => {
CryptoParams {
mode: CryptMode::Encrypt,
enc_key,
master_pubkey,
}
},
},
// explicit keyfile, maybe default master key
(enc_key, None) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey: key::read_optional_default_master_pubkey()? },
// explicit keyfile and master key
(enc_key, master_pubkey) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey },
},
// explicitly disabled encryption
Some(CryptMode::None) => match (key, master_pubkey) {
// no keys => OK, no encryption
(None, None) => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
// --keyfile and --crypt-mode=none
(Some(_), _) => bail!("--keyfile/--keyfd and --crypt-mode=none are mutually exclusive"),
// --master-pubkey-file and --crypt-mode=none
(_, Some(_)) => bail!("--master-pubkey-file/--master-pubkey-fd and --crypt-mode=none are mutually exclusive"),
},
// explicitly enabled encryption
Some(mode) => match (key, master_pubkey) {
// no key, maybe master key
(None, master_pubkey) => match key::read_optional_default_encryption_key()? {
None => bail!("--crypt-mode without --keyfile and no default key file available"),
enc_key => {
eprintln!("Encrypting with default encryption key!");
let master_pubkey = match master_pubkey {
None => key::read_optional_default_master_pubkey()?,
master_pubkey => master_pubkey,
};
CryptoParams {
mode,
enc_key,
master_pubkey,
}
},
},
// --keyfile and --crypt-mode other than none
(enc_key, master_pubkey) => {
let master_pubkey = match master_pubkey {
None => key::read_optional_default_master_pubkey()?,
master_pubkey => master_pubkey,
};
CryptoParams { mode, enc_key, master_pubkey }
},
},
};
Ok(res)
}
#[test]
// WARNING: there must only be one test for crypto_parameters as the default key handling is not
// safe w.r.t. concurrency
fn test_crypto_parameters_handling() -> Result<(), Error> {
let some_key = vec![1;1];
let default_key = vec![2;1];
let some_master_key = vec![3;1];
let default_master_key = vec![4;1];
let keypath = "./target/testout/keyfile.test";
let master_keypath = "./target/testout/masterkeyfile.test";
let invalid_keypath = "./target/testout/invalid_keyfile.test";
let no_key_res = CryptoParams {
enc_key: None,
master_pubkey: None,
mode: CryptMode::None,
};
let some_key_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: None,
mode: CryptMode::Encrypt,
};
let some_key_some_master_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: Some(KeyWithSource::from_path(
master_keypath.to_string(),
some_master_key.clone(),
)),
mode: CryptMode::Encrypt,
};
let some_key_default_master_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: Some(KeyWithSource::from_default(default_master_key.clone())),
mode: CryptMode::Encrypt,
};
let some_key_sign_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: None,
mode: CryptMode::SignOnly,
};
let default_key_res = CryptoParams {
enc_key: Some(KeyWithSource::from_default(default_key.clone())),
master_pubkey: None,
mode: CryptMode::Encrypt,
};
let default_key_sign_res = CryptoParams {
enc_key: Some(KeyWithSource::from_default(default_key.clone())),
master_pubkey: None,
mode: CryptMode::SignOnly,
};
replace_file(&keypath, &some_key, CreateOptions::default())?;
replace_file(&master_keypath, &some_master_key, CreateOptions::default())?;
// no params, no default key == no key
let res = crypto_parameters(&json!({}));
assert_eq!(res.unwrap(), no_key_res);
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// crypt mode none == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt/sign-only, no keyfile, no default key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// invalid keyfile parameter always errors
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
// now set a default key
unsafe { key::set_test_encryption_key(Ok(Some(default_key.clone()))); }
// and repeat
// no params but default key == default key
let res = crypto_parameters(&json!({}));
assert_eq!(res.unwrap(), default_key_res);
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// crypt mode none == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt/sign-only, no keyfile, default key == default key with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only"}));
assert_eq!(res.unwrap(), default_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt"}));
assert_eq!(res.unwrap(), default_key_res);
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// invalid keyfile parameter always errors
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
// now make default key retrieval error
unsafe { key::set_test_encryption_key(Err(format_err!("test error"))); }
// and repeat
// no params, default key retrieval errors == Error
assert!(crypto_parameters(&json!({})).is_err());
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// crypt mode none == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt/sign-only, no keyfile, default key error == Error
assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// invalid keyfile parameter always errors
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
// now remove default key again
unsafe { key::set_test_encryption_key(Ok(None)); }
// set a default master key
unsafe { key::set_test_default_master_pubkey(Ok(Some(default_master_key.clone()))); }
// and use an explicit master key
assert!(crypto_parameters(&json!({"master-pubkey-file": master_keypath})).is_err());
// just a default == no key
let res = crypto_parameters(&json!({}));
assert_eq!(res.unwrap(), no_key_res);
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": master_keypath}));
assert_eq!(res.unwrap(), some_key_some_master_res);
// same with fallback to default master key
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_default_master_res);
// crypt mode none == error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "master-pubkey-file": master_keypath})).is_err());
// with just default master key == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt without enc key == error
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt", "master-pubkey-file": master_keypath})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath, "master-pubkey-file": master_keypath})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath, "master-pubkey-file": master_keypath}));
assert_eq!(res.unwrap(), some_key_some_master_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_default_master_res);
// invalid master keyfile parameter always errors when a key is passed, even with a valid
// default master key
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "encrypt"})).is_err());
Ok(())
}
#[api( #[api(
input: { input: {
properties: { properties: {
@ -1160,7 +741,7 @@ async fn create_backup(
); );
let (key, created, fingerprint) = let (key, created, fingerprint) =
decrypt_key(&key_with_source.key, &key::get_encryption_key_password)?; decrypt_key(&key_with_source.key, &get_encryption_key_password)?;
println!("Encryption key fingerprint: {}", fingerprint); println!("Encryption key fingerprint: {}", fingerprint);
let crypt_config = CryptConfig::new(key)?; let crypt_config = CryptConfig::new(key)?;
@ -1510,7 +1091,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
None => None, None => None,
Some(ref key) => { Some(ref key) => {
let (key, _, _) = let (key, _, _) =
decrypt_key(&key.key, &key::get_encryption_key_password).map_err(|err| { decrypt_key(&key.key, &get_encryption_key_password).map_err(|err| {
eprintln!("{}", format_key_source(&key.source, "encryption")); eprintln!("{}", format_key_source(&key.source, "encryption"));
err err
})?; })?;

View File

@ -27,10 +27,13 @@ use proxmox_backup::{
api2::{ api2::{
self, self,
types::{ types::{
Authid,
DATASTORE_SCHEMA, DATASTORE_SCHEMA,
DATASTORE_MAP_LIST_SCHEMA,
DRIVE_NAME_SCHEMA, DRIVE_NAME_SCHEMA,
MEDIA_LABEL_SCHEMA, MEDIA_LABEL_SCHEMA,
MEDIA_POOL_NAME_SCHEMA, MEDIA_POOL_NAME_SCHEMA,
Userid,
}, },
}, },
config::{ config::{
@ -781,8 +784,8 @@ async fn clean_drive(mut param: Value) -> Result<(), Error> {
let mut client = connect_to_localhost()?; let mut client = connect_to_localhost()?;
let path = format!("api2/json/tape/drive/{}/clean-drive", drive); let path = format!("api2/json/tape/drive/{}/clean", drive);
let result = client.post(&path, Some(param)).await?; let result = client.put(&path, Some(param)).await?;
view_task_result(&mut client, result, &output_format).await?; view_task_result(&mut client, result, &output_format).await?;
@ -853,7 +856,7 @@ async fn backup(mut param: Value) -> Result<(), Error> {
input: { input: {
properties: { properties: {
store: { store: {
schema: DATASTORE_SCHEMA, schema: DATASTORE_MAP_LIST_SCHEMA,
}, },
drive: { drive: {
schema: DRIVE_NAME_SCHEMA, schema: DRIVE_NAME_SCHEMA,
@ -863,6 +866,14 @@ async fn backup(mut param: Value) -> Result<(), Error> {
description: "Media set UUID.", description: "Media set UUID.",
type: String, type: String,
}, },
"notify-user": {
type: Userid,
optional: true,
},
owner: {
type: Authid,
optional: true,
},
"output-format": { "output-format": {
schema: OUTPUT_FORMAT, schema: OUTPUT_FORMAT,
optional: true, optional: true,
@ -900,6 +911,11 @@ async fn restore(mut param: Value) -> Result<(), Error> {
type: bool, type: bool,
optional: true, optional: true,
}, },
scan: {
description: "Re-read the whole tape to reconstruct the catalog instead of restoring saved versions.",
type: bool,
optional: true,
},
verbose: { verbose: {
description: "Verbose mode - log all found chunks.", description: "Verbose mode - log all found chunks.",
type: bool, type: bool,

View File

@ -34,6 +34,8 @@ use crate::{
connect, connect,
}; };
use crate::proxmox_client_tools::key_source::get_encryption_key_password;
#[api()] #[api()]
#[derive(Copy, Clone, Serialize)] #[derive(Copy, Clone, Serialize)]
/// Speed test result /// Speed test result
@ -152,7 +154,7 @@ pub async fn benchmark(
let crypt_config = match keyfile { let crypt_config = match keyfile {
None => None, None => None,
Some(path) => { Some(path) => {
let (key, _, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?; let (key, _, _) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?; let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config)) Some(Arc::new(crypt_config))
} }

View File

@ -17,7 +17,6 @@ use crate::{
extract_repository_from_value, extract_repository_from_value,
format_key_source, format_key_source,
record_repository, record_repository,
key::get_encryption_key_password,
decrypt_key, decrypt_key,
api_datastore_latest_snapshot, api_datastore_latest_snapshot,
complete_repository, complete_repository,
@ -38,6 +37,8 @@ use crate::{
Shell, Shell,
}; };
use crate::proxmox_client_tools::key_source::get_encryption_key_password;
#[api( #[api(
input: { input: {
properties: { properties: {

View File

@ -20,114 +20,10 @@ use proxmox_backup::{
tools::paperkey::{generate_paper_key, PaperkeyFormat}, tools::paperkey::{generate_paper_key, PaperkeyFormat},
}; };
use crate::proxmox_client_tools::key_source::{
    find_default_encryption_key, find_default_master_pubkey, get_encryption_key_password,
    place_default_encryption_key, place_default_master_pubkey,
};
pub fn find_default_master_pubkey() -> Result<Option<PathBuf>, Error> {
super::find_xdg_file(
DEFAULT_MASTER_PUBKEY_FILE_NAME,
"default master public key file",
)
}
pub fn place_default_master_pubkey() -> Result<PathBuf, Error> {
super::place_xdg_file(
DEFAULT_MASTER_PUBKEY_FILE_NAME,
"default master public key file",
)
}
pub fn find_default_encryption_key() -> Result<Option<PathBuf>, Error> {
super::find_xdg_file(
DEFAULT_ENCRYPTION_KEY_FILE_NAME,
"default encryption key file",
)
}
pub fn place_default_encryption_key() -> Result<PathBuf, Error> {
super::place_xdg_file(
DEFAULT_ENCRYPTION_KEY_FILE_NAME,
"default encryption key file",
)
}
#[cfg(not(test))]
pub(crate) fn read_optional_default_encryption_key() -> Result<Option<KeyWithSource>, Error> {
find_default_encryption_key()?
.map(|path| file_get_contents(path).map(KeyWithSource::from_default))
.transpose()
}
#[cfg(not(test))]
pub(crate) fn read_optional_default_master_pubkey() -> Result<Option<KeyWithSource>, Error> {
find_default_master_pubkey()?
.map(|path| file_get_contents(path).map(KeyWithSource::from_default))
.transpose()
}
#[cfg(test)]
static mut TEST_DEFAULT_ENCRYPTION_KEY: Result<Option<Vec<u8>>, Error> = Ok(None);
#[cfg(test)]
pub(crate) fn read_optional_default_encryption_key() -> Result<Option<KeyWithSource>, Error> {
// not safe when multiple concurrent test cases end up here!
unsafe {
match &TEST_DEFAULT_ENCRYPTION_KEY {
Ok(Some(key)) => Ok(Some(KeyWithSource::from_default(key.clone()))),
Ok(None) => Ok(None),
Err(_) => bail!("test error"),
}
}
}
#[cfg(test)]
// not safe when multiple concurrent test cases end up here!
pub(crate) unsafe fn set_test_encryption_key(value: Result<Option<Vec<u8>>, Error>) {
TEST_DEFAULT_ENCRYPTION_KEY = value;
}
#[cfg(test)]
static mut TEST_DEFAULT_MASTER_PUBKEY: Result<Option<Vec<u8>>, Error> = Ok(None);
#[cfg(test)]
pub(crate) fn read_optional_default_master_pubkey() -> Result<Option<KeyWithSource>, Error> {
// not safe when multiple concurrent test cases end up here!
unsafe {
match &TEST_DEFAULT_MASTER_PUBKEY {
Ok(Some(key)) => Ok(Some(KeyWithSource::from_default(key.clone()))),
Ok(None) => Ok(None),
Err(_) => bail!("test error"),
}
}
}
#[cfg(test)]
// not safe when multiple concurrent test cases end up here!
pub(crate) unsafe fn set_test_default_master_pubkey(value: Result<Option<Vec<u8>>, Error>) {
TEST_DEFAULT_MASTER_PUBKEY = value;
}
pub fn get_encryption_key_password() -> Result<Vec<u8>, Error> {
// fixme: implement other input methods
use std::env::VarError::*;
match std::env::var("PBS_ENCRYPTION_PASSWORD") {
Ok(p) => return Ok(p.as_bytes().to_vec()),
Err(NotUnicode(_)) => bail!("PBS_ENCRYPTION_PASSWORD contains bad characters"),
Err(NotPresent) => {
// Try another method
}
}
// If we're on a TTY, query the user for a password
if tty::stdin_isatty() {
return Ok(tty::read_password("Encryption Key Password: ")?);
}
bail!("no password input mechanism available");
}
#[api(
input: {

View File

@ -1,5 +1,3 @@
use anyhow::{Context, Error};
mod benchmark;
pub use benchmark::*;
mod mount;
@ -13,29 +11,3 @@ pub use snapshot::*;
pub mod key;
pub fn base_directories() -> Result<xdg::BaseDirectories, Error> {
xdg::BaseDirectories::with_prefix("proxmox-backup").map_err(Error::from)
}
/// Convenience helper for better error messages:
pub fn find_xdg_file(
file_name: impl AsRef<std::path::Path>,
description: &'static str,
) -> Result<Option<std::path::PathBuf>, Error> {
let file_name = file_name.as_ref();
base_directories()
.map(|base| base.find_config_file(file_name))
.with_context(|| format!("error searching for {}", description))
}
pub fn place_xdg_file(
file_name: impl AsRef<std::path::Path>,
description: &'static str,
) -> Result<std::path::PathBuf, Error> {
let file_name = file_name.as_ref();
base_directories()
.and_then(|base| {
base.place_config_file(file_name).map_err(Error::from)
})
.with_context(|| format!("failed to place {} in xdg home", description))
}

View File

@ -43,6 +43,8 @@ use crate::{
BufferedDynamicReadAt,
};
use crate::proxmox_client_tools::key_source::get_encryption_key_password;
#[sortable]
const API_METHOD_MOUNT: ApiMethod = ApiMethod::new(
&ApiHandler::Sync(&mount),
@ -182,7 +184,7 @@ async fn mount_do(param: Value, pipe: Option<Fd>) -> Result<Value, Error> {
None => None,
Some(path) => {
println!("Encryption key file: '{:?}'", path);
let (key, _, fingerprint) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?; let (key, _, fingerprint) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
println!("Encryption key fingerprint: '{}'", fingerprint);
Some(Arc::new(CryptConfig::new(key)?))
}

View File

@ -35,6 +35,8 @@ use crate::{
record_repository,
};
use crate::proxmox_client_tools::key_source::get_encryption_key_password;
#[api(
input: {
properties: {
@ -239,7 +241,7 @@ async fn upload_log(param: Value) -> Result<Value, Error> {
let crypt_config = match crypto.enc_key {
None => None,
Some(key) => {
let (key, _created, _) = decrypt_key(&key.key, &crate::key::get_encryption_key_password)?; let (key, _created, _) = decrypt_key(&key.key, &get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}

View File

@ -0,0 +1,572 @@
use std::convert::TryFrom;
use std::path::PathBuf;
use std::os::unix::io::{FromRawFd, RawFd};
use std::io::Read;
use anyhow::{bail, format_err, Error};
use serde_json::Value;
use proxmox::api::schema::*;
use proxmox::sys::linux::tty;
use proxmox::tools::fs::file_get_contents;
use proxmox_backup::backup::CryptMode;
pub const DEFAULT_ENCRYPTION_KEY_FILE_NAME: &str = "encryption-key.json";
pub const DEFAULT_MASTER_PUBKEY_FILE_NAME: &str = "master-public.pem";
pub const KEYFILE_SCHEMA: Schema =
StringSchema::new("Path to encryption key. All data will be encrypted using this key.")
.schema();
pub const KEYFD_SCHEMA: Schema =
IntegerSchema::new("Pass an encryption key via an already opened file descriptor.")
.minimum(0)
.schema();
pub const MASTER_PUBKEY_FILE_SCHEMA: Schema = StringSchema::new(
"Path to master public key. The encryption key used for a backup will be encrypted using this key and appended to the backup.")
.schema();
pub const MASTER_PUBKEY_FD_SCHEMA: Schema =
IntegerSchema::new("Pass a master public key via an already opened file descriptor.")
.minimum(0)
.schema();
#[derive(Clone, Debug, Eq, PartialEq)]
pub enum KeySource {
DefaultKey,
Fd,
Path(String),
}
pub fn format_key_source(source: &KeySource, key_type: &str) -> String {
match source {
KeySource::DefaultKey => format!("Using default {} key..", key_type),
KeySource::Fd => format!("Using {} key from file descriptor..", key_type),
KeySource::Path(path) => format!("Using {} key from '{}'..", key_type, path),
}
}
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct KeyWithSource {
pub source: KeySource,
pub key: Vec<u8>,
}
impl KeyWithSource {
pub fn from_fd(key: Vec<u8>) -> Self {
Self {
source: KeySource::Fd,
key,
}
}
pub fn from_default(key: Vec<u8>) -> Self {
Self {
source: KeySource::DefaultKey,
key,
}
}
pub fn from_path(path: String, key: Vec<u8>) -> Self {
Self {
source: KeySource::Path(path),
key,
}
}
}
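A minimal illustrative sketch, not part of this change: tagging a key with its provenance and logging it via format_key_source(); the path literal and key bytes are hypothetical.
// illustrative only - hypothetical key material and path
let key = KeyWithSource::from_path("/etc/pbs/enc.key".to_string(), vec![0u8; 32]);
eprintln!("{}", format_key_source(&key.source, "encryption"));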
#[derive(Debug, Eq, PartialEq)]
pub struct CryptoParams {
pub mode: CryptMode,
pub enc_key: Option<KeyWithSource>,
// FIXME switch to openssl::rsa::rsa<openssl::pkey::Public> once that is Eq?
pub master_pubkey: Option<KeyWithSource>,
}
pub fn crypto_parameters(param: &Value) -> Result<CryptoParams, Error> {
let keyfile = match param.get("keyfile") {
Some(Value::String(keyfile)) => Some(keyfile),
Some(_) => bail!("bad --keyfile parameter type"),
None => None,
};
let key_fd = match param.get("keyfd") {
Some(Value::Number(key_fd)) => Some(
RawFd::try_from(key_fd
.as_i64()
.ok_or_else(|| format_err!("bad key fd: {:?}", key_fd))?
)
.map_err(|err| format_err!("bad key fd: {:?}: {}", key_fd, err))?
),
Some(_) => bail!("bad --keyfd parameter type"),
None => None,
};
let master_pubkey_file = match param.get("master-pubkey-file") {
Some(Value::String(keyfile)) => Some(keyfile),
Some(_) => bail!("bad --master-pubkey-file parameter type"),
None => None,
};
let master_pubkey_fd = match param.get("master-pubkey-fd") {
Some(Value::Number(key_fd)) => Some(
RawFd::try_from(key_fd
.as_i64()
.ok_or_else(|| format_err!("bad master public key fd: {:?}", key_fd))?
)
.map_err(|err| format_err!("bad public master key fd: {:?}: {}", key_fd, err))?
),
Some(_) => bail!("bad --master-pubkey-fd parameter type"),
None => None,
};
let mode: Option<CryptMode> = match param.get("crypt-mode") {
Some(mode) => Some(serde_json::from_value(mode.clone())?),
None => None,
};
let key = match (keyfile, key_fd) {
(None, None) => None,
(Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
(Some(keyfile), None) => Some(KeyWithSource::from_path(
keyfile.clone(),
file_get_contents(keyfile)?,
)),
(None, Some(fd)) => {
let input = unsafe { std::fs::File::from_raw_fd(fd) };
let mut data = Vec::new();
let _len: usize = { input }.read_to_end(&mut data).map_err(|err| {
format_err!("error reading encryption key from fd {}: {}", fd, err)
})?;
Some(KeyWithSource::from_fd(data))
}
};
let master_pubkey = match (master_pubkey_file, master_pubkey_fd) {
(None, None) => None,
(Some(_), Some(_)) => bail!("--master-pubkey-file and --master-pubkey-fd are mutually exclusive"),
(Some(keyfile), None) => Some(KeyWithSource::from_path(
keyfile.clone(),
file_get_contents(keyfile)?,
)),
(None, Some(fd)) => {
let input = unsafe { std::fs::File::from_raw_fd(fd) };
let mut data = Vec::new();
let _len: usize = { input }
.read_to_end(&mut data)
.map_err(|err| format_err!("error reading master key from fd {}: {}", fd, err))?;
Some(KeyWithSource::from_fd(data))
}
};
let res = match mode {
// no crypt mode, enable encryption if keys are available
None => match (key, master_pubkey) {
// only default keys if available
(None, None) => match read_optional_default_encryption_key()? {
None => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
enc_key => {
let master_pubkey = read_optional_default_master_pubkey()?;
CryptoParams {
mode: CryptMode::Encrypt,
enc_key,
master_pubkey,
}
},
},
// explicit master key, default enc key needed
(None, master_pubkey) => match read_optional_default_encryption_key()? {
None => bail!("--master-pubkey-file/--master-pubkey-fd specified, but no key available"),
enc_key => {
CryptoParams {
mode: CryptMode::Encrypt,
enc_key,
master_pubkey,
}
},
},
// explicit keyfile, maybe default master key
(enc_key, None) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey: read_optional_default_master_pubkey()? },
// explicit keyfile and master key
(enc_key, master_pubkey) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey },
},
// explicitly disabled encryption
Some(CryptMode::None) => match (key, master_pubkey) {
// no keys => OK, no encryption
(None, None) => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
// --keyfile and --crypt-mode=none
(Some(_), _) => bail!("--keyfile/--keyfd and --crypt-mode=none are mutually exclusive"),
// --master-pubkey-file and --crypt-mode=none
(_, Some(_)) => bail!("--master-pubkey-file/--master-pubkey-fd and --crypt-mode=none are mutually exclusive"),
},
// explicitly enabled encryption
Some(mode) => match (key, master_pubkey) {
// no key, maybe master key
(None, master_pubkey) => match read_optional_default_encryption_key()? {
None => bail!("--crypt-mode without --keyfile and no default key file available"),
enc_key => {
eprintln!("Encrypting with default encryption key!");
let master_pubkey = match master_pubkey {
None => read_optional_default_master_pubkey()?,
master_pubkey => master_pubkey,
};
CryptoParams {
mode,
enc_key,
master_pubkey,
}
},
},
// --keyfile and --crypt-mode other than none
(enc_key, master_pubkey) => {
let master_pubkey = match master_pubkey {
None => read_optional_default_master_pubkey()?,
master_pubkey => master_pubkey,
};
CryptoParams { mode, enc_key, master_pubkey }
},
},
};
Ok(res)
}
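A minimal usage sketch, not part of this change, assuming the CLI arguments were already parsed into a serde_json Value and the hypothetical key file exists on disk: an explicit keyfile without a crypt-mode resolves to CryptMode::Encrypt.
// illustrative only - hypothetical path, file must exist for file_get_contents()
let params = crypto_parameters(&serde_json::json!({ "keyfile": "/path/to/key.json" }))?;
assert_eq!(params.mode, CryptMode::Encrypt);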
pub fn find_default_master_pubkey() -> Result<Option<PathBuf>, Error> {
super::find_xdg_file(
DEFAULT_MASTER_PUBKEY_FILE_NAME,
"default master public key file",
)
}
pub fn place_default_master_pubkey() -> Result<PathBuf, Error> {
super::place_xdg_file(
DEFAULT_MASTER_PUBKEY_FILE_NAME,
"default master public key file",
)
}
pub fn find_default_encryption_key() -> Result<Option<PathBuf>, Error> {
super::find_xdg_file(
DEFAULT_ENCRYPTION_KEY_FILE_NAME,
"default encryption key file",
)
}
pub fn place_default_encryption_key() -> Result<PathBuf, Error> {
super::place_xdg_file(
DEFAULT_ENCRYPTION_KEY_FILE_NAME,
"default encryption key file",
)
}
#[cfg(not(test))]
pub(crate) fn read_optional_default_encryption_key() -> Result<Option<KeyWithSource>, Error> {
find_default_encryption_key()?
.map(|path| file_get_contents(path).map(KeyWithSource::from_default))
.transpose()
}
#[cfg(not(test))]
pub(crate) fn read_optional_default_master_pubkey() -> Result<Option<KeyWithSource>, Error> {
find_default_master_pubkey()?
.map(|path| file_get_contents(path).map(KeyWithSource::from_default))
.transpose()
}
#[cfg(test)]
static mut TEST_DEFAULT_ENCRYPTION_KEY: Result<Option<Vec<u8>>, Error> = Ok(None);
#[cfg(test)]
pub(crate) fn read_optional_default_encryption_key() -> Result<Option<KeyWithSource>, Error> {
// not safe when multiple concurrent test cases end up here!
unsafe {
match &TEST_DEFAULT_ENCRYPTION_KEY {
Ok(Some(key)) => Ok(Some(KeyWithSource::from_default(key.clone()))),
Ok(None) => Ok(None),
Err(_) => bail!("test error"),
}
}
}
#[cfg(test)]
// not safe when multiple concurrent test cases end up here!
pub(crate) unsafe fn set_test_encryption_key(value: Result<Option<Vec<u8>>, Error>) {
TEST_DEFAULT_ENCRYPTION_KEY = value;
}
#[cfg(test)]
static mut TEST_DEFAULT_MASTER_PUBKEY: Result<Option<Vec<u8>>, Error> = Ok(None);
#[cfg(test)]
pub(crate) fn read_optional_default_master_pubkey() -> Result<Option<KeyWithSource>, Error> {
// not safe when multiple concurrent test cases end up here!
unsafe {
match &TEST_DEFAULT_MASTER_PUBKEY {
Ok(Some(key)) => Ok(Some(KeyWithSource::from_default(key.clone()))),
Ok(None) => Ok(None),
Err(_) => bail!("test error"),
}
}
}
#[cfg(test)]
// not safe when multiple concurrent test cases end up here!
pub(crate) unsafe fn set_test_default_master_pubkey(value: Result<Option<Vec<u8>>, Error>) {
TEST_DEFAULT_MASTER_PUBKEY = value;
}
pub fn get_encryption_key_password() -> Result<Vec<u8>, Error> {
// fixme: implement other input methods
use std::env::VarError::*;
match std::env::var("PBS_ENCRYPTION_PASSWORD") {
Ok(p) => return Ok(p.as_bytes().to_vec()),
Err(NotUnicode(_)) => bail!("PBS_ENCRYPTION_PASSWORD contains bad characters"),
Err(NotPresent) => {
// Try another method
}
}
// If we're on a TTY, query the user for a password
if tty::stdin_isatty() {
return Ok(tty::read_password("Encryption Key Password: ")?);
}
bail!("no password input mechanism available");
}
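A minimal sketch of the non-interactive path, not part of this change: exporting PBS_ENCRYPTION_PASSWORD lets get_encryption_key_password() succeed without a TTY.
// illustrative only - password supplied via the environment instead of a TTY prompt
std::env::set_var("PBS_ENCRYPTION_PASSWORD", "secret");
let password = get_encryption_key_password()?; // b"secret".to_vec()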
#[test]
// WARNING: there must only be one test for crypto_parameters as the default key handling is not
// safe w.r.t. concurrency
fn test_crypto_parameters_handling() -> Result<(), Error> {
use serde_json::json;
use proxmox::tools::fs::{replace_file, CreateOptions};
let some_key = vec![1;1];
let default_key = vec![2;1];
let some_master_key = vec![3;1];
let default_master_key = vec![4;1];
let keypath = "./target/testout/keyfile.test";
let master_keypath = "./target/testout/masterkeyfile.test";
let invalid_keypath = "./target/testout/invalid_keyfile.test";
let no_key_res = CryptoParams {
enc_key: None,
master_pubkey: None,
mode: CryptMode::None,
};
let some_key_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: None,
mode: CryptMode::Encrypt,
};
let some_key_some_master_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: Some(KeyWithSource::from_path(
master_keypath.to_string(),
some_master_key.clone(),
)),
mode: CryptMode::Encrypt,
};
let some_key_default_master_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: Some(KeyWithSource::from_default(default_master_key.clone())),
mode: CryptMode::Encrypt,
};
let some_key_sign_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: None,
mode: CryptMode::SignOnly,
};
let default_key_res = CryptoParams {
enc_key: Some(KeyWithSource::from_default(default_key.clone())),
master_pubkey: None,
mode: CryptMode::Encrypt,
};
let default_key_sign_res = CryptoParams {
enc_key: Some(KeyWithSource::from_default(default_key.clone())),
master_pubkey: None,
mode: CryptMode::SignOnly,
};
replace_file(&keypath, &some_key, CreateOptions::default())?;
replace_file(&master_keypath, &some_master_key, CreateOptions::default())?;
// no params, no default key == no key
let res = crypto_parameters(&json!({}));
assert_eq!(res.unwrap(), no_key_res);
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// crypt mode none == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt/sign-only, no keyfile, no default key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// invalid keyfile parameter always errors
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
// now set a default key
unsafe { set_test_encryption_key(Ok(Some(default_key.clone()))); }
// and repeat
// no params but default key == default key
let res = crypto_parameters(&json!({}));
assert_eq!(res.unwrap(), default_key_res);
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// crypt mode none == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt/sign-only, no keyfile, default key == default key with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only"}));
assert_eq!(res.unwrap(), default_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt"}));
assert_eq!(res.unwrap(), default_key_res);
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// invalid keyfile parameter always errors
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
// now make default key retrieval error
unsafe { set_test_encryption_key(Err(format_err!("test error"))); }
// and repeat
// no params, default key retrieval errors == Error
assert!(crypto_parameters(&json!({})).is_err());
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// crypt mode none == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt/sign-only, no keyfile, default key error == Error
assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// invalid keyfile parameter always errors
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
// now remove default key again
unsafe { set_test_encryption_key(Ok(None)); }
// set a default master key
unsafe { set_test_default_master_pubkey(Ok(Some(default_master_key.clone()))); }
// and use an explicit master key
assert!(crypto_parameters(&json!({"master-pubkey-file": master_keypath})).is_err());
// just a default == no key
let res = crypto_parameters(&json!({}));
assert_eq!(res.unwrap(), no_key_res);
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": master_keypath}));
assert_eq!(res.unwrap(), some_key_some_master_res);
// same with fallback to default master key
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_default_master_res);
// crypt mode none == error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "master-pubkey-file": master_keypath})).is_err());
// with just default master key == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt without enc key == error
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt", "master-pubkey-file": master_keypath})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath, "master-pubkey-file": master_keypath})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath, "master-pubkey-file": master_keypath}));
assert_eq!(res.unwrap(), some_key_some_master_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_default_master_res);
// invalid master keyfile parameter always errors when a key is passed, even with a valid
// default master key
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "encrypt"})).is_err());
Ok(())
}

View File

@ -1,8 +1,7 @@
//! Shared tools useful for common CLI clients.
use std::collections::HashMap;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Context, Error};
use serde_json::{json, Value};
use xdg::BaseDirectories;
@ -17,6 +16,8 @@ use proxmox_backup::backup::BackupDir;
use proxmox_backup::client::*;
use proxmox_backup::tools;
pub mod key_source;
const ENV_VAR_PBS_FINGERPRINT: &str = "PBS_FINGERPRINT";
const ENV_VAR_PBS_PASSWORD: &str = "PBS_PASSWORD";
@ -25,24 +26,6 @@ pub const REPO_URL_SCHEMA: Schema = StringSchema::new("Repository URL.")
.max_length(256)
.schema();
pub const KEYFILE_SCHEMA: Schema =
StringSchema::new("Path to encryption key. All data will be encrypted using this key.")
.schema();
pub const KEYFD_SCHEMA: Schema =
IntegerSchema::new("Pass an encryption key via an already opened file descriptor.")
.minimum(0)
.schema();
pub const MASTER_PUBKEY_FILE_SCHEMA: Schema = StringSchema::new(
"Path to master public key. The encryption key used for a backup will be encrypted using this key and appended to the backup.")
.schema();
pub const MASTER_PUBKEY_FD_SCHEMA: Schema =
IntegerSchema::new("Pass a master public key via an already opened file descriptor.")
.minimum(0)
.schema();
pub const CHUNK_SIZE_SCHEMA: Schema = IntegerSchema::new("Chunk size in KB. Must be a power of 2.")
.minimum(64)
.maximum(4096)
@ -364,3 +347,28 @@ pub fn complete_backup_source(arg: &str, param: &HashMap<String, String>) -> Vec
result
}
pub fn base_directories() -> Result<xdg::BaseDirectories, Error> {
xdg::BaseDirectories::with_prefix("proxmox-backup").map_err(Error::from)
}
/// Convenience helper for better error messages:
pub fn find_xdg_file(
file_name: impl AsRef<std::path::Path>,
description: &'static str,
) -> Result<Option<std::path::PathBuf>, Error> {
let file_name = file_name.as_ref();
base_directories()
.map(|base| base.find_config_file(file_name))
.with_context(|| format!("error searching for {}", description))
}
pub fn place_xdg_file(
file_name: impl AsRef<std::path::Path>,
description: &'static str,
) -> Result<std::path::PathBuf, Error> {
let file_name = file_name.as_ref();
base_directories()
.and_then(|base| base.place_config_file(file_name).map_err(Error::from))
.with_context(|| format!("failed to place {} in xdg home", description))
}
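A minimal sketch, not part of this change, of how the XDG helpers above are used by the key_source module to locate the default encryption key below the proxmox-backup config directory:
// illustrative only - looks for ~/.config/proxmox-backup/encryption-key.json
if let Some(path) = find_xdg_file("encryption-key.json", "default encryption key file")? {
    println!("default encryption key at {:?}", path);
}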

View File

@ -177,12 +177,14 @@ fn list_content(
let options = default_table_format_options()
.sortby("media-set-uuid", false)
.sortby("seq-nr", false)
.sortby("store", false)
.sortby("snapshot", false)
.sortby("backup-time", false)
.column(ColumnConfig::new("label-text"))
.column(ColumnConfig::new("pool"))
.column(ColumnConfig::new("media-set-name"))
.column(ColumnConfig::new("seq-nr"))
.column(ColumnConfig::new("store"))
.column(ColumnConfig::new("snapshot"))
.column(ColumnConfig::new("media-set-uuid"))
;

View File

@ -130,22 +130,22 @@ fn extract_archive(
) -> Result<(), Error> {
let mut feature_flags = Flags::DEFAULT;
if no_xattrs {
feature_flags ^= Flags::WITH_XATTRS; feature_flags.remove(Flags::WITH_XATTRS);
}
if no_fcaps {
feature_flags ^= Flags::WITH_FCAPS; feature_flags.remove(Flags::WITH_FCAPS);
}
if no_acls {
feature_flags ^= Flags::WITH_ACL; feature_flags.remove(Flags::WITH_ACL);
}
if no_device_nodes {
feature_flags ^= Flags::WITH_DEVICE_NODES; feature_flags.remove(Flags::WITH_DEVICE_NODES);
}
if no_fifos {
feature_flags ^= Flags::WITH_FIFOS; feature_flags.remove(Flags::WITH_FIFOS);
}
if no_sockets {
feature_flags ^= Flags::WITH_SOCKETS; feature_flags.remove(Flags::WITH_SOCKETS);
}
let pattern = pattern.unwrap_or_else(Vec::new);
@ -353,22 +353,22 @@ async fn create_archive(
let writer = std::io::BufWriter::with_capacity(1024 * 1024, file);
let mut feature_flags = Flags::DEFAULT;
if no_xattrs {
feature_flags ^= Flags::WITH_XATTRS; feature_flags.remove(Flags::WITH_XATTRS);
}
if no_fcaps {
feature_flags ^= Flags::WITH_FCAPS; feature_flags.remove(Flags::WITH_FCAPS);
}
if no_acls {
feature_flags ^= Flags::WITH_ACL; feature_flags.remove(Flags::WITH_ACL);
}
if no_device_nodes {
feature_flags ^= Flags::WITH_DEVICE_NODES; feature_flags.remove(Flags::WITH_DEVICE_NODES);
}
if no_fifos {
feature_flags ^= Flags::WITH_FIFOS; feature_flags.remove(Flags::WITH_FIFOS);
}
if no_sockets {
feature_flags ^= Flags::WITH_SOCKETS; feature_flags.remove(Flags::WITH_SOCKETS);
}
let writer = pxar::encoder::sync::StandardWriter::new(writer);
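A side note on the change above, as a sketch that is not part of this diff: for a bitflags-style type, remove() clears the bit unconditionally, while ^= toggles it and would re-enable a flag that is already absent from the set.
// illustrative only - assumes a bitflags-like Flags type
let mut flags = Flags::DEFAULT;
flags ^= Flags::WITH_XATTRS;      // toggles: wrong if WITH_XATTRS is already cleared
flags.remove(Flags::WITH_XATTRS); // always leaves WITH_XATTRS cleared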

View File

@ -1,12 +1,12 @@
use std::collections::HashSet;
use std::os::unix::fs::OpenOptionsExt;
use std::sync::atomic::{AtomicUsize, Ordering}; use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use anyhow::{bail, format_err, Error};
use futures::*;
use futures::stream::Stream;
use futures::future::AbortHandle;
use futures::stream::Stream;
use futures::*;
use serde_json::{json, Value};
use tokio::io::AsyncReadExt;
use tokio::sync::{mpsc, oneshot};
@ -14,11 +14,11 @@ use tokio_stream::wrappers::ReceiverStream;
use proxmox::tools::digest_to_hex;
use super::merge_known_chunks::{MergedChunkInfo, MergeKnownChunks}; use super::merge_known_chunks::{MergeKnownChunks, MergedChunkInfo};
use crate::backup::*;
use crate::tools::format::HumanByte;
use super::{HttpClient, H2Client}; use super::{H2Client, HttpClient};
pub struct BackupWriter {
h2: H2Client,
@ -28,7 +28,6 @@ pub struct BackupWriter {
}
impl Drop for BackupWriter {
fn drop(&mut self) {
self.abort.abort();
}
@ -48,13 +47,32 @@ pub struct UploadOptions {
pub fixed_size: Option<u64>,
}
struct UploadStats {
chunk_count: usize,
chunk_reused: usize,
size: usize,
size_reused: usize,
size_compressed: usize,
duration: std::time::Duration,
csum: [u8; 32],
}
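The summary printed further down is derived from these fields; roughly, as a sketch that is not part of this diff:
// illustrative only - size_dirty is what actually went over the wire
// let size_dirty = upload_stats.size - upload_stats.size_reused;
// let speed = size_dirty as f64 / upload_stats.duration.as_secs_f64(); // bytes per second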
type UploadQueueSender = mpsc::Sender<(MergedChunkInfo, Option<h2::client::ResponseFuture>)>;
type UploadResultReceiver = oneshot::Receiver<Result<(), Error>>;
impl BackupWriter {
fn new(h2: H2Client, abort: AbortHandle, crypt_config: Option<Arc<CryptConfig>>, verbose: bool) -> Arc<Self> {
Arc::new(Self { h2, abort, crypt_config, verbose })
}
fn new(
h2: H2Client,
abort: AbortHandle,
crypt_config: Option<Arc<CryptConfig>>,
verbose: bool,
) -> Arc<Self> {
Arc::new(Self {
h2,
abort,
crypt_config,
verbose,
})
}
// FIXME: extract into (flattened) parameter struct?
@ -67,9 +85,8 @@ impl BackupWriter {
backup_id: &str,
backup_time: i64,
debug: bool,
benchmark: bool benchmark: bool,
) -> Result<Arc<BackupWriter>, Error> {
let param = json!({
"backup-type": backup_type,
"backup-id": backup_id,
@ -80,34 +97,30 @@ impl BackupWriter {
});
let req = HttpClient::request_builder( let req = HttpClient::request_builder(
client.server(), client.port(), "GET", "/api2/json/backup", Some(param)).unwrap(); client.server(),
client.port(),
"GET",
"/api2/json/backup",
Some(param),
)
.unwrap();
let (h2, abort) = client.start_h2_connection(req, String::from(PROXMOX_BACKUP_PROTOCOL_ID_V1!())).await?; let (h2, abort) = client
.start_h2_connection(req, String::from(PROXMOX_BACKUP_PROTOCOL_ID_V1!()))
.await?;
Ok(BackupWriter::new(h2, abort, crypt_config, debug)) Ok(BackupWriter::new(h2, abort, crypt_config, debug))
} }
pub async fn get( pub async fn get(&self, path: &str, param: Option<Value>) -> Result<Value, Error> {
&self,
path: &str,
param: Option<Value>,
) -> Result<Value, Error> {
self.h2.get(path, param).await self.h2.get(path, param).await
} }
pub async fn put( pub async fn put(&self, path: &str, param: Option<Value>) -> Result<Value, Error> {
&self,
path: &str,
param: Option<Value>,
) -> Result<Value, Error> {
self.h2.put(path, param).await self.h2.put(path, param).await
} }
pub async fn post( pub async fn post(&self, path: &str, param: Option<Value>) -> Result<Value, Error> {
&self,
path: &str,
param: Option<Value>,
) -> Result<Value, Error> {
self.h2.post(path, param).await self.h2.post(path, param).await
} }
@ -118,7 +131,9 @@ impl BackupWriter {
content_type: &str,
data: Vec<u8>,
) -> Result<Value, Error> {
self.h2.upload("POST", path, param, content_type, data).await self.h2
.upload("POST", path, param, content_type, data)
.await
}
pub async fn send_upload_request(
@ -129,9 +144,13 @@ impl BackupWriter {
content_type: &str,
data: Vec<u8>,
) -> Result<h2::client::ResponseFuture, Error> {
let request =
let request = H2Client::request_builder("localhost", method, path, param, Some(content_type)).unwrap(); H2Client::request_builder("localhost", method, path, param, Some(content_type))
let response_future = self.h2.send_request(request, Some(bytes::Bytes::from(data.clone()))).await?; .unwrap();
let response_future = self
.h2
.send_request(request, Some(bytes::Bytes::from(data.clone())))
.await?;
Ok(response_future)
}
@ -163,7 +182,7 @@ impl BackupWriter {
&self,
mut reader: R,
file_name: &str,
) -> Result<BackupStats, Error> {
let mut raw_data = Vec::new();
// fixme: avoid loading into memory
reader.read_to_end(&mut raw_data)?;
@ -171,7 +190,16 @@ impl BackupWriter {
let csum = openssl::sha::sha256(&raw_data);
let param = json!({"encoded-size": raw_data.len(), "file-name": file_name });
let size = raw_data.len() as u64;
let _value = self.h2.upload("POST", "blob", Some(param), "application/octet-stream", raw_data).await?; let _value = self
.h2
.upload(
"POST",
"blob",
Some(param),
"application/octet-stream",
raw_data,
)
.await?;
Ok(BackupStats { size, csum })
}
@ -182,9 +210,11 @@ impl BackupWriter {
options: UploadOptions,
) -> Result<BackupStats, Error> {
let blob = match (options.encrypt, &self.crypt_config) {
(false, _) => DataBlob::encode(&data, None, options.compress)?,
(true, None) => bail!("requested encryption without a crypt config"),
(true, Some(crypt_config)) => DataBlob::encode(&data, Some(crypt_config), options.compress)?, (true, Some(crypt_config)) => {
DataBlob::encode(&data, Some(crypt_config), options.compress)?
}
}; };
let raw_data = blob.into_inner();
@ -192,7 +222,16 @@ impl BackupWriter {
let csum = openssl::sha::sha256(&raw_data);
let param = json!({"encoded-size": size, "file-name": file_name });
let _value = self.h2.upload("POST", "blob", Some(param), "application/octet-stream", raw_data).await?; let _value = self
.h2
.upload(
"POST",
"blob",
Some(param),
"application/octet-stream",
raw_data,
)
.await?;
Ok(BackupStats { size, csum })
}
@ -202,7 +241,6 @@ impl BackupWriter {
file_name: &str,
options: UploadOptions,
) -> Result<BackupStats, Error> {
let src_path = src_path.as_ref();
let mut file = tokio::fs::File::open(src_path)
@ -215,7 +253,8 @@ impl BackupWriter {
.await
.map_err(|err| format_err!("unable to read file {:?} - {}", src_path, err))?;
self.upload_blob_from_data(contents, file_name, options).await self.upload_blob_from_data(contents, file_name, options)
.await
} }
pub async fn upload_stream(
@ -245,72 +284,118 @@ impl BackupWriter {
// try, but ignore errors
match archive_type(archive_name) {
Ok(ArchiveType::FixedIndex) => {
let _ = self.download_previous_fixed_index(archive_name, &manifest, known_chunks.clone()).await; let _ = self
.download_previous_fixed_index(
archive_name,
&manifest,
known_chunks.clone(),
)
.await;
} }
Ok(ArchiveType::DynamicIndex) => { Ok(ArchiveType::DynamicIndex) => {
let _ = self.download_previous_dynamic_index(archive_name, &manifest, known_chunks.clone()).await; let _ = self
.download_previous_dynamic_index(
archive_name,
&manifest,
known_chunks.clone(),
)
.await;
} }
_ => { /* do nothing */ }
}
}
let wid = self.h2.post(&index_path, Some(param)).await?.as_u64().unwrap(); let wid = self
.h2
.post(&index_path, Some(param))
.await?
.as_u64()
.unwrap();
let (chunk_count, chunk_reused, size, size_reused, duration, csum) = let upload_stats = Self::upload_chunk_info_stream(
Self::upload_chunk_info_stream( self.h2.clone(),
self.h2.clone(), wid,
wid, stream,
stream, &prefix,
&prefix, known_chunks.clone(),
known_chunks.clone(), if options.encrypt {
if options.encrypt { self.crypt_config.clone() } else { None }, self.crypt_config.clone()
options.compress, } else {
self.verbose, None
) },
.await?; options.compress,
self.verbose,
)
.await?;
let uploaded = size - size_reused; let size_dirty = upload_stats.size - upload_stats.size_reused;
let vsize_h: HumanByte = size.into(); let size: HumanByte = upload_stats.size.into();
let archive = if self.verbose {
archive_name.to_string()
} else {
crate::tools::format::strip_server_file_extension(archive_name)
};
if archive_name != CATALOG_NAME {
let speed: HumanByte = ((uploaded * 1_000_000) / (duration.as_micros() as usize)).into(); let speed: HumanByte =
let uploaded: HumanByte = uploaded.into(); ((size_dirty * 1_000_000) / (upload_stats.duration.as_micros() as usize)).into();
println!("{}: had to upload {} of {} in {:.2}s, average speed {}/s).", archive, uploaded, vsize_h, duration.as_secs_f64(), speed); let size_dirty: HumanByte = size_dirty.into();
let size_compressed: HumanByte = upload_stats.size_compressed.into();
println!(
"{}: had to backup {} of {} (compressed {}) in {:.2}s",
archive,
size_dirty,
size,
size_compressed,
upload_stats.duration.as_secs_f64()
);
println!("{}: average backup speed: {}/s", archive, speed);
} else { } else {
println!("Uploaded backup catalog ({})", vsize_h); println!("Uploaded backup catalog ({})", size);
} }
if size_reused > 0 && size > 1024*1024 { if upload_stats.size_reused > 0 && upload_stats.size > 1024 * 1024 {
let reused_percent = size_reused as f64 * 100. / size as f64; let reused_percent = upload_stats.size_reused as f64 * 100. / upload_stats.size as f64;
let reused: HumanByte = size_reused.into(); let reused: HumanByte = upload_stats.size_reused.into();
println!("{}: backup was done incrementally, reused {} ({:.1}%)", archive, reused, reused_percent); println!(
"{}: backup was done incrementally, reused {} ({:.1}%)",
archive, reused, reused_percent
);
} }
if self.verbose && chunk_count > 0 { if self.verbose && upload_stats.chunk_count > 0 {
println!("{}: Reused {} from {} chunks.", archive, chunk_reused, chunk_count); println!(
println!("{}: Average chunk size was {}.", archive, HumanByte::from(size/chunk_count)); "{}: Reused {} from {} chunks.",
println!("{}: Average time per request: {} microseconds.", archive, (duration.as_micros())/(chunk_count as u128)); archive, upload_stats.chunk_reused, upload_stats.chunk_count
);
println!(
"{}: Average chunk size was {}.",
archive,
HumanByte::from(upload_stats.size / upload_stats.chunk_count)
);
println!(
"{}: Average time per request: {} microseconds.",
archive,
(upload_stats.duration.as_micros()) / (upload_stats.chunk_count as u128)
);
} }
let param = json!({ let param = json!({
"wid": wid , "wid": wid ,
"chunk-count": chunk_count, "chunk-count": upload_stats.chunk_count,
"size": size, "size": upload_stats.size,
"csum": proxmox::tools::digest_to_hex(&csum), "csum": proxmox::tools::digest_to_hex(&upload_stats.csum),
}); });
let _value = self.h2.post(&close_path, Some(param)).await?; let _value = self.h2.post(&close_path, Some(param)).await?;
Ok(BackupStats { Ok(BackupStats {
size: size as u64, size: upload_stats.size as u64,
csum, csum: upload_stats.csum,
}) })
} }
fn response_queue(verbose: bool) -> ( fn response_queue(
verbose: bool,
) -> (
mpsc::Sender<h2::client::ResponseFuture>, mpsc::Sender<h2::client::ResponseFuture>,
oneshot::Receiver<Result<(), Error>> oneshot::Receiver<Result<(), Error>>,
) { ) {
let (verify_queue_tx, verify_queue_rx) = mpsc::channel(100);
let (verify_result_tx, verify_result_rx) = oneshot::channel();
@ -336,12 +421,16 @@ impl BackupWriter {
response response
.map_err(Error::from) .map_err(Error::from)
.and_then(H2Client::h2api_response) .and_then(H2Client::h2api_response)
.map_ok(move |result| if verbose { println!("RESPONSE: {:?}", result) }) .map_ok(move |result| {
if verbose {
println!("RESPONSE: {:?}", result)
}
})
.map_err(|err| format_err!("pipelined request failed: {}", err)) .map_err(|err| format_err!("pipelined request failed: {}", err))
}) })
.map(|result| { .map(|result| {
let _ignore_closed_channel = verify_result_tx.send(result); let _ignore_closed_channel = verify_result_tx.send(result);
}) }),
); );
(verify_queue_tx, verify_result_rx) (verify_queue_tx, verify_result_rx)
@ -418,9 +507,8 @@ impl BackupWriter {
&self,
archive_name: &str,
manifest: &BackupManifest,
known_chunks: Arc<Mutex<HashSet<[u8;32]>>>, known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
) -> Result<FixedIndexReader, Error> {
let mut tmpfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
@ -428,10 +516,13 @@ impl BackupWriter {
.open("/tmp")?;
let param = json!({ "archive-name": archive_name });
self.h2.download("previous", Some(param), &mut tmpfile).await?; self.h2
.download("previous", Some(param), &mut tmpfile)
.await?;
let index = FixedIndexReader::new(tmpfile) let index = FixedIndexReader::new(tmpfile).map_err(|err| {
.map_err(|err| format_err!("unable to read fixed index '{}' - {}", archive_name, err))?; format_err!("unable to read fixed index '{}' - {}", archive_name, err)
})?;
// Note: do not use values stored in index (not trusted) - instead, compute them again
let (csum, size) = index.compute_csum();
manifest.verify_file(archive_name, &csum, size)?;
@ -443,7 +534,11 @@ impl BackupWriter {
}
if self.verbose {
println!("{}: known chunks list length is {}", archive_name, index.index_count()); println!(
"{}: known chunks list length is {}",
archive_name,
index.index_count()
);
} }
Ok(index) Ok(index)
@ -453,9 +548,8 @@ impl BackupWriter {
&self,
archive_name: &str,
manifest: &BackupManifest,
known_chunks: Arc<Mutex<HashSet<[u8;32]>>>, known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
) -> Result<DynamicIndexReader, Error> {
let mut tmpfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
@ -463,10 +557,13 @@ impl BackupWriter {
.open("/tmp")?;
let param = json!({ "archive-name": archive_name });
self.h2.download("previous", Some(param), &mut tmpfile).await?; self.h2
.download("previous", Some(param), &mut tmpfile)
.await?;
let index = DynamicIndexReader::new(tmpfile) let index = DynamicIndexReader::new(tmpfile).map_err(|err| {
.map_err(|err| format_err!("unable to read dynamic index '{}' - {}", archive_name, err))?; format_err!("unable to read dynamic index '{}' - {}", archive_name, err)
})?;
// Note: do not use values stored in index (not trusted) - instead, compute them again
let (csum, size) = index.compute_csum(); let (csum, size) = index.compute_csum();
manifest.verify_file(archive_name, &csum, size)?; manifest.verify_file(archive_name, &csum, size)?;
@ -478,7 +575,11 @@ impl BackupWriter {
} }
if self.verbose { if self.verbose {
println!("{}: known chunks list length is {}", archive_name, index.index_count()); println!(
"{}: known chunks list length is {}",
archive_name,
index.index_count()
);
} }
Ok(index) Ok(index)
@ -487,23 +588,29 @@ impl BackupWriter {
/// Retrieve backup time of last backup
pub async fn previous_backup_time(&self) -> Result<Option<i64>, Error> {
let data = self.h2.get("previous_backup_time", None).await?;
serde_json::from_value(data) serde_json::from_value(data).map_err(|err| {
.map_err(|err| format_err!("Failed to parse backup time value returned by server - {}", err)) format_err!(
"Failed to parse backup time value returned by server - {}",
err
)
})
} }
/// Download backup manifest (index.json) of last backup
pub async fn download_previous_manifest(&self) -> Result<BackupManifest, Error> {
let mut raw_data = Vec::with_capacity(64 * 1024);
let param = json!({ "archive-name": MANIFEST_BLOB_NAME });
self.h2.download("previous", Some(param), &mut raw_data).await?; self.h2
.download("previous", Some(param), &mut raw_data)
.await?;
let blob = DataBlob::load_from_reader(&mut &raw_data[..])?;
// no expected digest available
let data = blob.decode(self.crypt_config.as_ref().map(Arc::as_ref), None)?;
let manifest = BackupManifest::from_data(&data[..], self.crypt_config.as_ref().map(Arc::as_ref))?; let manifest =
BackupManifest::from_data(&data[..], self.crypt_config.as_ref().map(Arc::as_ref))?;
Ok(manifest)
}
@ -517,12 +624,11 @@ impl BackupWriter {
wid: u64, wid: u64,
stream: impl Stream<Item = Result<bytes::BytesMut, Error>>, stream: impl Stream<Item = Result<bytes::BytesMut, Error>>,
prefix: &str, prefix: &str,
known_chunks: Arc<Mutex<HashSet<[u8;32]>>>, known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
crypt_config: Option<Arc<CryptConfig>>, crypt_config: Option<Arc<CryptConfig>>,
compress: bool, compress: bool,
verbose: bool, verbose: bool,
) -> impl Future<Output = Result<(usize, usize, usize, usize, std::time::Duration, [u8; 32]), Error>> { ) -> impl Future<Output = Result<UploadStats, Error>> {
let total_chunks = Arc::new(AtomicUsize::new(0));
let total_chunks2 = total_chunks.clone();
let known_chunk_count = Arc::new(AtomicUsize::new(0));
@ -530,6 +636,8 @@ impl BackupWriter {
let stream_len = Arc::new(AtomicUsize::new(0));
let stream_len2 = stream_len.clone();
let compressed_stream_len = Arc::new(AtomicU64::new(0));
let compressed_stream_len2 = compressed_stream_len.clone();
let reused_len = Arc::new(AtomicUsize::new(0));
let reused_len2 = reused_len.clone();
@ -547,14 +655,12 @@ impl BackupWriter {
stream
.and_then(move |data| {
let chunk_len = data.len();
total_chunks.fetch_add(1, Ordering::SeqCst);
let offset = stream_len.fetch_add(chunk_len, Ordering::SeqCst) as u64;
let mut chunk_builder = DataChunkBuilder::new(data.as_ref()) let mut chunk_builder = DataChunkBuilder::new(data.as_ref()).compress(compress);
.compress(compress);
if let Some(ref crypt_config) = crypt_config {
chunk_builder = chunk_builder.crypt_config(crypt_config);
@ -568,7 +674,9 @@ impl BackupWriter {
let chunk_end = offset + chunk_len as u64;
if !is_fixed_chunk_size { csum.update(&chunk_end.to_le_bytes()); } if !is_fixed_chunk_size {
csum.update(&chunk_end.to_le_bytes());
}
csum.update(digest);
let chunk_is_known = known_chunks.contains(digest);
@ -577,16 +685,17 @@ impl BackupWriter {
reused_len.fetch_add(chunk_len, Ordering::SeqCst);
future::ok(MergedChunkInfo::Known(vec![(offset, *digest)]))
} else {
let compressed_stream_len2 = compressed_stream_len.clone();
known_chunks.insert(*digest); known_chunks.insert(*digest);
future::ready(chunk_builder future::ready(chunk_builder.build().map(move |(chunk, digest)| {
.build() compressed_stream_len2.fetch_add(chunk.raw_size(), Ordering::SeqCst);
.map(move |(chunk, digest)| MergedChunkInfo::New(ChunkInfo { MergedChunkInfo::New(ChunkInfo {
chunk, chunk,
digest, digest,
chunk_len: chunk_len as u64, chunk_len: chunk_len as u64,
offset, offset,
})) })
) }))
} }
}) })
.merge_known_chunks() .merge_known_chunks()
@ -614,20 +723,28 @@ impl BackupWriter {
});
let ct = "application/octet-stream";
let request = H2Client::request_builder("localhost", "POST", &upload_chunk_path, Some(param), Some(ct)).unwrap(); let request = H2Client::request_builder(
"localhost",
"POST",
&upload_chunk_path,
Some(param),
Some(ct),
)
.unwrap();
let upload_data = Some(bytes::Bytes::from(chunk_data));
let new_info = MergedChunkInfo::Known(vec![(offset, digest)]);
future::Either::Left(h2 future::Either::Left(h2.send_request(request, upload_data).and_then(
.send_request(request, upload_data) move |response| async move {
.and_then(move |response| async move {
upload_queue upload_queue
.send((new_info, Some(response))) .send((new_info, Some(response)))
.await .await
.map_err(|err| format_err!("failed to send to upload queue: {}", err)) .map_err(|err| {
}) format_err!("failed to send to upload queue: {}", err)
) })
},
))
} else {
future::Either::Right(async move {
upload_queue
@ -637,31 +754,37 @@ impl BackupWriter {
})
}
})
.then(move |result| async move { .then(move |result| async move { upload_result.await?.and(result) }.boxed())
upload_result.await?.and(result)
}.boxed())
.and_then(move |_| {
let duration = start_time.elapsed();
let total_chunks = total_chunks2.load(Ordering::SeqCst); let chunk_count = total_chunks2.load(Ordering::SeqCst);
let known_chunk_count = known_chunk_count2.load(Ordering::SeqCst); let chunk_reused = known_chunk_count2.load(Ordering::SeqCst);
let stream_len = stream_len2.load(Ordering::SeqCst); let size = stream_len2.load(Ordering::SeqCst);
let reused_len = reused_len2.load(Ordering::SeqCst); let size_reused = reused_len2.load(Ordering::SeqCst);
let size_compressed = compressed_stream_len2.load(Ordering::SeqCst) as usize;
let mut guard = index_csum_2.lock().unwrap();
let csum = guard.take().unwrap().finish();
futures::future::ok((total_chunks, known_chunk_count, stream_len, reused_len, duration, csum)) futures::future::ok(UploadStats {
chunk_count,
chunk_reused,
size,
size_reused,
size_compressed,
duration,
csum,
})
})
}
/// Upload speed test - prints result to stderr
pub async fn upload_speedtest(&self, verbose: bool) -> Result<f64, Error> {
let mut data = vec![];
// generate pseudo random byte sequence
for i in 0..1024*1024 { for i in 0..1024 * 1024 {
for j in 0..4 { for j in 0..4 {
let byte = ((i >> (j<<3))&0xff) as u8; let byte = ((i >> (j << 3)) & 0xff) as u8;
data.push(byte); data.push(byte);
} }
} }
@ -680,9 +803,15 @@ impl BackupWriter {
break; break;
} }
if verbose { eprintln!("send test data ({} bytes)", data.len()); } if verbose {
let request = H2Client::request_builder("localhost", "POST", "speedtest", None, None).unwrap(); eprintln!("send test data ({} bytes)", data.len());
let request_future = self.h2.send_request(request, Some(bytes::Bytes::from(data.clone()))).await?; }
let request =
H2Client::request_builder("localhost", "POST", "speedtest", None, None).unwrap();
let request_future = self
.h2
.send_request(request, Some(bytes::Bytes::from(data.clone())))
.await?;
upload_queue.send(request_future).await?; upload_queue.send(request_future).await?;
} }
@ -691,9 +820,16 @@ impl BackupWriter {
let _ = upload_result.await?; let _ = upload_result.await?;
eprintln!("Uploaded {} chunks in {} seconds.", repeat, start_time.elapsed().as_secs()); eprintln!(
let speed = ((item_len*(repeat as usize)) as f64)/start_time.elapsed().as_secs_f64(); "Uploaded {} chunks in {} seconds.",
eprintln!("Time per request: {} microseconds.", (start_time.elapsed().as_micros())/(repeat as u128)); repeat,
start_time.elapsed().as_secs()
);
let speed = ((item_len * (repeat as usize)) as f64) / start_time.elapsed().as_secs_f64();
eprintln!(
"Time per request: {} microseconds.",
(start_time.elapsed().as_micros()) / (repeat as u128)
);
Ok(speed)
}
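A minimal caller sketch, not part of this diff, assuming an established BackupWriter named writer:
// illustrative only
let speed = writer.upload_speedtest(false).await?; // bytes per second
println!("upload speed: {:.2} B/s", speed);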

View File

@ -13,6 +13,10 @@ use nix::fcntl::OFlag;
use nix::sys::stat::Mode;
use crate::backup::CatalogWriter;
use crate::tools::{
StdChannelWriter,
TokioWriterAdapter,
};
/// Stream implementation to encode and upload .pxar archives.
///
@ -45,10 +49,10 @@ impl PxarBackupStream {
let error = Arc::new(Mutex::new(None));
let error2 = Arc::clone(&error);
let handler = async move {
let writer = std::io::BufWriter::with_capacity( let writer = TokioWriterAdapter::new(std::io::BufWriter::with_capacity(
buffer_size,
crate::tools::StdChannelWriter::new(tx), StdChannelWriter::new(tx),
); ));
let verbose = options.verbose;


@ -12,13 +12,12 @@ use hyper::client::Client;
use hyper::Body; use hyper::Body;
use pin_project::pin_project; use pin_project::pin_project;
use serde_json::Value; use serde_json::Value;
use tokio::io::{ReadBuf, AsyncRead, AsyncWrite, AsyncWriteExt}; use tokio::io::{AsyncRead, AsyncWrite, AsyncWriteExt, ReadBuf};
use tokio::net::UnixStream; use tokio::net::UnixStream;
use crate::tools; use crate::tools;
use proxmox::api::error::HttpError; use proxmox::api::error::HttpError;
/// Port below 1024 is privileged, this is intentional so only root (on host) can connect
pub const DEFAULT_VSOCK_PORT: u16 = 807; pub const DEFAULT_VSOCK_PORT: u16 = 807;
#[derive(Clone)] #[derive(Clone)]
@ -138,43 +137,48 @@ pub struct VsockClient {
client: Client<VsockConnector>, client: Client<VsockConnector>,
cid: i32, cid: i32,
port: u16, port: u16,
auth: Option<String>,
} }
impl VsockClient {
    pub fn new(cid: i32, port: u16, auth: Option<String>) -> Self {
        let conn = VsockConnector {};
        let client = Client::builder().build::<_, Body>(conn);
        Self {
            client,
            cid,
            port,
            auth,
        }
    }

    pub async fn get(&self, path: &str, data: Option<Value>) -> Result<Value, Error> {
        let req = self.request_builder("GET", path, data)?;
        self.api_request(req).await
    }

    pub async fn post(&self, path: &str, data: Option<Value>) -> Result<Value, Error> {
        let req = self.request_builder("POST", path, data)?;
        self.api_request(req).await
    }

    pub async fn download(
        &self,
        path: &str,
        data: Option<Value>,
        output: &mut (dyn AsyncWrite + Send + Unpin),
    ) -> Result<(), Error> {
        let req = self.request_builder("GET", path, data)?;
        let client = self.client.clone();
        let resp = client
            .request(req)
            .await
            .map_err(|_| format_err!("vsock download request timed out"))?;
        let status = resp.status();
        if !status.is_success() {
            Self::api_response(resp).await.map(|_| ())?
        } else {
            resp.into_body()
                .map_err(Error::from)
@ -212,47 +216,43 @@ impl VsockClient {
.await .await
} }
    fn request_builder(
        &self,
        method: &str,
        path: &str,
        data: Option<Value>,
    ) -> Result<Request<Body>, Error> {
        let path = path.trim_matches('/');
        let url: Uri = format!("vsock://{}:{}/{}", self.cid, self.port, path).parse()?;

        let make_builder = |content_type: &str, url: &Uri| {
            let mut builder = Request::builder()
                .method(method)
                .uri(url)
                .header(hyper::header::CONTENT_TYPE, content_type);
            if let Some(auth) = &self.auth {
                builder = builder.header(hyper::header::AUTHORIZATION, auth);
            }
            builder
        };

        if let Some(data) = data {
            if method == "POST" {
                let builder = make_builder("application/json", &url);
                let request = builder.body(Body::from(data.to_string()))?;
                return Ok(request);
            } else {
                let query = tools::json_object_to_query(data)?;
                let url: Uri =
                    format!("vsock://{}:{}/{}?{}", self.cid, self.port, path, query).parse()?;
                let builder = make_builder("application/x-www-form-urlencoded", &url);
                let request = builder.body(Body::empty())?;
                return Ok(request);
            }
        }

        let builder = make_builder("application/x-www-form-urlencoded", &url);
        let request = builder.body(Body::empty())?;
        Ok(request)
    }
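With the auth token wired through, every request built here carries an Authorization header when one was supplied. A hedged usage sketch (the ticket string and API path are made up for illustration):

    async fn guest_status(cid: i32, ticket: String) -> Result<Value, Error> {
        // default privileged vsock port (see DEFAULT_VSOCK_PORT above)
        let client = VsockClient::new(cid, DEFAULT_VSOCK_PORT, Some(ticket));
        client.get("api2/json/status", None).await
    }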


@ -752,10 +752,7 @@ fn get_metadata(fd: RawFd, stat: &FileStat, flags: Flags, fs_magic: i64) -> Resu
flags: 0, flags: 0,
uid: stat.st_uid, uid: stat.st_uid,
gid: stat.st_gid, gid: stat.st_gid,
                mtime: pxar::format::StatxTimestamp::new(stat.st_mtime, stat.st_mtime_nsec as u32),
}, },
..Default::default() ..Default::default()
}; };
@ -768,7 +765,7 @@ fn get_metadata(fd: RawFd, stat: &FileStat, flags: Flags, fs_magic: i64) -> Resu
} }
fn get_fcaps(meta: &mut Metadata, fd: RawFd, flags: Flags) -> Result<(), Error> { fn get_fcaps(meta: &mut Metadata, fd: RawFd, flags: Flags) -> Result<(), Error> {
if flags.contains(Flags::WITH_FCAPS) { if !flags.contains(Flags::WITH_FCAPS) {
return Ok(()); return Ok(());
} }
@ -790,7 +787,7 @@ fn get_xattr_fcaps_acl(
proc_path: &Path, proc_path: &Path,
flags: Flags, flags: Flags,
) -> Result<(), Error> { ) -> Result<(), Error> {
if flags.contains(Flags::WITH_XATTRS) { if !flags.contains(Flags::WITH_XATTRS) {
return Ok(()); return Ok(());
} }
@ -879,7 +876,7 @@ fn get_quota_project_id(
return Ok(()); return Ok(());
} }
if flags.contains(Flags::WITH_QUOTA_PROJID) { if !flags.contains(Flags::WITH_QUOTA_PROJID) {
return Ok(()); return Ok(());
} }
@ -914,7 +911,7 @@ fn get_quota_project_id(
} }
fn get_acl(metadata: &mut Metadata, proc_path: &Path, flags: Flags) -> Result<(), Error> { fn get_acl(metadata: &mut Metadata, proc_path: &Path, flags: Flags) -> Result<(), Error> {
if flags.contains(Flags::WITH_ACL) { if !flags.contains(Flags::WITH_ACL) {
return Ok(()); return Ok(());
} }
@ -1009,6 +1006,7 @@ fn process_acl(
metadata.acl.users = acl_user; metadata.acl.users = acl_user;
metadata.acl.groups = acl_group; metadata.acl.groups = acl_group;
metadata.acl.group_obj = acl_group_obj;
} }
acl::ACL_TYPE_DEFAULT => { acl::ACL_TYPE_DEFAULT => {
if user_obj_permissions != None if user_obj_permissions != None
@ -1028,13 +1026,11 @@ fn process_acl(
metadata.acl.default_users = acl_user; metadata.acl.default_users = acl_user;
metadata.acl.default_groups = acl_group; metadata.acl.default_groups = acl_group;
metadata.acl.default = acl_default;
} }
_ => bail!("Unexpected ACL type encountered"), _ => bail!("Unexpected ACL type encountered"),
} }
metadata.acl.group_obj = acl_group_obj;
metadata.acl.default = acl_default;
Ok(()) Ok(())
} }


@ -1,6 +1,6 @@
use std::ffi::{OsStr, OsString}; use std::ffi::OsString;
use std::os::unix::io::{AsRawFd, RawFd}; use std::os::unix::io::{AsRawFd, RawFd};
use std::path::PathBuf; use std::path::{Path, PathBuf};
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use nix::dir::Dir; use nix::dir::Dir;
@ -78,10 +78,6 @@ impl PxarDir {
pub fn metadata(&self) -> &Metadata { pub fn metadata(&self) -> &Metadata {
&self.metadata &self.metadata
} }
pub fn file_name(&self) -> &OsStr {
&self.file_name
}
} }
pub struct PxarDirStack { pub struct PxarDirStack {
@ -159,4 +155,8 @@ impl PxarDirStack {
.try_as_borrowed_fd() .try_as_borrowed_fd()
.ok_or_else(|| format_err!("lost track of directory file descriptors")) .ok_or_else(|| format_err!("lost track of directory file descriptors"))
} }
pub fn path(&self) -> &Path {
&self.path
}
} }


@ -285,6 +285,8 @@ impl Extractor {
/// When done with a directory we can apply its metadata if it has been created. /// When done with a directory we can apply its metadata if it has been created.
pub fn leave_directory(&mut self) -> Result<(), Error> { pub fn leave_directory(&mut self) -> Result<(), Error> {
let path_info = self.dir_stack.path().to_owned();
let dir = self let dir = self
.dir_stack .dir_stack
.pop() .pop()
@ -296,7 +298,7 @@ impl Extractor {
self.feature_flags, self.feature_flags,
dir.metadata(), dir.metadata(),
fd.as_raw_fd(), fd.as_raw_fd(),
&CString::new(dir.file_name().as_bytes())?, &path_info,
&mut self.on_error, &mut self.on_error,
) )
.map_err(|err| format_err!("failed to apply directory metadata: {}", err))?; .map_err(|err| format_err!("failed to apply directory metadata: {}", err))?;
@ -329,6 +331,7 @@ impl Extractor {
metadata, metadata,
parent, parent,
file_name, file_name,
self.dir_stack.path(),
&mut self.on_error, &mut self.on_error,
) )
} }
@ -382,6 +385,7 @@ impl Extractor {
metadata, metadata,
parent, parent,
file_name, file_name,
self.dir_stack.path(),
&mut self.on_error, &mut self.on_error,
) )
} }
@ -437,7 +441,7 @@ impl Extractor {
self.feature_flags, self.feature_flags,
metadata, metadata,
file.as_raw_fd(), file.as_raw_fd(),
file_name, self.dir_stack.path(),
&mut self.on_error, &mut self.on_error,
) )
} }
@ -494,7 +498,7 @@ impl Extractor {
self.feature_flags, self.feature_flags,
metadata, metadata,
file.as_raw_fd(), file.as_raw_fd(),
file_name, self.dir_stack.path(),
&mut self.on_error, &mut self.on_error,
) )
} }


@ -1,5 +1,6 @@
use std::ffi::{CStr, CString}; use std::ffi::{CStr, CString};
use std::os::unix::io::{AsRawFd, FromRawFd, RawFd}; use std::os::unix::io::{AsRawFd, FromRawFd, RawFd};
use std::path::Path;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use nix::errno::Errno; use nix::errno::Errno;
@ -62,6 +63,7 @@ pub fn apply_at(
metadata: &Metadata, metadata: &Metadata,
parent: RawFd, parent: RawFd,
file_name: &CStr, file_name: &CStr,
path_info: &Path,
on_error: &mut (dyn FnMut(Error) -> Result<(), Error> + Send), on_error: &mut (dyn FnMut(Error) -> Result<(), Error> + Send),
) -> Result<(), Error> { ) -> Result<(), Error> {
let fd = proxmox::tools::fd::Fd::openat( let fd = proxmox::tools::fd::Fd::openat(
@ -71,7 +73,7 @@ pub fn apply_at(
Mode::empty(), Mode::empty(),
)?; )?;
apply(flags, metadata, fd.as_raw_fd(), file_name, on_error) apply(flags, metadata, fd.as_raw_fd(), path_info, on_error)
} }
pub fn apply_initial_flags( pub fn apply_initial_flags(
@ -94,7 +96,7 @@ pub fn apply(
flags: Flags, flags: Flags,
metadata: &Metadata, metadata: &Metadata,
fd: RawFd, fd: RawFd,
file_name: &CStr, path_info: &Path,
on_error: &mut (dyn FnMut(Error) -> Result<(), Error> + Send), on_error: &mut (dyn FnMut(Error) -> Result<(), Error> + Send),
) -> Result<(), Error> { ) -> Result<(), Error> {
let c_proc_path = CString::new(format!("/proc/self/fd/{}", fd)).unwrap(); let c_proc_path = CString::new(format!("/proc/self/fd/{}", fd)).unwrap();
@ -116,7 +118,7 @@ pub fn apply(
apply_xattrs(flags, c_proc_path.as_ptr(), metadata, &mut skip_xattrs) apply_xattrs(flags, c_proc_path.as_ptr(), metadata, &mut skip_xattrs)
.or_else(&mut *on_error)?; .or_else(&mut *on_error)?;
add_fcaps(flags, c_proc_path.as_ptr(), metadata, &mut skip_xattrs).or_else(&mut *on_error)?; add_fcaps(flags, c_proc_path.as_ptr(), metadata, &mut skip_xattrs).or_else(&mut *on_error)?;
apply_acls(flags, &c_proc_path, metadata) apply_acls(flags, &c_proc_path, metadata, path_info)
.map_err(|err| format_err!("failed to apply acls: {}", err)) .map_err(|err| format_err!("failed to apply acls: {}", err))
.or_else(&mut *on_error)?; .or_else(&mut *on_error)?;
apply_quota_project_id(flags, fd, metadata).or_else(&mut *on_error)?; apply_quota_project_id(flags, fd, metadata).or_else(&mut *on_error)?;
@ -147,7 +149,7 @@ pub fn apply(
Err(err) => { Err(err) => {
on_error(format_err!( on_error(format_err!(
"failed to restore mtime attribute on {:?}: {}", "failed to restore mtime attribute on {:?}: {}",
file_name, path_info,
err err
))?; ))?;
} }
@ -227,7 +229,12 @@ fn apply_xattrs(
Ok(()) Ok(())
} }
fn apply_acls(flags: Flags, c_proc_path: &CStr, metadata: &Metadata) -> Result<(), Error> { fn apply_acls(
flags: Flags,
c_proc_path: &CStr,
metadata: &Metadata,
path_info: &Path,
) -> Result<(), Error> {
if !flags.contains(Flags::WITH_ACL) || metadata.acl.is_empty() { if !flags.contains(Flags::WITH_ACL) || metadata.acl.is_empty() {
return Ok(()); return Ok(());
} }
@ -257,11 +264,17 @@ fn apply_acls(flags: Flags, c_proc_path: &CStr, metadata: &Metadata) -> Result<(
acl.add_entry_full(acl::ACL_GROUP_OBJ, None, group_obj.permissions.0)?; acl.add_entry_full(acl::ACL_GROUP_OBJ, None, group_obj.permissions.0)?;
} }
None => { None => {
                let mode = acl::mode_group_to_acl_permissions(metadata.stat.mode);

                acl.add_entry_full(acl::ACL_GROUP_OBJ, None, mode)?;

                if !metadata.acl.users.is_empty() || !metadata.acl.groups.is_empty() {
                    eprintln!(
                        "Warning: {:?}: Missing GROUP_OBJ entry in ACL, resetting to value of MASK",
                        path_info,
                    );
                    acl.add_entry_full(acl::ACL_MASK, None, mode)?;
                }
} }
} }


@ -89,3 +89,5 @@ mod report;
pub use report::*; pub use report::*;
pub mod ticket; pub mod ticket;
pub mod auth;

src/server/auth.rs Normal file

@ -0,0 +1,101 @@
//! Provides authentication primitives for the HTTP server
use anyhow::{bail, format_err, Error};
use crate::tools::ticket::Ticket;
use crate::auth_helpers::*;
use crate::tools;
use crate::config::cached_user_info::CachedUserInfo;
use crate::api2::types::{Authid, Userid};
use hyper::header;
use percent_encoding::percent_decode_str;
pub struct UserAuthData {
ticket: String,
csrf_token: Option<String>,
}
pub enum AuthData {
User(UserAuthData),
ApiToken(String),
}
pub fn extract_auth_data(headers: &http::HeaderMap) -> Option<AuthData> {
if let Some(raw_cookie) = headers.get(header::COOKIE) {
if let Ok(cookie) = raw_cookie.to_str() {
if let Some(ticket) = tools::extract_cookie(cookie, "PBSAuthCookie") {
let csrf_token = match headers.get("CSRFPreventionToken").map(|v| v.to_str()) {
Some(Ok(v)) => Some(v.to_owned()),
_ => None,
};
return Some(AuthData::User(UserAuthData {
ticket,
csrf_token,
}));
}
}
}
match headers.get(header::AUTHORIZATION).map(|v| v.to_str()) {
Some(Ok(v)) => {
if v.starts_with("PBSAPIToken ") || v.starts_with("PBSAPIToken=") {
Some(AuthData::ApiToken(v["PBSAPIToken ".len()..].to_owned()))
} else {
None
}
},
_ => None,
}
}
pub fn check_auth(
method: &hyper::Method,
auth_data: &AuthData,
user_info: &CachedUserInfo,
) -> Result<Authid, Error> {
match auth_data {
AuthData::User(user_auth_data) => {
let ticket = user_auth_data.ticket.clone();
let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
let userid: Userid = Ticket::<super::ticket::ApiTicket>::parse(&ticket)?
.verify_with_time_frame(public_auth_key(), "PBS", None, -300..ticket_lifetime)?
.require_full()?;
let auth_id = Authid::from(userid.clone());
if !user_info.is_active_auth_id(&auth_id) {
bail!("user account disabled or expired.");
}
if method != hyper::Method::GET {
if let Some(csrf_token) = &user_auth_data.csrf_token {
verify_csrf_prevention_token(csrf_secret(), &userid, &csrf_token, -300, ticket_lifetime)?;
} else {
bail!("missing CSRF prevention token");
}
}
Ok(auth_id)
},
AuthData::ApiToken(api_token) => {
let mut parts = api_token.splitn(2, ':');
let tokenid = parts.next()
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokenid: Authid = tokenid.parse()?;
if !user_info.is_active_auth_id(&tokenid) {
bail!("user account or token disabled or expired.");
}
let tokensecret = parts.next()
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokensecret = percent_decode_str(tokensecret)
.decode_utf8()
.map_err(|_| format_err!("failed to decode API token header"))?;
crate::config::token_shadow::verify_secret(&tokenid, &tokensecret)?;
Ok(tokenid)
}
}
}
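For context, server/rest.rs is the intended consumer of these helpers; a sketch of how a request handler can combine them (the surrounding names are assumptions, not part of this file):

    fn authenticate_request(
        method: &hyper::Method,
        headers: &http::HeaderMap,
        user_info: &CachedUserInfo,
    ) -> Result<Authid, Error> {
        match extract_auth_data(headers) {
            // cookie/CSRF ticket or PBSAPIToken header found: verify it
            Some(auth_data) => check_auth(method, &auth_data, user_info),
            None => bail!("no authentication credentials provided"),
        }
    }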


@ -1,10 +1,11 @@
use anyhow::Error; use anyhow::Error;
use serde_json::json; use serde_json::json;
use handlebars::{Handlebars, Helper, Context, RenderError, RenderContext, Output, HelperResult}; use handlebars::{Handlebars, Helper, Context, RenderError, RenderContext, Output, HelperResult, TemplateError};
use proxmox::tools::email::sendmail; use proxmox::tools::email::sendmail;
use proxmox::api::schema::parse_property_string; use proxmox::api::schema::parse_property_string;
use proxmox::try_block;
use crate::{ use crate::{
config::datastore::DataStoreConfig, config::datastore::DataStoreConfig,
@ -148,6 +149,14 @@ Datastore: {{job.store}}
Tape Pool: {{job.pool}} Tape Pool: {{job.pool}}
Tape Drive: {{job.drive}} Tape Drive: {{job.drive}}
{{#if snapshot-list ~}}
Snapshots included:
{{#each snapshot-list~}}
{{this}}
{{/each~}}
{{/if}}
Duration: {{duration}}
Tape Backup successful. Tape Backup successful.
@ -181,30 +190,48 @@ lazy_static::lazy_static!{
static ref HANDLEBARS: Handlebars<'static> = { static ref HANDLEBARS: Handlebars<'static> = {
let mut hb = Handlebars::new(); let mut hb = Handlebars::new();
        let result: Result<(), TemplateError> = try_block!({
            hb.set_strict_mode(true);
            hb.register_escape_fn(handlebars::no_escape);

            hb.register_helper("human-bytes", Box::new(handlebars_humam_bytes_helper));
            hb.register_helper("relative-percentage", Box::new(handlebars_relative_percentage_helper));

            hb.register_template_string("gc_ok_template", GC_OK_TEMPLATE)?;
            hb.register_template_string("gc_err_template", GC_ERR_TEMPLATE)?;

            hb.register_template_string("verify_ok_template", VERIFY_OK_TEMPLATE)?;
            hb.register_template_string("verify_err_template", VERIFY_ERR_TEMPLATE)?;

            hb.register_template_string("sync_ok_template", SYNC_OK_TEMPLATE)?;
            hb.register_template_string("sync_err_template", SYNC_ERR_TEMPLATE)?;

            hb.register_template_string("tape_backup_ok_template", TAPE_BACKUP_OK_TEMPLATE)?;
            hb.register_template_string("tape_backup_err_template", TAPE_BACKUP_ERR_TEMPLATE)?;

            hb.register_template_string("package_update_template", PACKAGE_UPDATES_TEMPLATE)?;

            Ok(())
        });

        if let Err(err) = result {
            eprintln!("error during template registration: {}", err);
        }

        hb
}; };
} }
/// Summary of a successful Tape Job
#[derive(Default)]
pub struct TapeBackupJobSummary {
/// The list of snaphots backed up
pub snapshot_list: Vec<String>,
/// The total time of the backup job
pub duration: std::time::Duration,
}
fn send_job_status_mail( fn send_job_status_mail(
email: &str, email: &str,
subject: &str, subject: &str,
@ -402,14 +429,18 @@ pub fn send_tape_backup_status(
id: Option<&str>, id: Option<&str>,
job: &TapeBackupJobSetup, job: &TapeBackupJobSetup,
result: &Result<(), Error>, result: &Result<(), Error>,
summary: TapeBackupJobSummary,
) -> Result<(), Error> { ) -> Result<(), Error> {
let (fqdn, port) = get_server_url(); let (fqdn, port) = get_server_url();
let duration: crate::tools::systemd::time::TimeSpan = summary.duration.into();
let mut data = json!({ let mut data = json!({
"job": job, "job": job,
"fqdn": fqdn, "fqdn": fqdn,
"port": port, "port": port,
"id": id, "id": id,
"snapshot-list": summary.snapshot_list,
"duration": duration.to_string(),
}); });
let text = match result { let text = match result {
@ -600,3 +631,23 @@ fn handlebars_relative_percentage_helper(
} }
Ok(()) Ok(())
} }
#[test]
fn test_template_register() {
HANDLEBARS.get_helper("human-bytes").unwrap();
HANDLEBARS.get_helper("relative-percentage").unwrap();
assert!(HANDLEBARS.has_template("gc_ok_template"));
assert!(HANDLEBARS.has_template("gc_err_template"));
assert!(HANDLEBARS.has_template("verify_ok_template"));
assert!(HANDLEBARS.has_template("verify_err_template"));
assert!(HANDLEBARS.has_template("sync_ok_template"));
assert!(HANDLEBARS.has_template("sync_err_template"));
assert!(HANDLEBARS.has_template("tape_backup_ok_template"));
assert!(HANDLEBARS.has_template("tape_backup_err_template"));
assert!(HANDLEBARS.has_template("package_update_template"));
}
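A sketch of how a tape backup job could fill the new summary before mailing the result (the caller-side names `start` and `snapshot_list` are assumptions for illustration):

    fn notify(
        email: &str,
        id: Option<&str>,
        setup: &TapeBackupJobSetup,
        result: &Result<(), Error>,
        snapshot_list: Vec<String>,
        start: std::time::Instant,
    ) -> Result<(), Error> {
        let summary = TapeBackupJobSummary {
            snapshot_list,
            duration: start.elapsed(),
        };
        send_tape_backup_status(email, id, setup, result, summary)
    }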


@ -9,48 +9,41 @@ use std::task::{Context, Poll};
use anyhow::{bail, format_err, Error};
use futures::future::{self, FutureExt, TryFutureExt};
use futures::stream::TryStreamExt;
use hyper::body::HttpBody;
use hyper::header::{self, HeaderMap};
use hyper::http::request::Parts;
use hyper::{Body, Request, Response, StatusCode};
use lazy_static::lazy_static;
use regex::Regex;
use serde_json::{json, Value};
use tokio::fs::File;
use tokio::time::Instant;
use url::form_urlencoded;

use proxmox::api::schema::{
    ObjectSchemaType,
    ParameterSchema,
    parse_parameter_strings,
    parse_simple_value,
    verify_json_object,
};
use proxmox::api::{
    check_api_permission, ApiHandler, ApiMethod, HttpError, Permission, RpcEnvironment,
    RpcEnvironmentType,
};
use proxmox::http_err;

use super::environment::RestEnvironment;
use super::formatter::*;
use super::ApiConfig;
use super::auth::{check_auth, extract_auth_data};
use crate::api2::types::{Authid, Userid};
use crate::auth_helpers::*;
use crate::config::cached_user_info::CachedUserInfo;
use crate::tools;
use crate::tools::FileLogger;
extern "C" { fn tzset(); } extern "C" {
fn tzset();
}
pub struct RestServer { pub struct RestServer {
pub api_config: Arc<ApiConfig>, pub api_config: Arc<ApiConfig>,
@ -59,13 +52,16 @@ pub struct RestServer {
const MAX_URI_QUERY_LENGTH: usize = 3072; const MAX_URI_QUERY_LENGTH: usize = 3072;
impl RestServer { impl RestServer {
pub fn new(api_config: ApiConfig) -> Self { pub fn new(api_config: ApiConfig) -> Self {
        Self {
            api_config: Arc::new(api_config),
        }
} }
} }
impl tower_service::Service<&Pin<Box<tokio_openssl::SslStream<tokio::net::TcpStream>>>>
    for RestServer
{
type Response = ApiService; type Response = ApiService;
type Error = Error; type Error = Error;
type Future = Pin<Box<dyn Future<Output = Result<ApiService, Error>> + Send>>; type Future = Pin<Box<dyn Future<Output = Result<ApiService, Error>> + Send>>;
@ -74,14 +70,17 @@ impl tower_service::Service<&Pin<Box<tokio_openssl::SslStream<tokio::net::TcpStr
Poll::Ready(Ok(())) Poll::Ready(Ok(()))
} }
    fn call(
        &mut self,
        ctx: &Pin<Box<tokio_openssl::SslStream<tokio::net::TcpStream>>>,
    ) -> Self::Future {
        match ctx.get_ref().peer_addr() {
            Err(err) => future::err(format_err!("unable to get peer address - {}", err)).boxed(),
            Ok(peer) => future::ok(ApiService {
                peer,
                api_config: self.api_config.clone(),
            })
            .boxed(),
} }
} }
} }
@ -97,12 +96,12 @@ impl tower_service::Service<&tokio::net::TcpStream> for RestServer {
fn call(&mut self, ctx: &tokio::net::TcpStream) -> Self::Future { fn call(&mut self, ctx: &tokio::net::TcpStream) -> Self::Future {
match ctx.peer_addr() { match ctx.peer_addr() {
            Err(err) => future::err(format_err!("unable to get peer address - {}", err)).boxed(),
            Ok(peer) => future::ok(ApiService {
                peer,
                api_config: self.api_config.clone(),
            })
            .boxed(),
} }
} }
} }
@ -122,8 +121,9 @@ impl tower_service::Service<&tokio::net::UnixStream> for RestServer {
let fake_peer = "0.0.0.0:807".parse().unwrap(); let fake_peer = "0.0.0.0:807".parse().unwrap();
future::ok(ApiService { future::ok(ApiService {
peer: fake_peer, peer: fake_peer,
            api_config: self.api_config.clone(),
        })
        .boxed()
} }
} }
@ -140,8 +140,9 @@ fn log_response(
resp: &Response<Body>, resp: &Response<Body>,
user_agent: Option<String>, user_agent: Option<String>,
) { ) {
    if resp.extensions().get::<NoLogExtension>().is_some() {
        return;
    };
    // we also log URL-too-long requests, so avoid messages bigger than PIPE_BUF (4k on Linux)
    // to profit from atomicity guarantees for O_APPEND opened logfiles
@ -157,7 +158,15 @@ fn log_response(
message = &data.0; message = &data.0;
} }
        log::error!(
            "{} {}: {} {}: [client {}] {}",
            method.as_str(),
            path,
            status.as_str(),
            reason,
            peer,
            message
        );
} }
if let Some(logfile) = logfile { if let Some(logfile) = logfile {
let auth_id = match resp.extensions().get::<Authid>() { let auth_id = match resp.extensions().get::<Authid>() {
@ -169,20 +178,17 @@ fn log_response(
let datetime = proxmox::tools::time::strftime_local("%d/%m/%Y:%H:%M:%S %z", now) let datetime = proxmox::tools::time::strftime_local("%d/%m/%Y:%H:%M:%S %z", now)
.unwrap_or_else(|_| "-".to_string()); .unwrap_or_else(|_| "-".to_string());
        logfile.lock().unwrap().log(format!(
            "{} - {} [{}] \"{} {}\" {} {} {}",
            peer.ip(),
            auth_id,
            datetime,
            method.as_str(),
            path,
            status.as_str(),
            resp.body().size_hint().lower(),
            user_agent.unwrap_or_else(|| "-".to_string()),
        ));
} }
} }
pub fn auth_logger() -> Result<FileLogger, Error> { pub fn auth_logger() -> Result<FileLogger, Error> {
@ -208,11 +214,13 @@ fn get_proxied_peer(headers: &HeaderMap) -> Option<std::net::SocketAddr> {
fn get_user_agent(headers: &HeaderMap) -> Option<String> { fn get_user_agent(headers: &HeaderMap) -> Option<String> {
let agent = headers.get(header::USER_AGENT)?.to_str(); let agent = headers.get(header::USER_AGENT)?.to_str();
    agent
        .map(|s| {
            let mut s = s.to_owned();
            s.truncate(128);
            s
        })
        .ok()
} }
impl tower_service::Service<Request<Body>> for ApiService { impl tower_service::Service<Request<Body>> for ApiService {
@ -260,7 +268,6 @@ fn parse_query_parameters<S: 'static + BuildHasher + Send>(
parts: &Parts, parts: &Parts,
uri_param: &HashMap<String, String, S>, uri_param: &HashMap<String, String, S>,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let mut param_list: Vec<(String, String)> = vec![]; let mut param_list: Vec<(String, String)> = vec![];
if !form.is_empty() { if !form.is_empty() {
@ -271,7 +278,9 @@ fn parse_query_parameters<S: 'static + BuildHasher + Send>(
if let Some(query_str) = parts.uri.query() { if let Some(query_str) = parts.uri.query() {
for (k, v) in form_urlencoded::parse(query_str.as_bytes()).into_owned() { for (k, v) in form_urlencoded::parse(query_str.as_bytes()).into_owned() {
if k == "_dc" { continue; } // skip extjs "disable cache" parameter if k == "_dc" {
continue;
} // skip extjs "disable cache" parameter
param_list.push((k, v)); param_list.push((k, v));
} }
} }
@ -291,7 +300,6 @@ async fn get_request_parameters<S: 'static + BuildHasher + Send>(
req_body: Body, req_body: Body,
uri_param: HashMap<String, String, S>, uri_param: HashMap<String, String, S>,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let mut is_json = false; let mut is_json = false;
if let Some(value) = parts.headers.get(header::CONTENT_TYPE) { if let Some(value) = parts.headers.get(header::CONTENT_TYPE) {
@ -306,19 +314,22 @@ async fn get_request_parameters<S: 'static + BuildHasher + Send>(
} }
} }
    let body = TryStreamExt::map_err(req_body, |err| {
        http_err!(BAD_REQUEST, "Problems reading request body: {}", err)
    })
    .try_fold(Vec::new(), |mut acc, chunk| async move {
        // FIXME: max request body size?
        if acc.len() + chunk.len() < 64 * 1024 {
            acc.extend_from_slice(&*chunk);
            Ok(acc)
        } else {
            Err(http_err!(BAD_REQUEST, "Request body too large"))
        }
    })
    .await?;

    let utf8_data =
        std::str::from_utf8(&body).map_err(|err| format_err!("Request body not uft8: {}", err))?;
if is_json { if is_json {
let mut params: Value = serde_json::from_str(utf8_data)?; let mut params: Value = serde_json::from_str(utf8_data)?;
@ -342,7 +353,6 @@ async fn proxy_protected_request(
req_body: Body, req_body: Body,
peer: &std::net::SocketAddr, peer: &std::net::SocketAddr,
) -> Result<Response<Body>, Error> { ) -> Result<Response<Body>, Error> {
let mut uri_parts = parts.uri.clone().into_parts(); let mut uri_parts = parts.uri.clone().into_parts();
uri_parts.scheme = Some(http::uri::Scheme::HTTP); uri_parts.scheme = Some(http::uri::Scheme::HTTP);
@ -352,9 +362,10 @@ async fn proxy_protected_request(
parts.uri = new_uri; parts.uri = new_uri;
let mut request = Request::from_parts(parts, req_body); let mut request = Request::from_parts(parts, req_body);
    request.headers_mut().insert(
        header::FORWARDED,
        format!("for=\"{}\";", peer).parse().unwrap(),
    );
let reload_timezone = info.reload_timezone; let reload_timezone = info.reload_timezone;
@ -367,7 +378,11 @@ async fn proxy_protected_request(
}) })
.await?; .await?;
    if reload_timezone {
        unsafe {
            tzset();
        }
    }
Ok(resp) Ok(resp)
} }
@ -380,7 +395,6 @@ pub async fn handle_api_request<Env: RpcEnvironment, S: 'static + BuildHasher +
req_body: Body, req_body: Body,
uri_param: HashMap<String, String, S>, uri_param: HashMap<String, String, S>,
) -> Result<Response<Body>, Error> { ) -> Result<Response<Body>, Error> {
let delay_unauth_time = std::time::Instant::now() + std::time::Duration::from_millis(3000); let delay_unauth_time = std::time::Instant::now() + std::time::Duration::from_millis(3000);
let result = match info.handler { let result = match info.handler {
@ -389,12 +403,13 @@ pub async fn handle_api_request<Env: RpcEnvironment, S: 'static + BuildHasher +
(handler)(parts, req_body, params, info, Box::new(rpcenv)).await (handler)(parts, req_body, params, info, Box::new(rpcenv)).await
} }
        ApiHandler::Sync(handler) => {
            let params =
                get_request_parameters(info.parameters, parts, req_body, uri_param).await?;
            (handler)(params, info, &mut rpcenv).map(|data| (formatter.format_data)(data, &rpcenv))
        }
        ApiHandler::Async(handler) => {
            let params =
                get_request_parameters(info.parameters, parts, req_body, uri_param).await?;
            (handler)(params, info, &mut rpcenv)
                .await
                .map(|data| (formatter.format_data)(data, &rpcenv))
@ -413,7 +428,11 @@ pub async fn handle_api_request<Env: RpcEnvironment, S: 'static + BuildHasher +
} }
}; };
    if info.reload_timezone {
        unsafe {
            tzset();
        }
    }
Ok(resp) Ok(resp)
} }
@ -424,8 +443,7 @@ fn get_index(
language: Option<String>, language: Option<String>,
api: &Arc<ApiConfig>, api: &Arc<ApiConfig>,
parts: Parts, parts: Parts,
) -> Response<Body> { ) -> Response<Body> {
let nodename = proxmox::tools::nodename(); let nodename = proxmox::tools::nodename();
let user = userid.as_ref().map(|u| u.as_str()).unwrap_or(""); let user = userid.as_ref().map(|u| u.as_str()).unwrap_or("");
@ -462,9 +480,7 @@ fn get_index(
let (ct, index) = match api.render_template(template_file, &data) { let (ct, index) = match api.render_template(template_file, &data) {
Ok(index) => ("text/html", index), Ok(index) => ("text/html", index),
        Err(err) => ("text/plain", format!("Error rendering template: {}", err)),
    };
let mut resp = Response::builder() let mut resp = Response::builder()
@ -481,7 +497,6 @@ fn get_index(
} }
fn extension_to_content_type(filename: &Path) -> (&'static str, bool) { fn extension_to_content_type(filename: &Path) -> (&'static str, bool) {
if let Some(ext) = filename.extension().and_then(|osstr| osstr.to_str()) { if let Some(ext) = filename.extension().and_then(|osstr| osstr.to_str()) {
return match ext { return match ext {
"css" => ("text/css", false), "css" => ("text/css", false),
@ -510,7 +525,6 @@ fn extension_to_content_type(filename: &Path) -> (&'static str, bool) {
} }
async fn simple_static_file_download(filename: PathBuf) -> Result<Response<Body>, Error> { async fn simple_static_file_download(filename: PathBuf) -> Result<Response<Body>, Error> {
let (content_type, _nocomp) = extension_to_content_type(&filename); let (content_type, _nocomp) = extension_to_content_type(&filename);
use tokio::io::AsyncReadExt; use tokio::io::AsyncReadExt;
@ -527,7 +541,8 @@ async fn simple_static_file_download(filename: PathBuf) -> Result<Response<Body>
let mut response = Response::new(data.into()); let mut response = Response::new(data.into());
response.headers_mut().insert( response.headers_mut().insert(
header::CONTENT_TYPE, header::CONTENT_TYPE,
header::HeaderValue::from_static(content_type)); header::HeaderValue::from_static(content_type),
);
Ok(response) Ok(response)
} }
@ -542,22 +557,20 @@ async fn chuncked_static_file_download(filename: PathBuf) -> Result<Response<Bod
.map_ok(|bytes| bytes.freeze()); .map_ok(|bytes| bytes.freeze());
let body = Body::wrap_stream(payload); let body = Body::wrap_stream(payload);
    // FIXME: set other headers ?
    Ok(Response::builder()
        .status(StatusCode::OK)
        .header(header::CONTENT_TYPE, content_type)
        .body(body)
        .unwrap())
} }
async fn handle_static_file_download(filename: PathBuf) -> Result<Response<Body>, Error> { async fn handle_static_file_download(filename: PathBuf) -> Result<Response<Body>, Error> {
let metadata = tokio::fs::metadata(filename.clone()) let metadata = tokio::fs::metadata(filename.clone())
.map_err(|err| http_err!(BAD_REQUEST, "File access problems: {}", err)) .map_err(|err| http_err!(BAD_REQUEST, "File access problems: {}", err))
.await?; .await?;
if metadata.len() < 1024*32 { if metadata.len() < 1024 * 32 {
simple_static_file_download(filename).await simple_static_file_download(filename).await
} else { } else {
chuncked_static_file_download(filename).await chuncked_static_file_download(filename).await
@ -574,102 +587,11 @@ fn extract_lang_header(headers: &http::HeaderMap) -> Option<String> {
None None
} }
struct UserAuthData{
ticket: String,
csrf_token: Option<String>,
}
enum AuthData {
User(UserAuthData),
ApiToken(String),
}
fn extract_auth_data(headers: &http::HeaderMap) -> Option<AuthData> {
if let Some(raw_cookie) = headers.get(header::COOKIE) {
if let Ok(cookie) = raw_cookie.to_str() {
if let Some(ticket) = tools::extract_cookie(cookie, "PBSAuthCookie") {
let csrf_token = match headers.get("CSRFPreventionToken").map(|v| v.to_str()) {
Some(Ok(v)) => Some(v.to_owned()),
_ => None,
};
return Some(AuthData::User(UserAuthData {
ticket,
csrf_token,
}));
}
}
}
match headers.get(header::AUTHORIZATION).map(|v| v.to_str()) {
Some(Ok(v)) => {
if v.starts_with("PBSAPIToken ") || v.starts_with("PBSAPIToken=") {
Some(AuthData::ApiToken(v["PBSAPIToken ".len()..].to_owned()))
} else {
None
}
},
_ => None,
}
}
fn check_auth(
method: &hyper::Method,
auth_data: &AuthData,
user_info: &CachedUserInfo,
) -> Result<Authid, Error> {
match auth_data {
AuthData::User(user_auth_data) => {
let ticket = user_auth_data.ticket.clone();
let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
let userid: Userid = Ticket::<super::ticket::ApiTicket>::parse(&ticket)?
.verify_with_time_frame(public_auth_key(), "PBS", None, -300..ticket_lifetime)?
.require_full()?;
let auth_id = Authid::from(userid.clone());
if !user_info.is_active_auth_id(&auth_id) {
bail!("user account disabled or expired.");
}
if method != hyper::Method::GET {
if let Some(csrf_token) = &user_auth_data.csrf_token {
verify_csrf_prevention_token(csrf_secret(), &userid, &csrf_token, -300, ticket_lifetime)?;
} else {
bail!("missing CSRF prevention token");
}
}
Ok(auth_id)
},
AuthData::ApiToken(api_token) => {
let mut parts = api_token.splitn(2, ':');
let tokenid = parts.next()
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokenid: Authid = tokenid.parse()?;
if !user_info.is_active_auth_id(&tokenid) {
bail!("user account or token disabled or expired.");
}
let tokensecret = parts.next()
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokensecret = percent_decode_str(tokensecret)
.decode_utf8()
.map_err(|_| format_err!("failed to decode API token header"))?;
crate::config::token_shadow::verify_secret(&tokenid, &tokensecret)?;
Ok(tokenid)
}
}
}
async fn handle_request( async fn handle_request(
api: Arc<ApiConfig>, api: Arc<ApiConfig>,
req: Request<Body>, req: Request<Body>,
peer: &std::net::SocketAddr, peer: &std::net::SocketAddr,
) -> Result<Response<Body>, Error> { ) -> Result<Response<Body>, Error> {
let (parts, body) = req.into_parts(); let (parts, body) = req.into_parts();
let method = parts.method.clone(); let method = parts.method.clone();
let (path, components) = tools::normalize_uri_path(parts.uri.path())?; let (path, components) = tools::normalize_uri_path(parts.uri.path())?;
@ -695,15 +617,13 @@ async fn handle_request(
let access_forbidden_time = std::time::Instant::now() + std::time::Duration::from_millis(500); let access_forbidden_time = std::time::Instant::now() + std::time::Duration::from_millis(500);
    if comp_len >= 1 && components[0] == "api2" {
        if comp_len >= 2 {
            let format = components[1];

            let formatter = match format {
                "json" => &JSON_FORMATTER,
                "extjs" => &EXTJS_FORMATTER,
                _ => bail!("Unsupported output format '{}'.", format),
            };
let mut uri_param = HashMap::new(); let mut uri_param = HashMap::new();
@ -725,8 +645,10 @@ async fn handle_request(
Ok(authid) => rpcenv.set_auth_id(Some(authid.to_string())), Ok(authid) => rpcenv.set_auth_id(Some(authid.to_string())),
Err(err) => { Err(err) => {
let peer = peer.ip(); let peer = peer.ip();
                    auth_logger()?.log(format!(
                        "authentication failure; rhost={} msg={}",
                        peer, err
                    ));
// always delay unauthorized calls by 3 seconds (from start of request) // always delay unauthorized calls by 3 seconds (from start of request)
let err = http_err!(UNAUTHORIZED, "authentication failed - {}", err); let err = http_err!(UNAUTHORIZED, "authentication failed - {}", err);
@ -743,7 +665,12 @@ async fn handle_request(
} }
Some(api_method) => { Some(api_method) => {
let auth_id = rpcenv.get_auth_id(); let auth_id = rpcenv.get_auth_id();
                    if !check_api_permission(
                        api_method.access.permission,
                        auth_id.as_deref(),
                        &uri_param,
                        user_info.as_ref(),
                    ) {
let err = http_err!(FORBIDDEN, "permission check failed"); let err = http_err!(FORBIDDEN, "permission check failed");
tokio::time::sleep_until(Instant::from_std(access_forbidden_time)).await; tokio::time::sleep_until(Instant::from_std(access_forbidden_time)).await;
return Ok((formatter.format_error)(err)); return Ok((formatter.format_error)(err));
@ -752,7 +679,8 @@ async fn handle_request(
let result = if api_method.protected && env_type == RpcEnvironmentType::PUBLIC { let result = if api_method.protected && env_type == RpcEnvironmentType::PUBLIC {
proxy_protected_request(api_method, parts, body, peer).await proxy_protected_request(api_method, parts, body, peer).await
} else { } else {
handle_api_request(rpcenv, api_method, formatter, parts, body, uri_param).await handle_api_request(rpcenv, api_method, formatter, parts, body, uri_param)
.await
}; };
let mut response = match result { let mut response = match result {
@ -768,9 +696,8 @@ async fn handle_request(
return Ok(response); return Ok(response);
} }
} }
} }
} else { } else {
// not Auth required for accessing files! // not Auth required for accessing files!
if method != hyper::Method::GET { if method != hyper::Method::GET {
@ -784,8 +711,14 @@ async fn handle_request(
Ok(auth_id) if !auth_id.is_token() => { Ok(auth_id) if !auth_id.is_token() => {
let userid = auth_id.user(); let userid = auth_id.user();
let new_csrf_token = assemble_csrf_prevention_token(csrf_secret(), userid); let new_csrf_token = assemble_csrf_prevention_token(csrf_secret(), userid);
                    return Ok(get_index(
                        Some(userid.clone()),
                        Some(new_csrf_token),
                        language,
                        &api,
                        parts,
                    ));
                }
_ => { _ => {
tokio::time::sleep_until(Instant::from_std(delay_unauth_time)).await; tokio::time::sleep_until(Instant::from_std(delay_unauth_time)).await;
return Ok(get_index(None, None, language, &api, parts)); return Ok(get_index(None, None, language, &api, parts));


@ -28,6 +28,7 @@ use crate::{
SENSE_KEY_UNIT_ATTENTION, SENSE_KEY_UNIT_ATTENTION,
SENSE_KEY_NOT_READY, SENSE_KEY_NOT_READY,
InquiryInfo, InquiryInfo,
ScsiError,
scsi_ascii_to_string, scsi_ascii_to_string,
scsi_inquiry, scsi_inquiry,
}, },
@ -103,7 +104,7 @@ fn execute_scsi_command<F: AsRawFd>(
if !retry { if !retry {
bail!("{} failed: {}", error_prefix, err); bail!("{} failed: {}", error_prefix, err);
} }
if let Some(ref sense) = err.sense { if let ScsiError::Sense(ref sense) = err {
if sense.sense_key == SENSE_KEY_NO_SENSE || if sense.sense_key == SENSE_KEY_NO_SENSE ||
sense.sense_key == SENSE_KEY_RECOVERED_ERROR || sense.sense_key == SENSE_KEY_RECOVERED_ERROR ||


@ -242,32 +242,6 @@ impl LinuxTapeHandle {
Ok(()) Ok(())
} }
pub fn forward_space_count_files(&mut self, count: i32) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTFSF, mt_count: count, };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| {
format_err!("forward space {} files failed - {}", count, err)
})?;
Ok(())
}
pub fn backward_space_count_files(&mut self, count: i32) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTBSF, mt_count: count, };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| {
format_err!("backward space {} files failed - {}", count, err)
})?;
Ok(())
}
/// Set tape compression feature /// Set tape compression feature
pub fn set_compression(&self, on: bool) -> Result<(), Error> { pub fn set_compression(&self, on: bool) -> Result<(), Error> {
@ -467,6 +441,32 @@ impl TapeDriver for LinuxTapeHandle {
Ok(()) Ok(())
} }
fn forward_space_count_files(&mut self, count: usize) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTFSF, mt_count: i32::try_from(count)? };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| {
format_err!("forward space {} files failed - {}", count, err)
})?;
Ok(())
}
fn backward_space_count_files(&mut self, count: usize) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTBSF, mt_count: i32::try_from(count)? };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| {
format_err!("backward space {} files failed - {}", count, err)
})?;
Ok(())
}
fn rewind(&mut self) -> Result<(), Error> { fn rewind(&mut self) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTREW, mt_count: 1, }; let cmd = mtop { mt_op: MTCmd::MTREW, mt_count: 1, };


@ -87,6 +87,26 @@ pub trait TapeDriver {
/// We assume this flushes the tape write buffer. /// We assume this flushes the tape write buffer.
fn move_to_eom(&mut self) -> Result<(), Error>; fn move_to_eom(&mut self) -> Result<(), Error>;
/// Move to last file
fn move_to_last_file(&mut self) -> Result<(), Error> {
self.move_to_eom()?;
if self.current_file_number()? == 0 {
bail!("move_to_last_file failed - media contains no data");
}
self.backward_space_count_files(2)?;
Ok(())
}
/// Forward space count files. The tape is positioned on the first block of the next file.
fn forward_space_count_files(&mut self, count: usize) -> Result<(), Error>;
/// Backward space count files. The tape is positioned on the last block of the previous file.
fn backward_space_count_files(&mut self, count: usize) -> Result<(), Error>;
/// Current file number /// Current file number
fn current_file_number(&mut self) -> Result<u64, Error>; fn current_file_number(&mut self) -> Result<u64, Error>;
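A small illustration of how the new positioning primitives compose (hypothetical helper, not part of the change): starting from a rewound tape, forward-spacing n filemarks leaves the drive at the first block of file n.

    fn seek_to_file(drive: &mut dyn TapeDriver, n: usize) -> Result<(), Error> {
        drive.rewind()?;                     // position: file 0, block 0
        drive.forward_space_count_files(n)?; // position: first block of file n
        Ok(())
    }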


@ -296,6 +296,51 @@ impl TapeDriver for VirtualTapeHandle {
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?; .map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
*pos = index.files; *pos = index.files;
self.store_status(&status)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
Ok(())
}
None => bail!("drive is empty (no tape loaded)."),
}
}
fn forward_space_count_files(&mut self, count: usize) -> Result<(), Error> {
let mut status = self.load_status()?;
match status.current_tape {
Some(VirtualTapeStatus { ref name, ref mut pos }) => {
let index = self.load_tape_index(name)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
let new_pos = *pos + count;
if new_pos <= index.files {
*pos = new_pos;
} else {
bail!("forward_space_count_files failed: move beyond EOT");
}
self.store_status(&status)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
Ok(())
}
None => bail!("drive is empty (no tape loaded)."),
}
}
fn backward_space_count_files(&mut self, count: usize) -> Result<(), Error> {
let mut status = self.load_status()?;
match status.current_tape {
Some(VirtualTapeStatus { ref mut pos, .. }) => {
if count <= *pos {
*pos = *pos - count;
} else {
bail!("backward_space_count_files failed: move before BOT");
}
self.store_status(&status) self.store_status(&status)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?; .map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;


@ -0,0 +1,89 @@
use std::fs::File;
use std::io::Read;
use proxmox::{
sys::error::SysError,
tools::Uuid,
};
use crate::{
tape::{
TapeWrite,
file_formats::{
PROXMOX_TAPE_BLOCK_SIZE,
PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0,
MediaContentHeader,
CatalogArchiveHeader,
},
},
};
/// Write a media catalog to the tape
///
/// Returns `Ok(Some(content_uuid))` on success, and `Ok(None)` if
/// `LEOM` was detected before all data was written. The stream is
/// marked incomplete in that case and does not contain all data (The
/// backup task must rewrite the whole file on the next media).
///
pub fn tape_write_catalog<'a>(
writer: &mut (dyn TapeWrite + 'a),
uuid: &Uuid,
media_set_uuid: &Uuid,
seq_nr: usize,
file: &mut File,
) -> Result<Option<Uuid>, std::io::Error> {
let archive_header = CatalogArchiveHeader {
uuid: uuid.clone(),
media_set_uuid: media_set_uuid.clone(),
seq_nr: seq_nr as u64,
};
let header_data = serde_json::to_string_pretty(&archive_header)?.as_bytes().to_vec();
let header = MediaContentHeader::new(
PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0, header_data.len() as u32);
let content_uuid: Uuid = header.uuid.into();
let leom = writer.write_header(&header, &header_data)?;
if leom {
writer.finish(true)?; // mark as incomplete
return Ok(None);
}
let mut file_copy_buffer = proxmox::tools::vec::undefined(PROXMOX_TAPE_BLOCK_SIZE);
let result: Result<(), std::io::Error> = proxmox::try_block!({
let file_size = file.metadata()?.len();
let mut remaining = file_size;
while remaining != 0 {
let got = file.read(&mut file_copy_buffer[..])?;
if got as u64 > remaining {
proxmox::io_bail!("catalog '{}' changed while reading", uuid);
}
writer.write_all(&file_copy_buffer[..got])?;
remaining -= got as u64;
}
if remaining > 0 {
proxmox::io_bail!("catalog '{}' shrunk while reading", uuid);
}
Ok(())
});
match result {
Ok(()) => {
writer.finish(false)?;
Ok(Some(content_uuid))
}
Err(err) => {
if err.is_errno(nix::errno::Errno::ENOSPC) && writer.logical_end_of_media() {
writer.finish(true)?; // mark as incomplete
Ok(None)
} else {
Err(err)
}
}
}
}
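Hedged usage sketch: a caller (not shown in this change) would treat the `None` case as "medium full, rewrite the whole catalog on the next tape".

    fn write_catalog(
        writer: &mut dyn TapeWrite,
        uuid: &Uuid,
        media_set_uuid: &Uuid,
        seq_nr: usize,
        file: &mut File,
    ) -> Result<bool, std::io::Error> {
        // true = catalog stored completely on the current medium
        Ok(tape_write_catalog(writer, uuid, media_set_uuid, seq_nr, file)?.is_some())
    }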


@ -14,9 +14,10 @@ use crate::tape::{
TapeWrite, TapeWrite,
file_formats::{ file_formats::{
PROXMOX_TAPE_BLOCK_SIZE, PROXMOX_TAPE_BLOCK_SIZE,
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0, PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1,
PROXMOX_BACKUP_CHUNK_ARCHIVE_ENTRY_MAGIC_1_0, PROXMOX_BACKUP_CHUNK_ARCHIVE_ENTRY_MAGIC_1_0,
MediaContentHeader, MediaContentHeader,
ChunkArchiveHeader,
ChunkArchiveEntryHeader, ChunkArchiveEntryHeader,
}, },
}; };
@ -36,13 +37,20 @@ pub struct ChunkArchiveWriter<'a> {
impl <'a> ChunkArchiveWriter<'a> { impl <'a> ChunkArchiveWriter<'a> {
    pub const MAGIC: [u8; 8] = PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1;

    /// Creates a new instance
    pub fn new(
        mut writer: Box<dyn TapeWrite + 'a>,
        store: &str,
        close_on_leom: bool,
    ) -> Result<(Self,Uuid), Error> {
        let archive_header = ChunkArchiveHeader { store: store.to_string() };
        let header_data = serde_json::to_string_pretty(&archive_header)?.as_bytes().to_vec();

        let header = MediaContentHeader::new(Self::MAGIC, header_data.len() as u32);
        writer.write_header(&header, &header_data)?;
let me = Self { let me = Self {
writer: Some(writer), writer: Some(writer),


@ -13,6 +13,9 @@ pub use chunk_archive::*;
mod snapshot_archive; mod snapshot_archive;
pub use snapshot_archive::*; pub use snapshot_archive::*;
mod catalog_archive;
pub use catalog_archive::*;
mod multi_volume_writer; mod multi_volume_writer;
pub use multi_volume_writer::*; pub use multi_volume_writer::*;
@ -44,12 +47,22 @@ pub const PROXMOX_BACKUP_MEDIA_LABEL_MAGIC_1_0: [u8; 8] = [42, 5, 191, 60, 176,
pub const PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0: [u8; 8] = [8, 96, 99, 249, 47, 151, 83, 216]; pub const PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0: [u8; 8] = [8, 96, 99, 249, 47, 151, 83, 216];
// openssl::sha::sha256(b"Proxmox Backup Chunk Archive v1.0")[0..8] // openssl::sha::sha256(b"Proxmox Backup Chunk Archive v1.0")[0..8]
// only used in unreleased version - no longer supported
pub const PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0: [u8; 8] = [62, 173, 167, 95, 49, 76, 6, 110]; pub const PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0: [u8; 8] = [62, 173, 167, 95, 49, 76, 6, 110];
// openssl::sha::sha256(b"Proxmox Backup Chunk Archive v1.1")[0..8]
pub const PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1: [u8; 8] = [109, 49, 99, 109, 215, 2, 131, 191];
// openssl::sha::sha256(b"Proxmox Backup Chunk Archive Entry v1.0")[0..8] // openssl::sha::sha256(b"Proxmox Backup Chunk Archive Entry v1.0")[0..8]
pub const PROXMOX_BACKUP_CHUNK_ARCHIVE_ENTRY_MAGIC_1_0: [u8; 8] = [72, 87, 109, 242, 222, 66, 143, 220]; pub const PROXMOX_BACKUP_CHUNK_ARCHIVE_ENTRY_MAGIC_1_0: [u8; 8] = [72, 87, 109, 242, 222, 66, 143, 220];
// openssl::sha::sha256(b"Proxmox Backup Snapshot Archive v1.0")[0..8]; // openssl::sha::sha256(b"Proxmox Backup Snapshot Archive v1.0")[0..8];
// only used in unreleased version - no longer supported
pub const PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0: [u8; 8] = [9, 182, 2, 31, 125, 232, 114, 133]; pub const PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0: [u8; 8] = [9, 182, 2, 31, 125, 232, 114, 133];
// openssl::sha::sha256(b"Proxmox Backup Snapshot Archive v1.1")[0..8];
pub const PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1: [u8; 8] = [218, 22, 21, 208, 17, 226, 154, 98];
// openssl::sha::sha256(b"Proxmox Backup Catalog Archive v1.0")[0..8];
pub const PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0: [u8; 8] = [183, 207, 199, 37, 158, 153, 30, 115];
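As the comments state, each magic value is the first eight bytes of the SHA-256 digest of the corresponding version string; a small sketch of that derivation (illustration only):

    fn derive_magic(version_string: &str) -> [u8; 8] {
        let digest = openssl::sha::sha256(version_string.as_bytes());
        let mut magic = [0u8; 8];
        magic.copy_from_slice(&digest[..8]);
        magic
    }
    // e.g. derive_magic("Proxmox Backup Catalog Archive v1.0") == PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0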
lazy_static::lazy_static!{ lazy_static::lazy_static!{
// Map content magic numbers to human readable names. // Map content magic numbers to human readable names.
@ -58,7 +71,10 @@ lazy_static::lazy_static!{
map.insert(&PROXMOX_BACKUP_MEDIA_LABEL_MAGIC_1_0, "Proxmox Backup Tape Label v1.0"); map.insert(&PROXMOX_BACKUP_MEDIA_LABEL_MAGIC_1_0, "Proxmox Backup Tape Label v1.0");
map.insert(&PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0, "Proxmox Backup MediaSet Label v1.0"); map.insert(&PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0, "Proxmox Backup MediaSet Label v1.0");
map.insert(&PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0, "Proxmox Backup Chunk Archive v1.0"); map.insert(&PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0, "Proxmox Backup Chunk Archive v1.0");
map.insert(&PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1, "Proxmox Backup Chunk Archive v1.1");
map.insert(&PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0, "Proxmox Backup Snapshot Archive v1.0"); map.insert(&PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0, "Proxmox Backup Snapshot Archive v1.0");
map.insert(&PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1, "Proxmox Backup Snapshot Archive v1.1");
map.insert(&PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0, "Proxmox Backup Catalog Archive v1.0");
map map
}; };
} }
@ -172,6 +188,13 @@ impl MediaContentHeader {
} }
} }
#[derive(Deserialize, Serialize)]
/// Header for chunk archives
pub struct ChunkArchiveHeader {
// Datastore name
pub store: String,
}
#[derive(Endian)] #[derive(Endian)]
#[repr(C,packed)] #[repr(C,packed)]
/// Header for data blobs inside a chunk archive /// Header for data blobs inside a chunk archive
@ -184,6 +207,26 @@ pub struct ChunkArchiveEntryHeader {
pub size: u64, pub size: u64,
} }
#[derive(Deserialize, Serialize)]
/// Header for snapshot archives
pub struct SnapshotArchiveHeader {
/// Snapshot name
pub snapshot: String,
/// Datastore name
pub store: String,
}
#[derive(Deserialize, Serialize)]
/// Header for Catalog archives
pub struct CatalogArchiveHeader {
/// The uuid of the media the catalog is for
pub uuid: Uuid,
/// The media set uuid the catalog is for
pub media_set_uuid: Uuid,
/// Media sequence number
pub seq_nr: u64,
}
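These archive headers are serialized as JSON and stored as the payload that follows the binary MediaContentHeader. A minimal sketch (not part of the diff; the snapshot and store values are made up) of how such a payload is produced, mirroring tape_write_snapshot_archive() further down:

fn snapshot_header_payload() -> Result<Vec<u8>, serde_json::Error> {
    // example values only - real callers take them from the SnapshotReader
    let archive_header = SnapshotArchiveHeader {
        snapshot: "vm/100/2021-04-01T10:00:00Z".to_string(),
        store: "store1".to_string(),
    };
    // pretty-printed JSON; its length is passed to MediaContentHeader::new()
    Ok(serde_json::to_string_pretty(&archive_header)?.as_bytes().to_vec())
}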
#[derive(Serialize,Deserialize,Clone,Debug)] #[derive(Serialize,Deserialize,Clone,Debug)]
/// Media Label /// Media Label
/// ///
@ -245,9 +288,12 @@ impl BlockHeader {
pub fn new() -> Box<Self> { pub fn new() -> Box<Self> {
use std::alloc::{alloc_zeroed, Layout}; use std::alloc::{alloc_zeroed, Layout};
// align to PAGESIZE, so that we can use it with SG_IO
let page_size = unsafe { libc::sysconf(libc::_SC_PAGESIZE) } as usize;
let mut buffer = unsafe { let mut buffer = unsafe {
let ptr = alloc_zeroed( let ptr = alloc_zeroed(
Layout::from_size_align(Self::SIZE, std::mem::align_of::<u64>()) Layout::from_size_align(Self::SIZE, page_size)
.unwrap(), .unwrap(),
); );
Box::from_raw( Box::from_raw(


@ -12,11 +12,13 @@ use crate::tape::{
SnapshotReader, SnapshotReader,
file_formats::{ file_formats::{
PROXMOX_TAPE_BLOCK_SIZE, PROXMOX_TAPE_BLOCK_SIZE,
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0, PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1,
MediaContentHeader, MediaContentHeader,
SnapshotArchiveHeader,
}, },
}; };
/// Write a set of files as `pxar` archive to the tape /// Write a set of files as `pxar` archive to the tape
/// ///
/// This ignores file attributes like ACLs and xattrs. /// This ignores file attributes like ACLs and xattrs.
@ -31,12 +33,15 @@ pub fn tape_write_snapshot_archive<'a>(
) -> Result<Option<Uuid>, std::io::Error> { ) -> Result<Option<Uuid>, std::io::Error> {
let snapshot = snapshot_reader.snapshot().to_string(); let snapshot = snapshot_reader.snapshot().to_string();
let store = snapshot_reader.datastore_name().to_string();
let file_list = snapshot_reader.file_list(); let file_list = snapshot_reader.file_list();
let header_data = snapshot.as_bytes().to_vec(); let archive_header = SnapshotArchiveHeader { snapshot, store };
let header_data = serde_json::to_string_pretty(&archive_header)?.as_bytes().to_vec();
let header = MediaContentHeader::new( let header = MediaContentHeader::new(
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0, header_data.len() as u32); PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1, header_data.len() as u32);
let content_uuid = header.uuid.into(); let content_uuid = header.uuid.into();
let root_metadata = pxar::Metadata::dir_builder(0o0664).build(); let root_metadata = pxar::Metadata::dir_builder(0o0664).build();


@ -26,6 +26,7 @@ use crate::{
/// This makes it easy to iterate over all used chunks and files.
pub struct SnapshotReader { pub struct SnapshotReader {
snapshot: BackupDir, snapshot: BackupDir,
datastore_name: String,
file_list: Vec<String>, file_list: Vec<String>,
locked_dir: Dir, locked_dir: Dir,
} }
@ -42,11 +43,13 @@ impl SnapshotReader {
"snapshot", "snapshot",
"locked by another operation")?; "locked by another operation")?;
let datastore_name = datastore.name().to_string();
let manifest = match datastore.load_manifest(&snapshot) { let manifest = match datastore.load_manifest(&snapshot) {
Ok((manifest, _)) => manifest, Ok((manifest, _)) => manifest,
Err(err) => { Err(err) => {
bail!("manifest load error on datastore '{}' snapshot '{}' - {}", bail!("manifest load error on datastore '{}' snapshot '{}' - {}",
datastore.name(), snapshot, err); datastore_name, snapshot, err);
} }
}; };
@ -60,7 +63,7 @@ impl SnapshotReader {
file_list.push(CLIENT_LOG_BLOB_NAME.to_string()); file_list.push(CLIENT_LOG_BLOB_NAME.to_string());
} }
Ok(Self { snapshot, file_list, locked_dir }) Ok(Self { snapshot, datastore_name, file_list, locked_dir })
} }
/// Return the snapshot directory /// Return the snapshot directory
@ -68,6 +71,11 @@ impl SnapshotReader {
&self.snapshot &self.snapshot
} }
/// Return the datastore name
pub fn datastore_name(&self) -> &str {
&self.datastore_name
}
/// Returns the list of files the snapshot refers to. /// Returns the list of files the snapshot refers to.
pub fn file_list(&self) -> &Vec<String> { pub fn file_list(&self) -> &Vec<String> {
&self.file_list &self.file_list
@ -96,7 +104,6 @@ impl SnapshotReader {
/// Note: The iterator returns a `Result`, and the iterator state is
/// undefined after the first error. So it makes no sense to continue
/// iteration after the first error.
#[derive(Clone)]
pub struct SnapshotChunkIterator<'a> { pub struct SnapshotChunkIterator<'a> {
snapshot_reader: &'a SnapshotReader, snapshot_reader: &'a SnapshotReader,
todo_list: Vec<String>, todo_list: Vec<String>,


@ -3,10 +3,30 @@
//! The Inventory persistently stores the list of known backup //! The Inventory persistently stores the list of known backup
//! media. A backup media is identified by its 'MediaId', which is the //! media. A backup media is identified by its 'MediaId', which is the
//! MediaLabel/MediaSetLabel combination. //! MediaLabel/MediaSetLabel combination.
//!
//! Inventory Locking
//!
//! The inventory itself has several methods to update single entries,
//! but all of them can be considered atomic.
//!
//! Pool Locking
//!
//! To add/modify media assigned to a pool, we always do
//! lock_media_pool(). For unassigned media, we call
//! lock_unassigned_media_pool().
//!
//! MediaSet Locking
//!
//! To add/remove media from a media set, or to modify catalogs we
//! always do lock_media_set(). Also, we acquire this lock during
//! restore, to make sure it is not reused for backups.
//!
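A sketch of the intended lock order when modifying media assigned to a pool (not part of the patch; the pool name is a placeholder, and the helpers are the ones added further down in this file):

fn locking_order_sketch(
    base_path: &std::path::Path,
    media_set_uuid: &proxmox::tools::Uuid,
) -> Result<(), anyhow::Error> {
    // 1. pool lock: guards which media belong to the pool
    let _pool_lock = lock_media_pool(base_path, "example-pool")?;
    // 2. media set lock: guards media set membership and catalog updates
    let _set_lock = lock_media_set(base_path, media_set_uuid, None)?;
    // ... update inventory entries / catalogs here; locks drop at end of scope ...
    Ok(())
}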
use std::collections::{HashMap, BTreeMap}; use std::collections::{HashMap, BTreeMap};
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use std::os::unix::io::AsRawFd; use std::os::unix::io::AsRawFd;
use std::fs::File;
use std::time::Duration;
use anyhow::{bail, Error}; use anyhow::{bail, Error};
use serde::{Serialize, Deserialize}; use serde::{Serialize, Deserialize};
@ -78,7 +98,8 @@ impl Inventory {
pub const MEDIA_INVENTORY_FILENAME: &'static str = "inventory.json"; pub const MEDIA_INVENTORY_FILENAME: &'static str = "inventory.json";
pub const MEDIA_INVENTORY_LOCKFILE: &'static str = ".inventory.lck"; pub const MEDIA_INVENTORY_LOCKFILE: &'static str = ".inventory.lck";
fn new(base_path: &Path) -> Self { /// Create empty instance, no data loaded
pub fn new(base_path: &Path) -> Self {
let mut inventory_path = base_path.to_owned(); let mut inventory_path = base_path.to_owned();
inventory_path.push(Self::MEDIA_INVENTORY_FILENAME); inventory_path.push(Self::MEDIA_INVENTORY_FILENAME);
@ -127,7 +148,7 @@ impl Inventory {
} }
/// Lock the database /// Lock the database
pub fn lock(&self) -> Result<std::fs::File, Error> { fn lock(&self) -> Result<std::fs::File, Error> {
let file = open_file_locked(&self.lockfile_path, std::time::Duration::new(10, 0), true)?; let file = open_file_locked(&self.lockfile_path, std::time::Duration::new(10, 0), true)?;
if cfg!(test) { if cfg!(test) {
// We cannot use chown inside test environment (no permissions) // We cannot use chown inside test environment (no permissions)
@ -733,6 +754,57 @@ impl Inventory {
} }
/// Lock a media pool
pub fn lock_media_pool(base_path: &Path, name: &str) -> Result<File, Error> {
let mut path = base_path.to_owned();
path.push(format!(".pool-{}", name));
path.set_extension("lck");
let timeout = std::time::Duration::new(10, 0);
let lock = proxmox::tools::fs::open_file_locked(&path, timeout, true)?;
if cfg!(test) {
// We cannot use chown inside test environment (no permissions)
return Ok(lock);
}
let backup_user = crate::backup::backup_user()?;
fchown(lock.as_raw_fd(), Some(backup_user.uid), Some(backup_user.gid))?;
Ok(lock)
}
/// Lock for media not assigned to any pool
pub fn lock_unassigned_media_pool(base_path: &Path) -> Result<File, Error> {
// lock artificial "__UNASSIGNED__" pool to avoid races
lock_media_pool(base_path, "__UNASSIGNED__")
}
/// Lock a media set
///
/// Timeout is 10 seconds by default
pub fn lock_media_set(
base_path: &Path,
media_set_uuid: &Uuid,
timeout: Option<Duration>,
) -> Result<File, Error> {
let mut path = base_path.to_owned();
path.push(format!(".media-set-{}", media_set_uuid));
path.set_extension("lck");
let timeout = timeout.unwrap_or(Duration::new(10, 0));
let file = open_file_locked(&path, timeout, true)?;
if cfg!(test) {
// We cannot use chown inside test environment (no permissions)
return Ok(file);
}
let backup_user = crate::backup::backup_user()?;
fchown(file.as_raw_fd(), Some(backup_user.uid), Some(backup_user.gid))?;
Ok(file)
}
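A small usage sketch (not from the patch) showing the two timeout modes: None gives the default 10 second wait, while a zero Duration acts as a try-lock so that media currently held, e.g. by a restore job, can simply be skipped (compare alloc_writable_media() in media_pool.rs):

fn media_set_is_free(base_path: &Path, uuid: &Uuid) -> bool {
    // zero timeout: fail immediately instead of waiting for the holder
    match lock_media_set(base_path, uuid, Some(Duration::new(0, 0))) {
        Ok(_lock) => true,   // nobody else uses this media set right now
        Err(_) => false,     // locked elsewhere (e.g. restore) - leave it alone
    }
}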
// shell completion helper // shell completion helper
/// List of known media uuids /// List of known media uuids


@ -26,9 +26,24 @@ use crate::{
backup::BackupDir, backup::BackupDir,
tape::{ tape::{
MediaId, MediaId,
file_formats::MediaSetLabel,
}, },
}; };
pub struct DatastoreContent {
pub snapshot_index: HashMap<String, u64>, // snapshot => file_nr
pub chunk_index: HashMap<[u8;32], u64>, // chunk => file_nr
}
impl DatastoreContent {
pub fn new() -> Self {
Self {
chunk_index: HashMap::new(),
snapshot_index: HashMap::new(),
}
}
}
/// The Media Catalog /// The Media Catalog
/// ///
@ -44,13 +59,11 @@ pub struct MediaCatalog {
log_to_stdout: bool, log_to_stdout: bool,
current_archive: Option<(Uuid, u64)>, current_archive: Option<(Uuid, u64, String)>, // (uuid, file_nr, store)
last_entry: Option<(Uuid, u64)>, last_entry: Option<(Uuid, u64)>,
chunk_index: HashMap<[u8;32], u64>, content: HashMap<String, DatastoreContent>,
snapshot_index: HashMap<String, u64>,
pending: Vec<u8>, pending: Vec<u8>,
} }
@ -59,8 +72,12 @@ impl MediaCatalog {
/// Magic number for media catalog files. /// Magic number for media catalog files.
// openssl::sha::sha256(b"Proxmox Backup Media Catalog v1.0")[0..8] // openssl::sha::sha256(b"Proxmox Backup Media Catalog v1.0")[0..8]
// Note: this version did not store datastore names (not supported anymore)
pub const PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0: [u8; 8] = [221, 29, 164, 1, 59, 69, 19, 40]; pub const PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0: [u8; 8] = [221, 29, 164, 1, 59, 69, 19, 40];
// openssl::sha::sha256(b"Proxmox Backup Media Catalog v1.1")[0..8]
pub const PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_1: [u8; 8] = [76, 142, 232, 193, 32, 168, 137, 113];
/// List media with catalogs /// List media with catalogs
pub fn media_with_catalogs(base_path: &Path) -> Result<HashSet<Uuid>, Error> { pub fn media_with_catalogs(base_path: &Path) -> Result<HashSet<Uuid>, Error> {
let mut catalogs = HashSet::new(); let mut catalogs = HashSet::new();
@ -99,6 +116,59 @@ impl MediaCatalog {
} }
} }
/// Destroy the media catalog if media_set uuid does not match
pub fn destroy_unrelated_catalog(
base_path: &Path,
media_id: &MediaId,
) -> Result<(), Error> {
let uuid = &media_id.label.uuid;
let mut path = base_path.to_owned();
path.push(uuid.to_string());
path.set_extension("log");
let file = match std::fs::OpenOptions::new().read(true).open(&path) {
Ok(file) => file,
Err(ref err) if err.kind() == std::io::ErrorKind::NotFound => {
return Ok(());
}
Err(err) => return Err(err.into()),
};
let mut file = BufReader::new(file);
let expected_media_set_id = match media_id.media_set_label {
None => {
std::fs::remove_file(path)?;
return Ok(())
},
Some(ref set) => &set.uuid,
};
let (found_magic_number, media_uuid, media_set_uuid) =
Self::parse_catalog_header(&mut file)?;
if !found_magic_number {
return Ok(());
}
if let Some(ref media_uuid) = media_uuid {
if media_uuid != uuid {
std::fs::remove_file(path)?;
return Ok(());
}
}
if let Some(ref media_set_uuid) = media_set_uuid {
if media_set_uuid != expected_media_set_id {
std::fs::remove_file(path)?;
}
}
Ok(())
}
/// Enable/Disable logging to stdout (disabled by default) /// Enable/Disable logging to stdout (disabled by default)
pub fn log_to_stdout(&mut self, enable: bool) { pub fn log_to_stdout(&mut self, enable: bool) {
self.log_to_stdout = enable; self.log_to_stdout = enable;
@ -120,11 +190,13 @@ impl MediaCatalog {
/// Open a catalog database, load into memory /// Open a catalog database, load into memory
pub fn open( pub fn open(
base_path: &Path, base_path: &Path,
uuid: &Uuid, media_id: &MediaId,
write: bool, write: bool,
create: bool, create: bool,
) -> Result<Self, Error> { ) -> Result<Self, Error> {
let uuid = &media_id.label.uuid;
let mut path = base_path.to_owned(); let mut path = base_path.to_owned();
path.push(uuid.to_string()); path.push(uuid.to_string());
path.set_extension("log"); path.set_extension("log");
@ -149,15 +221,14 @@ impl MediaCatalog {
log_to_stdout: false, log_to_stdout: false,
current_archive: None, current_archive: None,
last_entry: None, last_entry: None,
chunk_index: HashMap::new(), content: HashMap::new(),
snapshot_index: HashMap::new(),
pending: Vec::new(), pending: Vec::new(),
}; };
let found_magic_number = me.load_catalog(&mut file)?; let (found_magic_number, _) = me.load_catalog(&mut file, media_id.media_set_label.as_ref())?;
if !found_magic_number { if !found_magic_number {
me.pending.extend(&Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0); me.pending.extend(&Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_1);
} }
if write { if write {
@ -171,6 +242,32 @@ impl MediaCatalog {
Ok(me) Ok(me)
} }
/// Creates a temporary empty catalog file
pub fn create_temporary_database_file(
base_path: &Path,
uuid: &Uuid,
) -> Result<File, Error> {
Self::create_basedir(base_path)?;
let mut tmp_path = base_path.to_owned();
tmp_path.push(uuid.to_string());
tmp_path.set_extension("tmp");
let file = std::fs::OpenOptions::new()
.read(true)
.write(true)
.create(true)
.truncate(true)
.open(&tmp_path)?;
let backup_user = crate::backup::backup_user()?;
fchown(file.as_raw_fd(), Some(backup_user.uid), Some(backup_user.gid))
.map_err(|err| format_err!("fchown failed - {}", err))?;
Ok(file)
}
/// Creates a temporary, empty catalog database /// Creates a temporary, empty catalog database
/// ///
/// Creates a new catalog file using a ".tmp" file extension. /// Creates a new catalog file using a ".tmp" file extension.
@ -188,18 +285,7 @@ impl MediaCatalog {
let me = proxmox::try_block!({ let me = proxmox::try_block!({
Self::create_basedir(base_path)?; let file = Self::create_temporary_database_file(base_path, uuid)?;
let file = std::fs::OpenOptions::new()
.read(true)
.write(true)
.create(true)
.truncate(true)
.open(&tmp_path)?;
let backup_user = crate::backup::backup_user()?;
fchown(file.as_raw_fd(), Some(backup_user.uid), Some(backup_user.gid))
.map_err(|err| format_err!("fchown failed - {}", err))?;
let mut me = Self { let mut me = Self {
uuid: uuid.clone(), uuid: uuid.clone(),
@ -207,19 +293,18 @@ impl MediaCatalog {
log_to_stdout: false, log_to_stdout: false,
current_archive: None, current_archive: None,
last_entry: None, last_entry: None,
chunk_index: HashMap::new(), content: HashMap::new(),
snapshot_index: HashMap::new(),
pending: Vec::new(), pending: Vec::new(),
}; };
me.log_to_stdout = log_to_stdout; me.log_to_stdout = log_to_stdout;
me.pending.extend(&Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0); me.pending.extend(&Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_1);
me.register_label(&media_id.label.uuid, 0)?; me.register_label(&media_id.label.uuid, 0, 0)?;
if let Some(ref set) = media_id.media_set_label { if let Some(ref set) = media_id.media_set_label {
me.register_label(&set.uuid, 1)?; me.register_label(&set.uuid, set.seq_nr, 1)?;
} }
me.commit()?; me.commit()?;
@ -265,8 +350,8 @@ impl MediaCatalog {
} }
/// Accessor to content list /// Accessor to content list
pub fn snapshot_index(&self) -> &HashMap<String, u64> { pub fn content(&self) -> &HashMap<String, DatastoreContent> {
&self.snapshot_index &self.content
} }
/// Commit pending changes /// Commit pending changes
@ -296,6 +381,9 @@ impl MediaCatalog {
/// Conditionally commit if in pending data is large (> 1Mb) /// Conditionally commit if in pending data is large (> 1Mb)
pub fn commit_if_large(&mut self) -> Result<(), Error> { pub fn commit_if_large(&mut self) -> Result<(), Error> {
if self.current_archive.is_some() {
bail!("can't commit catalog in the middle of an chunk archive");
}
if self.pending.len() > 1024*1024 { if self.pending.len() > 1024*1024 {
self.commit()?; self.commit()?;
} }
@ -319,31 +407,47 @@ impl MediaCatalog {
} }
/// Test if the catalog already contains a snapshot
pub fn contains_snapshot(&self, snapshot: &str) -> bool { pub fn contains_snapshot(&self, store: &str, snapshot: &str) -> bool {
self.snapshot_index.contains_key(snapshot) match self.content.get(store) {
None => false,
Some(content) => content.snapshot_index.contains_key(snapshot),
}
} }
/// Returns the chunk archive file number /// Returns the snapshot archive file number
pub fn lookup_snapshot(&self, snapshot: &str) -> Option<u64> { pub fn lookup_snapshot(&self, store: &str, snapshot: &str) -> Option<u64> {
self.snapshot_index.get(snapshot).copied() match self.content.get(store) {
None => None,
Some(content) => content.snapshot_index.get(snapshot).copied(),
}
} }
/// Test if the catalog already contains a chunk
pub fn contains_chunk(&self, digest: &[u8;32]) -> bool { pub fn contains_chunk(&self, store: &str, digest: &[u8;32]) -> bool {
self.chunk_index.contains_key(digest) match self.content.get(store) {
None => false,
Some(content) => content.chunk_index.contains_key(digest),
}
} }
/// Returns the chunk archive file number /// Returns the chunk archive file number
pub fn lookup_chunk(&self, digest: &[u8;32]) -> Option<u64> { pub fn lookup_chunk(&self, store: &str, digest: &[u8;32]) -> Option<u64> {
self.chunk_index.get(digest).copied() match self.content.get(store) {
None => None,
Some(content) => content.chunk_index.get(digest).copied(),
}
} }
fn check_register_label(&self, file_number: u64) -> Result<(), Error> { fn check_register_label(&self, file_number: u64, uuid: &Uuid) -> Result<(), Error> {
if file_number >= 2 { if file_number >= 2 {
bail!("register label failed: got wrong file number ({} >= 2)", file_number); bail!("register label failed: got wrong file number ({} >= 2)", file_number);
} }
if file_number == 0 && uuid != &self.uuid {
bail!("register label failed: uuid does not match");
}
if self.current_archive.is_some() { if self.current_archive.is_some() {
bail!("register label failed: inside chunk archive"); bail!("register label failed: inside chunk archive");
} }
@ -363,15 +467,21 @@ impl MediaCatalog {
/// Register media labels (file 0 and 1) /// Register media labels (file 0 and 1)
pub fn register_label( pub fn register_label(
&mut self, &mut self,
uuid: &Uuid, // Uuid form MediaContentHeader uuid: &Uuid, // Media/MediaSet Uuid
seq_nr: u64, // only used for media set labels
file_number: u64, file_number: u64,
) -> Result<(), Error> { ) -> Result<(), Error> {
self.check_register_label(file_number)?; self.check_register_label(file_number, uuid)?;
if file_number == 0 && seq_nr != 0 {
bail!("register_label failed - seq_nr should be 0 - iternal error");
}
let entry = LabelEntry { let entry = LabelEntry {
file_number, file_number,
uuid: *uuid.as_bytes(), uuid: *uuid.as_bytes(),
seq_nr,
}; };
if self.log_to_stdout { if self.log_to_stdout {
@ -395,9 +505,9 @@ impl MediaCatalog {
digest: &[u8;32], digest: &[u8;32],
) -> Result<(), Error> { ) -> Result<(), Error> {
let file_number = match self.current_archive { let (file_number, store) = match self.current_archive {
None => bail!("register_chunk failed: no archive started"), None => bail!("register_chunk failed: no archive started"),
Some((_, file_number)) => file_number, Some((_, file_number, ref store)) => (file_number, store),
}; };
if self.log_to_stdout { if self.log_to_stdout {
@ -407,7 +517,12 @@ impl MediaCatalog {
self.pending.push(b'C'); self.pending.push(b'C');
self.pending.extend(digest); self.pending.extend(digest);
self.chunk_index.insert(*digest, file_number); match self.content.get_mut(store) {
None => bail!("storage {} not registered - internal error", store),
Some(content) => {
content.chunk_index.insert(*digest, file_number);
}
}
Ok(()) Ok(())
} }
@ -440,24 +555,29 @@ impl MediaCatalog {
&mut self, &mut self,
uuid: Uuid, // Uuid from MediaContentHeader
file_number: u64, file_number: u64,
) -> Result<(), Error> { store: &str,
) -> Result<(), Error> {
self.check_start_chunk_archive(file_number)?; self.check_start_chunk_archive(file_number)?;
let entry = ChunkArchiveStart { let entry = ChunkArchiveStart {
file_number, file_number,
uuid: *uuid.as_bytes(), uuid: *uuid.as_bytes(),
store_name_len: u8::try_from(store.len())?,
}; };
if self.log_to_stdout { if self.log_to_stdout {
println!("A|{}|{}", file_number, uuid.to_string()); println!("A|{}|{}|{}", file_number, uuid.to_string(), store);
} }
self.pending.push(b'A'); self.pending.push(b'A');
unsafe { self.pending.write_le_value(entry)?; } unsafe { self.pending.write_le_value(entry)?; }
self.pending.extend(store.as_bytes());
self.current_archive = Some((uuid, file_number)); self.content.entry(store.to_string()).or_insert(DatastoreContent::new());
self.current_archive = Some((uuid, file_number, store.to_string()));
Ok(()) Ok(())
} }
@ -466,7 +586,7 @@ impl MediaCatalog {
match self.current_archive { match self.current_archive {
None => bail!("end_chunk archive failed: not started"), None => bail!("end_chunk archive failed: not started"),
Some((ref expected_uuid, expected_file_number)) => { Some((ref expected_uuid, expected_file_number, ..)) => {
if uuid != expected_uuid { if uuid != expected_uuid {
bail!("end_chunk_archive failed: got unexpected uuid"); bail!("end_chunk_archive failed: got unexpected uuid");
} }
@ -476,7 +596,6 @@ impl MediaCatalog {
} }
} }
} }
Ok(()) Ok(())
} }
@ -485,7 +604,7 @@ impl MediaCatalog {
match self.current_archive.take() { match self.current_archive.take() {
None => bail!("end_chunk_archive failed: not started"), None => bail!("end_chunk_archive failed: not started"),
Some((uuid, file_number)) => { Some((uuid, file_number, ..)) => {
let entry = ChunkArchiveEnd { let entry = ChunkArchiveEnd {
file_number, file_number,
@ -539,6 +658,7 @@ impl MediaCatalog {
&mut self, &mut self,
uuid: Uuid, // Uuid from MediaContentHeader
file_number: u64, file_number: u64,
store: &str,
snapshot: &str, snapshot: &str,
) -> Result<(), Error> { ) -> Result<(), Error> {
@ -547,32 +667,90 @@ impl MediaCatalog {
let entry = SnapshotEntry { let entry = SnapshotEntry {
file_number, file_number,
uuid: *uuid.as_bytes(), uuid: *uuid.as_bytes(),
store_name_len: u8::try_from(store.len())?,
name_len: u16::try_from(snapshot.len())?, name_len: u16::try_from(snapshot.len())?,
}; };
if self.log_to_stdout { if self.log_to_stdout {
println!("S|{}|{}|{}", file_number, uuid.to_string(), snapshot); println!("S|{}|{}|{}:{}", file_number, uuid.to_string(), store, snapshot);
} }
self.pending.push(b'S'); self.pending.push(b'S');
unsafe { self.pending.write_le_value(entry)?; } unsafe { self.pending.write_le_value(entry)?; }
self.pending.extend(store.as_bytes());
self.pending.push(b':');
self.pending.extend(snapshot.as_bytes()); self.pending.extend(snapshot.as_bytes());
self.snapshot_index.insert(snapshot.to_string(), file_number); let content = self.content.entry(store.to_string())
.or_insert(DatastoreContent::new());
content.snapshot_index.insert(snapshot.to_string(), file_number);
self.last_entry = Some((uuid, file_number)); self.last_entry = Some((uuid, file_number));
Ok(()) Ok(())
} }
fn load_catalog(&mut self, file: &mut File) -> Result<bool, Error> { /// Parse the catalog header
pub fn parse_catalog_header<R: Read>(
reader: &mut R,
) -> Result<(bool, Option<Uuid>, Option<Uuid>), Error> {
// read/check magic number
let mut magic = [0u8; 8];
if !reader.read_exact_or_eof(&mut magic)? {
/* EOF */
return Ok((false, None, None));
}
if magic == Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0 {
// only used in unreleased versions
bail!("old catalog format (v1.0) is no longer supported");
}
if magic != Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_1 {
bail!("wrong magic number");
}
let mut entry_type = [0u8; 1];
if !reader.read_exact_or_eof(&mut entry_type)? {
/* EOF */
return Ok((true, None, None));
}
if entry_type[0] != b'L' {
bail!("got unexpected entry type");
}
let entry0: LabelEntry = unsafe { reader.read_le_value()? };
let mut entry_type = [0u8; 1];
if !reader.read_exact_or_eof(&mut entry_type)? {
/* EOF */
return Ok((true, Some(entry0.uuid.into()), None));
}
if entry_type[0] != b'L' {
bail!("got unexpected entry type");
}
let entry1: LabelEntry = unsafe { reader.read_le_value()? };
Ok((true, Some(entry0.uuid.into()), Some(entry1.uuid.into())))
}
fn load_catalog(
&mut self,
file: &mut File,
media_set_label: Option<&MediaSetLabel>,
) -> Result<(bool, Option<Uuid>), Error> {
let mut file = BufReader::new(file); let mut file = BufReader::new(file);
let mut found_magic_number = false; let mut found_magic_number = false;
let mut media_set_uuid = None;
loop { loop {
let pos = file.seek(SeekFrom::Current(0))?; let pos = file.seek(SeekFrom::Current(0))?; // get current pos
if pos == 0 { // read/check magic number if pos == 0 { // read/check magic number
let mut magic = [0u8; 8]; let mut magic = [0u8; 8];
@ -581,7 +759,11 @@ impl MediaCatalog {
Ok(true) => { /* OK */ } Ok(true) => { /* OK */ }
Err(err) => bail!("read failed - {}", err), Err(err) => bail!("read failed - {}", err),
} }
if magic != Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0 { if magic == Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0 {
// only used in unreleased versions
bail!("old catalog format (v1.0) is no longer supported");
}
if magic != Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_1 {
bail!("wrong magic number"); bail!("wrong magic number");
} }
found_magic_number = true; found_magic_number = true;
@ -597,23 +779,35 @@ impl MediaCatalog {
match entry_type[0] { match entry_type[0] {
b'C' => { b'C' => {
let file_number = match self.current_archive { let (file_number, store) = match self.current_archive {
None => bail!("register_chunk failed: no archive started"), None => bail!("register_chunk failed: no archive started"),
Some((_, file_number)) => file_number, Some((_, file_number, ref store)) => (file_number, store),
}; };
let mut digest = [0u8; 32]; let mut digest = [0u8; 32];
file.read_exact(&mut digest)?; file.read_exact(&mut digest)?;
self.chunk_index.insert(digest, file_number); match self.content.get_mut(store) {
None => bail!("storage {} not registered - internal error", store),
Some(content) => {
content.chunk_index.insert(digest, file_number);
}
}
} }
b'A' => { b'A' => {
let entry: ChunkArchiveStart = unsafe { file.read_le_value()? }; let entry: ChunkArchiveStart = unsafe { file.read_le_value()? };
let file_number = entry.file_number; let file_number = entry.file_number;
let uuid = Uuid::from(entry.uuid); let uuid = Uuid::from(entry.uuid);
let store_name_len = entry.store_name_len as usize;
let store = file.read_exact_allocated(store_name_len)?;
let store = std::str::from_utf8(&store)?;
self.check_start_chunk_archive(file_number)?; self.check_start_chunk_archive(file_number)?;
self.current_archive = Some((uuid, file_number)); self.content.entry(store.to_string())
} .or_insert(DatastoreContent::new());
self.current_archive = Some((uuid, file_number, store.to_string()));
}
b'E' => { b'E' => {
let entry: ChunkArchiveEnd = unsafe { file.read_le_value()? }; let entry: ChunkArchiveEnd = unsafe { file.read_le_value()? };
let file_number = entry.file_number; let file_number = entry.file_number;
@ -627,15 +821,26 @@ impl MediaCatalog {
b'S' => { b'S' => {
let entry: SnapshotEntry = unsafe { file.read_le_value()? }; let entry: SnapshotEntry = unsafe { file.read_le_value()? };
let file_number = entry.file_number; let file_number = entry.file_number;
let name_len = entry.name_len; let store_name_len = entry.store_name_len as usize;
let name_len = entry.name_len as usize;
let uuid = Uuid::from(entry.uuid); let uuid = Uuid::from(entry.uuid);
let snapshot = file.read_exact_allocated(name_len.into())?; let store = file.read_exact_allocated(store_name_len + 1)?;
if store[store_name_len] != b':' {
bail!("parse-error: missing separator in SnapshotEntry");
}
let store = std::str::from_utf8(&store[..store_name_len])?;
let snapshot = file.read_exact_allocated(name_len)?;
let snapshot = std::str::from_utf8(&snapshot)?; let snapshot = std::str::from_utf8(&snapshot)?;
self.check_register_snapshot(file_number, snapshot)?; self.check_register_snapshot(file_number, snapshot)?;
self.snapshot_index.insert(snapshot.to_string(), file_number); let content = self.content.entry(store.to_string())
.or_insert(DatastoreContent::new());
content.snapshot_index.insert(snapshot.to_string(), file_number);
self.last_entry = Some((uuid, file_number)); self.last_entry = Some((uuid, file_number));
} }
@ -644,7 +849,19 @@ impl MediaCatalog {
let file_number = entry.file_number; let file_number = entry.file_number;
let uuid = Uuid::from(entry.uuid); let uuid = Uuid::from(entry.uuid);
self.check_register_label(file_number)?; self.check_register_label(file_number, &uuid)?;
if file_number == 1 {
if let Some(set) = media_set_label {
if set.uuid != uuid {
bail!("got unexpected media set uuid");
}
if set.seq_nr != entry.seq_nr {
bail!("got unexpected media set sequence number");
}
}
media_set_uuid = Some(uuid.clone());
}
self.last_entry = Some((uuid, file_number)); self.last_entry = Some((uuid, file_number));
} }
@ -655,7 +872,7 @@ impl MediaCatalog {
} }
Ok(found_magic_number) Ok((found_magic_number, media_set_uuid))
} }
} }
@ -693,9 +910,9 @@ impl MediaSetCatalog {
} }
/// Test if the catalog already contains a snapshot
pub fn contains_snapshot(&self, snapshot: &str) -> bool { pub fn contains_snapshot(&self, store: &str, snapshot: &str) -> bool {
for catalog in self.catalog_list.values() { for catalog in self.catalog_list.values() {
if catalog.contains_snapshot(snapshot) { if catalog.contains_snapshot(store, snapshot) {
return true; return true;
} }
} }
@ -703,9 +920,9 @@ impl MediaSetCatalog {
} }
/// Test if the catalog already contains a chunk
pub fn contains_chunk(&self, digest: &[u8;32]) -> bool { pub fn contains_chunk(&self, store: &str, digest: &[u8;32]) -> bool {
for catalog in self.catalog_list.values() { for catalog in self.catalog_list.values() {
if catalog.contains_chunk(digest) { if catalog.contains_chunk(store, digest) {
return true; return true;
} }
} }
@ -720,6 +937,7 @@ impl MediaSetCatalog {
struct LabelEntry { struct LabelEntry {
file_number: u64, file_number: u64,
uuid: [u8;16], uuid: [u8;16],
seq_nr: u64, // only used for media set labels
} }
#[derive(Endian)] #[derive(Endian)]
@ -727,6 +945,8 @@ struct LabelEntry {
struct ChunkArchiveStart { struct ChunkArchiveStart {
file_number: u64, file_number: u64,
uuid: [u8;16], uuid: [u8;16],
store_name_len: u8,
/* datastore name follows */
} }
#[derive(Endian)] #[derive(Endian)]
@ -741,6 +961,7 @@ struct ChunkArchiveEnd{
struct SnapshotEntry{ struct SnapshotEntry{
file_number: u64, file_number: u64,
uuid: [u8;16], uuid: [u8;16],
store_name_len: u8,
name_len: u16, name_len: u16,
/* snapshot name follows */ /* datastore name, ':', snapshot name follows */
} }
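For illustration only (not part of the patch): the resulting byte layout of one 'S' record in the catalog stream, matching what register_snapshot() writes and load_catalog() reads above. Values are placeholders; the multi-byte fields are little endian, as produced by write_le_value().

fn encode_snapshot_record(file_number: u64, uuid: [u8; 16], store: &str, snapshot: &str) -> Vec<u8> {
    let mut out = Vec::new();
    out.push(b'S');                                     // entry type
    out.extend(&file_number.to_le_bytes());             // SnapshotEntry.file_number
    out.extend(&uuid);                                  // SnapshotEntry.uuid
    out.push(store.len() as u8);                        // SnapshotEntry.store_name_len
    out.extend(&(snapshot.len() as u16).to_le_bytes()); // SnapshotEntry.name_len
    out.extend(store.as_bytes());                       // datastore name
    out.push(b':');                                     // separator
    out.extend(snapshot.as_bytes());                    // snapshot name
    out
}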


@ -8,6 +8,8 @@
//! //!
use std::path::{PathBuf, Path}; use std::path::{PathBuf, Path};
use std::fs::File;
use anyhow::{bail, Error}; use anyhow::{bail, Error};
use ::serde::{Deserialize, Serialize}; use ::serde::{Deserialize, Serialize};
@ -27,6 +29,9 @@ use crate::{
MediaId, MediaId,
MediaSet, MediaSet,
Inventory, Inventory,
lock_media_set,
lock_media_pool,
lock_unassigned_media_pool,
file_formats::{ file_formats::{
MediaLabel, MediaLabel,
MediaSetLabel, MediaSetLabel,
@ -34,9 +39,6 @@ use crate::{
} }
}; };
/// Media Pool lock guard
pub struct MediaPoolLockGuard(std::fs::File);
/// Media Pool /// Media Pool
pub struct MediaPool { pub struct MediaPool {
@ -49,11 +51,16 @@ pub struct MediaPool {
changer_name: Option<String>, changer_name: Option<String>,
force_media_availability: bool, force_media_availability: bool,
// Set this if you do not need to allocate writeable media - this
// is useful for list_media()
no_media_set_locking: bool,
encrypt_fingerprint: Option<Fingerprint>, encrypt_fingerprint: Option<Fingerprint>,
inventory: Inventory, inventory: Inventory,
current_media_set: MediaSet, current_media_set: MediaSet,
current_media_set_lock: Option<File>,
} }
impl MediaPool { impl MediaPool {
@ -72,8 +79,15 @@ impl MediaPool {
retention: RetentionPolicy, retention: RetentionPolicy,
changer_name: Option<String>, changer_name: Option<String>,
encrypt_fingerprint: Option<Fingerprint>, encrypt_fingerprint: Option<Fingerprint>,
no_media_set_locking: bool, // for list_media()
) -> Result<Self, Error> { ) -> Result<Self, Error> {
let _pool_lock = if no_media_set_locking {
None
} else {
Some(lock_media_pool(state_path, name)?)
};
let inventory = Inventory::load(state_path)?; let inventory = Inventory::load(state_path)?;
let current_media_set = match inventory.latest_media_set(name) { let current_media_set = match inventory.latest_media_set(name) {
@ -81,6 +95,12 @@ impl MediaPool {
None => MediaSet::new(), None => MediaSet::new(),
}; };
let current_media_set_lock = if no_media_set_locking {
None
} else {
Some(lock_media_set(state_path, current_media_set.uuid(), None)?)
};
Ok(MediaPool { Ok(MediaPool {
name: String::from(name), name: String::from(name),
state_path: state_path.to_owned(), state_path: state_path.to_owned(),
@ -89,8 +109,10 @@ impl MediaPool {
changer_name, changer_name,
inventory, inventory,
current_media_set, current_media_set,
current_media_set_lock,
encrypt_fingerprint, encrypt_fingerprint,
force_media_availability: false, force_media_availability: false,
no_media_set_locking,
}) })
} }
@ -101,9 +123,9 @@ impl MediaPool {
self.force_media_availability = true; self.force_media_availability = true;
} }
/// Returns the Uuid of the current media set /// Returns the current media set
pub fn current_media_set(&self) -> &Uuid { pub fn current_media_set(&self) -> &MediaSet {
self.current_media_set.uuid() &self.current_media_set
} }
/// Creates a new instance using the media pool configuration /// Creates a new instance using the media pool configuration
@ -111,6 +133,7 @@ impl MediaPool {
state_path: &Path, state_path: &Path,
config: &MediaPoolConfig, config: &MediaPoolConfig,
changer_name: Option<String>, changer_name: Option<String>,
no_media_set_locking: bool, // for list_media()
) -> Result<Self, Error> { ) -> Result<Self, Error> {
let allocation = config.allocation.clone().unwrap_or_else(|| String::from("continue")).parse()?; let allocation = config.allocation.clone().unwrap_or_else(|| String::from("continue")).parse()?;
@ -129,6 +152,7 @@ impl MediaPool {
retention, retention,
changer_name, changer_name,
encrypt_fingerprint, encrypt_fingerprint,
no_media_set_locking,
) )
} }
@ -239,9 +263,20 @@ impl MediaPool {
/// status, so this must not change persistent/saved state. /// status, so this must not change persistent/saved state.
/// ///
/// Returns the reason why we started a new media set (if we do) /// Returns the reason why we started a new media set (if we do)
pub fn start_write_session(&mut self, current_time: i64) -> Result<Option<String>, Error> { pub fn start_write_session(
&mut self,
current_time: i64,
) -> Result<Option<String>, Error> {
let mut create_new_set = match self.current_set_usable() { let _pool_lock = if self.no_media_set_locking {
None
} else {
Some(lock_media_pool(&self.state_path, &self.name)?)
};
self.inventory.reload()?;
let mut create_new_set = match self.current_set_usable() {
Err(err) => { Err(err) => {
Some(err.to_string()) Some(err.to_string())
} }
@ -268,6 +303,14 @@ impl MediaPool {
if create_new_set.is_some() { if create_new_set.is_some() {
let media_set = MediaSet::new(); let media_set = MediaSet::new();
let current_media_set_lock = if self.no_media_set_locking {
None
} else {
Some(lock_media_set(&self.state_path, media_set.uuid(), None)?)
};
self.current_media_set_lock = current_media_set_lock;
self.current_media_set = media_set; self.current_media_set = media_set;
} }
@ -327,6 +370,10 @@ impl MediaPool {
fn add_media_to_current_set(&mut self, mut media_id: MediaId, current_time: i64) -> Result<(), Error> { fn add_media_to_current_set(&mut self, mut media_id: MediaId, current_time: i64) -> Result<(), Error> {
if self.current_media_set_lock.is_none() {
bail!("add_media_to_current_set: media set is not locked - internal error");
}
let seq_nr = self.current_media_set.media_list().len() as u64; let seq_nr = self.current_media_set.media_list().len() as u64;
let pool = self.name.clone(); let pool = self.name.clone();
@ -357,6 +404,10 @@ impl MediaPool {
/// Allocates a writable media to the current media set /// Allocates a writable media to the current media set
pub fn alloc_writable_media(&mut self, current_time: i64) -> Result<Uuid, Error> { pub fn alloc_writable_media(&mut self, current_time: i64) -> Result<Uuid, Error> {
if self.current_media_set_lock.is_none() {
bail!("alloc_writable_media: media set is not locked - internal error");
}
let last_is_writable = self.current_set_usable()?; let last_is_writable = self.current_set_usable()?;
if last_is_writable { if last_is_writable {
@ -367,83 +418,95 @@ impl MediaPool {
// try to find empty media in pool, add to media set // try to find empty media in pool, add to media set
{ // limit pool lock scope
    let _pool_lock = lock_media_pool(&self.state_path, &self.name)?;

    self.inventory.reload()?;

    let media_list = self.list_media();

    let mut empty_media = Vec::new();
    let mut used_media = Vec::new();

    for media in media_list.into_iter() {
        if !self.location_is_available(media.location()) {
            continue;
        }
        // already part of a media set?
        if media.media_set_label().is_some() {
            used_media.push(media);
        } else {
            // only consider writable empty media
            if media.status() == &MediaStatus::Writable {
                empty_media.push(media);
            }
        }
    }

    // sort empty_media, newest first -> oldest last
    empty_media.sort_unstable_by(|a, b| {
        let mut res = b.label().ctime.cmp(&a.label().ctime);
        if res == std::cmp::Ordering::Equal {
            res = b.label().label_text.cmp(&a.label().label_text);
        }
        res
    });

    if let Some(media) = empty_media.pop() {
        // found empty media, add to media set and use it
        let uuid = media.uuid().clone();
        self.add_media_to_current_set(media.into_id(), current_time)?;
        return Ok(uuid);
    }

    println!("no empty media in pool, try to reuse expired media");

    let mut expired_media = Vec::new();

    for media in used_media.into_iter() {
        if let Some(set) = media.media_set_label() {
            if &set.uuid == self.current_media_set.uuid() {
                continue;
            }
        } else {
            continue;
        }
        if self.media_is_expired(&media, current_time) {
            println!("found expired media on media '{}'", media.label_text());
            expired_media.push(media);
        }
    }

    // sort expired_media, newest first -> oldest last
    expired_media.sort_unstable_by(|a, b| {
        let mut res = b.media_set_label().unwrap().ctime.cmp(&a.media_set_label().unwrap().ctime);
        if res == std::cmp::Ordering::Equal {
            res = b.label().label_text.cmp(&a.label().label_text);
        }
        res
    });

    while let Some(media) = expired_media.pop() {
        // check if we can modify the media-set (i.e. skip
        // media used by a restore job)
        if let Ok(_media_set_lock) = lock_media_set(
            &self.state_path,
            &media.media_set_label().unwrap().uuid,
            Some(std::time::Duration::new(0, 0)), // do not wait
        ) {
            println!("reuse expired media '{}'", media.label_text());
            let uuid = media.uuid().clone();
            self.add_media_to_current_set(media.into_id(), current_time)?;
            return Ok(uuid);
        }
    }
}
println!("no expired media in pool, try to find unassigned/free media"); println!("no expired media in pool, try to find unassigned/free media");
// try unassigned media // try unassigned media
let _lock = lock_unassigned_media_pool(&self.state_path)?;
// lock artificial "__UNASSIGNED__" pool to avoid races
let _lock = MediaPool::lock(&self.state_path, "__UNASSIGNED__")?;
self.inventory.reload()?; self.inventory.reload()?;
@ -563,17 +626,6 @@ impl MediaPool {
self.inventory.generate_media_set_name(media_set_uuid, template) self.inventory.generate_media_set_name(media_set_uuid, template)
} }
/// Lock the pool
pub fn lock(base_path: &Path, name: &str) -> Result<MediaPoolLockGuard, Error> {
let mut path = base_path.to_owned();
path.push(format!(".{}", name));
path.set_extension("lck");
let timeout = std::time::Duration::new(10, 0);
let lock = proxmox::tools::fs::open_file_locked(&path, timeout, true)?;
Ok(MediaPoolLockGuard(lock))
}
} }
/// Backup media /// Backup media


@ -0,0 +1,118 @@
use anyhow::{bail, Error};
use proxmox::tools::Uuid;
use crate::{
tape::{
MediaCatalog,
MediaSetCatalog,
},
};
/// Helper to build and query sets of catalogs
///
/// Similar to MediaSetCatalog, but allows modifying the last catalog.
pub struct CatalogSet {
// read only part
pub media_set_catalog: MediaSetCatalog,
// catalog to modify (latest in set)
pub catalog: Option<MediaCatalog>,
}
impl CatalogSet {
/// Create empty instance
pub fn new() -> Self {
Self {
media_set_catalog: MediaSetCatalog::new(),
catalog: None,
}
}
/// Add catalog to the read-only set
pub fn append_read_only_catalog(&mut self, catalog: MediaCatalog) -> Result<(), Error> {
self.media_set_catalog.append_catalog(catalog)
}
/// Test if the catalog already contains a snapshot
pub fn contains_snapshot(&self, store: &str, snapshot: &str) -> bool {
if let Some(ref catalog) = self.catalog {
if catalog.contains_snapshot(store, snapshot) {
return true;
}
}
self.media_set_catalog.contains_snapshot(store, snapshot)
}
/// Test if the catalog already contains a chunk
pub fn contains_chunk(&self, store: &str, digest: &[u8;32]) -> bool {
if let Some(ref catalog) = self.catalog {
if catalog.contains_chunk(store, digest) {
return true;
}
}
self.media_set_catalog.contains_chunk(store, digest)
}
/// Add a new catalog, move the old on to the read-only set
pub fn append_catalog(&mut self, new_catalog: MediaCatalog) -> Result<(), Error> {
// append current catalog to read-only set
if let Some(catalog) = self.catalog.take() {
self.media_set_catalog.append_catalog(catalog)?;
}
// remove read-only version from set (in case it is there)
self.media_set_catalog.remove_catalog(&new_catalog.uuid());
self.catalog = Some(new_catalog);
Ok(())
}
/// Register a snapshot
pub fn register_snapshot(
&mut self,
uuid: Uuid, // Uuid from MediaContentHeader
file_number: u64,
store: &str,
snapshot: &str,
) -> Result<(), Error> {
match self.catalog {
Some(ref mut catalog) => {
catalog.register_snapshot(uuid, file_number, store, snapshot)?;
}
None => bail!("no catalog loaded - internal error"),
}
Ok(())
}
/// Register a chunk archive
pub fn register_chunk_archive(
&mut self,
uuid: Uuid, // Uuid from MediaContentHeader
file_number: u64,
store: &str,
chunk_list: &[[u8; 32]],
) -> Result<(), Error> {
match self.catalog {
Some(ref mut catalog) => {
catalog.start_chunk_archive(uuid, file_number, store)?;
for digest in chunk_list {
catalog.register_chunk(digest)?;
}
catalog.end_chunk_archive()?;
}
None => bail!("no catalog loaded - internal error"),
}
Ok(())
}
/// Commit the catalog changes
pub fn commit(&mut self) -> Result<(), Error> {
if let Some(ref mut catalog) = self.catalog {
catalog.commit()?;
}
Ok(())
}
}
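A condensed usage sketch (not part of the patch; variable names and the query values are illustrative) of how PoolWriter below wires a CatalogSet together:

fn catalog_set_sketch(read_only: Vec<MediaCatalog>, current: MediaCatalog) -> Result<CatalogSet, Error> {
    let mut set = CatalogSet::new();
    for catalog in read_only {
        // catalogs of media already finished in this media set
        set.append_read_only_catalog(catalog)?;
    }
    // catalog of the currently loaded (writable) media
    set.append_catalog(current)?;
    // queries consult both the writable catalog and the read-only set
    let _already_on_tape = set.contains_snapshot("store1", "vm/100/2021-04-01T10:00:00Z");
    Ok(set)
}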


@ -1,6 +1,13 @@
use std::collections::HashSet; mod catalog_set;
pub use catalog_set::*;
mod new_chunks_iterator;
pub use new_chunks_iterator::*;
use std::path::Path; use std::path::Path;
use std::fs::File;
use std::time::SystemTime; use std::time::SystemTime;
use std::sync::{Arc, Mutex};
use anyhow::{bail, Error}; use anyhow::{bail, Error};
@ -18,15 +25,14 @@ use crate::{
COMMIT_BLOCK_SIZE, COMMIT_BLOCK_SIZE,
TapeWrite, TapeWrite,
SnapshotReader, SnapshotReader,
SnapshotChunkIterator,
MediaPool, MediaPool,
MediaId, MediaId,
MediaCatalog, MediaCatalog,
MediaSetCatalog,
file_formats::{ file_formats::{
MediaSetLabel, MediaSetLabel,
ChunkArchiveWriter, ChunkArchiveWriter,
tape_write_snapshot_archive, tape_write_snapshot_archive,
tape_write_catalog,
}, },
drive::{ drive::{
TapeDriver, TapeDriver,
@ -41,35 +47,31 @@ use crate::{
struct PoolWriterState { struct PoolWriterState {
drive: Box<dyn TapeDriver>, drive: Box<dyn TapeDriver>,
catalog: MediaCatalog, // Media Uuid from loaded media
media_uuid: Uuid,
// tell if we already moved to EOM // tell if we already moved to EOM
at_eom: bool, at_eom: bool,
// bytes written after the last tape flush/sync
bytes_written: usize, bytes_written: usize,
} }
impl PoolWriterState {
fn commit(&mut self) -> Result<(), Error> {
self.drive.sync()?; // sync all data to the tape
self.catalog.commit()?; // then commit the catalog
self.bytes_written = 0;
Ok(())
}
}
/// Helper to manage a backup job, writing several tapes of a pool /// Helper to manage a backup job, writing several tapes of a pool
pub struct PoolWriter { pub struct PoolWriter {
pool: MediaPool, pool: MediaPool,
drive_name: String, drive_name: String,
status: Option<PoolWriterState>, status: Option<PoolWriterState>,
media_set_catalog: MediaSetCatalog, catalog_set: Arc<Mutex<CatalogSet>>,
notify_email: Option<String>, notify_email: Option<String>,
} }
impl PoolWriter { impl PoolWriter {
pub fn new(mut pool: MediaPool, drive_name: &str, worker: &WorkerTask, notify_email: Option<String>) -> Result<Self, Error> { pub fn new(
mut pool: MediaPool,
drive_name: &str,
worker: &WorkerTask,
notify_email: Option<String>,
) -> Result<Self, Error> {
let current_time = proxmox::tools::time::epoch_i64(); let current_time = proxmox::tools::time::epoch_i64();
@ -82,26 +84,28 @@ impl PoolWriter {
); );
} }
task_log!(worker, "media set uuid: {}", pool.current_media_set()); let media_set_uuid = pool.current_media_set().uuid();
task_log!(worker, "media set uuid: {}", media_set_uuid);
let mut media_set_catalog = MediaSetCatalog::new(); let mut catalog_set = CatalogSet::new();
// load all catalogs read-only at start // load all catalogs read-only at start
for media_uuid in pool.current_media_list()? { for media_uuid in pool.current_media_list()? {
let media_info = pool.lookup_media(media_uuid).unwrap();
let media_catalog = MediaCatalog::open( let media_catalog = MediaCatalog::open(
Path::new(TAPE_STATUS_DIR), Path::new(TAPE_STATUS_DIR),
&media_uuid, media_info.id(),
false, false,
false, false,
)?; )?;
media_set_catalog.append_catalog(media_catalog)?; catalog_set.append_read_only_catalog(media_catalog)?;
} }
Ok(Self { Ok(Self {
pool, pool,
drive_name: drive_name.to_string(), drive_name: drive_name.to_string(),
status: None, status: None,
media_set_catalog, catalog_set: Arc::new(Mutex::new(catalog_set)),
notify_email, notify_email,
}) })
} }
@ -116,13 +120,8 @@ impl PoolWriter {
Ok(()) Ok(())
} }
pub fn contains_snapshot(&self, snapshot: &str) -> bool { pub fn contains_snapshot(&self, store: &str, snapshot: &str) -> bool {
if let Some(PoolWriterState { ref catalog, .. }) = self.status { self.catalog_set.lock().unwrap().contains_snapshot(store, snapshot)
if catalog.contains_snapshot(snapshot) {
return true;
}
}
self.media_set_catalog.contains_snapshot(snapshot)
} }
/// Eject media and drop PoolWriterState (close drive) /// Eject media and drop PoolWriterState (close drive)
@ -188,16 +187,17 @@ impl PoolWriter {
/// This is done automatically during a backup session, but needs to
/// be called explicitly before dropping the PoolWriter /// be called explicitly before dropping the PoolWriter
pub fn commit(&mut self) -> Result<(), Error> { pub fn commit(&mut self) -> Result<(), Error> {
if let Some(ref mut status) = self.status { if let Some(PoolWriterState {ref mut drive, .. }) = self.status {
status.commit()?; drive.sync()?; // sync all data to the tape
} }
self.catalog_set.lock().unwrap().commit()?; // then commit the catalog
Ok(()) Ok(())
} }
/// Load a writable media into the drive /// Load a writable media into the drive
pub fn load_writable_media(&mut self, worker: &WorkerTask) -> Result<Uuid, Error> { pub fn load_writable_media(&mut self, worker: &WorkerTask) -> Result<Uuid, Error> {
let last_media_uuid = match self.status { let last_media_uuid = match self.status {
Some(PoolWriterState { ref catalog, .. }) => Some(catalog.uuid().clone()), Some(PoolWriterState { ref media_uuid, ..}) => Some(media_uuid.clone()),
None => None, None => None,
}; };
@ -217,13 +217,11 @@ impl PoolWriter {
task_log!(worker, "allocated new writable media '{}'", media.label_text()); task_log!(worker, "allocated new writable media '{}'", media.label_text());
// remove read-only catalog (we store a writable version in status) if let Some(PoolWriterState {mut drive, .. }) = self.status.take() {
self.media_set_catalog.remove_catalog(&media_uuid); if last_media_uuid.is_some() {
task_log!(worker, "eject current media");
if let Some(PoolWriterState {mut drive, catalog, .. }) = self.status.take() { drive.eject_media()?;
self.media_set_catalog.append_catalog(catalog)?; }
task_log!(worker, "eject current media");
drive.eject_media()?;
} }
let (drive_config, _digest) = crate::config::drive::config()?; let (drive_config, _digest) = crate::config::drive::config()?;
@ -242,13 +240,15 @@ impl PoolWriter {
} }
} }
let catalog = update_media_set_label( let (catalog, is_new_media) = update_media_set_label(
worker, worker,
drive.as_mut(), drive.as_mut(),
old_media_id.media_set_label, old_media_id.media_set_label,
media.id(), media.id(),
)?; )?;
self.catalog_set.lock().unwrap().append_catalog(catalog)?;
let media_set = media.media_set_label().clone().unwrap(); let media_set = media.media_set_label().clone().unwrap();
let encrypt_fingerprint = media_set let encrypt_fingerprint = media_set
@ -258,17 +258,160 @@ impl PoolWriter {
drive.set_encryption(encrypt_fingerprint)?; drive.set_encryption(encrypt_fingerprint)?;
self.status = Some(PoolWriterState { drive, catalog, at_eom: false, bytes_written: 0 }); self.status = Some(PoolWriterState {
drive,
media_uuid: media_uuid.clone(),
at_eom: false,
bytes_written: 0,
});
if is_new_media {
// add catalogs from previous media
self.append_media_set_catalogs(worker)?;
}
Ok(media_uuid) Ok(media_uuid)
} }
/// uuid of currently loaded BackupMedia fn open_catalog_file(uuid: &Uuid) -> Result<File, Error> {
pub fn current_media_uuid(&self) -> Result<&Uuid, Error> {
match self.status { let status_path = Path::new(TAPE_STATUS_DIR);
Some(PoolWriterState { ref catalog, ..}) => Ok(catalog.uuid()), let mut path = status_path.to_owned();
None => bail!("PoolWriter - no media loaded"), path.push(uuid.to_string());
path.set_extension("log");
let file = std::fs::OpenOptions::new()
.read(true)
.open(&path)?;
Ok(file)
}
// Check if tape is loaded, then move to EOM (if not already there)
//
// Returns the tape position at EOM.
fn prepare_tape_write(
status: &mut PoolWriterState,
worker: &WorkerTask,
) -> Result<u64, Error> {
if !status.at_eom {
worker.log(String::from("moving to end of media"));
status.drive.move_to_eom()?;
status.at_eom = true;
} }
let current_file_number = status.drive.current_file_number()?;
if current_file_number < 2 {
bail!("got strange file position number from drive ({})", current_file_number);
}
Ok(current_file_number)
}
/// Move to EOM (if not already there), then write the current
/// catalog to the tape. On success, this returns 'Ok(true)'.
/// Please note that this may fail when there is not enough space
/// on the media (return value 'Ok(false)'). In that case, the
/// archive is marked incomplete. The caller should mark the media
/// as full and try again using another media.
pub fn append_catalog_archive(
&mut self,
worker: &WorkerTask,
) -> Result<bool, Error> {
let status = match self.status {
Some(ref mut status) => status,
None => bail!("PoolWriter - no media loaded"),
};
Self::prepare_tape_write(status, worker)?;
let catalog_set = self.catalog_set.lock().unwrap();
let catalog = match catalog_set.catalog {
None => bail!("append_catalog_archive failed: no catalog - internal error"),
Some(ref catalog) => catalog,
};
let media_set = self.pool.current_media_set();
let media_list = media_set.media_list();
let uuid = match media_list.last() {
None => bail!("got empty media list - internal error"),
Some(None) => bail!("got incomplete media list - internal error"),
Some(Some(last_uuid)) => {
if last_uuid != catalog.uuid() {
bail!("got wrong media - internal error");
}
last_uuid
}
};
let seq_nr = media_list.len() - 1;
let mut writer: Box<dyn TapeWrite> = status.drive.write_file()?;
let mut file = Self::open_catalog_file(uuid)?;
let done = tape_write_catalog(
writer.as_mut(),
uuid,
media_set.uuid(),
seq_nr,
&mut file,
)?.is_some();
Ok(done)
}
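A caller-side sketch (not part of the patch; the actual "mark media full and switch tapes" handling is only hinted at) of how the Ok(false) case can be treated:

fn flush_catalog_sketch(pool_writer: &mut PoolWriter, worker: &WorkerTask) -> Result<(), Error> {
    if !pool_writer.append_catalog_archive(worker)? {
        // not enough space: the backup job should mark the media as full,
        // load the next writable media and write the catalog there instead
        worker.log(String::from("media full - catalog will be written to next media"));
    }
    Ok(())
}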
// Append catalogs for all previous media in set (without last)
fn append_media_set_catalogs(
&mut self,
worker: &WorkerTask,
) -> Result<(), Error> {
let media_set = self.pool.current_media_set();
let mut media_list = &media_set.media_list()[..];
if media_list.len() < 2 {
return Ok(());
}
media_list = &media_list[..(media_list.len()-1)];
let status = match self.status {
Some(ref mut status) => status,
None => bail!("PoolWriter - no media loaded"),
};
Self::prepare_tape_write(status, worker)?;
for (seq_nr, uuid) in media_list.iter().enumerate() {
let uuid = match uuid {
None => bail!("got incomplete media list - internal error"),
Some(uuid) => uuid,
};
let mut writer: Box<dyn TapeWrite> = status.drive.write_file()?;
let mut file = Self::open_catalog_file(uuid)?;
task_log!(worker, "write catalog for previous media: {}", uuid);
if tape_write_catalog(
writer.as_mut(),
uuid,
media_set.uuid(),
seq_nr,
&mut file,
)?.is_none() {
bail!("got EOM while writing start catalog");
}
}
Ok(())
} }
/// Move to EOM (if not already there), then creates a new snapshot /// Move to EOM (if not already there), then creates a new snapshot
@ -292,25 +435,17 @@ impl PoolWriter {
None => bail!("PoolWriter - no media loaded"), None => bail!("PoolWriter - no media loaded"),
}; };
if !status.at_eom { let current_file_number = Self::prepare_tape_write(status, worker)?;
worker.log(String::from("moving to end of media"));
status.drive.move_to_eom()?;
status.at_eom = true;
}
let current_file_number = status.drive.current_file_number()?;
if current_file_number < 2 {
bail!("got strange file position number from drive ({})", current_file_number);
}
let (done, bytes_written) = { let (done, bytes_written) = {
let mut writer: Box<dyn TapeWrite> = status.drive.write_file()?; let mut writer: Box<dyn TapeWrite> = status.drive.write_file()?;
match tape_write_snapshot_archive(writer.as_mut(), snapshot_reader)? { match tape_write_snapshot_archive(writer.as_mut(), snapshot_reader)? {
Some(content_uuid) => { Some(content_uuid) => {
status.catalog.register_snapshot( self.catalog_set.lock().unwrap().register_snapshot(
content_uuid, content_uuid,
current_file_number, current_file_number,
&snapshot_reader.datastore_name().to_string(),
&snapshot_reader.snapshot().to_string(), &snapshot_reader.snapshot().to_string(),
)?; )?;
(true, writer.bytes_written()) (true, writer.bytes_written())
@ -324,7 +459,7 @@ impl PoolWriter {
let request_sync = status.bytes_written >= COMMIT_BLOCK_SIZE; let request_sync = status.bytes_written >= COMMIT_BLOCK_SIZE;
if !done || request_sync { if !done || request_sync {
status.commit()?; self.commit()?;
} }
Ok((done, bytes_written)) Ok((done, bytes_written))
@ -337,8 +472,8 @@ impl PoolWriter {
pub fn append_chunk_archive( pub fn append_chunk_archive(
&mut self, &mut self,
worker: &WorkerTask, worker: &WorkerTask,
datastore: &DataStore, chunk_iter: &mut std::iter::Peekable<NewChunksIterator>,
chunk_iter: &mut std::iter::Peekable<SnapshotChunkIterator>, store: &str,
) -> Result<(bool, usize), Error> { ) -> Result<(bool, usize), Error> {
let status = match self.status { let status = match self.status {
@ -346,16 +481,8 @@ impl PoolWriter {
None => bail!("PoolWriter - no media loaded"), None => bail!("PoolWriter - no media loaded"),
}; };
if !status.at_eom { let current_file_number = Self::prepare_tape_write(status, worker)?;
worker.log(String::from("moving to end of media"));
status.drive.move_to_eom()?;
status.at_eom = true;
}
let current_file_number = status.drive.current_file_number()?;
if current_file_number < 2 {
bail!("got strange file position number from drive ({})", current_file_number);
}
let writer = status.drive.write_file()?; let writer = status.drive.write_file()?;
let start_time = SystemTime::now(); let start_time = SystemTime::now();
@ -363,10 +490,8 @@ impl PoolWriter {
let (saved_chunks, content_uuid, leom, bytes_written) = write_chunk_archive( let (saved_chunks, content_uuid, leom, bytes_written) = write_chunk_archive(
worker, worker,
writer, writer,
datastore,
chunk_iter, chunk_iter,
&self.media_set_catalog, store,
&status.catalog,
MAX_CHUNK_ARCHIVE_SIZE, MAX_CHUNK_ARCHIVE_SIZE,
)?; )?;
@ -374,43 +499,48 @@ impl PoolWriter {
let elapsed = start_time.elapsed()?.as_secs_f64(); let elapsed = start_time.elapsed()?.as_secs_f64();
worker.log(format!( worker.log(format!(
"wrote {} chunks ({:.2} MiB at {:.2} MiB/s)", "wrote {} chunks ({:.2} MB at {:.2} MB/s)",
saved_chunks.len(), saved_chunks.len(),
bytes_written as f64 / (1024.0*1024.0), bytes_written as f64 /1_000_000.0,
(bytes_written as f64)/(1024.0*1024.0*elapsed), (bytes_written as f64)/(1_000_000.0*elapsed),
)); ));
let request_sync = status.bytes_written >= COMMIT_BLOCK_SIZE; let request_sync = status.bytes_written >= COMMIT_BLOCK_SIZE;
// register chunks in media_catalog // register chunks in media_catalog
status.catalog.start_chunk_archive(content_uuid, current_file_number)?; self.catalog_set.lock().unwrap()
for digest in saved_chunks { .register_chunk_archive(content_uuid, current_file_number, store, &saved_chunks)?;
status.catalog.register_chunk(&digest)?;
}
status.catalog.end_chunk_archive()?;
if leom || request_sync { if leom || request_sync {
status.commit()?; self.commit()?;
} }
Ok((leom, bytes_written)) Ok((leom, bytes_written))
} }
pub fn spawn_chunk_reader_thread(
&self,
datastore: Arc<DataStore>,
snapshot_reader: Arc<Mutex<SnapshotReader>>,
) -> Result<(std::thread::JoinHandle<()>, NewChunksIterator), Error> {
NewChunksIterator::spawn(
datastore,
snapshot_reader,
Arc::clone(&self.catalog_set),
)
}
} }
/// write up to <max_size> of chunks /// write up to <max_size> of chunks
fn write_chunk_archive<'a>( fn write_chunk_archive<'a>(
_worker: &WorkerTask, _worker: &WorkerTask,
writer: Box<dyn 'a + TapeWrite>, writer: Box<dyn 'a + TapeWrite>,
datastore: &DataStore, chunk_iter: &mut std::iter::Peekable<NewChunksIterator>,
chunk_iter: &mut std::iter::Peekable<SnapshotChunkIterator>, store: &str,
media_set_catalog: &MediaSetCatalog,
media_catalog: &MediaCatalog,
max_size: usize, max_size: usize,
) -> Result<(Vec<[u8;32]>, Uuid, bool, usize), Error> { ) -> Result<(Vec<[u8;32]>, Uuid, bool, usize), Error> {
let (mut writer, content_uuid) = ChunkArchiveWriter::new(writer, true)?; let (mut writer, content_uuid) = ChunkArchiveWriter::new(writer, store, true)?;
let mut chunk_index: HashSet<[u8;32]> = HashSet::new();
// we want to get the chunk list in correct order // we want to get the chunk list in correct order
let mut chunk_list: Vec<[u8;32]> = Vec::new(); let mut chunk_list: Vec<[u8;32]> = Vec::new();
@ -418,26 +548,21 @@ fn write_chunk_archive<'a>(
let mut leom = false; let mut leom = false;
loop { loop {
let digest = match chunk_iter.next() { let (digest, blob) = match chunk_iter.peek() {
None => break, None => break,
Some(digest) => digest?, Some(Ok((digest, blob))) => (digest, blob),
Some(Err(err)) => bail!("{}", err),
}; };
if media_catalog.contains_chunk(&digest)
|| chunk_index.contains(&digest)
|| media_set_catalog.contains_chunk(&digest)
{
continue;
}
let blob = datastore.load_chunk(&digest)?; //println!("CHUNK {} size {}", proxmox::tools::digest_to_hex(digest), blob.raw_size());
//println!("CHUNK {} size {}", proxmox::tools::digest_to_hex(&digest), blob.raw_size());
match writer.try_write_chunk(&digest, &blob) { match writer.try_write_chunk(&digest, &blob) {
Ok(true) => { Ok(true) => {
chunk_index.insert(digest); chunk_list.push(*digest);
chunk_list.push(digest); chunk_iter.next(); // consume
} }
Ok(false) => { Ok(false) => {
// Note; we do not consume the chunk (no chunk_iter.next())
leom = true; leom = true;
break; break;
} }
@ -463,7 +588,7 @@ fn update_media_set_label(
drive: &mut dyn TapeDriver, drive: &mut dyn TapeDriver,
old_set: Option<MediaSetLabel>, old_set: Option<MediaSetLabel>,
media_id: &MediaId, media_id: &MediaId,
) -> Result<MediaCatalog, Error> { ) -> Result<(MediaCatalog, bool), Error> {
let media_catalog; let media_catalog;
@ -486,11 +611,12 @@ fn update_media_set_label(
let status_path = Path::new(TAPE_STATUS_DIR); let status_path = Path::new(TAPE_STATUS_DIR);
match old_set { let new_media = match old_set {
None => { None => {
worker.log("wrinting new media set label".to_string()); worker.log("wrinting new media set label".to_string());
drive.write_media_set_label(new_set, key_config.as_ref())?; drive.write_media_set_label(new_set, key_config.as_ref())?;
media_catalog = MediaCatalog::overwrite(status_path, media_id, false)?; media_catalog = MediaCatalog::overwrite(status_path, media_id, false)?;
true
} }
Some(media_set_label) => { Some(media_set_label) => {
if new_set.uuid == media_set_label.uuid { if new_set.uuid == media_set_label.uuid {
@ -501,7 +627,11 @@ fn update_media_set_label(
if new_set.encryption_key_fingerprint != media_set_label.encryption_key_fingerprint { if new_set.encryption_key_fingerprint != media_set_label.encryption_key_fingerprint {
bail!("detected changed encryption fingerprint - internal error"); bail!("detected changed encryption fingerprint - internal error");
} }
media_catalog = MediaCatalog::open(status_path, &media_id.label.uuid, true, false)?; media_catalog = MediaCatalog::open(status_path, &media_id, true, false)?;
// todo: verify last content/media_catalog somehow?
false
} else { } else {
worker.log( worker.log(
format!("wrinting new media set label (overwrite '{}/{}')", format!("wrinting new media set label (overwrite '{}/{}')",
@ -510,12 +640,10 @@ fn update_media_set_label(
drive.write_media_set_label(new_set, key_config.as_ref())?; drive.write_media_set_label(new_set, key_config.as_ref())?;
media_catalog = MediaCatalog::overwrite(status_path, media_id, false)?; media_catalog = MediaCatalog::overwrite(status_path, media_id, false)?;
true
} }
} }
} };
// todo: verify last content/media_catalog somehow? Ok((media_catalog, new_media))
drive.move_to_eom()?; // just to be sure
Ok(media_catalog)
} }

View File

@ -0,0 +1,99 @@
use std::collections::HashSet;
use std::sync::{Arc, Mutex};
use anyhow::{format_err, Error};
use crate::{
backup::{
DataStore,
DataBlob,
},
tape::{
CatalogSet,
SnapshotReader,
},
};
/// Chunk iterator that uses a separate thread to read chunks
///
/// The iterator skips duplicate chunks and chunks already in the
/// catalog.
pub struct NewChunksIterator {
rx: std::sync::mpsc::Receiver<Result<Option<([u8; 32], DataBlob)>, Error>>,
}
impl NewChunksIterator {
/// Creates the iterator, spawning a new thread
///
/// Make sure to join() the returned thread handle.
pub fn spawn(
datastore: Arc<DataStore>,
snapshot_reader: Arc<Mutex<SnapshotReader>>,
catalog_set: Arc<Mutex<CatalogSet>>,
) -> Result<(std::thread::JoinHandle<()>, Self), Error> {
let (tx, rx) = std::sync::mpsc::sync_channel(3);
let reader_thread = std::thread::spawn(move || {
let snapshot_reader = snapshot_reader.lock().unwrap();
let mut chunk_index: HashSet<[u8;32]> = HashSet::new();
let datastore_name = snapshot_reader.datastore_name();
let result: Result<(), Error> = proxmox::try_block!({
let mut chunk_iter = snapshot_reader.chunk_iterator()?;
loop {
let digest = match chunk_iter.next() {
None => {
tx.send(Ok(None)).unwrap();
break;
}
Some(digest) => digest?,
};
if chunk_index.contains(&digest) {
continue;
}
if catalog_set.lock().unwrap().contains_chunk(&datastore_name, &digest) {
continue;
};
let blob = datastore.load_chunk(&digest)?;
//println!("LOAD CHUNK {}", proxmox::tools::digest_to_hex(&digest));
tx.send(Ok(Some((digest, blob)))).unwrap();
chunk_index.insert(digest);
}
Ok(())
});
if let Err(err) = result {
tx.send(Err(err)).unwrap();
}
});
Ok((reader_thread, Self { rx }))
}
}
// We do not use Receiver::into_iter(). The manual implementation
// returns a simpler type.
impl Iterator for NewChunksIterator {
type Item = Result<([u8; 32], DataBlob), Error>;
fn next(&mut self) -> Option<Self::Item> {
match self.rx.recv() {
Ok(Ok(None)) => None,
Ok(Ok(Some((digest, blob)))) => Some(Ok((digest, blob))),
Ok(Err(err)) => Some(Err(err)),
Err(_) => Some(Err(format_err!("reader thread failed"))),
}
}
}
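For orientation, a hedged sketch of how this iterator is meant to be driven together with PoolWriter::spawn_chunk_reader_thread() and append_chunk_archive(); the surrounding variables (datastore, snapshot_reader, store, worker) and the handling of a full tape are illustrative only:

// Spawn the reader thread, then feed the resulting iterator into the
// pool writer until every new chunk has been written to tape.
let (reader_thread, chunk_iter) =
    pool_writer.spawn_chunk_reader_thread(datastore.clone(), snapshot_reader)?;
let mut chunk_iter = chunk_iter.peekable();

loop {
    let (leom, _bytes) =
        pool_writer.append_chunk_archive(&worker, &mut chunk_iter, store)?;
    if chunk_iter.peek().is_none() {
        break; // iterator exhausted - all new chunks are on tape
    }
    if leom {
        // logical end of media: a real caller would switch to the next tape here
    }
}

// The spawn() documentation asks callers to join the reader thread.
if reader_thread.join().is_err() {
    bail!("chunk reader thread failed");
}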

View File

@ -42,6 +42,7 @@ fn test_alloc_writable_media_1() -> Result<(), Error> {
RetentionPolicy::KeepForever, RetentionPolicy::KeepForever,
None, None,
None, None,
false,
)?; )?;
ctime += 10; ctime += 10;
@ -71,6 +72,7 @@ fn test_alloc_writable_media_2() -> Result<(), Error> {
RetentionPolicy::KeepForever, RetentionPolicy::KeepForever,
None, None,
None, None,
false,
)?; )?;
let ctime = 10; let ctime = 10;
@ -110,6 +112,7 @@ fn test_alloc_writable_media_3() -> Result<(), Error> {
RetentionPolicy::KeepForever, RetentionPolicy::KeepForever,
None, None,
None, None,
false,
)?; )?;
let mut ctime = 10; let mut ctime = 10;
@ -156,6 +159,7 @@ fn test_alloc_writable_media_4() -> Result<(), Error> {
RetentionPolicy::ProtectFor(parse_time_span("12s")?), RetentionPolicy::ProtectFor(parse_time_span("12s")?),
None, None,
None, None,
false,
)?; )?;
let start_time = 10; let start_time = 10;

View File

@ -69,6 +69,7 @@ fn test_compute_media_state() -> Result<(), Error> {
RetentionPolicy::KeepForever, RetentionPolicy::KeepForever,
None, None,
None, None,
false,
)?; )?;
// tape1 is free // tape1 is free
@ -116,6 +117,7 @@ fn test_media_expire_time() -> Result<(), Error> {
RetentionPolicy::ProtectFor(span), RetentionPolicy::ProtectFor(span),
None, None,
None, None,
false,
)?; )?;
assert_eq!(pool.lookup_media(&tape0_uuid)?.status(), &MediaStatus::Full); assert_eq!(pool.lookup_media(&tape0_uuid)?.status(), &MediaStatus::Full);

View File

@ -49,6 +49,7 @@ fn test_current_set_usable_1() -> Result<(), Error> {
RetentionPolicy::KeepForever, RetentionPolicy::KeepForever,
None, None,
None, None,
false,
)?; )?;
assert_eq!(pool.current_set_usable()?, false); assert_eq!(pool.current_set_usable()?, false);
@ -75,6 +76,7 @@ fn test_current_set_usable_2() -> Result<(), Error> {
RetentionPolicy::KeepForever, RetentionPolicy::KeepForever,
None, None,
None, None,
false,
)?; )?;
assert_eq!(pool.current_set_usable()?, false); assert_eq!(pool.current_set_usable()?, false);
@ -103,6 +105,7 @@ fn test_current_set_usable_3() -> Result<(), Error> {
RetentionPolicy::KeepForever, RetentionPolicy::KeepForever,
Some(String::from("changer1")), Some(String::from("changer1")),
None, None,
false,
)?; )?;
assert_eq!(pool.current_set_usable()?, false); assert_eq!(pool.current_set_usable()?, false);
@ -131,6 +134,7 @@ fn test_current_set_usable_4() -> Result<(), Error> {
RetentionPolicy::KeepForever, RetentionPolicy::KeepForever,
None, None,
None, None,
false,
)?; )?;
assert_eq!(pool.current_set_usable()?, true); assert_eq!(pool.current_set_usable()?, true);
@ -161,6 +165,7 @@ fn test_current_set_usable_5() -> Result<(), Error> {
RetentionPolicy::KeepForever, RetentionPolicy::KeepForever,
None, None,
None, None,
false,
)?; )?;
assert_eq!(pool.current_set_usable()?, true); assert_eq!(pool.current_set_usable()?, true);
@ -189,6 +194,7 @@ fn test_current_set_usable_6() -> Result<(), Error> {
RetentionPolicy::KeepForever, RetentionPolicy::KeepForever,
None, None,
None, None,
false,
)?; )?;
assert!(pool.current_set_usable().is_err()); assert!(pool.current_set_usable().is_err());
@ -223,6 +229,7 @@ fn test_current_set_usable_7() -> Result<(), Error> {
RetentionPolicy::KeepForever, RetentionPolicy::KeepForever,
None, None,
None, None,
false,
)?; )?;
assert!(pool.current_set_usable().is_err()); assert!(pool.current_set_usable().is_err());

View File

@ -57,6 +57,9 @@ pub use async_channel_writer::AsyncChannelWriter;
mod std_channel_writer; mod std_channel_writer;
pub use std_channel_writer::StdChannelWriter; pub use std_channel_writer::StdChannelWriter;
mod tokio_writer_adapter;
pub use tokio_writer_adapter::TokioWriterAdapter;
mod process_locker; mod process_locker;
pub use process_locker::{ProcessLocker, ProcessLockExclusiveGuard, ProcessLockSharedGuard}; pub use process_locker::{ProcessLocker, ProcessLockExclusiveGuard, ProcessLockSharedGuard};

View File

@ -44,32 +44,38 @@ impl ToString for SenseInfo {
} }
#[derive(Debug)] #[derive(Debug)]
pub struct ScsiError { pub enum ScsiError {
pub error: Error, Error(Error),
pub sense: Option<SenseInfo>, Sense(SenseInfo),
} }
impl std::fmt::Display for ScsiError { impl std::fmt::Display for ScsiError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.error) match self {
ScsiError::Error(err) => write!(f, "{}", err),
ScsiError::Sense(sense) => write!(f, "{}", sense.to_string()),
}
} }
} }
impl std::error::Error for ScsiError { impl std::error::Error for ScsiError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> { fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
self.error.source() match self {
ScsiError::Error(err) => err.source(),
ScsiError::Sense(_) => None,
}
} }
} }
impl From<anyhow::Error> for ScsiError { impl From<anyhow::Error> for ScsiError {
fn from(error: anyhow::Error) -> Self { fn from(error: anyhow::Error) -> Self {
Self { error, sense: None } Self::Error(error)
} }
} }
impl From<std::io::Error> for ScsiError { impl From<std::io::Error> for ScsiError {
fn from(error: std::io::Error) -> Self { fn from(error: std::io::Error) -> Self {
Self { error: error.into(), sense: None } Self::Error(error.into())
} }
} }
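Since ScsiError is now an enum, callers can match on it directly; a small illustrative sketch (the describe() helper is not part of this changeset):

// Sketch: distinguish real sense data from generic errors.
fn describe(err: &ScsiError) -> String {
    match err {
        ScsiError::Sense(sense) => format!("SCSI sense data: {}", sense.to_string()),
        ScsiError::Error(other) => format!("SCSI command failed: {}", other),
    }
}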
@ -483,10 +489,7 @@ impl <'a, F: AsRawFd> SgRaw<'a, F> {
} }
}; };
return Err(ScsiError { return Err(ScsiError::Sense(sense));
error: format_err!("{}", sense.to_string()),
sense: Some(sense),
});
} }
SCSI_PT_RESULT_TRANSPORT_ERR => return Err(format_err!("scsi command failed: transport error").into()), SCSI_PT_RESULT_TRANSPORT_ERR => return Err(format_err!("scsi command failed: transport error").into()),
SCSI_PT_RESULT_OS_ERR => { SCSI_PT_RESULT_OS_ERR => {
@ -506,7 +509,7 @@ impl <'a, F: AsRawFd> SgRaw<'a, F> {
} }
if self.buffer.len() < 16 { if self.buffer.len() < 16 {
return Err(format_err!("output buffer too small").into()); return Err(format_err!("input buffer too small").into());
} }
let mut ptvp = self.create_scsi_pt_obj()?; let mut ptvp = self.create_scsi_pt_obj()?;
@ -530,6 +533,45 @@ impl <'a, F: AsRawFd> SgRaw<'a, F> {
Ok(&self.buffer[..data_len]) Ok(&self.buffer[..data_len])
} }
/// Run the specified RAW SCSI command, use data as input buffer
pub fn do_in_command<'b>(&mut self, cmd: &[u8], data: &'b mut [u8]) -> Result<&'b [u8], ScsiError> {
if !unsafe { sg_is_scsi_cdb(cmd.as_ptr(), cmd.len() as c_int) } {
return Err(format_err!("no valid SCSI command").into());
}
if data.len() == 0 {
return Err(format_err!("got zero-sized input buffer").into());
}
let mut ptvp = self.create_scsi_pt_obj()?;
unsafe {
set_scsi_pt_data_in(
ptvp.as_mut_ptr(),
data.as_mut_ptr(),
data.len() as c_int,
);
set_scsi_pt_cdb(
ptvp.as_mut_ptr(),
cmd.as_ptr(),
cmd.len() as c_int,
);
};
self.do_scsi_pt_checked(&mut ptvp)?;
let resid = unsafe { get_scsi_pt_resid(ptvp.as_ptr()) } as usize;
if resid > data.len() {
return Err(format_err!("do_scsi_pt failed - got strange resid (value too big)").into());
}
let data_len = data.len() - resid;
Ok(&data[..data_len])
}
/// Run dataout command /// Run dataout command
/// ///
/// Note: use alloc_page_aligned_buffer to alloc data transfer buffer /// Note: use alloc_page_aligned_buffer to alloc data transfer buffer

View File

@ -318,8 +318,11 @@ pub fn update_apt_auth(key: Option<String>, password: Option<String>) -> Result<
replace_file(auth_conf, conf.as_bytes(), file_opts) replace_file(auth_conf, conf.as_bytes(), file_opts)
.map_err(|e| format_err!("Error saving apt auth config - {}", e))?; .map_err(|e| format_err!("Error saving apt auth config - {}", e))?;
} }
_ => nix::unistd::unlink(auth_conf) _ => match nix::unistd::unlink(auth_conf) {
.map_err(|e| format_err!("Error clearing apt auth config - {}", e))?, Ok(()) => Ok(()),
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => Ok(()), // ignore not existing
Err(err) => Err(err),
}.map_err(|e| format_err!("Error clearing apt auth config - {}", e))?,
} }
Ok(()) Ok(())
} }
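The same "ignore a missing file" pattern, as a standalone sketch using std::fs instead of the nix crate (remove_if_exists is an illustrative helper, not part of this changeset):

use std::io;
use std::path::Path;

// Remove a file, but treat "file not found" as success - mirroring the
// ENOENT handling added to update_apt_auth above.
fn remove_if_exists(path: &Path) -> io::Result<()> {
    match std::fs::remove_file(path) {
        Ok(()) => Ok(()),
        Err(err) if err.kind() == io::ErrorKind::NotFound => Ok(()),
        Err(err) => Err(err),
    }
}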

View File

@ -141,6 +141,88 @@ impl From<TimeSpan> for f64 {
} }
} }
impl From<std::time::Duration> for TimeSpan {
fn from(duration: std::time::Duration) -> Self {
let mut duration = duration.as_nanos();
let nsec = (duration % 1000) as u64;
duration /= 1000;
let usec = (duration % 1000) as u64;
duration /= 1000;
let msec = (duration % 1000) as u64;
duration /= 1000;
let seconds = (duration % 60) as u64;
duration /= 60;
let minutes = (duration % 60) as u64;
duration /= 60;
let hours = (duration % 24) as u64;
duration /= 24;
let years = (duration as f64 / 365.25) as u64;
let ydays = (duration as f64 % 365.25) as u64;
let months = (ydays as f64 / 30.44) as u64;
let mdays = (ydays as f64 % 30.44) as u64;
let weeks = mdays / 7;
let days = mdays % 7;
Self {
nsec,
usec,
msec,
seconds,
minutes,
hours,
days,
weeks,
months,
years,
}
}
}
impl std::fmt::Display for TimeSpan {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> {
let mut first = true;
{ // block scope for mutable borrows
let mut do_write = |v: u64, unit: &str| -> Result<(), std::fmt::Error> {
if !first {
write!(f, " ")?;
}
first = false;
write!(f, "{}{}", v, unit)
};
if self.years > 0 {
do_write(self.years, "y")?;
}
if self.months > 0 {
do_write(self.months, "m")?;
}
if self.weeks > 0 {
do_write(self.weeks, "w")?;
}
if self.days > 0 {
do_write(self.days, "d")?;
}
if self.hours > 0 {
do_write(self.hours, "h")?;
}
if self.minutes > 0 {
do_write(self.minutes, "min")?;
}
}
if !first {
write!(f, " ")?;
}
let seconds = self.seconds as f64 + (self.msec as f64 / 1000.0);
if seconds >= 0.1 {
if seconds >= 1.0 || !first {
write!(f, "{:.0}s", seconds)?;
} else {
write!(f, "{:.1}s", seconds)?;
}
} else if first {
write!(f, "<0.1s")?;
}
Ok(())
}
}
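A quick worked example of what the conversion and formatter above produce; a sketch that assumes TimeSpan and the two impls are in scope:

use std::time::Duration;

#[test]
fn timespan_display_example() {
    // 1 day + 1 hour + 1 minute + 1 second
    let span: TimeSpan = Duration::from_secs(24 * 3600 + 3600 + 60 + 1).into();
    // From<Duration> splits the value into calendar-like units and
    // Display prints only the non-zero ones.
    assert_eq!(span.to_string(), "1d 1h 1min 1s");
}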
pub fn verify_time_span(i: &str) -> Result<(), Error> { pub fn verify_time_span(i: &str) -> Result<(), Error> {
parse_time_span(i)?; parse_time_span(i)?;

View File

@ -0,0 +1,26 @@
use std::io::Write;
use tokio::task::block_in_place;
/// Wrapper around a writer which implements Write
///
/// Wraps each write in a 'block_in_place' call so that
/// any (blocking) writer can be safely used in an async context
/// inside a tokio runtime.
pub struct TokioWriterAdapter<W: Write>(W);
impl<W: Write> TokioWriterAdapter<W> {
pub fn new(writer: W) -> Self {
Self(writer)
}
}
impl<W: Write> Write for TokioWriterAdapter<W> {
fn write(&mut self, buf: &[u8]) -> Result<usize, std::io::Error> {
block_in_place(|| self.0.write(buf))
}
fn flush(&mut self) -> Result<(), std::io::Error> {
block_in_place(|| self.0.flush())
}
}
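A minimal usage sketch, assuming the code runs on a tokio multi-thread runtime (block_in_place would panic on a current-thread runtime); the file path is purely illustrative:

use std::fs::File;
use std::io::Write;

async fn dump_blob(data: &[u8]) -> Result<(), std::io::Error> {
    // Wrap the blocking std writer so it is safe to use from async code.
    let file = File::create("/tmp/example.blob")?;
    let mut writer = TokioWriterAdapter::new(file);
    writer.write_all(data)?; // each write runs inside block_in_place
    writer.flush()?;
    Ok(())
}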

View File

@ -80,6 +80,7 @@ struct Zip64FieldWithOffset {
uncompressed_size: u64, uncompressed_size: u64,
compressed_size: u64, compressed_size: u64,
offset: u64, offset: u64,
start_disk: u32,
} }
#[derive(Endian)] #[derive(Endian)]
@ -300,10 +301,26 @@ impl ZipEntry {
let filename_len = filename.len(); let filename_len = filename.len();
let header_size = size_of::<CentralDirectoryFileHeader>(); let header_size = size_of::<CentralDirectoryFileHeader>();
let zip_field_size = size_of::<Zip64FieldWithOffset>(); let zip_field_size = size_of::<Zip64FieldWithOffset>();
let size: usize = header_size + filename_len + zip_field_size; let mut size: usize = header_size + filename_len;
let (date, time) = epoch_to_dos(self.mtime); let (date, time) = epoch_to_dos(self.mtime);
let (compressed_size, uncompressed_size, offset, need_zip64) = if self.compressed_size
>= (u32::MAX as u64)
|| self.uncompressed_size >= (u32::MAX as u64)
|| self.offset >= (u32::MAX as u64)
{
size += zip_field_size;
(0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF, true)
} else {
(
self.compressed_size as u32,
self.uncompressed_size as u32,
self.offset as u32,
false,
)
};
write_struct( write_struct(
&mut buf, &mut buf,
CentralDirectoryFileHeader { CentralDirectoryFileHeader {
@ -315,32 +332,35 @@ impl ZipEntry {
time, time,
date, date,
crc32: self.crc32, crc32: self.crc32,
compressed_size: 0xFFFFFFFF, compressed_size,
uncompressed_size: 0xFFFFFFFF, uncompressed_size,
filename_len: filename_len as u16, filename_len: filename_len as u16,
extra_field_len: zip_field_size as u16, extra_field_len: if need_zip64 { zip_field_size as u16 } else { 0 },
comment_len: 0, comment_len: 0,
start_disk: 0, start_disk: 0,
internal_flags: 0, internal_flags: 0,
external_flags: (self.mode as u32) << 16 | (!self.is_file as u32) << 4, external_flags: (self.mode as u32) << 16 | (!self.is_file as u32) << 4,
offset: 0xFFFFFFFF, offset,
}, },
) )
.await?; .await?;
buf.write_all(filename).await?; buf.write_all(filename).await?;
write_struct( if need_zip64 {
&mut buf, write_struct(
Zip64FieldWithOffset { &mut buf,
field_type: 1, Zip64FieldWithOffset {
field_size: 3 * 8, field_type: 1,
uncompressed_size: self.uncompressed_size, field_size: 3 * 8 + 4,
compressed_size: self.compressed_size, uncompressed_size: self.uncompressed_size,
offset: self.offset, compressed_size: self.compressed_size,
}, offset: self.offset,
) start_disk: 0,
.await?; },
)
.await?;
}
Ok(size) Ok(size)
} }
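Restating the new threshold check as a standalone helper may make the diff easier to skim; needs_zip64() is an illustrative name, not part of this changeset:

// A central-directory entry only needs the Zip64 extra field (and the
// 0xFFFFFFFF placeholders) when a value no longer fits into 32 bits.
fn needs_zip64(compressed_size: u64, uncompressed_size: u64, offset: u64) -> bool {
    compressed_size >= u32::MAX as u64
        || uncompressed_size >= u32::MAX as u64
        || offset >= u32::MAX as u64
}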

View File

@ -3,6 +3,10 @@ const proxmoxOnlineHelpInfo = {
"link": "/docs/index.html", "link": "/docs/index.html",
"title": "Proxmox Backup Server Documentation Index" "title": "Proxmox Backup Server Documentation Index"
}, },
"client-repository": {
"link": "/docs/backup-client.html#client-repository",
"title": "Repository Locations"
},
"client-creating-backups": { "client-creating-backups": {
"link": "/docs/backup-client.html#client-creating-backups", "link": "/docs/backup-client.html#client-creating-backups",
"title": "Creating Backups" "title": "Creating Backups"
@ -47,10 +51,18 @@ const proxmoxOnlineHelpInfo = {
"link": "/docs/package-repositories.html#sysadmin-package-repositories", "link": "/docs/package-repositories.html#sysadmin-package-repositories",
"title": "Debian Package Repositories" "title": "Debian Package Repositories"
}, },
"sysadmin-package-repos-enterprise": {
"link": "/docs/package-repositories.html#sysadmin-package-repos-enterprise",
"title": "`Proxmox Backup`_ Enterprise Repository"
},
"get-help": { "get-help": {
"link": "/docs/introduction.html#get-help", "link": "/docs/introduction.html#get-help",
"title": "Getting Help" "title": "Getting Help"
}, },
"get-help-enterprise-support": {
"link": "/docs/introduction.html#get-help-enterprise-support",
"title": "Enterprise Support"
},
"chapter-zfs": { "chapter-zfs": {
"link": "/docs/sysadmin.html#chapter-zfs", "link": "/docs/sysadmin.html#chapter-zfs",
"title": "ZFS on Linux" "title": "ZFS on Linux"

View File

@ -369,30 +369,30 @@ Ext.define('PBS.Utils', {
// do whatever you want here // do whatever you want here
Proxmox.Utils.override_task_descriptions({ Proxmox.Utils.override_task_descriptions({
backup: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Backup')), backup: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Backup')),
"tape-backup": (type, id) => PBS.Utils.render_tape_backup_id(id, gettext('Tape Backup')), 'barcode-label-media': [gettext('Drive'), gettext('Barcode-Label Media')],
"tape-backup-job": (type, id) => PBS.Utils.render_tape_backup_id(id, gettext('Tape Backup Job')), 'catalog-media': [gettext('Drive'), gettext('Catalog Media')],
"tape-restore": ['Datastore', gettext('Tape Restore')],
"barcode-label-media": [gettext('Drive'), gettext('Barcode label media')],
dircreate: [gettext('Directory Storage'), gettext('Create')], dircreate: [gettext('Directory Storage'), gettext('Create')],
dirremove: [gettext('Directory'), gettext('Remove')], dirremove: [gettext('Directory'), gettext('Remove')],
"load-media": (type, id) => PBS.Utils.render_drive_load_media_id(id, gettext('Load media')), 'eject-media': [gettext('Drive'), gettext('Eject Media')],
"unload-media": [gettext('Drive'), gettext('Unload media')], 'erase-media': [gettext('Drive'), gettext('Erase Media')],
"eject-media": [gettext('Drive'), gettext('Eject media')], garbage_collection: ['Datastore', gettext('Garbage Collect')],
"erase-media": [gettext('Drive'), gettext('Erase media')], 'inventory-update': [gettext('Drive'), gettext('Inventory Update')],
garbage_collection: ['Datastore', gettext('Garbage collect')], 'label-media': [gettext('Drive'), gettext('Label Media')],
"inventory-update": [gettext('Drive'), gettext('Inventory update')], 'load-media': (type, id) => PBS.Utils.render_drive_load_media_id(id, gettext('Load Media')),
"label-media": [gettext('Drive'), gettext('Label media')],
"catalog-media": [gettext('Drive'), gettext('Catalog media')],
logrotate: [null, gettext('Log Rotation')], logrotate: [null, gettext('Log Rotation')],
prune: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Prune')), prune: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Prune')),
reader: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Read objects')), reader: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Read Objects')),
"rewind-media": [gettext('Drive'), gettext('Rewind media')], 'rewind-media': [gettext('Drive'), gettext('Rewind Media')],
sync: ['Datastore', gettext('Remote Sync')], sync: ['Datastore', gettext('Remote Sync')],
syncjob: [gettext('Sync Job'), gettext('Remote Sync')], syncjob: [gettext('Sync Job'), gettext('Remote Sync')],
'tape-backup': (type, id) => PBS.Utils.render_tape_backup_id(id, gettext('Tape Backup')),
'tape-backup-job': (type, id) => PBS.Utils.render_tape_backup_id(id, gettext('Tape Backup Job')),
'tape-restore': ['Datastore', gettext('Tape Restore')],
'unload-media': [gettext('Drive'), gettext('Unload Media')],
verificationjob: [gettext('Verify Job'), gettext('Scheduled Verification')],
verify: ['Datastore', gettext('Verification')], verify: ['Datastore', gettext('Verification')],
verify_group: ['Group', gettext('Verification')], verify_group: ['Group', gettext('Verification')],
verify_snapshot: ['Snapshot', gettext('Verification')], verify_snapshot: ['Snapshot', gettext('Verification')],
verificationjob: [gettext('Verify Job'), gettext('Scheduled Verification')],
zfscreate: [gettext('ZFS Storage'), gettext('Create')], zfscreate: [gettext('ZFS Storage'), gettext('Create')],
}); });
}, },

View File

@ -273,3 +273,7 @@ span.snapshot-comment-column {
height: 20px; height: 20px;
background-image:url(../images/icon-tape-drive.svg); background-image:url(../images/icon-tape-drive.svg);
} }
.info-pointer div.right-aligned {
cursor: pointer;
}

View File

@ -24,11 +24,18 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
return; return;
} }
let mediaset = selection[0].data.text; let node = selection[0];
let uuid = selection[0].data['media-set-uuid']; let mediaset = node.data.text;
let uuid = node.data['media-set-uuid'];
let datastores = node.data.datastores;
while (!datastores && node.get('depth') > 2) {
node = node.parentNode;
datastores = node.data.datastores;
}
Ext.create('PBS.TapeManagement.TapeRestoreWindow', { Ext.create('PBS.TapeManagement.TapeRestoreWindow', {
mediaset, mediaset,
uuid, uuid,
datastores,
listeners: { listeners: {
destroy: function() { destroy: function() {
me.reload(); me.reload();
@ -127,9 +134,16 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
}, },
}); });
list.result.data.sort((a, b) => a.snapshot.localeCompare(b.snapshot)); list.result.data.sort(function(a, b) {
let storeRes = a.store.localeCompare(b.store);
if (storeRes === 0) {
return a.snapshot.localeCompare(b.snapshot);
} else {
return storeRes;
}
});
let tapes = {}; let stores = {};
for (let entry of list.result.data) { for (let entry of list.result.data) {
entry.text = entry.snapshot; entry.text = entry.snapshot;
@ -140,9 +154,19 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
entry.iconCls = `fa ${iconCls}`; entry.iconCls = `fa ${iconCls}`;
} }
let store = entry.store;
let tape = entry['label-text']; let tape = entry['label-text'];
if (tapes[tape] === undefined) { if (stores[store] === undefined) {
tapes[tape] = { stores[store] = {
text: store,
'media-set-uuid': entry['media-set-uuid'],
iconCls: 'fa fa-database',
tapes: {},
};
}
if (stores[store].tapes[tape] === undefined) {
stores[store].tapes[tape] = {
text: tape, text: tape,
'media-set-uuid': entry['media-set-uuid'], 'media-set-uuid': entry['media-set-uuid'],
'seq-nr': entry['seq-nr'], 'seq-nr': entry['seq-nr'],
@ -153,7 +177,7 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
} }
let [type, group, _id] = PBS.Utils.parse_snapshot_id(entry.snapshot); let [type, group, _id] = PBS.Utils.parse_snapshot_id(entry.snapshot);
let children = tapes[tape].children; let children = stores[store].tapes[tape].children;
let text = `${type}/${group}`; let text = `${type}/${group}`;
if (children.length < 1 || children[children.length - 1].text !== text) { if (children.length < 1 || children[children.length - 1].text !== text) {
children.push({ children.push({
@ -167,8 +191,14 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
children[children.length - 1].children.push(entry); children[children.length - 1].children.push(entry);
} }
for (const tape of Object.values(tapes)) { let storeList = Object.values(stores);
node.appendChild(tape); let storeNameList = Object.keys(stores);
let expand = storeList.length === 1;
for (const store of storeList) {
store.children = Object.values(store.tapes);
store.expanded = expand;
delete store.tapes;
node.appendChild(store);
} }
if (list.result.data.length === 0) { if (list.result.data.length === 0) {
@ -176,6 +206,7 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
} }
node.set('loaded', true); node.set('loaded', true);
node.set('datastores', storeNameList);
Proxmox.Utils.setErrorMask(view, false); Proxmox.Utils.setErrorMask(view, false);
node.expand(); node.expand();
} catch (error) { } catch (error) {

View File

@ -11,6 +11,29 @@ Ext.define('pbs-slot-model', {
idProperty: 'entry-id', idProperty: 'entry-id',
}); });
Ext.define('PBS.TapeManagement.FreeSlotSelector', {
extend: 'Proxmox.form.ComboGrid',
alias: 'widget.pbsFreeSlotSelector',
valueField: 'id',
displayField: 'id',
listConfig: {
columns: [
{
dataIndex: 'id',
text: gettext('ID'),
flex: 1,
},
{
dataIndex: 'type',
text: gettext('Type'),
flex: 1,
},
],
},
});
Ext.define('PBS.TapeManagement.ChangerStatus', { Ext.define('PBS.TapeManagement.ChangerStatus', {
extend: 'Ext.panel.Panel', extend: 'Ext.panel.Panel',
alias: 'widget.pbsChangerStatus', alias: 'widget.pbsChangerStatus',
@ -40,9 +63,12 @@ Ext.define('PBS.TapeManagement.ChangerStatus', {
fieldLabel: gettext('From Slot'), fieldLabel: gettext('From Slot'),
}, },
{ {
xtype: 'proxmoxintegerfield', xtype: 'pbsFreeSlotSelector',
name: 'to', name: 'to',
fieldLabel: gettext('To Slot'), fieldLabel: gettext('To Slot'),
store: {
data: me.free_slots,
},
}, },
], ],
listeners: { listeners: {
@ -73,9 +99,12 @@ Ext.define('PBS.TapeManagement.ChangerStatus', {
fieldLabel: gettext('From Slot'), fieldLabel: gettext('From Slot'),
}, },
{ {
xtype: 'proxmoxintegerfield', xtype: 'pbsFreeSlotSelector',
name: 'to', name: 'to',
fieldLabel: gettext('To Slot'), fieldLabel: gettext('To Slot'),
store: {
data: me.free_slots.concat(me.free_ie_slots),
},
}, },
], ],
listeners: { listeners: {
@ -340,6 +369,14 @@ Ext.define('PBS.TapeManagement.ChangerStatus', {
me.reload_full(false); me.reload_full(false);
}, },
free_slots: [],
updateFreeSlots: function(free_slots, free_ie_slots) {
let me = this;
me.free_slots = free_slots;
me.free_ie_slots = free_ie_slots;
},
reload_full: async function(use_cache) { reload_full: async function(use_cache) {
let me = this; let me = this;
let view = me.getView(); let view = me.getView();
@ -399,6 +436,9 @@ Ext.define('PBS.TapeManagement.ChangerStatus', {
drive_entries[entry['changer-drivenum'] || 0] = entry; drive_entries[entry['changer-drivenum'] || 0] = entry;
} }
let free_slots = [];
let free_ie_slots = [];
for (let entry of status.result.data) { for (let entry of status.result.data) {
let type = entry['entry-kind']; let type = entry['entry-kind'];
@ -414,6 +454,19 @@ Ext.define('PBS.TapeManagement.ChangerStatus', {
entry['is-labeled'] = false; entry['is-labeled'] = false;
} }
if (!entry['label-text'] && type !== 'drive') {
if (type === 'slot') {
free_slots.push({
id: entry['entry-id'],
type,
});
} else {
free_ie_slots.push({
id: entry['entry-id'],
type,
});
}
}
data[type].push(entry); data[type].push(entry);
} }
@ -433,6 +486,8 @@ Ext.define('PBS.TapeManagement.ChangerStatus', {
// manually fire selectionchange to update button status // manually fire selectionchange to update button status
me.lookup('drives').getSelectionModel().fireEvent('selectionchange', me); me.lookup('drives').getSelectionModel().fireEvent('selectionchange', me);
me.updateFreeSlots(free_slots, free_ie_slots);
if (!use_cache) { if (!use_cache) {
Proxmox.Utils.setErrorMask(view); Proxmox.Utils.setErrorMask(view);
} }

View File

@ -84,6 +84,24 @@ Ext.define('PBS.TapeManagement.DriveStatus', {
}).show(); }).show();
}, },
erase: function() {
let me = this;
let view = me.getView();
let driveid = view.drive;
PBS.Utils.driveCommand(driveid, 'erase-media', {
waitMsgTarget: view,
method: 'POST',
success: function(response) {
Ext.create('Proxmox.window.TaskProgress', {
upid: response.result.data,
taskDone: function() {
me.reload();
},
}).show();
},
});
},
ejectMedia: function() { ejectMedia: function() {
let me = this; let me = this;
let view = me.getView(); let view = me.getView();
@ -193,6 +211,18 @@ Ext.define('PBS.TapeManagement.DriveStatus', {
disabled: '{!online}', disabled: '{!online}',
}, },
}, },
{
text: gettext('Erase'),
xtype: 'proxmoxButton',
handler: 'erase',
iconCls: 'fa fa-trash-o',
dangerous: true,
confirmMsg: gettext('Are you sure you want to erase the inserted tape?'),
disabled: true,
bind: {
disabled: '{!online}',
},
},
{ {
text: gettext('Catalog'), text: gettext('Catalog'),
xtype: 'proxmoxButton', xtype: 'proxmoxButton',
@ -275,7 +305,7 @@ Ext.define('PBS.TapeManagement.DriveStatusGrid', {
rows: { rows: {
'blocksize': { 'blocksize': {
required: true, required: true,
header: gettext('Blocksize'), header: gettext('Block Size'),
renderer: function(value) { renderer: function(value) {
if (!value) { if (!value) {
return gettext('Dynamic'); return gettext('Dynamic');
@ -400,6 +430,7 @@ Ext.define('PBS.TapeManagement.DriveInfoPanel', {
}, },
{ {
xtype: 'pmxInfoWidget', xtype: 'pmxInfoWidget',
reference: 'statewidget',
title: gettext('State'), title: gettext('State'),
bind: { bind: {
data: { data: {
@ -409,6 +440,23 @@ Ext.define('PBS.TapeManagement.DriveInfoPanel', {
}, },
], ],
clickState: function(e, t, eOpts) {
let me = this;
let vm = me.getViewModel();
let drive = vm.get('drive');
if (t.classList.contains('right-aligned')) {
let upid = drive.state;
if (!upid || !upid.startsWith("UPID")) {
return;
}
Ext.create('Proxmox.window.TaskViewer', {
autoShow: true,
upid,
});
}
},
updateData: function(store) { updateData: function(store) {
let me = this; let me = this;
if (!store) { if (!store) {
@ -422,6 +470,37 @@ Ext.define('PBS.TapeManagement.DriveInfoPanel', {
let vm = me.getViewModel(); let vm = me.getViewModel();
vm.set('drive', record.data); vm.set('drive', record.data);
vm.notify(); vm.notify();
me.updatePointer();
},
updatePointer: function() {
let me = this;
let stateWidget = me.down('pmxInfoWidget[reference=statewidget]');
let stateEl = stateWidget.getEl();
if (!stateEl) {
setTimeout(function() {
me.updatePointer();
}, 100);
return;
}
let vm = me.getViewModel();
let drive = vm.get('drive');
if (drive.state) {
stateEl.addCls('info-pointer');
} else {
stateEl.removeCls('info-pointer');
}
},
listeners: {
afterrender: function() {
let me = this;
let stateWidget = me.down('pmxInfoWidget[reference=statewidget]');
let stateEl = stateWidget.getEl();
stateEl.on('click', me.clickState, me);
},
}, },
initComponent: function() { initComponent: function() {
@ -430,12 +509,12 @@ Ext.define('PBS.TapeManagement.DriveInfoPanel', {
throw "no drive given"; throw "no drive given";
} }
me.callParent();
let tapeStore = Ext.ComponentQuery.query('navigationtree')[0].tapestore; let tapeStore = Ext.ComponentQuery.query('navigationtree')[0].tapestore;
me.mon(tapeStore, 'load', me.updateData, me); me.mon(tapeStore, 'load', me.updateData, me);
if (tapeStore.isLoaded()) { if (tapeStore.isLoaded()) {
me.updateData(tapeStore); me.updateData(tapeStore);
} }
me.callParent();
}, },
}); });

View File

@ -113,11 +113,11 @@ Ext.define('PBS.TapeManagement.PoolPanel', {
flex: 1, flex: 1,
}, },
{ {
text: gettext('Allocation'), text: gettext('Allocation Policy'),
dataIndex: 'allocation', dataIndex: 'allocation',
}, },
{ {
text: gettext('Retention'), text: gettext('Retention Policy'),
dataIndex: 'retention', dataIndex: 'retention',
}, },
{ {

View File

@ -27,11 +27,11 @@ Ext.define('PBS.TapeManagement.PoolSelector', {
dataIndex: 'drive', dataIndex: 'drive',
}, },
{ {
text: gettext('Allocation'), text: gettext('Allocation Policy'),
dataIndex: 'allocation', dataIndex: 'allocation',
}, },
{ {
text: gettext('Retention'), text: gettext('Retention Policy'),
dataIndex: 'retention', dataIndex: 'retention',
}, },
{ {

View File

@ -3,6 +3,8 @@ Ext.define('PBS.TapeManagement.PoolEditWindow', {
alias: 'widget.pbsPoolEditWindow', alias: 'widget.pbsPoolEditWindow',
mixins: ['Proxmox.Mixin.CBind'], mixins: ['Proxmox.Mixin.CBind'],
onlineHelp: 'tape_media_pool_config',
isCreate: true, isCreate: true,
isAdd: true, isAdd: true,
subject: gettext('Media Pool'), subject: gettext('Media Pool'),
@ -33,7 +35,7 @@ Ext.define('PBS.TapeManagement.PoolEditWindow', {
}, },
}, },
{ {
fieldLabel: gettext('Allocation'), fieldLabel: gettext('Allocation Policy'),
xtype: 'pbsAllocationSelector', xtype: 'pbsAllocationSelector',
name: 'allocation', name: 'allocation',
skipEmptyText: true, skipEmptyText: true,
@ -44,7 +46,7 @@ Ext.define('PBS.TapeManagement.PoolEditWindow', {
}, },
}, },
{ {
fieldLabel: gettext('Retention'), fieldLabel: gettext('Retention Policy'),
xtype: 'pbsRetentionSelector', xtype: 'pbsRetentionSelector',
name: 'retention', name: 'retention',
skipEmptyText: true, skipEmptyText: true,

View File

@ -1,9 +1,9 @@
Ext.define('PBS.TapeManagement.TapeRestoreWindow', { Ext.define('PBS.TapeManagement.TapeRestoreWindow', {
extend: 'Proxmox.window.Edit', extend: 'Proxmox.window.Edit',
alias: 'pbsTapeRestoreWindow', alias: 'widget.pbsTapeRestoreWindow',
mixins: ['Proxmox.Mixin.CBind'], mixins: ['Proxmox.Mixin.CBind'],
width: 400, width: 800,
title: gettext('Restore Media Set'), title: gettext('Restore Media Set'),
url: '/api2/extjs/tape/restore', url: '/api2/extjs/tape/restore',
method: 'POST', method: 'POST',
@ -14,42 +14,272 @@ Ext.define('PBS.TapeManagement.TapeRestoreWindow', {
labelWidth: 120, labelWidth: 120,
}, },
referenceHolder: true,
items: [ items: [
{ {
xtype: 'displayfield', xtype: 'inputpanel',
fieldLabel: gettext('Media Set'),
cbind: { onGetValues: function(values) {
value: '{mediaset}', let me = this;
let datastores = [];
if (values.store && values.store !== "") {
datastores.push(values.store);
delete values.store;
}
if (values.mapping) {
datastores.push(values.mapping);
delete values.mapping;
}
values.store = datastores.join(',');
return values;
}, },
column1: [
{
xtype: 'displayfield',
fieldLabel: gettext('Media Set'),
cbind: {
value: '{mediaset}',
},
},
{
xtype: 'displayfield',
fieldLabel: gettext('Media Set UUID'),
name: 'media-set',
submitValue: true,
cbind: {
value: '{uuid}',
},
},
{
xtype: 'pbsDriveSelector',
fieldLabel: gettext('Drive'),
name: 'drive',
},
],
column2: [
{
xtype: 'pbsUserSelector',
name: 'notify-user',
fieldLabel: gettext('Notify User'),
emptyText: gettext('Current User'),
value: null,
allowBlank: true,
skipEmptyText: true,
renderer: Ext.String.htmlEncode,
},
{
xtype: 'pbsUserSelector',
name: 'owner',
fieldLabel: gettext('Owner'),
emptyText: gettext('Current User'),
value: null,
allowBlank: true,
skipEmptyText: true,
renderer: Ext.String.htmlEncode,
},
{
xtype: 'pbsDataStoreSelector',
fieldLabel: gettext('Datastore'),
reference: 'defaultDatastore',
name: 'store',
listeners: {
change: function(field, value) {
let me = this;
let grid = me.up('window').lookup('mappingGrid');
grid.setNeedStores(!value);
},
},
},
],
columnB: [
{
fieldLabel: gettext('Datastore Mapping'),
labelWidth: 200,
hidden: true,
reference: 'mappingLabel',
xtype: 'displayfield',
},
{
xtype: 'pbsDataStoreMappingField',
reference: 'mappingGrid',
name: 'mapping',
defaultBindProperty: 'value',
hidden: true,
},
],
},
],
setDataStores: function(datastores) {
let me = this;
let label = me.lookup('mappingLabel');
let grid = me.lookup('mappingGrid');
let defaultField = me.lookup('defaultDatastore');
if (!datastores || datastores.length <= 1) {
label.setVisible(false);
grid.setVisible(false);
defaultField.setFieldLabel(gettext('Datastore'));
defaultField.setAllowBlank(false);
defaultField.setEmptyText("");
return;
}
label.setVisible(true);
defaultField.setFieldLabel(gettext('Default Datastore'));
defaultField.setAllowBlank(true);
defaultField.setEmptyText(Proxmox.Utils.NoneText);
grid.setDataStores(datastores);
grid.setVisible(true);
},
initComponent: function() {
let me = this;
me.callParent();
if (me.datastores) {
me.setDataStores(me.datastores);
} else {
// use timeout so that the window is rendered already
// for correct masking
setTimeout(function() {
Proxmox.Utils.API2Request({
waitMsgTarget: me,
url: `/tape/media/content?media-set=${me.uuid}`,
success: function(response, opt) {
let datastores = {};
for (const content of response.result.data) {
datastores[content.store] = true;
}
me.setDataStores(Object.keys(datastores));
},
failure: function() {
// ignore failing api call, maybe catalog is missing
me.setDataStores();
},
});
}, 10);
}
},
});
Ext.define('PBS.TapeManagement.DataStoreMappingGrid', {
extend: 'Ext.grid.Panel',
alias: 'widget.pbsDataStoreMappingField',
mixins: ['Ext.form.field.Field'],
getValue: function() {
let me = this;
let datastores = [];
me.getStore().each((rec) => {
let source = rec.data.source;
let target = rec.data.target;
if (target && target !== "") {
datastores.push(`${source}=${target}`);
}
});
return datastores.join(',');
},
// this determines if we need at least one valid mapping
needStores: false,
setNeedStores: function(needStores) {
let me = this;
me.needStores = needStores;
me.checkChange();
me.validate();
},
setValue: function(value) {
let me = this;
me.setDataStores(value);
return me;
},
getErrors: function(value) {
let me = this;
let error = false;
if (me.needStores) {
error = true;
me.getStore().each((rec) => {
if (rec.data.target) {
error = false;
}
});
}
if (error) {
me.addCls(['x-form-trigger-wrap-default', 'x-form-trigger-wrap-invalid']);
let errorMsg = gettext("Need at least one mapping");
me.getActionEl().dom.setAttribute('data-errorqtip', errorMsg);
return [errorMsg];
}
me.removeCls(['x-form-trigger-wrap-default', 'x-form-trigger-wrap-invalid']);
me.getActionEl().dom.setAttribute('data-errorqtip', "");
return [];
},
setDataStores: function(datastores) {
let me = this;
let store = me.getStore();
let data = [];
for (const datastore of datastores) {
data.push({
source: datastore,
target: '',
});
}
store.setData(data);
},
viewConfig: {
markDirty: false,
},
store: { data: [] },
columns: [
{
text: gettext('Source Datastore'),
dataIndex: 'source',
flex: 1,
}, },
{ {
xtype: 'displayfield', text: gettext('Target Datastore'),
fieldLabel: gettext('Media Set UUID'), xtype: 'widgetcolumn',
name: 'media-set', dataIndex: 'target',
submitValue: true, flex: 1,
cbind: { widget: {
value: '{uuid}', xtype: 'pbsDataStoreSelector',
allowBlank: true,
emptyText: Proxmox.Utils.NoneText,
listeners: {
change: function(selector, value) {
let me = this;
let rec = me.getWidgetRecord();
if (!rec) {
return;
}
rec.set('target', value);
me.up('grid').checkChange();
},
},
}, },
}, },
{
xtype: 'pbsDataStoreSelector',
fieldLabel: gettext('Datastore'),
name: 'store',
},
{
xtype: 'pbsDriveSelector',
fieldLabel: gettext('Drive'),
name: 'drive',
},
{
xtype: 'pbsUserSelector',
name: 'notify-user',
fieldLabel: gettext('Notify User'),
emptyText: gettext('Current User'),
value: null,
allowBlank: true,
skipEmptyText: true,
renderer: Ext.String.htmlEncode,
},
], ],
}); });

View File

@ -9,7 +9,7 @@ Ext.define('PBS.window.VerifyJobEdit', {
isAdd: true, isAdd: true,
subject: gettext('VerifyJob'), subject: gettext('Verification Job'),
fieldDefaults: { labelWidth: 120 }, fieldDefaults: { labelWidth: 120 },
defaultFocus: 'field[name="ignore-verified"]', defaultFocus: 'field[name="ignore-verified"]',