Compare commits

..

118 Commits

Author SHA1 Message Date
a417c8a93e bump version to 1.0.13-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-02 15:32:27 +02:00
79e58a903e pxar: handle missing GROUP_OBJ ACL entries
Previously, we did not store GROUP_OBJ ACL entries for
directories. This means that these were lost, which may
potentially elevate group permissions if they were masked
before via ACLs, so we also show a warning.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-02 11:10:20 +02:00
9f40e09d0a pxar: fix directory ACL entry creation
Don't override `group_obj` with `None` when handling
`ACL_TYPE_DEFAULT` entries for directories.

Reproducer: /var/log/journal ends up without a `MASK` type
entry making it invalid as it has `USER` and `GROUP`
entries.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-02 10:22:04 +02:00
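
A minimal Rust sketch of the pattern behind the fix above; the type and function here are hypothetical illustrations, not the actual pxar code:

    // Merge an optional GROUP_OBJ entry found while handling ACL_TYPE_DEFAULT
    // without clobbering one that was already stored for the directory.
    #[derive(Debug, Clone, Copy)]
    struct AclEntry { permissions: u64 }

    fn merge_group_obj(stored: Option<AclEntry>, from_default: Option<AclEntry>) -> Option<AclEntry> {
        // the bug was equivalent to `stored = from_default`, which turns
        // Some(entry) into None whenever the default pass has no GROUP_OBJ entry
        from_default.or(stored)
    }

    fn main() {
        let stored = Some(AclEntry { permissions: 0o5 });
        assert_eq!(merge_group_obj(stored, None).map(|e| e.permissions), Some(0o5));
    }
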
553e57f914 server/rest: drop now unused imports
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-01 11:53:13 +02:00
2200a38671 code cleanup: drop extra newlines at EOF
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-01 11:27:07 +02:00
ba39ab20fb server/rest: extract auth to separate module
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:26:28 +02:00
ff8945fd2f proxmox_client_tools: move common key related functions to key_source.rs
Add a new module containing key-related functions and schemata from all
over; the moved code is left unchanged as far as possible.

Requires adapting some 'use' statements across proxmox-backup-client and
putting the XDG helpers quite cozily into proxmox_client_tools/mod.rs

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:09:28 +02:00
4876393562 vsock_client: support authorization header
Pass in an optional auth tag, which will be passed as an Authorization
header on every subsequent call.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:09:28 +02:00
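
A rough sketch of the idea, not the actual client code (names are assumptions): the client keeps an optional token and attaches it as an Authorization header to every request it builds.

    struct VsockClient {
        auth: Option<String>,
    }

    impl VsockClient {
        // headers to attach to each request; Authorization is only added if a token was set
        fn request_headers(&self) -> Vec<(String, String)> {
            let mut headers = Vec::new();
            if let Some(auth) = &self.auth {
                headers.push(("Authorization".to_string(), auth.clone()));
            }
            headers
        }
    }

    fn main() {
        let client = VsockClient { auth: Some("<token>".to_string()) };
        assert_eq!(client.request_headers().len(), 1);
    }
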
971bc6f94b vsock_client: remove some &mut restrictions and rustfmt
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:09:28 +02:00
cab92acb3c vsock_client: remove wrong comment
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:09:28 +02:00
a1d90719e4 bump pxar dep to 0.10.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-03-31 14:00:20 +02:00
eeff085d9d server/rest: fix type ambiguity
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-29 12:02:30 +02:00
d43c407a00 server/rest: rust format
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-29 08:17:26 +02:00
6bc87d3952 ui: verification job: fix subject of edit window
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 16:57:00 +02:00
04c1c68f31 ui: verify job: fix subject of edit window
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 16:45:45 +02:00
94b17c804a ui: task descriptions: sort alphabetically
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 16:45:23 +02:00
94352256b7 ui: task descriptions: fix casing
Enforce title-case. This mostly affects the new tape-related task
description.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:50:51 +02:00
b3bed7e41f docs: tape/pool: add backend/ui setting name for allocation policy
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:40:23 +02:00
a4672dd0b1 ui: tape/pool: set onlineHelp for edit/add window
To let users find the explanation of allocation and retention
policies in the docs more easily.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:29:02 +02:00
17bbcb57d7 ui: tape: retention/allocation are Policies, note so
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:28:36 +02:00
843146479a ui: gettext; s/blocksize/block size/
Blocksize is not a word in the English language

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:04:19 +02:00
cf1e117fc7 sgutils2: use enum for ScsiError
This avoids string allocation when we return SenseInfo.
2021-03-27 15:57:48 +01:00
03eac20b87 SgRaw: add do_in_command() 2021-03-27 15:38:08 +01:00
11f5d59396 tape: page-align BlockHeader so that we can use it with SG_IO 2021-03-27 15:36:35 +01:00
6f63c29306 Cargo.toml: fix: set version to 1.0.12 2021-03-26 14:14:12 +01:00
c0e365fd49 bump version to 1.0.12-1 2021-03-26 14:09:30 +01:00
93fb2e0d21 api2/types: add type_text to DATASTORE_MAP_FORMAT
This way we get a better rendering in the api-viewer.
before:
 [<string>, ... ]

after:
 [(<source>=)?<target>, ... ]

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 13:18:10 +01:00
c553407e98 tape: add --scan option for catalog restore 2021-03-25 13:08:34 +01:00
4830de408b tape: avoid writing catalogs for empty backup tasks 2021-03-25 12:50:40 +01:00
7f78528308 OnlineHelpInfo.js: new link for client-repository 2021-03-25 12:26:57 +01:00
2843ba9017 avoid compiler warning 2021-03-25 12:25:23 +01:00
e244b9d03d api2/types: expand DATASTORE_MAP_LIST_SCHEMA description
and give an example

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 12:18:14 +01:00
657c47db35 tape: ui: TapeRestore: make datastore mapping selectable
by adding a custom field (grid) where the user can select
a target datastore for each source datastore on tape

if we have not loaded the content of the media set yet,
we have to load it on window open to get the list of datastores
on the tape

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 12:17:46 +01:00
a32bb86df9 api subscription: drop old hack for api-macro issue
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-25 12:03:33 +01:00
654c56e05d docs: client benchmark: note that tls is only done if repo is set
and remove misleading note about no network involved in tls
speedtest, as normally there is!

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-25 10:33:45 +01:00
589c4dad9e tape: add fsf/bsf to TapeDriver trait 2021-03-25 10:10:16 +01:00
0320deb0a9 proxmox-tape: fix clean api call
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 08:14:13 +01:00
4c4e5c2b1e api2/tape/restore: enable restore mapping of datastores
by changing the 'store' parameter of the restore api call to a
list of mappings (or a single default datastore)

for example giving:
a=b,c=d,e

would restore
datastore 'a' from tape to local datastore 'b'
datastore 'c' from tape to local datastore 'd'
all other datastores to 'e'

this way, a single datastore can also be restored, by giving
only a single mapping, e.g. 'a=b'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 07:46:12 +01:00
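
A rough sketch, in Rust, of how such a mapping string can be interpreted (an illustration, not the actual server code): explicit `source=target` pairs plus an optional bare entry acting as the default target.

    use std::collections::HashMap;

    fn parse_mapping(value: &str) -> (HashMap<String, String>, Option<String>) {
        let mut map = HashMap::new();
        let mut default = None;
        for part in value.split(',') {
            match part.split_once('=') {
                Some((source, target)) => {
                    map.insert(source.to_string(), target.to_string());
                }
                None => default = Some(part.to_string()),
            }
        }
        (map, default)
    }

    fn main() {
        let (map, default) = parse_mapping("a=b,c=d,e");
        assert_eq!(map.get("a").map(String::as_str), Some("b"));
        assert_eq!(map.get("c").map(String::as_str), Some("d"));
        assert_eq!(default.as_deref(), Some("e"));
    }
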
924373d2df client/backup_writer: clarify backup and upload size
The text 'had to upload [KMG]iB' implies that this is the size we
actually had to send to the server, while in reality it is the
raw data size before compression.

Count the size of the compressed chunks and print it separately.
Split the average speed into its own line so they do not get too long.

Rename 'uploaded' to 'size_dirty' and 'vsize_h' to 'size'.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-24 18:24:56 +01:00
3b60b5098f client/backup_writer: introduce UploadStats struct
instead of using a big anonymous tuple. This way the returned values
are properly named.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-24 18:24:56 +01:00
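
A hypothetical shape for such a struct (the exact fields in the real code may differ); named fields replace the former anonymous tuple and make the raw/dirty/compressed sizes from the commit above explicit:

    struct UploadStats {
        chunk_count: usize,
        duplicates: usize,
        size: usize,            // raw data size before compression
        size_dirty: usize,      // data that actually had to be uploaded
        size_compressed: usize, // compressed bytes sent over the wire
        duration: std::time::Duration,
    }

    fn main() {
        let stats = UploadStats {
            chunk_count: 0,
            duplicates: 0,
            size: 0,
            size_dirty: 0,
            size_compressed: 0,
            duration: std::time::Duration::from_secs(0),
        };
        println!("uploaded {} compressed bytes", stats.size_compressed);
    }
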
4abb3edd9f docs: fix horizontal scrolling issues on desktop and mobile
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-24 18:24:39 +01:00
932e69a837 docs: improve navigation coloring on mobile
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-24 18:24:20 +01:00
ef6d49670b client: backup writer: run rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-24 17:12:05 +01:00
52ea00e9df docs: only apply toctree color override to sidebar one
else the TOC on the index page has some white text on a white
background

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-24 17:09:30 +01:00
870681013a tape: fix catalog restore
We need to rewind the tape if fast_catalog_restore() fails ...
2021-03-24 10:09:23 +01:00
c046739461 tape: fix MediaPool regression tests 2021-03-24 09:44:30 +01:00
8b1289f3e4 tape: skip catalog archives in restore 2021-03-24 09:33:39 +01:00
f1d76ecf6c fix #3359: fix blocking writes in async code during pxar create
in commit `asyncify pxar create_archive`, we changed from a
separate thread for creating a pxar to using async code, but the
StdChannelWriter used for both pxar and catalog can block, which
may block the tokio runtime for single (and probably dual) core
environments

this patch adds a wrapper struct for any writer that implements
'std::io::Write' and wraps the write calls with 'block_in_place'
so that if called in a tokio runtime, it knows that this code
potentially blocks

Fixes: 6afb60abf5 ("asyncify pxar create_archive")

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-24 09:00:07 +01:00
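
A minimal sketch of that wrapper idea (the struct name is an assumption; the real code may differ), assuming a multi-threaded tokio runtime where `block_in_place` is allowed:

    use std::io::Write;

    struct BlockingWriterWrapper<W: Write> {
        inner: W,
    }

    impl<W: Write> Write for BlockingWriterWrapper<W> {
        fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
            // tell tokio this call may block, so other tasks can be moved away
            tokio::task::block_in_place(|| self.inner.write(buf))
        }

        fn flush(&mut self) -> std::io::Result<()> {
            tokio::task::block_in_place(|| self.inner.flush())
        }
    }
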
074503f288 tape: implement fast catalog restore 2021-03-24 08:40:34 +01:00
c6f55139f8 tape: impl. MediaCatalog::parse_catalog_header
This is just an optimization, avoiding reading the catalog into memory.

We also expose create_temporary_database_file() now (will be
used for catalog restore).
2021-03-24 06:32:59 +01:00
20cc25d749 tape: add TapeDriver::move_to_last_file 2021-03-24 06:32:59 +01:00
30316192b3 tape: improve locking (lock media-sets)
- new helper: lock_media_set()

- MediaPool: lock media set

- Expose Inventory::new() to avoid double loading

- do not lock pool on restore (only lock media-set)

- change pool lock name to ".pool-{name}"
2021-03-24 06:32:59 +01:00
e93263be1e tape: implement MediaCatalog::destroy_unrelated_catalog() helper 2021-03-22 12:03:11 +01:00
2ab2ca9c24 tape: add MediaPool::lock_unassigned_media_pool() helper 2021-03-19 10:13:38 +01:00
54fcb7f5d8 api2/tape/backup: wait indefinitely for lock in scheduled backup jobs
so that a user can schedule multiple backup jobs onto a single
media pool without having to consider timing them apart

this makes sense since we can back up multiple datastores onto
the same media-set but can only specify one datastore per backup job

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 09:04:32 +01:00
4abd4dbe38 api2/tape/backup: include a summary on notification e-mails
for now only contains the list of included snapshots (if any),
as well as the backup duration

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 09:03:52 +01:00
eac1beef3c tape: cleanup PoolWriter - factor out common code 2021-03-19 08:56:14 +01:00
166a48f903 tape: cleanup - split PoolWriter into several files 2021-03-19 08:19:13 +01:00
82775c4764 tape: make sure we only commit/write valid catalogs 2021-03-19 07:50:32 +01:00
88bc9635aa tape: store media_uuid in PoolWriterState
This is mainly a cleanup, avoiding having to access the catalog_set to get the uuid.
2021-03-19 07:33:59 +01:00
1037f2bc2d tape: cleanup - rename CatalogBuilder to CatalogSet 2021-03-19 07:22:54 +01:00
f24cbee77d server/email_notifications: do not double html escape
the default escape handler is handlebars::html_escape, but these are
plain text emails and we manually escape them for the html part, so
set the default escape handler to 'no_escape'

this avoids double html escape for the characters: '&"<>' in emails

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 07:02:49 +01:00
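
The handlebars crate ships a no-op escape function for exactly this purpose; a minimal sketch of switching to it (the real notification setup also registers several templates):

    use handlebars::Handlebars;

    fn main() {
        let mut hb = Handlebars::new();
        // the default escape fn is handlebars::html_escape; these templates
        // render plain-text mails and the HTML part is escaped manually,
        // so disable escaping here to avoid double-escaping '&"<>'
        hb.register_escape_fn(handlebars::no_escape);
    }
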
25b4d52dce server/email_notifications: do not panic on template registration
instead, print an error and continue; the rendering functions will error
out if one of the templates could not be registered

if we `.unwrap()` here, it can lead to problems if the templates are
not correct, i.e. we could panic while holding a lock, if something holds
a mutex while this is called for the first time

add a test to catch registration issues during package build

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 07:02:17 +01:00
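
A sketch of the pattern described above, with a hypothetical helper name: log and continue instead of unwrapping, so a broken template cannot panic (possibly while a lock is held); rendering later fails cleanly for any template that was never registered.

    use handlebars::Handlebars;

    fn register_template(hb: &mut Handlebars, name: &str, source: &str) {
        if let Err(err) = hb.register_template_string(name, source) {
            eprintln!("error registering template '{}': {}", name, err);
        }
    }

    fn main() {
        let mut hb = Handlebars::new();
        // an intentionally broken template: registration fails, but we do not panic
        register_template(&mut hb, "broken", "{{#if}}");
    }
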
2729d134bd tools/systemd/time: implement some Traits for TimeSpan
namely
* From<Duration> (to convert easily from duration to timespan)
* Display (for better formatting)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 07:00:55 +01:00
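
An illustrative sketch of what those two impls enable (the TimeSpan fields here are assumptions; the real type is more detailed):

    use std::fmt;
    use std::time::Duration;

    struct TimeSpan { seconds: u64 }

    impl From<Duration> for TimeSpan {
        fn from(d: Duration) -> Self {
            TimeSpan { seconds: d.as_secs() }
        }
    }

    impl fmt::Display for TimeSpan {
        fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
            write!(f, "{}s", self.seconds)
        }
    }

    fn main() {
        let span: TimeSpan = Duration::from_secs(90).into();
        println!("backup took {}", span); // "backup took 90s"
    }
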
32b75d36a8 tape: backup media catalogs 2021-03-19 06:58:46 +01:00
c4430a937d bump version to 1.0.11-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-18 12:36:28 +01:00
237314ad0d tape: improve catalog consistency checks
Try to check if we read the correct catalog by verifying uuid, media_set_uuid
and seq_nr.

Note: this changes the catalog format again.
2021-03-18 08:43:55 +01:00
caf76ec592 tools/subscription: ignore ENOENT for apt auth config removal
deleting a nonexistent file is hardly an error worth mentioning

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-17 20:12:58 +01:00
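
The underlying pattern, as a small self-contained sketch (the helper name and path are made up): treat a missing file as success when removing it.

    use std::io;
    use std::path::Path;

    fn remove_if_exists(path: &Path) -> io::Result<()> {
        match std::fs::remove_file(path) {
            // a nonexistent file is already "removed", so this is not an error
            Err(err) if err.kind() == io::ErrorKind::NotFound => Ok(()),
            other => other,
        }
    }

    fn main() -> io::Result<()> {
        remove_if_exists(Path::new("/tmp/does-not-exist.conf"))
    }
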
0af8c26b74 ui: tape/BackupOverview: insert a datastore level
since we can now back up multiple datastores in the same media-set,
we show the datastores as the first level below that

the final tree structure looks like this:

tapepool A
- media set 1
 - datastore I
  - tape x
   - ct/100
    - ct/100/2020-01-01T00:00:00Z

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-17 13:37:49 +01:00
825dfe7e0d ui: tape/DriveStatus: fix updating pointer+click handler on info widget
we can only do this after it is rendered; the element does not exist
before that

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-17 13:37:39 +01:00
30a0809553 ui: tape/DriveStatus: add erase button
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-17 13:37:17 +01:00
6ee3035523 tape: define magic number for catalog archives 2021-03-17 13:35:23 +01:00
b627ebbf40 tape: improve catalog parser 2021-03-17 11:29:23 +01:00
ef4bdf6b8b tape: proxmox-tape media content - add 'store' attribute 2021-03-17 11:17:54 +01:00
54722acada tape: store datastore name in tape archives and media catalog
So that we can store multiple datastores on a single media set.
Deduplication is now per datastore (not per media set).
2021-03-17 11:08:51 +01:00
0e2bf3aa1d SnapshotReader: add self.datastore_name() helper 2021-03-17 10:16:34 +01:00
365126efa9 tape: PoolWriter - remove unnecessary move_to_eom 2021-03-17 10:16:34 +01:00
03d4c9217d update OnlineHelpInfo.js 2021-03-17 10:16:34 +01:00
8498290848 docs: technically not everything is in rust/js
I mean the whole distro uses quite some C and the like as base, so
avoid being overly strict here.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-16 20:37:16 +01:00
654db565cb docs: features: mention that there are no client/data limits
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-16 20:37:16 +01:00
51f83548ed docs: drop uncommon spelled out GCM
It does not help users if that is spelled out, and it's not a common
use of GCM; especially in the AES 256 context, it's clear what is
meant. The link to Wikipedia stays, so interested people can still
read up on it and others get a better overview due to the text being
more concise.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-16 20:37:16 +01:00
5847a6bdb5 docs: fix linking, avoid over long text in main feature list
The main feature list should provide a short overview of the, well,
main features. While enterprise support *is* a main and important
feature, it's not the place here to describe things like personal
volume/ngo/... offers and the like.

Move parts of it to Getting Help, which also lacked a mention of
enterprise support and is a good place to describe the customer
portal.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-16 20:37:16 +01:00
313e5e2047 docs: mention support subscription plans
and change enterprise repository section to present tense.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-03-16 19:24:23 +01:00
7914e62b10 tools/zip: only add zip64 field when necessary
if neither offset nor size exceeds 32bit, do not add the
zip64 extension field

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 09:13:39 +01:00
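
A sketch of the check implied by the commit above (not the actual tools/zip code): the zip64 extra field is only added when a 32-bit field cannot hold the value.

    // 0xFFFF_FFFF is reserved by the ZIP format as the "look in zip64" marker,
    // so anything at or above it needs the zip64 extension field.
    fn needs_zip64(uncompressed_size: u64, compressed_size: u64, offset: u64) -> bool {
        let limit = u32::MAX as u64;
        uncompressed_size >= limit || compressed_size >= limit || offset >= limit
    }

    fn main() {
        assert!(!needs_zip64(1024, 512, 0));
        assert!(needs_zip64(5 * 1024 * 1024 * 1024, 512, 0)); // 5 GiB payload
    }
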
84d3284609 ui: tape/DriveStatus: open task window on click on state
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 09:00:07 +01:00
70fab5b46e ui: tape: convert slot selection on transfer to combogrid
this is much handier than a number field, and the user can instantly
see which one is an import/export slot

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 08:57:48 +01:00
e36135031d ui: tape/Restore: let the user choose an owner
so that the tape backup can be restored as any user, given that the
currently logged-in user has the correct permission.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 08:55:42 +01:00
5a5ee0326e proxmox-tape: add missing notify-user to 'proxmox-tape restore'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 08:54:38 +01:00
776dabfb2e tape: use MB/s for backup speed (to match drive speed specification) 2021-03-16 08:51:49 +01:00
5c4755ad08 tape: speedup backup by doing read/write in parallel 2021-03-16 08:51:49 +01:00
7c1666289d tools/zip: add missing start_disk field for zip64 extension
it is not optional, even though we give the size explicitly

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-15 12:36:40 +01:00
cded320e92 backup info: run rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-14 19:18:35 +01:00
b31cdec225 update to pxar 0.10
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-03-12 10:48:09 +01:00
591b120d35 fix feature flag logic in pxar create
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-03-12 10:17:51 +01:00
e8913fea12 tape: write_chunk_archive - do not consume partially written chunk at EOT
So that it is re-written to the next tape.
2021-03-12 07:14:50 +01:00
355a41a763 bump version to 1.0.10-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-11 13:40:22 +01:00
5bd4825432 d/postinst: fixup tape permissions if existing with wrong permissions
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-11 13:40:22 +01:00
8f7e5b028a d/postinst: only check for broken task index on upgrade
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-11 13:40:22 +01:00
2a29d9a1ee d/postinst: tell user that we restart when updating from older version 2021-03-11 13:40:00 +01:00
e056966bc7 d/postinst: restart when updating from older version
Else one has quite a terrible UX when installing from the 1.0 ISO and
then upgrading to the latest release.

See commit 0ec79339f7 for the fix and some other details

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-11 09:56:12 +01:00
ef0ea4ba05 server/worker_task: improve endtime for unknown tasks
instead of always using the starttime, use the last timestamp from the log
this way, one can see when the task was aborted without having to read
the log

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-11 09:56:12 +01:00
2892624783 tape/send_load_media_email: move to server/email_notifications
and reuse 'send_job_status_mail' there so that we get consistently
formatted mails from pbs (e.g. html part and author)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-11 09:56:12 +01:00
2c10410b0d tape: improve backup task log 2021-03-11 08:43:13 +01:00
d1d74c4367 typo fixes all over the place
found and semi-manually replaced by using:
 codespell -L mut -L crate -i 3 -w

Mostly in comments, but also in email notifications and two occurrences
of a misspelled 'reserved' struct member, which were not used, and
cargo build did not complain about the change, so ...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-10 16:39:57 +01:00
8b7f3b8f1d ui: fix typo in comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-10 15:24:39 +01:00
3f6c2efb8d ui: fix typo in options
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2021-03-10 15:23:17 +01:00
227f36497a d/postinst: fix typo in comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-10 15:22:52 +01:00
5ef4c7bcd3 tape: fix scsi volume_statistics and cartridge_memory for quantum drives 2021-03-10 14:13:48 +01:00
70d00e0149 tape: documentation language fixup
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-03-10 11:05:30 +01:00
dcf155dac9 ui: tape: increase tapestore interval
from 2 to 60 seconds. To retain the response time of the gui
when adding/editing/removing, trigger a manual reload on these actions

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-10 11:00:10 +01:00
3c5b523631 ui: NavigationTree: do not modify list while iterating
iterating over a nodeinterface's children while removing them
will lead to 'child' being undefined

instead collect the children to remove in a separate list
and iterate over them

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-10 10:59:59 +01:00
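
The actual change is in the ExtJS UI code, but the idea is language-agnostic; a small Rust illustration of the same approach: collect what should be removed first, then remove it, instead of mutating the collection while iterating.

    fn main() {
        let mut children = vec!["tape-a", "tape-b", "datastore-x"];

        // first pass: only collect the indices that should go away
        let to_remove: Vec<usize> = children
            .iter()
            .enumerate()
            .filter(|(_, name)| name.starts_with("tape-"))
            .map(|(idx, _)| idx)
            .collect();

        // second pass: remove them, from the back so indices stay valid
        for idx in to_remove.into_iter().rev() {
            children.remove(idx);
        }

        assert_eq!(children, vec!["datastore-x"]);
    }
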
6396bace3d tape: improve backup task log (show percentage) 2021-03-10 10:59:13 +01:00
713a128adf tape: improve backup task log format 2021-03-10 09:54:51 +01:00
affc224aca tape: read_tape_mam - pass correct allocation len 2021-03-10 09:24:38 +01:00
6f82d32977 tape: cleanup - remove wrong inline comment 2021-03-10 08:11:51 +01:00
2a06e08618 api2/tape/backup: continue on vanishing snapshots
when we do a prune during a tape backup, do not cancel the tape backup,
but continue with a warning

the task still fails and prompts the user to check the log

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-09 10:20:54 +01:00
1057b1f5a5 tape: lock artificial "__UNASSIGNED__" pool to avoid races 2021-03-09 10:00:26 +01:00
af76234112 tape: improve MediaPool allocation by sorting tapes by ctime and label_text 2021-03-09 08:33:21 +01:00
109 changed files with 4507 additions and 1948 deletions

View File

@ -1,6 +1,6 @@
[package]
name = "proxmox-backup"
version = "1.0.9"
version = "1.0.13"
authors = [
"Dietmar Maurer <dietmar@proxmox.com>",
"Dominik Csapak <d.csapak@proxmox.com>",
@ -52,7 +52,7 @@ proxmox = { version = "0.11.0", features = [ "sortable-macro", "api-macro", "web
#proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox-fuse = "0.1.1"
pxar = { version = "0.9.0", features = [ "tokio-io" ] }
pxar = { version = "0.10.1", features = [ "tokio-io" ] }
#pxar = { path = "../pxar", features = [ "tokio-io" ] }
regex = "1.2"
rustyline = "7"

debian/changelog vendored (53 changed lines)
View File

@ -1,3 +1,56 @@
rust-proxmox-backup (1.0.13-1) unstable; urgency=medium
* pxar: improve handling ACL entries on create and restore
-- Proxmox Support Team <support@proxmox.com> Fri, 02 Apr 2021 15:32:01 +0200
rust-proxmox-backup (1.0.12-1) unstable; urgency=medium
* tape: write catalogs to tape (speedup catalog restore)
* tape: add --scan option for catalog restore
* tape: improve locking (lock media-sets)
* tape: ui: enable datastore mappings
* fix #3359: fix blocking writes in async code during pxar create
* api2/tape/backup: wait indefinitely for lock in scheduled backup jobs
* docu improvements
-- Proxmox Support Team <support@proxmox.com> Fri, 26 Mar 2021 14:08:47 +0100
rust-proxmox-backup (1.0.11-1) unstable; urgency=medium
* fix feature flag logic in pxar create
* tools/zip: add missing start_disk field for zip64 extension to improve
compatibility with some strict archive tools
* tape: speedup backup by doing read/write in parallel
* tape: store datastore name in tape archives and media catalog
-- Proxmox Support Team <support@proxmox.com> Thu, 18 Mar 2021 12:36:01 +0100
rust-proxmox-backup (1.0.10-1) unstable; urgency=medium
* tape: improve MediaPool allocation by sorting tapes by creation time and
label text
* api: tape backup: continue on vanishing snapshots, as a prune during long
running tape backup jobs is OK
* tape: fix scsi volume_statistics and cartridge_memory for quantum drives
* typo fixes all over the place
* d/postinst: restart, not reload, when updating from a to old version
-- Proxmox Support Team <support@proxmox.com> Thu, 11 Mar 2021 08:24:31 +0100
rust-proxmox-backup (1.0.9-1) unstable; urgency=medium
* client: track key source, print when used

debian/control vendored (4 changed lines)
View File

@ -41,8 +41,8 @@ Build-Depends: debhelper (>= 11),
librust-proxmox-0.11+sortable-macro-dev,
librust-proxmox-0.11+websocket-dev,
librust-proxmox-fuse-0.1+default-dev (>= 0.1.1-~~),
librust-pxar-0.9+default-dev,
librust-pxar-0.9+tokio-io-dev,
librust-pxar-0.10+default-dev (>= 0.10.1-~~),
librust-pxar-0.10+tokio-io-dev (>= 0.10.1-~~),
librust-regex-1+default-dev (>= 1.2-~~),
librust-rustyline-7+default-dev,
librust-serde-1+default-dev,

debian/postinst vendored (27 changed lines)
View File

@ -6,13 +6,21 @@ set -e
case "$1" in
configure)
# need to have user backup in the tapoe group
# need to have user backup in the tape group
usermod -a -G tape backup
# modeled after dh_systemd_start output
systemctl --system daemon-reload >/dev/null || true
if [ -n "$2" ]; then
_dh_action=try-reload-or-restart
if dpkg --compare-versions "$2" 'lt' '1.0.7-1'; then
# there was an issue with reloading and systemd being confused in older daemon versions
# so restart instead of reload if upgrading from there, see commit 0ec79339f7aebf9
# FIXME: remove with PBS 2.1
echo "Upgrading from older proxmox-backup-server: restart (not reload) daemons"
_dh_action=try-restart
else
_dh_action=try-reload-or-restart
fi
else
_dh_action=start
fi
@ -40,11 +48,16 @@ case "$1" in
/etc/proxmox-backup/remote.cfg || true
fi
fi
fi
# FIXME: Remove in future version once we're sure no broken entries remain in anyone's files
if grep -q -e ':termproxy::[^@]\+: ' /var/log/proxmox-backup/tasks/active; then
echo "Fixing up termproxy user id in task log..."
flock -w 30 /var/log/proxmox-backup/tasks/active.lock sed -i 's/:termproxy::\([^@]\+\): /:termproxy::\1@pam: /' /var/log/proxmox-backup/tasks/active || true
# FIXME: remove with 2.0
if [ -d "/var/lib/proxmox-backup/tape" ] &&
[ "$(stat --printf '%a' '/var/lib/proxmox-backup/tape')" != "750" ]; then
chmod 0750 /var/lib/proxmox-backup/tape || true
fi
# FIXME: Remove in future version once we're sure no broken entries remain in anyone's files
if grep -q -e ':termproxy::[^@]\+: ' /var/log/proxmox-backup/tasks/active; then
echo "Fixing up termproxy user id in task log..."
flock -w 30 /var/log/proxmox-backup/tasks/active.lock sed -i 's/:termproxy::\([^@]\+\): /:termproxy::\1@pam: /' /var/log/proxmox-backup/tasks/active || true
fi
fi
;;

View File

@ -3,6 +3,7 @@ Backup Client Usage
The command line client is called :command:`proxmox-backup-client`.
.. _client_repository:
Repository Locations
--------------------
@ -691,8 +692,15 @@ Benchmarking
------------
The backup client also comes with a benchmarking tool. This tool measures
various metrics relating to compression and encryption speeds. You can run a
benchmark using the ``benchmark`` subcommand of ``proxmox-backup-client``:
various metrics relating to compression and encryption speeds. If a Proxmox
Backup repository (remote or local) is specified, the TLS upload speed will get
measured too.
You can run a benchmark using the ``benchmark`` subcommand of
``proxmox-backup-client``:
.. note:: The TLS speed test is only included if a :ref:`backup server
repository is specified <client_repository>`.
.. code-block:: console
@ -723,8 +731,7 @@ benchmark using the ``benchmark`` subcommand of ``proxmox-backup-client``:
.. note:: The percentages given in the output table correspond to a
comparison against a Ryzen 7 2700X. The TLS test connects to the
local host, so there is no network involved.
comparison against a Ryzen 7 2700X.
You can also pass the ``--output-format`` parameter to output stats in ``json``,
rather than the default table format.

View File

@ -57,6 +57,11 @@ div.sphinxsidebar h3 {
div.sphinxsidebar h1.logo-name {
display: none;
}
div.document, div.footer {
width: min(100%, 1320px);
}
@media screen and (max-width: 875px) {
div.sphinxsidebar p.logo {
display: initial;
@ -65,9 +70,19 @@ div.sphinxsidebar h1.logo-name {
display: block;
}
div.sphinxsidebar span {
color: #AAA;
color: #EEE;
}
ul li.toctree-l1 > a {
.sphinxsidebar ul li.toctree-l1 > a, div.sphinxsidebar a {
color: #FFF;
}
div.sphinxsidebar {
background-color: #555;
}
div.body {
min-width: 300px;
}
div.footer {
display: block;
margin: 15px auto 0px auto;
}
}

View File

@ -65,10 +65,10 @@ Main Features
:Compression: The ultra-fast Zstandard_ compression is able to compress
several gigabytes of data per second.
:Encryption: Backups can be encrypted on the client-side, using AES-256 in
Galois/Counter Mode (GCM_). This authenticated encryption (AE_) mode
provides very high performance on modern hardware. In addition to client-side
encryption, all data is transferred via a secure TLS connection.
:Encryption: Backups can be encrypted on the client-side, using AES-256 GCM_.
This authenticated encryption (AE_) mode provides very high performance on
modern hardware. In addition to client-side encryption, all data is
transferred via a secure TLS connection.
:Web interface: Manage the Proxmox Backup Server with the integrated, web-based
user interface.
@ -76,8 +76,16 @@ Main Features
:Open Source: No secrets. Proxmox Backup Server is free and open-source
software. The source code is licensed under AGPL, v3.
:Support: Enterprise support will be available from `Proxmox`_ once the beta
phase is over.
:No Limits: Proxmox Backup Server has no artifical limits for backup storage or
backup-clients.
:Enterprise Support: Proxmox Server Solutions GmbH offers enterprise support in
form of `Proxmox Backup Server Subscription Plans
<https://www.proxmox.com/en/proxmox-backup-server/pricing>`_. Users at every
subscription level get access to the Proxmox Backup :ref:`Enterprise
Repository <sysadmin_package_repos_enterprise>`. In addition, with a Basic,
Standard or Premium subscription, users have access to the :ref:`Proxmox
Customer Portal <get_help_enterprise_support>`.
Reasons for Data Backup?
@ -117,8 +125,8 @@ Proxmox Backup Server consists of multiple components:
* A client CLI tool (`proxmox-backup-client`) to access the server easily from
any `Linux amd64` environment
Aside from the web interface, everything is written in the Rust programming
language.
Aside from the web interface, most parts of Proxmox Backup Server are written in
the Rust programming language.
"The Rust programming language helps you write faster, more reliable software.
High-level ergonomics and low-level control are often at odds in programming
@ -134,6 +142,17 @@ language.
Getting Help
------------
.. _get_help_enterprise_support:
Enterprise Support
~~~~~~~~~~~~~~~~~~
Users with a `Proxmox Backup Server Basic, Standard or Premium Subscription Plan
<https://www.proxmox.com/en/proxmox-backup-server/pricing>`_ have access to the
Proxmox Customer Portal. The Customer Portal provides support with guaranteed
response times from the Proxmox developers.
For more information or for volume discounts, please contact office@proxmox.com.
Community Support Forum
~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -69,10 +69,12 @@ Here, the output should be:
f3f6c5a3a67baf38ad178e5ff1ee270c /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
.. _sysadmin_package_repos_enterprise:
`Proxmox Backup`_ Enterprise Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This will be the default, stable, and recommended repository. It is available for
This is the stable, recommended repository. It is available for
all `Proxmox Backup`_ subscription users. It contains the most stable packages,
and is suitable for production use. The ``pbs-enterprise`` repository is
enabled by default:

View File

@ -1,11 +1,11 @@
All command supports the following parameters to specify the tape device:
All commands support the following parameters to specify the tape device:
--device <path> Path to the Linux tape device
--drive <name> Use drive from Proxmox Backup Server configuration.
Commands generating output supports the ``--output-format``
Commands which generate output support the ``--output-format``
parameter. It accepts the following values:
:``text``: Text format (default). Human readable.

View File

@ -4,7 +4,7 @@ Tape Backup
===========
.. CAUTION:: Tape Backup is a technical preview feature, not meant for
production usage. To enable the GUI, you need to issue the
production use. To enable it in the GUI, you need to issue the
following command (as root user on the console):
.. code-block:: console
@ -14,36 +14,36 @@ Tape Backup
Proxmox tape backup provides an easy way to store datastore content
onto magnetic tapes. This increases data safety because you get:
- an additional copy of the data
- to a different media type (tape)
- an additional copy of the data,
- on a different media type (tape),
- to an additional location (you can move tapes off-site)
In most restore jobs, only data from the last backup job is restored.
Restore requests further decline the older the data
Restore requests further decline, the older the data
gets. Considering this, tape backup may also help to reduce disk
usage, because you can safely remove data from disk once archived on
tape. This is especially true if you need to keep data for several
usage, because you can safely remove data from disk, once it's archived on
tape. This is especially true if you need to retain data for several
years.
Tape backups do not provide random access to the stored data. Instead,
you need to restore the data to disk before you can access it
you need to restore the data to disk, before you can access it
again. Also, if you store your tapes off-site (using some kind of tape
vaulting service), you need to bring them on-site before you can do any
restore. So please consider that restores from tapes can take much
longer than restores from disk.
vaulting service), you need to bring them back on-site, before you can do any
restores. So please consider that restoring from tape can take much
longer than restoring from disk.
Tape Technology Primer
----------------------
.. _Linear Tape Open: https://en.wikipedia.org/wiki/Linear_Tape-Open
.. _Linear Tape-Open: https://en.wikipedia.org/wiki/Linear_Tape-Open
As of 2021, the only broadly available tape technology standard is
`Linear Tape Open`_, and different vendors offers LTO Ultrium tape
drives, auto-loaders and LTO tape cartridges.
As of 2021, the only widely available tape technology standard is
`Linear Tape-Open`_ (LTO). Different vendors offer LTO Ultrium tape
drives, auto-loaders, and LTO tape cartridges.
There are a few vendors offering proprietary drives with
slight advantages in performance and capacity, but they have
There are a few vendors that offer proprietary drives with
slight advantages in performance and capacity. Nevertheless, they have
significant disadvantages:
- proprietary (single vendor)
@ -53,13 +53,13 @@ So we currently do not test such drives.
In general, LTO tapes offer the following advantages:
- Durable (30 years)
- Durability (30 year lifespan)
- High Capacity (12 TB)
- Relatively low cost per TB
- Cold Media
- Movable (storable inside vault)
- Multiple vendors (for both media and drives)
- Build in AES-CGM Encryption engine
- Build in AES-GCM Encryption engine
Note that `Proxmox Backup Server` already stores compressed data, so using the
tape compression feature has no advantage.
@ -68,41 +68,40 @@ tape compression feature has no advantage.
Supported Hardware
------------------
Proxmox Backup Server supports `Linear Tape Open`_ generation 4 (LTO4)
or later. In general, all SCSI2 tape drives supported by the Linux
kernel should work, but feature like hardware encryptions needs LTO4
Proxmox Backup Server supports `Linear Tape-Open`_ generation 4 (LTO-4)
or later. In general, all SCSI-2 tape drives supported by the Linux
kernel should work, but features like hardware encryption need LTO-4
or later.
Tape changer support is done using the Linux 'mtx' command line
tool. So any changer device supported by that tool should work.
Tape changing is carried out using the Linux 'mtx' command line
tool, so any changer device supported by this tool should work.
Drive Performance
~~~~~~~~~~~~~~~~~
Current LTO-8 tapes provide read/write speeds up to 360 MB/s. This means,
Current LTO-8 tapes provide read/write speeds of up to 360 MB/s. This means,
that it still takes a minimum of 9 hours to completely write or
read a single tape (even at maximum speed).
The only way to speed up that data rate is to use more than one
drive. That way you can run several backup jobs in parallel, or run
drive. That way, you can run several backup jobs in parallel, or run
restore jobs while the other dives are used for backups.
Also consider that you need to read data first from your datastore
(disk). But a single spinning disk is unable to deliver data at this
Also consider that you first need to read data from your datastore
(disk). However, a single spinning disk is unable to deliver data at this
rate. We measured a maximum rate of about 60MB/s to 100MB/s in practice,
so it takes 33 hours to read 12TB to fill up an LTO-8 tape. If you want
to run your tape at full speed, please make sure that the source
so it takes 33 hours to read the 12TB needed to fill up an LTO-8 tape. If you want
to write to your tape at full speed, please make sure that the source
datastore is able to deliver that performance (e.g, by using SSDs).
Terminology
-----------
:Tape Labels: are used to uniquely identify a tape. You normally use
some sticky paper labels and apply them on the front of the
cartridge. We additionally store the label text magnetically on the
tape (first file on tape).
:Tape Labels: are used to uniquely identify a tape. You would normally apply a
sticky paper label to the front of the cartridge. We additionally store the
label text magnetically on the tape (first file on tape).
.. _Code 39: https://en.wikipedia.org/wiki/Code_39
@ -116,10 +115,10 @@ Terminology
Specification`_.
You can either buy such barcode labels from your cartridge vendor,
or print them yourself. You can use our `LTO Barcode Generator`_ App
for that.
or print them yourself. You can use our `LTO Barcode Generator`_
app, if you would like to print them yourself.
.. Note:: Physical labels and the associated adhesive shall have an
.. Note:: Physical labels and the associated adhesive should have an
environmental performance to match or exceed the environmental
specifications of the cartridge to which it is applied.
@ -133,7 +132,7 @@ Terminology
media pool).
:Tape drive: The device used to read and write data to the tape. There
are standalone drives, but drives often ship within tape libraries.
are standalone drives, but drives are usually shipped within tape libraries.
:Tape changer: A device which can change the tapes inside a tape drive
(tape robot). They are usually part of a tape library.
@ -142,10 +141,10 @@ Terminology
:`Tape library`_: A storage device that contains one or more tape drives,
a number of slots to hold tape cartridges, a barcode reader to
identify tape cartridges and an automated method for loading tapes
identify tape cartridges, and an automated method for loading tapes
(a robot).
This is also commonly known as 'autoloader', 'tape robot' or 'tape jukebox'.
This is also commonly known as an 'autoloader', 'tape robot' or 'tape jukebox'.
:Inventory: The inventory stores the list of known tapes (with
additional status information).
@ -153,14 +152,14 @@ Terminology
:Catalog: A media catalog stores information about the media content.
Tape Quickstart
Tape Quick Start
---------------
1. Configure your tape hardware (drives and changers)
2. Configure one or more media pools
3. Label your tape cartridges.
3. Label your tape cartridges
4. Start your first tape backup job ...
@ -169,7 +168,7 @@ Configuration
-------------
Please note that you can configure anything using the graphical user
interface or the command line interface. Both methods results in the
interface or the command line interface. Both methods result in the
same configuration.
.. _tape_changer_config:
@ -180,7 +179,7 @@ Tape changers
Tape changers (robots) are part of a `Tape Library`_. You can skip
this step if you are using a standalone drive.
Linux is able to auto detect those devices, and you can get a list
Linux is able to auto detect these devices, and you can get a list
of available devices using:
.. code-block:: console
@ -192,7 +191,7 @@ of available devices using:
│ /dev/tape/by-id/scsi-CC2C52 │ Quantum │ Superloader3 │ CC2C52 │
└─────────────────────────────┴─────────┴──────────────┴────────┘
In order to use that device with Proxmox, you need to create a
In order to use a device with Proxmox Backup Server, you need to create a
configuration entry:
.. code-block:: console
@ -201,11 +200,11 @@ configuration entry:
Where ``sl3`` is an arbitrary name you can choose.
.. Note:: Please use stable device path names from inside
.. Note:: Please use the persistent device path names from inside
``/dev/tape/by-id/``. Names like ``/dev/sg0`` may point to a
different device after reboot, and that is not what you want.
You can show the final configuration with:
You can display the final configuration with:
.. code-block:: console
@ -255,12 +254,12 @@ Tape libraries usually provide some special import/export slots (also
called "mail slots"). Tapes inside those slots are accessible from
outside, making it easy to add/remove tapes to/from the library. Those
tapes are considered to be "offline", so backup jobs will not use
them. Those special slots are auto-detected and marked as
them. Those special slots are auto-detected and marked as an
``import-export`` slot in the status command.
It's worth noting that some of the smaller tape libraries don't have
such slots. While they have something called "Mail Slot", that slot
is just a way to grab the tape from the gripper. But they are unable
such slots. While they have something called a "Mail Slot", that slot
is just a way to grab the tape from the gripper. They are unable
to hold media while the robot does other things. They also do not
expose that "Mail Slot" over the SCSI interface, so you wont see them in
the status output.
@ -322,7 +321,7 @@ configuration entry:
# proxmox-tape drive create mydrive --path /dev/tape/by-id/scsi-12345-nst
.. Note:: Please use stable device path names from inside
.. Note:: Please use the persistent device path names from inside
``/dev/tape/by-id/``. Names like ``/dev/nst0`` may point to a
different device after reboot, and that is not what you want.
@ -334,10 +333,10 @@ changer device:
# proxmox-tape drive update mydrive --changer sl3 --changer-drivenum 0
The ``--changer-drivenum`` is only necessary if the tape library
includes more than one drive (The changer status command lists all
includes more than one drive (the changer status command lists all
drive numbers).
You can show the final configuration with:
You can display the final configuration with:
.. code-block:: console
@ -353,7 +352,7 @@ You can show the final configuration with:
└─────────┴────────────────────────────────┘
.. NOTE:: The ``changer-drivenum`` value 0 is not stored in the
configuration, because that is the default.
configuration, because it is the default.
To list all configured drives use:
@ -383,7 +382,7 @@ For testing, you can simply query the drive status with:
└───────────┴────────────────────────┘
.. NOTE:: Blocksize should always be 0 (variable block size
mode). This is the default anyways.
mode). This is the default anyway.
.. _tape_media_pool_config:
@ -399,11 +398,11 @@ one media pool, so a job only uses tapes from that pool.
A media set is a group of continuously written tapes, used to split
the larger pool into smaller, restorable units. One or more backup
jobs write to a media set, producing an ordered group of
tapes. Media sets are identified by an unique ID. That ID and the
sequence number is stored on each tape of that set (tape label).
tapes. Media sets are identified by a unique ID. That ID and the
sequence number are stored on each tape of that set (tape label).
Media sets are the basic unit for restore tasks, i.e. you need all
tapes in the set to restore the media set content. Data is fully
Media sets are the basic unit for restore tasks. This means that you need
every tape in the set to restore the media set contents. Data is fully
deduplicated inside a media set.
@ -412,37 +411,37 @@ one media pool, so a job only uses tapes from that pool.
The pool additionally defines how long backup jobs can append data
to a media set. The following settings are possible:
- Try to use the current media set.
- Try to use the current media set (``continue``).
This setting produce one large media set. While this is very
This setting produces one large media set. While this is very
space efficient (deduplication, no unused space), it can lead to
long restore times, because restore jobs needs to read all tapes in the
long restore times, because restore jobs need to read all tapes in the
set.
.. NOTE:: Data is fully deduplicated inside a media set. That
.. NOTE:: Data is fully deduplicated inside a media set. This
also means that data is randomly distributed over the tapes in
the set. So even if you restore a single VM, this may have to
read data from all tapes inside the media set.
the set. Thus, even if you restore a single VM, data may have to be
read from all tapes inside the media set.
Larger media sets are also more error prone, because a single
damaged media makes the restore fail.
Larger media sets are also more error-prone, because a single
damaged tape makes the restore fail.
Usage scenario: Mostly used with tape libraries, and you manually
Usage scenario: Mostly used with tape libraries. You manually
trigger new set creation by running a backup job with the
``--export`` option.
.. NOTE:: Retention period starts with the existence of a newer
media set.
- Always create a new media set.
- Always create a new media set (``always``).
With this setting each backup job creates a new media set. This
is less space efficient, because the last media from the last set
With this setting, each backup job creates a new media set. This
is less space efficient, because the media from the last set
may not be fully written, leaving the remaining space unused.
The advantage is that this procudes media sets of minimal
size. Small set are easier to handle, you can move sets to an
off-site vault, and restore is much faster.
size. Small sets are easier to handle, can be moved more conveniently
to an off-site vault, and can be restored much faster.
.. NOTE:: Retention period starts with the creation time of the
media set.
@ -468,11 +467,11 @@ one media pool, so a job only uses tapes from that pool.
- Current set contains damaged or retired tapes.
- Media pool encryption changed
- Media pool encryption has changed
- Database consistency errors, e.g. if the inventory does not
contain required media info, or contain conflicting infos
(outdated data).
- Database consistency errors, for example, if the inventory does not
contain the required media information, or it contains conflicting
information (outdated data).
.. topic:: Retention Policy
@ -489,26 +488,27 @@ one media pool, so a job only uses tapes from that pool.
.. topic:: Hardware Encryption
LTO4 (or later) tape drives support hardware encryption. If you
LTO-4 (or later) tape drives support hardware encryption. If you
configure the media pool to use encryption, all data written to the
tapes is encrypted using the configured key.
That way, unauthorized users cannot read data from the media,
e.g. if you loose a media while shipping to an offsite location.
This way, unauthorized users cannot read data from the media,
for example, if you loose a tape while shipping to an offsite location.
.. Note:: If the backup client also encrypts data, data on tape
.. Note:: If the backup client also encrypts data, data on the tape
will be double encrypted.
The password protected key is stored on each media, so it is
possbible to `restore the key <tape_restore_encryption_key_>`_ using the password. Please make sure
you remember the password in case you need to restore the key.
The password protected key is stored on each medium, so that it is
possbible to `restore the key <tape_restore_encryption_key_>`_ using
the password. Please make sure to remember the password, in case
you need to restore the key.
.. NOTE:: We use global content namespace, i.e. we do not store the
source datastore, so it is impossible to distinguish store1:/vm/100
from store2:/vm/100. Please use different media pools if the
sources are from different name spaces with conflicting names
(E.g. if the sources are from different Proxmox VE clusters).
.. NOTE:: We use global content namespace, meaning we do not store the
source datastore name. Because of this, it is impossible to distinguish
store1:/vm/100 from store2:/vm/100. Please use different media pools
if the sources are from different namespaces with conflicting names
(for example, if the sources are from different Proxmox VE clusters).
The following command creates a new media pool:
@ -520,7 +520,7 @@ The following command creates a new media pool:
# proxmox-tape pool create daily --drive mydrive
Additional option can be set later using the update command:
Additional option can be set later, using the update command:
.. code-block:: console
@ -544,8 +544,8 @@ Tape Backup Jobs
~~~~~~~~~~~~~~~~
To automate tape backup, you can configure tape backup jobs which
store datastore content to a media pool at a specific time
schedule. Required settings are:
write datastore content to a media pool, based on a specific time schedule.
The required settings are:
- ``store``: The datastore you want to backup
@ -564,14 +564,14 @@ use:
# proxmox-tape backup-job create job2 --store vmstore1 \
--pool yourpool --drive yourdrive --schedule daily
Backup includes all snapshot from a backup group by default. You can
The backup includes all snapshots from a backup group by default. You can
set the ``latest-only`` flag to include only the latest snapshots:
.. code-block:: console
# proxmox-tape backup-job update job2 --latest-only
Backup jobs can use email to send tape requests notifications or
Backup jobs can use email to send tape request notifications or
report errors. You can set the notification user with:
.. code-block:: console
@ -581,7 +581,7 @@ report errors. You can set the notification user with:
.. Note:: The email address is a property of the user (see :ref:`user_mgmt`).
It is sometimes useful to eject the tape from the drive after a
backup. For a standalone drive, the ``eject-media`` option eject the
backup. For a standalone drive, the ``eject-media`` option ejects the
tape, making sure that the following backup cannot use the tape
(unless someone manually loads the tape again). For tape libraries,
this option unloads the tape to a free slot, which provides better
@ -591,11 +591,11 @@ dust protection than inside a drive:
# proxmox-tape backup-job update job2 --eject-media
.. Note:: For failed jobs, the tape remain in the drive.
.. Note:: For failed jobs, the tape remains in the drive.
For tape libraries, the ``export-media`` options moves all tapes from
For tape libraries, the ``export-media`` option moves all tapes from
the media set to an export slot, making sure that the following backup
cannot use the tapes. An operator can pickup those tapes and move them
cannot use the tapes. An operator can pick up those tapes and move them
to a vault.
.. code-block:: console
@ -622,9 +622,9 @@ To remove a job, please use:
Administration
--------------
Many sub-command of the ``proxmox-tape`` command line tools take a
Many sub-commands of the ``proxmox-tape`` command line tools take a
parameter called ``--drive``, which specifies the tape drive you want
to work on. For convenience, you can set that in an environment
to work on. For convenience, you can set this in an environment
variable:
.. code-block:: console
@ -639,27 +639,27 @@ parameter from commands that needs a changer device, for example:
# proxmox-tape changer status
Should displays the changer status of the changer device associated with
should display the changer status of the changer device associated with
drive ``mydrive``.
Label Tapes
~~~~~~~~~~~
By default, tape cartidges all looks the same, so you need to put a
label on them for unique identification. So first, put a sticky paper
By default, tape cartridges all look the same, so you need to put a
label on them for unique identification. First, put a sticky paper
label with some human readable text on the cartridge.
If you use a `Tape Library`_, you should use an 8 character string
encoded as `Code 39`_, as definded in the `LTO Ultrium Cartridge Label
Specification`_. You can either bye such barcode labels from your
cartidge vendor, or print them yourself. You can use our `LTO Barcode
Generator`_ App for that.
encoded as `Code 39`_, as defined in the `LTO Ultrium Cartridge Label
Specification`_. You can either buy such barcode labels from your
cartridge vendor, or print them yourself. You can use our `LTO Barcode
Generator`_ app to print them.
Next, you need to write that same label text to the tape, so that the
software can uniquely identify the tape too.
For a standalone drive, manually insert the new tape cartidge into the
For a standalone drive, manually insert the new tape cartridge into the
drive and run:
.. code-block:: console
@ -668,7 +668,7 @@ drive and run:
You may omit the ``--pool`` argument to allow the tape to be used by any pool.
.. Note:: For safety reasons, this command fails if the tape contain
.. Note:: For safety reasons, this command fails if the tape contains
any data. If you want to overwrite it anyway, erase the tape first.
You can verify success by reading back the label:
@ -718,7 +718,7 @@ The following options are available:
--eject-media Eject media upon job completion.
It is normally good practice to eject the tape after use. This unmounts the
tape from the drive and prevents the tape from getting dirty with dust.
tape from the drive and prevents the tape from getting dusty.
--export-media-set Export media set upon job completion.
@ -737,7 +737,7 @@ catalogs, you need to restore them first. Please note that you need
the catalog to find your data, but restoring a complete media-set does
not need media catalogs.
The following command shows the media content (from catalog):
The following command lists the media content (from catalog):
.. code-block:: console
@ -841,7 +841,7 @@ database. Further restore jobs automatically use any available key.
Tape Cleaning
~~~~~~~~~~~~~
LTO tape drives requires regular cleaning. This is done by loading a
LTO tape drives require regular cleaning. This is done by loading a
cleaning cartridge into the drive, which is a manual task for
standalone drives.
@ -876,7 +876,7 @@ This command does the following:
- find the cleaning tape (in slot 3)
- unload the current media from the drive (back to slot1)
- unload the current media from the drive (back to slot 1)
- load the cleaning tape into the drive

View File

@ -181,7 +181,7 @@ fn get_tfa_entry(userid: Userid, id: String) -> Result<TypedTfaInfo, Error> {
if let Some(user_data) = crate::config::tfa::read()?.users.remove(&userid) {
match {
// scope to prevent the temprary iter from borrowing across the whole match
// scope to prevent the temporary iter from borrowing across the whole match
let entry = tfa_id_iter(&user_data).find(|(_ty, _index, entry_id)| id == *entry_id);
entry.map(|(ty, index, _)| (ty, index))
} {
@ -259,7 +259,7 @@ fn delete_tfa(
.ok_or_else(|| http_err!(NOT_FOUND, "no such entry: {}/{}", userid, id))?;
match {
// scope to prevent the temprary iter from borrowing across the whole match
// scope to prevent the temporary iter from borrowing across the whole match
let entry = tfa_id_iter(&user_data).find(|(_, _, entry_id)| id == *entry_id);
entry.map(|(ty, index, _)| (ty, index))
} {

View File

@ -1,4 +1,4 @@
//! Datastore Syncronization Job Management
//! Datastore Synchronization Job Management
use anyhow::{bail, format_err, Error};
use serde_json::Value;

View File

@ -119,7 +119,7 @@ pub fn change_passphrase(
let kdf = kdf.unwrap_or_default();
if let Kdf::None = kdf {
bail!("Please specify a key derivation funktion (none is not allowed here).");
bail!("Please specify a key derivation function (none is not allowed here).");
}
let _lock = open_file_locked(
@ -187,7 +187,7 @@ pub fn create_key(
let kdf = kdf.unwrap_or_default();
if let Kdf::None = kdf {
bail!("Please specify a key derivation funktion (none is not allowed here).");
bail!("Please specify a key derivation function (none is not allowed here).");
}
let (key, mut key_config) = KeyConfig::new(password.as_bytes(), kdf)?;

View File

@ -85,7 +85,7 @@ fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
},
notify: {
type: bool,
description: r#"Send notification mail about new package updates availanle to the
description: r#"Send notification mail about new package updates available to the
email address configured for 'root@pam')."#,
default: false,
optional: true,

View File

@ -32,9 +32,6 @@ use crate::api2::types::{NODE_SCHEMA, SUBSCRIPTION_KEY_SCHEMA, Authid};
pub fn check_subscription(
force: bool,
) -> Result<(), Error> {
// FIXME: drop once proxmox-api-macro is bumped to >> 5.0.0-1
let _remove_me = API_METHOD_CHECK_SUBSCRIPTION_PARAM_DEFAULT_FORCE;
let info = match subscription::read_subscription() {
Err(err) => bail!("could not read subscription status: {}", err),
Ok(Some(info)) => info,

View File

@ -1,10 +1,11 @@
use std::path::Path;
use std::sync::Arc;
use std::sync::{Mutex, Arc};
use anyhow::{bail, format_err, Error};
use serde_json::Value;
use proxmox::{
try_block,
api::{
api,
RpcEnvironment,
@ -16,6 +17,7 @@ use proxmox::{
use crate::{
task_log,
task_warn,
config::{
self,
cached_user_info::CachedUserInfo,
@ -32,6 +34,7 @@ use crate::{
},
server::{
lookup_user_email,
TapeBackupJobSummary,
jobstate::{
Job,
JobState,
@ -42,6 +45,7 @@ use crate::{
DataStore,
BackupDir,
BackupInfo,
StoreProgress,
},
api2::types::{
Authid,
@ -174,8 +178,15 @@ pub fn do_tape_backup_job(
let (drive_config, _digest) = config::drive::config()?;
// early check/lock before starting worker
let drive_lock = lock_tape_device(&drive_config, &setup.drive)?;
// for scheduled jobs we acquire the lock later in the worker
let drive_lock = if schedule.is_some() {
None
} else {
Some(lock_tape_device(&drive_config, &setup.drive)?)
};
let notify_user = setup.notify_user.as_ref().unwrap_or_else(|| &Userid::root_userid());
let email = lookup_user_email(notify_user);
let upid_str = WorkerTask::new_thread(
&worker_type,
@ -183,26 +194,40 @@ pub fn do_tape_backup_job(
auth_id.clone(),
false,
move |worker| {
let _drive_lock = drive_lock; // keep lock guard
set_tape_device_state(&setup.drive, &worker.upid().to_string())?;
job.start(&worker.upid().to_string())?;
let mut drive_lock = drive_lock;
task_log!(worker,"Starting tape backup job '{}'", job_id);
if let Some(event_str) = schedule {
task_log!(worker,"task triggered by schedule '{}'", event_str);
}
let (job_result, summary) = match try_block!({
if schedule.is_some() {
// for scheduled tape backup jobs, we wait indefinitely for the lock
task_log!(worker, "waiting for drive lock...");
loop {
if let Ok(lock) = lock_tape_device(&drive_config, &setup.drive) {
drive_lock = Some(lock);
break;
} // ignore errors
let notify_user = setup.notify_user.as_ref().unwrap_or_else(|| &Userid::root_userid());
let email = lookup_user_email(notify_user);
worker.check_abort()?;
}
}
set_tape_device_state(&setup.drive, &worker.upid().to_string())?;
let job_result = backup_worker(
&worker,
datastore,
&pool_config,
&setup,
email.clone(),
);
task_log!(worker,"Starting tape backup job '{}'", job_id);
if let Some(event_str) = schedule {
task_log!(worker,"task triggered by schedule '{}'", event_str);
}
backup_worker(
&worker,
datastore,
&pool_config,
&setup,
email.clone(),
)
}) {
Ok(summary) => (Ok(()), summary),
Err(err) => (Err(err), Default::default()),
};
let status = worker.create_state(&job_result);
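
The hunk above changes scheduled tape backup jobs to defer the drive lock to the worker and keep retrying it there, checking for a task abort between attempts, instead of failing when the drive is busy. A minimal, self-contained sketch of that retry pattern (the closures and the short sleep between attempts are illustrative assumptions, not the actual PBS API):

.. code-block:: rust

    use std::time::Duration;

    /// Keep trying to acquire a lock until it succeeds or the task is aborted.
    fn wait_for_lock<T>(
        mut try_lock: impl FnMut() -> Result<T, String>,
        check_abort: impl Fn() -> Result<(), String>,
    ) -> Result<T, String> {
        loop {
            if let Ok(guard) = try_lock() {
                return Ok(guard); // got the drive, proceed with the backup
            } // lock errors are ignored and retried, as in the hunk above
            check_abort()?; // stop cleanly if the task was cancelled
            std::thread::sleep(Duration::from_secs(1)); // assumed backoff, not in the original
        }
    }
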
@ -212,6 +237,7 @@ pub fn do_tape_backup_job(
Some(job.jobname()),
&setup,
&job_result,
summary,
) {
eprintln!("send tape backup notification failed: {}", err);
}
@ -338,13 +364,17 @@ pub fn backup(
move |worker| {
let _drive_lock = drive_lock; // keep lock guard
set_tape_device_state(&setup.drive, &worker.upid().to_string())?;
let job_result = backup_worker(
let (job_result, summary) = match backup_worker(
&worker,
datastore,
&pool_config,
&setup,
email.clone(),
);
) {
Ok(summary) => (Ok(()), summary),
Err(err) => (Err(err), Default::default()),
};
if let Some(email) = email {
if let Err(err) = crate::server::send_tape_backup_status(
@ -352,6 +382,7 @@ pub fn backup(
None,
&setup,
&job_result,
summary,
) {
eprintln!("send tape backup notification failed: {}", err);
}
@ -372,16 +403,16 @@ fn backup_worker(
pool_config: &MediaPoolConfig,
setup: &TapeBackupJobSetup,
email: Option<String>,
) -> Result<(), Error> {
) -> Result<TapeBackupJobSummary, Error> {
let status_path = Path::new(TAPE_STATUS_DIR);
let _lock = MediaPool::lock(status_path, &pool_config.name)?;
let start = std::time::Instant::now();
let mut summary: TapeBackupJobSummary = Default::default();
task_log!(worker, "update media online status");
let changer_name = update_media_online_status(&setup.drive)?;
let pool = MediaPool::with_config(status_path, &pool_config, changer_name)?;
let pool = MediaPool::with_config(status_path, &pool_config, changer_name, false)?;
let mut pool_writer = PoolWriter::new(pool, &setup.drive, worker, email)?;
@ -389,45 +420,112 @@ fn backup_worker(
group_list.sort_unstable();
let group_count = group_list.len();
task_log!(worker, "found {} groups", group_count);
let mut progress = StoreProgress::new(group_count as u64);
let latest_only = setup.latest_only.unwrap_or(false);
if latest_only {
task_log!(worker, "latest-only: true (only considering latest snapshots)");
}
for group in group_list {
let datastore_name = datastore.name();
let mut errors = false;
let mut need_catalog = false; // avoid writing catalog for empty jobs
for (group_number, group) in group_list.into_iter().enumerate() {
progress.done_groups = group_number as u64;
progress.done_snapshots = 0;
progress.group_snapshots = 0;
let mut snapshot_list = group.list_backups(&datastore.base_path())?;
BackupInfo::sort_list(&mut snapshot_list, true); // oldest first
if latest_only {
progress.group_snapshots = 1;
if let Some(info) = snapshot_list.pop() {
if pool_writer.contains_snapshot(&info.backup_dir.to_string()) {
if pool_writer.contains_snapshot(datastore_name, &info.backup_dir.to_string()) {
task_log!(worker, "skip snapshot {}", info.backup_dir);
continue;
}
task_log!(worker, "backup snapshot {}", info.backup_dir);
backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)?;
need_catalog = true;
let snapshot_name = info.backup_dir.to_string();
if !backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)? {
errors = true;
} else {
summary.snapshot_list.push(snapshot_name);
}
progress.done_snapshots = 1;
task_log!(
worker,
"percentage done: {}",
progress
);
}
} else {
for info in snapshot_list {
if pool_writer.contains_snapshot(&info.backup_dir.to_string()) {
progress.group_snapshots = snapshot_list.len() as u64;
for (snapshot_number, info) in snapshot_list.into_iter().enumerate() {
if pool_writer.contains_snapshot(datastore_name, &info.backup_dir.to_string()) {
task_log!(worker, "skip snapshot {}", info.backup_dir);
continue;
}
task_log!(worker, "backup snapshot {}", info.backup_dir);
backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)?;
need_catalog = true;
let snapshot_name = info.backup_dir.to_string();
if !backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)? {
errors = true;
} else {
summary.snapshot_list.push(snapshot_name);
}
progress.done_snapshots = snapshot_number as u64 + 1;
task_log!(
worker,
"percentage done: {}",
progress
);
}
}
}
pool_writer.commit()?;
if need_catalog {
task_log!(worker, "append media catalog");
let uuid = pool_writer.load_writable_media(worker)?;
let done = pool_writer.append_catalog_archive(worker)?;
if !done {
task_log!(worker, "catalog does not fit on tape, writing to next volume");
pool_writer.set_media_status_full(&uuid)?;
pool_writer.load_writable_media(worker)?;
let done = pool_writer.append_catalog_archive(worker)?;
if !done {
bail!("write_catalog_archive failed on second media");
}
}
}
if setup.export_media_set.unwrap_or(false) {
pool_writer.export_media_set(worker)?;
} else if setup.eject_media.unwrap_or(false) {
pool_writer.eject_media(worker)?;
}
Ok(())
if errors {
bail!("Tape backup finished with some errors. Please check the task log.");
}
summary.duration = start.elapsed();
Ok(summary)
}
// Try to update the media online status
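
``backup_worker`` now collects a per-job summary and remembers per-snapshot failures instead of aborting on the first error; the job is only failed at the very end. A simplified sketch of that accumulate-then-fail pattern (``JobSummary`` and ``backup_one`` are stand-ins for the real types):

.. code-block:: rust

    use std::time::{Duration, Instant};

    #[derive(Default)]
    struct JobSummary {
        snapshot_list: Vec<String>,
        duration: Duration,
    }

    fn run_job(snapshots: &[&str], backup_one: impl Fn(&str) -> bool) -> Result<JobSummary, String> {
        let start = Instant::now();
        let mut summary = JobSummary::default();
        let mut errors = false;
        for snapshot in snapshots {
            if backup_one(snapshot) {
                summary.snapshot_list.push(snapshot.to_string());
            } else {
                errors = true; // remember the failure, keep going with the next snapshot
            }
        }
        if errors {
            return Err("Tape backup finished with some errors. Please check the task log.".into());
        }
        summary.duration = start.elapsed();
        Ok(summary)
    }
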
@ -460,39 +558,61 @@ pub fn backup_snapshot(
pool_writer: &mut PoolWriter,
datastore: Arc<DataStore>,
snapshot: BackupDir,
) -> Result<(), Error> {
) -> Result<bool, Error> {
task_log!(worker, "start backup {}:{}", datastore.name(), snapshot);
task_log!(worker, "backup snapshot {}", snapshot);
let snapshot_reader = SnapshotReader::new(datastore.clone(), snapshot.clone())?;
let snapshot_reader = match SnapshotReader::new(datastore.clone(), snapshot.clone()) {
Ok(reader) => reader,
Err(err) => {
// ignore missing snapshots and continue
task_warn!(worker, "failed opening snapshot '{}': {}", snapshot, err);
return Ok(false);
}
};
let mut chunk_iter = snapshot_reader.chunk_iterator()?.peekable();
let snapshot_reader = Arc::new(Mutex::new(snapshot_reader));
let (reader_thread, chunk_iter) = pool_writer.spawn_chunk_reader_thread(
datastore.clone(),
snapshot_reader.clone(),
)?;
let mut chunk_iter = chunk_iter.peekable();
loop {
worker.check_abort()?;
// test if we have remaining chunks
if chunk_iter.peek().is_none() {
break;
match chunk_iter.peek() {
None => break,
Some(Ok(_)) => { /* Ok */ },
Some(Err(err)) => bail!("{}", err),
}
let uuid = pool_writer.load_writable_media(worker)?;
worker.check_abort()?;
let (leom, _bytes) = pool_writer.append_chunk_archive(worker, &datastore, &mut chunk_iter)?;
let (leom, _bytes) = pool_writer.append_chunk_archive(worker, &mut chunk_iter, datastore.name())?;
if leom {
pool_writer.set_media_status_full(&uuid)?;
}
}
if let Err(_) = reader_thread.join() {
bail!("chunk reader thread failed");
}
worker.check_abort()?;
let uuid = pool_writer.load_writable_media(worker)?;
worker.check_abort()?;
let snapshot_reader = snapshot_reader.lock().unwrap();
let (done, _bytes) = pool_writer.append_snapshot_archive(worker, &snapshot_reader)?;
if !done {
@ -511,5 +631,5 @@ pub fn backup_snapshot(
task_log!(worker, "end backup {}:{}", datastore.name(), snapshot);
Ok(())
Ok(true)
}
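
``backup_snapshot`` now consumes chunks through a fallible, peekable iterator fed by a separate reader thread: it peeks before loading writable media so that reader errors surface immediately and the end of the stream is detected without consuming an item. A small sketch of that peek-then-consume loop (the chunk and error types are placeholders):

.. code-block:: rust

    /// Drain a fallible chunk stream, peeking before each "load media" step.
    fn drain_chunks<I>(chunks: I) -> Result<usize, String>
    where
        I: Iterator<Item = Result<u32, String>>,
    {
        let mut chunks = chunks.peekable();
        let mut written = 0;
        loop {
            match chunks.peek() {
                None => break,                         // no more chunks, done
                Some(Ok(_)) => {}                      // something to write; media would be loaded here
                Some(Err(err)) => return Err(err.clone()), // reader thread reported an error
            }
            // consume the chunk we just peeked at
            if let Some(Ok(_chunk)) = chunks.next() {
                written += 1;
            }
        }
        Ok(written)
    }
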

View File

@ -48,15 +48,20 @@ use crate::{
MamAttribute,
LinuxDriveAndMediaStatus,
},
tape::restore::restore_media,
tape::restore::{
fast_catalog_restore,
restore_media,
},
},
server::WorkerTask,
tape::{
TAPE_STATUS_DIR,
MediaPool,
Inventory,
MediaCatalog,
MediaId,
lock_media_set,
lock_media_pool,
lock_unassigned_media_pool,
linux_tape_device_list,
lookup_device_identification,
file_formats::{
@ -220,7 +225,7 @@ pub async fn load_slot(drive: String, source_slot: u64) -> Result<(), Error> {
},
},
returns: {
description: "The import-export slot number the media was transfered to.",
description: "The import-export slot number the media was transferred to.",
type: u64,
minimum: 1,
},
@ -373,10 +378,19 @@ pub fn erase_media(
);
let status_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(status_path)?;
let mut inventory = Inventory::new(status_path);
if let Some(MediaSetLabel { ref pool, ref uuid, ..}) = media_id.media_set_label {
let _pool_lock = lock_media_pool(status_path, pool)?;
let _media_set_lock = lock_media_set(status_path, uuid, None)?;
MediaCatalog::destroy(status_path, &media_id.label.uuid)?;
inventory.remove_media(&media_id.label.uuid)?;
} else {
let _lock = lock_unassigned_media_pool(status_path)?;
MediaCatalog::destroy(status_path, &media_id.label.uuid)?;
inventory.remove_media(&media_id.label.uuid)?;
};
MediaCatalog::destroy(status_path, &media_id.label.uuid)?;
inventory.remove_media(&media_id.label.uuid)?;
handle.erase_media(fast.unwrap_or(true))?;
}
}
@ -548,28 +562,37 @@ fn write_media_label(
drive.label_tape(&label)?;
let mut media_set_label = None;
let status_path = Path::new(TAPE_STATUS_DIR);
if let Some(ref pool) = pool {
let media_id = if let Some(ref pool) = pool {
// assign media to pool by writing special media set label
worker.log(format!("Label media '{}' for pool '{}'", label.label_text, pool));
let set = MediaSetLabel::with_data(&pool, [0u8; 16].into(), 0, label.ctime, None);
drive.write_media_set_label(&set, None)?;
media_set_label = Some(set);
let media_id = MediaId { label, media_set_label: Some(set) };
// Create the media catalog
MediaCatalog::overwrite(status_path, &media_id, false)?;
let mut inventory = Inventory::new(status_path);
inventory.store(media_id.clone(), false)?;
media_id
} else {
worker.log(format!("Label media '{}' (no pool assignment)", label.label_text));
}
let media_id = MediaId { label, media_set_label };
let media_id = MediaId { label, media_set_label: None };
let status_path = Path::new(TAPE_STATUS_DIR);
// Create the media catalog
MediaCatalog::overwrite(status_path, &media_id, false)?;
// Create the media catalog
MediaCatalog::overwrite(status_path, &media_id, false)?;
let mut inventory = Inventory::new(status_path);
inventory.store(media_id.clone(), false)?;
let mut inventory = Inventory::load(status_path)?;
inventory.store(media_id.clone(), false)?;
media_id
};
drive.rewind()?;
@ -705,14 +728,24 @@ pub async fn read_label(
if let Err(err) = drive.set_encryption(encrypt_fingerprint) {
// try, but ignore errors. just log to stderr
eprintln!("uable to load encryption key: {}", err);
eprintln!("unable to load encryption key: {}", err);
}
}
if let Some(true) = inventorize {
let state_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(state_path)?;
inventory.store(media_id, false)?;
let mut inventory = Inventory::new(state_path);
if let Some(MediaSetLabel { ref pool, ref uuid, ..}) = media_id.media_set_label {
let _pool_lock = lock_media_pool(state_path, pool)?;
let _lock = lock_media_set(state_path, uuid, None)?;
MediaCatalog::destroy_unrelated_catalog(state_path, &media_id)?;
inventory.store(media_id, false)?;
} else {
let _lock = lock_unassigned_media_pool(state_path)?;
MediaCatalog::destroy(state_path, &media_id.label.uuid)?;
inventory.store(media_id, false)?;
};
}
flat
@ -782,7 +815,7 @@ pub fn clean_drive(
}
}
worker.log("Drive cleaned sucessfully");
worker.log("Drive cleaned successfully");
Ok(())
},
@ -943,11 +976,21 @@ pub fn update_inventory(
}
Ok((Some(media_id), _key_config)) => {
if label_text != media_id.label.label_text {
worker.warn(format!("label text missmatch ({} != {})", label_text, media_id.label.label_text));
worker.warn(format!("label text mismatch ({} != {})", label_text, media_id.label.label_text));
continue;
}
worker.log(format!("inventorize media '{}' with uuid '{}'", label_text, media_id.label.uuid));
inventory.store(media_id, false)?;
if let Some(MediaSetLabel { ref pool, ref uuid, ..}) = media_id.media_set_label {
let _pool_lock = lock_media_pool(state_path, pool)?;
let _lock = lock_media_set(state_path, uuid, None)?;
MediaCatalog::destroy_unrelated_catalog(state_path, &media_id)?;
inventory.store(media_id, false)?;
} else {
let _lock = lock_unassigned_media_pool(state_path)?;
MediaCatalog::destroy(state_path, &media_id.label.uuid)?;
inventory.store(media_id, false)?;
};
}
}
changer.unload_media(None)?;
@ -1012,7 +1055,10 @@ fn barcode_label_media_worker(
) -> Result<(), Error> {
let (mut changer, changer_name) = required_media_changer(drive_config, &drive)?;
let label_text_list = changer.online_media_label_texts()?;
let mut label_text_list = changer.online_media_label_texts()?;
// make sure we label them in the right order
label_text_list.sort();
let state_path = Path::new(TAPE_STATUS_DIR);
@ -1181,6 +1227,11 @@ pub async fn status(drive: String) -> Result<LinuxDriveAndMediaStatus, Error> {
type: bool,
optional: true,
},
scan: {
description: "Re-read the whole tape to reconstruct the catalog instead of restoring saved versions.",
type: bool,
optional: true,
},
verbose: {
description: "Verbose mode - log all found chunks.",
type: bool,
@ -1199,11 +1250,13 @@ pub async fn status(drive: String) -> Result<LinuxDriveAndMediaStatus, Error> {
pub fn catalog_media(
drive: String,
force: Option<bool>,
scan: Option<bool>,
verbose: Option<bool>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let verbose = verbose.unwrap_or(false);
let force = force.unwrap_or(false);
let scan = scan.unwrap_or(false);
let upid_str = run_drive_worker(
rpcenv,
@ -1234,19 +1287,22 @@ pub fn catalog_media(
let status_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(status_path)?;
inventory.store(media_id.clone(), false)?;
let mut inventory = Inventory::new(status_path);
let pool = match media_id.media_set_label {
let (_media_set_lock, media_set_uuid) = match media_id.media_set_label {
None => {
worker.log("media is empty");
let _lock = lock_unassigned_media_pool(status_path)?;
MediaCatalog::destroy(status_path, &media_id.label.uuid)?;
inventory.store(media_id.clone(), false)?;
return Ok(());
}
Some(ref set) => {
if set.uuid.as_ref() == [0u8;16] { // media is empty
worker.log("media is empty");
let _lock = lock_unassigned_media_pool(status_path)?;
MediaCatalog::destroy(status_path, &media_id.label.uuid)?;
inventory.store(media_id.clone(), false)?;
return Ok(());
}
let encrypt_fingerprint = set.encryption_key_fingerprint.clone()
@ -1254,16 +1310,36 @@ pub fn catalog_media(
drive.set_encryption(encrypt_fingerprint)?;
set.pool.clone()
let _pool_lock = lock_media_pool(status_path, &set.pool)?;
let media_set_lock = lock_media_set(status_path, &set.uuid, None)?;
MediaCatalog::destroy_unrelated_catalog(status_path, &media_id)?;
inventory.store(media_id.clone(), false)?;
(media_set_lock, &set.uuid)
}
};
let _lock = MediaPool::lock(status_path, &pool)?;
if MediaCatalog::exists(status_path, &media_id.label.uuid) && !force {
bail!("media catalog exists (please use --force to overwrite)");
}
if !scan {
let media_set = inventory.compute_media_set_members(media_set_uuid)?;
if fast_catalog_restore(&worker, &mut drive, &media_set, &media_id.label.uuid)? {
return Ok(())
}
task_log!(worker, "no catalog found");
}
task_log!(worker, "scanning entire media to reconstruct catalog");
drive.rewind()?;
drive.read_label()?; // skip over labels - we already read them above
restore_media(&worker, &mut drive, &media_id, None, verbose)?;
Ok(())
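
``catalog_media`` now tries a fast path first: unless ``--scan`` is given, it looks for catalog copies stored on the tape and only falls back to reading the whole media when none is found. A compact sketch of that control flow (the closures stand in for ``fast_catalog_restore`` and ``restore_media``):

.. code-block:: rust

    /// Try the fast catalog restore first, fall back to scanning the whole tape.
    fn rebuild_catalog(
        scan: bool,
        fast_restore: impl Fn() -> Result<bool, String>,
        full_scan: impl Fn() -> Result<(), String>,
    ) -> Result<(), String> {
        if !scan {
            if fast_restore()? {
                return Ok(()); // a stored catalog copy was found and restored
            }
            println!("no catalog found");
        }
        println!("scanning entire media to reconstruct catalog");
        full_scan()
    }
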

View File

@ -122,7 +122,7 @@ pub async fn list_media(
let config: MediaPoolConfig = config.lookup("pool", pool_name)?;
let changer_name = None; // assume standalone drive
let mut pool = MediaPool::with_config(status_path, &config, changer_name)?;
let mut pool = MediaPool::with_config(status_path, &config, changer_name, true)?;
let current_time = proxmox::tools::time::epoch_i64();
@ -432,29 +432,32 @@ pub fn list_content(
.generate_media_set_name(&set.uuid, template)
.unwrap_or_else(|_| set.uuid.to_string());
let catalog = MediaCatalog::open(status_path, &media_id.label.uuid, false, false)?;
let catalog = MediaCatalog::open(status_path, &media_id, false, false)?;
for snapshot in catalog.snapshot_index().keys() {
let backup_dir: BackupDir = snapshot.parse()?;
for (store, content) in catalog.content() {
for snapshot in content.snapshot_index.keys() {
let backup_dir: BackupDir = snapshot.parse()?;
if let Some(ref backup_type) = filter.backup_type {
if backup_dir.group().backup_type() != backup_type { continue; }
if let Some(ref backup_type) = filter.backup_type {
if backup_dir.group().backup_type() != backup_type { continue; }
}
if let Some(ref backup_id) = filter.backup_id {
if backup_dir.group().backup_id() != backup_id { continue; }
}
list.push(MediaContentEntry {
uuid: media_id.label.uuid.clone(),
label_text: media_id.label.label_text.to_string(),
pool: set.pool.clone(),
media_set_name: media_set_name.clone(),
media_set_uuid: set.uuid.clone(),
media_set_ctime: set.ctime,
seq_nr: set.seq_nr,
snapshot: snapshot.to_owned(),
store: store.to_owned(),
backup_time: backup_dir.backup_time(),
});
}
if let Some(ref backup_id) = filter.backup_id {
if backup_dir.group().backup_id() != backup_id { continue; }
}
list.push(MediaContentEntry {
uuid: media_id.label.uuid.clone(),
label_text: media_id.label.label_text.to_string(),
pool: set.pool.clone(),
media_set_name: media_set_name.clone(),
media_set_uuid: set.uuid.clone(),
media_set_ctime: set.ctime,
seq_nr: set.seq_nr,
snapshot: snapshot.to_owned(),
backup_time: backup_dir.backup_time(),
});
}
}
@ -497,7 +500,7 @@ pub fn get_media_status(uuid: Uuid) -> Result<MediaStatus, Error> {
/// Update media status (None, 'full', 'damaged' or 'retired')
///
/// It is not allowed to set status to 'writable' or 'unknown' (those
/// are internaly managed states).
/// are internally managed states).
pub fn update_media_status(uuid: Uuid, status: Option<MediaStatus>) -> Result<(), Error> {
let status_path = Path::new(TAPE_STATUS_DIR);

View File

@ -1,6 +1,9 @@
use std::path::Path;
use std::ffi::OsStr;
use std::collections::{HashMap, HashSet};
use std::convert::TryFrom;
use std::io::{Seek, SeekFrom};
use std::sync::Arc;
use anyhow::{bail, format_err, Error};
use serde_json::Value;
@ -12,6 +15,7 @@ use proxmox::{
RpcEnvironmentType,
Router,
Permission,
schema::parse_property_string,
section_config::SectionConfigData,
},
tools::{
@ -26,10 +30,12 @@ use proxmox::{
use crate::{
task_log,
task_warn,
task::TaskState,
tools::compute_file_csum,
api2::types::{
DATASTORE_SCHEMA,
DATASTORE_MAP_ARRAY_SCHEMA,
DATASTORE_MAP_LIST_SCHEMA,
DRIVE_NAME_SCHEMA,
UPID_SCHEMA,
Authid,
@ -40,6 +46,7 @@ use crate::{
cached_user_info::CachedUserInfo,
acl::{
PRIV_DATASTORE_BACKUP,
PRIV_DATASTORE_MODIFY,
PRIV_TAPE_READ,
},
},
@ -64,17 +71,24 @@ use crate::{
TAPE_STATUS_DIR,
TapeRead,
MediaId,
MediaSet,
MediaCatalog,
MediaPool,
Inventory,
lock_media_set,
file_formats::{
PROXMOX_BACKUP_MEDIA_LABEL_MAGIC_1_0,
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0,
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1,
PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0,
PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0,
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0,
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1,
PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0,
MediaContentHeader,
ChunkArchiveHeader,
ChunkArchiveDecoder,
SnapshotArchiveHeader,
CatalogArchiveHeader,
},
drive::{
TapeDriver,
@ -85,14 +99,75 @@ use crate::{
},
};
pub const ROUTER: Router = Router::new()
.post(&API_METHOD_RESTORE);
pub struct DataStoreMap {
map: HashMap<String, Arc<DataStore>>,
default: Option<Arc<DataStore>>,
}
impl TryFrom<String> for DataStoreMap {
type Error = Error;
fn try_from(value: String) -> Result<Self, Error> {
let value = parse_property_string(&value, &DATASTORE_MAP_ARRAY_SCHEMA)?;
let mut mapping: Vec<String> = value
.as_array()
.unwrap()
.iter()
.map(|v| v.as_str().unwrap().to_string())
.collect();
let mut map = HashMap::new();
let mut default = None;
while let Some(mut store) = mapping.pop() {
if let Some(index) = store.find('=') {
let mut target = store.split_off(index);
target.remove(0); // remove '='
let datastore = DataStore::lookup_datastore(&target)?;
map.insert(store, datastore);
} else if default.is_none() {
default = Some(DataStore::lookup_datastore(&store)?);
} else {
bail!("multiple default stores given");
}
}
Ok(Self { map, default })
}
}
impl DataStoreMap {
fn used_datastores<'a>(&self) -> HashSet<&str> {
let mut set = HashSet::new();
for store in self.map.values() {
set.insert(store.name());
}
if let Some(ref store) = self.default {
set.insert(store.name());
}
set
}
fn get_datastore(&self, source: &str) -> Option<&DataStore> {
if let Some(store) = self.map.get(source) {
return Some(&store);
}
if let Some(ref store) = self.default {
return Some(&store);
}
return None;
}
}
pub const ROUTER: Router = Router::new().post(&API_METHOD_RESTORE);
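
``DataStoreMap::try_from`` parses a comma-separated list of ``source=target`` mappings, with at most one bare entry acting as the default target. A simplified, dependency-free sketch of that parsing rule (it returns plain strings instead of looking up ``DataStore`` handles):

.. code-block:: rust

    use std::collections::HashMap;

    /// Parse "a=b,e" into {"a" -> "b"} plus the default target "e".
    fn parse_store_map(list: &str) -> Result<(HashMap<String, String>, Option<String>), String> {
        let mut map = HashMap::new();
        let mut default = None;
        for entry in list.split(',').map(str::trim).filter(|e| !e.is_empty()) {
            match entry.split_once('=') {
                Some((source, target)) => {
                    map.insert(source.to_string(), target.to_string());
                }
                None if default.is_none() => default = Some(entry.to_string()),
                None => return Err("multiple default stores given".to_string()),
            }
        }
        Ok((map, default))
    }
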
#[api(
input: {
properties: {
store: {
schema: DATASTORE_SCHEMA,
schema: DATASTORE_MAP_LIST_SCHEMA,
},
drive: {
schema: DRIVE_NAME_SCHEMA,
@ -105,6 +180,10 @@ pub const ROUTER: Router = Router::new()
type: Userid,
optional: true,
},
owner: {
type: Authid,
optional: true,
},
},
},
returns: {
@ -123,15 +202,34 @@ pub fn restore(
drive: String,
media_set: String,
notify_user: Option<Userid>,
owner: Option<Authid>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
if (privs & PRIV_DATASTORE_BACKUP) == 0 {
bail!("no permissions on /datastore/{}", store);
let store_map = DataStoreMap::try_from(store)
.map_err(|err| format_err!("cannot parse store mapping: {}", err))?;
let used_datastores = store_map.used_datastores();
if used_datastores.len() == 0 {
bail!("no datastores given");
}
for store in used_datastores.iter() {
let privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
if (privs & PRIV_DATASTORE_BACKUP) == 0 {
bail!("no permissions on /datastore/{}", store);
}
if let Some(ref owner) = owner {
let correct_owner = owner == &auth_id
|| (owner.is_token() && !auth_id.is_token() && owner.user() == auth_id.user());
// same permission as changing ownership after syncing
if !correct_owner && privs & PRIV_DATASTORE_MODIFY == 0 {
bail!("no permission to restore as '{}'", owner);
}
}
}
let privs = user_info.lookup_privs(&auth_id, &["tape", "drive", &drive]);
@ -139,11 +237,14 @@ pub fn restore(
bail!("no permissions on /tape/drive/{}", drive);
}
let status_path = Path::new(TAPE_STATUS_DIR);
let inventory = Inventory::load(status_path)?;
let media_set_uuid = media_set.parse()?;
let status_path = Path::new(TAPE_STATUS_DIR);
let _lock = lock_media_set(status_path, &media_set_uuid, None)?;
let inventory = Inventory::load(status_path)?;
let pool = inventory.lookup_media_set_pool(&media_set_uuid)?;
let privs = user_info.lookup_privs(&auth_id, &["tape", "pool", &pool]);
@ -151,8 +252,6 @@ pub fn restore(
bail!("no permissions on /tape/pool/{}", pool);
}
let datastore = DataStore::lookup_datastore(&store)?;
let (drive_config, _digest) = config::drive::config()?;
// early check/lock before starting worker
@ -160,9 +259,14 @@ pub fn restore(
let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;
let taskid = used_datastores
.iter()
.map(|s| s.to_string())
.collect::<Vec<String>>()
.join(", ");
let upid_str = WorkerTask::new_thread(
"tape-restore",
Some(store.clone()),
Some(taskid),
auth_id.clone(),
to_stdout,
move |worker| {
@ -170,8 +274,6 @@ pub fn restore(
set_tape_device_state(&drive, &worker.upid().to_string())?;
let _lock = MediaPool::lock(status_path, &pool)?;
let members = inventory.compute_media_set_members(&media_set_uuid)?;
let media_list = members.media_list();
@ -202,7 +304,11 @@ pub fn restore(
task_log!(worker, "Encryption key fingerprint: {}", fingerprint);
}
task_log!(worker, "Pool: {}", pool);
task_log!(worker, "Datastore: {}", store);
task_log!(worker, "Datastore(s):");
store_map
.used_datastores()
.iter()
.for_each(|store| task_log!(worker, "\t{}", store));
task_log!(worker, "Drive: {}", drive);
task_log!(
worker,
@ -219,9 +325,10 @@ pub fn restore(
media_id,
&drive_config,
&drive,
&datastore,
&store_map,
&auth_id,
&notify_user,
&owner,
)?;
}
@ -249,11 +356,11 @@ pub fn request_and_restore_media(
media_id: &MediaId,
drive_config: &SectionConfigData,
drive_name: &str,
datastore: &DataStore,
store_map: &DataStoreMap,
authid: &Authid,
notify_user: &Option<Userid>,
owner: &Option<Authid>,
) -> Result<(), Error> {
let media_set_uuid = match media_id.media_set_label {
None => bail!("restore_media: no media set - internal error"),
Some(ref set) => &set.uuid,
@ -284,7 +391,15 @@ pub fn request_and_restore_media(
}
}
restore_media(worker, &mut drive, &info, Some((datastore, authid)), false)
let restore_owner = owner.as_ref().unwrap_or(authid);
restore_media(
worker,
&mut drive,
&info,
Some((&store_map, restore_owner)),
false,
)
}
/// Restore complete media content and catalog
@ -294,7 +409,7 @@ pub fn restore_media(
worker: &WorkerTask,
drive: &mut Box<dyn TapeDriver>,
media_id: &MediaId,
target: Option<(&DataStore, &Authid)>,
target: Option<(&DataStoreMap, &Authid)>,
verbose: bool,
) -> Result<(), Error> {
@ -323,11 +438,10 @@ fn restore_archive<'a>(
worker: &WorkerTask,
mut reader: Box<dyn 'a + TapeRead>,
current_file_number: u64,
target: Option<(&DataStore, &Authid)>,
target: Option<(&DataStoreMap, &Authid)>,
catalog: &mut MediaCatalog,
verbose: bool,
) -> Result<(), Error> {
let header: MediaContentHeader = unsafe { reader.read_le_value()? };
if header.magic != PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0 {
bail!("missing MediaContentHeader");
@ -340,67 +454,123 @@ fn restore_archive<'a>(
bail!("unexpected content magic (label)");
}
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0 => {
let snapshot = reader.read_exact_allocated(header.size as usize)?;
let snapshot = std::str::from_utf8(&snapshot)
.map_err(|_| format_err!("found snapshot archive with non-utf8 characters in name"))?;
task_log!(worker, "Found snapshot archive: {} {}", current_file_number, snapshot);
bail!("unexpected snapshot archive version (v1.0)");
}
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1 => {
let header_data = reader.read_exact_allocated(header.size as usize)?;
let archive_header: SnapshotArchiveHeader = serde_json::from_slice(&header_data)
.map_err(|err| format_err!("unable to parse snapshot archive header - {}", err))?;
let datastore_name = archive_header.store;
let snapshot = archive_header.snapshot;
task_log!(worker, "File {}: snapshot archive {}:{}", current_file_number, datastore_name, snapshot);
let backup_dir: BackupDir = snapshot.parse()?;
if let Some((datastore, authid)) = target.as_ref() {
let (owner, _group_lock) = datastore.create_locked_backup_group(backup_dir.group(), authid)?;
if *authid != &owner { // only the owner is allowed to create additional snapshots
bail!("restore '{}' failed - owner check failed ({} != {})", snapshot, authid, owner);
}
let (rel_path, is_new, _snap_lock) = datastore.create_locked_backup_dir(&backup_dir)?;
let mut path = datastore.base_path();
path.push(rel_path);
if is_new {
task_log!(worker, "restore snapshot {}", backup_dir);
match restore_snapshot_archive(worker, reader, &path) {
Err(err) => {
std::fs::remove_dir_all(&path)?;
bail!("restore snapshot {} failed - {}", backup_dir, err);
}
Ok(false) => {
std::fs::remove_dir_all(&path)?;
task_log!(worker, "skip incomplete snapshot {}", backup_dir);
}
Ok(true) => {
catalog.register_snapshot(Uuid::from(header.uuid), current_file_number, snapshot)?;
catalog.commit_if_large()?;
}
if let Some((store_map, authid)) = target.as_ref() {
if let Some(datastore) = store_map.get_datastore(&datastore_name) {
let (owner, _group_lock) =
datastore.create_locked_backup_group(backup_dir.group(), authid)?;
if *authid != &owner {
// only the owner is allowed to create additional snapshots
bail!(
"restore '{}' failed - owner check failed ({} != {})",
snapshot,
authid,
owner
);
}
return Ok(());
let (rel_path, is_new, _snap_lock) =
datastore.create_locked_backup_dir(&backup_dir)?;
let mut path = datastore.base_path();
path.push(rel_path);
if is_new {
task_log!(worker, "restore snapshot {}", backup_dir);
match restore_snapshot_archive(worker, reader, &path) {
Err(err) => {
std::fs::remove_dir_all(&path)?;
bail!("restore snapshot {} failed - {}", backup_dir, err);
}
Ok(false) => {
std::fs::remove_dir_all(&path)?;
task_log!(worker, "skip incomplete snapshot {}", backup_dir);
}
Ok(true) => {
catalog.register_snapshot(
Uuid::from(header.uuid),
current_file_number,
&datastore_name,
&snapshot,
)?;
catalog.commit_if_large()?;
}
}
return Ok(());
}
} else {
task_log!(worker, "skipping...");
}
}
reader.skip_to_end()?; // read all data
if let Ok(false) = reader.is_incomplete() {
catalog.register_snapshot(Uuid::from(header.uuid), current_file_number, snapshot)?;
catalog.register_snapshot(Uuid::from(header.uuid), current_file_number, &datastore_name, &snapshot)?;
catalog.commit_if_large()?;
}
}
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0 => {
task_log!(worker, "Found chunk archive: {}", current_file_number);
let datastore = target.as_ref().map(|t| t.0);
if let Some(chunks) = restore_chunk_archive(worker, reader, datastore, verbose)? {
catalog.start_chunk_archive(Uuid::from(header.uuid), current_file_number)?;
for digest in chunks.iter() {
catalog.register_chunk(&digest)?;
}
task_log!(worker, "register {} chunks", chunks.len());
catalog.end_chunk_archive()?;
catalog.commit_if_large()?;
}
bail!("unexpected chunk archive version (v1.0)");
}
_ => bail!("unknown content magic {:?}", header.content_magic),
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1 => {
let header_data = reader.read_exact_allocated(header.size as usize)?;
let archive_header: ChunkArchiveHeader = serde_json::from_slice(&header_data)
.map_err(|err| format_err!("unable to parse chunk archive header - {}", err))?;
let source_datastore = archive_header.store;
task_log!(worker, "File {}: chunk archive for datastore '{}'", current_file_number, source_datastore);
let datastore = target
.as_ref()
.and_then(|t| t.0.get_datastore(&source_datastore));
if datastore.is_some() || target.is_none() {
if let Some(chunks) = restore_chunk_archive(worker, reader, datastore, verbose)? {
catalog.start_chunk_archive(
Uuid::from(header.uuid),
current_file_number,
&source_datastore,
)?;
for digest in chunks.iter() {
catalog.register_chunk(&digest)?;
}
task_log!(worker, "register {} chunks", chunks.len());
catalog.end_chunk_archive()?;
catalog.commit_if_large()?;
}
return Ok(());
} else if target.is_some() {
task_log!(worker, "skipping...");
}
reader.skip_to_end()?; // read all data
}
PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0 => {
let header_data = reader.read_exact_allocated(header.size as usize)?;
let archive_header: CatalogArchiveHeader = serde_json::from_slice(&header_data)
.map_err(|err| format_err!("unable to parse catalog archive header - {}", err))?;
task_log!(worker, "File {}: skip catalog '{}'", current_file_number, archive_header.uuid);
reader.skip_to_end()?; // read all data
}
_ => bail!("unknown content magic {:?}", header.content_magic),
}
catalog.commit()?;
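
The new 1.1 archive formats put a small JSON header in front of each snapshot and chunk archive so the restore code knows which source datastore the data belongs to. A hedged sketch of how such a header can be modelled and parsed with serde (the field names follow the usage in the hunk above, but the struct itself is a stand-in for the real ``SnapshotArchiveHeader``):

.. code-block:: rust

    use serde::Deserialize;

    /// Stand-in for the real snapshot archive header.
    #[derive(Deserialize)]
    struct SnapshotArchiveHeader {
        /// source datastore the snapshot was backed up from
        store: String,
        /// snapshot path, e.g. "vm/100/2021-03-28T12:00:00Z"
        snapshot: String,
    }

    fn parse_snapshot_header(header_data: &[u8]) -> Result<SnapshotArchiveHeader, anyhow::Error> {
        serde_json::from_slice(header_data)
            .map_err(|err| anyhow::format_err!("unable to parse snapshot archive header - {}", err))
    }
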
@ -617,3 +787,137 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
Ok(())
}
/// Try to restore media catalogs (from catalog_archives)
pub fn fast_catalog_restore(
worker: &WorkerTask,
drive: &mut Box<dyn TapeDriver>,
media_set: &MediaSet,
uuid: &Uuid, // current media Uuid
) -> Result<bool, Error> {
let status_path = Path::new(TAPE_STATUS_DIR);
let current_file_number = drive.current_file_number()?;
if current_file_number != 2 {
bail!("fast_catalog_restore: wrong media position - internal error");
}
let mut found_catalog = false;
let mut moved_to_eom = false;
loop {
let current_file_number = drive.current_file_number()?;
{ // limit reader scope
let mut reader = match drive.read_next_file()? {
None => {
task_log!(worker, "detected EOT after {} files", current_file_number);
break;
}
Some(reader) => reader,
};
let header: MediaContentHeader = unsafe { reader.read_le_value()? };
if header.magic != PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0 {
bail!("missing MediaContentHeader");
}
if header.content_magic == PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0 {
task_log!(worker, "found catalog at pos {}", current_file_number);
let header_data = reader.read_exact_allocated(header.size as usize)?;
let archive_header: CatalogArchiveHeader = serde_json::from_slice(&header_data)
.map_err(|err| format_err!("unable to parse catalog archive header - {}", err))?;
if &archive_header.media_set_uuid != media_set.uuid() {
task_log!(worker, "skipping unrelated catalog at pos {}", current_file_number);
reader.skip_to_end()?; // read all data
continue;
}
let catalog_uuid = &archive_header.uuid;
let wanted = media_set
.media_list()
.iter()
.find(|e| {
match e {
None => false,
Some(uuid) => uuid == catalog_uuid,
}
})
.is_some();
if !wanted {
task_log!(worker, "skip catalog because media '{}' not inventarized", catalog_uuid);
reader.skip_to_end()?; // read all data
continue;
}
if catalog_uuid == uuid {
// always restore and overwrite catalog
} else {
// only restore if catalog does not exist
if MediaCatalog::exists(status_path, catalog_uuid) {
task_log!(worker, "catalog for media '{}' already exists", catalog_uuid);
reader.skip_to_end()?; // read all data
continue;
}
}
let mut file = MediaCatalog::create_temporary_database_file(status_path, catalog_uuid)?;
std::io::copy(&mut reader, &mut file)?;
file.seek(SeekFrom::Start(0))?;
match MediaCatalog::parse_catalog_header(&mut file)? {
(true, Some(media_uuid), Some(media_set_uuid)) => {
if &media_uuid != catalog_uuid {
task_log!(worker, "catalog uuid missmatch at pos {}", current_file_number);
continue;
}
if media_set_uuid != archive_header.media_set_uuid {
task_log!(worker, "catalog media_set missmatch at pos {}", current_file_number);
continue;
}
MediaCatalog::finish_temporary_database(status_path, &media_uuid, true)?;
if catalog_uuid == uuid {
task_log!(worker, "successfully restored catalog");
found_catalog = true
} else {
task_log!(worker, "successfully restored related catalog {}", media_uuid);
}
}
_ => {
task_warn!(worker, "got incomplete catalog header - skip file");
continue;
}
}
continue;
}
}
if moved_to_eom {
break; // already done - stop
}
moved_to_eom = true;
task_log!(worker, "searching for catalog at EOT (moving to EOT)");
drive.move_to_last_file()?;
let new_file_number = drive.current_file_number()?;
if new_file_number < (current_file_number + 1) {
break; // no new content - stop
}
}
Ok(found_catalog)
}
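
``fast_catalog_restore`` copies each catalog found on tape into a temporary database file, verifies its header, and only then promotes it to its final location. A minimal sketch of that write-verify-promote pattern (paths and the verify callback are placeholders):

.. code-block:: rust

    use std::fs;
    use std::io::{self, Read, Seek, SeekFrom};

    /// Copy a catalog from the tape reader into a temporary file, verify it,
    /// then atomically move it into place (or drop it if the header is bad).
    fn restore_catalog_file(
        mut reader: impl Read,
        tmp_path: &str,
        final_path: &str,
        verify: impl Fn(&mut fs::File) -> io::Result<bool>,
    ) -> io::Result<()> {
        let mut file = fs::File::create(tmp_path)?;
        io::copy(&mut reader, &mut file)?;
        file.seek(SeekFrom::Start(0))?;
        if verify(&mut file)? {
            fs::rename(tmp_path, final_path)?; // promote the verified catalog
        } else {
            fs::remove_file(tmp_path)?; // incomplete or unrelated catalog, discard it
        }
        Ok(())
    }
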

View File

@ -99,6 +99,8 @@ const_regex!{
pub ZPOOL_NAME_REGEX = r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$";
pub UUID_REGEX = r"^[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}$";
pub DATASTORE_MAP_REGEX = concat!(r"(:?", PROXMOX_SAFE_ID_REGEX_STR!(), r"=)?", PROXMOX_SAFE_ID_REGEX_STR!());
}
pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat =
@ -164,6 +166,9 @@ pub const SUBSCRIPTION_KEY_FORMAT: ApiStringFormat =
pub const BLOCKDEVICE_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&BLOCKDEVICE_NAME_REGEX);
pub const DATASTORE_MAP_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DATASTORE_MAP_REGEX);
pub const PASSWORD_SCHEMA: Schema = StringSchema::new("Password.")
.format(&PASSWORD_FORMAT)
.min_length(1)
@ -356,6 +361,25 @@ pub const DATASTORE_SCHEMA: Schema = StringSchema::new("Datastore name.")
.max_length(32)
.schema();
pub const DATASTORE_MAP_SCHEMA: Schema = StringSchema::new("Datastore mapping.")
.format(&DATASTORE_MAP_FORMAT)
.min_length(3)
.max_length(65)
.type_text("(<source>=)?<target>")
.schema();
pub const DATASTORE_MAP_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Datastore mapping list.", &DATASTORE_MAP_SCHEMA)
.schema();
pub const DATASTORE_MAP_LIST_SCHEMA: Schema = StringSchema::new(
"A list of Datastore mappings (or single datastore), comma separated. \
For example 'a=b,e' maps the source datastore 'a' to target 'b' and
all other sources to the default 'e'. If no default is given, only the \
specified sources are mapped.")
.format(&ApiStringFormat::PropertyString(&DATASTORE_MAP_ARRAY_SCHEMA))
.schema();
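
Each mapping entry has the shape ``(<source>=)?<target>``, built from the safe-ID pattern on both sides. A small standalone check of that shape using the regex crate (the ID pattern here is a simplification borrowed from the backup-ID regex shown further down, and the optional group is written in non-capturing form):

.. code-block:: rust

    use regex::Regex;

    fn main() {
        // simplified stand-in for PROXMOX_SAFE_ID_REGEX_STR
        let safe_id = r"[A-Za-z0-9_][A-Za-z0-9._\-]*";
        let map_entry = Regex::new(&format!(r"^(?:{}=)?{}$", safe_id, safe_id)).unwrap();

        assert!(map_entry.is_match("store1"));        // bare default target
        assert!(map_entry.is_match("source=target")); // explicit source mapping
        assert!(!map_entry.is_match("=target"));      // source name must not be empty
    }
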
pub const MEDIA_SET_UUID_SCHEMA: Schema =
StringSchema::new("MediaSet Uuid (We use the all-zero Uuid to reseve an empty media for a specific pool).")
.format(&UUID_FORMAT)
@ -1272,7 +1296,7 @@ pub struct APTUpdateInfo {
pub enum Notify {
/// Never send notification
Never,
/// Send notifications for failed and sucessful jobs
/// Send notifications for failed and successful jobs
Always,
/// Send notifications for failed jobs only
Error,

View File

@ -21,7 +21,7 @@ pub struct OptionalDeviceIdentification {
#[api()]
#[derive(Debug,Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Kind of devive
/// Kind of device
pub enum DeviceKind {
/// Tape changer (Autoloader, Robot)
Changer,

View File

@ -144,6 +144,8 @@ pub struct MediaContentEntry {
pub seq_nr: u64,
/// Media Pool
pub pool: String,
/// Datastore Name
pub store: String,
/// Backup snapshot
pub snapshot: String,
/// Snapshot creation time (epoch)

View File

@ -75,7 +75,7 @@
//!
//! Since PBS allows multiple potentially interfering operations at the
//! same time (e.g. garbage collect, prune, multiple backup creations
//! (only in seperate groups), forget, ...), these need to lock against
//! (only in separate groups), forget, ...), these need to lock against
//! each other in certain scenarios. There is no overarching global lock
//! though, instead always the finest grained lock possible is used,
//! because running these operations concurrently is treated as a feature

View File

@ -3,17 +3,29 @@ use crate::tools;
use anyhow::{bail, format_err, Error};
use std::os::unix::io::RawFd;
use std::path::{PathBuf, Path};
use std::path::{Path, PathBuf};
use proxmox::const_regex;
use super::manifest::MANIFEST_BLOB_NAME;
macro_rules! BACKUP_ID_RE { () => (r"[A-Za-z0-9_][A-Za-z0-9._\-]*") }
macro_rules! BACKUP_TYPE_RE { () => (r"(?:host|vm|ct)") }
macro_rules! BACKUP_TIME_RE { () => (r"[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z") }
macro_rules! BACKUP_ID_RE {
() => {
r"[A-Za-z0-9_][A-Za-z0-9._\-]*"
};
}
macro_rules! BACKUP_TYPE_RE {
() => {
r"(?:host|vm|ct)"
};
}
macro_rules! BACKUP_TIME_RE {
() => {
r"[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z"
};
}
const_regex!{
const_regex! {
BACKUP_FILE_REGEX = r"^.*\.([fd]idx|blob)$";
BACKUP_TYPE_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), r")$");
@ -38,7 +50,6 @@ pub struct BackupGroup {
}
impl std::cmp::Ord for BackupGroup {
fn cmp(&self, other: &Self) -> std::cmp::Ordering {
let type_order = self.backup_type.cmp(&other.backup_type);
if type_order != std::cmp::Ordering::Equal {
@ -51,7 +62,7 @@ impl std::cmp::Ord for BackupGroup {
(Ok(id_self), Ok(id_other)) => id_self.cmp(&id_other),
(Ok(_), Err(_)) => std::cmp::Ordering::Less,
(Err(_), Ok(_)) => std::cmp::Ordering::Greater,
_ => self.backup_id.cmp(&other.backup_id),
_ => self.backup_id.cmp(&other.backup_id),
}
}
}
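
The ``Ord`` impl above compares backup IDs numerically when both parse as integers, ranks numeric IDs before non-numeric ones, and otherwise falls back to plain string comparison (the parse calls sit just outside the visible hunk). A self-contained sketch of that rule, assuming the IDs are parsed as ``u64``:

.. code-block:: rust

    use std::cmp::Ordering;

    /// Order backup IDs numerically when possible, otherwise lexicographically.
    fn cmp_backup_id(a: &str, b: &str) -> Ordering {
        match (a.parse::<u64>(), b.parse::<u64>()) {
            (Ok(x), Ok(y)) => x.cmp(&y),
            (Ok(_), Err(_)) => Ordering::Less,    // numeric IDs sort first
            (Err(_), Ok(_)) => Ordering::Greater,
            _ => a.cmp(b),                        // fall back to string order
        }
    }

    // cmp_backup_id("9", "100") is Ordering::Less, while "9" > "100" as plain strings.
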
@ -63,9 +74,11 @@ impl std::cmp::PartialOrd for BackupGroup {
}
impl BackupGroup {
pub fn new<T: Into<String>, U: Into<String>>(backup_type: T, backup_id: U) -> Self {
Self { backup_type: backup_type.into(), backup_id: backup_id.into() }
Self {
backup_type: backup_type.into(),
backup_id: backup_id.into(),
}
}
pub fn backup_type(&self) -> &str {
@ -76,8 +89,7 @@ impl BackupGroup {
&self.backup_id
}
pub fn group_path(&self) -> PathBuf {
pub fn group_path(&self) -> PathBuf {
let mut relative_path = PathBuf::new();
relative_path.push(&self.backup_type);
@ -88,60 +100,82 @@ impl BackupGroup {
}
pub fn list_backups(&self, base_path: &Path) -> Result<Vec<BackupInfo>, Error> {
let mut list = vec![];
let mut path = base_path.to_owned();
path.push(self.group_path());
tools::scandir(libc::AT_FDCWD, &path, &BACKUP_DATE_REGEX, |l2_fd, backup_time, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
tools::scandir(
libc::AT_FDCWD,
&path,
&BACKUP_DATE_REGEX,
|l2_fd, backup_time, file_type| {
if file_type != nix::dir::Type::Directory {
return Ok(());
}
let backup_dir = BackupDir::with_rfc3339(&self.backup_type, &self.backup_id, backup_time)?;
let files = list_backup_files(l2_fd, backup_time)?;
let backup_dir =
BackupDir::with_rfc3339(&self.backup_type, &self.backup_id, backup_time)?;
let files = list_backup_files(l2_fd, backup_time)?;
list.push(BackupInfo { backup_dir, files });
list.push(BackupInfo { backup_dir, files });
Ok(())
})?;
Ok(())
},
)?;
Ok(list)
}
pub fn last_successful_backup(&self, base_path: &Path) -> Result<Option<i64>, Error> {
pub fn last_successful_backup(&self, base_path: &Path) -> Result<Option<i64>, Error> {
let mut last = None;
let mut path = base_path.to_owned();
path.push(self.group_path());
tools::scandir(libc::AT_FDCWD, &path, &BACKUP_DATE_REGEX, |l2_fd, backup_time, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
let mut manifest_path = PathBuf::from(backup_time);
manifest_path.push(MANIFEST_BLOB_NAME);
use nix::fcntl::{openat, OFlag};
match openat(l2_fd, &manifest_path, OFlag::O_RDONLY, nix::sys::stat::Mode::empty()) {
Ok(rawfd) => {
/* manifest exists --> assume backup was successful */
/* close else this leaks! */
nix::unistd::close(rawfd)?;
},
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => { return Ok(()); }
Err(err) => {
bail!("last_successful_backup: unexpected error - {}", err);
tools::scandir(
libc::AT_FDCWD,
&path,
&BACKUP_DATE_REGEX,
|l2_fd, backup_time, file_type| {
if file_type != nix::dir::Type::Directory {
return Ok(());
}
}
let timestamp = proxmox::tools::time::parse_rfc3339(backup_time)?;
if let Some(last_timestamp) = last {
if timestamp > last_timestamp { last = Some(timestamp); }
} else {
last = Some(timestamp);
}
let mut manifest_path = PathBuf::from(backup_time);
manifest_path.push(MANIFEST_BLOB_NAME);
Ok(())
})?;
use nix::fcntl::{openat, OFlag};
match openat(
l2_fd,
&manifest_path,
OFlag::O_RDONLY,
nix::sys::stat::Mode::empty(),
) {
Ok(rawfd) => {
/* manifest exists --> assume backup was successful */
/* close else this leaks! */
nix::unistd::close(rawfd)?;
}
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => {
return Ok(());
}
Err(err) => {
bail!("last_successful_backup: unexpected error - {}", err);
}
}
let timestamp = proxmox::tools::time::parse_rfc3339(backup_time)?;
if let Some(last_timestamp) = last {
if timestamp > last_timestamp {
last = Some(timestamp);
}
} else {
last = Some(timestamp);
}
Ok(())
},
)?;
Ok(last)
}
@ -162,7 +196,8 @@ impl std::str::FromStr for BackupGroup {
///
/// This parses strings like `vm/100".
fn from_str(path: &str) -> Result<Self, Self::Err> {
let cap = GROUP_PATH_REGEX.captures(path)
let cap = GROUP_PATH_REGEX
.captures(path)
.ok_or_else(|| format_err!("unable to parse backup group path '{}'", path))?;
Ok(Self {
@ -182,11 +217,10 @@ pub struct BackupDir {
/// Backup timestamp
backup_time: i64,
// backup_time as rfc3339
backup_time_string: String
backup_time_string: String,
}
impl BackupDir {
pub fn new<T, U>(backup_type: T, backup_id: U, backup_time: i64) -> Result<Self, Error>
where
T: Into<String>,
@ -196,21 +230,33 @@ impl BackupDir {
BackupDir::with_group(group, backup_time)
}
pub fn with_rfc3339<T,U,V>(backup_type: T, backup_id: U, backup_time_string: V) -> Result<Self, Error>
pub fn with_rfc3339<T, U, V>(
backup_type: T,
backup_id: U,
backup_time_string: V,
) -> Result<Self, Error>
where
T: Into<String>,
U: Into<String>,
V: Into<String>,
{
let backup_time_string = backup_time_string.into();
let backup_time_string = backup_time_string.into();
let backup_time = proxmox::tools::time::parse_rfc3339(&backup_time_string)?;
let group = BackupGroup::new(backup_type.into(), backup_id.into());
Ok(Self { group, backup_time, backup_time_string })
Ok(Self {
group,
backup_time,
backup_time_string,
})
}
pub fn with_group(group: BackupGroup, backup_time: i64) -> Result<Self, Error> {
let backup_time_string = Self::backup_time_to_string(backup_time)?;
Ok(Self { group, backup_time, backup_time_string })
Ok(Self {
group,
backup_time,
backup_time_string,
})
}
pub fn group(&self) -> &BackupGroup {
@ -225,8 +271,7 @@ impl BackupDir {
&self.backup_time_string
}
pub fn relative_path(&self) -> PathBuf {
pub fn relative_path(&self) -> PathBuf {
let mut relative_path = self.group.group_path();
relative_path.push(self.backup_time_string.clone());
@ -247,7 +292,8 @@ impl std::str::FromStr for BackupDir {
///
/// This parses strings like `host/elsa/2020-06-15T05:18:33Z".
fn from_str(path: &str) -> Result<Self, Self::Err> {
let cap = SNAPSHOT_PATH_REGEX.captures(path)
let cap = SNAPSHOT_PATH_REGEX
.captures(path)
.ok_or_else(|| format_err!("unable to parse backup snapshot path '{}'", path))?;
BackupDir::with_rfc3339(
@ -276,7 +322,6 @@ pub struct BackupInfo {
}
impl BackupInfo {
pub fn new(base_path: &Path, backup_dir: BackupDir) -> Result<BackupInfo, Error> {
let mut path = base_path.to_owned();
path.push(backup_dir.relative_path());
@ -287,19 +332,24 @@ impl BackupInfo {
}
/// Finds the latest backup inside a backup group
pub fn last_backup(base_path: &Path, group: &BackupGroup, only_finished: bool)
-> Result<Option<BackupInfo>, Error>
{
pub fn last_backup(
base_path: &Path,
group: &BackupGroup,
only_finished: bool,
) -> Result<Option<BackupInfo>, Error> {
let backups = group.list_backups(base_path)?;
Ok(backups.into_iter()
Ok(backups
.into_iter()
.filter(|item| !only_finished || item.is_finished())
.max_by_key(|item| item.backup_dir.backup_time()))
}
pub fn sort_list(list: &mut Vec<BackupInfo>, ascendending: bool) {
if ascendending { // oldest first
if ascendending {
// oldest first
list.sort_unstable_by(|a, b| a.backup_dir.backup_time.cmp(&b.backup_dir.backup_time));
} else { // newest first
} else {
// newest first
list.sort_unstable_by(|a, b| b.backup_dir.backup_time.cmp(&a.backup_dir.backup_time));
}
}
@ -316,31 +366,52 @@ impl BackupInfo {
pub fn list_backup_groups(base_path: &Path) -> Result<Vec<BackupGroup>, Error> {
let mut list = Vec::new();
tools::scandir(libc::AT_FDCWD, base_path, &BACKUP_TYPE_REGEX, |l0_fd, backup_type, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
tools::scandir(l0_fd, backup_type, &BACKUP_ID_REGEX, |_, backup_id, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
tools::scandir(
libc::AT_FDCWD,
base_path,
&BACKUP_TYPE_REGEX,
|l0_fd, backup_type, file_type| {
if file_type != nix::dir::Type::Directory {
return Ok(());
}
tools::scandir(
l0_fd,
backup_type,
&BACKUP_ID_REGEX,
|_, backup_id, file_type| {
if file_type != nix::dir::Type::Directory {
return Ok(());
}
list.push(BackupGroup::new(backup_type, backup_id));
list.push(BackupGroup::new(backup_type, backup_id));
Ok(())
})
})?;
Ok(())
},
)
},
)?;
Ok(list)
}
pub fn is_finished(&self) -> bool {
// backup is considered unfinished if there is no manifest
self.files.iter().any(|name| name == super::MANIFEST_BLOB_NAME)
self.files
.iter()
.any(|name| name == super::MANIFEST_BLOB_NAME)
}
}
fn list_backup_files<P: ?Sized + nix::NixPath>(dirfd: RawFd, path: &P) -> Result<Vec<String>, Error> {
fn list_backup_files<P: ?Sized + nix::NixPath>(
dirfd: RawFd,
path: &P,
) -> Result<Vec<String>, Error> {
let mut files = vec![];
tools::scandir(dirfd, path, &BACKUP_FILE_REGEX, |_, filename, file_type| {
if file_type != nix::dir::Type::File { return Ok(()); }
if file_type != nix::dir::Type::File {
return Ok(());
}
files.push(filename.to_owned());
Ok(())
})?;

View File

@ -452,7 +452,7 @@ impl ChunkStore {
#[test]
fn test_chunk_store1() {
let mut path = std::fs::canonicalize(".").unwrap(); // we need absulute path
let mut path = std::fs::canonicalize(".").unwrap(); // we need absolute path
path.push(".testdir");
if let Err(_e) = std::fs::remove_dir_all(".testdir") { /* ignore */ }

View File

@ -448,7 +448,7 @@ impl DataStore {
if !self.chunk_store.cond_touch_chunk(digest, false)? {
crate::task_warn!(
worker,
"warning: unable to access non-existant chunk {}, required by {:?}",
"warning: unable to access non-existent chunk {}, required by {:?}",
proxmox::tools::digest_to_hex(digest),
file_name,
);

View File

@ -180,7 +180,7 @@ fn get_tape_handle(param: &Value) -> Result<LinuxTapeHandle, Error> {
///
/// Positioning is done by first rewinding the tape and then spacing
/// forward over count file marks.
fn asf(count: i32, param: Value) -> Result<(), Error> {
fn asf(count: usize, param: Value) -> Result<(), Error> {
let mut handle = get_tape_handle(&param)?;
@ -212,7 +212,7 @@ fn asf(count: i32, param: Value) -> Result<(), Error> {
/// Backward space count files (position before file mark).
///
/// The tape is positioned on the last block of the previous file.
fn bsf(count: i32, param: Value) -> Result<(), Error> {
fn bsf(count: usize, param: Value) -> Result<(), Error> {
let mut handle = get_tape_handle(&param)?;
@ -478,7 +478,7 @@ fn erase(fast: Option<bool>, param: Value) -> Result<(), Error> {
/// Forward space count files (position after file mark).
///
/// The tape is positioned on the first block of the next file.
fn fsf(count: i32, param: Value) -> Result<(), Error> {
fn fsf(count: usize, param: Value) -> Result<(), Error> {
let mut handle = get_tape_handle(&param)?;

View File

@ -1,7 +1,5 @@
use std::collections::HashSet;
use std::convert::TryFrom;
use std::io::{self, Read, Write, Seek, SeekFrom};
use std::os::unix::io::{FromRawFd, RawFd};
use std::path::{Path, PathBuf};
use std::pin::Pin;
use std::sync::{Arc, Mutex};
@ -19,7 +17,7 @@ use pathpatterns::{MatchEntry, MatchType, PatternFlag};
use proxmox::{
tools::{
time::{strftime_local, epoch_i64},
fs::{file_get_contents, file_get_json, replace_file, CreateOptions, image_size},
fs::{file_get_json, replace_file, CreateOptions, image_size},
},
api::{
api,
@ -32,7 +30,11 @@ use proxmox::{
};
use pxar::accessor::{MaybeReady, ReadAt, ReadAtOperation};
use proxmox_backup::tools;
use proxmox_backup::tools::{
self,
StdChannelWriter,
TokioWriterAdapter,
};
use proxmox_backup::api2::types::*;
use proxmox_backup::api2::version;
use proxmox_backup::client::*;
@ -67,8 +69,18 @@ use proxmox_backup::backup::{
mod proxmox_backup_client;
use proxmox_backup_client::*;
mod proxmox_client_tools;
use proxmox_client_tools::*;
pub mod proxmox_client_tools;
use proxmox_client_tools::{
complete_archive_name, complete_auth_id, complete_backup_group, complete_backup_snapshot,
complete_backup_source, complete_chunk_size, complete_group_or_snapshot,
complete_img_archive_name, complete_pxar_archive_name, complete_repository, connect,
extract_repository_from_value,
key_source::{
crypto_parameters, format_key_source, get_encryption_key_password, KEYFD_SCHEMA,
KEYFILE_SCHEMA, MASTER_PUBKEY_FD_SCHEMA, MASTER_PUBKEY_FILE_SCHEMA,
},
CHUNK_SIZE_SCHEMA, REPO_URL_SCHEMA,
};
fn record_repository(repo: &BackupRepository) {
@ -162,7 +174,7 @@ async fn backup_directory<P: AsRef<Path>>(
dir_path: P,
archive_name: &str,
chunk_size: Option<usize>,
catalog: Arc<Mutex<CatalogWriter<crate::tools::StdChannelWriter>>>,
catalog: Arc<Mutex<CatalogWriter<TokioWriterAdapter<StdChannelWriter>>>>,
pxar_create_options: proxmox_backup::pxar::PxarCreateOptions,
upload_options: UploadOptions,
) -> Result<BackupStats, Error> {
@ -460,7 +472,7 @@ async fn start_garbage_collection(param: Value) -> Result<Value, Error> {
}
struct CatalogUploadResult {
catalog_writer: Arc<Mutex<CatalogWriter<crate::tools::StdChannelWriter>>>,
catalog_writer: Arc<Mutex<CatalogWriter<TokioWriterAdapter<StdChannelWriter>>>>,
result: tokio::sync::oneshot::Receiver<Result<BackupStats, Error>>,
}
@ -473,7 +485,7 @@ fn spawn_catalog_upload(
let catalog_chunk_size = 512*1024;
let catalog_chunk_stream = ChunkStream::new(catalog_stream, Some(catalog_chunk_size));
let catalog_writer = Arc::new(Mutex::new(CatalogWriter::new(crate::tools::StdChannelWriter::new(catalog_tx))?));
let catalog_writer = Arc::new(Mutex::new(CatalogWriter::new(TokioWriterAdapter::new(StdChannelWriter::new(catalog_tx)))?));
let (catalog_result_tx, catalog_result_rx) = tokio::sync::oneshot::channel();
@ -499,437 +511,6 @@ fn spawn_catalog_upload(
Ok(CatalogUploadResult { catalog_writer, result: catalog_result_rx })
}
#[derive(Clone, Debug, Eq, PartialEq)]
enum KeySource {
DefaultKey,
Fd,
Path(String),
}
fn format_key_source(source: &KeySource, key_type: &str) -> String {
match source {
KeySource::DefaultKey => format!("Using default {} key..", key_type),
KeySource::Fd => format!("Using {} key from file descriptor..", key_type),
KeySource::Path(path) => format!("Using {} key from '{}'..", key_type, path),
}
}
#[derive(Clone, Debug, Eq, PartialEq)]
struct KeyWithSource {
pub source: KeySource,
pub key: Vec<u8>,
}
impl KeyWithSource {
pub fn from_fd(key: Vec<u8>) -> Self {
Self {
source: KeySource::Fd,
key,
}
}
pub fn from_default(key: Vec<u8>) -> Self {
Self {
source: KeySource::DefaultKey,
key,
}
}
pub fn from_path(path: String, key: Vec<u8>) -> Self {
Self {
source: KeySource::Path(path),
key,
}
}
}
#[derive(Debug, Eq, PartialEq)]
struct CryptoParams {
mode: CryptMode,
enc_key: Option<KeyWithSource>,
// FIXME switch to openssl::rsa::rsa<openssl::pkey::Public> once that is Eq?
master_pubkey: Option<KeyWithSource>,
}
fn crypto_parameters(param: &Value) -> Result<CryptoParams, Error> {
let keyfile = match param.get("keyfile") {
Some(Value::String(keyfile)) => Some(keyfile),
Some(_) => bail!("bad --keyfile parameter type"),
None => None,
};
let key_fd = match param.get("keyfd") {
Some(Value::Number(key_fd)) => Some(
RawFd::try_from(key_fd
.as_i64()
.ok_or_else(|| format_err!("bad key fd: {:?}", key_fd))?
)
.map_err(|err| format_err!("bad key fd: {:?}: {}", key_fd, err))?
),
Some(_) => bail!("bad --keyfd parameter type"),
None => None,
};
let master_pubkey_file = match param.get("master-pubkey-file") {
Some(Value::String(keyfile)) => Some(keyfile),
Some(_) => bail!("bad --master-pubkey-file parameter type"),
None => None,
};
let master_pubkey_fd = match param.get("master-pubkey-fd") {
Some(Value::Number(key_fd)) => Some(
RawFd::try_from(key_fd
.as_i64()
.ok_or_else(|| format_err!("bad master public key fd: {:?}", key_fd))?
)
.map_err(|err| format_err!("bad public master key fd: {:?}: {}", key_fd, err))?
),
Some(_) => bail!("bad --master-pubkey-fd parameter type"),
None => None,
};
let mode: Option<CryptMode> = match param.get("crypt-mode") {
Some(mode) => Some(serde_json::from_value(mode.clone())?),
None => None,
};
let key = match (keyfile, key_fd) {
(None, None) => None,
(Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
(Some(keyfile), None) => Some(KeyWithSource::from_path(
keyfile.clone(),
file_get_contents(keyfile)?,
)),
(None, Some(fd)) => {
let input = unsafe { std::fs::File::from_raw_fd(fd) };
let mut data = Vec::new();
let _len: usize = { input }.read_to_end(&mut data).map_err(|err| {
format_err!("error reading encryption key from fd {}: {}", fd, err)
})?;
Some(KeyWithSource::from_fd(data))
}
};
let master_pubkey = match (master_pubkey_file, master_pubkey_fd) {
(None, None) => None,
(Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
(Some(keyfile), None) => Some(KeyWithSource::from_path(
keyfile.clone(),
file_get_contents(keyfile)?,
)),
(None, Some(fd)) => {
let input = unsafe { std::fs::File::from_raw_fd(fd) };
let mut data = Vec::new();
let _len: usize = { input }
.read_to_end(&mut data)
.map_err(|err| format_err!("error reading master key from fd {}: {}", fd, err))?;
Some(KeyWithSource::from_fd(data))
}
};
let res = match mode {
// no crypt mode, enable encryption if keys are available
None => match (key, master_pubkey) {
// only default keys if available
(None, None) => match key::read_optional_default_encryption_key()? {
None => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
enc_key => {
let master_pubkey = key::read_optional_default_master_pubkey()?;
CryptoParams {
mode: CryptMode::Encrypt,
enc_key,
master_pubkey,
}
},
},
// explicit master key, default enc key needed
(None, master_pubkey) => match key::read_optional_default_encryption_key()? {
None => bail!("--master-pubkey-file/--master-pubkey-fd specified, but no key available"),
enc_key => {
CryptoParams {
mode: CryptMode::Encrypt,
enc_key,
master_pubkey,
}
},
},
// explicit keyfile, maybe default master key
(enc_key, None) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey: key::read_optional_default_master_pubkey()? },
// explicit keyfile and master key
(enc_key, master_pubkey) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey },
},
// explicitly disabled encryption
Some(CryptMode::None) => match (key, master_pubkey) {
// no keys => OK, no encryption
(None, None) => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
// --keyfile and --crypt-mode=none
(Some(_), _) => bail!("--keyfile/--keyfd and --crypt-mode=none are mutually exclusive"),
// --master-pubkey-file and --crypt-mode=none
(_, Some(_)) => bail!("--master-pubkey-file/--master-pubkey-fd and --crypt-mode=none are mutually exclusive"),
},
// explicitly enabled encryption
Some(mode) => match (key, master_pubkey) {
// no key, maybe master key
(None, master_pubkey) => match key::read_optional_default_encryption_key()? {
None => bail!("--crypt-mode without --keyfile and no default key file available"),
enc_key => {
eprintln!("Encrypting with default encryption key!");
let master_pubkey = match master_pubkey {
None => key::read_optional_default_master_pubkey()?,
master_pubkey => master_pubkey,
};
CryptoParams {
mode,
enc_key,
master_pubkey,
}
},
},
// --keyfile and --crypt-mode other than none
(enc_key, master_pubkey) => {
let master_pubkey = match master_pubkey {
None => key::read_optional_default_master_pubkey()?,
master_pubkey => master_pubkey,
};
CryptoParams { mode, enc_key, master_pubkey }
},
},
};
Ok(res)
}
#[test]
// WARNING: there must only be one test for crypto_parameters as the default key handling is not
// safe w.r.t. concurrency
fn test_crypto_parameters_handling() -> Result<(), Error> {
let some_key = vec![1;1];
let default_key = vec![2;1];
let some_master_key = vec![3;1];
let default_master_key = vec![4;1];
let keypath = "./target/testout/keyfile.test";
let master_keypath = "./target/testout/masterkeyfile.test";
let invalid_keypath = "./target/testout/invalid_keyfile.test";
let no_key_res = CryptoParams {
enc_key: None,
master_pubkey: None,
mode: CryptMode::None,
};
let some_key_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: None,
mode: CryptMode::Encrypt,
};
let some_key_some_master_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: Some(KeyWithSource::from_path(
master_keypath.to_string(),
some_master_key.clone(),
)),
mode: CryptMode::Encrypt,
};
let some_key_default_master_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: Some(KeyWithSource::from_default(default_master_key.clone())),
mode: CryptMode::Encrypt,
};
let some_key_sign_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: None,
mode: CryptMode::SignOnly,
};
let default_key_res = CryptoParams {
enc_key: Some(KeyWithSource::from_default(default_key.clone())),
master_pubkey: None,
mode: CryptMode::Encrypt,
};
let default_key_sign_res = CryptoParams {
enc_key: Some(KeyWithSource::from_default(default_key.clone())),
master_pubkey: None,
mode: CryptMode::SignOnly,
};
replace_file(&keypath, &some_key, CreateOptions::default())?;
replace_file(&master_keypath, &some_master_key, CreateOptions::default())?;
// no params, no default key == no key
let res = crypto_parameters(&json!({}));
assert_eq!(res.unwrap(), no_key_res);
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// crypt mode none == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt/sign-only, no keyfile, no default key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// invalid keyfile parameter always errors
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
// now set a default key
unsafe { key::set_test_encryption_key(Ok(Some(default_key.clone()))); }
// and repeat
// no params but default key == default key
let res = crypto_parameters(&json!({}));
assert_eq!(res.unwrap(), default_key_res);
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// crypt mode none == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt/sign-only, no keyfile, default key == default key with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only"}));
assert_eq!(res.unwrap(), default_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt"}));
assert_eq!(res.unwrap(), default_key_res);
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// invalid keyfile parameter always errors
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
// now make default key retrieval error
unsafe { key::set_test_encryption_key(Err(format_err!("test error"))); }
// and repeat
// no params, default key retrieval errors == Error
assert!(crypto_parameters(&json!({})).is_err());
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// crypt mode none == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt/sign-only, no keyfile, default key error == Error
assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// invalid keyfile parameter always errors
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
// now remove default key again
unsafe { key::set_test_encryption_key(Ok(None)); }
// set a default master key
unsafe { key::set_test_default_master_pubkey(Ok(Some(default_master_key.clone()))); }
// and use an explicit master key
assert!(crypto_parameters(&json!({"master-pubkey-file": master_keypath})).is_err());
// just a default == no key
let res = crypto_parameters(&json!({}));
assert_eq!(res.unwrap(), no_key_res);
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": master_keypath}));
assert_eq!(res.unwrap(), some_key_some_master_res);
// same with fallback to default master key
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_default_master_res);
// crypt mode none == error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "master-pubkey-file": master_keypath})).is_err());
// with just default master key == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt without enc key == error
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt", "master-pubkey-file": master_keypath})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath, "master-pubkey-file": master_keypath})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath, "master-pubkey-file": master_keypath}));
assert_eq!(res.unwrap(), some_key_some_master_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_default_master_res);
// invalid master keyfile parameter always errors when a key is passed, even with a valid
// default master key
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "encrypt"})).is_err());
Ok(())
}
#[api(
input: {
properties: {
@ -1160,7 +741,7 @@ async fn create_backup(
);
let (key, created, fingerprint) =
decrypt_key(&key_with_source.key, &key::get_encryption_key_password)?;
decrypt_key(&key_with_source.key, &get_encryption_key_password)?;
println!("Encryption key fingerprint: {}", fingerprint);
let crypt_config = CryptConfig::new(key)?;
@ -1453,7 +1034,7 @@ fn parse_archive_type(name: &str) -> (String, ArchiveType) {
type: String,
description: r###"Target directory path. Use '-' to write to standard output.
We do not extraxt '.pxar' archives when writing to standard output.
We do not extract '.pxar' archives when writing to standard output.
"###
},
@ -1510,7 +1091,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
None => None,
Some(ref key) => {
let (key, _, _) =
decrypt_key(&key.key, &key::get_encryption_key_password).map_err(|err| {
decrypt_key(&key.key, &get_encryption_key_password).map_err(|err| {
eprintln!("{}", format_key_source(&key.source, "encryption"));
err
})?;


@ -330,7 +330,7 @@ async fn get_versions(verbose: bool, param: Value) -> Result<Value, Error> {
let options = default_table_format_options()
.disable_sort()
.noborder(true) // just not helpfull for version info which gets copy pasted often
.noborder(true) // just not helpful for version info which gets copy pasted often
.column(ColumnConfig::new("Package"))
.column(ColumnConfig::new("Version"))
.column(ColumnConfig::new("ExtraInfo").header("Extra Info"))


@ -27,10 +27,13 @@ use proxmox_backup::{
api2::{
self,
types::{
Authid,
DATASTORE_SCHEMA,
DATASTORE_MAP_LIST_SCHEMA,
DRIVE_NAME_SCHEMA,
MEDIA_LABEL_SCHEMA,
MEDIA_POOL_NAME_SCHEMA,
Userid,
},
},
config::{
@ -781,8 +784,8 @@ async fn clean_drive(mut param: Value) -> Result<(), Error> {
let mut client = connect_to_localhost()?;
let path = format!("api2/json/tape/drive/{}/clean-drive", drive);
let result = client.post(&path, Some(param)).await?;
let path = format!("api2/json/tape/drive/{}/clean", drive);
let result = client.put(&path, Some(param)).await?;
view_task_result(&mut client, result, &output_format).await?;
@ -853,7 +856,7 @@ async fn backup(mut param: Value) -> Result<(), Error> {
input: {
properties: {
store: {
schema: DATASTORE_SCHEMA,
schema: DATASTORE_MAP_LIST_SCHEMA,
},
drive: {
schema: DRIVE_NAME_SCHEMA,
@ -863,6 +866,14 @@ async fn backup(mut param: Value) -> Result<(), Error> {
description: "Media set UUID.",
type: String,
},
"notify-user": {
type: Userid,
optional: true,
},
owner: {
type: Authid,
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
@ -900,6 +911,11 @@ async fn restore(mut param: Value) -> Result<(), Error> {
type: bool,
optional: true,
},
scan: {
description: "Re-read the whole tape to reconstruct the catalog instead of restoring saved versions.",
type: bool,
optional: true,
},
verbose: {
description: "Verbose mode - log all found chunks.",
type: bool,


@ -34,6 +34,8 @@ use crate::{
connect,
};
use crate::proxmox_client_tools::key_source::get_encryption_key_password;
#[api()]
#[derive(Copy, Clone, Serialize)]
/// Speed test result
@ -152,7 +154,7 @@ pub async fn benchmark(
let crypt_config = match keyfile {
None => None,
Some(path) => {
let (key, _, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
let (key, _, _) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}


@ -17,7 +17,6 @@ use crate::{
extract_repository_from_value,
format_key_source,
record_repository,
key::get_encryption_key_password,
decrypt_key,
api_datastore_latest_snapshot,
complete_repository,
@ -38,6 +37,8 @@ use crate::{
Shell,
};
use crate::proxmox_client_tools::key_source::get_encryption_key_password;
#[api(
input: {
properties: {


@ -20,114 +20,10 @@ use proxmox_backup::{
tools::paperkey::{generate_paper_key, PaperkeyFormat},
};
use crate::KeyWithSource;
pub const DEFAULT_ENCRYPTION_KEY_FILE_NAME: &str = "encryption-key.json";
pub const DEFAULT_MASTER_PUBKEY_FILE_NAME: &str = "master-public.pem";
pub fn find_default_master_pubkey() -> Result<Option<PathBuf>, Error> {
super::find_xdg_file(
DEFAULT_MASTER_PUBKEY_FILE_NAME,
"default master public key file",
)
}
pub fn place_default_master_pubkey() -> Result<PathBuf, Error> {
super::place_xdg_file(
DEFAULT_MASTER_PUBKEY_FILE_NAME,
"default master public key file",
)
}
pub fn find_default_encryption_key() -> Result<Option<PathBuf>, Error> {
super::find_xdg_file(
DEFAULT_ENCRYPTION_KEY_FILE_NAME,
"default encryption key file",
)
}
pub fn place_default_encryption_key() -> Result<PathBuf, Error> {
super::place_xdg_file(
DEFAULT_ENCRYPTION_KEY_FILE_NAME,
"default encryption key file",
)
}
#[cfg(not(test))]
pub(crate) fn read_optional_default_encryption_key() -> Result<Option<KeyWithSource>, Error> {
find_default_encryption_key()?
.map(|path| file_get_contents(path).map(KeyWithSource::from_default))
.transpose()
}
#[cfg(not(test))]
pub(crate) fn read_optional_default_master_pubkey() -> Result<Option<KeyWithSource>, Error> {
find_default_master_pubkey()?
.map(|path| file_get_contents(path).map(KeyWithSource::from_default))
.transpose()
}
#[cfg(test)]
static mut TEST_DEFAULT_ENCRYPTION_KEY: Result<Option<Vec<u8>>, Error> = Ok(None);
#[cfg(test)]
pub(crate) fn read_optional_default_encryption_key() -> Result<Option<KeyWithSource>, Error> {
// not safe when multiple concurrent test cases end up here!
unsafe {
match &TEST_DEFAULT_ENCRYPTION_KEY {
Ok(Some(key)) => Ok(Some(KeyWithSource::from_default(key.clone()))),
Ok(None) => Ok(None),
Err(_) => bail!("test error"),
}
}
}
#[cfg(test)]
// not safe when multiple concurrent test cases end up here!
pub(crate) unsafe fn set_test_encryption_key(value: Result<Option<Vec<u8>>, Error>) {
TEST_DEFAULT_ENCRYPTION_KEY = value;
}
#[cfg(test)]
static mut TEST_DEFAULT_MASTER_PUBKEY: Result<Option<Vec<u8>>, Error> = Ok(None);
#[cfg(test)]
pub(crate) fn read_optional_default_master_pubkey() -> Result<Option<KeyWithSource>, Error> {
// not safe when multiple concurrent test cases end up here!
unsafe {
match &TEST_DEFAULT_MASTER_PUBKEY {
Ok(Some(key)) => Ok(Some(KeyWithSource::from_default(key.clone()))),
Ok(None) => Ok(None),
Err(_) => bail!("test error"),
}
}
}
#[cfg(test)]
// not safe when multiple concurrent test cases end up here!
pub(crate) unsafe fn set_test_default_master_pubkey(value: Result<Option<Vec<u8>>, Error>) {
TEST_DEFAULT_MASTER_PUBKEY = value;
}
pub fn get_encryption_key_password() -> Result<Vec<u8>, Error> {
// fixme: implement other input methods
use std::env::VarError::*;
match std::env::var("PBS_ENCRYPTION_PASSWORD") {
Ok(p) => return Ok(p.as_bytes().to_vec()),
Err(NotUnicode(_)) => bail!("PBS_ENCRYPTION_PASSWORD contains bad characters"),
Err(NotPresent) => {
// Try another method
}
}
// If we're on a TTY, query the user for a password
if tty::stdin_isatty() {
return Ok(tty::read_password("Encryption Key Password: ")?);
}
bail!("no password input mechanism available");
}
use crate::proxmox_client_tools::key_source::{
find_default_encryption_key, find_default_master_pubkey, get_encryption_key_password,
place_default_encryption_key, place_default_master_pubkey,
};
#[api(
input: {
@ -527,7 +423,7 @@ fn show_master_pubkey(path: Option<String>, param: Value) -> Result<(), Error> {
optional: true,
},
subject: {
description: "Include the specified subject as titel text.",
description: "Include the specified subject as title text.",
optional: true,
},
"output-format": {


@ -1,5 +1,3 @@
use anyhow::{Context, Error};
mod benchmark;
pub use benchmark::*;
mod mount;
@ -13,29 +11,3 @@ pub use snapshot::*;
pub mod key;
pub fn base_directories() -> Result<xdg::BaseDirectories, Error> {
xdg::BaseDirectories::with_prefix("proxmox-backup").map_err(Error::from)
}
/// Convenience helper for better error messages:
pub fn find_xdg_file(
file_name: impl AsRef<std::path::Path>,
description: &'static str,
) -> Result<Option<std::path::PathBuf>, Error> {
let file_name = file_name.as_ref();
base_directories()
.map(|base| base.find_config_file(file_name))
.with_context(|| format!("error searching for {}", description))
}
pub fn place_xdg_file(
file_name: impl AsRef<std::path::Path>,
description: &'static str,
) -> Result<std::path::PathBuf, Error> {
let file_name = file_name.as_ref();
base_directories()
.and_then(|base| {
base.place_config_file(file_name).map_err(Error::from)
})
.with_context(|| format!("failed to place {} in xdg home", description))
}


@ -43,6 +43,8 @@ use crate::{
BufferedDynamicReadAt,
};
use crate::proxmox_client_tools::key_source::get_encryption_key_password;
#[sortable]
const API_METHOD_MOUNT: ApiMethod = ApiMethod::new(
&ApiHandler::Sync(&mount),
@ -140,7 +142,7 @@ fn mount(
return proxmox_backup::tools::runtime::main(mount_do(param, None));
}
// Process should be deamonized.
// Process should be daemonized.
// Make sure to fork before the async runtime is instantiated to avoid troubles.
let (pr, pw) = proxmox_backup::tools::pipe()?;
match unsafe { fork() } {
@ -182,7 +184,7 @@ async fn mount_do(param: Value, pipe: Option<Fd>) -> Result<Value, Error> {
None => None,
Some(path) => {
println!("Encryption key file: '{:?}'", path);
let (key, _, fingerprint) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
let (key, _, fingerprint) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
println!("Encryption key fingerprint: '{}'", fingerprint);
Some(Arc::new(CryptConfig::new(key)?))
}


@ -35,6 +35,8 @@ use crate::{
record_repository,
};
use crate::proxmox_client_tools::key_source::get_encryption_key_password;
#[api(
input: {
properties: {
@ -239,7 +241,7 @@ async fn upload_log(param: Value) -> Result<Value, Error> {
let crypt_config = match crypto.enc_key {
None => None,
Some(key) => {
let (key, _created, _) = decrypt_key(&key.key, &crate::key::get_encryption_key_password)?;
let (key, _created, _) = decrypt_key(&key.key, &get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}


@ -0,0 +1,572 @@
use std::convert::TryFrom;
use std::path::PathBuf;
use std::os::unix::io::{FromRawFd, RawFd};
use std::io::Read;
use anyhow::{bail, format_err, Error};
use serde_json::Value;
use proxmox::api::schema::*;
use proxmox::sys::linux::tty;
use proxmox::tools::fs::file_get_contents;
use proxmox_backup::backup::CryptMode;
pub const DEFAULT_ENCRYPTION_KEY_FILE_NAME: &str = "encryption-key.json";
pub const DEFAULT_MASTER_PUBKEY_FILE_NAME: &str = "master-public.pem";
pub const KEYFILE_SCHEMA: Schema =
StringSchema::new("Path to encryption key. All data will be encrypted using this key.")
.schema();
pub const KEYFD_SCHEMA: Schema =
IntegerSchema::new("Pass an encryption key via an already opened file descriptor.")
.minimum(0)
.schema();
pub const MASTER_PUBKEY_FILE_SCHEMA: Schema = StringSchema::new(
"Path to master public key. The encryption key used for a backup will be encrypted using this key and appended to the backup.")
.schema();
pub const MASTER_PUBKEY_FD_SCHEMA: Schema =
IntegerSchema::new("Pass a master public key via an already opened file descriptor.")
.minimum(0)
.schema();
#[derive(Clone, Debug, Eq, PartialEq)]
pub enum KeySource {
DefaultKey,
Fd,
Path(String),
}
pub fn format_key_source(source: &KeySource, key_type: &str) -> String {
match source {
KeySource::DefaultKey => format!("Using default {} key..", key_type),
KeySource::Fd => format!("Using {} key from file descriptor..", key_type),
KeySource::Path(path) => format!("Using {} key from '{}'..", key_type, path),
}
}
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct KeyWithSource {
pub source: KeySource,
pub key: Vec<u8>,
}
impl KeyWithSource {
pub fn from_fd(key: Vec<u8>) -> Self {
Self {
source: KeySource::Fd,
key,
}
}
pub fn from_default(key: Vec<u8>) -> Self {
Self {
source: KeySource::DefaultKey,
key,
}
}
pub fn from_path(path: String, key: Vec<u8>) -> Self {
Self {
source: KeySource::Path(path),
key,
}
}
}
#[derive(Debug, Eq, PartialEq)]
pub struct CryptoParams {
pub mode: CryptMode,
pub enc_key: Option<KeyWithSource>,
// FIXME switch to openssl::rsa::rsa<openssl::pkey::Public> once that is Eq?
pub master_pubkey: Option<KeyWithSource>,
}
pub fn crypto_parameters(param: &Value) -> Result<CryptoParams, Error> {
let keyfile = match param.get("keyfile") {
Some(Value::String(keyfile)) => Some(keyfile),
Some(_) => bail!("bad --keyfile parameter type"),
None => None,
};
let key_fd = match param.get("keyfd") {
Some(Value::Number(key_fd)) => Some(
RawFd::try_from(key_fd
.as_i64()
.ok_or_else(|| format_err!("bad key fd: {:?}", key_fd))?
)
.map_err(|err| format_err!("bad key fd: {:?}: {}", key_fd, err))?
),
Some(_) => bail!("bad --keyfd parameter type"),
None => None,
};
let master_pubkey_file = match param.get("master-pubkey-file") {
Some(Value::String(keyfile)) => Some(keyfile),
Some(_) => bail!("bad --master-pubkey-file parameter type"),
None => None,
};
let master_pubkey_fd = match param.get("master-pubkey-fd") {
Some(Value::Number(key_fd)) => Some(
RawFd::try_from(key_fd
.as_i64()
.ok_or_else(|| format_err!("bad master public key fd: {:?}", key_fd))?
)
.map_err(|err| format_err!("bad public master key fd: {:?}: {}", key_fd, err))?
),
Some(_) => bail!("bad --master-pubkey-fd parameter type"),
None => None,
};
let mode: Option<CryptMode> = match param.get("crypt-mode") {
Some(mode) => Some(serde_json::from_value(mode.clone())?),
None => None,
};
let key = match (keyfile, key_fd) {
(None, None) => None,
(Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
(Some(keyfile), None) => Some(KeyWithSource::from_path(
keyfile.clone(),
file_get_contents(keyfile)?,
)),
(None, Some(fd)) => {
let input = unsafe { std::fs::File::from_raw_fd(fd) };
let mut data = Vec::new();
let _len: usize = { input }.read_to_end(&mut data).map_err(|err| {
format_err!("error reading encryption key from fd {}: {}", fd, err)
})?;
Some(KeyWithSource::from_fd(data))
}
};
let master_pubkey = match (master_pubkey_file, master_pubkey_fd) {
(None, None) => None,
(Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
(Some(keyfile), None) => Some(KeyWithSource::from_path(
keyfile.clone(),
file_get_contents(keyfile)?,
)),
(None, Some(fd)) => {
let input = unsafe { std::fs::File::from_raw_fd(fd) };
let mut data = Vec::new();
let _len: usize = { input }
.read_to_end(&mut data)
.map_err(|err| format_err!("error reading master key from fd {}: {}", fd, err))?;
Some(KeyWithSource::from_fd(data))
}
};
let res = match mode {
// no crypt mode, enable encryption if keys are available
None => match (key, master_pubkey) {
// only default keys if available
(None, None) => match read_optional_default_encryption_key()? {
None => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
enc_key => {
let master_pubkey = read_optional_default_master_pubkey()?;
CryptoParams {
mode: CryptMode::Encrypt,
enc_key,
master_pubkey,
}
},
},
// explicit master key, default enc key needed
(None, master_pubkey) => match read_optional_default_encryption_key()? {
None => bail!("--master-pubkey-file/--master-pubkey-fd specified, but no key available"),
enc_key => {
CryptoParams {
mode: CryptMode::Encrypt,
enc_key,
master_pubkey,
}
},
},
// explicit keyfile, maybe default master key
(enc_key, None) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey: read_optional_default_master_pubkey()? },
// explicit keyfile and master key
(enc_key, master_pubkey) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey },
},
// explicitly disabled encryption
Some(CryptMode::None) => match (key, master_pubkey) {
// no keys => OK, no encryption
(None, None) => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
// --keyfile and --crypt-mode=none
(Some(_), _) => bail!("--keyfile/--keyfd and --crypt-mode=none are mutually exclusive"),
// --master-pubkey-file and --crypt-mode=none
(_, Some(_)) => bail!("--master-pubkey-file/--master-pubkey-fd and --crypt-mode=none are mutually exclusive"),
},
// explicitly enabled encryption
Some(mode) => match (key, master_pubkey) {
// no key, maybe master key
(None, master_pubkey) => match read_optional_default_encryption_key()? {
None => bail!("--crypt-mode without --keyfile and no default key file available"),
enc_key => {
eprintln!("Encrypting with default encryption key!");
let master_pubkey = match master_pubkey {
None => read_optional_default_master_pubkey()?,
master_pubkey => master_pubkey,
};
CryptoParams {
mode,
enc_key,
master_pubkey,
}
},
},
// --keyfile and --crypt-mode other than none
(enc_key, master_pubkey) => {
let master_pubkey = match master_pubkey {
None => read_optional_default_master_pubkey()?,
master_pubkey => master_pubkey,
};
CryptoParams { mode, enc_key, master_pubkey }
},
},
};
Ok(res)
}
pub fn find_default_master_pubkey() -> Result<Option<PathBuf>, Error> {
super::find_xdg_file(
DEFAULT_MASTER_PUBKEY_FILE_NAME,
"default master public key file",
)
}
pub fn place_default_master_pubkey() -> Result<PathBuf, Error> {
super::place_xdg_file(
DEFAULT_MASTER_PUBKEY_FILE_NAME,
"default master public key file",
)
}
pub fn find_default_encryption_key() -> Result<Option<PathBuf>, Error> {
super::find_xdg_file(
DEFAULT_ENCRYPTION_KEY_FILE_NAME,
"default encryption key file",
)
}
pub fn place_default_encryption_key() -> Result<PathBuf, Error> {
super::place_xdg_file(
DEFAULT_ENCRYPTION_KEY_FILE_NAME,
"default encryption key file",
)
}
#[cfg(not(test))]
pub(crate) fn read_optional_default_encryption_key() -> Result<Option<KeyWithSource>, Error> {
find_default_encryption_key()?
.map(|path| file_get_contents(path).map(KeyWithSource::from_default))
.transpose()
}
#[cfg(not(test))]
pub(crate) fn read_optional_default_master_pubkey() -> Result<Option<KeyWithSource>, Error> {
find_default_master_pubkey()?
.map(|path| file_get_contents(path).map(KeyWithSource::from_default))
.transpose()
}
#[cfg(test)]
static mut TEST_DEFAULT_ENCRYPTION_KEY: Result<Option<Vec<u8>>, Error> = Ok(None);
#[cfg(test)]
pub(crate) fn read_optional_default_encryption_key() -> Result<Option<KeyWithSource>, Error> {
// not safe when multiple concurrent test cases end up here!
unsafe {
match &TEST_DEFAULT_ENCRYPTION_KEY {
Ok(Some(key)) => Ok(Some(KeyWithSource::from_default(key.clone()))),
Ok(None) => Ok(None),
Err(_) => bail!("test error"),
}
}
}
#[cfg(test)]
// not safe when multiple concurrent test cases end up here!
pub(crate) unsafe fn set_test_encryption_key(value: Result<Option<Vec<u8>>, Error>) {
TEST_DEFAULT_ENCRYPTION_KEY = value;
}
#[cfg(test)]
static mut TEST_DEFAULT_MASTER_PUBKEY: Result<Option<Vec<u8>>, Error> = Ok(None);
#[cfg(test)]
pub(crate) fn read_optional_default_master_pubkey() -> Result<Option<KeyWithSource>, Error> {
// not safe when multiple concurrent test cases end up here!
unsafe {
match &TEST_DEFAULT_MASTER_PUBKEY {
Ok(Some(key)) => Ok(Some(KeyWithSource::from_default(key.clone()))),
Ok(None) => Ok(None),
Err(_) => bail!("test error"),
}
}
}
#[cfg(test)]
// not safe when multiple concurrent test cases end up here!
pub(crate) unsafe fn set_test_default_master_pubkey(value: Result<Option<Vec<u8>>, Error>) {
TEST_DEFAULT_MASTER_PUBKEY = value;
}
pub fn get_encryption_key_password() -> Result<Vec<u8>, Error> {
// fixme: implement other input methods
use std::env::VarError::*;
match std::env::var("PBS_ENCRYPTION_PASSWORD") {
Ok(p) => return Ok(p.as_bytes().to_vec()),
Err(NotUnicode(_)) => bail!("PBS_ENCRYPTION_PASSWORD contains bad characters"),
Err(NotPresent) => {
// Try another method
}
}
// If we're on a TTY, query the user for a password
if tty::stdin_isatty() {
return Ok(tty::read_password("Encryption Key Password: ")?);
}
bail!("no password input mechanism available");
}
#[test]
// WARNING: there must only be one test for crypto_parameters as the default key handling is not
// safe w.r.t. concurrency
fn test_crypto_parameters_handling() -> Result<(), Error> {
use serde_json::json;
use proxmox::tools::fs::{replace_file, CreateOptions};
let some_key = vec![1;1];
let default_key = vec![2;1];
let some_master_key = vec![3;1];
let default_master_key = vec![4;1];
let keypath = "./target/testout/keyfile.test";
let master_keypath = "./target/testout/masterkeyfile.test";
let invalid_keypath = "./target/testout/invalid_keyfile.test";
let no_key_res = CryptoParams {
enc_key: None,
master_pubkey: None,
mode: CryptMode::None,
};
let some_key_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: None,
mode: CryptMode::Encrypt,
};
let some_key_some_master_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: Some(KeyWithSource::from_path(
master_keypath.to_string(),
some_master_key.clone(),
)),
mode: CryptMode::Encrypt,
};
let some_key_default_master_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: Some(KeyWithSource::from_default(default_master_key.clone())),
mode: CryptMode::Encrypt,
};
let some_key_sign_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: None,
mode: CryptMode::SignOnly,
};
let default_key_res = CryptoParams {
enc_key: Some(KeyWithSource::from_default(default_key.clone())),
master_pubkey: None,
mode: CryptMode::Encrypt,
};
let default_key_sign_res = CryptoParams {
enc_key: Some(KeyWithSource::from_default(default_key.clone())),
master_pubkey: None,
mode: CryptMode::SignOnly,
};
replace_file(&keypath, &some_key, CreateOptions::default())?;
replace_file(&master_keypath, &some_master_key, CreateOptions::default())?;
// no params, no default key == no key
let res = crypto_parameters(&json!({}));
assert_eq!(res.unwrap(), no_key_res);
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// crypt mode none == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt/sign-only, no keyfile, no default key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// invalid keyfile parameter always errors
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
// now set a default key
unsafe { set_test_encryption_key(Ok(Some(default_key.clone()))); }
// and repeat
// no params but default key == default key
let res = crypto_parameters(&json!({}));
assert_eq!(res.unwrap(), default_key_res);
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// crypt mode none == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt/sign-only, no keyfile, default key == default key with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only"}));
assert_eq!(res.unwrap(), default_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt"}));
assert_eq!(res.unwrap(), default_key_res);
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// invalid keyfile parameter always errors
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
// now make default key retrieval error
unsafe { set_test_encryption_key(Err(format_err!("test error"))); }
// and repeat
// no params, default key retrieval errors == Error
assert!(crypto_parameters(&json!({})).is_err());
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// crypt mode none == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt/sign-only, no keyfile, default key error == Error
assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// invalid keyfile parameter always errors
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
// now remove default key again
unsafe { set_test_encryption_key(Ok(None)); }
// set a default master key
unsafe { set_test_default_master_pubkey(Ok(Some(default_master_key.clone()))); }
// and use an explicit master key
assert!(crypto_parameters(&json!({"master-pubkey-file": master_keypath})).is_err());
// just a default == no key
let res = crypto_parameters(&json!({}));
assert_eq!(res.unwrap(), no_key_res);
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": master_keypath}));
assert_eq!(res.unwrap(), some_key_some_master_res);
// same with fallback to default master key
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_default_master_res);
// crypt mode none == error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "master-pubkey-file": master_keypath})).is_err());
// with just default master key == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt without enc key == error
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt", "master-pubkey-file": master_keypath})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath, "master-pubkey-file": master_keypath})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath, "master-pubkey-file": master_keypath}));
assert_eq!(res.unwrap(), some_key_some_master_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_default_master_res);
// invalid master keyfile parameter always errors when a key is passed, even with a valid
// default master key
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "encrypt"})).is_err());
Ok(())
}
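
For context, a minimal sketch of how a CLI caller might consume the moved helpers (hypothetical function, not part of this commit; it only relies on crypto_parameters, CryptoParams and format_key_source as defined above):

use anyhow::Error;
use serde_json::json;

// Hypothetical caller: build the parameter map the way the CLI layer would
// and resolve it into CryptoParams before constructing any CryptConfig.
fn resolve_crypto(keyfile: Option<String>) -> Result<CryptoParams, Error> {
    let param = match keyfile {
        Some(path) => json!({ "keyfile": path, "crypt-mode": "encrypt" }),
        None => json!({}),
    };
    let crypto = crypto_parameters(&param)?;
    if let Some(enc_key) = &crypto.enc_key {
        // Report where the key came from (default, file descriptor, or path).
        eprintln!("{}", format_key_source(&enc_key.source, "encryption"));
    }
    Ok(crypto)
}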


@ -1,8 +1,7 @@
//! Shared tools useful for common CLI clients.
use std::collections::HashMap;
use anyhow::{bail, format_err, Error};
use anyhow::{bail, format_err, Context, Error};
use serde_json::{json, Value};
use xdg::BaseDirectories;
@ -17,6 +16,8 @@ use proxmox_backup::backup::BackupDir;
use proxmox_backup::client::*;
use proxmox_backup::tools;
pub mod key_source;
const ENV_VAR_PBS_FINGERPRINT: &str = "PBS_FINGERPRINT";
const ENV_VAR_PBS_PASSWORD: &str = "PBS_PASSWORD";
@ -25,24 +26,6 @@ pub const REPO_URL_SCHEMA: Schema = StringSchema::new("Repository URL.")
.max_length(256)
.schema();
pub const KEYFILE_SCHEMA: Schema =
StringSchema::new("Path to encryption key. All data will be encrypted using this key.")
.schema();
pub const KEYFD_SCHEMA: Schema =
IntegerSchema::new("Pass an encryption key via an already opened file descriptor.")
.minimum(0)
.schema();
pub const MASTER_PUBKEY_FILE_SCHEMA: Schema = StringSchema::new(
"Path to master public key. The encryption key used for a backup will be encrypted using this key and appended to the backup.")
.schema();
pub const MASTER_PUBKEY_FD_SCHEMA: Schema =
IntegerSchema::new("Pass a master public key via an already opened file descriptor.")
.minimum(0)
.schema();
pub const CHUNK_SIZE_SCHEMA: Schema = IntegerSchema::new("Chunk size in KB. Must be a power of 2.")
.minimum(64)
.maximum(4096)
@ -364,3 +347,28 @@ pub fn complete_backup_source(arg: &str, param: &HashMap<String, String>) -> Vec
result
}
pub fn base_directories() -> Result<xdg::BaseDirectories, Error> {
xdg::BaseDirectories::with_prefix("proxmox-backup").map_err(Error::from)
}
/// Convenience helper for better error messages:
pub fn find_xdg_file(
file_name: impl AsRef<std::path::Path>,
description: &'static str,
) -> Result<Option<std::path::PathBuf>, Error> {
let file_name = file_name.as_ref();
base_directories()
.map(|base| base.find_config_file(file_name))
.with_context(|| format!("error searching for {}", description))
}
pub fn place_xdg_file(
file_name: impl AsRef<std::path::Path>,
description: &'static str,
) -> Result<std::path::PathBuf, Error> {
let file_name = file_name.as_ref();
base_directories()
.and_then(|base| base.place_config_file(file_name).map_err(Error::from))
.with_context(|| format!("failed to place {} in xdg home", description))
}


@ -84,7 +84,7 @@ pub fn encryption_key_commands() -> CommandLineInterface {
schema: TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA,
},
subject: {
description: "Include the specified subject as titel text.",
description: "Include the specified subject as title text.",
optional: true,
},
"output-format": {
@ -128,7 +128,7 @@ fn paper_key(
},
},
)]
/// Print tthe encryption key's metadata.
/// Print the encryption key's metadata.
fn show_key(
param: Value,
rpcenv: &mut dyn RpcEnvironment,


@ -177,12 +177,14 @@ fn list_content(
let options = default_table_format_options()
.sortby("media-set-uuid", false)
.sortby("seq-nr", false)
.sortby("store", false)
.sortby("snapshot", false)
.sortby("backup-time", false)
.column(ColumnConfig::new("label-text"))
.column(ColumnConfig::new("pool"))
.column(ColumnConfig::new("media-set-name"))
.column(ColumnConfig::new("seq-nr"))
.column(ColumnConfig::new("store"))
.column(ColumnConfig::new("snapshot"))
.column(ColumnConfig::new("media-set-uuid"))
;


@ -130,22 +130,22 @@ fn extract_archive(
) -> Result<(), Error> {
let mut feature_flags = Flags::DEFAULT;
if no_xattrs {
feature_flags ^= Flags::WITH_XATTRS;
feature_flags.remove(Flags::WITH_XATTRS);
}
if no_fcaps {
feature_flags ^= Flags::WITH_FCAPS;
feature_flags.remove(Flags::WITH_FCAPS);
}
if no_acls {
feature_flags ^= Flags::WITH_ACL;
feature_flags.remove(Flags::WITH_ACL);
}
if no_device_nodes {
feature_flags ^= Flags::WITH_DEVICE_NODES;
feature_flags.remove(Flags::WITH_DEVICE_NODES);
}
if no_fifos {
feature_flags ^= Flags::WITH_FIFOS;
feature_flags.remove(Flags::WITH_FIFOS);
}
if no_sockets {
feature_flags ^= Flags::WITH_SOCKETS;
feature_flags.remove(Flags::WITH_SOCKETS);
}
let pattern = pattern.unwrap_or_else(Vec::new);
@ -353,22 +353,22 @@ async fn create_archive(
let writer = std::io::BufWriter::with_capacity(1024 * 1024, file);
let mut feature_flags = Flags::DEFAULT;
if no_xattrs {
feature_flags ^= Flags::WITH_XATTRS;
feature_flags.remove(Flags::WITH_XATTRS);
}
if no_fcaps {
feature_flags ^= Flags::WITH_FCAPS;
feature_flags.remove(Flags::WITH_FCAPS);
}
if no_acls {
feature_flags ^= Flags::WITH_ACL;
feature_flags.remove(Flags::WITH_ACL);
}
if no_device_nodes {
feature_flags ^= Flags::WITH_DEVICE_NODES;
feature_flags.remove(Flags::WITH_DEVICE_NODES);
}
if no_fifos {
feature_flags ^= Flags::WITH_FIFOS;
feature_flags.remove(Flags::WITH_FIFOS);
}
if no_sockets {
feature_flags ^= Flags::WITH_SOCKETS;
feature_flags.remove(Flags::WITH_SOCKETS);
}
let writer = pxar::encoder::sync::StandardWriter::new(writer);
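
The flag handling above replaces the XOR-assign with Flags::remove. XOR toggles a bit, so it only clears a feature that is already present in Flags::DEFAULT and would silently re-enable one that is not, while remove() clears the bit unconditionally. A minimal sketch of the difference using the bitflags crate (hypothetical Feat type, not the actual pxar Flags):

use bitflags::bitflags;

bitflags! {
    struct Feat: u32 {
        const XATTRS = 0b01;
        const ACLS   = 0b10;
    }
}

fn main() {
    // Suppose the starting set does not contain ACLS.
    let mut flags = Feat::XATTRS;

    // XOR toggles: ACLS was clear, so this turns it on instead of off.
    flags ^= Feat::ACLS;
    assert!(flags.contains(Feat::ACLS));

    // remove() always clears the bit and stays correct if called twice.
    flags.remove(Feat::ACLS);
    flags.remove(Feat::ACLS);
    assert!(!flags.contains(Feat::ACLS));
}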


@ -1,6 +1,6 @@
/// Tape command implemented using scsi-generic raw commands
///
/// SCSI-generic command needs root priviledges, so this binary need
/// SCSI-generic command needs root privileges, so this binary need
/// to be setuid root.
///
/// This command can use STDIN as tape device handle.


@ -16,11 +16,11 @@ pub const PROXMOX_BACKUP_RUN_DIR: &str = PROXMOX_BACKUP_RUN_DIR_M!();
/// namespaced directory for persistent logging
pub const PROXMOX_BACKUP_LOG_DIR: &str = PROXMOX_BACKUP_LOG_DIR_M!();
/// logfile for all API reuests handled by the proxy and privileged API daemons. Note that not all
/// logfile for all API requests handled by the proxy and privileged API daemons. Note that not all
/// failed logins can be logged here with full information, use the auth log for that.
pub const API_ACCESS_LOG_FN: &str = concat!(PROXMOX_BACKUP_LOG_DIR_M!(), "/api/access.log");
/// logfile for any failed authentication, via ticket or via token, and new successfull ticket
/// logfile for any failed authentication, via ticket or via token, and new successful ticket
/// creations. This file can be useful for fail2ban.
pub const API_AUTH_LOG_FN: &str = concat!(PROXMOX_BACKUP_LOG_DIR_M!(), "/api/auth.log");


@ -1,12 +1,12 @@
use std::collections::HashSet;
use std::os::unix::fs::OpenOptionsExt;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use anyhow::{bail, format_err, Error};
use futures::*;
use futures::stream::Stream;
use futures::future::AbortHandle;
use futures::stream::Stream;
use futures::*;
use serde_json::{json, Value};
use tokio::io::AsyncReadExt;
use tokio::sync::{mpsc, oneshot};
@ -14,11 +14,11 @@ use tokio_stream::wrappers::ReceiverStream;
use proxmox::tools::digest_to_hex;
use super::merge_known_chunks::{MergedChunkInfo, MergeKnownChunks};
use super::merge_known_chunks::{MergeKnownChunks, MergedChunkInfo};
use crate::backup::*;
use crate::tools::format::HumanByte;
use super::{HttpClient, H2Client};
use super::{H2Client, HttpClient};
pub struct BackupWriter {
h2: H2Client,
@ -28,7 +28,6 @@ pub struct BackupWriter {
}
impl Drop for BackupWriter {
fn drop(&mut self) {
self.abort.abort();
}
@ -48,13 +47,32 @@ pub struct UploadOptions {
pub fixed_size: Option<u64>,
}
struct UploadStats {
chunk_count: usize,
chunk_reused: usize,
size: usize,
size_reused: usize,
size_compressed: usize,
duration: std::time::Duration,
csum: [u8; 32],
}
type UploadQueueSender = mpsc::Sender<(MergedChunkInfo, Option<h2::client::ResponseFuture>)>;
type UploadResultReceiver = oneshot::Receiver<Result<(), Error>>;
impl BackupWriter {
fn new(h2: H2Client, abort: AbortHandle, crypt_config: Option<Arc<CryptConfig>>, verbose: bool) -> Arc<Self> {
Arc::new(Self { h2, abort, crypt_config, verbose })
fn new(
h2: H2Client,
abort: AbortHandle,
crypt_config: Option<Arc<CryptConfig>>,
verbose: bool,
) -> Arc<Self> {
Arc::new(Self {
h2,
abort,
crypt_config,
verbose,
})
}
// FIXME: extract into (flattened) parameter struct?
@ -67,9 +85,8 @@ impl BackupWriter {
backup_id: &str,
backup_time: i64,
debug: bool,
benchmark: bool
benchmark: bool,
) -> Result<Arc<BackupWriter>, Error> {
let param = json!({
"backup-type": backup_type,
"backup-id": backup_id,
@ -80,34 +97,30 @@ impl BackupWriter {
});
let req = HttpClient::request_builder(
client.server(), client.port(), "GET", "/api2/json/backup", Some(param)).unwrap();
client.server(),
client.port(),
"GET",
"/api2/json/backup",
Some(param),
)
.unwrap();
let (h2, abort) = client.start_h2_connection(req, String::from(PROXMOX_BACKUP_PROTOCOL_ID_V1!())).await?;
let (h2, abort) = client
.start_h2_connection(req, String::from(PROXMOX_BACKUP_PROTOCOL_ID_V1!()))
.await?;
Ok(BackupWriter::new(h2, abort, crypt_config, debug))
}
pub async fn get(
&self,
path: &str,
param: Option<Value>,
) -> Result<Value, Error> {
pub async fn get(&self, path: &str, param: Option<Value>) -> Result<Value, Error> {
self.h2.get(path, param).await
}
pub async fn put(
&self,
path: &str,
param: Option<Value>,
) -> Result<Value, Error> {
pub async fn put(&self, path: &str, param: Option<Value>) -> Result<Value, Error> {
self.h2.put(path, param).await
}
pub async fn post(
&self,
path: &str,
param: Option<Value>,
) -> Result<Value, Error> {
pub async fn post(&self, path: &str, param: Option<Value>) -> Result<Value, Error> {
self.h2.post(path, param).await
}
@ -118,7 +131,9 @@ impl BackupWriter {
content_type: &str,
data: Vec<u8>,
) -> Result<Value, Error> {
self.h2.upload("POST", path, param, content_type, data).await
self.h2
.upload("POST", path, param, content_type, data)
.await
}
pub async fn send_upload_request(
@ -129,9 +144,13 @@ impl BackupWriter {
content_type: &str,
data: Vec<u8>,
) -> Result<h2::client::ResponseFuture, Error> {
let request = H2Client::request_builder("localhost", method, path, param, Some(content_type)).unwrap();
let response_future = self.h2.send_request(request, Some(bytes::Bytes::from(data.clone()))).await?;
let request =
H2Client::request_builder("localhost", method, path, param, Some(content_type))
.unwrap();
let response_future = self
.h2
.send_request(request, Some(bytes::Bytes::from(data.clone())))
.await?;
Ok(response_future)
}
@ -163,7 +182,7 @@ impl BackupWriter {
&self,
mut reader: R,
file_name: &str,
) -> Result<BackupStats, Error> {
) -> Result<BackupStats, Error> {
let mut raw_data = Vec::new();
// fixme: avoid loading into memory
reader.read_to_end(&mut raw_data)?;
@ -171,7 +190,16 @@ impl BackupWriter {
let csum = openssl::sha::sha256(&raw_data);
let param = json!({"encoded-size": raw_data.len(), "file-name": file_name });
let size = raw_data.len() as u64;
let _value = self.h2.upload("POST", "blob", Some(param), "application/octet-stream", raw_data).await?;
let _value = self
.h2
.upload(
"POST",
"blob",
Some(param),
"application/octet-stream",
raw_data,
)
.await?;
Ok(BackupStats { size, csum })
}
@ -182,9 +210,11 @@ impl BackupWriter {
options: UploadOptions,
) -> Result<BackupStats, Error> {
let blob = match (options.encrypt, &self.crypt_config) {
(false, _) => DataBlob::encode(&data, None, options.compress)?,
(true, None) => bail!("requested encryption without a crypt config"),
(true, Some(crypt_config)) => DataBlob::encode(&data, Some(crypt_config), options.compress)?,
(false, _) => DataBlob::encode(&data, None, options.compress)?,
(true, None) => bail!("requested encryption without a crypt config"),
(true, Some(crypt_config)) => {
DataBlob::encode(&data, Some(crypt_config), options.compress)?
}
};
let raw_data = blob.into_inner();
@ -192,7 +222,16 @@ impl BackupWriter {
let csum = openssl::sha::sha256(&raw_data);
let param = json!({"encoded-size": size, "file-name": file_name });
let _value = self.h2.upload("POST", "blob", Some(param), "application/octet-stream", raw_data).await?;
let _value = self
.h2
.upload(
"POST",
"blob",
Some(param),
"application/octet-stream",
raw_data,
)
.await?;
Ok(BackupStats { size, csum })
}
@ -202,7 +241,6 @@ impl BackupWriter {
file_name: &str,
options: UploadOptions,
) -> Result<BackupStats, Error> {
let src_path = src_path.as_ref();
let mut file = tokio::fs::File::open(src_path)
@ -215,7 +253,8 @@ impl BackupWriter {
.await
.map_err(|err| format_err!("unable to read file {:?} - {}", src_path, err))?;
self.upload_blob_from_data(contents, file_name, options).await
self.upload_blob_from_data(contents, file_name, options)
.await
}
pub async fn upload_stream(
@ -245,72 +284,118 @@ impl BackupWriter {
// try, but ignore errors
match archive_type(archive_name) {
Ok(ArchiveType::FixedIndex) => {
let _ = self.download_previous_fixed_index(archive_name, &manifest, known_chunks.clone()).await;
let _ = self
.download_previous_fixed_index(
archive_name,
&manifest,
known_chunks.clone(),
)
.await;
}
Ok(ArchiveType::DynamicIndex) => {
let _ = self.download_previous_dynamic_index(archive_name, &manifest, known_chunks.clone()).await;
let _ = self
.download_previous_dynamic_index(
archive_name,
&manifest,
known_chunks.clone(),
)
.await;
}
_ => { /* do nothing */ }
}
}
let wid = self.h2.post(&index_path, Some(param)).await?.as_u64().unwrap();
let wid = self
.h2
.post(&index_path, Some(param))
.await?
.as_u64()
.unwrap();
let (chunk_count, chunk_reused, size, size_reused, duration, csum) =
Self::upload_chunk_info_stream(
self.h2.clone(),
wid,
stream,
&prefix,
known_chunks.clone(),
if options.encrypt { self.crypt_config.clone() } else { None },
options.compress,
self.verbose,
)
.await?;
let upload_stats = Self::upload_chunk_info_stream(
self.h2.clone(),
wid,
stream,
&prefix,
known_chunks.clone(),
if options.encrypt {
self.crypt_config.clone()
} else {
None
},
options.compress,
self.verbose,
)
.await?;
let uploaded = size - size_reused;
let vsize_h: HumanByte = size.into();
let size_dirty = upload_stats.size - upload_stats.size_reused;
let size: HumanByte = upload_stats.size.into();
let archive = if self.verbose {
archive_name.to_string()
} else {
crate::tools::format::strip_server_file_extension(archive_name)
};
if archive_name != CATALOG_NAME {
let speed: HumanByte = ((uploaded * 1_000_000) / (duration.as_micros() as usize)).into();
let uploaded: HumanByte = uploaded.into();
println!("{}: had to upload {} of {} in {:.2}s, average speed {}/s).", archive, uploaded, vsize_h, duration.as_secs_f64(), speed);
let speed: HumanByte =
((size_dirty * 1_000_000) / (upload_stats.duration.as_micros() as usize)).into();
let size_dirty: HumanByte = size_dirty.into();
let size_compressed: HumanByte = upload_stats.size_compressed.into();
println!(
"{}: had to backup {} of {} (compressed {}) in {:.2}s",
archive,
size_dirty,
size,
size_compressed,
upload_stats.duration.as_secs_f64()
);
println!("{}: average backup speed: {}/s", archive, speed);
} else {
println!("Uploaded backup catalog ({})", vsize_h);
println!("Uploaded backup catalog ({})", size);
}
if size_reused > 0 && size > 1024*1024 {
let reused_percent = size_reused as f64 * 100. / size as f64;
let reused: HumanByte = size_reused.into();
println!("{}: backup was done incrementally, reused {} ({:.1}%)", archive, reused, reused_percent);
if upload_stats.size_reused > 0 && upload_stats.size > 1024 * 1024 {
let reused_percent = upload_stats.size_reused as f64 * 100. / upload_stats.size as f64;
let reused: HumanByte = upload_stats.size_reused.into();
println!(
"{}: backup was done incrementally, reused {} ({:.1}%)",
archive, reused, reused_percent
);
}
if self.verbose && chunk_count > 0 {
println!("{}: Reused {} from {} chunks.", archive, chunk_reused, chunk_count);
println!("{}: Average chunk size was {}.", archive, HumanByte::from(size/chunk_count));
println!("{}: Average time per request: {} microseconds.", archive, (duration.as_micros())/(chunk_count as u128));
if self.verbose && upload_stats.chunk_count > 0 {
println!(
"{}: Reused {} from {} chunks.",
archive, upload_stats.chunk_reused, upload_stats.chunk_count
);
println!(
"{}: Average chunk size was {}.",
archive,
HumanByte::from(upload_stats.size / upload_stats.chunk_count)
);
println!(
"{}: Average time per request: {} microseconds.",
archive,
(upload_stats.duration.as_micros()) / (upload_stats.chunk_count as u128)
);
}
let param = json!({
"wid": wid ,
"chunk-count": chunk_count,
"size": size,
"csum": proxmox::tools::digest_to_hex(&csum),
"chunk-count": upload_stats.chunk_count,
"size": upload_stats.size,
"csum": proxmox::tools::digest_to_hex(&upload_stats.csum),
});
let _value = self.h2.post(&close_path, Some(param)).await?;
Ok(BackupStats {
size: size as u64,
csum,
size: upload_stats.size as u64,
csum: upload_stats.csum,
})
}
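The six-value tuple that upload_chunk_info_stream used to return is now bundled into an UploadStats struct, which additionally carries the compressed size. Its definition is not part of the hunks shown here; judging from the field accesses above it is presumably something close to:

struct UploadStats {
    // Assumed shape only - reconstructed from the construction at the end of
    // upload_chunk_info_stream() and the reads in upload_stream().
    chunk_count: usize,
    chunk_reused: usize,
    size: usize,
    size_reused: usize,
    size_compressed: usize,
    duration: std::time::Duration,
    csum: [u8; 32],
}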
fn response_queue(verbose: bool) -> (
fn response_queue(
verbose: bool,
) -> (
mpsc::Sender<h2::client::ResponseFuture>,
oneshot::Receiver<Result<(), Error>>
oneshot::Receiver<Result<(), Error>>,
) {
let (verify_queue_tx, verify_queue_rx) = mpsc::channel(100);
let (verify_result_tx, verify_result_rx) = oneshot::channel();
@ -336,12 +421,16 @@ impl BackupWriter {
response
.map_err(Error::from)
.and_then(H2Client::h2api_response)
.map_ok(move |result| if verbose { println!("RESPONSE: {:?}", result) })
.map_ok(move |result| {
if verbose {
println!("RESPONSE: {:?}", result)
}
})
.map_err(|err| format_err!("pipelined request failed: {}", err))
})
.map(|result| {
let _ignore_closed_channel = verify_result_tx.send(result);
})
let _ignore_closed_channel = verify_result_tx.send(result);
}),
);
(verify_queue_tx, verify_result_rx)
@ -418,9 +507,8 @@ impl BackupWriter {
&self,
archive_name: &str,
manifest: &BackupManifest,
known_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
) -> Result<FixedIndexReader, Error> {
let mut tmpfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
@ -428,10 +516,13 @@ impl BackupWriter {
.open("/tmp")?;
let param = json!({ "archive-name": archive_name });
self.h2.download("previous", Some(param), &mut tmpfile).await?;
self.h2
.download("previous", Some(param), &mut tmpfile)
.await?;
let index = FixedIndexReader::new(tmpfile)
.map_err(|err| format_err!("unable to read fixed index '{}' - {}", archive_name, err))?;
let index = FixedIndexReader::new(tmpfile).map_err(|err| {
format_err!("unable to read fixed index '{}' - {}", archive_name, err)
})?;
// Note: do not use values stored in index (not trusted) - instead, computed them again
let (csum, size) = index.compute_csum();
manifest.verify_file(archive_name, &csum, size)?;
@ -443,7 +534,11 @@ impl BackupWriter {
}
if self.verbose {
println!("{}: known chunks list length is {}", archive_name, index.index_count());
println!(
"{}: known chunks list length is {}",
archive_name,
index.index_count()
);
}
Ok(index)
@ -453,9 +548,8 @@ impl BackupWriter {
&self,
archive_name: &str,
manifest: &BackupManifest,
known_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
) -> Result<DynamicIndexReader, Error> {
let mut tmpfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
@ -463,10 +557,13 @@ impl BackupWriter {
.open("/tmp")?;
let param = json!({ "archive-name": archive_name });
self.h2.download("previous", Some(param), &mut tmpfile).await?;
self.h2
.download("previous", Some(param), &mut tmpfile)
.await?;
let index = DynamicIndexReader::new(tmpfile)
.map_err(|err| format_err!("unable to read dynmamic index '{}' - {}", archive_name, err))?;
let index = DynamicIndexReader::new(tmpfile).map_err(|err| {
format_err!("unable to read dynmamic index '{}' - {}", archive_name, err)
})?;
// Note: do not use values stored in index (not trusted) - instead, computed them again
let (csum, size) = index.compute_csum();
manifest.verify_file(archive_name, &csum, size)?;
@ -478,7 +575,11 @@ impl BackupWriter {
}
if self.verbose {
println!("{}: known chunks list length is {}", archive_name, index.index_count());
println!(
"{}: known chunks list length is {}",
archive_name,
index.index_count()
);
}
Ok(index)
@ -487,29 +588,35 @@ impl BackupWriter {
/// Retrieve backup time of last backup
pub async fn previous_backup_time(&self) -> Result<Option<i64>, Error> {
let data = self.h2.get("previous_backup_time", None).await?;
serde_json::from_value(data)
.map_err(|err| format_err!("Failed to parse backup time value returned by server - {}", err))
serde_json::from_value(data).map_err(|err| {
format_err!(
"Failed to parse backup time value returned by server - {}",
err
)
})
}
/// Download backup manifest (index.json) of last backup
pub async fn download_previous_manifest(&self) -> Result<BackupManifest, Error> {
let mut raw_data = Vec::with_capacity(64 * 1024);
let param = json!({ "archive-name": MANIFEST_BLOB_NAME });
self.h2.download("previous", Some(param), &mut raw_data).await?;
self.h2
.download("previous", Some(param), &mut raw_data)
.await?;
let blob = DataBlob::load_from_reader(&mut &raw_data[..])?;
// no expected digest available
let data = blob.decode(self.crypt_config.as_ref().map(Arc::as_ref), None)?;
let manifest = BackupManifest::from_data(&data[..], self.crypt_config.as_ref().map(Arc::as_ref))?;
let manifest =
BackupManifest::from_data(&data[..], self.crypt_config.as_ref().map(Arc::as_ref))?;
Ok(manifest)
}
// We have no `self` here for `h2` and `verbose`, the only other arg "common" with 1 other
// funciton in the same path is `wid`, so those 3 could be in a struct, but there's no real use
// function in the same path is `wid`, so those 3 could be in a struct, but there's no real use
// since this is a private method.
#[allow(clippy::too_many_arguments)]
fn upload_chunk_info_stream(
@ -517,12 +624,11 @@ impl BackupWriter {
wid: u64,
stream: impl Stream<Item = Result<bytes::BytesMut, Error>>,
prefix: &str,
known_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
crypt_config: Option<Arc<CryptConfig>>,
compress: bool,
verbose: bool,
) -> impl Future<Output = Result<(usize, usize, usize, usize, std::time::Duration, [u8; 32]), Error>> {
) -> impl Future<Output = Result<UploadStats, Error>> {
let total_chunks = Arc::new(AtomicUsize::new(0));
let total_chunks2 = total_chunks.clone();
let known_chunk_count = Arc::new(AtomicUsize::new(0));
@ -530,6 +636,8 @@ impl BackupWriter {
let stream_len = Arc::new(AtomicUsize::new(0));
let stream_len2 = stream_len.clone();
let compressed_stream_len = Arc::new(AtomicU64::new(0));
let compressed_stream_len2 = compressed_stream_len.clone();
let reused_len = Arc::new(AtomicUsize::new(0));
let reused_len2 = reused_len.clone();
@ -547,14 +655,12 @@ impl BackupWriter {
stream
.and_then(move |data| {
let chunk_len = data.len();
total_chunks.fetch_add(1, Ordering::SeqCst);
let offset = stream_len.fetch_add(chunk_len, Ordering::SeqCst) as u64;
let mut chunk_builder = DataChunkBuilder::new(data.as_ref())
.compress(compress);
let mut chunk_builder = DataChunkBuilder::new(data.as_ref()).compress(compress);
if let Some(ref crypt_config) = crypt_config {
chunk_builder = chunk_builder.crypt_config(crypt_config);
@ -568,7 +674,9 @@ impl BackupWriter {
let chunk_end = offset + chunk_len as u64;
if !is_fixed_chunk_size { csum.update(&chunk_end.to_le_bytes()); }
if !is_fixed_chunk_size {
csum.update(&chunk_end.to_le_bytes());
}
csum.update(digest);
let chunk_is_known = known_chunks.contains(digest);
@ -577,16 +685,17 @@ impl BackupWriter {
reused_len.fetch_add(chunk_len, Ordering::SeqCst);
future::ok(MergedChunkInfo::Known(vec![(offset, *digest)]))
} else {
let compressed_stream_len2 = compressed_stream_len.clone();
known_chunks.insert(*digest);
future::ready(chunk_builder
.build()
.map(move |(chunk, digest)| MergedChunkInfo::New(ChunkInfo {
future::ready(chunk_builder.build().map(move |(chunk, digest)| {
compressed_stream_len2.fetch_add(chunk.raw_size(), Ordering::SeqCst);
MergedChunkInfo::New(ChunkInfo {
chunk,
digest,
chunk_len: chunk_len as u64,
offset,
}))
)
})
}))
}
})
.merge_known_chunks()
@ -614,20 +723,28 @@ impl BackupWriter {
});
let ct = "application/octet-stream";
let request = H2Client::request_builder("localhost", "POST", &upload_chunk_path, Some(param), Some(ct)).unwrap();
let request = H2Client::request_builder(
"localhost",
"POST",
&upload_chunk_path,
Some(param),
Some(ct),
)
.unwrap();
let upload_data = Some(bytes::Bytes::from(chunk_data));
let new_info = MergedChunkInfo::Known(vec![(offset, digest)]);
future::Either::Left(h2
.send_request(request, upload_data)
.and_then(move |response| async move {
future::Either::Left(h2.send_request(request, upload_data).and_then(
move |response| async move {
upload_queue
.send((new_info, Some(response)))
.await
.map_err(|err| format_err!("failed to send to upload queue: {}", err))
})
)
.map_err(|err| {
format_err!("failed to send to upload queue: {}", err)
})
},
))
} else {
future::Either::Right(async move {
upload_queue
@ -637,31 +754,37 @@ impl BackupWriter {
})
}
})
.then(move |result| async move {
upload_result.await?.and(result)
}.boxed())
.then(move |result| async move { upload_result.await?.and(result) }.boxed())
.and_then(move |_| {
let duration = start_time.elapsed();
let total_chunks = total_chunks2.load(Ordering::SeqCst);
let known_chunk_count = known_chunk_count2.load(Ordering::SeqCst);
let stream_len = stream_len2.load(Ordering::SeqCst);
let reused_len = reused_len2.load(Ordering::SeqCst);
let chunk_count = total_chunks2.load(Ordering::SeqCst);
let chunk_reused = known_chunk_count2.load(Ordering::SeqCst);
let size = stream_len2.load(Ordering::SeqCst);
let size_reused = reused_len2.load(Ordering::SeqCst);
let size_compressed = compressed_stream_len2.load(Ordering::SeqCst) as usize;
let mut guard = index_csum_2.lock().unwrap();
let csum = guard.take().unwrap().finish();
futures::future::ok((total_chunks, known_chunk_count, stream_len, reused_len, duration, csum))
futures::future::ok(UploadStats {
chunk_count,
chunk_reused,
size,
size_reused,
size_compressed,
duration,
csum,
})
})
}
/// Upload speed test - prints result to stderr
pub async fn upload_speedtest(&self, verbose: bool) -> Result<f64, Error> {
let mut data = vec![];
// generate pseudo random byte sequence
for i in 0..1024*1024 {
for i in 0..1024 * 1024 {
for j in 0..4 {
let byte = ((i >> (j<<3))&0xff) as u8;
let byte = ((i >> (j << 3)) & 0xff) as u8;
data.push(byte);
}
}
@ -680,9 +803,15 @@ impl BackupWriter {
break;
}
if verbose { eprintln!("send test data ({} bytes)", data.len()); }
let request = H2Client::request_builder("localhost", "POST", "speedtest", None, None).unwrap();
let request_future = self.h2.send_request(request, Some(bytes::Bytes::from(data.clone()))).await?;
if verbose {
eprintln!("send test data ({} bytes)", data.len());
}
let request =
H2Client::request_builder("localhost", "POST", "speedtest", None, None).unwrap();
let request_future = self
.h2
.send_request(request, Some(bytes::Bytes::from(data.clone())))
.await?;
upload_queue.send(request_future).await?;
}
@ -691,9 +820,16 @@ impl BackupWriter {
let _ = upload_result.await?;
eprintln!("Uploaded {} chunks in {} seconds.", repeat, start_time.elapsed().as_secs());
let speed = ((item_len*(repeat as usize)) as f64)/start_time.elapsed().as_secs_f64();
eprintln!("Time per request: {} microseconds.", (start_time.elapsed().as_micros())/(repeat as u128));
eprintln!(
"Uploaded {} chunks in {} seconds.",
repeat,
start_time.elapsed().as_secs()
);
let speed = ((item_len * (repeat as usize)) as f64) / start_time.elapsed().as_secs_f64();
eprintln!(
"Time per request: {} microseconds.",
(start_time.elapsed().as_micros()) / (repeat as u128)
);
Ok(speed)
}
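Side note on the "pseudo random" test payload generated at the top of upload_speedtest: the nested loop simply serializes the counter in little-endian byte order. An equivalent, more explicit formulation (illustration only, not part of the patch):

// Each iteration appends the four little-endian bytes of `i`.
let mut data = Vec::with_capacity(4 * 1024 * 1024);
for i in 0u32..(1024 * 1024) {
    data.extend_from_slice(&i.to_le_bytes());
}
assert_eq!(data.len(), 4 * 1024 * 1024);
assert_eq!(&data[4..8], &[1, 0, 0, 0]); // i = 1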


@ -13,6 +13,10 @@ use nix::fcntl::OFlag;
use nix::sys::stat::Mode;
use crate::backup::CatalogWriter;
use crate::tools::{
StdChannelWriter,
TokioWriterAdapter,
};
/// Stream implementation to encode and upload .pxar archives.
///
@ -45,10 +49,10 @@ impl PxarBackupStream {
let error = Arc::new(Mutex::new(None));
let error2 = Arc::clone(&error);
let handler = async move {
let writer = std::io::BufWriter::with_capacity(
let writer = TokioWriterAdapter::new(std::io::BufWriter::with_capacity(
buffer_size,
crate::tools::StdChannelWriter::new(tx),
);
StdChannelWriter::new(tx),
));
let verbose = options.verbose;


@ -12,13 +12,12 @@ use hyper::client::Client;
use hyper::Body;
use pin_project::pin_project;
use serde_json::Value;
use tokio::io::{ReadBuf, AsyncRead, AsyncWrite, AsyncWriteExt};
use tokio::io::{AsyncRead, AsyncWrite, AsyncWriteExt, ReadBuf};
use tokio::net::UnixStream;
use crate::tools;
use proxmox::api::error::HttpError;
/// Port below 1024 is privileged, this is intentional so only root (on host) can connect
pub const DEFAULT_VSOCK_PORT: u16 = 807;
#[derive(Clone)]
@ -86,7 +85,7 @@ impl tower_service::Service<Uri> for VsockConnector {
Ok(connection)
})
// unravel the thread JoinHandle to a useable future
// unravel the thread JoinHandle to a usable future
.map(|res| match res {
Ok(res) => res,
Err(err) => Err(format_err!("thread join error on vsock connect: {}", err)),
@ -138,43 +137,48 @@ pub struct VsockClient {
client: Client<VsockConnector>,
cid: i32,
port: u16,
auth: Option<String>,
}
impl VsockClient {
pub fn new(cid: i32, port: u16) -> Self {
pub fn new(cid: i32, port: u16, auth: Option<String>) -> Self {
let conn = VsockConnector {};
let client = Client::builder().build::<_, Body>(conn);
Self { client, cid, port }
Self {
client,
cid,
port,
auth,
}
}
pub async fn get(&self, path: &str, data: Option<Value>) -> Result<Value, Error> {
let req = Self::request_builder(self.cid, self.port, "GET", path, data)?;
let req = self.request_builder("GET", path, data)?;
self.api_request(req).await
}
pub async fn post(&mut self, path: &str, data: Option<Value>) -> Result<Value, Error> {
let req = Self::request_builder(self.cid, self.port, "POST", path, data)?;
pub async fn post(&self, path: &str, data: Option<Value>) -> Result<Value, Error> {
let req = self.request_builder("POST", path, data)?;
self.api_request(req).await
}
pub async fn download(
&mut self,
&self,
path: &str,
data: Option<Value>,
output: &mut (dyn AsyncWrite + Send + Unpin),
) -> Result<(), Error> {
let req = Self::request_builder(self.cid, self.port, "GET", path, data)?;
let req = self.request_builder("GET", path, data)?;
let client = self.client.clone();
let resp = client.request(req)
let resp = client
.request(req)
.await
.map_err(|_| format_err!("vsock download request timed out"))?;
let status = resp.status();
if !status.is_success() {
Self::api_response(resp)
.await
.map(|_| ())?
Self::api_response(resp).await.map(|_| ())?
} else {
resp.into_body()
.map_err(Error::from)
@ -212,47 +216,43 @@ impl VsockClient {
.await
}
pub fn request_builder(
cid: i32,
port: u16,
fn request_builder(
&self,
method: &str,
path: &str,
data: Option<Value>,
) -> Result<Request<Body>, Error> {
let path = path.trim_matches('/');
let url: Uri = format!("vsock://{}:{}/{}", cid, port, path).parse()?;
let url: Uri = format!("vsock://{}:{}/{}", self.cid, self.port, path).parse()?;
let make_builder = |content_type: &str, url: &Uri| {
let mut builder = Request::builder()
.method(method)
.uri(url)
.header(hyper::header::CONTENT_TYPE, content_type);
if let Some(auth) = &self.auth {
builder = builder.header(hyper::header::AUTHORIZATION, auth);
}
builder
};
if let Some(data) = data {
if method == "POST" {
let request = Request::builder()
.method(method)
.uri(url)
.header(hyper::header::CONTENT_TYPE, "application/json")
.body(Body::from(data.to_string()))?;
let builder = make_builder("application/json", &url);
let request = builder.body(Body::from(data.to_string()))?;
return Ok(request);
} else {
let query = tools::json_object_to_query(data)?;
let url: Uri = format!("vsock://{}:{}/{}?{}", cid, port, path, query).parse()?;
let request = Request::builder()
.method(method)
.uri(url)
.header(
hyper::header::CONTENT_TYPE,
"application/x-www-form-urlencoded",
)
.body(Body::empty())?;
let url: Uri =
format!("vsock://{}:{}/{}?{}", self.cid, self.port, path, query).parse()?;
let builder = make_builder("application/x-www-form-urlencoded", &url);
let request = builder.body(Body::empty())?;
return Ok(request);
}
}
let request = Request::builder()
.method(method)
.uri(url)
.header(
hyper::header::CONTENT_TYPE,
"application/x-www-form-urlencoded",
)
.body(Body::empty())?;
let builder = make_builder("application/x-www-form-urlencoded", &url);
let request = builder.body(Body::empty())?;
Ok(request)
}
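With the constructor change the caller decides once whether requests carry credentials; a usage sketch (the ticket value and API path are placeholders, not taken from this diff):

// Hypothetical call site; VsockClient and DEFAULT_VSOCK_PORT come from this module.
async fn query_status(cid: i32, ticket: String) -> Result<serde_json::Value, anyhow::Error> {
    // Passing Some(ticket) makes request_builder() attach it as the
    // Authorization header on every subsequent get/post/download call.
    let client = VsockClient::new(cid, DEFAULT_VSOCK_PORT, Some(ticket));
    client.get("api2/json/status", None).await
}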


@ -82,7 +82,7 @@ pub fn check_netmask(mask: u8, is_v6: bool) -> Result<(), Error> {
Ok(())
}
// parse ip address with otional cidr mask
// parse ip address with optional cidr mask
pub fn parse_address_or_cidr(cidr: &str) -> Result<(String, Option<u8>, bool), Error> {
lazy_static! {


@ -4,10 +4,10 @@
//! indexed by key fingerprint.
//!
//! We store the plain key (unencrypted), as well as a encrypted
//! version protected by passowrd (see struct `KeyConfig`)
//! version protected by password (see struct `KeyConfig`)
//!
//! Tape backups store the password protected version on tape, so that
//! it is possible to retore the key from tape if you know the
//! it is possible to restore the key from tape if you know the
//! password.
use std::collections::HashMap;


@ -590,7 +590,7 @@ impl TfaUserChallengeData {
}
/// Save the current data. Note that we do not replace the file here since we lock the file
/// itself, as it is in `/run`, and the typicall error case for this particular situation
/// itself, as it is in `/run`, and the typical error case for this particular situation
/// (machine loses power) simply prevents some login, but that'll probably fail anyway for
/// other reasons then...
///


@ -752,10 +752,7 @@ fn get_metadata(fd: RawFd, stat: &FileStat, flags: Flags, fs_magic: i64) -> Resu
flags: 0,
uid: stat.st_uid,
gid: stat.st_gid,
mtime: pxar::format::StatxTimestamp {
secs: stat.st_mtime,
nanos: stat.st_mtime_nsec as u32,
},
mtime: pxar::format::StatxTimestamp::new(stat.st_mtime, stat.st_mtime_nsec as u32),
},
..Default::default()
};
@ -768,7 +765,7 @@ fn get_metadata(fd: RawFd, stat: &FileStat, flags: Flags, fs_magic: i64) -> Resu
}
fn get_fcaps(meta: &mut Metadata, fd: RawFd, flags: Flags) -> Result<(), Error> {
if flags.contains(Flags::WITH_FCAPS) {
if !flags.contains(Flags::WITH_FCAPS) {
return Ok(());
}
@ -790,7 +787,7 @@ fn get_xattr_fcaps_acl(
proc_path: &Path,
flags: Flags,
) -> Result<(), Error> {
if flags.contains(Flags::WITH_XATTRS) {
if !flags.contains(Flags::WITH_XATTRS) {
return Ok(());
}
@ -879,7 +876,7 @@ fn get_quota_project_id(
return Ok(());
}
if flags.contains(Flags::WITH_QUOTA_PROJID) {
if !flags.contains(Flags::WITH_QUOTA_PROJID) {
return Ok(());
}
@ -914,7 +911,7 @@ fn get_quota_project_id(
}
fn get_acl(metadata: &mut Metadata, proc_path: &Path, flags: Flags) -> Result<(), Error> {
if flags.contains(Flags::WITH_ACL) {
if !flags.contains(Flags::WITH_ACL) {
return Ok(());
}
@ -1009,6 +1006,7 @@ fn process_acl(
metadata.acl.users = acl_user;
metadata.acl.groups = acl_group;
metadata.acl.group_obj = acl_group_obj;
}
acl::ACL_TYPE_DEFAULT => {
if user_obj_permissions != None
@ -1028,13 +1026,11 @@ fn process_acl(
metadata.acl.default_users = acl_user;
metadata.acl.default_groups = acl_group;
metadata.acl.default = acl_default;
}
_ => bail!("Unexpected ACL type encountered"),
}
metadata.acl.group_obj = acl_group_obj;
metadata.acl.default = acl_default;
Ok(())
}


@ -1,6 +1,6 @@
use std::ffi::{OsStr, OsString};
use std::ffi::OsString;
use std::os::unix::io::{AsRawFd, RawFd};
use std::path::PathBuf;
use std::path::{Path, PathBuf};
use anyhow::{bail, format_err, Error};
use nix::dir::Dir;
@ -78,10 +78,6 @@ impl PxarDir {
pub fn metadata(&self) -> &Metadata {
&self.metadata
}
pub fn file_name(&self) -> &OsStr {
&self.file_name
}
}
pub struct PxarDirStack {
@ -159,4 +155,8 @@ impl PxarDirStack {
.try_as_borrowed_fd()
.ok_or_else(|| format_err!("lost track of directory file descriptors"))
}
pub fn path(&self) -> &Path {
&self.path
}
}


@ -285,6 +285,8 @@ impl Extractor {
/// When done with a directory we can apply its metadata if it has been created.
pub fn leave_directory(&mut self) -> Result<(), Error> {
let path_info = self.dir_stack.path().to_owned();
let dir = self
.dir_stack
.pop()
@ -296,7 +298,7 @@ impl Extractor {
self.feature_flags,
dir.metadata(),
fd.as_raw_fd(),
&CString::new(dir.file_name().as_bytes())?,
&path_info,
&mut self.on_error,
)
.map_err(|err| format_err!("failed to apply directory metadata: {}", err))?;
@ -329,6 +331,7 @@ impl Extractor {
metadata,
parent,
file_name,
self.dir_stack.path(),
&mut self.on_error,
)
}
@ -382,6 +385,7 @@ impl Extractor {
metadata,
parent,
file_name,
self.dir_stack.path(),
&mut self.on_error,
)
}
@ -437,7 +441,7 @@ impl Extractor {
self.feature_flags,
metadata,
file.as_raw_fd(),
file_name,
self.dir_stack.path(),
&mut self.on_error,
)
}
@ -494,7 +498,7 @@ impl Extractor {
self.feature_flags,
metadata,
file.as_raw_fd(),
file_name,
self.dir_stack.path(),
&mut self.on_error,
)
}


@ -1,5 +1,6 @@
use std::ffi::{CStr, CString};
use std::os::unix::io::{AsRawFd, FromRawFd, RawFd};
use std::path::Path;
use anyhow::{bail, format_err, Error};
use nix::errno::Errno;
@ -62,6 +63,7 @@ pub fn apply_at(
metadata: &Metadata,
parent: RawFd,
file_name: &CStr,
path_info: &Path,
on_error: &mut (dyn FnMut(Error) -> Result<(), Error> + Send),
) -> Result<(), Error> {
let fd = proxmox::tools::fd::Fd::openat(
@ -71,7 +73,7 @@ pub fn apply_at(
Mode::empty(),
)?;
apply(flags, metadata, fd.as_raw_fd(), file_name, on_error)
apply(flags, metadata, fd.as_raw_fd(), path_info, on_error)
}
pub fn apply_initial_flags(
@ -94,7 +96,7 @@ pub fn apply(
flags: Flags,
metadata: &Metadata,
fd: RawFd,
file_name: &CStr,
path_info: &Path,
on_error: &mut (dyn FnMut(Error) -> Result<(), Error> + Send),
) -> Result<(), Error> {
let c_proc_path = CString::new(format!("/proc/self/fd/{}", fd)).unwrap();
@ -116,7 +118,7 @@ pub fn apply(
apply_xattrs(flags, c_proc_path.as_ptr(), metadata, &mut skip_xattrs)
.or_else(&mut *on_error)?;
add_fcaps(flags, c_proc_path.as_ptr(), metadata, &mut skip_xattrs).or_else(&mut *on_error)?;
apply_acls(flags, &c_proc_path, metadata)
apply_acls(flags, &c_proc_path, metadata, path_info)
.map_err(|err| format_err!("failed to apply acls: {}", err))
.or_else(&mut *on_error)?;
apply_quota_project_id(flags, fd, metadata).or_else(&mut *on_error)?;
@ -147,7 +149,7 @@ pub fn apply(
Err(err) => {
on_error(format_err!(
"failed to restore mtime attribute on {:?}: {}",
file_name,
path_info,
err
))?;
}
@ -227,7 +229,12 @@ fn apply_xattrs(
Ok(())
}
fn apply_acls(flags: Flags, c_proc_path: &CStr, metadata: &Metadata) -> Result<(), Error> {
fn apply_acls(
flags: Flags,
c_proc_path: &CStr,
metadata: &Metadata,
path_info: &Path,
) -> Result<(), Error> {
if !flags.contains(Flags::WITH_ACL) || metadata.acl.is_empty() {
return Ok(());
}
@ -257,11 +264,17 @@ fn apply_acls(flags: Flags, c_proc_path: &CStr, metadata: &Metadata) -> Result<(
acl.add_entry_full(acl::ACL_GROUP_OBJ, None, group_obj.permissions.0)?;
}
None => {
acl.add_entry_full(
acl::ACL_GROUP_OBJ,
None,
acl::mode_group_to_acl_permissions(metadata.stat.mode),
)?;
let mode = acl::mode_group_to_acl_permissions(metadata.stat.mode);
acl.add_entry_full(acl::ACL_GROUP_OBJ, None, mode)?;
if !metadata.acl.users.is_empty() || !metadata.acl.groups.is_empty() {
eprintln!(
"Warning: {:?}: Missing GROUP_OBJ entry in ACL, resetting to value of MASK",
path_info,
);
acl.add_entry_full(acl::ACL_MASK, None, mode)?;
}
}
}


@ -89,3 +89,5 @@ mod report;
pub use report::*;
pub mod ticket;
pub mod auth;

src/server/auth.rs (new file, 101 lines)

@ -0,0 +1,101 @@
//! Provides authentication primitives for the HTTP server
use anyhow::{bail, format_err, Error};
use crate::tools::ticket::Ticket;
use crate::auth_helpers::*;
use crate::tools;
use crate::config::cached_user_info::CachedUserInfo;
use crate::api2::types::{Authid, Userid};
use hyper::header;
use percent_encoding::percent_decode_str;
pub struct UserAuthData {
ticket: String,
csrf_token: Option<String>,
}
pub enum AuthData {
User(UserAuthData),
ApiToken(String),
}
pub fn extract_auth_data(headers: &http::HeaderMap) -> Option<AuthData> {
if let Some(raw_cookie) = headers.get(header::COOKIE) {
if let Ok(cookie) = raw_cookie.to_str() {
if let Some(ticket) = tools::extract_cookie(cookie, "PBSAuthCookie") {
let csrf_token = match headers.get("CSRFPreventionToken").map(|v| v.to_str()) {
Some(Ok(v)) => Some(v.to_owned()),
_ => None,
};
return Some(AuthData::User(UserAuthData {
ticket,
csrf_token,
}));
}
}
}
match headers.get(header::AUTHORIZATION).map(|v| v.to_str()) {
Some(Ok(v)) => {
if v.starts_with("PBSAPIToken ") || v.starts_with("PBSAPIToken=") {
Some(AuthData::ApiToken(v["PBSAPIToken ".len()..].to_owned()))
} else {
None
}
},
_ => None,
}
}
pub fn check_auth(
method: &hyper::Method,
auth_data: &AuthData,
user_info: &CachedUserInfo,
) -> Result<Authid, Error> {
match auth_data {
AuthData::User(user_auth_data) => {
let ticket = user_auth_data.ticket.clone();
let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
let userid: Userid = Ticket::<super::ticket::ApiTicket>::parse(&ticket)?
.verify_with_time_frame(public_auth_key(), "PBS", None, -300..ticket_lifetime)?
.require_full()?;
let auth_id = Authid::from(userid.clone());
if !user_info.is_active_auth_id(&auth_id) {
bail!("user account disabled or expired.");
}
if method != hyper::Method::GET {
if let Some(csrf_token) = &user_auth_data.csrf_token {
verify_csrf_prevention_token(csrf_secret(), &userid, &csrf_token, -300, ticket_lifetime)?;
} else {
bail!("missing CSRF prevention token");
}
}
Ok(auth_id)
},
AuthData::ApiToken(api_token) => {
let mut parts = api_token.splitn(2, ':');
let tokenid = parts.next()
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokenid: Authid = tokenid.parse()?;
if !user_info.is_active_auth_id(&tokenid) {
bail!("user account or token disabled or expired.");
}
let tokensecret = parts.next()
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokensecret = percent_decode_str(tokensecret)
.decode_utf8()
.map_err(|_| format_err!("failed to decode API token header"))?;
crate::config::token_shadow::verify_secret(&tokenid, &tokensecret)?;
Ok(tokenid)
}
}
}
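A sketch of how the two helpers are meant to be combined; this mirrors the call sites in server/rest.rs further down, the wrapper function itself is only illustrative:

fn authenticate(
    method: &hyper::Method,
    headers: &http::HeaderMap,
    user_info: &CachedUserInfo,
) -> Result<Authid, Error> {
    match extract_auth_data(headers) {
        Some(auth_data) => check_auth(method, &auth_data, user_info),
        None => bail!("no authentication credentials provided"),
    }
}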


@ -1,10 +1,11 @@
use anyhow::Error;
use serde_json::json;
use handlebars::{Handlebars, Helper, Context, RenderError, RenderContext, Output, HelperResult};
use handlebars::{Handlebars, Helper, Context, RenderError, RenderContext, Output, HelperResult, TemplateError};
use proxmox::tools::email::sendmail;
use proxmox::api::schema::parse_property_string;
use proxmox::try_block;
use crate::{
config::datastore::DataStoreConfig,
@ -43,7 +44,7 @@ Deduplication Factor: {{deduplication-factor}}
Garbage collection successful.
Please visit the web interface for futher details:
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#DataStore-{{datastore}}>
@ -57,7 +58,7 @@ Datastore: {{datastore}}
Garbage collection failed: {{error}}
Please visit the web interface for futher details:
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#pbsServerAdministration:tasks>
@ -71,7 +72,7 @@ Datastore: {{job.store}}
Verification successful.
Please visit the web interface for futher details:
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#DataStore-{{job.store}}>
@ -89,7 +90,7 @@ Verification failed on these snapshots/groups:
{{/each}}
Please visit the web interface for futher details:
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#pbsServerAdministration:tasks>
@ -105,7 +106,7 @@ Remote Store: {{job.remote-store}}
Synchronization successful.
Please visit the web interface for futher details:
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#DataStore-{{job.store}}>
@ -121,7 +122,7 @@ Remote Store: {{job.remote-store}}
Synchronization failed: {{error}}
Please visit the web interface for futher details:
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#pbsServerAdministration:tasks>
@ -148,11 +149,19 @@ Datastore: {{job.store}}
Tape Pool: {{job.pool}}
Tape Drive: {{job.drive}}
{{#if snapshot-list ~}}
Snapshots included:
{{#each snapshot-list~}}
{{this}}
{{/each~}}
{{/if}}
Duration: {{duration}}
Tape Backup successful.
Please visit the web interface for futher details:
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#DataStore-{{job.store}}>
@ -171,7 +180,7 @@ Tape Drive: {{job.drive}}
Tape Backup failed: {{error}}
Please visit the web interface for futher details:
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#pbsServerAdministration:tasks>
@ -181,30 +190,48 @@ lazy_static::lazy_static!{
static ref HANDLEBARS: Handlebars<'static> = {
let mut hb = Handlebars::new();
let result: Result<(), TemplateError> = try_block!({
hb.set_strict_mode(true);
hb.set_strict_mode(true);
hb.register_escape_fn(handlebars::no_escape);
hb.register_helper("human-bytes", Box::new(handlebars_humam_bytes_helper));
hb.register_helper("relative-percentage", Box::new(handlebars_relative_percentage_helper));
hb.register_helper("human-bytes", Box::new(handlebars_humam_bytes_helper));
hb.register_helper("relative-percentage", Box::new(handlebars_relative_percentage_helper));
hb.register_template_string("gc_ok_template", GC_OK_TEMPLATE).unwrap();
hb.register_template_string("gc_err_template", GC_ERR_TEMPLATE).unwrap();
hb.register_template_string("gc_ok_template", GC_OK_TEMPLATE)?;
hb.register_template_string("gc_err_template", GC_ERR_TEMPLATE)?;
hb.register_template_string("verify_ok_template", VERIFY_OK_TEMPLATE).unwrap();
hb.register_template_string("verify_err_template", VERIFY_ERR_TEMPLATE).unwrap();
hb.register_template_string("verify_ok_template", VERIFY_OK_TEMPLATE)?;
hb.register_template_string("verify_err_template", VERIFY_ERR_TEMPLATE)?;
hb.register_template_string("sync_ok_template", SYNC_OK_TEMPLATE).unwrap();
hb.register_template_string("sync_err_template", SYNC_ERR_TEMPLATE).unwrap();
hb.register_template_string("sync_ok_template", SYNC_OK_TEMPLATE)?;
hb.register_template_string("sync_err_template", SYNC_ERR_TEMPLATE)?;
hb.register_template_string("tape_backup_ok_template", TAPE_BACKUP_OK_TEMPLATE).unwrap();
hb.register_template_string("tape_backup_err_template", TAPE_BACKUP_ERR_TEMPLATE).unwrap();
hb.register_template_string("tape_backup_ok_template", TAPE_BACKUP_OK_TEMPLATE)?;
hb.register_template_string("tape_backup_err_template", TAPE_BACKUP_ERR_TEMPLATE)?;
hb.register_template_string("package_update_template", PACKAGE_UPDATES_TEMPLATE).unwrap();
hb.register_template_string("package_update_template", PACKAGE_UPDATES_TEMPLATE)?;
Ok(())
});
if let Err(err) = result {
eprintln!("error during template registration: {}", err);
}
hb
};
}
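The reason for wrapping the registration in proxmox's try_block! is that the `?` operator needs a Result-returning context, which the lazy_static initializer block does not provide on its own; the macro is essentially an immediately-invoked closure, roughly:

// Rough equivalent of the try_block! above (illustration, not the exact macro expansion):
let result: Result<(), TemplateError> = (|| {
    hb.set_strict_mode(true);
    hb.register_template_string("gc_ok_template", GC_OK_TEMPLATE)?;
    // ... remaining helpers and templates ...
    Ok(())
})();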
/// Summary of a successful Tape Job
#[derive(Default)]
pub struct TapeBackupJobSummary {
/// The list of snaphots backed up
pub snapshot_list: Vec<String>,
/// The total time of the backup job
pub duration: std::time::Duration,
}
fn send_job_status_mail(
email: &str,
subject: &str,
@ -402,14 +429,18 @@ pub fn send_tape_backup_status(
id: Option<&str>,
job: &TapeBackupJobSetup,
result: &Result<(), Error>,
summary: TapeBackupJobSummary,
) -> Result<(), Error> {
let (fqdn, port) = get_server_url();
let duration: crate::tools::systemd::time::TimeSpan = summary.duration.into();
let mut data = json!({
"job": job,
"fqdn": fqdn,
"port": port,
"id": id,
"snapshot-list": summary.snapshot_list,
"duration": duration.to_string(),
});
let text = match result {
@ -448,6 +479,30 @@ pub fn send_tape_backup_status(
Ok(())
}
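For orientation, a sketch of how a tape backup worker could fill in the new summary before sending the notification (values are made up, and the leading arguments of send_tape_backup_status are outside the hunk above):

let start = std::time::Instant::now();
// ... run the backup, collecting the names of the snapshots written to tape ...
let summary = TapeBackupJobSummary {
    snapshot_list: vec!["vm/100/2021-04-02T10:00:00Z".to_string()],
    duration: start.elapsed(),
};
// send_tape_backup_status() converts the duration to a TimeSpan string and exposes
// both fields to the tape_backup_ok_template shown further up.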
/// Send email to a person to request a manual media change
pub fn send_load_media_email(
drive: &str,
label_text: &str,
to: &str,
reason: Option<String>,
) -> Result<(), Error> {
let subject = format!("Load Media '{}' request for drive '{}'", label_text, drive);
let mut text = String::new();
if let Some(reason) = reason {
text.push_str(&format!("The drive has the wrong or no tape inserted. Error:\n{}\n\n", reason));
}
text.push_str("Please insert the requested media into the backup drive.\n\n");
text.push_str(&format!("Drive: {}\n", drive));
text.push_str(&format!("Media: {}\n", label_text));
send_job_status_mail(to, &subject, &text)
}
fn get_server_url() -> (String, usize) {
// user will surely request that they can change this
@ -576,3 +631,23 @@ fn handlebars_relative_percentage_helper(
}
Ok(())
}
#[test]
fn test_template_register() {
HANDLEBARS.get_helper("human-bytes").unwrap();
HANDLEBARS.get_helper("relative-percentage").unwrap();
assert!(HANDLEBARS.has_template("gc_ok_template"));
assert!(HANDLEBARS.has_template("gc_err_template"));
assert!(HANDLEBARS.has_template("verify_ok_template"));
assert!(HANDLEBARS.has_template("verify_err_template"));
assert!(HANDLEBARS.has_template("sync_ok_template"));
assert!(HANDLEBARS.has_template("sync_err_template"));
assert!(HANDLEBARS.has_template("tape_backup_ok_template"));
assert!(HANDLEBARS.has_template("tape_backup_err_template"));
assert!(HANDLEBARS.has_template("package_update_template"));
}


@ -9,48 +9,41 @@ use std::task::{Context, Poll};
use anyhow::{bail, format_err, Error};
use futures::future::{self, FutureExt, TryFutureExt};
use futures::stream::TryStreamExt;
use hyper::header::{self, HeaderMap};
use hyper::body::HttpBody;
use hyper::header::{self, HeaderMap};
use hyper::http::request::Parts;
use hyper::{Body, Request, Response, StatusCode};
use lazy_static::lazy_static;
use regex::Regex;
use serde_json::{json, Value};
use tokio::fs::File;
use tokio::time::Instant;
use percent_encoding::percent_decode_str;
use url::form_urlencoded;
use regex::Regex;
use proxmox::http_err;
use proxmox::api::{
ApiHandler,
ApiMethod,
HttpError,
Permission,
RpcEnvironment,
RpcEnvironmentType,
check_api_permission,
};
use proxmox::api::schema::{
ObjectSchemaType,
parse_parameter_strings, parse_simple_value, verify_json_object, ObjectSchemaType,
ParameterSchema,
parse_parameter_strings,
parse_simple_value,
verify_json_object,
};
use proxmox::api::{
check_api_permission, ApiHandler, ApiMethod, HttpError, Permission, RpcEnvironment,
RpcEnvironmentType,
};
use proxmox::http_err;
use super::environment::RestEnvironment;
use super::formatter::*;
use super::ApiConfig;
use super::auth::{check_auth, extract_auth_data};
use crate::auth_helpers::*;
use crate::api2::types::{Authid, Userid};
use crate::auth_helpers::*;
use crate::config::cached_user_info::CachedUserInfo;
use crate::tools;
use crate::tools::FileLogger;
use crate::tools::ticket::Ticket;
use crate::config::cached_user_info::CachedUserInfo;
extern "C" { fn tzset(); }
extern "C" {
fn tzset();
}
pub struct RestServer {
pub api_config: Arc<ApiConfig>,
@ -59,13 +52,16 @@ pub struct RestServer {
const MAX_URI_QUERY_LENGTH: usize = 3072;
impl RestServer {
pub fn new(api_config: ApiConfig) -> Self {
Self { api_config: Arc::new(api_config) }
Self {
api_config: Arc::new(api_config),
}
}
}
impl tower_service::Service<&Pin<Box<tokio_openssl::SslStream<tokio::net::TcpStream>>>> for RestServer {
impl tower_service::Service<&Pin<Box<tokio_openssl::SslStream<tokio::net::TcpStream>>>>
for RestServer
{
type Response = ApiService;
type Error = Error;
type Future = Pin<Box<dyn Future<Output = Result<ApiService, Error>> + Send>>;
@ -74,14 +70,17 @@ impl tower_service::Service<&Pin<Box<tokio_openssl::SslStream<tokio::net::TcpStr
Poll::Ready(Ok(()))
}
fn call(&mut self, ctx: &Pin<Box<tokio_openssl::SslStream<tokio::net::TcpStream>>>) -> Self::Future {
fn call(
&mut self,
ctx: &Pin<Box<tokio_openssl::SslStream<tokio::net::TcpStream>>>,
) -> Self::Future {
match ctx.get_ref().peer_addr() {
Err(err) => {
future::err(format_err!("unable to get peer address - {}", err)).boxed()
}
Ok(peer) => {
future::ok(ApiService { peer, api_config: self.api_config.clone() }).boxed()
}
Err(err) => future::err(format_err!("unable to get peer address - {}", err)).boxed(),
Ok(peer) => future::ok(ApiService {
peer,
api_config: self.api_config.clone(),
})
.boxed(),
}
}
}
@ -97,12 +96,12 @@ impl tower_service::Service<&tokio::net::TcpStream> for RestServer {
fn call(&mut self, ctx: &tokio::net::TcpStream) -> Self::Future {
match ctx.peer_addr() {
Err(err) => {
future::err(format_err!("unable to get peer address - {}", err)).boxed()
}
Ok(peer) => {
future::ok(ApiService { peer, api_config: self.api_config.clone() }).boxed()
}
Err(err) => future::err(format_err!("unable to get peer address - {}", err)).boxed(),
Ok(peer) => future::ok(ApiService {
peer,
api_config: self.api_config.clone(),
})
.boxed(),
}
}
}
@ -122,8 +121,9 @@ impl tower_service::Service<&tokio::net::UnixStream> for RestServer {
let fake_peer = "0.0.0.0:807".parse().unwrap();
future::ok(ApiService {
peer: fake_peer,
api_config: self.api_config.clone()
}).boxed()
api_config: self.api_config.clone(),
})
.boxed()
}
}
@ -140,8 +140,9 @@ fn log_response(
resp: &Response<Body>,
user_agent: Option<String>,
) {
if resp.extensions().get::<NoLogExtension>().is_some() { return; };
if resp.extensions().get::<NoLogExtension>().is_some() {
return;
};
// we also log URL-to-long requests, so avoid message bigger than PIPE_BUF (4k on Linux)
// to profit from atomicty guarantees for O_APPEND opened logfiles
@ -157,7 +158,15 @@ fn log_response(
message = &data.0;
}
log::error!("{} {}: {} {}: [client {}] {}", method.as_str(), path, status.as_str(), reason, peer, message);
log::error!(
"{} {}: {} {}: [client {}] {}",
method.as_str(),
path,
status.as_str(),
reason,
peer,
message
);
}
if let Some(logfile) = logfile {
let auth_id = match resp.extensions().get::<Authid>() {
@ -169,20 +178,17 @@ fn log_response(
let datetime = proxmox::tools::time::strftime_local("%d/%m/%Y:%H:%M:%S %z", now)
.unwrap_or_else(|_| "-".to_string());
logfile
.lock()
.unwrap()
.log(format!(
"{} - {} [{}] \"{} {}\" {} {} {}",
peer.ip(),
auth_id,
datetime,
method.as_str(),
path,
status.as_str(),
resp.body().size_hint().lower(),
user_agent.unwrap_or_else(|| "-".to_string()),
));
logfile.lock().unwrap().log(format!(
"{} - {} [{}] \"{} {}\" {} {} {}",
peer.ip(),
auth_id,
datetime,
method.as_str(),
path,
status.as_str(),
resp.body().size_hint().lower(),
user_agent.unwrap_or_else(|| "-".to_string()),
));
}
}
pub fn auth_logger() -> Result<FileLogger, Error> {
@ -208,11 +214,13 @@ fn get_proxied_peer(headers: &HeaderMap) -> Option<std::net::SocketAddr> {
fn get_user_agent(headers: &HeaderMap) -> Option<String> {
let agent = headers.get(header::USER_AGENT)?.to_str();
agent.map(|s| {
let mut s = s.to_owned();
s.truncate(128);
s
}).ok()
agent
.map(|s| {
let mut s = s.to_owned();
s.truncate(128);
s
})
.ok()
}
impl tower_service::Service<Request<Body>> for ApiService {
@ -260,7 +268,6 @@ fn parse_query_parameters<S: 'static + BuildHasher + Send>(
parts: &Parts,
uri_param: &HashMap<String, String, S>,
) -> Result<Value, Error> {
let mut param_list: Vec<(String, String)> = vec![];
if !form.is_empty() {
@ -271,7 +278,9 @@ fn parse_query_parameters<S: 'static + BuildHasher + Send>(
if let Some(query_str) = parts.uri.query() {
for (k, v) in form_urlencoded::parse(query_str.as_bytes()).into_owned() {
if k == "_dc" { continue; } // skip extjs "disable cache" parameter
if k == "_dc" {
continue;
} // skip extjs "disable cache" parameter
param_list.push((k, v));
}
}
@ -291,7 +300,6 @@ async fn get_request_parameters<S: 'static + BuildHasher + Send>(
req_body: Body,
uri_param: HashMap<String, String, S>,
) -> Result<Value, Error> {
let mut is_json = false;
if let Some(value) = parts.headers.get(header::CONTENT_TYPE) {
@ -306,19 +314,22 @@ async fn get_request_parameters<S: 'static + BuildHasher + Send>(
}
}
let body = req_body
.map_err(|err| http_err!(BAD_REQUEST, "Promlems reading request body: {}", err))
.try_fold(Vec::new(), |mut acc, chunk| async move {
if acc.len() + chunk.len() < 64*1024 { //fimxe: max request body size?
acc.extend_from_slice(&*chunk);
Ok(acc)
} else {
Err(http_err!(BAD_REQUEST, "Request body too large"))
}
}).await?;
let body = TryStreamExt::map_err(req_body, |err| {
http_err!(BAD_REQUEST, "Problems reading request body: {}", err)
})
.try_fold(Vec::new(), |mut acc, chunk| async move {
// FIXME: max request body size?
if acc.len() + chunk.len() < 64 * 1024 {
acc.extend_from_slice(&*chunk);
Ok(acc)
} else {
Err(http_err!(BAD_REQUEST, "Request body too large"))
}
})
.await?;
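The size guard in the fold above can be exercised in isolation; a self-contained sketch using an in-memory stream instead of a hyper Body (limit and chunk sizes are arbitrary):

use anyhow::format_err;
use futures::stream::{self, TryStreamExt};

async fn collect_capped() -> Result<Vec<u8>, anyhow::Error> {
    // 8 chunks of 16 KiB exceed the 64 KiB cap, so this hits the same
    // "Request body too large" error path as the handler above.
    let chunks = stream::iter((0..8).map(|_| Ok::<_, anyhow::Error>(vec![0u8; 16 * 1024])));
    chunks
        .try_fold(Vec::new(), |mut acc, chunk| async move {
            if acc.len() + chunk.len() < 64 * 1024 {
                acc.extend_from_slice(&chunk);
                Ok(acc)
            } else {
                Err(format_err!("Request body too large"))
            }
        })
        .await
}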
let utf8_data = std::str::from_utf8(&body)
.map_err(|err| format_err!("Request body not uft8: {}", err))?;
let utf8_data =
std::str::from_utf8(&body).map_err(|err| format_err!("Request body not uft8: {}", err))?;
if is_json {
let mut params: Value = serde_json::from_str(utf8_data)?;
@ -342,7 +353,6 @@ async fn proxy_protected_request(
req_body: Body,
peer: &std::net::SocketAddr,
) -> Result<Response<Body>, Error> {
let mut uri_parts = parts.uri.clone().into_parts();
uri_parts.scheme = Some(http::uri::Scheme::HTTP);
@ -352,9 +362,10 @@ async fn proxy_protected_request(
parts.uri = new_uri;
let mut request = Request::from_parts(parts, req_body);
request
.headers_mut()
.insert(header::FORWARDED, format!("for=\"{}\";", peer).parse().unwrap());
request.headers_mut().insert(
header::FORWARDED,
format!("for=\"{}\";", peer).parse().unwrap(),
);
let reload_timezone = info.reload_timezone;
@ -367,7 +378,11 @@ async fn proxy_protected_request(
})
.await?;
if reload_timezone { unsafe { tzset(); } }
if reload_timezone {
unsafe {
tzset();
}
}
Ok(resp)
}
@ -380,7 +395,6 @@ pub async fn handle_api_request<Env: RpcEnvironment, S: 'static + BuildHasher +
req_body: Body,
uri_param: HashMap<String, String, S>,
) -> Result<Response<Body>, Error> {
let delay_unauth_time = std::time::Instant::now() + std::time::Duration::from_millis(3000);
let result = match info.handler {
@ -389,12 +403,13 @@ pub async fn handle_api_request<Env: RpcEnvironment, S: 'static + BuildHasher +
(handler)(parts, req_body, params, info, Box::new(rpcenv)).await
}
ApiHandler::Sync(handler) => {
let params = get_request_parameters(info.parameters, parts, req_body, uri_param).await?;
(handler)(params, info, &mut rpcenv)
.map(|data| (formatter.format_data)(data, &rpcenv))
let params =
get_request_parameters(info.parameters, parts, req_body, uri_param).await?;
(handler)(params, info, &mut rpcenv).map(|data| (formatter.format_data)(data, &rpcenv))
}
ApiHandler::Async(handler) => {
let params = get_request_parameters(info.parameters, parts, req_body, uri_param).await?;
let params =
get_request_parameters(info.parameters, parts, req_body, uri_param).await?;
(handler)(params, info, &mut rpcenv)
.await
.map(|data| (formatter.format_data)(data, &rpcenv))
@ -413,7 +428,11 @@ pub async fn handle_api_request<Env: RpcEnvironment, S: 'static + BuildHasher +
}
};
if info.reload_timezone { unsafe { tzset(); } }
if info.reload_timezone {
unsafe {
tzset();
}
}
Ok(resp)
}
@ -424,8 +443,7 @@ fn get_index(
language: Option<String>,
api: &Arc<ApiConfig>,
parts: Parts,
) -> Response<Body> {
) -> Response<Body> {
let nodename = proxmox::tools::nodename();
let user = userid.as_ref().map(|u| u.as_str()).unwrap_or("");
@ -462,9 +480,7 @@ fn get_index(
let (ct, index) = match api.render_template(template_file, &data) {
Ok(index) => ("text/html", index),
Err(err) => {
("text/plain", format!("Error rendering template: {}", err))
}
Err(err) => ("text/plain", format!("Error rendering template: {}", err)),
};
let mut resp = Response::builder()
@ -481,7 +497,6 @@ fn get_index(
}
fn extension_to_content_type(filename: &Path) -> (&'static str, bool) {
if let Some(ext) = filename.extension().and_then(|osstr| osstr.to_str()) {
return match ext {
"css" => ("text/css", false),
@ -510,7 +525,6 @@ fn extension_to_content_type(filename: &Path) -> (&'static str, bool) {
}
async fn simple_static_file_download(filename: PathBuf) -> Result<Response<Body>, Error> {
let (content_type, _nocomp) = extension_to_content_type(&filename);
use tokio::io::AsyncReadExt;
@ -527,7 +541,8 @@ async fn simple_static_file_download(filename: PathBuf) -> Result<Response<Body>
let mut response = Response::new(data.into());
response.headers_mut().insert(
header::CONTENT_TYPE,
header::HeaderValue::from_static(content_type));
header::HeaderValue::from_static(content_type),
);
Ok(response)
}
@ -542,22 +557,20 @@ async fn chuncked_static_file_download(filename: PathBuf) -> Result<Response<Bod
.map_ok(|bytes| bytes.freeze());
let body = Body::wrap_stream(payload);
// fixme: set other headers ?
// FIXME: set other headers ?
Ok(Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, content_type)
.body(body)
.unwrap()
)
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, content_type)
.body(body)
.unwrap())
}
async fn handle_static_file_download(filename: PathBuf) -> Result<Response<Body>, Error> {
async fn handle_static_file_download(filename: PathBuf) -> Result<Response<Body>, Error> {
let metadata = tokio::fs::metadata(filename.clone())
.map_err(|err| http_err!(BAD_REQUEST, "File access problems: {}", err))
.await?;
if metadata.len() < 1024*32 {
if metadata.len() < 1024 * 32 {
simple_static_file_download(filename).await
} else {
chuncked_static_file_download(filename).await
@ -574,102 +587,11 @@ fn extract_lang_header(headers: &http::HeaderMap) -> Option<String> {
None
}
struct UserAuthData{
ticket: String,
csrf_token: Option<String>,
}
enum AuthData {
User(UserAuthData),
ApiToken(String),
}
fn extract_auth_data(headers: &http::HeaderMap) -> Option<AuthData> {
if let Some(raw_cookie) = headers.get(header::COOKIE) {
if let Ok(cookie) = raw_cookie.to_str() {
if let Some(ticket) = tools::extract_cookie(cookie, "PBSAuthCookie") {
let csrf_token = match headers.get("CSRFPreventionToken").map(|v| v.to_str()) {
Some(Ok(v)) => Some(v.to_owned()),
_ => None,
};
return Some(AuthData::User(UserAuthData {
ticket,
csrf_token,
}));
}
}
}
match headers.get(header::AUTHORIZATION).map(|v| v.to_str()) {
Some(Ok(v)) => {
if v.starts_with("PBSAPIToken ") || v.starts_with("PBSAPIToken=") {
Some(AuthData::ApiToken(v["PBSAPIToken ".len()..].to_owned()))
} else {
None
}
},
_ => None,
}
}
fn check_auth(
method: &hyper::Method,
auth_data: &AuthData,
user_info: &CachedUserInfo,
) -> Result<Authid, Error> {
match auth_data {
AuthData::User(user_auth_data) => {
let ticket = user_auth_data.ticket.clone();
let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
let userid: Userid = Ticket::<super::ticket::ApiTicket>::parse(&ticket)?
.verify_with_time_frame(public_auth_key(), "PBS", None, -300..ticket_lifetime)?
.require_full()?;
let auth_id = Authid::from(userid.clone());
if !user_info.is_active_auth_id(&auth_id) {
bail!("user account disabled or expired.");
}
if method != hyper::Method::GET {
if let Some(csrf_token) = &user_auth_data.csrf_token {
verify_csrf_prevention_token(csrf_secret(), &userid, &csrf_token, -300, ticket_lifetime)?;
} else {
bail!("missing CSRF prevention token");
}
}
Ok(auth_id)
},
AuthData::ApiToken(api_token) => {
let mut parts = api_token.splitn(2, ':');
let tokenid = parts.next()
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokenid: Authid = tokenid.parse()?;
if !user_info.is_active_auth_id(&tokenid) {
bail!("user account or token disabled or expired.");
}
let tokensecret = parts.next()
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokensecret = percent_decode_str(tokensecret)
.decode_utf8()
.map_err(|_| format_err!("failed to decode API token header"))?;
crate::config::token_shadow::verify_secret(&tokenid, &tokensecret)?;
Ok(tokenid)
}
}
}
async fn handle_request(
api: Arc<ApiConfig>,
req: Request<Body>,
peer: &std::net::SocketAddr,
) -> Result<Response<Body>, Error> {
let (parts, body) = req.into_parts();
let method = parts.method.clone();
let (path, components) = tools::normalize_uri_path(parts.uri.path())?;
@ -695,15 +617,13 @@ async fn handle_request(
let access_forbidden_time = std::time::Instant::now() + std::time::Duration::from_millis(500);
if comp_len >= 1 && components[0] == "api2" {
if comp_len >= 2 {
let format = components[1];
let formatter = match format {
"json" => &JSON_FORMATTER,
"extjs" => &EXTJS_FORMATTER,
_ => bail!("Unsupported output format '{}'.", format),
_ => bail!("Unsupported output format '{}'.", format),
};
let mut uri_param = HashMap::new();
@ -725,8 +645,10 @@ async fn handle_request(
Ok(authid) => rpcenv.set_auth_id(Some(authid.to_string())),
Err(err) => {
let peer = peer.ip();
auth_logger()?
.log(format!("authentication failure; rhost={} msg={}", peer, err));
auth_logger()?.log(format!(
"authentication failure; rhost={} msg={}",
peer, err
));
// always delay unauthorized calls by 3 seconds (from start of request)
let err = http_err!(UNAUTHORIZED, "authentication failed - {}", err);
@ -743,7 +665,12 @@ async fn handle_request(
}
Some(api_method) => {
let auth_id = rpcenv.get_auth_id();
if !check_api_permission(api_method.access.permission, auth_id.as_deref(), &uri_param, user_info.as_ref()) {
if !check_api_permission(
api_method.access.permission,
auth_id.as_deref(),
&uri_param,
user_info.as_ref(),
) {
let err = http_err!(FORBIDDEN, "permission check failed");
tokio::time::sleep_until(Instant::from_std(access_forbidden_time)).await;
return Ok((formatter.format_error)(err));
@ -752,7 +679,8 @@ async fn handle_request(
let result = if api_method.protected && env_type == RpcEnvironmentType::PUBLIC {
proxy_protected_request(api_method, parts, body, peer).await
} else {
handle_api_request(rpcenv, api_method, formatter, parts, body, uri_param).await
handle_api_request(rpcenv, api_method, formatter, parts, body, uri_param)
.await
};
let mut response = match result {
@ -768,9 +696,8 @@ async fn handle_request(
return Ok(response);
}
}
}
} else {
} else {
// not Auth required for accessing files!
if method != hyper::Method::GET {
@ -784,8 +711,14 @@ async fn handle_request(
Ok(auth_id) if !auth_id.is_token() => {
let userid = auth_id.user();
let new_csrf_token = assemble_csrf_prevention_token(csrf_secret(), userid);
return Ok(get_index(Some(userid.clone()), Some(new_csrf_token), language, &api, parts));
},
return Ok(get_index(
Some(userid.clone()),
Some(new_csrf_token),
language,
&api,
parts,
));
}
_ => {
tokio::time::sleep_until(Instant::from_std(delay_unauth_time)).await;
return Ok(get_index(None, None, language, &api, parts));


@ -207,6 +207,8 @@ pub fn upid_read_status(upid: &UPID) -> Result<TaskState, Error> {
let mut iter = last_line.splitn(2, ": ");
if let Some(time_str) = iter.next() {
if let Ok(endtime) = proxmox::tools::time::parse_rfc3339(time_str) {
// set the endtime even if we cannot parse the state
status = TaskState::Unknown { endtime };
if let Some(rest) = iter.next().and_then(|rest| rest.strip_prefix("TASK ")) {
if let Ok(state) = TaskState::from_endtime_and_message(endtime, rest) {
status = state;
@ -749,7 +751,7 @@ impl WorkerTask {
match data.abort_listeners.pop() {
None => { break; },
Some(ch) => {
let _ = ch.send(()); // ignore erros here
let _ = ch.send(()); // ignore errors here
},
}
}


@ -1,36 +0,0 @@
use anyhow::Error;
use proxmox::tools::email::sendmail;
/// Send email to a person to request a manual media change
pub fn send_load_media_email(
drive: &str,
label_text: &str,
to: &str,
reason: Option<String>,
) -> Result<(), Error> {
let subject = format!("Load Media '{}' request for drive '{}'", label_text, drive);
let mut text = String::new();
if let Some(reason) = reason {
text.push_str(&format!("The drive has the wrong or no tape inserted. Error:\n{}\n\n", reason));
}
text.push_str("Please insert the requested media into the backup drive.\n\n");
text.push_str(&format!("Drive: {}\n", drive));
text.push_str(&format!("Media: {}\n", label_text));
sendmail(
&[to],
&subject,
Some(&text),
None,
None,
None,
)?;
Ok(())
}


@ -1,8 +1,5 @@
//! Media changer implementation (SCSI media changer)
mod email;
pub use email::*;
pub mod sg_pt_changer;
pub mod mtx;
@ -35,7 +32,7 @@ use crate::api2::types::{
/// Changer element status.
///
/// Drive and slots may be `Empty`, or contain some media, either
/// with knwon volume tag `VolumeTag(String)`, or without (`Full`).
/// with known volume tag `VolumeTag(String)`, or without (`Full`).
#[derive(Serialize, Deserialize, Debug)]
pub enum ElementStatus {
Empty,
@ -87,7 +84,7 @@ pub struct MtxStatus {
pub drives: Vec<DriveStatus>,
/// List of known storage slots
pub slots: Vec<StorageElementStatus>,
/// Tranport elements
/// Transport elements
///
/// Note: Some libraries do not report transport elements.
pub transports: Vec<TransportElementStatus>,
@ -261,7 +258,7 @@ pub trait MediaChange {
/// List online media labels (label_text/barcodes)
///
/// List acessible (online) label texts. This does not include
/// List accessible (online) label texts. This does not include
/// media inside import-export slots or cleaning media.
fn online_media_label_texts(&mut self) -> Result<Vec<String>, Error> {
let status = self.status()?;
@ -378,7 +375,7 @@ pub trait MediaChange {
/// Unload media to a free storage slot
///
/// If posible to the slot it was previously loaded from.
/// If possible to the slot it was previously loaded from.
///
/// Note: This method consumes status - so please use returned status afterward.
fn unload_to_free_slot(&mut self, status: MtxStatus) -> Result<MtxStatus, Error> {


@ -1,4 +1,4 @@
//! Wrapper around expernal `mtx` command line tool
//! Wrapper around external `mtx` command line tool
mod parse_mtx_status;
pub use parse_mtx_status::*;


@ -28,6 +28,7 @@ use crate::{
SENSE_KEY_UNIT_ATTENTION,
SENSE_KEY_NOT_READY,
InquiryInfo,
ScsiError,
scsi_ascii_to_string,
scsi_inquiry,
},
@ -103,7 +104,7 @@ fn execute_scsi_command<F: AsRawFd>(
if !retry {
bail!("{} failed: {}", error_prefix, err);
}
if let Some(ref sense) = err.sense {
if let ScsiError::Sense(ref sense) = err {
if sense.sense_key == SENSE_KEY_NO_SENSE ||
sense.sense_key == SENSE_KEY_RECOVERED_ERROR ||
@ -246,7 +247,7 @@ pub fn unload(
Ok(())
}
/// Tranfer medium from one storage slot to another
/// Transfer medium from one storage slot to another
pub fn transfer_medium<F: AsRawFd>(
file: &mut F,
from_slot: u64,
@ -362,7 +363,7 @@ pub fn read_element_status<F: AsRawFd>(file: &mut F) -> Result<MtxStatus, Error>
bail!("got wrong number of import/export elements");
}
if (setup.transfer_element_count as usize) != drives.len() {
bail!("got wrong number of tranfer elements");
bail!("got wrong number of transfer elements");
}
// create same virtual slot order as mtx(1)
@ -428,7 +429,7 @@ struct SubHeader {
element_type_code: u8,
flags: u8,
descriptor_length: u16,
reseved: u8,
reserved: u8,
byte_count_of_descriptor_data_available: [u8;3],
}


@ -196,7 +196,7 @@ struct SspDataEncryptionCapabilityPage {
page_code: u16,
page_len: u16,
extdecc_cfgp_byte: u8,
reserverd: [u8; 15],
reserved: [u8; 15],
}
#[derive(Endian)]
@ -241,13 +241,13 @@ fn decode_spin_data_encryption_caps(data: &[u8]) -> Result<u8, Error> {
let desc: SspDataEncryptionAlgorithmDescriptor =
unsafe { reader.read_be_value()? };
if desc.descriptor_len != 0x14 {
bail!("got wrong key descriptior len");
bail!("got wrong key descriptor len");
}
if (desc.control_byte_4 & 0b00000011) != 2 {
continue; // cant encrypt in hardware
continue; // can't encrypt in hardware
}
if ((desc.control_byte_4 & 0b00001100) >> 2) != 2 {
continue; // cant decrypt in hardware
continue; // can't decrypt in hardware
}
if desc.algorithm_code == 0x00010014 && desc.key_size == 32 {
aes_cgm_index = Some(desc.algorythm_index);
@ -276,7 +276,7 @@ struct SspDataEncryptionStatusPage {
control_byte: u8,
key_format: u8,
key_len: u16,
reserverd: [u8; 8],
reserved: [u8; 8],
}
fn decode_spin_data_encryption_status(data: &[u8]) -> Result<DataEncryptionStatus, Error> {


@ -242,32 +242,6 @@ impl LinuxTapeHandle {
Ok(())
}
pub fn forward_space_count_files(&mut self, count: i32) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTFSF, mt_count: count, };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| {
format_err!("forward space {} files failed - {}", count, err)
})?;
Ok(())
}
pub fn backward_space_count_files(&mut self, count: i32) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTBSF, mt_count: count, };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| {
format_err!("backward space {} files failed - {}", count, err)
})?;
Ok(())
}
/// Set tape compression feature
pub fn set_compression(&self, on: bool) -> Result<(), Error> {
@ -467,6 +441,32 @@ impl TapeDriver for LinuxTapeHandle {
Ok(())
}
fn forward_space_count_files(&mut self, count: usize) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTFSF, mt_count: i32::try_from(count)? };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| {
format_err!("forward space {} files failed - {}", count, err)
})?;
Ok(())
}
fn backward_space_count_files(&mut self, count: usize) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTBSF, mt_count: i32::try_from(count)? };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| {
format_err!("backward space {} files failed - {}", count, err)
})?;
Ok(())
}
fn rewind(&mut self) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTREW, mt_count: 1, };

View File

@ -72,14 +72,14 @@ static MAM_ATTRIBUTES: &[ (u16, u16, MamFormat, &str) ] = &[
(0x08_02, 8, MamFormat::ASCII, "Application Version"),
(0x08_03, 160, MamFormat::ASCII, "User Medium Text Label"),
(0x08_04, 12, MamFormat::ASCII, "Date And Time Last Written"),
(0x08_05, 1, MamFormat::BINARY, "Text Localization Identifer"),
(0x08_05, 1, MamFormat::BINARY, "Text Localization Identifier"),
(0x08_06, 32, MamFormat::ASCII, "Barcode"),
(0x08_07, 80, MamFormat::ASCII, "Owning Host Textual Name"),
(0x08_08, 160, MamFormat::ASCII, "Media Pool"),
(0x08_0B, 16, MamFormat::ASCII, "Application Format Version"),
(0x08_0C, 50, MamFormat::ASCII, "Volume Coherency Information"),
(0x08_20, 36, MamFormat::ASCII, "Medium Globally Unique Identifer"),
(0x08_21, 36, MamFormat::ASCII, "Media Pool Globally Unique Identifer"),
(0x08_20, 36, MamFormat::ASCII, "Medium Globally Unique Identifier"),
(0x08_21, 36, MamFormat::ASCII, "Media Pool Globally Unique Identifier"),
(0x10_00, 28, MamFormat::BINARY, "Unique Cartridge Identify (UCI)"),
(0x10_01, 24, MamFormat::BINARY, "Alternate Unique Cartridge Identify (Alt-UCI)"),
@ -101,12 +101,13 @@ lazy_static::lazy_static!{
fn read_tape_mam<F: AsRawFd>(file: &mut F) -> Result<Vec<u8>, Error> {
let mut sg_raw = SgRaw::new(file, 32*1024)?;
let alloc_len: u32 = 32*1024;
let mut sg_raw = SgRaw::new(file, alloc_len as usize)?;
let mut cmd = Vec::new();
cmd.extend(&[0x8c, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8]);
cmd.extend(&[0u8, 0u8]); // first attribute
cmd.extend(&[0u8, 0u8, 0x8f, 0xff]); // alloc len
cmd.extend(&alloc_len.to_be_bytes()); // alloc len
cmd.extend(&[0u8, 0u8]);
sg_raw.do_command(&cmd)
@ -114,7 +115,7 @@ fn read_tape_mam<F: AsRawFd>(file: &mut F) -> Result<Vec<u8>, Error> {
.map(|v| v.to_vec())
}
/// Read Medium auxiliary memory attributes (cartridge memory) using raw SCSI command.
/// Read Medium auxiliary memory attributes (cartridge memory) using raw SCSI command.
pub fn read_mam_attributes<F: AsRawFd>(file: &mut F) -> Result<Vec<MamAttribute>, Error> {
let data = read_tape_mam(file)?;
@ -130,8 +131,12 @@ fn decode_mam_attributes(data: &[u8]) -> Result<Vec<MamAttribute>, Error> {
let expected_len = data_len as usize;
if reader.len() != expected_len {
if reader.len() < expected_len {
bail!("read_mam_attributes: got unexpected data len ({} != {})", reader.len(), expected_len);
} else if reader.len() > expected_len {
// Note: Quantum hh7 returns the allocation_length instead of real data_len
reader = &data[4..expected_len+4];
}
let mut list = Vec::new();

View File
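Two things change in the hunks above: the allocation length is now written into the READ ATTRIBUTE CDB with to_be_bytes() instead of hard-coded bytes, and decode_mam_attributes tolerates drives (the comment names a Quantum model) that report the allocation length rather than the real data length. A standalone sketch of both ideas, using made-up response bytes:

fn main() {
    // Allocation length is encoded big-endian in the CDB.
    let alloc_len: u32 = 32 * 1024;
    let mut cmd: Vec<u8> = vec![0x8c, 0, 0, 0, 0, 0, 0, 0, 0, 0];
    cmd.extend(&alloc_len.to_be_bytes()); // bytes 10..14: alloc len
    cmd.extend(&[0u8, 0u8]);
    assert_eq!(&cmd[10..14], &[0x00, 0x00, 0x80, 0x00][..]);

    // The first 4 bytes of the response announce the available data length.
    // Some drives return the allocation length here, so only complain if the
    // buffer is too short and otherwise trim it to the announced size.
    let data: Vec<u8> = vec![0, 0, 0, 8, 1, 2, 3, 4, 5, 6, 7, 8, 0xff, 0xff];
    let expected_len = u32::from_be_bytes([data[0], data[1], data[2], data[3]]) as usize;
    let payload = &data[4..];
    let payload = if payload.len() < expected_len {
        panic!("got unexpected data len ({} < {})", payload.len(), expected_len);
    } else {
        &payload[..expected_len] // drop trailing padding
    };
    assert_eq!(payload, &[1u8, 2, 3, 4, 5, 6, 7, 8][..]);
}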

@ -51,7 +51,10 @@ use crate::{
VirtualTapeDrive,
LinuxTapeDrive,
},
server::WorkerTask,
server::{
send_load_media_email,
WorkerTask,
},
tape::{
TapeWrite,
TapeRead,
@ -66,7 +69,6 @@ use crate::{
changer::{
MediaChange,
MtxMediaChanger,
send_load_media_email,
},
},
};
@ -85,6 +87,26 @@ pub trait TapeDriver {
/// We assume this flushes the tape write buffer.
fn move_to_eom(&mut self) -> Result<(), Error>;
/// Move to last file
fn move_to_last_file(&mut self) -> Result<(), Error> {
self.move_to_eom()?;
if self.current_file_number()? == 0 {
bail!("move_to_last_file failed - media contains no data");
}
self.backward_space_count_files(2)?;
Ok(())
}
/// Forward space count files. The tape is positioned on the first block of the next file.
fn forward_space_count_files(&mut self, count: usize) -> Result<(), Error>;
/// Backward space count files. The tape is positioned on the last block of the previous file.
fn backward_space_count_files(&mut self, count: usize) -> Result<(), Error>;
/// Current file number
fn current_file_number(&mut self) -> Result<u64, Error>;
@ -209,7 +231,7 @@ pub trait TapeDriver {
/// Set or clear encryption key
///
/// We use the media_set_uuid to XOR the secret key with the
/// uuid (first 16 bytes), so that each media set uses an uique
/// uuid (first 16 bytes), so that each media set uses a unique
/// key for encryption.
fn set_encryption(
&mut self,
@ -465,7 +487,7 @@ pub fn request_and_load_media(
}
}
/// Aquires an exclusive lock for the tape device
/// Acquires an exclusive lock for the tape device
///
/// Basically calls lock_device_path() using the configured drive path.
pub fn lock_tape_device(
@ -539,7 +561,7 @@ fn tape_device_path(
pub struct DeviceLockGuard(std::fs::File);
// Aquires an exclusive lock on `device_path`
// Acquires an exclusive lock on `device_path`
//
// Uses systemd escape_unit to compute a file name from `device_path`, the try
// to lock `/var/lock/<name>`.

View File
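The set_encryption comment above describes deriving a per-media-set key by XOR-ing the secret with the media set uuid (first 16 bytes). A small sketch of just that step, with made-up key and uuid values; the real implementation lives behind the TapeDriver trait and may differ in detail:

/// Derive a per-media-set key by XOR-ing the first 16 bytes of the
/// 32-byte secret with the media set uuid.
fn derive_media_set_key(secret: &[u8; 32], media_set_uuid: &[u8; 16]) -> [u8; 32] {
    let mut key = *secret;
    for (k, u) in key.iter_mut().zip(media_set_uuid.iter()) {
        *k ^= u;
    }
    key
}

fn main() {
    let secret = [0x42u8; 32];
    let uuid_a = [0x01u8; 16];
    let uuid_b = [0x02u8; 16];
    // Different media sets end up with different effective keys.
    assert_ne!(derive_media_set_key(&secret, &uuid_a), derive_media_set_key(&secret, &uuid_b));
    // Only the first 16 bytes are affected.
    assert_eq!(&derive_media_set_key(&secret, &uuid_a)[16..], &secret[16..]);
}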

@ -296,6 +296,51 @@ impl TapeDriver for VirtualTapeHandle {
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
*pos = index.files;
self.store_status(&status)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
Ok(())
}
None => bail!("drive is empty (no tape loaded)."),
}
}
fn forward_space_count_files(&mut self, count: usize) -> Result<(), Error> {
let mut status = self.load_status()?;
match status.current_tape {
Some(VirtualTapeStatus { ref name, ref mut pos }) => {
let index = self.load_tape_index(name)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
let new_pos = *pos + count;
if new_pos <= index.files {
*pos = new_pos;
} else {
bail!("forward_space_count_files failed: move beyond EOT");
}
self.store_status(&status)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
Ok(())
}
None => bail!("drive is empty (no tape loaded)."),
}
}
fn backward_space_count_files(&mut self, count: usize) -> Result<(), Error> {
let mut status = self.load_status()?;
match status.current_tape {
Some(VirtualTapeStatus { ref mut pos, .. }) => {
if count <= *pos {
*pos = *pos - count;
} else {
bail!("backward_space_count_files failed: move before BOT");
}
self.store_status(&status)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
@ -429,7 +474,7 @@ impl MediaChange for VirtualTapeHandle {
}
fn transfer_media(&mut self, _from: u64, _to: u64) -> Result<MtxStatus, Error> {
bail!("media tranfer is not implemented!");
bail!("media transfer is not implemented!");
}
fn export_media(&mut self, _label_text: &str) -> Result<Option<u64>, Error> {

View File
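The virtual tape driver implements the new forward/backward space methods as plain position arithmetic with bounds checks against the start and end of the tape. The same logic, collapsed into one standalone helper for illustration (a positive count moves forward, a negative one backward):

/// Move a virtual tape position by `count` files, refusing to move past
/// the last file (`files`) or before file 0.
fn space_files(pos: usize, files: usize, count: isize) -> Result<usize, String> {
    if count >= 0 {
        let new_pos = pos + count as usize;
        if new_pos <= files {
            Ok(new_pos)
        } else {
            Err("forward_space_count_files failed: move beyond EOT".into())
        }
    } else {
        let count = (-count) as usize;
        if count <= pos {
            Ok(pos - count)
        } else {
            Err("backward_space_count_files failed: move before BOT".into())
        }
    }
}

fn main() {
    assert_eq!(space_files(1, 3, 2), Ok(3));
    assert!(space_files(1, 3, 3).is_err());  // past EOT
    assert_eq!(space_files(2, 3, -2), Ok(0));
    assert!(space_files(1, 3, -2).is_err()); // before BOT
}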

@ -27,11 +27,8 @@ pub fn read_volume_statistics<F: AsRawFd>(file: &mut F) -> Result<Lp17VolumeSta
fn sg_read_volume_statistics<F: AsRawFd>(file: &mut F) -> Result<Vec<u8>, Error> {
let buffer_size = 8192;
let mut sg_raw = SgRaw::new(file, buffer_size)?;
// Note: We cannjot use LP 2Eh TapeAlerts, because that clears flags on read.
// Instead, we use LP 12h TapeAlert Response. which does not clear the flags.
let alloc_len: u16 = 8192;
let mut sg_raw = SgRaw::new(file, alloc_len as usize)?;
let mut cmd = Vec::new();
cmd.push(0x4D); // LOG SENSE
@ -41,7 +38,7 @@ fn sg_read_volume_statistics<F: AsRawFd>(file: &mut F) -> Result<Vec<u8>, Error>
cmd.push(0);
cmd.push(0);
cmd.push(0);
cmd.push((buffer_size >> 8) as u8); cmd.push(0); // alloc len
cmd.extend(&alloc_len.to_be_bytes()); // alloc len
cmd.push(0u8); // control byte
sg_raw.do_command(&cmd)
@ -145,8 +142,13 @@ fn decode_volume_statistics(data: &[u8]) -> Result<Lp17VolumeStatistics, Error>
let page_len: u16 = unsafe { reader.read_be_value()? };
if (page_len as usize + 4) != data.len() {
let page_len = page_len as usize;
if (page_len + 4) > data.len() {
bail!("invalid page length");
} else {
// Note: Quantum hh7 returns the allocation_length instead of real data_len
reader = &data[4..page_len+4];
}
let mut stat = Lp17VolumeStatistics::default();

View File

@ -77,7 +77,7 @@ impl <R: Read> BlockedReader<R> {
if seq_nr != buffer.seq_nr() {
proxmox::io_bail!(
"detected tape block with wrong seqence number ({} != {})",
"detected tape block with wrong sequence number ({} != {})",
seq_nr, buffer.seq_nr())
}

View File

@ -0,0 +1,89 @@
use std::fs::File;
use std::io::Read;
use proxmox::{
sys::error::SysError,
tools::Uuid,
};
use crate::{
tape::{
TapeWrite,
file_formats::{
PROXMOX_TAPE_BLOCK_SIZE,
PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0,
MediaContentHeader,
CatalogArchiveHeader,
},
},
};
/// Write a media catalog to the tape
///
/// Returns `Ok(Some(content_uuid))` on success, and `Ok(None)` if
/// `LEOM` was detected before all data was written. The stream is
/// marked incomplete in that case and does not contain all data (The
/// backup task must rewrite the whole file on the next media).
///
pub fn tape_write_catalog<'a>(
writer: &mut (dyn TapeWrite + 'a),
uuid: &Uuid,
media_set_uuid: &Uuid,
seq_nr: usize,
file: &mut File,
) -> Result<Option<Uuid>, std::io::Error> {
let archive_header = CatalogArchiveHeader {
uuid: uuid.clone(),
media_set_uuid: media_set_uuid.clone(),
seq_nr: seq_nr as u64,
};
let header_data = serde_json::to_string_pretty(&archive_header)?.as_bytes().to_vec();
let header = MediaContentHeader::new(
PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0, header_data.len() as u32);
let content_uuid: Uuid = header.uuid.into();
let leom = writer.write_header(&header, &header_data)?;
if leom {
writer.finish(true)?; // mark as incomplete
return Ok(None);
}
let mut file_copy_buffer = proxmox::tools::vec::undefined(PROXMOX_TAPE_BLOCK_SIZE);
let result: Result<(), std::io::Error> = proxmox::try_block!({
let file_size = file.metadata()?.len();
let mut remaining = file_size;
while remaining != 0 {
let got = file.read(&mut file_copy_buffer[..])?;
if got as u64 > remaining {
proxmox::io_bail!("catalog '{}' changed while reading", uuid);
}
writer.write_all(&file_copy_buffer[..got])?;
remaining -= got as u64;
}
if remaining > 0 {
proxmox::io_bail!("catalog '{}' shrunk while reading", uuid);
}
Ok(())
});
match result {
Ok(()) => {
writer.finish(false)?;
Ok(Some(content_uuid))
}
Err(err) => {
if err.is_errno(nix::errno::Errno::ENOSPC) && writer.logical_end_of_media() {
writer.finish(true)?; // mark as incomplete
Ok(None)
} else {
Err(err)
}
}
}
}

View File
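tape_write_catalog above copies the on-disk catalog to tape in PROXMOX_TAPE_BLOCK_SIZE blocks and treats a catalog that changes size during the copy as an error. A simplified, std-only sketch of that copy loop, writing into a Vec instead of a TapeWrite and using a smaller block size; the temp file name is made up:

use std::fs::File;
use std::io::{Read, Write};

/// Copy `file` to `writer` in fixed-size blocks, refusing to copy more or
/// less than the size recorded when the copy started.
fn copy_fixed_size(file: &mut File, writer: &mut impl Write, block_size: usize) -> std::io::Result<u64> {
    let file_size = file.metadata()?.len();
    let mut buffer = vec![0u8; block_size];
    let mut remaining = file_size;
    while remaining != 0 {
        let got = file.read(&mut buffer)?;
        if got == 0 || got as u64 > remaining {
            return Err(std::io::Error::new(
                std::io::ErrorKind::Other,
                "file changed size while reading",
            ));
        }
        writer.write_all(&buffer[..got])?;
        remaining -= got as u64;
    }
    Ok(file_size)
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("catalog-copy-example.bin");
    std::fs::write(&path, vec![7u8; 10_000])?;
    let mut file = File::open(&path)?;
    let mut sink = Vec::new();
    let copied = copy_fixed_size(&mut file, &mut sink, 4096)?;
    assert_eq!(copied, 10_000);
    assert_eq!(sink.len(), 10_000);
    std::fs::remove_file(&path)?;
    Ok(())
}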

@ -14,9 +14,10 @@ use crate::tape::{
TapeWrite,
file_formats::{
PROXMOX_TAPE_BLOCK_SIZE,
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0,
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1,
PROXMOX_BACKUP_CHUNK_ARCHIVE_ENTRY_MAGIC_1_0,
MediaContentHeader,
ChunkArchiveHeader,
ChunkArchiveEntryHeader,
},
};
@ -25,7 +26,7 @@ use crate::tape::{
///
/// A chunk archive consists of a `MediaContentHeader` followed by a
/// list of chunks entries. Each chunk entry consists of a
/// `ChunkArchiveEntryHeader` folowed by the chunk data (`DataBlob`).
/// `ChunkArchiveEntryHeader` followed by the chunk data (`DataBlob`).
///
/// `| MediaContentHeader | ( ChunkArchiveEntryHeader | DataBlob )* |`
pub struct ChunkArchiveWriter<'a> {
@ -36,13 +37,20 @@ pub struct ChunkArchiveWriter<'a> {
impl <'a> ChunkArchiveWriter<'a> {
pub const MAGIC: [u8; 8] = PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0;
pub const MAGIC: [u8; 8] = PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1;
/// Creates a new instance
pub fn new(mut writer: Box<dyn TapeWrite + 'a>, close_on_leom: bool) -> Result<(Self,Uuid), Error> {
pub fn new(
mut writer: Box<dyn TapeWrite + 'a>,
store: &str,
close_on_leom: bool,
) -> Result<(Self,Uuid), Error> {
let header = MediaContentHeader::new(Self::MAGIC, 0);
writer.write_header(&header, &[])?;
let archive_header = ChunkArchiveHeader { store: store.to_string() };
let header_data = serde_json::to_string_pretty(&archive_header)?.as_bytes().to_vec();
let header = MediaContentHeader::new(Self::MAGIC, header_data.len() as u32);
writer.write_header(&header, &header_data)?;
let me = Self {
writer: Some(writer),
@ -153,7 +161,7 @@ impl <R: Read> ChunkArchiveDecoder<R> {
Self { reader }
}
/// Allow access to the underyling reader
/// Allow access to the underlying reader
pub fn reader(&self) -> &R {
&self.reader
}

View File
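With the v1.1 magic, ChunkArchiveWriter::new now serializes a small JSON header carrying the datastore name and writes it as the MediaContentHeader payload. A sketch of that encode/decode round trip; it assumes the serde (with derive) and serde_json crates, which the patched code itself uses:

use serde::{Deserialize, Serialize};

/// Mirror of the new `ChunkArchiveHeader`: the v1.1 chunk archive stores
/// the datastore name as a JSON document right after the content header.
#[derive(Deserialize, Serialize)]
struct ChunkArchiveHeader {
    store: String,
}

fn main() -> Result<(), serde_json::Error> {
    let header = ChunkArchiveHeader { store: "store2".to_string() };
    // This is the byte blob whose length goes into MediaContentHeader::size.
    let header_data = serde_json::to_string_pretty(&header)?.as_bytes().to_vec();
    assert!(header_data.len() < u32::MAX as usize);

    // A reader (e.g. the restore path) can decode it again.
    let decoded: ChunkArchiveHeader = serde_json::from_slice(&header_data)?;
    assert_eq!(decoded.store, "store2");
    Ok(())
}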

@ -13,6 +13,9 @@ pub use chunk_archive::*;
mod snapshot_archive;
pub use snapshot_archive::*;
mod catalog_archive;
pub use catalog_archive::*;
mod multi_volume_writer;
pub use multi_volume_writer::*;
@ -44,12 +47,22 @@ pub const PROXMOX_BACKUP_MEDIA_LABEL_MAGIC_1_0: [u8; 8] = [42, 5, 191, 60, 176,
pub const PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0: [u8; 8] = [8, 96, 99, 249, 47, 151, 83, 216];
// openssl::sha::sha256(b"Proxmox Backup Chunk Archive v1.0")[0..8]
// only used in unreleased version - no longer supported
pub const PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0: [u8; 8] = [62, 173, 167, 95, 49, 76, 6, 110];
// openssl::sha::sha256(b"Proxmox Backup Chunk Archive v1.1")[0..8]
pub const PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1: [u8; 8] = [109, 49, 99, 109, 215, 2, 131, 191];
// openssl::sha::sha256(b"Proxmox Backup Chunk Archive Entry v1.0")[0..8]
pub const PROXMOX_BACKUP_CHUNK_ARCHIVE_ENTRY_MAGIC_1_0: [u8; 8] = [72, 87, 109, 242, 222, 66, 143, 220];
// openssl::sha::sha256(b"Proxmox Backup Snapshot Archive v1.0")[0..8];
// only used in unreleased version - no longer supported
pub const PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0: [u8; 8] = [9, 182, 2, 31, 125, 232, 114, 133];
// openssl::sha::sha256(b"Proxmox Backup Snapshot Archive v1.1")[0..8];
pub const PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1: [u8; 8] = [218, 22, 21, 208, 17, 226, 154, 98];
// openssl::sha::sha256(b"Proxmox Backup Catalog Archive v1.0")[0..8];
pub const PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0: [u8; 8] = [183, 207, 199, 37, 158, 153, 30, 115];
lazy_static::lazy_static!{
// Map content magic numbers to human readable names.
@ -58,7 +71,10 @@ lazy_static::lazy_static!{
map.insert(&PROXMOX_BACKUP_MEDIA_LABEL_MAGIC_1_0, "Proxmox Backup Tape Label v1.0");
map.insert(&PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0, "Proxmox Backup MediaSet Label v1.0");
map.insert(&PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0, "Proxmox Backup Chunk Archive v1.0");
map.insert(&PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1, "Proxmox Backup Chunk Archive v1.1");
map.insert(&PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0, "Proxmox Backup Snapshot Archive v1.0");
map.insert(&PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1, "Proxmox Backup Snapshot Archive v1.1");
map.insert(&PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0, "Proxmox Backup Catalog Archive v1.0");
map
};
}
@ -172,6 +188,13 @@ impl MediaContentHeader {
}
}
#[derive(Deserialize, Serialize)]
/// Header for chunk archives
pub struct ChunkArchiveHeader {
// Datastore name
pub store: String,
}
#[derive(Endian)]
#[repr(C,packed)]
/// Header for data blobs inside a chunk archive
@ -184,6 +207,26 @@ pub struct ChunkArchiveEntryHeader {
pub size: u64,
}
#[derive(Deserialize, Serialize)]
/// Header for snapshot archives
pub struct SnapshotArchiveHeader {
/// Snapshot name
pub snapshot: String,
/// Datastore name
pub store: String,
}
#[derive(Deserialize, Serialize)]
/// Header for Catalog archives
pub struct CatalogArchiveHeader {
/// The uuid of the media the catalog is for
pub uuid: Uuid,
/// The media set uuid the catalog is for
pub media_set_uuid: Uuid,
/// Media sequence number
pub seq_nr: u64,
}
#[derive(Serialize,Deserialize,Clone,Debug)]
/// Media Label
///
@ -245,9 +288,12 @@ impl BlockHeader {
pub fn new() -> Box<Self> {
use std::alloc::{alloc_zeroed, Layout};
// align to PAGESIZE, so that we can use it with SG_IO
let page_size = unsafe { libc::sysconf(libc::_SC_PAGESIZE) } as usize;
let mut buffer = unsafe {
let ptr = alloc_zeroed(
Layout::from_size_align(Self::SIZE, std::mem::align_of::<u64>())
Layout::from_size_align(Self::SIZE, page_size)
.unwrap(),
);
Box::from_raw(

View File
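The comments above state that every content magic number is the first 8 bytes of the SHA-256 digest of a version string. A sketch that reproduces the derivation so a new constant can be checked against the listed bytes; it assumes the openssl crate referenced in those comments:

/// Derive a content magic number: the first 8 bytes of the SHA-256
/// digest of a fixed version string.
fn content_magic(version_string: &[u8]) -> [u8; 8] {
    let digest = openssl::sha::sha256(version_string);
    let mut magic = [0u8; 8];
    magic.copy_from_slice(&digest[0..8]);
    magic
}

fn main() {
    // Print the derived bytes so they can be compared against
    // PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1 above.
    println!("{:?}", content_magic(b"Proxmox Backup Chunk Archive v1.1"));
}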

@ -12,16 +12,18 @@ use crate::tape::{
SnapshotReader,
file_formats::{
PROXMOX_TAPE_BLOCK_SIZE,
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0,
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1,
MediaContentHeader,
SnapshotArchiveHeader,
},
};
/// Write a set of files as `pxar` archive to the tape
///
/// This ignores file attributes like ACLs and xattrs.
///
/// Returns `Ok(Some(content_uuid))` on succees, and `Ok(None)` if
/// Returns `Ok(Some(content_uuid))` on success, and `Ok(None)` if
/// `LEOM` was detected before all data was written. The stream is
/// marked incomplete in that case and does not contain all data (The
/// backup task must rewrite the whole file on the next media).
@ -31,12 +33,15 @@ pub fn tape_write_snapshot_archive<'a>(
) -> Result<Option<Uuid>, std::io::Error> {
let snapshot = snapshot_reader.snapshot().to_string();
let store = snapshot_reader.datastore_name().to_string();
let file_list = snapshot_reader.file_list();
let header_data = snapshot.as_bytes().to_vec();
let archive_header = SnapshotArchiveHeader { snapshot, store };
let header_data = serde_json::to_string_pretty(&archive_header)?.as_bytes().to_vec();
let header = MediaContentHeader::new(
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0, header_data.len() as u32);
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1, header_data.len() as u32);
let content_uuid = header.uuid.into();
let root_metadata = pxar::Metadata::dir_builder(0o0664).build();

View File

@ -26,6 +26,7 @@ use crate::{
/// This makes it easy to iterate over all used chunks and files.
pub struct SnapshotReader {
snapshot: BackupDir,
datastore_name: String,
file_list: Vec<String>,
locked_dir: Dir,
}
@ -42,11 +43,13 @@ impl SnapshotReader {
"snapshot",
"locked by another operation")?;
let datastore_name = datastore.name().to_string();
let manifest = match datastore.load_manifest(&snapshot) {
Ok((manifest, _)) => manifest,
Err(err) => {
bail!("manifest load error on datastore '{}' snapshot '{}' - {}",
datastore.name(), snapshot, err);
datastore_name, snapshot, err);
}
};
@ -60,7 +63,7 @@ impl SnapshotReader {
file_list.push(CLIENT_LOG_BLOB_NAME.to_string());
}
Ok(Self { snapshot, file_list, locked_dir })
Ok(Self { snapshot, datastore_name, file_list, locked_dir })
}
/// Return the snapshot directory
@ -68,6 +71,11 @@ impl SnapshotReader {
&self.snapshot
}
/// Return the datastore name
pub fn datastore_name(&self) -> &str {
&self.datastore_name
}
/// Returns the list of files the snapshot refers to.
pub fn file_list(&self) -> &Vec<String> {
&self.file_list
@ -85,7 +93,7 @@ impl SnapshotReader {
Ok(file)
}
/// Retunrs an iterator for all used chunks.
/// Returns an iterator for all used chunks.
pub fn chunk_iterator(&self) -> Result<SnapshotChunkIterator, Error> {
SnapshotChunkIterator::new(&self)
}
@ -96,7 +104,6 @@ impl SnapshotReader {
/// Note: The iterator returns a `Result`, and the iterator state is
/// undefined after the first error. So it makes no sense to continue
/// iteration after the first error.
#[derive(Clone)]
pub struct SnapshotChunkIterator<'a> {
snapshot_reader: &'a SnapshotReader,
todo_list: Vec<String>,

View File

@ -3,10 +3,30 @@
//! The Inventory persistently stores the list of known backup
//! media. A backup media is identified by its 'MediaId', which is the
//! MediaLabel/MediaSetLabel combination.
//!
//! Inventory Locking
//!
//! The inventory itself has several methods to update single entries,
//! but all of them can be considered atomic.
//!
//! Pool Locking
//!
//! To add/modify media assigned to a pool, we always do
//! lock_media_pool(). For unassigned media, we call
//! lock_unassigned_media_pool().
//!
//! MediaSet Locking
//!
//! To add/remove media from a media set, or to modify catalogs we
//! always do lock_media_set(). Also, we acquire this lock during
//! restore, to make sure it is not reused for backups.
//!
use std::collections::{HashMap, BTreeMap};
use std::path::{Path, PathBuf};
use std::os::unix::io::AsRawFd;
use std::fs::File;
use std::time::Duration;
use anyhow::{bail, Error};
use serde::{Serialize, Deserialize};
@ -78,7 +98,8 @@ impl Inventory {
pub const MEDIA_INVENTORY_FILENAME: &'static str = "inventory.json";
pub const MEDIA_INVENTORY_LOCKFILE: &'static str = ".inventory.lck";
fn new(base_path: &Path) -> Self {
/// Create empty instance, no data loaded
pub fn new(base_path: &Path) -> Self {
let mut inventory_path = base_path.to_owned();
inventory_path.push(Self::MEDIA_INVENTORY_FILENAME);
@ -127,7 +148,7 @@ impl Inventory {
}
/// Lock the database
pub fn lock(&self) -> Result<std::fs::File, Error> {
fn lock(&self) -> Result<std::fs::File, Error> {
let file = open_file_locked(&self.lockfile_path, std::time::Duration::new(10, 0), true)?;
if cfg!(test) {
// We cannot use chown inside test environment (no permissions)
@ -276,7 +297,7 @@ impl Inventory {
continue; // belong to another pool
}
if set.uuid.as_ref() == [0u8;16] { // should we do this??
if set.uuid.as_ref() == [0u8;16] {
list.push(MediaId {
label: entry.id.label.clone(),
media_set_label: None,
@ -561,7 +582,7 @@ impl Inventory {
// Helpers to simplify testing
/// Genreate and insert a new free tape (test helper)
/// Generate and insert a new free tape (test helper)
pub fn generate_free_tape(&mut self, label_text: &str, ctime: i64) -> Uuid {
let label = MediaLabel {
@ -576,7 +597,7 @@ impl Inventory {
uuid
}
/// Genreate and insert a new tape assigned to a specific pool
/// Generate and insert a new tape assigned to a specific pool
/// (test helper)
pub fn generate_assigned_tape(
&mut self,
@ -600,7 +621,7 @@ impl Inventory {
uuid
}
/// Genreate and insert a used tape (test helper)
/// Generate and insert a used tape (test helper)
pub fn generate_used_tape(
&mut self,
label_text: &str,
@ -733,6 +754,57 @@ impl Inventory {
}
/// Lock a media pool
pub fn lock_media_pool(base_path: &Path, name: &str) -> Result<File, Error> {
let mut path = base_path.to_owned();
path.push(format!(".pool-{}", name));
path.set_extension("lck");
let timeout = std::time::Duration::new(10, 0);
let lock = proxmox::tools::fs::open_file_locked(&path, timeout, true)?;
if cfg!(test) {
// We cannot use chown inside test environment (no permissions)
return Ok(lock);
}
let backup_user = crate::backup::backup_user()?;
fchown(lock.as_raw_fd(), Some(backup_user.uid), Some(backup_user.gid))?;
Ok(lock)
}
/// Lock for media not assigned to any pool
pub fn lock_unassigned_media_pool(base_path: &Path) -> Result<File, Error> {
// lock artificial "__UNASSIGNED__" pool to avoid races
lock_media_pool(base_path, "__UNASSIGNED__")
}
/// Lock a media set
///
/// Timeout is 10 seconds by default
pub fn lock_media_set(
base_path: &Path,
media_set_uuid: &Uuid,
timeout: Option<Duration>,
) -> Result<File, Error> {
let mut path = base_path.to_owned();
path.push(format!(".media-set-{}", media_set_uuid));
path.set_extension("lck");
let timeout = timeout.unwrap_or(Duration::new(10, 0));
let file = open_file_locked(&path, timeout, true)?;
if cfg!(test) {
// We cannot use chown inside test environment (no permissions)
return Ok(file);
}
let backup_user = crate::backup::backup_user()?;
fchown(file.as_raw_fd(), Some(backup_user.uid), Some(backup_user.gid))?;
Ok(file)
}
// shell completion helper
/// List of known media uuids

View File
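lock_media_pool and lock_media_set above serialize pool and media-set modifications through hidden lock files next to the inventory (".pool-<name>.lck" and ".media-set-<uuid>.lck"). A reduced sketch of the pool variant using a plain blocking flock from the libc crate; the real code goes through proxmox::tools::fs::open_file_locked with a 10 second timeout and additionally chowns the file to the backup user:

use std::fs::{File, OpenOptions};
use std::os::unix::io::AsRawFd;
use std::path::Path;

/// Take an exclusive lock on the per-pool lock file. The lock is released
/// when the returned File is dropped.
fn lock_media_pool(base_path: &Path, name: &str) -> std::io::Result<File> {
    let path = base_path.join(format!(".pool-{}.lck", name));
    let file = OpenOptions::new().create(true).read(true).write(true).open(&path)?;
    let rc = unsafe { libc::flock(file.as_raw_fd(), libc::LOCK_EX) };
    if rc != 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(file)
}

fn main() -> std::io::Result<()> {
    let tmp = std::env::temp_dir();
    let _guard = lock_media_pool(&tmp, "example-pool")?;
    println!("holding pool lock at {:?}", tmp.join(".pool-example-pool.lck"));
    Ok(())
}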

@ -26,9 +26,24 @@ use crate::{
backup::BackupDir,
tape::{
MediaId,
file_formats::MediaSetLabel,
},
};
pub struct DatastoreContent {
pub snapshot_index: HashMap<String, u64>, // snapshot => file_nr
pub chunk_index: HashMap<[u8;32], u64>, // chunk => file_nr
}
impl DatastoreContent {
pub fn new() -> Self {
Self {
chunk_index: HashMap::new(),
snapshot_index: HashMap::new(),
}
}
}
/// The Media Catalog
///
@ -44,13 +59,11 @@ pub struct MediaCatalog {
log_to_stdout: bool,
current_archive: Option<(Uuid, u64)>,
current_archive: Option<(Uuid, u64, String)>, // (uuid, file_nr, store)
last_entry: Option<(Uuid, u64)>,
chunk_index: HashMap<[u8;32], u64>,
snapshot_index: HashMap<String, u64>,
content: HashMap<String, DatastoreContent>,
pending: Vec<u8>,
}
@ -59,8 +72,12 @@ impl MediaCatalog {
/// Magic number for media catalog files.
// openssl::sha::sha256(b"Proxmox Backup Media Catalog v1.0")[0..8]
// Note: this version did not store datastore names (not supported anymore)
pub const PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0: [u8; 8] = [221, 29, 164, 1, 59, 69, 19, 40];
// openssl::sha::sha256(b"Proxmox Backup Media Catalog v1.1")[0..8]
pub const PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_1: [u8; 8] = [76, 142, 232, 193, 32, 168, 137, 113];
/// List media with catalogs
pub fn media_with_catalogs(base_path: &Path) -> Result<HashSet<Uuid>, Error> {
let mut catalogs = HashSet::new();
@ -99,6 +116,59 @@ impl MediaCatalog {
}
}
/// Destroy the media catalog if media_set uuid does not match
pub fn destroy_unrelated_catalog(
base_path: &Path,
media_id: &MediaId,
) -> Result<(), Error> {
let uuid = &media_id.label.uuid;
let mut path = base_path.to_owned();
path.push(uuid.to_string());
path.set_extension("log");
let file = match std::fs::OpenOptions::new().read(true).open(&path) {
Ok(file) => file,
Err(ref err) if err.kind() == std::io::ErrorKind::NotFound => {
return Ok(());
}
Err(err) => return Err(err.into()),
};
let mut file = BufReader::new(file);
let expected_media_set_id = match media_id.media_set_label {
None => {
std::fs::remove_file(path)?;
return Ok(())
},
Some(ref set) => &set.uuid,
};
let (found_magic_number, media_uuid, media_set_uuid) =
Self::parse_catalog_header(&mut file)?;
if !found_magic_number {
return Ok(());
}
if let Some(ref media_uuid) = media_uuid {
if media_uuid != uuid {
std::fs::remove_file(path)?;
return Ok(());
}
}
if let Some(ref media_set_uuid) = media_set_uuid {
if media_set_uuid != expected_media_set_id {
std::fs::remove_file(path)?;
}
}
Ok(())
}
/// Enable/Disable logging to stdout (disabled by default)
pub fn log_to_stdout(&mut self, enable: bool) {
self.log_to_stdout = enable;
@ -120,11 +190,13 @@ impl MediaCatalog {
/// Open a catalog database, load into memory
pub fn open(
base_path: &Path,
uuid: &Uuid,
media_id: &MediaId,
write: bool,
create: bool,
) -> Result<Self, Error> {
let uuid = &media_id.label.uuid;
let mut path = base_path.to_owned();
path.push(uuid.to_string());
path.set_extension("log");
@ -149,15 +221,14 @@ impl MediaCatalog {
log_to_stdout: false,
current_archive: None,
last_entry: None,
chunk_index: HashMap::new(),
snapshot_index: HashMap::new(),
content: HashMap::new(),
pending: Vec::new(),
};
let found_magic_number = me.load_catalog(&mut file)?;
let (found_magic_number, _) = me.load_catalog(&mut file, media_id.media_set_label.as_ref())?;
if !found_magic_number {
me.pending.extend(&Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0);
me.pending.extend(&Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_1);
}
if write {
@ -171,6 +242,32 @@ impl MediaCatalog {
Ok(me)
}
/// Creates a temporary empty catalog file
pub fn create_temporary_database_file(
base_path: &Path,
uuid: &Uuid,
) -> Result<File, Error> {
Self::create_basedir(base_path)?;
let mut tmp_path = base_path.to_owned();
tmp_path.push(uuid.to_string());
tmp_path.set_extension("tmp");
let file = std::fs::OpenOptions::new()
.read(true)
.write(true)
.create(true)
.truncate(true)
.open(&tmp_path)?;
let backup_user = crate::backup::backup_user()?;
fchown(file.as_raw_fd(), Some(backup_user.uid), Some(backup_user.gid))
.map_err(|err| format_err!("fchown failed - {}", err))?;
Ok(file)
}
/// Creates a temporary, empty catalog database
///
/// Creates a new catalog file using a ".tmp" file extension.
@ -188,18 +285,7 @@ impl MediaCatalog {
let me = proxmox::try_block!({
Self::create_basedir(base_path)?;
let file = std::fs::OpenOptions::new()
.read(true)
.write(true)
.create(true)
.truncate(true)
.open(&tmp_path)?;
let backup_user = crate::backup::backup_user()?;
fchown(file.as_raw_fd(), Some(backup_user.uid), Some(backup_user.gid))
.map_err(|err| format_err!("fchown failed - {}", err))?;
let file = Self::create_temporary_database_file(base_path, uuid)?;
let mut me = Self {
uuid: uuid.clone(),
@ -207,19 +293,18 @@ impl MediaCatalog {
log_to_stdout: false,
current_archive: None,
last_entry: None,
chunk_index: HashMap::new(),
snapshot_index: HashMap::new(),
content: HashMap::new(),
pending: Vec::new(),
};
me.log_to_stdout = log_to_stdout;
me.pending.extend(&Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0);
me.pending.extend(&Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_1);
me.register_label(&media_id.label.uuid, 0)?;
me.register_label(&media_id.label.uuid, 0, 0)?;
if let Some(ref set) = media_id.media_set_label {
me.register_label(&set.uuid, 1)?;
me.register_label(&set.uuid, set.seq_nr, 1)?;
}
me.commit()?;
@ -265,8 +350,8 @@ impl MediaCatalog {
}
/// Accessor to content list
pub fn snapshot_index(&self) -> &HashMap<String, u64> {
&self.snapshot_index
pub fn content(&self) -> &HashMap<String, DatastoreContent> {
&self.content
}
/// Commit pending changes
@ -296,6 +381,9 @@ impl MediaCatalog {
/// Conditionally commit if pending data is large (> 1 MB)
pub fn commit_if_large(&mut self) -> Result<(), Error> {
if self.current_archive.is_some() {
bail!("can't commit catalog in the middle of an chunk archive");
}
if self.pending.len() > 1024*1024 {
self.commit()?;
}
@ -319,31 +407,47 @@ impl MediaCatalog {
}
/// Test if the catalog already contain a snapshot
pub fn contains_snapshot(&self, snapshot: &str) -> bool {
self.snapshot_index.contains_key(snapshot)
pub fn contains_snapshot(&self, store: &str, snapshot: &str) -> bool {
match self.content.get(store) {
None => false,
Some(content) => content.snapshot_index.contains_key(snapshot),
}
}
/// Returns the chunk archive file number
pub fn lookup_snapshot(&self, snapshot: &str) -> Option<u64> {
self.snapshot_index.get(snapshot).copied()
/// Returns the snapshot archive file number
pub fn lookup_snapshot(&self, store: &str, snapshot: &str) -> Option<u64> {
match self.content.get(store) {
None => None,
Some(content) => content.snapshot_index.get(snapshot).copied(),
}
}
/// Test if the catalog already contain a chunk
pub fn contains_chunk(&self, digest: &[u8;32]) -> bool {
self.chunk_index.contains_key(digest)
pub fn contains_chunk(&self, store: &str, digest: &[u8;32]) -> bool {
match self.content.get(store) {
None => false,
Some(content) => content.chunk_index.contains_key(digest),
}
}
/// Returns the chunk archive file number
pub fn lookup_chunk(&self, digest: &[u8;32]) -> Option<u64> {
self.chunk_index.get(digest).copied()
pub fn lookup_chunk(&self, store: &str, digest: &[u8;32]) -> Option<u64> {
match self.content.get(store) {
None => None,
Some(content) => content.chunk_index.get(digest).copied(),
}
}
fn check_register_label(&self, file_number: u64) -> Result<(), Error> {
fn check_register_label(&self, file_number: u64, uuid: &Uuid) -> Result<(), Error> {
if file_number >= 2 {
bail!("register label failed: got wrong file number ({} >= 2)", file_number);
}
if file_number == 0 && uuid != &self.uuid {
bail!("register label failed: uuid does not match");
}
if self.current_archive.is_some() {
bail!("register label failed: inside chunk archive");
}
@ -363,15 +467,21 @@ impl MediaCatalog {
/// Register media labels (file 0 and 1)
pub fn register_label(
&mut self,
uuid: &Uuid, // Uuid form MediaContentHeader
uuid: &Uuid, // Media/MediaSet Uuid
seq_nr: u64, // only used for media set labels
file_number: u64,
) -> Result<(), Error> {
self.check_register_label(file_number)?;
self.check_register_label(file_number, uuid)?;
if file_number == 0 && seq_nr != 0 {
bail!("register_label failed - seq_nr should be 0 - iternal error");
}
let entry = LabelEntry {
file_number,
uuid: *uuid.as_bytes(),
seq_nr,
};
if self.log_to_stdout {
@ -395,9 +505,9 @@ impl MediaCatalog {
digest: &[u8;32],
) -> Result<(), Error> {
let file_number = match self.current_archive {
let (file_number, store) = match self.current_archive {
None => bail!("register_chunk failed: no archive started"),
Some((_, file_number)) => file_number,
Some((_, file_number, ref store)) => (file_number, store),
};
if self.log_to_stdout {
@ -407,7 +517,12 @@ impl MediaCatalog {
self.pending.push(b'C');
self.pending.extend(digest);
self.chunk_index.insert(*digest, file_number);
match self.content.get_mut(store) {
None => bail!("storage {} not registered - internal error", store),
Some(content) => {
content.chunk_index.insert(*digest, file_number);
}
}
Ok(())
}
@ -440,24 +555,29 @@ impl MediaCatalog {
&mut self,
uuid: Uuid, // Uuid from MediaContentHeader
file_number: u64,
) -> Result<(), Error> {
store: &str,
) -> Result<(), Error> {
self.check_start_chunk_archive(file_number)?;
let entry = ChunkArchiveStart {
file_number,
uuid: *uuid.as_bytes(),
store_name_len: u8::try_from(store.len())?,
};
if self.log_to_stdout {
println!("A|{}|{}", file_number, uuid.to_string());
println!("A|{}|{}|{}", file_number, uuid.to_string(), store);
}
self.pending.push(b'A');
unsafe { self.pending.write_le_value(entry)?; }
self.pending.extend(store.as_bytes());
self.current_archive = Some((uuid, file_number));
self.content.entry(store.to_string()).or_insert(DatastoreContent::new());
self.current_archive = Some((uuid, file_number, store.to_string()));
Ok(())
}
@ -466,7 +586,7 @@ impl MediaCatalog {
match self.current_archive {
None => bail!("end_chunk archive failed: not started"),
Some((ref expected_uuid, expected_file_number)) => {
Some((ref expected_uuid, expected_file_number, ..)) => {
if uuid != expected_uuid {
bail!("end_chunk_archive failed: got unexpected uuid");
}
@ -476,7 +596,6 @@ impl MediaCatalog {
}
}
}
Ok(())
}
@ -485,7 +604,7 @@ impl MediaCatalog {
match self.current_archive.take() {
None => bail!("end_chunk_archive failed: not started"),
Some((uuid, file_number)) => {
Some((uuid, file_number, ..)) => {
let entry = ChunkArchiveEnd {
file_number,
@ -539,6 +658,7 @@ impl MediaCatalog {
&mut self,
uuid: Uuid, // Uuid from MediaContentHeader
file_number: u64,
store: &str,
snapshot: &str,
) -> Result<(), Error> {
@ -547,32 +667,90 @@ impl MediaCatalog {
let entry = SnapshotEntry {
file_number,
uuid: *uuid.as_bytes(),
store_name_len: u8::try_from(store.len())?,
name_len: u16::try_from(snapshot.len())?,
};
if self.log_to_stdout {
println!("S|{}|{}|{}", file_number, uuid.to_string(), snapshot);
println!("S|{}|{}|{}:{}", file_number, uuid.to_string(), store, snapshot);
}
self.pending.push(b'S');
unsafe { self.pending.write_le_value(entry)?; }
self.pending.extend(store.as_bytes());
self.pending.push(b':');
self.pending.extend(snapshot.as_bytes());
self.snapshot_index.insert(snapshot.to_string(), file_number);
let content = self.content.entry(store.to_string())
.or_insert(DatastoreContent::new());
content.snapshot_index.insert(snapshot.to_string(), file_number);
self.last_entry = Some((uuid, file_number));
Ok(())
}
fn load_catalog(&mut self, file: &mut File) -> Result<bool, Error> {
/// Parse the catalog header
pub fn parse_catalog_header<R: Read>(
reader: &mut R,
) -> Result<(bool, Option<Uuid>, Option<Uuid>), Error> {
// read/check magic number
let mut magic = [0u8; 8];
if !reader.read_exact_or_eof(&mut magic)? {
/* EOF */
return Ok((false, None, None));
}
if magic == Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0 {
// only used in unreleased versions
bail!("old catalog format (v1.0) is no longer supported");
}
if magic != Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_1 {
bail!("wrong magic number");
}
let mut entry_type = [0u8; 1];
if !reader.read_exact_or_eof(&mut entry_type)? {
/* EOF */
return Ok((true, None, None));
}
if entry_type[0] != b'L' {
bail!("got unexpected entry type");
}
let entry0: LabelEntry = unsafe { reader.read_le_value()? };
let mut entry_type = [0u8; 1];
if !reader.read_exact_or_eof(&mut entry_type)? {
/* EOF */
return Ok((true, Some(entry0.uuid.into()), None));
}
if entry_type[0] != b'L' {
bail!("got unexpected entry type");
}
let entry1: LabelEntry = unsafe { reader.read_le_value()? };
Ok((true, Some(entry0.uuid.into()), Some(entry1.uuid.into())))
}
fn load_catalog(
&mut self,
file: &mut File,
media_set_label: Option<&MediaSetLabel>,
) -> Result<(bool, Option<Uuid>), Error> {
let mut file = BufReader::new(file);
let mut found_magic_number = false;
let mut media_set_uuid = None;
loop {
let pos = file.seek(SeekFrom::Current(0))?;
let pos = file.seek(SeekFrom::Current(0))?; // get current pos
if pos == 0 { // read/check magic number
let mut magic = [0u8; 8];
@ -581,7 +759,11 @@ impl MediaCatalog {
Ok(true) => { /* OK */ }
Err(err) => bail!("read failed - {}", err),
}
if magic != Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0 {
if magic == Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0 {
// only used in unreleased versions
bail!("old catalog format (v1.0) is no longer supported");
}
if magic != Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_1 {
bail!("wrong magic number");
}
found_magic_number = true;
@ -597,23 +779,35 @@ impl MediaCatalog {
match entry_type[0] {
b'C' => {
let file_number = match self.current_archive {
let (file_number, store) = match self.current_archive {
None => bail!("register_chunk failed: no archive started"),
Some((_, file_number)) => file_number,
Some((_, file_number, ref store)) => (file_number, store),
};
let mut digest = [0u8; 32];
file.read_exact(&mut digest)?;
self.chunk_index.insert(digest, file_number);
match self.content.get_mut(store) {
None => bail!("storage {} not registered - internal error", store),
Some(content) => {
content.chunk_index.insert(digest, file_number);
}
}
}
b'A' => {
let entry: ChunkArchiveStart = unsafe { file.read_le_value()? };
let file_number = entry.file_number;
let uuid = Uuid::from(entry.uuid);
let store_name_len = entry.store_name_len as usize;
let store = file.read_exact_allocated(store_name_len)?;
let store = std::str::from_utf8(&store)?;
self.check_start_chunk_archive(file_number)?;
self.current_archive = Some((uuid, file_number));
}
self.content.entry(store.to_string())
.or_insert(DatastoreContent::new());
self.current_archive = Some((uuid, file_number, store.to_string()));
}
b'E' => {
let entry: ChunkArchiveEnd = unsafe { file.read_le_value()? };
let file_number = entry.file_number;
@ -627,15 +821,26 @@ impl MediaCatalog {
b'S' => {
let entry: SnapshotEntry = unsafe { file.read_le_value()? };
let file_number = entry.file_number;
let name_len = entry.name_len;
let store_name_len = entry.store_name_len as usize;
let name_len = entry.name_len as usize;
let uuid = Uuid::from(entry.uuid);
let snapshot = file.read_exact_allocated(name_len.into())?;
let store = file.read_exact_allocated(store_name_len + 1)?;
if store[store_name_len] != b':' {
bail!("parse-error: missing separator in SnapshotEntry");
}
let store = std::str::from_utf8(&store[..store_name_len])?;
let snapshot = file.read_exact_allocated(name_len)?;
let snapshot = std::str::from_utf8(&snapshot)?;
self.check_register_snapshot(file_number, snapshot)?;
self.snapshot_index.insert(snapshot.to_string(), file_number);
let content = self.content.entry(store.to_string())
.or_insert(DatastoreContent::new());
content.snapshot_index.insert(snapshot.to_string(), file_number);
self.last_entry = Some((uuid, file_number));
}
@ -644,7 +849,19 @@ impl MediaCatalog {
let file_number = entry.file_number;
let uuid = Uuid::from(entry.uuid);
self.check_register_label(file_number)?;
self.check_register_label(file_number, &uuid)?;
if file_number == 1 {
if let Some(set) = media_set_label {
if set.uuid != uuid {
bail!("got unexpected media set uuid");
}
if set.seq_nr != entry.seq_nr {
bail!("got unexpected media set sequence number");
}
}
media_set_uuid = Some(uuid.clone());
}
self.last_entry = Some((uuid, file_number));
}
@ -655,7 +872,7 @@ impl MediaCatalog {
}
Ok(found_magic_number)
Ok((found_magic_number, media_set_uuid))
}
}
@ -693,9 +910,9 @@ impl MediaSetCatalog {
}
/// Test if the catalog already contain a snapshot
pub fn contains_snapshot(&self, snapshot: &str) -> bool {
pub fn contains_snapshot(&self, store: &str, snapshot: &str) -> bool {
for catalog in self.catalog_list.values() {
if catalog.contains_snapshot(snapshot) {
if catalog.contains_snapshot(store, snapshot) {
return true;
}
}
@ -703,9 +920,9 @@ impl MediaSetCatalog {
}
/// Test if the catalog already contain a chunk
pub fn contains_chunk(&self, digest: &[u8;32]) -> bool {
pub fn contains_chunk(&self, store: &str, digest: &[u8;32]) -> bool {
for catalog in self.catalog_list.values() {
if catalog.contains_chunk(digest) {
if catalog.contains_chunk(store, digest) {
return true;
}
}
@ -720,6 +937,7 @@ impl MediaSetCatalog {
struct LabelEntry {
file_number: u64,
uuid: [u8;16],
seq_nr: u64, // only used for media set labels
}
#[derive(Endian)]
@ -727,6 +945,8 @@ struct LabelEntry {
struct ChunkArchiveStart {
file_number: u64,
uuid: [u8;16],
store_name_len: u8,
/* datastore name follows */
}
#[derive(Endian)]
@ -741,6 +961,7 @@ struct ChunkArchiveEnd{
struct SnapshotEntry{
file_number: u64,
uuid: [u8;16],
store_name_len: u8,
name_len: u16,
/* snapshot name follows */
/* datastore name, ':', snapshot name follows */
}

View File
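The central change in media_catalog.rs is that the flat chunk_index/snapshot_index pair becomes a per-datastore map, so contains_snapshot and friends now take the datastore name as an extra key. A reduced, std-only model of that lookup structure (type and method names mirror the patch, the snapshot strings are made up):

use std::collections::HashMap;

/// Per-datastore view of what a tape contains; the values are the tape
/// file numbers holding the snapshot archives.
#[derive(Default)]
struct DatastoreContent {
    snapshot_index: HashMap<String, u64>,
}

#[derive(Default)]
struct Catalog {
    content: HashMap<String, DatastoreContent>,
}

impl Catalog {
    fn register_snapshot(&mut self, store: &str, snapshot: &str, file_nr: u64) {
        self.content
            .entry(store.to_string())
            .or_insert_with(DatastoreContent::default)
            .snapshot_index
            .insert(snapshot.to_string(), file_nr);
    }

    fn contains_snapshot(&self, store: &str, snapshot: &str) -> bool {
        match self.content.get(store) {
            None => false,
            Some(content) => content.snapshot_index.contains_key(snapshot),
        }
    }
}

fn main() {
    let mut catalog = Catalog::default();
    catalog.register_snapshot("store1", "vm/100/2021-03-29T10:00:00Z", 2);
    // The same snapshot name under another datastore is a separate entry now.
    assert!(catalog.contains_snapshot("store1", "vm/100/2021-03-29T10:00:00Z"));
    assert!(!catalog.contains_snapshot("store2", "vm/100/2021-03-29T10:00:00Z"));
}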

@ -3,11 +3,13 @@
//! A set of backup medias.
//!
//! This struct manages backup media state during backup. The main
//! purpose is to allocate media sets and assing new tapes to it.
//! purpose is to allocate media sets and assign new tapes to it.
//!
//!
use std::path::Path;
use std::path::{PathBuf, Path};
use std::fs::File;
use anyhow::{bail, Error};
use ::serde::{Deserialize, Serialize};
@ -27,6 +29,9 @@ use crate::{
MediaId,
MediaSet,
Inventory,
lock_media_set,
lock_media_pool,
lock_unassigned_media_pool,
file_formats::{
MediaLabel,
MediaSetLabel,
@ -34,13 +39,11 @@ use crate::{
}
};
/// Media Pool lock guard
pub struct MediaPoolLockGuard(std::fs::File);
/// Media Pool
pub struct MediaPool {
name: String,
state_path: PathBuf,
media_set_policy: MediaSetPolicy,
retention: RetentionPolicy,
@ -48,11 +51,16 @@ pub struct MediaPool {
changer_name: Option<String>,
force_media_availability: bool,
// Set this if you do not need to allocate writeable media - this
// is useful for list_media()
no_media_set_locking: bool,
encrypt_fingerprint: Option<Fingerprint>,
inventory: Inventory,
current_media_set: MediaSet,
current_media_set_lock: Option<File>,
}
impl MediaPool {
@ -71,8 +79,15 @@ impl MediaPool {
retention: RetentionPolicy,
changer_name: Option<String>,
encrypt_fingerprint: Option<Fingerprint>,
no_media_set_locking: bool, // for list_media()
) -> Result<Self, Error> {
let _pool_lock = if no_media_set_locking {
None
} else {
Some(lock_media_pool(state_path, name)?)
};
let inventory = Inventory::load(state_path)?;
let current_media_set = match inventory.latest_media_set(name) {
@ -80,15 +95,24 @@ impl MediaPool {
None => MediaSet::new(),
};
let current_media_set_lock = if no_media_set_locking {
None
} else {
Some(lock_media_set(state_path, current_media_set.uuid(), None)?)
};
Ok(MediaPool {
name: String::from(name),
state_path: state_path.to_owned(),
media_set_policy,
retention,
changer_name,
inventory,
current_media_set,
current_media_set_lock,
encrypt_fingerprint,
force_media_availability: false,
no_media_set_locking,
})
}
@ -99,9 +123,9 @@ impl MediaPool {
self.force_media_availability = true;
}
/// Returns the Uuid of the current media set
pub fn current_media_set(&self) -> &Uuid {
self.current_media_set.uuid()
/// Returns the current media set
pub fn current_media_set(&self) -> &MediaSet {
&self.current_media_set
}
/// Creates a new instance using the media pool configuration
@ -109,6 +133,7 @@ impl MediaPool {
state_path: &Path,
config: &MediaPoolConfig,
changer_name: Option<String>,
no_media_set_locking: bool, // for list_media()
) -> Result<Self, Error> {
let allocation = config.allocation.clone().unwrap_or_else(|| String::from("continue")).parse()?;
@ -127,6 +152,7 @@ impl MediaPool {
retention,
changer_name,
encrypt_fingerprint,
no_media_set_locking,
)
}
@ -135,7 +161,7 @@ impl MediaPool {
&self.name
}
/// Retruns encryption settings
/// Returns encryption settings
pub fn encrypt_fingerprint(&self) -> Option<Fingerprint> {
self.encrypt_fingerprint.clone()
}
@ -237,9 +263,20 @@ impl MediaPool {
/// status, so this must not change persistent/saved state.
///
/// Returns the reason why we started a new media set (if we do)
pub fn start_write_session(&mut self, current_time: i64) -> Result<Option<String>, Error> {
pub fn start_write_session(
&mut self,
current_time: i64,
) -> Result<Option<String>, Error> {
let mut create_new_set = match self.current_set_usable() {
let _pool_lock = if self.no_media_set_locking {
None
} else {
Some(lock_media_pool(&self.state_path, &self.name)?)
};
self.inventory.reload()?;
let mut create_new_set = match self.current_set_usable() {
Err(err) => {
Some(err.to_string())
}
@ -266,6 +303,14 @@ impl MediaPool {
if create_new_set.is_some() {
let media_set = MediaSet::new();
let current_media_set_lock = if self.no_media_set_locking {
None
} else {
Some(lock_media_set(&self.state_path, media_set.uuid(), None)?)
};
self.current_media_set_lock = current_media_set_lock;
self.current_media_set = media_set;
}
@ -284,7 +329,7 @@ impl MediaPool {
Ok(list)
}
// tests if the media data is considered as expired at sepcified time
// tests if the media data is considered as expired at specified time
pub fn media_is_expired(&self, media: &BackupMedia, current_time: i64) -> bool {
if media.status() != &MediaStatus::Full {
return false;
@ -325,6 +370,10 @@ impl MediaPool {
fn add_media_to_current_set(&mut self, mut media_id: MediaId, current_time: i64) -> Result<(), Error> {
if self.current_media_set_lock.is_none() {
bail!("add_media_to_current_set: media set is not locked - internal error");
}
let seq_nr = self.current_media_set.media_list().len() as u64;
let pool = self.name.clone();
@ -355,6 +404,10 @@ impl MediaPool {
/// Allocates a writable media to the current media set
pub fn alloc_writable_media(&mut self, current_time: i64) -> Result<Uuid, Error> {
if self.current_media_set_lock.is_none() {
bail!("alloc_writable_media: media set is not locked - internal error");
}
let last_is_writable = self.current_set_usable()?;
if last_is_writable {
@ -365,71 +418,98 @@ impl MediaPool {
// try to find empty media in pool, add to media set
let media_list = self.list_media();
{ // limit pool lock scope
let _pool_lock = lock_media_pool(&self.state_path, &self.name)?;
let mut empty_media = Vec::new();
let mut used_media = Vec::new();
self.inventory.reload()?;
for media in media_list.into_iter() {
if !self.location_is_available(media.location()) {
continue;
}
// already part of a media set?
if media.media_set_label().is_some() {
used_media.push(media);
} else {
// only consider writable empty media
if media.status() == &MediaStatus::Writable {
empty_media.push(media);
}
}
}
let media_list = self.list_media();
// sort empty_media, newest first -> oldest last
empty_media.sort_unstable_by(|a, b| b.label().ctime.cmp(&a.label().ctime));
let mut empty_media = Vec::new();
let mut used_media = Vec::new();
if let Some(media) = empty_media.pop() {
// found empty media, add to media set an use it
let uuid = media.uuid().clone();
self.add_media_to_current_set(media.into_id(), current_time)?;
return Ok(uuid);
}
println!("no empty media in pool, try to reuse expired media");
let mut expired_media = Vec::new();
for media in used_media.into_iter() {
if let Some(set) = media.media_set_label() {
if &set.uuid == self.current_media_set.uuid() {
for media in media_list.into_iter() {
if !self.location_is_available(media.location()) {
continue;
}
} else {
continue;
// already part of a media set?
if media.media_set_label().is_some() {
used_media.push(media);
} else {
// only consider writable empty media
if media.status() == &MediaStatus::Writable {
empty_media.push(media);
}
}
}
if self.media_is_expired(&media, current_time) {
println!("found expired media on media '{}'", media.label_text());
expired_media.push(media);
// sort empty_media, newest first -> oldest last
empty_media.sort_unstable_by(|a, b| {
let mut res = b.label().ctime.cmp(&a.label().ctime);
if res == std::cmp::Ordering::Equal {
res = b.label().label_text.cmp(&a.label().label_text);
}
res
});
if let Some(media) = empty_media.pop() {
// found empty media, add to media set and use it
let uuid = media.uuid().clone();
self.add_media_to_current_set(media.into_id(), current_time)?;
return Ok(uuid);
}
}
// sort expired_media, newest first -> oldest last
expired_media.sort_unstable_by(|a, b| {
b.media_set_label().unwrap().ctime.cmp(&a.media_set_label().unwrap().ctime)
});
println!("no empty media in pool, try to reuse expired media");
if let Some(media) = expired_media.pop() {
println!("reuse expired media '{}'", media.label_text());
let uuid = media.uuid().clone();
self.add_media_to_current_set(media.into_id(), current_time)?;
return Ok(uuid);
let mut expired_media = Vec::new();
for media in used_media.into_iter() {
if let Some(set) = media.media_set_label() {
if &set.uuid == self.current_media_set.uuid() {
continue;
}
} else {
continue;
}
if self.media_is_expired(&media, current_time) {
println!("found expired media on media '{}'", media.label_text());
expired_media.push(media);
}
}
// sort expired_media, newest first -> oldest last
expired_media.sort_unstable_by(|a, b| {
let mut res = b.media_set_label().unwrap().ctime.cmp(&a.media_set_label().unwrap().ctime);
if res == std::cmp::Ordering::Equal {
res = b.label().label_text.cmp(&a.label().label_text);
}
res
});
while let Some(media) = expired_media.pop() {
// check if we can modify the media-set (i.e. skip
// media used by a restore job)
if let Ok(_media_set_lock) = lock_media_set(
&self.state_path,
&media.media_set_label().unwrap().uuid,
Some(std::time::Duration::new(0, 0)), // do not wait
) {
println!("reuse expired media '{}'", media.label_text());
let uuid = media.uuid().clone();
self.add_media_to_current_set(media.into_id(), current_time)?;
return Ok(uuid);
}
}
}
println!("no expired media in pool, try to find unassigned/free media");
// try unassigned media
// fixme: lock free media pool to avoid races
let _lock = lock_unassigned_media_pool(&self.state_path)?;
self.inventory.reload()?;
let mut free_media = Vec::new();
for media_id in self.inventory.list_unassigned_media() {
@ -447,6 +527,15 @@ impl MediaPool {
free_media.push(media_id);
}
// sort free_media, newest first -> oldest last
free_media.sort_unstable_by(|a, b| {
let mut res = b.label.ctime.cmp(&a.label.ctime);
if res == std::cmp::Ordering::Equal {
res = b.label.label_text.cmp(&a.label.label_text);
}
res
});
if let Some(media_id) = free_media.pop() {
println!("use free media '{}'", media_id.label.label_text);
let uuid = media_id.label.uuid.clone();
@ -537,17 +626,6 @@ impl MediaPool {
self.inventory.generate_media_set_name(media_set_uuid, template)
}
/// Lock the pool
pub fn lock(base_path: &Path, name: &str) -> Result<MediaPoolLockGuard, Error> {
let mut path = base_path.to_owned();
path.push(format!(".{}", name));
path.set_extension("lck");
let timeout = std::time::Duration::new(10, 0);
let lock = proxmox::tools::fs::open_file_locked(&path, timeout, true)?;
Ok(MediaPoolLockGuard(lock))
}
}
/// Backup media

View File
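alloc_writable_media now sorts each candidate list newest first by creation time and uses the label text as a tie breaker, so allocation stays deterministic when several tapes share a ctime; pop() then takes the oldest candidate from the end. The sort in isolation, with made-up media entries:

struct Media {
    ctime: i64,
    label_text: String,
}

fn main() {
    let mut media = vec![
        Media { ctime: 200, label_text: "tape-b".into() },
        Media { ctime: 100, label_text: "tape-a".into() },
        Media { ctime: 200, label_text: "tape-a".into() },
    ];
    // newest first -> oldest last, ties broken by label text
    media.sort_unstable_by(|a, b| {
        let mut res = b.ctime.cmp(&a.ctime);
        if res == std::cmp::Ordering::Equal {
            res = b.label_text.cmp(&a.label_text);
        }
        res
    });
    assert_eq!(media[0].label_text, "tape-b");
    assert_eq!(media[1].label_text, "tape-a");
    // pop() hands out the oldest candidate
    assert_eq!(media.pop().unwrap().ctime, 100);
}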

@ -48,7 +48,7 @@ impl MediaSet {
let seq_nr = seq_nr as usize;
if self.media_list.len() > seq_nr {
if self.media_list[seq_nr].is_some() {
bail!("found duplicate squence number in media set '{}/{}'",
bail!("found duplicate sequence number in media set '{}/{}'",
self.uuid.to_string(), seq_nr);
}
} else {

View File

@ -0,0 +1,118 @@
use anyhow::{bail, Error};
use proxmox::tools::Uuid;
use crate::{
tape::{
MediaCatalog,
MediaSetCatalog,
},
};
/// Helper to build and query sets of catalogs
///
/// Similar to MediaSetCatalog, but allows to modify the last catalog.
pub struct CatalogSet {
// read only part
pub media_set_catalog: MediaSetCatalog,
// catalog to modify (latest in set)
pub catalog: Option<MediaCatalog>,
}
impl CatalogSet {
/// Create empty instance
pub fn new() -> Self {
Self {
media_set_catalog: MediaSetCatalog::new(),
catalog: None,
}
}
/// Add catalog to the read-only set
pub fn append_read_only_catalog(&mut self, catalog: MediaCatalog) -> Result<(), Error> {
self.media_set_catalog.append_catalog(catalog)
}
/// Test if the catalog already contains a snapshot
pub fn contains_snapshot(&self, store: &str, snapshot: &str) -> bool {
if let Some(ref catalog) = self.catalog {
if catalog.contains_snapshot(store, snapshot) {
return true;
}
}
self.media_set_catalog.contains_snapshot(store, snapshot)
}
/// Test if the catalog already contains a chunk
pub fn contains_chunk(&self, store: &str, digest: &[u8;32]) -> bool {
if let Some(ref catalog) = self.catalog {
if catalog.contains_chunk(store, digest) {
return true;
}
}
self.media_set_catalog.contains_chunk(store, digest)
}
/// Add a new catalog, move the old on to the read-only set
pub fn append_catalog(&mut self, new_catalog: MediaCatalog) -> Result<(), Error> {
// append current catalog to read-only set
if let Some(catalog) = self.catalog.take() {
self.media_set_catalog.append_catalog(catalog)?;
}
// remove read-only version from set (in case it is there)
self.media_set_catalog.remove_catalog(&new_catalog.uuid());
self.catalog = Some(new_catalog);
Ok(())
}
/// Register a snapshot
pub fn register_snapshot(
&mut self,
uuid: Uuid, // Uuid from MediaContentHeader
file_number: u64,
store: &str,
snapshot: &str,
) -> Result<(), Error> {
match self.catalog {
Some(ref mut catalog) => {
catalog.register_snapshot(uuid, file_number, store, snapshot)?;
}
None => bail!("no catalog loaded - internal error"),
}
Ok(())
}
/// Register a chunk archive
pub fn register_chunk_archive(
&mut self,
uuid: Uuid, // Uuid from MediaContentHeader
file_number: u64,
store: &str,
chunk_list: &[[u8; 32]],
) -> Result<(), Error> {
match self.catalog {
Some(ref mut catalog) => {
catalog.start_chunk_archive(uuid, file_number, store)?;
for digest in chunk_list {
catalog.register_chunk(digest)?;
}
catalog.end_chunk_archive()?;
}
None => bail!("no catalog loaded - internal error"),
}
Ok(())
}
/// Commit the catalog changes
pub fn commit(&mut self) -> Result<(), Error> {
if let Some(ref mut catalog) = self.catalog {
catalog.commit()?;
}
Ok(())
}
}

View File
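CatalogSet keeps the catalogs of already-written media read-only and exactly one writable catalog for the currently loaded tape; append_catalog rotates the previous writable catalog into the read-only set and drops any stale read-only copy of the new one. A reduced model of that rotation and lookup order (uuids and snapshot names are made up):

use std::collections::HashMap;

/// Lookups consult the writable catalog first, then the read-only
/// catalogs of the other media in the set.
struct CatalogSet {
    read_only: HashMap<String, Vec<String>>, // media uuid -> snapshots
    writable: Option<(String, Vec<String>)>,
}

impl CatalogSet {
    fn new() -> Self {
        Self { read_only: HashMap::new(), writable: None }
    }

    /// Make `uuid` the writable catalog; the previous writable one becomes
    /// read-only, and any stale read-only copy of `uuid` is dropped.
    fn append_catalog(&mut self, uuid: &str) {
        if let Some((old_uuid, snapshots)) = self.writable.take() {
            self.read_only.insert(old_uuid, snapshots);
        }
        self.read_only.remove(uuid);
        self.writable = Some((uuid.to_string(), Vec::new()));
    }

    fn register_snapshot(&mut self, snapshot: &str) {
        if let Some((_, ref mut snapshots)) = self.writable {
            snapshots.push(snapshot.to_string());
        }
    }

    fn contains_snapshot(&self, snapshot: &str) -> bool {
        if let Some((_, ref snapshots)) = self.writable {
            if snapshots.iter().any(|s| s == snapshot) {
                return true;
            }
        }
        self.read_only.values().any(|list| list.iter().any(|s| s == snapshot))
    }
}

fn main() {
    let mut set = CatalogSet::new();
    set.append_catalog("tape-1");
    set.register_snapshot("snap-a");
    set.append_catalog("tape-2"); // tape-1 becomes read-only
    set.register_snapshot("snap-b");
    assert!(set.contains_snapshot("snap-a"));
    assert!(set.contains_snapshot("snap-b"));
}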

@ -1,6 +1,13 @@
use std::collections::HashSet;
mod catalog_set;
pub use catalog_set::*;
mod new_chunks_iterator;
pub use new_chunks_iterator::*;
use std::path::Path;
use std::fs::File;
use std::time::SystemTime;
use std::sync::{Arc, Mutex};
use anyhow::{bail, Error};
@ -18,15 +25,14 @@ use crate::{
COMMIT_BLOCK_SIZE,
TapeWrite,
SnapshotReader,
SnapshotChunkIterator,
MediaPool,
MediaId,
MediaCatalog,
MediaSetCatalog,
file_formats::{
MediaSetLabel,
ChunkArchiveWriter,
tape_write_snapshot_archive,
tape_write_catalog,
},
drive::{
TapeDriver,
@ -41,35 +47,31 @@ use crate::{
struct PoolWriterState {
drive: Box<dyn TapeDriver>,
catalog: MediaCatalog,
// Media Uuid from loaded media
media_uuid: Uuid,
// tell if we already moved to EOM
at_eom: bool,
// bytes written after the last tape flush/sync
bytes_written: usize,
}
impl PoolWriterState {
fn commit(&mut self) -> Result<(), Error> {
self.drive.sync()?; // sync all data to the tape
self.catalog.commit()?; // then commit the catalog
self.bytes_written = 0;
Ok(())
}
}
/// Helper to manage a backup job, writing several tapes of a pool
pub struct PoolWriter {
pool: MediaPool,
drive_name: String,
status: Option<PoolWriterState>,
media_set_catalog: MediaSetCatalog,
catalog_set: Arc<Mutex<CatalogSet>>,
notify_email: Option<String>,
}
impl PoolWriter {
pub fn new(mut pool: MediaPool, drive_name: &str, worker: &WorkerTask, notify_email: Option<String>) -> Result<Self, Error> {
pub fn new(
mut pool: MediaPool,
drive_name: &str,
worker: &WorkerTask,
notify_email: Option<String>,
) -> Result<Self, Error> {
let current_time = proxmox::tools::time::epoch_i64();
@ -82,26 +84,28 @@ impl PoolWriter {
);
}
task_log!(worker, "media set uuid: {}", pool.current_media_set());
let media_set_uuid = pool.current_media_set().uuid();
task_log!(worker, "media set uuid: {}", media_set_uuid);
let mut media_set_catalog = MediaSetCatalog::new();
let mut catalog_set = CatalogSet::new();
// load all catalogs read-only at start
for media_uuid in pool.current_media_list()? {
let media_info = pool.lookup_media(media_uuid).unwrap();
let media_catalog = MediaCatalog::open(
Path::new(TAPE_STATUS_DIR),
&media_uuid,
media_info.id(),
false,
false,
)?;
media_set_catalog.append_catalog(media_catalog)?;
catalog_set.append_read_only_catalog(media_catalog)?;
}
Ok(Self {
pool,
drive_name: drive_name.to_string(),
status: None,
media_set_catalog,
catalog_set: Arc::new(Mutex::new(catalog_set)),
notify_email,
})
}
@ -116,13 +120,8 @@ impl PoolWriter {
Ok(())
}
pub fn contains_snapshot(&self, snapshot: &str) -> bool {
if let Some(PoolWriterState { ref catalog, .. }) = self.status {
if catalog.contains_snapshot(snapshot) {
return true;
}
}
self.media_set_catalog.contains_snapshot(snapshot)
pub fn contains_snapshot(&self, store: &str, snapshot: &str) -> bool {
self.catalog_set.lock().unwrap().contains_snapshot(store, snapshot)
}
/// Eject media and drop PoolWriterState (close drive)
@ -188,16 +187,17 @@ impl PoolWriter {
/// This is done automatically during a backup session, but needs to
/// be called explicitly before dropping the PoolWriter
pub fn commit(&mut self) -> Result<(), Error> {
if let Some(ref mut status) = self.status {
status.commit()?;
if let Some(PoolWriterState {ref mut drive, .. }) = self.status {
drive.sync()?; // sync all data to the tape
}
self.catalog_set.lock().unwrap().commit()?; // then commit the catalog
Ok(())
}
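A hedged illustration of the note above (not the crate's actual job code): intermediate commits happen on their own once enough data was written, but the last one has to be explicit so both the tape sync and the catalog commit run before the PoolWriter is dropped. `pool_writer`, `worker`, `snapshots` and `write_one_snapshot` are assumed or hypothetical names.
// Sketch only: write_one_snapshot() is a hypothetical helper standing in
// for the snapshot/chunk archive calls shown further down in this diff.
for snapshot in snapshots {
    write_one_snapshot(&mut pool_writer, worker, snapshot)?;
}
pool_writer.commit()?; // sync the drive, then commit the catalog set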
/// Load a writable media into the drive
pub fn load_writable_media(&mut self, worker: &WorkerTask) -> Result<Uuid, Error> {
let last_media_uuid = match self.status {
Some(PoolWriterState { ref catalog, .. }) => Some(catalog.uuid().clone()),
Some(PoolWriterState { ref media_uuid, ..}) => Some(media_uuid.clone()),
None => None,
};
@ -217,13 +217,11 @@ impl PoolWriter {
task_log!(worker, "allocated new writable media '{}'", media.label_text());
// remove read-only catalog (we store a writable version in status)
self.media_set_catalog.remove_catalog(&media_uuid);
if let Some(PoolWriterState {mut drive, catalog, .. }) = self.status.take() {
self.media_set_catalog.append_catalog(catalog)?;
task_log!(worker, "eject current media");
drive.eject_media()?;
if let Some(PoolWriterState {mut drive, .. }) = self.status.take() {
if last_media_uuid.is_some() {
task_log!(worker, "eject current media");
drive.eject_media()?;
}
}
let (drive_config, _digest) = crate::config::drive::config()?;
@ -242,13 +240,15 @@ impl PoolWriter {
}
}
let catalog = update_media_set_label(
let (catalog, is_new_media) = update_media_set_label(
worker,
drive.as_mut(),
old_media_id.media_set_label,
media.id(),
)?;
self.catalog_set.lock().unwrap().append_catalog(catalog)?;
let media_set = media.media_set_label().clone().unwrap();
let encrypt_fingerprint = media_set
@ -258,20 +258,163 @@ impl PoolWriter {
drive.set_encryption(encrypt_fingerprint)?;
self.status = Some(PoolWriterState { drive, catalog, at_eom: false, bytes_written: 0 });
self.status = Some(PoolWriterState {
drive,
media_uuid: media_uuid.clone(),
at_eom: false,
bytes_written: 0,
});
if is_new_media {
// add catalogs from previous media
self.append_media_set_catalogs(worker)?;
}
Ok(media_uuid)
}
/// uuid of currently loaded BackupMedia
pub fn current_media_uuid(&self) -> Result<&Uuid, Error> {
match self.status {
Some(PoolWriterState { ref catalog, ..}) => Ok(catalog.uuid()),
None => bail!("PoolWriter - no media loaded"),
}
fn open_catalog_file(uuid: &Uuid) -> Result<File, Error> {
let status_path = Path::new(TAPE_STATUS_DIR);
let mut path = status_path.to_owned();
path.push(uuid.to_string());
path.set_extension("log");
let file = std::fs::OpenOptions::new()
.read(true)
.open(&path)?;
Ok(file)
}
/// Move to EOM (if not aleady there), then creates a new snapshot
// Check if a tape is loaded, then move to EOM (if not already there)
//
// Returns the tape position at EOM.
fn prepare_tape_write(
status: &mut PoolWriterState,
worker: &WorkerTask,
) -> Result<u64, Error> {
if !status.at_eom {
worker.log(String::from("moving to end of media"));
status.drive.move_to_eom()?;
status.at_eom = true;
}
let current_file_number = status.drive.current_file_number()?;
if current_file_number < 2 {
bail!("got strange file position number from drive ({})", current_file_number);
}
Ok(current_file_number)
}
/// Move to EOM (if not already there), then write the current
/// catalog to the tape. On success, this returns 'Ok(true)'.
/// Please note that this may fail when there is not enough space
/// on the media (return value 'Ok(false)'). In that case, the
/// archive is marked incomplete. The caller should mark the media
/// as full and try again using another media.
pub fn append_catalog_archive(
&mut self,
worker: &WorkerTask,
) -> Result<bool, Error> {
let status = match self.status {
Some(ref mut status) => status,
None => bail!("PoolWriter - no media loaded"),
};
Self::prepare_tape_write(status, worker)?;
let catalog_set = self.catalog_set.lock().unwrap();
let catalog = match catalog_set.catalog {
None => bail!("append_catalog_archive failed: no catalog - internal error"),
Some(ref catalog) => catalog,
};
let media_set = self.pool.current_media_set();
let media_list = media_set.media_list();
let uuid = match media_list.last() {
None => bail!("got empty media list - internal error"),
Some(None) => bail!("got incomplete media list - internal error"),
Some(Some(last_uuid)) => {
if last_uuid != catalog.uuid() {
bail!("got wrong media - internal error");
}
last_uuid
}
};
let seq_nr = media_list.len() - 1;
let mut writer: Box<dyn TapeWrite> = status.drive.write_file()?;
let mut file = Self::open_catalog_file(uuid)?;
let done = tape_write_catalog(
writer.as_mut(),
uuid,
media_set.uuid(),
seq_nr,
&mut file,
)?.is_some();
Ok(done)
}
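A hedged sketch of the caller-side handling the doc comment above asks for; `pool_writer` and `worker` come from the surrounding task, and `switch_to_next_media` is a hypothetical helper standing in for the real "mark media full, load the next one" logic:
// Sketch only, not the crate's actual tape-backup job code.
if !pool_writer.append_catalog_archive(worker)? {
    // Not enough space: switch to a fresh media, then retry the catalog archive.
    switch_to_next_media(&mut pool_writer, worker)?; // hypothetical helper
    pool_writer.append_catalog_archive(worker)?;
}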
// Append catalogs for all previous media in set (without last)
fn append_media_set_catalogs(
&mut self,
worker: &WorkerTask,
) -> Result<(), Error> {
let media_set = self.pool.current_media_set();
let mut media_list = &media_set.media_list()[..];
if media_list.len() < 2 {
return Ok(());
}
media_list = &media_list[..(media_list.len()-1)];
let status = match self.status {
Some(ref mut status) => status,
None => bail!("PoolWriter - no media loaded"),
};
Self::prepare_tape_write(status, worker)?;
for (seq_nr, uuid) in media_list.iter().enumerate() {
let uuid = match uuid {
None => bail!("got incomplete media list - internal error"),
Some(uuid) => uuid,
};
let mut writer: Box<dyn TapeWrite> = status.drive.write_file()?;
let mut file = Self::open_catalog_file(uuid)?;
task_log!(worker, "write catalog for previous media: {}", uuid);
if tape_write_catalog(
writer.as_mut(),
uuid,
media_set.uuid(),
seq_nr,
&mut file,
)?.is_none() {
bail!("got EOM while writing start catalog");
}
}
Ok(())
}
/// Move to EOM (if not already there), then creates a new snapshot
/// archive writing specified files (as .pxar) into it. On
/// success, this returns 'Ok(true)' and the media catalog gets
/// updated.
@ -292,25 +435,17 @@ impl PoolWriter {
None => bail!("PoolWriter - no media loaded"),
};
if !status.at_eom {
worker.log(String::from("moving to end of media"));
status.drive.move_to_eom()?;
status.at_eom = true;
}
let current_file_number = status.drive.current_file_number()?;
if current_file_number < 2 {
bail!("got strange file position number from drive ({})", current_file_number);
}
let current_file_number = Self::prepare_tape_write(status, worker)?;
let (done, bytes_written) = {
let mut writer: Box<dyn TapeWrite> = status.drive.write_file()?;
match tape_write_snapshot_archive(writer.as_mut(), snapshot_reader)? {
Some(content_uuid) => {
status.catalog.register_snapshot(
self.catalog_set.lock().unwrap().register_snapshot(
content_uuid,
current_file_number,
&snapshot_reader.datastore_name().to_string(),
&snapshot_reader.snapshot().to_string(),
)?;
(true, writer.bytes_written())
@ -324,21 +459,21 @@ impl PoolWriter {
let request_sync = status.bytes_written >= COMMIT_BLOCK_SIZE;
if !done || request_sync {
status.commit()?;
self.commit()?;
}
Ok((done, bytes_written))
}
/// Move to EOM (if not aleady there), then creates a new chunk
/// Move to EOM (if not already there), then creates a new chunk
/// archive and writes chunks from 'chunk_iter'. This stops when
/// it detects LEOM or when we reach the max archive size
/// (4GB). Written chunks are registered in the media catalog.
pub fn append_chunk_archive(
&mut self,
worker: &WorkerTask,
datastore: &DataStore,
chunk_iter: &mut std::iter::Peekable<SnapshotChunkIterator>,
chunk_iter: &mut std::iter::Peekable<NewChunksIterator>,
store: &str,
) -> Result<(bool, usize), Error> {
let status = match self.status {
@ -346,16 +481,8 @@ impl PoolWriter {
None => bail!("PoolWriter - no media loaded"),
};
if !status.at_eom {
worker.log(String::from("moving to end of media"));
status.drive.move_to_eom()?;
status.at_eom = true;
}
let current_file_number = Self::prepare_tape_write(status, worker)?;
let current_file_number = status.drive.current_file_number()?;
if current_file_number < 2 {
bail!("got strange file position number from drive ({})", current_file_number);
}
let writer = status.drive.write_file()?;
let start_time = SystemTime::now();
@ -363,10 +490,8 @@ impl PoolWriter {
let (saved_chunks, content_uuid, leom, bytes_written) = write_chunk_archive(
worker,
writer,
datastore,
chunk_iter,
&self.media_set_catalog,
&status.catalog,
store,
MAX_CHUNK_ARCHIVE_SIZE,
)?;
@ -374,42 +499,48 @@ impl PoolWriter {
let elapsed = start_time.elapsed()?.as_secs_f64();
worker.log(format!(
"wrote {:.2} MB ({} MB/s)",
bytes_written as f64 / (1024.0*1024.0),
(bytes_written as f64)/(1024.0*1024.0*elapsed),
"wrote {} chunks ({:.2} MB at {:.2} MB/s)",
saved_chunks.len(),
bytes_written as f64 /1_000_000.0,
(bytes_written as f64)/(1_000_000.0*elapsed),
));
let request_sync = status.bytes_written >= COMMIT_BLOCK_SIZE;
// register chunks in media_catalog
status.catalog.start_chunk_archive(content_uuid, current_file_number)?;
for digest in saved_chunks {
status.catalog.register_chunk(&digest)?;
}
status.catalog.end_chunk_archive()?;
self.catalog_set.lock().unwrap()
.register_chunk_archive(content_uuid, current_file_number, store, &saved_chunks)?;
if leom || request_sync {
status.commit()?;
self.commit()?;
}
Ok((leom, bytes_written))
}
pub fn spawn_chunk_reader_thread(
&self,
datastore: Arc<DataStore>,
snapshot_reader: Arc<Mutex<SnapshotReader>>,
) -> Result<(std::thread::JoinHandle<()>, NewChunksIterator), Error> {
NewChunksIterator::spawn(
datastore,
snapshot_reader,
Arc::clone(&self.catalog_set),
)
}
}
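A hedged sketch of how the reader thread and the writer side are meant to fit together (the crate's real backup job differs in detail); `datastore`, `snapshot_reader`, `worker` and `store` are assumed to be set up by the surrounding task:
// Sketch only: spawn the reader, drain its iterator into chunk archives,
// then join the thread so a panic there is not silently lost.
let (reader_thread, chunk_iter) =
    pool_writer.spawn_chunk_reader_thread(datastore.clone(), snapshot_reader.clone())?;
let mut chunk_iter = chunk_iter.peekable();
loop {
    let (leom, _bytes) = pool_writer.append_chunk_archive(worker, &mut chunk_iter, store)?;
    if leom {
        break; // media full - the real job would switch media and continue
    }
    if chunk_iter.peek().is_none() {
        break; // no more new chunks
    }
}
if reader_thread.join().is_err() {
    bail!("chunk reader thread failed");
}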
/// Write up to <max_size> bytes of chunk data
fn write_chunk_archive<'a>(
worker: &WorkerTask,
_worker: &WorkerTask,
writer: Box<dyn 'a + TapeWrite>,
datastore: &DataStore,
chunk_iter: &mut std::iter::Peekable<SnapshotChunkIterator>,
media_set_catalog: &MediaSetCatalog,
media_catalog: &MediaCatalog,
chunk_iter: &mut std::iter::Peekable<NewChunksIterator>,
store: &str,
max_size: usize,
) -> Result<(Vec<[u8;32]>, Uuid, bool, usize), Error> {
let (mut writer, content_uuid) = ChunkArchiveWriter::new(writer, true)?;
let mut chunk_index: HashSet<[u8;32]> = HashSet::new();
let (mut writer, content_uuid) = ChunkArchiveWriter::new(writer, store, true)?;
// we want to get the chunk list in correct order
let mut chunk_list: Vec<[u8;32]> = Vec::new();
@ -417,26 +548,21 @@ fn write_chunk_archive<'a>(
let mut leom = false;
loop {
let digest = match chunk_iter.next() {
let (digest, blob) = match chunk_iter.peek() {
None => break,
Some(digest) => digest?,
Some(Ok((digest, blob))) => (digest, blob),
Some(Err(err)) => bail!("{}", err),
};
if media_catalog.contains_chunk(&digest)
|| chunk_index.contains(&digest)
|| media_set_catalog.contains_chunk(&digest)
{
continue;
}
let blob = datastore.load_chunk(&digest)?;
//println!("CHUNK {} size {}", proxmox::tools::digest_to_hex(&digest), blob.raw_size());
//println!("CHUNK {} size {}", proxmox::tools::digest_to_hex(digest), blob.raw_size());
match writer.try_write_chunk(&digest, &blob) {
Ok(true) => {
chunk_index.insert(digest);
chunk_list.push(digest);
Ok(true) => {
chunk_list.push(*digest);
chunk_iter.next(); // consume
}
Ok(false) => {
// Note: we do not consume the chunk (no chunk_iter.next())
leom = true;
break;
}
@ -444,7 +570,7 @@ fn write_chunk_archive<'a>(
}
if writer.bytes_written() > max_size {
worker.log("Chunk Archive max size reached, closing archive".to_string());
//worker.log("Chunk Archive max size reached, closing archive".to_string());
break;
}
}
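The peek-before-consume dance above is what keeps a chunk from being lost when the writer reports "no more room"; a small standalone illustration of the same pattern:
// Standalone example (plain integers instead of chunks): only advance the
// iterator after the item was actually accepted, so a full buffer leaves
// the pending item in place for the next archive.
fn fill(buf: &mut Vec<u32>, cap: usize, items: &mut std::iter::Peekable<std::vec::IntoIter<u32>>) -> bool {
    loop {
        let item = match items.peek() {
            Some(item) => *item,
            None => return true, // everything consumed
        };
        if buf.len() >= cap {
            return false; // "full" - the peeked item stays in the iterator
        }
        buf.push(item);
        items.next(); // consume only after a successful write
    }
}

fn main() {
    let mut items = vec![1, 2, 3, 4, 5].into_iter().peekable();
    let mut buf = Vec::new();
    assert!(!fill(&mut buf, 3, &mut items));
    assert_eq!(items.peek(), Some(&4)); // item 4 was not dropped
}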
@ -462,7 +588,7 @@ fn update_media_set_label(
drive: &mut dyn TapeDriver,
old_set: Option<MediaSetLabel>,
media_id: &MediaId,
) -> Result<MediaCatalog, Error> {
) -> Result<(MediaCatalog, bool), Error> {
let media_catalog;
@ -485,11 +611,12 @@ fn update_media_set_label(
let status_path = Path::new(TAPE_STATUS_DIR);
match old_set {
let new_media = match old_set {
None => {
worker.log("writing new media set label".to_string());
drive.write_media_set_label(new_set, key_config.as_ref())?;
media_catalog = MediaCatalog::overwrite(status_path, media_id, false)?;
true
}
Some(media_set_label) => {
if new_set.uuid == media_set_label.uuid {
@ -500,7 +627,11 @@ fn update_media_set_label(
if new_set.encryption_key_fingerprint != media_set_label.encryption_key_fingerprint {
bail!("detected changed encryption fingerprint - internal error");
}
media_catalog = MediaCatalog::open(status_path, &media_id.label.uuid, true, false)?;
media_catalog = MediaCatalog::open(status_path, &media_id, true, false)?;
// todo: verify last content/media_catalog somehow?
false
} else {
worker.log(
format!("writing new media set label (overwrite '{}/{}')",
@ -509,12 +640,10 @@ fn update_media_set_label(
drive.write_media_set_label(new_set, key_config.as_ref())?;
media_catalog = MediaCatalog::overwrite(status_path, media_id, false)?;
true
}
}
}
};
// todo: verify last content/media_catalog somehow?
drive.move_to_eom()?; // just to be sure
Ok(media_catalog)
Ok((media_catalog, new_media))
}

View File

@ -0,0 +1,99 @@
use std::collections::HashSet;
use std::sync::{Arc, Mutex};
use anyhow::{format_err, Error};
use crate::{
backup::{
DataStore,
DataBlob,
},
tape::{
CatalogSet,
SnapshotReader,
},
};
/// Chunk iterator which uses a separate thread to read chunks
///
/// The iterator skips duplicate chunks and chunks already in the
/// catalog.
pub struct NewChunksIterator {
rx: std::sync::mpsc::Receiver<Result<Option<([u8; 32], DataBlob)>, Error>>,
}
impl NewChunksIterator {
/// Creates the iterator, spawning a new thread
///
/// Make sure to join() the returned thread handle.
pub fn spawn(
datastore: Arc<DataStore>,
snapshot_reader: Arc<Mutex<SnapshotReader>>,
catalog_set: Arc<Mutex<CatalogSet>>,
) -> Result<(std::thread::JoinHandle<()>, Self), Error> {
let (tx, rx) = std::sync::mpsc::sync_channel(3);
let reader_thread = std::thread::spawn(move || {
let snapshot_reader = snapshot_reader.lock().unwrap();
let mut chunk_index: HashSet<[u8;32]> = HashSet::new();
let datastore_name = snapshot_reader.datastore_name();
let result: Result<(), Error> = proxmox::try_block!({
let mut chunk_iter = snapshot_reader.chunk_iterator()?;
loop {
let digest = match chunk_iter.next() {
None => {
tx.send(Ok(None)).unwrap();
break;
}
Some(digest) => digest?,
};
if chunk_index.contains(&digest) {
continue;
}
if catalog_set.lock().unwrap().contains_chunk(&datastore_name, &digest) {
continue;
};
let blob = datastore.load_chunk(&digest)?;
//println!("LOAD CHUNK {}", proxmox::tools::digest_to_hex(&digest));
tx.send(Ok(Some((digest, blob)))).unwrap();
chunk_index.insert(digest);
}
Ok(())
});
if let Err(err) = result {
tx.send(Err(err)).unwrap();
}
});
Ok((reader_thread, Self { rx }))
}
}
// We do not use Receiver::into_iter(). The manual implementation
// returns a simpler type.
impl Iterator for NewChunksIterator {
type Item = Result<([u8; 32], DataBlob), Error>;
fn next(&mut self) -> Option<Self::Item> {
match self.rx.recv() {
Ok(Ok(None)) => None,
Ok(Ok(Some((digest, blob)))) => Some(Ok((digest, blob))),
Ok(Err(err)) => Some(Err(err)),
Err(_) => Some(Err(format_err!("reader thread failed"))),
}
}
}
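The iterator above is backed by a bounded sync_channel(3), which is what gives the reader thread a small read-ahead without letting it run arbitrarily far ahead of the tape writer; a standalone sketch of that pattern:
// Standalone example of bounded read-ahead via std::sync::mpsc::sync_channel.
use std::sync::mpsc::sync_channel;
use std::thread;

fn main() {
    let (tx, rx) = sync_channel::<Option<u32>>(3); // at most 3 items in flight
    let producer = thread::spawn(move || {
        for i in 0..10 {
            tx.send(Some(i)).unwrap(); // blocks while the consumer is behind
        }
        tx.send(None).unwrap(); // end-of-stream marker, like Ok(None) above
    });
    while let Ok(Some(item)) = rx.recv() {
        println!("got {}", item);
    }
    producer.join().unwrap();
}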

View File

@ -67,7 +67,7 @@ pub trait TapeWrite {
///
/// See: https://github.com/torvalds/linux/blob/master/Documentation/scsi/st.rst
///
/// On sucess, this returns if we en countered a EOM condition.
/// On success, this returns whether we encountered an EOM condition.
pub fn tape_device_write_block<W: Write>(
writer: &mut W,
data: &[u8],

View File

@ -42,6 +42,7 @@ fn test_alloc_writable_media_1() -> Result<(), Error> {
RetentionPolicy::KeepForever,
None,
None,
false,
)?;
ctime += 10;
@ -71,6 +72,7 @@ fn test_alloc_writable_media_2() -> Result<(), Error> {
RetentionPolicy::KeepForever,
None,
None,
false,
)?;
let ctime = 10;
@ -110,6 +112,7 @@ fn test_alloc_writable_media_3() -> Result<(), Error> {
RetentionPolicy::KeepForever,
None,
None,
false,
)?;
let mut ctime = 10;
@ -156,6 +159,7 @@ fn test_alloc_writable_media_4() -> Result<(), Error> {
RetentionPolicy::ProtectFor(parse_time_span("12s")?),
None,
None,
false,
)?;
let start_time = 10;
@ -173,7 +177,7 @@ fn test_alloc_writable_media_4() -> Result<(), Error> {
// next call fail because there is no free media
assert!(pool.alloc_writable_media(start_time + 5).is_err());
// Create new nedia set, so that previous set can expire
// Create new media set, so that previous set can expire
pool.start_write_session(start_time + 10)?;
assert!(pool.alloc_writable_media(start_time + 10).is_err());

View File

@ -69,6 +69,7 @@ fn test_compute_media_state() -> Result<(), Error> {
RetentionPolicy::KeepForever,
None,
None,
false,
)?;
// tape1 is free
@ -116,6 +117,7 @@ fn test_media_expire_time() -> Result<(), Error> {
RetentionPolicy::ProtectFor(span),
None,
None,
false,
)?;
assert_eq!(pool.lookup_media(&tape0_uuid)?.status(), &MediaStatus::Full);

View File

@ -49,6 +49,7 @@ fn test_current_set_usable_1() -> Result<(), Error> {
RetentionPolicy::KeepForever,
None,
None,
false,
)?;
assert_eq!(pool.current_set_usable()?, false);
@ -75,6 +76,7 @@ fn test_current_set_usable_2() -> Result<(), Error> {
RetentionPolicy::KeepForever,
None,
None,
false,
)?;
assert_eq!(pool.current_set_usable()?, false);
@ -103,6 +105,7 @@ fn test_current_set_usable_3() -> Result<(), Error> {
RetentionPolicy::KeepForever,
Some(String::from("changer1")),
None,
false,
)?;
assert_eq!(pool.current_set_usable()?, false);
@ -131,6 +134,7 @@ fn test_current_set_usable_4() -> Result<(), Error> {
RetentionPolicy::KeepForever,
None,
None,
false,
)?;
assert_eq!(pool.current_set_usable()?, true);
@ -161,6 +165,7 @@ fn test_current_set_usable_5() -> Result<(), Error> {
RetentionPolicy::KeepForever,
None,
None,
false,
)?;
assert_eq!(pool.current_set_usable()?, true);
@ -189,6 +194,7 @@ fn test_current_set_usable_6() -> Result<(), Error> {
RetentionPolicy::KeepForever,
None,
None,
false,
)?;
assert!(pool.current_set_usable().is_err());
@ -223,6 +229,7 @@ fn test_current_set_usable_7() -> Result<(), Error> {
RetentionPolicy::KeepForever,
None,
None,
false,
)?;
assert!(pool.current_set_usable().is_err());

View File

@ -57,6 +57,9 @@ pub use async_channel_writer::AsyncChannelWriter;
mod std_channel_writer;
pub use std_channel_writer::StdChannelWriter;
mod tokio_writer_adapter;
pub use tokio_writer_adapter::TokioWriterAdapter;
mod process_locker;
pub use process_locker::{ProcessLocker, ProcessLockExclusiveGuard, ProcessLockSharedGuard};

View File

@ -302,7 +302,7 @@ impl<K, V> LinkedList<K, V> {
}
}
/// Remove the node referenced by `node_ptr` from the linke list and return it.
/// Remove the node referenced by `node_ptr` from the linked list and return it.
fn remove(&mut self, node_ptr: *mut CacheNode<K, V>) -> Box<CacheNode<K, V>> {
let node = unsafe { Box::from_raw(node_ptr) };

View File

@ -138,10 +138,10 @@ impl<I: Send + 'static> ParallelHandler<I> {
if let Err(panic) = handle.join() {
match panic.downcast::<&str>() {
Ok(panic_msg) => msg_list.push(
format!("thread {} ({}) paniced: {}", self.name, i, panic_msg)
format!("thread {} ({}) panicked: {}", self.name, i, panic_msg)
),
Err(_) => msg_list.push(
format!("thread {} ({}) paniced", self.name, i)
format!("thread {} ({}) panicked", self.name, i)
),
}
}

View File

@ -4,7 +4,7 @@
//!
//! See: `/usr/include/scsi/sg_pt.h`
//!
//! The SCSI Commands Reference Manual also contains some usefull information.
//! The SCSI Commands Reference Manual also contains some useful information.
use std::os::unix::io::AsRawFd;
use std::ptr::NonNull;
@ -44,32 +44,38 @@ impl ToString for SenseInfo {
}
#[derive(Debug)]
pub struct ScsiError {
pub error: Error,
pub sense: Option<SenseInfo>,
pub enum ScsiError {
Error(Error),
Sense(SenseInfo),
}
impl std::fmt::Display for ScsiError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.error)
match self {
ScsiError::Error(err) => write!(f, "{}", err),
ScsiError::Sense(sense) => write!(f, "{}", sense.to_string()),
}
}
}
impl std::error::Error for ScsiError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
self.error.source()
match self {
ScsiError::Error(err) => err.source(),
ScsiError::Sense(_) => None,
}
}
}
impl From<anyhow::Error> for ScsiError {
fn from(error: anyhow::Error) -> Self {
Self { error, sense: None }
Self::Error(error)
}
}
impl From<std::io::Error> for ScsiError {
fn from(error: std::io::Error) -> Self {
Self { error: error.into(), sense: None }
Self::Error(error.into())
}
}
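The switch from a struct with an optional sense field to an enum means callers can match on structured sense data directly; a standalone mini-model of that design (types simplified, not the crate's actual definitions):
// Standalone illustration only - SenseInfo is reduced to a single field here.
use anyhow::{format_err, Error};

#[derive(Debug)]
struct SenseInfo { sense_key: u8 }

#[derive(Debug)]
enum ScsiError {
    Error(Error),
    Sense(SenseInfo),
}

fn handle(err: ScsiError) {
    match err {
        // sense key 0x02 is "Not Ready" in SCSI
        ScsiError::Sense(SenseInfo { sense_key: 0x02 }) => println!("device not ready - retry later"),
        ScsiError::Sense(sense) => println!("sense data: {:?}", sense),
        ScsiError::Error(err) => println!("other error: {}", err),
    }
}

fn main() {
    handle(ScsiError::Sense(SenseInfo { sense_key: 0x02 }));
    handle(ScsiError::Error(format_err!("transport error")));
}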
@ -483,10 +489,7 @@ impl <'a, F: AsRawFd> SgRaw<'a, F> {
}
};
return Err(ScsiError {
error: format_err!("{}", sense.to_string()),
sense: Some(sense),
});
return Err(ScsiError::Sense(sense));
}
SCSI_PT_RESULT_TRANSPORT_ERR => return Err(format_err!("scsi command failed: transport error").into()),
SCSI_PT_RESULT_OS_ERR => {
@ -506,7 +509,7 @@ impl <'a, F: AsRawFd> SgRaw<'a, F> {
}
if self.buffer.len() < 16 {
return Err(format_err!("output buffer too small").into());
return Err(format_err!("input buffer too small").into());
}
let mut ptvp = self.create_scsi_pt_obj()?;
@ -530,6 +533,45 @@ impl <'a, F: AsRawFd> SgRaw<'a, F> {
Ok(&self.buffer[..data_len])
}
/// Run the specified RAW SCSI command, use data as input buffer
pub fn do_in_command<'b>(&mut self, cmd: &[u8], data: &'b mut [u8]) -> Result<&'b [u8], ScsiError> {
if !unsafe { sg_is_scsi_cdb(cmd.as_ptr(), cmd.len() as c_int) } {
return Err(format_err!("no valid SCSI command").into());
}
if data.len() == 0 {
return Err(format_err!("got zero-sized input buffer").into());
}
let mut ptvp = self.create_scsi_pt_obj()?;
unsafe {
set_scsi_pt_data_in(
ptvp.as_mut_ptr(),
data.as_mut_ptr(),
data.len() as c_int,
);
set_scsi_pt_cdb(
ptvp.as_mut_ptr(),
cmd.as_ptr(),
cmd.len() as c_int,
);
};
self.do_scsi_pt_checked(&mut ptvp)?;
let resid = unsafe { get_scsi_pt_resid(ptvp.as_ptr()) } as usize;
if resid > data.len() {
return Err(format_err!("do_scsi_pt failed - got strange resid (value too big)").into());
}
let data_len = data.len() - resid;
Ok(&data[..data_len])
}
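The residual count returned by the pass-through layer is the number of requested bytes that were not transferred, so the usable length is the buffer size minus that value; a tiny standalone check of the arithmetic used above:
// Standalone sketch of the resid handling in do_in_command().
fn valid_data_len(buf_len: usize, resid: usize) -> Result<usize, String> {
    if resid > buf_len {
        return Err("got strange resid (value too big)".into());
    }
    Ok(buf_len - resid)
}

fn main() {
    assert_eq!(valid_data_len(512, 112), Ok(400));
    assert!(valid_data_len(512, 1024).is_err());
}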
/// Run dataout command
///
/// Note: use alloc_page_aligned_buffer to alloc data transfer buffer

View File

@ -210,7 +210,7 @@ fn test_parse_register_response() -> Result<(), Error> {
Ok(())
}
/// querys the up to date subscription status and parses the response
/// queries the up to date subscription status and parses the response
pub fn check_subscription(key: String, server_id: String) -> Result<SubscriptionInfo, Error> {
let now = proxmox::tools::time::epoch_i64();
@ -299,7 +299,7 @@ pub fn delete_subscription() -> Result<(), Error> {
Ok(())
}
/// updates apt authenification for repo access
/// updates apt authentication for repo access
pub fn update_apt_auth(key: Option<String>, password: Option<String>) -> Result<(), Error> {
let auth_conf = std::path::Path::new(APT_AUTH_FN);
match (key, password) {
@ -318,8 +318,11 @@ pub fn update_apt_auth(key: Option<String>, password: Option<String>) -> Result<
replace_file(auth_conf, conf.as_bytes(), file_opts)
.map_err(|e| format_err!("Error saving apt auth config - {}", e))?;
}
_ => nix::unistd::unlink(auth_conf)
.map_err(|e| format_err!("Error clearing apt auth config - {}", e))?,
_ => match nix::unistd::unlink(auth_conf) {
Ok(()) => Ok(()),
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => Ok(()), // ignore not existing
Err(err) => Err(err),
}.map_err(|e| format_err!("Error clearing apt auth config - {}", e))?,
}
Ok(())
}
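A standalone version of the pattern introduced above, assuming the nix 0.19-era error API that the surrounding code uses (newer nix releases expose Errno directly instead of Error::Sys):
use anyhow::{format_err, Error};
use std::path::Path;

/// Remove a file, treating "already absent" as success.
fn unlink_if_exists(path: &Path) -> Result<(), Error> {
    match nix::unistd::unlink(path) {
        Ok(()) => Ok(()),
        Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => Ok(()), // nothing to do
        Err(err) => Err(format_err!("unlink {:?} failed - {}", path, err)),
    }
}

fn main() -> Result<(), Error> {
    unlink_if_exists(Path::new("/tmp/does-not-exist"))?;
    Ok(())
}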

View File

@ -141,6 +141,88 @@ impl From<TimeSpan> for f64 {
}
}
impl From<std::time::Duration> for TimeSpan {
fn from(duration: std::time::Duration) -> Self {
let mut duration = duration.as_nanos();
let nsec = (duration % 1000) as u64;
duration /= 1000;
let usec = (duration % 1000) as u64;
duration /= 1000;
let msec = (duration % 1000) as u64;
duration /= 1000;
let seconds = (duration % 60) as u64;
duration /= 60;
let minutes = (duration % 60) as u64;
duration /= 60;
let hours = (duration % 24) as u64;
duration /= 24;
let years = (duration as f64 / 365.25) as u64;
let ydays = (duration as f64 % 365.25) as u64;
let months = (ydays as f64 / 30.44) as u64;
let mdays = (ydays as f64 % 30.44) as u64;
let weeks = mdays / 7;
let days = mdays % 7;
Self {
nsec,
usec,
msec,
seconds,
minutes,
hours,
days,
weeks,
months,
years,
}
}
}
impl std::fmt::Display for TimeSpan {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> {
let mut first = true;
{ // block scope for mutable borrows
let mut do_write = |v: u64, unit: &str| -> Result<(), std::fmt::Error> {
if !first {
write!(f, " ")?;
}
first = false;
write!(f, "{}{}", v, unit)
};
if self.years > 0 {
do_write(self.years, "y")?;
}
if self.months > 0 {
do_write(self.months, "m")?;
}
if self.weeks > 0 {
do_write(self.weeks, "w")?;
}
if self.days > 0 {
do_write(self.days, "d")?;
}
if self.hours > 0 {
do_write(self.hours, "h")?;
}
if self.minutes > 0 {
do_write(self.minutes, "min")?;
}
}
if !first {
write!(f, " ")?;
}
let seconds = self.seconds as f64 + (self.msec as f64 / 1000.0);
if seconds >= 0.1 {
if seconds >= 1.0 || !first {
write!(f, "{:.0}s", seconds)?;
} else {
write!(f, "{:.1}s", seconds)?;
}
} else if first {
write!(f, "<0.1s")?;
}
Ok(())
}
}
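The conversion above approximates months and years with their average lengths (30.44 and 365.25 days); a simplified standalone decomposition (weeks and sub-second parts omitted) shows the idea:
// Standalone sketch only - the real TimeSpan also tracks weeks, msec, usec, nsec.
fn human_readable(duration: std::time::Duration) -> String {
    let mut t = duration.as_secs();
    let seconds = t % 60; t /= 60;
    let minutes = t % 60; t /= 60;
    let hours = t % 24; t /= 24;
    let days = t; // remaining whole days
    let years = (days as f64 / 365.25) as u64;
    let ydays = (days as f64 % 365.25) as u64;
    let months = (ydays as f64 / 30.44) as u64;
    let mdays = (ydays as f64 % 30.44) as u64;
    format!("{}y {}m {}d {}h {}min {}s", years, months, mdays, hours, minutes, seconds)
}

fn main() {
    let d = std::time::Duration::from_secs(400 * 86_400 + 3 * 3_600 + 25 * 60 + 7);
    println!("{}", human_readable(d)); // prints "1y 1m 3d 3h 25min 7s"
}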
pub fn verify_time_span(i: &str) -> Result<(), Error> {
parse_time_span(i)?;

View File

@ -0,0 +1,26 @@
use std::io::Write;
use tokio::task::block_in_place;
/// Wrapper around a writer which implements Write
///
/// wraps each write with a 'block_in_place' so that
/// any (blocking) writer can be safely used in async context in a
/// tokio runtime
pub struct TokioWriterAdapter<W: Write>(W);
impl<W: Write> TokioWriterAdapter<W> {
pub fn new(writer: W) -> Self {
Self(writer)
}
}
impl<W: Write> Write for TokioWriterAdapter<W> {
fn write(&mut self, buf: &[u8]) -> Result<usize, std::io::Error> {
block_in_place(|| self.0.write(buf))
}
fn flush(&mut self) -> Result<(), std::io::Error> {
block_in_place(|| self.0.flush())
}
}
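A hedged usage sketch, assuming the adapter above is in scope: block_in_place is only valid on the multi-threaded tokio runtime (it panics on the current-thread flavor), so the wrapper is meant for blocking writers driven from such a runtime.
// Sketch only - file path and runtime flavor are illustrative assumptions.
use std::io::Write;

#[tokio::main(flavor = "multi_thread")]
async fn main() -> std::io::Result<()> {
    let file = std::fs::File::create("/tmp/adapter-example.log")?;
    let mut writer = TokioWriterAdapter::new(file);
    writer.write_all(b"written via block_in_place\n")?;
    writer.flush()?;
    Ok(())
}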

View File

@ -80,6 +80,7 @@ struct Zip64FieldWithOffset {
uncompressed_size: u64,
compressed_size: u64,
offset: u64,
start_disk: u32,
}
#[derive(Endian)]
@ -300,10 +301,26 @@ impl ZipEntry {
let filename_len = filename.len();
let header_size = size_of::<CentralDirectoryFileHeader>();
let zip_field_size = size_of::<Zip64FieldWithOffset>();
let size: usize = header_size + filename_len + zip_field_size;
let mut size: usize = header_size + filename_len;
let (date, time) = epoch_to_dos(self.mtime);
let (compressed_size, uncompressed_size, offset, need_zip64) = if self.compressed_size
>= (u32::MAX as u64)
|| self.uncompressed_size >= (u32::MAX as u64)
|| self.offset >= (u32::MAX as u64)
{
size += zip_field_size;
(0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF, true)
} else {
(
self.compressed_size as u32,
self.uncompressed_size as u32,
self.offset as u32,
false,
)
};
write_struct(
&mut buf,
CentralDirectoryFileHeader {
@ -315,32 +332,35 @@ impl ZipEntry {
time,
date,
crc32: self.crc32,
compressed_size: 0xFFFFFFFF,
uncompressed_size: 0xFFFFFFFF,
compressed_size,
uncompressed_size,
filename_len: filename_len as u16,
extra_field_len: zip_field_size as u16,
extra_field_len: if need_zip64 { zip_field_size as u16 } else { 0 },
comment_len: 0,
start_disk: 0,
internal_flags: 0,
external_flags: (self.mode as u32) << 16 | (!self.is_file as u32) << 4,
offset: 0xFFFFFFFF,
offset,
},
)
.await?;
buf.write_all(filename).await?;
write_struct(
&mut buf,
Zip64FieldWithOffset {
field_type: 1,
field_size: 3 * 8,
uncompressed_size: self.uncompressed_size,
compressed_size: self.compressed_size,
offset: self.offset,
},
)
.await?;
if need_zip64 {
write_struct(
&mut buf,
Zip64FieldWithOffset {
field_type: 1,
field_size: 3 * 8 + 4,
uncompressed_size: self.uncompressed_size,
compressed_size: self.compressed_size,
offset: self.offset,
start_disk: 0,
},
)
.await?;
}
Ok(size)
}
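The decision above follows the ZIP spec: a 32-bit central-directory field that would not fit is stored as 0xFFFFFFFF and the real value moves into a ZIP64 extra field; a standalone sketch of the threshold check:
// Standalone illustration of when the ZIP64 extra field is needed.
fn needs_zip64(compressed_size: u64, uncompressed_size: u64, offset: u64) -> bool {
    compressed_size >= u32::MAX as u64
        || uncompressed_size >= u32::MAX as u64
        || offset >= u32::MAX as u64
}

fn main() {
    assert!(!needs_zip64(1 << 20, 1 << 20, 0));          // 1 MiB entry: plain fields
    assert!(needs_zip64(0, 5 * 1024 * 1024 * 1024, 0));  // 5 GiB payload: ZIP64
}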

View File

@ -122,7 +122,7 @@ Ext.define('PBS.view.main.NavigationTree', {
if (view.tapestore === undefined) {
view.tapestore = Ext.create('Proxmox.data.UpdateStore', {
autoStart: true,
interval: 2 * 1000,
interval: 60 * 1000,
storeid: 'pbs-tape-drive-list',
model: 'pbs-tape-drive-list',
});
@ -188,11 +188,13 @@ Ext.define('PBS.view.main.NavigationTree', {
}
}
let toremove = [];
list.eachChild((child) => {
if (!newSet[child.data.path]) {
list.removeChild(child, true);
toremove.push(child);
}
});
toremove.forEach((child) => list.removeChild(child, true));
if (view.pathToSelect !== undefined) {
let path = view.pathToSelect;
@ -267,6 +269,15 @@ Ext.define('PBS.view.main.NavigationTree', {
},
},
reloadTapeStore: function() {
let me = this;
if (!PBS.enableTapeUI) {
return;
}
me.tapestore.load();
},
select: function(path, silent) {
var me = this;
if (me.rstore.isLoaded() && (!PBS.enableTapeUI || me.tapestore.isLoaded())) {

View File

@ -3,6 +3,10 @@ const proxmoxOnlineHelpInfo = {
"link": "/docs/index.html",
"title": "Proxmox Backup Server Documentation Index"
},
"client-repository": {
"link": "/docs/backup-client.html#client-repository",
"title": "Repository Locations"
},
"client-creating-backups": {
"link": "/docs/backup-client.html#client-creating-backups",
"title": "Creating Backups"
@ -47,10 +51,18 @@ const proxmoxOnlineHelpInfo = {
"link": "/docs/package-repositories.html#sysadmin-package-repositories",
"title": "Debian Package Repositories"
},
"sysadmin-package-repos-enterprise": {
"link": "/docs/package-repositories.html#sysadmin-package-repos-enterprise",
"title": "`Proxmox Backup`_ Enterprise Repository"
},
"get-help": {
"link": "/docs/introduction.html#get-help",
"title": "Getting Help"
},
"get-help-enterprise-support": {
"link": "/docs/introduction.html#get-help-enterprise-support",
"title": "Enterprise Support"
},
"chapter-zfs": {
"link": "/docs/sysadmin.html#chapter-zfs",
"title": "ZFS on Linux"

View File

@ -369,30 +369,30 @@ Ext.define('PBS.Utils', {
// do whatever you want here
Proxmox.Utils.override_task_descriptions({
backup: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Backup')),
"tape-backup": (type, id) => PBS.Utils.render_tape_backup_id(id, gettext('Tape Backup')),
"tape-backup-job": (type, id) => PBS.Utils.render_tape_backup_id(id, gettext('Tape Backup Job')),
"tape-restore": ['Datastore', gettext('Tape Restore')],
"barcode-label-media": [gettext('Drive'), gettext('Barcode label media')],
'barcode-label-media': [gettext('Drive'), gettext('Barcode-Label Media')],
'catalog-media': [gettext('Drive'), gettext('Catalog Media')],
dircreate: [gettext('Directory Storage'), gettext('Create')],
dirremove: [gettext('Directory'), gettext('Remove')],
"load-media": (type, id) => PBS.Utils.render_drive_load_media_id(id, gettext('Load media')),
"unload-media": [gettext('Drive'), gettext('Unload media')],
"eject-media": [gettext('Drive'), gettext('Eject media')],
"erase-media": [gettext('Drive'), gettext('Erase media')],
garbage_collection: ['Datastore', gettext('Garbage collect')],
"inventory-update": [gettext('Drive'), gettext('Inventory update')],
"label-media": [gettext('Drive'), gettext('Label media')],
"catalog-media": [gettext('Drive'), gettext('Catalog media')],
'eject-media': [gettext('Drive'), gettext('Eject Media')],
'erase-media': [gettext('Drive'), gettext('Erase Media')],
garbage_collection: ['Datastore', gettext('Garbage Collect')],
'inventory-update': [gettext('Drive'), gettext('Inventory Update')],
'label-media': [gettext('Drive'), gettext('Label Media')],
'load-media': (type, id) => PBS.Utils.render_drive_load_media_id(id, gettext('Load Media')),
logrotate: [null, gettext('Log Rotation')],
prune: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Prune')),
reader: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Read objects')),
"rewind-media": [gettext('Drive'), gettext('Rewind media')],
reader: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Read Objects')),
'rewind-media': [gettext('Drive'), gettext('Rewind Media')],
sync: ['Datastore', gettext('Remote Sync')],
syncjob: [gettext('Sync Job'), gettext('Remote Sync')],
'tape-backup': (type, id) => PBS.Utils.render_tape_backup_id(id, gettext('Tape Backup')),
'tape-backup-job': (type, id) => PBS.Utils.render_tape_backup_id(id, gettext('Tape Backup Job')),
'tape-restore': ['Datastore', gettext('Tape Restore')],
'unload-media': [gettext('Drive'), gettext('Unload Media')],
verificationjob: [gettext('Verify Job'), gettext('Scheduled Verification')],
verify: ['Datastore', gettext('Verification')],
verify_group: ['Group', gettext('Verification')],
verify_snapshot: ['Snapshot', gettext('Verification')],
verificationjob: [gettext('Verify Job'), gettext('Scheduled Verification')],
zfscreate: [gettext('ZFS Storage'), gettext('Create')],
});
},

View File

@ -273,3 +273,7 @@ span.snapshot-comment-column {
height: 20px;
background-image:url(../images/icon-tape-drive.svg);
}
.info-pointer div.right-aligned {
cursor: pointer;
}

View File

@ -42,7 +42,7 @@ Ext.define('PBS.Datastore.Options', {
rows: {
"notify": {
required: true,
header: gettext('Notfiy'),
header: gettext('Notify'),
renderer: (value) => {
let notify = PBS.Utils.parsePropertyString(value);
let res = [];
@ -59,7 +59,7 @@ Ext.define('PBS.Datastore.Options', {
"notify-user": {
required: true,
defaultValue: 'root@pam',
header: gettext('Notfiy User'),
header: gettext('Notify User'),
editor: {
xtype: 'pbsNotifyOptionEdit',
},

View File

@ -33,7 +33,7 @@ Ext.define('PBS.form.CalendarEvent', {
config: {
deleteEmpty: true,
},
// overide framework function to implement deleteEmpty behaviour
// override framework function to implement deleteEmpty behaviour
getSubmitData: function() {
let me = this, data = null;
if (!me.disabled && me.submitValue) {

View File

@ -24,11 +24,18 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
return;
}
let mediaset = selection[0].data.text;
let uuid = selection[0].data['media-set-uuid'];
let node = selection[0];
let mediaset = node.data.text;
let uuid = node.data['media-set-uuid'];
let datastores = node.data.datastores;
while (!datastores && node.get('depth') > 2) {
node = node.parentNode;
datastores = node.data.datastores;
}
Ext.create('PBS.TapeManagement.TapeRestoreWindow', {
mediaset,
uuid,
datastores,
listeners: {
destroy: function() {
me.reload();
@ -127,9 +134,16 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
},
});
list.result.data.sort((a, b) => a.snapshot.localeCompare(b.snapshot));
list.result.data.sort(function(a, b) {
let storeRes = a.store.localeCompare(b.store);
if (storeRes === 0) {
return a.snapshot.localeCompare(b.snapshot);
} else {
return storeRes;
}
});
let tapes = {};
let stores = {};
for (let entry of list.result.data) {
entry.text = entry.snapshot;
@ -140,9 +154,19 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
entry.iconCls = `fa ${iconCls}`;
}
let store = entry.store;
let tape = entry['label-text'];
if (tapes[tape] === undefined) {
tapes[tape] = {
if (stores[store] === undefined) {
stores[store] = {
text: store,
'media-set-uuid': entry['media-set-uuid'],
iconCls: 'fa fa-database',
tapes: {},
};
}
if (stores[store].tapes[tape] === undefined) {
stores[store].tapes[tape] = {
text: tape,
'media-set-uuid': entry['media-set-uuid'],
'seq-nr': entry['seq-nr'],
@ -153,7 +177,7 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
}
let [type, group, _id] = PBS.Utils.parse_snapshot_id(entry.snapshot);
let children = tapes[tape].children;
let children = stores[store].tapes[tape].children;
let text = `${type}/${group}`;
if (children.length < 1 || children[children.length - 1].text !== text) {
children.push({
@ -167,8 +191,14 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
children[children.length - 1].children.push(entry);
}
for (const tape of Object.values(tapes)) {
node.appendChild(tape);
let storeList = Object.values(stores);
let storeNameList = Object.keys(stores);
let expand = storeList.length === 1;
for (const store of storeList) {
store.children = Object.values(store.tapes);
store.expanded = expand;
delete store.tapes;
node.appendChild(store);
}
if (list.result.data.length === 0) {
@ -176,6 +206,7 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
}
node.set('loaded', true);
node.set('datastores', storeNameList);
Proxmox.Utils.setErrorMask(view, false);
node.expand();
} catch (error) {

Some files were not shown because too many files have changed in this diff.