Compare commits


65 Commits

Author SHA1 Message Date
a417c8a93e bump version to 1.0.13-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-02 15:32:27 +02:00
79e58a903e pxar: handle missing GROUP_OBJ ACL entries
Previously, we did not store GROUP_OBJ ACL entries for
directories. This means they were lost, which may potentially
elevate group permissions if they had been masked via ACLs
before, so we also show a warning.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-02 11:10:20 +02:00
9f40e09d0a pxar: fix directory ACL entry creation
Don't override `group_obj` with `None` when handling
`ACL_TYPE_DEFAULT` entries for directories.

Reproducer: /var/log/journal ends up without a `MASK` type
entry making it invalid as it has `USER` and `GROUP`
entries.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-02 10:22:04 +02:00
553e57f914 server/rest: drop now unused imports
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-01 11:53:13 +02:00
2200a38671 code cleanup: drop extra newlines at EOF
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-01 11:27:07 +02:00
ba39ab20fb server/rest: extract auth to separate module
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:26:28 +02:00
ff8945fd2f proxmox_client_tools: move common key related functions to key_source.rs
Add a new module containing key-related functions and schemata from all
over the codebase; the moved code is left unchanged as far as possible.

Requires adapting some 'use' statements across proxmox-backup-client and
putting the XDG helpers quite cozily into proxmox_client_tools/mod.rs.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:09:28 +02:00
4876393562 vsock_client: support authorization header
Pass in an optional auth tag, which will be passed as an Authorization
header on every subsequent call.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:09:28 +02:00
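
A minimal sketch of the idea behind the commit above, assuming a hyper-based request builder as the vsock client uses; the helper name and the header value format are illustrative, not taken from the actual implementation:

    use anyhow::Error;
    use hyper::{header::AUTHORIZATION, Body, Request};

    // Hypothetical helper: if an auth tag is present, attach it as an
    // Authorization header on every request built for the vsock connection.
    fn build_request(uri: &str, auth: Option<&str>) -> Result<Request<Body>, Error> {
        let mut builder = Request::builder().method("GET").uri(uri);
        if let Some(auth) = auth {
            builder = builder.header(AUTHORIZATION, auth);
        }
        Ok(builder.body(Body::empty())?)
    }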
971bc6f94b vsock_client: remove some &mut restrictions and rustfmt
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:09:28 +02:00
cab92acb3c vsock_client: remove wrong comment
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:09:28 +02:00
a1d90719e4 bump pxar dep to 0.10.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-03-31 14:00:20 +02:00
eeff085d9d server/rest: fix type ambiguity
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-29 12:02:30 +02:00
d43c407a00 server/rest: rust format
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-29 08:17:26 +02:00
6bc87d3952 ui: verification job: fix subject of edit window
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 16:57:00 +02:00
04c1c68f31 ui: verify job: fix subject of edit window
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 16:45:45 +02:00
94b17c804a ui: task descriptions: sort alphabetically
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 16:45:23 +02:00
94352256b7 ui: task descriptions: fix casing
Enforce title case. This mostly affects the new tape-related task
descriptions.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:50:51 +02:00
b3bed7e41f docs: tape/pool: add backend/ui setting name for allocation policy
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:40:23 +02:00
a4672dd0b1 ui: tape/pool: set onlineHelp for edit/add window
To let users find the docs' explanation of allocation and retention
policies more easily.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:29:02 +02:00
17bbcb57d7 ui: tape: retention/allocation are Policies, note so
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:28:36 +02:00
843146479a ui: gettext; s/blocksize/block size/
Blocksize is not a word in the English language

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:04:19 +02:00
cf1e117fc7 sgutils2: use enum for ScsiError
This avoids string allocation when we return SenseInfo.
2021-03-27 15:57:48 +01:00
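
A minimal sketch of the pattern from the commit above, with a placeholder SenseInfo; the real type in sgutils2 may carry different fields:

    // Placeholder sense data, for illustration only.
    pub struct SenseInfo {
        pub sense_key: u8,
        pub asc: u8,
        pub ascq: u8,
    }

    // Returning structured sense data as its own variant means the common
    // sense-data path no longer has to format a String.
    pub enum ScsiError {
        Error(String),
        Sense(SenseInfo),
    }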
03eac20b87 SgRaw: add do_in_command() 2021-03-27 15:38:08 +01:00
11f5d59396 tape: page-align BlockHeader so that we can use it with SG_IO 2021-03-27 15:36:35 +01:00
6f63c29306 Cargo.toml: fix: set version to 1.0.12 2021-03-26 14:14:12 +01:00
c0e365fd49 bump version to 1.0.12-1 2021-03-26 14:09:30 +01:00
93fb2e0d21 api2/types: add type_text to DATASTORE_MAP_FORMAT
This way we get a better rendering in the api-viewer.
before:
 [<string>, ... ]

after:
 [(<source>=)?<target>, ... ]

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 13:18:10 +01:00
c553407e98 tape: add --scan option for catalog restore 2021-03-25 13:08:34 +01:00
4830de408b tape: avoid writing catalogs for empty backup tasks 2021-03-25 12:50:40 +01:00
7f78528308 OnlineHelpInfo.js: new link for client-repository 2021-03-25 12:26:57 +01:00
2843ba9017 avoid compiler warning 2021-03-25 12:25:23 +01:00
e244b9d03d api2/types: expand DATASTORE_MAP_LIST_SCHEMA description
and give an example

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 12:18:14 +01:00
657c47db35 tape: ui: TapeRestore: make datastore mapping selectable
by adding a custom field (grid) where the user can select
a target datastore for each source datastore on tape

if we have not loaded the content of the media set yet,
we have to load it on window open to get the list of datastores
on the tape

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 12:17:46 +01:00
a32bb86df9 api subscription: drop old hack for api-macro issue
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-25 12:03:33 +01:00
654c56e05d docs: client benchmark: note that tls is only done if repo is set
and remove misleading note about no network involved in tls
speedtest, as normally there is!

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-25 10:33:45 +01:00
589c4dad9e tape: add fsf/bsf to TapeDriver trait 2021-03-25 10:10:16 +01:00
0320deb0a9 proxmox-tape: fix clean api call
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 08:14:13 +01:00
4c4e5c2b1e api2/tape/restore: enable restore mapping of datastores
by changing the 'store' parameter of the restore api call to a
list of mappings (or a single default datastore)

for example giving:
a=b,c=d,e

would restore
datastore 'a' from tape to local datastore 'b'
datastore 'c' from tape to local datastore 'd'
all other datastores to 'e'

This way a single datastore can also be restored on its own, by
giving just a single mapping, e.g. 'a=b'.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 07:46:12 +01:00
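
A rough sketch of how such a mapping string can be resolved; the names are illustrative, and the real implementation goes through the API schema and a DataStoreMap type, as shown in the diff further down:

    use std::collections::HashMap;

    // Resolve "a=b,c=d,e": explicit pairs map a source datastore to a target,
    // a bare entry becomes the default target for all other sources.
    fn parse_mapping(list: &str) -> (HashMap<&str, &str>, Option<&str>) {
        let mut map = HashMap::new();
        let mut default = None;
        for entry in list.split(',') {
            match entry.split_once('=') {
                Some((source, target)) => { map.insert(source, target); }
                None => default = Some(entry),
            }
        }
        (map, default)
    }

    // Lookup for a given source: map.get(source).copied().or(default)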
924373d2df client/backup_writer: clarify backup and upload size
The text 'had to upload [KMG]iB' implies that this is the size we
actually had to send to the server, while in reality it is the
raw data size before compression.

Count the size of the compressed chunks and print it separately.
Split the average speed into its own line so the lines do not get too long.

Rename 'uploaded' to 'size_dirty' and 'vsize_h' to 'size'.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-24 18:24:56 +01:00
3b60b5098f client/backup_writer: introduce UploadStats struct
instead of using a big anonymous tuple. This way the returned values
are properly named.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-24 18:24:56 +01:00
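
A sketch of the idea; the field names are illustrative and not necessarily the ones used by the backup writer:

    use std::time::Duration;

    // A named struct documents what each returned value means, unlike a
    // wide anonymous tuple such as (usize, usize, u64, u64, Duration).
    struct UploadStats {
        chunk_count: usize,
        duplicates: usize,
        size: u64,            // raw (dirty) data size
        size_compressed: u64, // bytes actually sent after compression
        duration: Duration,
    }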
4abb3edd9f docs: fix horizontal scrolling issues on desktop and mobile
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-24 18:24:39 +01:00
932e69a837 docs: improve navigation coloring on mobile
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-24 18:24:20 +01:00
ef6d49670b client: backup writer: run rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-24 17:12:05 +01:00
52ea00e9df docs: only apply toctree color override to sidebar one
otherwise the TOC on the index page has white text on a white
background.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-24 17:09:30 +01:00
870681013a tape: fix catalog restore
We need to rewind the tape if fast_catalog_restore() fails ...
2021-03-24 10:09:23 +01:00
c046739461 tape: fix MediaPool regression tests 2021-03-24 09:44:30 +01:00
8b1289f3e4 tape: skip catalog archives in restore 2021-03-24 09:33:39 +01:00
f1d76ecf6c fix #3359: fix blocking writes in async code during pxar create
In commit `asyncify pxar create_archive`, we changed from a
separate thread for creating a pxar to using async code, but the
StdChannelWriter used for both pxar and catalog can block, which
may block the tokio runtime in single-core (and probably
dual-core) environments.

This patch adds a wrapper struct for any writer that implements
'std::io::Write' and wraps the write calls with 'block_in_place',
so that when called in a tokio runtime, the runtime knows that this
code potentially blocks.

Fixes: 6afb60abf5 ("asyncify pxar create_archive")

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-24 09:00:07 +01:00
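
A minimal sketch of such a wrapper, assuming tokio's block_in_place; the repository's own adapter is the TokioWriterAdapter imported further down in this diff, and this simplified version only illustrates the idea:

    use std::io::{Result, Write};

    // Forward write/flush through block_in_place so the (possibly blocking)
    // inner writer does not stall the executor thread. Note that
    // block_in_place requires the multi-threaded tokio runtime.
    struct BlockingWriterAdapter<W: Write>(W);

    impl<W: Write> Write for BlockingWriterAdapter<W> {
        fn write(&mut self, buf: &[u8]) -> Result<usize> {
            tokio::task::block_in_place(|| self.0.write(buf))
        }

        fn flush(&mut self) -> Result<()> {
            tokio::task::block_in_place(|| self.0.flush())
        }
    }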
074503f288 tape: implement fast catalog restore 2021-03-24 08:40:34 +01:00
c6f55139f8 tape: impl. MediaCatalog::parse_catalog_header
This is just an optimization, avoiding reading the catalog into memory.

We also expose create_temporary_database_file() now (will be
used for catalog restore).
2021-03-24 06:32:59 +01:00
20cc25d749 tape: add TapeDriver::move_to_last_file 2021-03-24 06:32:59 +01:00
30316192b3 tape: improve locking (lock media-sets)
- new helper: lock_media_set()

- MediaPool: lock media set

- Expose Inventory::new() to avoid double loading

- do not lock pool on restore (only lock media-set)

- change pool lock name to ".pool-{name}"
2021-03-24 06:32:59 +01:00
e93263be1e tape: implement MediaCatalog::destroy_unrelated_catalog() helper 2021-03-22 12:03:11 +01:00
2ab2ca9c24 tape: add MediaPool::lock_unassigned_media_pool() helper 2021-03-19 10:13:38 +01:00
54fcb7f5d8 api2/tape/backup: wait indefinitely for lock in scheduled backup jobs
so that a user can schedule multiple backup jobs onto a single
media pool without having to consider timing them apart

this makes sense since we can back up multiple datastores onto
the same media set, but can only specify one datastore per backup job

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 09:04:32 +01:00
4abd4dbe38 api2/tape/backup: include a summary on notification e-mails
for now only contains the list of included snapshots (if any),
as well as the backup duration

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 09:03:52 +01:00
eac1beef3c tape: cleanup PoolWriter - factor out common code 2021-03-19 08:56:14 +01:00
166a48f903 tape: cleanup - split PoolWriter into several files 2021-03-19 08:19:13 +01:00
82775c4764 tape: make sure we only commit/write valid catalogs 2021-03-19 07:50:32 +01:00
88bc9635aa tape: store media_uuid in PoolWriterState
This is mainly a cleanup, avoiding having to access the catalog_set to get the uuid.
2021-03-19 07:33:59 +01:00
1037f2bc2d tape: cleanup - rename CatalogBuilder to CatalogSet 2021-03-19 07:22:54 +01:00
f24cbee77d server/email_notifications: do not double html escape
The default escape handler is handlebars::html_escape, but these are
plain-text emails and we manually escape them for the HTML part, so
set the default escape handler to 'no_escape'.

This avoids double HTML escaping of the characters '&"<>' in emails.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 07:02:49 +01:00
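
The gist of the change as a small sketch against the handlebars crate API; the surrounding setup is illustrative:

    use handlebars::{no_escape, Handlebars};

    fn make_renderer() -> Handlebars<'static> {
        let mut handlebars = Handlebars::new();
        // The templates render plain-text mails (the HTML part is escaped
        // manually), so drop the default html_escape handler.
        handlebars.register_escape_fn(no_escape);
        handlebars
    }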
25b4d52dce server/email_notifications: do not panic on template registration
Instead, print an error and continue; the rendering functions will error
out if one of the templates could not be registered.

If we `.unwrap()` here, it can lead to problems if the templates are
not correct, i.e. we could panic while holding a lock if something holds
a mutex while this is called for the first time.

Add a test to catch registration issues during the package build.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 07:02:17 +01:00
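
A small sketch of the non-panicking registration, assuming handlebars' register_template_string; the error-handling style is illustrative:

    use handlebars::Handlebars;

    // Log and continue instead of unwrapping: a broken template must not
    // panic here (possibly while a lock is held); rendering reports the
    // missing template later.
    fn register_template(hb: &mut Handlebars<'_>, name: &str, source: &str) {
        if let Err(err) = hb.register_template_string(name, source) {
            eprintln!("unable to register template '{}': {}", name, err);
        }
    }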
2729d134bd tools/systemd/time: implement some Traits for TimeSpan
namely
* From<Duration> (to convert easily from duration to timespan)
* Display (for better formatting)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 07:00:55 +01:00
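
A simplified sketch of the two impls on a stand-in type; the real TimeSpan in tools/systemd/time has many more fields:

    use std::fmt;
    use std::time::Duration;

    struct TimeSpan {
        seconds: u64,
    }

    impl From<Duration> for TimeSpan {
        fn from(duration: Duration) -> Self {
            TimeSpan { seconds: duration.as_secs() }
        }
    }

    impl fmt::Display for TimeSpan {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            write!(f, "{}s", self.seconds)
        }
    }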
32b75d36a8 tape: backup media catalogs 2021-03-19 06:58:46 +01:00
62 changed files with 3330 additions and 1586 deletions


@ -1,6 +1,6 @@
[package] [package]
name = "proxmox-backup" name = "proxmox-backup"
version = "1.0.11" version = "1.0.13"
authors = [ authors = [
"Dietmar Maurer <dietmar@proxmox.com>", "Dietmar Maurer <dietmar@proxmox.com>",
"Dominik Csapak <d.csapak@proxmox.com>", "Dominik Csapak <d.csapak@proxmox.com>",
@ -52,7 +52,7 @@ proxmox = { version = "0.11.0", features = [ "sortable-macro", "api-macro", "web
#proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] } #proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] } #proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox-fuse = "0.1.1" proxmox-fuse = "0.1.1"
pxar = { version = "0.10.0", features = [ "tokio-io" ] } pxar = { version = "0.10.1", features = [ "tokio-io" ] }
#pxar = { path = "../pxar", features = [ "tokio-io" ] } #pxar = { path = "../pxar", features = [ "tokio-io" ] }
regex = "1.2" regex = "1.2"
rustyline = "7" rustyline = "7"

debian/changelog

@ -1,3 +1,27 @@
rust-proxmox-backup (1.0.13-1) unstable; urgency=medium
* pxar: improve handling ACL entries on create and restore
-- Proxmox Support Team <support@proxmox.com> Fri, 02 Apr 2021 15:32:01 +0200
rust-proxmox-backup (1.0.12-1) unstable; urgency=medium
* tape: write catalogs to tape (speedup catalog restore)
* tape: add --scan option for catalog restore
* tape: improve locking (lock media-sets)
* tape: ui: enable datastore mappings
* fix #3359: fix blocking writes in async code during pxar create
* api2/tape/backup: wait indefinitely for lock in scheduled backup jobs
* docu improvements
-- Proxmox Support Team <support@proxmox.com> Fri, 26 Mar 2021 14:08:47 +0100
rust-proxmox-backup (1.0.11-1) unstable; urgency=medium rust-proxmox-backup (1.0.11-1) unstable; urgency=medium
* fix feature flag logic in pxar create * fix feature flag logic in pxar create

debian/control

@ -41,8 +41,8 @@ Build-Depends: debhelper (>= 11),
librust-proxmox-0.11+sortable-macro-dev, librust-proxmox-0.11+sortable-macro-dev,
librust-proxmox-0.11+websocket-dev, librust-proxmox-0.11+websocket-dev,
librust-proxmox-fuse-0.1+default-dev (>= 0.1.1-~~), librust-proxmox-fuse-0.1+default-dev (>= 0.1.1-~~),
librust-pxar-0.10+default-dev, librust-pxar-0.10+default-dev (>= 0.10.1-~~),
librust-pxar-0.10+tokio-io-dev, librust-pxar-0.10+tokio-io-dev (>= 0.10.1-~~),
librust-regex-1+default-dev (>= 1.2-~~), librust-regex-1+default-dev (>= 1.2-~~),
librust-rustyline-7+default-dev, librust-rustyline-7+default-dev,
librust-serde-1+default-dev, librust-serde-1+default-dev,


@ -3,6 +3,7 @@ Backup Client Usage
The command line client is called :command:`proxmox-backup-client`. The command line client is called :command:`proxmox-backup-client`.
.. _client_repository:
Repository Locations Repository Locations
-------------------- --------------------
@ -691,8 +692,15 @@ Benchmarking
------------ ------------
The backup client also comes with a benchmarking tool. This tool measures The backup client also comes with a benchmarking tool. This tool measures
various metrics relating to compression and encryption speeds. You can run a various metrics relating to compression and encryption speeds. If a Proxmox
benchmark using the ``benchmark`` subcommand of ``proxmox-backup-client``: Backup repository (remote or local) is specified, the TLS upload speed will get
measured too.
You can run a benchmark using the ``benchmark`` subcommand of
``proxmox-backup-client``:
.. note:: The TLS speed test is only included if a :ref:`backup server
repository is specified <client_repository>`.
.. code-block:: console .. code-block:: console
@ -723,8 +731,7 @@ benchmark using the ``benchmark`` subcommand of ``proxmox-backup-client``:
.. note:: The percentages given in the output table correspond to a .. note:: The percentages given in the output table correspond to a
comparison against a Ryzen 7 2700X. The TLS test connects to the comparison against a Ryzen 7 2700X.
local host, so there is no network involved.
You can also pass the ``--output-format`` parameter to output stats in ``json``, You can also pass the ``--output-format`` parameter to output stats in ``json``,
rather than the default table format. rather than the default table format.


@ -57,6 +57,11 @@ div.sphinxsidebar h3 {
div.sphinxsidebar h1.logo-name { div.sphinxsidebar h1.logo-name {
display: none; display: none;
} }
div.document, div.footer {
width: min(100%, 1320px);
}
@media screen and (max-width: 875px) { @media screen and (max-width: 875px) {
div.sphinxsidebar p.logo { div.sphinxsidebar p.logo {
display: initial; display: initial;
@ -65,9 +70,19 @@ div.sphinxsidebar h1.logo-name {
display: block; display: block;
} }
div.sphinxsidebar span { div.sphinxsidebar span {
color: #AAA; color: #EEE;
} }
ul li.toctree-l1 > a { .sphinxsidebar ul li.toctree-l1 > a, div.sphinxsidebar a {
color: #FFF; color: #FFF;
} }
div.sphinxsidebar {
background-color: #555;
}
div.body {
min-width: 300px;
}
div.footer {
display: block;
margin: 15px auto 0px auto;
}
} }


@ -411,7 +411,7 @@ one media pool, so a job only uses tapes from that pool.
The pool additionally defines how long backup jobs can append data The pool additionally defines how long backup jobs can append data
to a media set. The following settings are possible: to a media set. The following settings are possible:
- Try to use the current media set. - Try to use the current media set (``continue``).
This setting produces one large media set. While this is very This setting produces one large media set. While this is very
space efficient (deduplication, no unused space), it can lead to space efficient (deduplication, no unused space), it can lead to
@ -433,7 +433,7 @@ one media pool, so a job only uses tapes from that pool.
.. NOTE:: Retention period starts with the existence of a newer .. NOTE:: Retention period starts with the existence of a newer
media set. media set.
- Always create a new media set. - Always create a new media set (``always``).
With this setting, each backup job creates a new media set. This With this setting, each backup job creates a new media set. This
is less space efficient, because the media from the last set is less space efficient, because the media from the last set


@ -32,9 +32,6 @@ use crate::api2::types::{NODE_SCHEMA, SUBSCRIPTION_KEY_SCHEMA, Authid};
pub fn check_subscription( pub fn check_subscription(
force: bool, force: bool,
) -> Result<(), Error> { ) -> Result<(), Error> {
// FIXME: drop once proxmox-api-macro is bumped to >> 5.0.0-1
let _remove_me = API_METHOD_CHECK_SUBSCRIPTION_PARAM_DEFAULT_FORCE;
let info = match subscription::read_subscription() { let info = match subscription::read_subscription() {
Err(err) => bail!("could not read subscription status: {}", err), Err(err) => bail!("could not read subscription status: {}", err),
Ok(Some(info)) => info, Ok(Some(info)) => info,


@ -5,6 +5,7 @@ use anyhow::{bail, format_err, Error};
use serde_json::Value; use serde_json::Value;
use proxmox::{ use proxmox::{
try_block,
api::{ api::{
api, api,
RpcEnvironment, RpcEnvironment,
@ -33,6 +34,7 @@ use crate::{
}, },
server::{ server::{
lookup_user_email, lookup_user_email,
TapeBackupJobSummary,
jobstate::{ jobstate::{
Job, Job,
JobState, JobState,
@ -176,8 +178,15 @@ pub fn do_tape_backup_job(
let (drive_config, _digest) = config::drive::config()?; let (drive_config, _digest) = config::drive::config()?;
// early check/lock before starting worker // for scheduled jobs we acquire the lock later in the worker
let drive_lock = lock_tape_device(&drive_config, &setup.drive)?; let drive_lock = if schedule.is_some() {
None
} else {
Some(lock_tape_device(&drive_config, &setup.drive)?)
};
let notify_user = setup.notify_user.as_ref().unwrap_or_else(|| &Userid::root_userid());
let email = lookup_user_email(notify_user);
let upid_str = WorkerTask::new_thread( let upid_str = WorkerTask::new_thread(
&worker_type, &worker_type,
@ -185,26 +194,40 @@ pub fn do_tape_backup_job(
auth_id.clone(), auth_id.clone(),
false, false,
move |worker| { move |worker| {
let _drive_lock = drive_lock; // keep lock guard
set_tape_device_state(&setup.drive, &worker.upid().to_string())?;
job.start(&worker.upid().to_string())?; job.start(&worker.upid().to_string())?;
let mut drive_lock = drive_lock;
let (job_result, summary) = match try_block!({
if schedule.is_some() {
// for scheduled tape backup jobs, we wait indefinitely for the lock
task_log!(worker, "waiting for drive lock...");
loop {
if let Ok(lock) = lock_tape_device(&drive_config, &setup.drive) {
drive_lock = Some(lock);
break;
} // ignore errors
worker.check_abort()?;
}
}
set_tape_device_state(&setup.drive, &worker.upid().to_string())?;
task_log!(worker,"Starting tape backup job '{}'", job_id); task_log!(worker,"Starting tape backup job '{}'", job_id);
if let Some(event_str) = schedule { if let Some(event_str) = schedule {
task_log!(worker,"task triggered by schedule '{}'", event_str); task_log!(worker,"task triggered by schedule '{}'", event_str);
} }
let notify_user = setup.notify_user.as_ref().unwrap_or_else(|| &Userid::root_userid()); backup_worker(
let email = lookup_user_email(notify_user);
let job_result = backup_worker(
&worker, &worker,
datastore, datastore,
&pool_config, &pool_config,
&setup, &setup,
email.clone(), email.clone(),
); )
}) {
Ok(summary) => (Ok(()), summary),
Err(err) => (Err(err), Default::default()),
};
let status = worker.create_state(&job_result); let status = worker.create_state(&job_result);
@ -214,6 +237,7 @@ pub fn do_tape_backup_job(
Some(job.jobname()), Some(job.jobname()),
&setup, &setup,
&job_result, &job_result,
summary,
) { ) {
eprintln!("send tape backup notification failed: {}", err); eprintln!("send tape backup notification failed: {}", err);
} }
@ -340,13 +364,17 @@ pub fn backup(
move |worker| { move |worker| {
let _drive_lock = drive_lock; // keep lock guard let _drive_lock = drive_lock; // keep lock guard
set_tape_device_state(&setup.drive, &worker.upid().to_string())?; set_tape_device_state(&setup.drive, &worker.upid().to_string())?;
let job_result = backup_worker(
let (job_result, summary) = match backup_worker(
&worker, &worker,
datastore, datastore,
&pool_config, &pool_config,
&setup, &setup,
email.clone(), email.clone(),
); ) {
Ok(summary) => (Ok(()), summary),
Err(err) => (Err(err), Default::default()),
};
if let Some(email) = email { if let Some(email) = email {
if let Err(err) = crate::server::send_tape_backup_status( if let Err(err) = crate::server::send_tape_backup_status(
@ -354,6 +382,7 @@ pub fn backup(
None, None,
&setup, &setup,
&job_result, &job_result,
summary,
) { ) {
eprintln!("send tape backup notification failed: {}", err); eprintln!("send tape backup notification failed: {}", err);
} }
@ -374,16 +403,16 @@ fn backup_worker(
pool_config: &MediaPoolConfig, pool_config: &MediaPoolConfig,
setup: &TapeBackupJobSetup, setup: &TapeBackupJobSetup,
email: Option<String>, email: Option<String>,
) -> Result<(), Error> { ) -> Result<TapeBackupJobSummary, Error> {
let status_path = Path::new(TAPE_STATUS_DIR); let status_path = Path::new(TAPE_STATUS_DIR);
let start = std::time::Instant::now();
let _lock = MediaPool::lock(status_path, &pool_config.name)?; let mut summary: TapeBackupJobSummary = Default::default();
task_log!(worker, "update media online status"); task_log!(worker, "update media online status");
let changer_name = update_media_online_status(&setup.drive)?; let changer_name = update_media_online_status(&setup.drive)?;
let pool = MediaPool::with_config(status_path, &pool_config, changer_name)?; let pool = MediaPool::with_config(status_path, &pool_config, changer_name, false)?;
let mut pool_writer = PoolWriter::new(pool, &setup.drive, worker, email)?; let mut pool_writer = PoolWriter::new(pool, &setup.drive, worker, email)?;
@ -406,6 +435,8 @@ fn backup_worker(
let mut errors = false; let mut errors = false;
let mut need_catalog = false; // avoid writing catalog for empty jobs
for (group_number, group) in group_list.into_iter().enumerate() { for (group_number, group) in group_list.into_iter().enumerate() {
progress.done_groups = group_number as u64; progress.done_groups = group_number as u64;
progress.done_snapshots = 0; progress.done_snapshots = 0;
@ -422,8 +453,14 @@ fn backup_worker(
task_log!(worker, "skip snapshot {}", info.backup_dir); task_log!(worker, "skip snapshot {}", info.backup_dir);
continue; continue;
} }
need_catalog = true;
let snapshot_name = info.backup_dir.to_string();
if !backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)? { if !backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)? {
errors = true; errors = true;
} else {
summary.snapshot_list.push(snapshot_name);
} }
progress.done_snapshots = 1; progress.done_snapshots = 1;
task_log!( task_log!(
@ -439,8 +476,14 @@ fn backup_worker(
task_log!(worker, "skip snapshot {}", info.backup_dir); task_log!(worker, "skip snapshot {}", info.backup_dir);
continue; continue;
} }
need_catalog = true;
let snapshot_name = info.backup_dir.to_string();
if !backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)? { if !backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)? {
errors = true; errors = true;
} else {
summary.snapshot_list.push(snapshot_name);
} }
progress.done_snapshots = snapshot_number as u64 + 1; progress.done_snapshots = snapshot_number as u64 + 1;
task_log!( task_log!(
@ -454,6 +497,22 @@ fn backup_worker(
pool_writer.commit()?; pool_writer.commit()?;
if need_catalog {
task_log!(worker, "append media catalog");
let uuid = pool_writer.load_writable_media(worker)?;
let done = pool_writer.append_catalog_archive(worker)?;
if !done {
task_log!(worker, "catalog does not fit on tape, writing to next volume");
pool_writer.set_media_status_full(&uuid)?;
pool_writer.load_writable_media(worker)?;
let done = pool_writer.append_catalog_archive(worker)?;
if !done {
bail!("write_catalog_archive failed on second media");
}
}
}
if setup.export_media_set.unwrap_or(false) { if setup.export_media_set.unwrap_or(false) {
pool_writer.export_media_set(worker)?; pool_writer.export_media_set(worker)?;
} else if setup.eject_media.unwrap_or(false) { } else if setup.eject_media.unwrap_or(false) {
@ -464,7 +523,9 @@ fn backup_worker(
bail!("Tape backup finished with some errors. Please check the task log."); bail!("Tape backup finished with some errors. Please check the task log.");
} }
Ok(()) summary.duration = start.elapsed();
Ok(summary)
} }
// Try to update the the media online status // Try to update the the media online status


@ -48,15 +48,20 @@ use crate::{
MamAttribute, MamAttribute,
LinuxDriveAndMediaStatus, LinuxDriveAndMediaStatus,
}, },
tape::restore::restore_media, tape::restore::{
fast_catalog_restore,
restore_media,
},
}, },
server::WorkerTask, server::WorkerTask,
tape::{ tape::{
TAPE_STATUS_DIR, TAPE_STATUS_DIR,
MediaPool,
Inventory, Inventory,
MediaCatalog, MediaCatalog,
MediaId, MediaId,
lock_media_set,
lock_media_pool,
lock_unassigned_media_pool,
linux_tape_device_list, linux_tape_device_list,
lookup_device_identification, lookup_device_identification,
file_formats::{ file_formats::{
@ -373,10 +378,19 @@ pub fn erase_media(
); );
let status_path = Path::new(TAPE_STATUS_DIR); let status_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(status_path)?; let mut inventory = Inventory::new(status_path);
if let Some(MediaSetLabel { ref pool, ref uuid, ..}) = media_id.media_set_label {
let _pool_lock = lock_media_pool(status_path, pool)?;
let _media_set_lock = lock_media_set(status_path, uuid, None)?;
MediaCatalog::destroy(status_path, &media_id.label.uuid)?; MediaCatalog::destroy(status_path, &media_id.label.uuid)?;
inventory.remove_media(&media_id.label.uuid)?; inventory.remove_media(&media_id.label.uuid)?;
} else {
let _lock = lock_unassigned_media_pool(status_path)?;
MediaCatalog::destroy(status_path, &media_id.label.uuid)?;
inventory.remove_media(&media_id.label.uuid)?;
};
handle.erase_media(fast.unwrap_or(true))?; handle.erase_media(fast.unwrap_or(true))?;
} }
} }
@ -548,29 +562,38 @@ fn write_media_label(
drive.label_tape(&label)?; drive.label_tape(&label)?;
let mut media_set_label = None; let status_path = Path::new(TAPE_STATUS_DIR);
if let Some(ref pool) = pool { let media_id = if let Some(ref pool) = pool {
// assign media to pool by writing special media set label // assign media to pool by writing special media set label
worker.log(format!("Label media '{}' for pool '{}'", label.label_text, pool)); worker.log(format!("Label media '{}' for pool '{}'", label.label_text, pool));
let set = MediaSetLabel::with_data(&pool, [0u8; 16].into(), 0, label.ctime, None); let set = MediaSetLabel::with_data(&pool, [0u8; 16].into(), 0, label.ctime, None);
drive.write_media_set_label(&set, None)?; drive.write_media_set_label(&set, None)?;
media_set_label = Some(set);
} else {
worker.log(format!("Label media '{}' (no pool assignment)", label.label_text));
}
let media_id = MediaId { label, media_set_label }; let media_id = MediaId { label, media_set_label: Some(set) };
let status_path = Path::new(TAPE_STATUS_DIR);
// Create the media catalog // Create the media catalog
MediaCatalog::overwrite(status_path, &media_id, false)?; MediaCatalog::overwrite(status_path, &media_id, false)?;
let mut inventory = Inventory::load(status_path)?; let mut inventory = Inventory::new(status_path);
inventory.store(media_id.clone(), false)?; inventory.store(media_id.clone(), false)?;
media_id
} else {
worker.log(format!("Label media '{}' (no pool assignment)", label.label_text));
let media_id = MediaId { label, media_set_label: None };
// Create the media catalog
MediaCatalog::overwrite(status_path, &media_id, false)?;
let mut inventory = Inventory::new(status_path);
inventory.store(media_id.clone(), false)?;
media_id
};
drive.rewind()?; drive.rewind()?;
match drive.read_label() { match drive.read_label() {
@ -705,14 +728,24 @@ pub async fn read_label(
if let Err(err) = drive.set_encryption(encrypt_fingerprint) { if let Err(err) = drive.set_encryption(encrypt_fingerprint) {
// try, but ignore errors. just log to stderr // try, but ignore errors. just log to stderr
eprintln!("uable to load encryption key: {}", err); eprintln!("unable to load encryption key: {}", err);
} }
} }
if let Some(true) = inventorize { if let Some(true) = inventorize {
let state_path = Path::new(TAPE_STATUS_DIR); let state_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(state_path)?; let mut inventory = Inventory::new(state_path);
if let Some(MediaSetLabel { ref pool, ref uuid, ..}) = media_id.media_set_label {
let _pool_lock = lock_media_pool(state_path, pool)?;
let _lock = lock_media_set(state_path, uuid, None)?;
MediaCatalog::destroy_unrelated_catalog(state_path, &media_id)?;
inventory.store(media_id, false)?; inventory.store(media_id, false)?;
} else {
let _lock = lock_unassigned_media_pool(state_path)?;
MediaCatalog::destroy(state_path, &media_id.label.uuid)?;
inventory.store(media_id, false)?;
};
} }
flat flat
@ -947,7 +980,17 @@ pub fn update_inventory(
continue; continue;
} }
worker.log(format!("inventorize media '{}' with uuid '{}'", label_text, media_id.label.uuid)); worker.log(format!("inventorize media '{}' with uuid '{}'", label_text, media_id.label.uuid));
if let Some(MediaSetLabel { ref pool, ref uuid, ..}) = media_id.media_set_label {
let _pool_lock = lock_media_pool(state_path, pool)?;
let _lock = lock_media_set(state_path, uuid, None)?;
MediaCatalog::destroy_unrelated_catalog(state_path, &media_id)?;
inventory.store(media_id, false)?; inventory.store(media_id, false)?;
} else {
let _lock = lock_unassigned_media_pool(state_path)?;
MediaCatalog::destroy(state_path, &media_id.label.uuid)?;
inventory.store(media_id, false)?;
};
} }
} }
changer.unload_media(None)?; changer.unload_media(None)?;
@ -1184,6 +1227,11 @@ pub async fn status(drive: String) -> Result<LinuxDriveAndMediaStatus, Error> {
type: bool, type: bool,
optional: true, optional: true,
}, },
scan: {
description: "Re-read the whole tape to reconstruct the catalog instead of restoring saved versions.",
type: bool,
optional: true,
},
verbose: { verbose: {
description: "Verbose mode - log all found chunks.", description: "Verbose mode - log all found chunks.",
type: bool, type: bool,
@ -1202,11 +1250,13 @@ pub async fn status(drive: String) -> Result<LinuxDriveAndMediaStatus, Error> {
pub fn catalog_media( pub fn catalog_media(
drive: String, drive: String,
force: Option<bool>, force: Option<bool>,
scan: Option<bool>,
verbose: Option<bool>, verbose: Option<bool>,
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let verbose = verbose.unwrap_or(false); let verbose = verbose.unwrap_or(false);
let force = force.unwrap_or(false); let force = force.unwrap_or(false);
let scan = scan.unwrap_or(false);
let upid_str = run_drive_worker( let upid_str = run_drive_worker(
rpcenv, rpcenv,
@ -1237,19 +1287,22 @@ pub fn catalog_media(
let status_path = Path::new(TAPE_STATUS_DIR); let status_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(status_path)?; let mut inventory = Inventory::new(status_path);
inventory.store(media_id.clone(), false)?;
let pool = match media_id.media_set_label { let (_media_set_lock, media_set_uuid) = match media_id.media_set_label {
None => { None => {
worker.log("media is empty"); worker.log("media is empty");
let _lock = lock_unassigned_media_pool(status_path)?;
MediaCatalog::destroy(status_path, &media_id.label.uuid)?; MediaCatalog::destroy(status_path, &media_id.label.uuid)?;
inventory.store(media_id.clone(), false)?;
return Ok(()); return Ok(());
} }
Some(ref set) => { Some(ref set) => {
if set.uuid.as_ref() == [0u8;16] { // media is empty if set.uuid.as_ref() == [0u8;16] { // media is empty
worker.log("media is empty"); worker.log("media is empty");
let _lock = lock_unassigned_media_pool(status_path)?;
MediaCatalog::destroy(status_path, &media_id.label.uuid)?; MediaCatalog::destroy(status_path, &media_id.label.uuid)?;
inventory.store(media_id.clone(), false)?;
return Ok(()); return Ok(());
} }
let encrypt_fingerprint = set.encryption_key_fingerprint.clone() let encrypt_fingerprint = set.encryption_key_fingerprint.clone()
@ -1257,16 +1310,36 @@ pub fn catalog_media(
drive.set_encryption(encrypt_fingerprint)?; drive.set_encryption(encrypt_fingerprint)?;
set.pool.clone() let _pool_lock = lock_media_pool(status_path, &set.pool)?;
let media_set_lock = lock_media_set(status_path, &set.uuid, None)?;
MediaCatalog::destroy_unrelated_catalog(status_path, &media_id)?;
inventory.store(media_id.clone(), false)?;
(media_set_lock, &set.uuid)
} }
}; };
let _lock = MediaPool::lock(status_path, &pool)?;
if MediaCatalog::exists(status_path, &media_id.label.uuid) && !force { if MediaCatalog::exists(status_path, &media_id.label.uuid) && !force {
bail!("media catalog exists (please use --force to overwrite)"); bail!("media catalog exists (please use --force to overwrite)");
} }
if !scan {
let media_set = inventory.compute_media_set_members(media_set_uuid)?;
if fast_catalog_restore(&worker, &mut drive, &media_set, &media_id.label.uuid)? {
return Ok(())
}
task_log!(worker, "no catalog found");
}
task_log!(worker, "scanning entire media to reconstruct catalog");
drive.rewind()?;
drive.read_label()?; // skip over labels - we already read them above
restore_media(&worker, &mut drive, &media_id, None, verbose)?; restore_media(&worker, &mut drive, &media_id, None, verbose)?;
Ok(()) Ok(())


@ -122,7 +122,7 @@ pub async fn list_media(
let config: MediaPoolConfig = config.lookup("pool", pool_name)?; let config: MediaPoolConfig = config.lookup("pool", pool_name)?;
let changer_name = None; // assume standalone drive let changer_name = None; // assume standalone drive
let mut pool = MediaPool::with_config(status_path, &config, changer_name)?; let mut pool = MediaPool::with_config(status_path, &config, changer_name, true)?;
let current_time = proxmox::tools::time::epoch_i64(); let current_time = proxmox::tools::time::epoch_i64();


@ -1,6 +1,9 @@
use std::path::Path; use std::path::Path;
use std::ffi::OsStr; use std::ffi::OsStr;
use std::collections::{HashMap, HashSet};
use std::convert::TryFrom; use std::convert::TryFrom;
use std::io::{Seek, SeekFrom};
use std::sync::Arc;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use serde_json::Value; use serde_json::Value;
@ -12,6 +15,7 @@ use proxmox::{
RpcEnvironmentType, RpcEnvironmentType,
Router, Router,
Permission, Permission,
schema::parse_property_string,
section_config::SectionConfigData, section_config::SectionConfigData,
}, },
tools::{ tools::{
@ -26,10 +30,12 @@ use proxmox::{
use crate::{ use crate::{
task_log, task_log,
task_warn,
task::TaskState, task::TaskState,
tools::compute_file_csum, tools::compute_file_csum,
api2::types::{ api2::types::{
DATASTORE_SCHEMA, DATASTORE_MAP_ARRAY_SCHEMA,
DATASTORE_MAP_LIST_SCHEMA,
DRIVE_NAME_SCHEMA, DRIVE_NAME_SCHEMA,
UPID_SCHEMA, UPID_SCHEMA,
Authid, Authid,
@ -65,9 +71,10 @@ use crate::{
TAPE_STATUS_DIR, TAPE_STATUS_DIR,
TapeRead, TapeRead,
MediaId, MediaId,
MediaSet,
MediaCatalog, MediaCatalog,
MediaPool,
Inventory, Inventory,
lock_media_set,
file_formats::{ file_formats::{
PROXMOX_BACKUP_MEDIA_LABEL_MAGIC_1_0, PROXMOX_BACKUP_MEDIA_LABEL_MAGIC_1_0,
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0, PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0,
@ -76,10 +83,12 @@ use crate::{
PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0, PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0,
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0, PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0,
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1, PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1,
PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0,
MediaContentHeader, MediaContentHeader,
ChunkArchiveHeader, ChunkArchiveHeader,
ChunkArchiveDecoder, ChunkArchiveDecoder,
SnapshotArchiveHeader, SnapshotArchiveHeader,
CatalogArchiveHeader,
}, },
drive::{ drive::{
TapeDriver, TapeDriver,
@ -90,14 +99,75 @@ use crate::{
}, },
}; };
pub const ROUTER: Router = Router::new() pub struct DataStoreMap {
.post(&API_METHOD_RESTORE); map: HashMap<String, Arc<DataStore>>,
default: Option<Arc<DataStore>>,
}
impl TryFrom<String> for DataStoreMap {
type Error = Error;
fn try_from(value: String) -> Result<Self, Error> {
let value = parse_property_string(&value, &DATASTORE_MAP_ARRAY_SCHEMA)?;
let mut mapping: Vec<String> = value
.as_array()
.unwrap()
.iter()
.map(|v| v.as_str().unwrap().to_string())
.collect();
let mut map = HashMap::new();
let mut default = None;
while let Some(mut store) = mapping.pop() {
if let Some(index) = store.find('=') {
let mut target = store.split_off(index);
target.remove(0); // remove '='
let datastore = DataStore::lookup_datastore(&target)?;
map.insert(store, datastore);
} else if default.is_none() {
default = Some(DataStore::lookup_datastore(&store)?);
} else {
bail!("multiple default stores given");
}
}
Ok(Self { map, default })
}
}
impl DataStoreMap {
fn used_datastores<'a>(&self) -> HashSet<&str> {
let mut set = HashSet::new();
for store in self.map.values() {
set.insert(store.name());
}
if let Some(ref store) = self.default {
set.insert(store.name());
}
set
}
fn get_datastore(&self, source: &str) -> Option<&DataStore> {
if let Some(store) = self.map.get(source) {
return Some(&store);
}
if let Some(ref store) = self.default {
return Some(&store);
}
return None;
}
}
pub const ROUTER: Router = Router::new().post(&API_METHOD_RESTORE);
#[api( #[api(
input: { input: {
properties: { properties: {
store: { store: {
schema: DATASTORE_SCHEMA, schema: DATASTORE_MAP_LIST_SCHEMA,
}, },
drive: { drive: {
schema: DRIVE_NAME_SCHEMA, schema: DRIVE_NAME_SCHEMA,
@ -135,10 +205,17 @@ pub fn restore(
owner: Option<Authid>, owner: Option<Authid>,
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?; let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let store_map = DataStoreMap::try_from(store)
.map_err(|err| format_err!("cannot parse store mapping: {}", err))?;
let used_datastores = store_map.used_datastores();
if used_datastores.len() == 0 {
bail!("no datastores given");
}
for store in used_datastores.iter() {
let privs = user_info.lookup_privs(&auth_id, &["datastore", &store]); let privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
if (privs & PRIV_DATASTORE_BACKUP) == 0 { if (privs & PRIV_DATASTORE_BACKUP) == 0 {
bail!("no permissions on /datastore/{}", store); bail!("no permissions on /datastore/{}", store);
@ -146,26 +223,28 @@ pub fn restore(
if let Some(ref owner) = owner { if let Some(ref owner) = owner {
let correct_owner = owner == &auth_id let correct_owner = owner == &auth_id
|| (owner.is_token() || (owner.is_token() && !auth_id.is_token() && owner.user() == auth_id.user());
&& !auth_id.is_token()
&& owner.user() == auth_id.user());
// same permission as changing ownership after syncing // same permission as changing ownership after syncing
if !correct_owner && privs & PRIV_DATASTORE_MODIFY == 0 { if !correct_owner && privs & PRIV_DATASTORE_MODIFY == 0 {
bail!("no permission to restore as '{}'", owner); bail!("no permission to restore as '{}'", owner);
} }
} }
}
let privs = user_info.lookup_privs(&auth_id, &["tape", "drive", &drive]); let privs = user_info.lookup_privs(&auth_id, &["tape", "drive", &drive]);
if (privs & PRIV_TAPE_READ) == 0 { if (privs & PRIV_TAPE_READ) == 0 {
bail!("no permissions on /tape/drive/{}", drive); bail!("no permissions on /tape/drive/{}", drive);
} }
let status_path = Path::new(TAPE_STATUS_DIR);
let inventory = Inventory::load(status_path)?;
let media_set_uuid = media_set.parse()?; let media_set_uuid = media_set.parse()?;
let status_path = Path::new(TAPE_STATUS_DIR);
let _lock = lock_media_set(status_path, &media_set_uuid, None)?;
let inventory = Inventory::load(status_path)?;
let pool = inventory.lookup_media_set_pool(&media_set_uuid)?; let pool = inventory.lookup_media_set_pool(&media_set_uuid)?;
let privs = user_info.lookup_privs(&auth_id, &["tape", "pool", &pool]); let privs = user_info.lookup_privs(&auth_id, &["tape", "pool", &pool]);
@ -173,8 +252,6 @@ pub fn restore(
bail!("no permissions on /tape/pool/{}", pool); bail!("no permissions on /tape/pool/{}", pool);
} }
let datastore = DataStore::lookup_datastore(&store)?;
let (drive_config, _digest) = config::drive::config()?; let (drive_config, _digest) = config::drive::config()?;
// early check/lock before starting worker // early check/lock before starting worker
@ -182,9 +259,14 @@ pub fn restore(
let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI; let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;
let taskid = used_datastores
.iter()
.map(|s| s.to_string())
.collect::<Vec<String>>()
.join(", ");
let upid_str = WorkerTask::new_thread( let upid_str = WorkerTask::new_thread(
"tape-restore", "tape-restore",
Some(store.clone()), Some(taskid),
auth_id.clone(), auth_id.clone(),
to_stdout, to_stdout,
move |worker| { move |worker| {
@ -192,8 +274,6 @@ pub fn restore(
set_tape_device_state(&drive, &worker.upid().to_string())?; set_tape_device_state(&drive, &worker.upid().to_string())?;
let _lock = MediaPool::lock(status_path, &pool)?;
let members = inventory.compute_media_set_members(&media_set_uuid)?; let members = inventory.compute_media_set_members(&media_set_uuid)?;
let media_list = members.media_list(); let media_list = members.media_list();
@ -224,7 +304,11 @@ pub fn restore(
task_log!(worker, "Encryption key fingerprint: {}", fingerprint); task_log!(worker, "Encryption key fingerprint: {}", fingerprint);
} }
task_log!(worker, "Pool: {}", pool); task_log!(worker, "Pool: {}", pool);
task_log!(worker, "Datastore: {}", store); task_log!(worker, "Datastore(s):");
store_map
.used_datastores()
.iter()
.for_each(|store| task_log!(worker, "\t{}", store));
task_log!(worker, "Drive: {}", drive); task_log!(worker, "Drive: {}", drive);
task_log!( task_log!(
worker, worker,
@ -241,7 +325,7 @@ pub fn restore(
media_id, media_id,
&drive_config, &drive_config,
&drive, &drive,
&datastore, &store_map,
&auth_id, &auth_id,
&notify_user, &notify_user,
&owner, &owner,
@ -272,12 +356,11 @@ pub fn request_and_restore_media(
media_id: &MediaId, media_id: &MediaId,
drive_config: &SectionConfigData, drive_config: &SectionConfigData,
drive_name: &str, drive_name: &str,
datastore: &DataStore, store_map: &DataStoreMap,
authid: &Authid, authid: &Authid,
notify_user: &Option<Userid>, notify_user: &Option<Userid>,
owner: &Option<Authid>, owner: &Option<Authid>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let media_set_uuid = match media_id.media_set_label { let media_set_uuid = match media_id.media_set_label {
None => bail!("restore_media: no media set - internal error"), None => bail!("restore_media: no media set - internal error"),
Some(ref set) => &set.uuid, Some(ref set) => &set.uuid,
@ -310,7 +393,13 @@ pub fn request_and_restore_media(
let restore_owner = owner.as_ref().unwrap_or(authid); let restore_owner = owner.as_ref().unwrap_or(authid);
restore_media(worker, &mut drive, &info, Some((datastore, restore_owner)), false) restore_media(
worker,
&mut drive,
&info,
Some((&store_map, restore_owner)),
false,
)
} }
/// Restore complete media content and catalog /// Restore complete media content and catalog
@ -320,7 +409,7 @@ pub fn restore_media(
worker: &WorkerTask, worker: &WorkerTask,
drive: &mut Box<dyn TapeDriver>, drive: &mut Box<dyn TapeDriver>,
media_id: &MediaId, media_id: &MediaId,
target: Option<(&DataStore, &Authid)>, target: Option<(&DataStoreMap, &Authid)>,
verbose: bool, verbose: bool,
) -> Result<(), Error> { ) -> Result<(), Error> {
@ -349,11 +438,10 @@ fn restore_archive<'a>(
worker: &WorkerTask, worker: &WorkerTask,
mut reader: Box<dyn 'a + TapeRead>, mut reader: Box<dyn 'a + TapeRead>,
current_file_number: u64, current_file_number: u64,
target: Option<(&DataStore, &Authid)>, target: Option<(&DataStoreMap, &Authid)>,
catalog: &mut MediaCatalog, catalog: &mut MediaCatalog,
verbose: bool, verbose: bool,
) -> Result<(), Error> { ) -> Result<(), Error> {
let header: MediaContentHeader = unsafe { reader.read_le_value()? }; let header: MediaContentHeader = unsafe { reader.read_le_value()? };
if header.magic != PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0 { if header.magic != PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0 {
bail!("missing MediaContentHeader"); bail!("missing MediaContentHeader");
@ -381,14 +469,22 @@ fn restore_archive<'a>(
let backup_dir: BackupDir = snapshot.parse()?; let backup_dir: BackupDir = snapshot.parse()?;
if let Some((datastore, authid)) = target.as_ref() { if let Some((store_map, authid)) = target.as_ref() {
if let Some(datastore) = store_map.get_datastore(&datastore_name) {
let (owner, _group_lock) = datastore.create_locked_backup_group(backup_dir.group(), authid)?; let (owner, _group_lock) =
if *authid != &owner { // only the owner is allowed to create additional snapshots datastore.create_locked_backup_group(backup_dir.group(), authid)?;
bail!("restore '{}' failed - owner check failed ({} != {})", snapshot, authid, owner); if *authid != &owner {
// only the owner is allowed to create additional snapshots
bail!(
"restore '{}' failed - owner check failed ({} != {})",
snapshot,
authid,
owner
);
} }
let (rel_path, is_new, _snap_lock) = datastore.create_locked_backup_dir(&backup_dir)?; let (rel_path, is_new, _snap_lock) =
datastore.create_locked_backup_dir(&backup_dir)?;
let mut path = datastore.base_path(); let mut path = datastore.base_path();
path.push(rel_path); path.push(rel_path);
@ -405,12 +501,20 @@ fn restore_archive<'a>(
task_log!(worker, "skip incomplete snapshot {}", backup_dir); task_log!(worker, "skip incomplete snapshot {}", backup_dir);
} }
Ok(true) => { Ok(true) => {
catalog.register_snapshot(Uuid::from(header.uuid), current_file_number, &datastore_name, &snapshot)?; catalog.register_snapshot(
Uuid::from(header.uuid),
current_file_number,
&datastore_name,
&snapshot,
)?;
catalog.commit_if_large()?; catalog.commit_if_large()?;
} }
} }
return Ok(()); return Ok(());
} }
} else {
task_log!(worker, "skipping...");
}
} }
reader.skip_to_end()?; // read all data reader.skip_to_end()?; // read all data
@ -431,10 +535,17 @@ fn restore_archive<'a>(
let source_datastore = archive_header.store; let source_datastore = archive_header.store;
task_log!(worker, "File {}: chunk archive for datastore '{}'", current_file_number, source_datastore); task_log!(worker, "File {}: chunk archive for datastore '{}'", current_file_number, source_datastore);
let datastore = target.as_ref().map(|t| t.0); let datastore = target
.as_ref()
.and_then(|t| t.0.get_datastore(&source_datastore));
if datastore.is_some() || target.is_none() {
if let Some(chunks) = restore_chunk_archive(worker, reader, datastore, verbose)? { if let Some(chunks) = restore_chunk_archive(worker, reader, datastore, verbose)? {
catalog.start_chunk_archive(Uuid::from(header.uuid), current_file_number, &source_datastore)?; catalog.start_chunk_archive(
Uuid::from(header.uuid),
current_file_number,
&source_datastore,
)?;
for digest in chunks.iter() { for digest in chunks.iter() {
catalog.register_chunk(&digest)?; catalog.register_chunk(&digest)?;
} }
@ -442,6 +553,22 @@ fn restore_archive<'a>(
catalog.end_chunk_archive()?; catalog.end_chunk_archive()?;
catalog.commit_if_large()?; catalog.commit_if_large()?;
} }
return Ok(());
} else if target.is_some() {
task_log!(worker, "skipping...");
}
reader.skip_to_end()?; // read all data
}
PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0 => {
let header_data = reader.read_exact_allocated(header.size as usize)?;
let archive_header: CatalogArchiveHeader = serde_json::from_slice(&header_data)
.map_err(|err| format_err!("unable to parse catalog archive header - {}", err))?;
task_log!(worker, "File {}: skip catalog '{}'", current_file_number, archive_header.uuid);
reader.skip_to_end()?; // read all data
} }
_ => bail!("unknown content magic {:?}", header.content_magic), _ => bail!("unknown content magic {:?}", header.content_magic),
} }
@ -660,3 +787,137 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
Ok(()) Ok(())
} }
/// Try to restore media catalogs (form catalog_archives)
pub fn fast_catalog_restore(
worker: &WorkerTask,
drive: &mut Box<dyn TapeDriver>,
media_set: &MediaSet,
uuid: &Uuid, // current media Uuid
) -> Result<bool, Error> {
let status_path = Path::new(TAPE_STATUS_DIR);
let current_file_number = drive.current_file_number()?;
if current_file_number != 2 {
bail!("fast_catalog_restore: wrong media position - internal error");
}
let mut found_catalog = false;
let mut moved_to_eom = false;
loop {
let current_file_number = drive.current_file_number()?;
{ // limit reader scope
let mut reader = match drive.read_next_file()? {
None => {
task_log!(worker, "detected EOT after {} files", current_file_number);
break;
}
Some(reader) => reader,
};
let header: MediaContentHeader = unsafe { reader.read_le_value()? };
if header.magic != PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0 {
bail!("missing MediaContentHeader");
}
if header.content_magic == PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0 {
task_log!(worker, "found catalog at pos {}", current_file_number);
let header_data = reader.read_exact_allocated(header.size as usize)?;
let archive_header: CatalogArchiveHeader = serde_json::from_slice(&header_data)
.map_err(|err| format_err!("unable to parse catalog archive header - {}", err))?;
if &archive_header.media_set_uuid != media_set.uuid() {
task_log!(worker, "skipping unrelated catalog at pos {}", current_file_number);
reader.skip_to_end()?; // read all data
continue;
}
let catalog_uuid = &archive_header.uuid;
let wanted = media_set
.media_list()
.iter()
.find(|e| {
match e {
None => false,
Some(uuid) => uuid == catalog_uuid,
}
})
.is_some();
if !wanted {
task_log!(worker, "skip catalog because media '{}' not inventarized", catalog_uuid);
reader.skip_to_end()?; // read all data
continue;
}
if catalog_uuid == uuid {
// always restore and overwrite catalog
} else {
// only restore if catalog does not exist
if MediaCatalog::exists(status_path, catalog_uuid) {
task_log!(worker, "catalog for media '{}' already exists", catalog_uuid);
reader.skip_to_end()?; // read all data
continue;
}
}
let mut file = MediaCatalog::create_temporary_database_file(status_path, catalog_uuid)?;
std::io::copy(&mut reader, &mut file)?;
file.seek(SeekFrom::Start(0))?;
match MediaCatalog::parse_catalog_header(&mut file)? {
(true, Some(media_uuid), Some(media_set_uuid)) => {
if &media_uuid != catalog_uuid {
task_log!(worker, "catalog uuid missmatch at pos {}", current_file_number);
continue;
}
if media_set_uuid != archive_header.media_set_uuid {
task_log!(worker, "catalog media_set missmatch at pos {}", current_file_number);
continue;
}
MediaCatalog::finish_temporary_database(status_path, &media_uuid, true)?;
if catalog_uuid == uuid {
task_log!(worker, "successfully restored catalog");
found_catalog = true
} else {
task_log!(worker, "successfully restored related catalog {}", media_uuid);
}
}
_ => {
task_warn!(worker, "got incomplete catalog header - skip file");
continue;
}
}
continue;
}
}
if moved_to_eom {
break; // already done - stop
}
moved_to_eom = true;
task_log!(worker, "searching for catalog at EOT (moving to EOT)");
drive.move_to_last_file()?;
let new_file_number = drive.current_file_number()?;
if new_file_number < (current_file_number + 1) {
break; // no new content - stop
}
}
Ok(found_catalog)
}


@ -99,6 +99,8 @@ const_regex!{
pub ZPOOL_NAME_REGEX = r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$"; pub ZPOOL_NAME_REGEX = r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$";
pub UUID_REGEX = r"^[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}$"; pub UUID_REGEX = r"^[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}$";
pub DATASTORE_MAP_REGEX = concat!(r"(:?", PROXMOX_SAFE_ID_REGEX_STR!(), r"=)?", PROXMOX_SAFE_ID_REGEX_STR!());
} }
pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat = pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat =
@ -164,6 +166,9 @@ pub const SUBSCRIPTION_KEY_FORMAT: ApiStringFormat =
pub const BLOCKDEVICE_NAME_FORMAT: ApiStringFormat = pub const BLOCKDEVICE_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&BLOCKDEVICE_NAME_REGEX); ApiStringFormat::Pattern(&BLOCKDEVICE_NAME_REGEX);
pub const DATASTORE_MAP_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DATASTORE_MAP_REGEX);
pub const PASSWORD_SCHEMA: Schema = StringSchema::new("Password.") pub const PASSWORD_SCHEMA: Schema = StringSchema::new("Password.")
.format(&PASSWORD_FORMAT) .format(&PASSWORD_FORMAT)
.min_length(1) .min_length(1)
@ -356,6 +361,25 @@ pub const DATASTORE_SCHEMA: Schema = StringSchema::new("Datastore name.")
.max_length(32) .max_length(32)
.schema(); .schema();
pub const DATASTORE_MAP_SCHEMA: Schema = StringSchema::new("Datastore mapping.")
.format(&DATASTORE_MAP_FORMAT)
.min_length(3)
.max_length(65)
.type_text("(<source>=)?<target>")
.schema();
pub const DATASTORE_MAP_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Datastore mapping list.", &DATASTORE_MAP_SCHEMA)
.schema();
pub const DATASTORE_MAP_LIST_SCHEMA: Schema = StringSchema::new(
"A list of Datastore mappings (or single datastore), comma separated. \
For example 'a=b,e' maps the source datastore 'a' to target 'b and \
all other sources to the default 'e'. If no default is given, only the \
specified sources are mapped.")
.format(&ApiStringFormat::PropertyString(&DATASTORE_MAP_ARRAY_SCHEMA))
.schema();
pub const MEDIA_SET_UUID_SCHEMA: Schema = pub const MEDIA_SET_UUID_SCHEMA: Schema =
StringSchema::new("MediaSet Uuid (We use the all-zero Uuid to reseve an empty media for a specific pool).") StringSchema::new("MediaSet Uuid (We use the all-zero Uuid to reseve an empty media for a specific pool).")
.format(&UUID_FORMAT) .format(&UUID_FORMAT)


@ -180,7 +180,7 @@ fn get_tape_handle(param: &Value) -> Result<LinuxTapeHandle, Error> {
///
/// Positioning is done by first rewinding the tape and then spacing
/// forward over count file marks.
fn asf(count: i32, param: Value) -> Result<(), Error> {
fn asf(count: usize, param: Value) -> Result<(), Error> {

    let mut handle = get_tape_handle(&param)?;
@ -212,7 +212,7 @@ fn asf(count: i32, param: Value) -> Result<(), Error> {
/// Backward space count files (position before file mark).
///
/// The tape is positioned on the last block of the previous file.
fn bsf(count: i32, param: Value) -> Result<(), Error> {
fn bsf(count: usize, param: Value) -> Result<(), Error> {

    let mut handle = get_tape_handle(&param)?;
@ -478,7 +478,7 @@ fn erase(fast: Option<bool>, param: Value) -> Result<(), Error> {
/// Forward space count files (position after file mark).
///
/// The tape is positioned on the first block of the next file.
fn fsf(count: i32, param: Value) -> Result<(), Error> {
fn fsf(count: usize, param: Value) -> Result<(), Error> {

    let mut handle = get_tape_handle(&param)?;

View File

@ -1,7 +1,5 @@
use std::collections::HashSet;
use std::convert::TryFrom;
use std::io::{self, Read, Write, Seek, SeekFrom};
use std::os::unix::io::{FromRawFd, RawFd};
use std::path::{Path, PathBuf};
use std::pin::Pin;
use std::sync::{Arc, Mutex};
@ -19,7 +17,7 @@ use pathpatterns::{MatchEntry, MatchType, PatternFlag};
use proxmox::{
    tools::{
        time::{strftime_local, epoch_i64},
        fs::{file_get_contents, file_get_json, replace_file, CreateOptions, image_size},
        fs::{file_get_json, replace_file, CreateOptions, image_size},
    },
    api::{
        api,
@ -32,7 +30,11 @@ use proxmox::{
};
use pxar::accessor::{MaybeReady, ReadAt, ReadAtOperation};

use proxmox_backup::tools;
use proxmox_backup::tools::{
    self,
    StdChannelWriter,
    TokioWriterAdapter,
};
use proxmox_backup::api2::types::*;
use proxmox_backup::api2::version;
use proxmox_backup::client::*;
@ -67,8 +69,18 @@ use proxmox_backup::backup::{
mod proxmox_backup_client;
use proxmox_backup_client::*;

mod proxmox_client_tools;
pub mod proxmox_client_tools;
use proxmox_client_tools::*;
use proxmox_client_tools::{
    complete_archive_name, complete_auth_id, complete_backup_group, complete_backup_snapshot,
    complete_backup_source, complete_chunk_size, complete_group_or_snapshot,
    complete_img_archive_name, complete_pxar_archive_name, complete_repository, connect,
    extract_repository_from_value,
    key_source::{
        crypto_parameters, format_key_source, get_encryption_key_password, KEYFD_SCHEMA,
        KEYFILE_SCHEMA, MASTER_PUBKEY_FD_SCHEMA, MASTER_PUBKEY_FILE_SCHEMA,
    },
    CHUNK_SIZE_SCHEMA, REPO_URL_SCHEMA,
};

fn record_repository(repo: &BackupRepository) {
@ -162,7 +174,7 @@ async fn backup_directory<P: AsRef<Path>>(
    dir_path: P,
    archive_name: &str,
    chunk_size: Option<usize>,
    catalog: Arc<Mutex<CatalogWriter<crate::tools::StdChannelWriter>>>,
    catalog: Arc<Mutex<CatalogWriter<TokioWriterAdapter<StdChannelWriter>>>>,
    pxar_create_options: proxmox_backup::pxar::PxarCreateOptions,
    upload_options: UploadOptions,
) -> Result<BackupStats, Error> {
@ -460,7 +472,7 @@ async fn start_garbage_collection(param: Value) -> Result<Value, Error> {
}

struct CatalogUploadResult {
    catalog_writer: Arc<Mutex<CatalogWriter<crate::tools::StdChannelWriter>>>,
    catalog_writer: Arc<Mutex<CatalogWriter<TokioWriterAdapter<StdChannelWriter>>>>,
    result: tokio::sync::oneshot::Receiver<Result<BackupStats, Error>>,
}
@ -473,7 +485,7 @@ fn spawn_catalog_upload(
    let catalog_chunk_size = 512*1024;
    let catalog_chunk_stream = ChunkStream::new(catalog_stream, Some(catalog_chunk_size));

    let catalog_writer = Arc::new(Mutex::new(CatalogWriter::new(crate::tools::StdChannelWriter::new(catalog_tx))?));
    let catalog_writer = Arc::new(Mutex::new(CatalogWriter::new(TokioWriterAdapter::new(StdChannelWriter::new(catalog_tx)))?));

    let (catalog_result_tx, catalog_result_rx) = tokio::sync::oneshot::channel();
@ -499,437 +511,6 @@ fn spawn_catalog_upload(
    Ok(CatalogUploadResult { catalog_writer, result: catalog_result_rx })
}
#[derive(Clone, Debug, Eq, PartialEq)]
enum KeySource {
DefaultKey,
Fd,
Path(String),
}
fn format_key_source(source: &KeySource, key_type: &str) -> String {
match source {
KeySource::DefaultKey => format!("Using default {} key..", key_type),
KeySource::Fd => format!("Using {} key from file descriptor..", key_type),
KeySource::Path(path) => format!("Using {} key from '{}'..", key_type, path),
}
}
#[derive(Clone, Debug, Eq, PartialEq)]
struct KeyWithSource {
pub source: KeySource,
pub key: Vec<u8>,
}
impl KeyWithSource {
pub fn from_fd(key: Vec<u8>) -> Self {
Self {
source: KeySource::Fd,
key,
}
}
pub fn from_default(key: Vec<u8>) -> Self {
Self {
source: KeySource::DefaultKey,
key,
}
}
pub fn from_path(path: String, key: Vec<u8>) -> Self {
Self {
source: KeySource::Path(path),
key,
}
}
}
#[derive(Debug, Eq, PartialEq)]
struct CryptoParams {
mode: CryptMode,
enc_key: Option<KeyWithSource>,
// FIXME switch to openssl::rsa::rsa<openssl::pkey::Public> once that is Eq?
master_pubkey: Option<KeyWithSource>,
}
fn crypto_parameters(param: &Value) -> Result<CryptoParams, Error> {
let keyfile = match param.get("keyfile") {
Some(Value::String(keyfile)) => Some(keyfile),
Some(_) => bail!("bad --keyfile parameter type"),
None => None,
};
let key_fd = match param.get("keyfd") {
Some(Value::Number(key_fd)) => Some(
RawFd::try_from(key_fd
.as_i64()
.ok_or_else(|| format_err!("bad key fd: {:?}", key_fd))?
)
.map_err(|err| format_err!("bad key fd: {:?}: {}", key_fd, err))?
),
Some(_) => bail!("bad --keyfd parameter type"),
None => None,
};
let master_pubkey_file = match param.get("master-pubkey-file") {
Some(Value::String(keyfile)) => Some(keyfile),
Some(_) => bail!("bad --master-pubkey-file parameter type"),
None => None,
};
let master_pubkey_fd = match param.get("master-pubkey-fd") {
Some(Value::Number(key_fd)) => Some(
RawFd::try_from(key_fd
.as_i64()
.ok_or_else(|| format_err!("bad master public key fd: {:?}", key_fd))?
)
.map_err(|err| format_err!("bad public master key fd: {:?}: {}", key_fd, err))?
),
Some(_) => bail!("bad --master-pubkey-fd parameter type"),
None => None,
};
let mode: Option<CryptMode> = match param.get("crypt-mode") {
Some(mode) => Some(serde_json::from_value(mode.clone())?),
None => None,
};
let key = match (keyfile, key_fd) {
(None, None) => None,
(Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
(Some(keyfile), None) => Some(KeyWithSource::from_path(
keyfile.clone(),
file_get_contents(keyfile)?,
)),
(None, Some(fd)) => {
let input = unsafe { std::fs::File::from_raw_fd(fd) };
let mut data = Vec::new();
let _len: usize = { input }.read_to_end(&mut data).map_err(|err| {
format_err!("error reading encryption key from fd {}: {}", fd, err)
})?;
Some(KeyWithSource::from_fd(data))
}
};
let master_pubkey = match (master_pubkey_file, master_pubkey_fd) {
(None, None) => None,
(Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
(Some(keyfile), None) => Some(KeyWithSource::from_path(
keyfile.clone(),
file_get_contents(keyfile)?,
)),
(None, Some(fd)) => {
let input = unsafe { std::fs::File::from_raw_fd(fd) };
let mut data = Vec::new();
let _len: usize = { input }
.read_to_end(&mut data)
.map_err(|err| format_err!("error reading master key from fd {}: {}", fd, err))?;
Some(KeyWithSource::from_fd(data))
}
};
let res = match mode {
// no crypt mode, enable encryption if keys are available
None => match (key, master_pubkey) {
// only default keys if available
(None, None) => match key::read_optional_default_encryption_key()? {
None => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
enc_key => {
let master_pubkey = key::read_optional_default_master_pubkey()?;
CryptoParams {
mode: CryptMode::Encrypt,
enc_key,
master_pubkey,
}
},
},
// explicit master key, default enc key needed
(None, master_pubkey) => match key::read_optional_default_encryption_key()? {
None => bail!("--master-pubkey-file/--master-pubkey-fd specified, but no key available"),
enc_key => {
CryptoParams {
mode: CryptMode::Encrypt,
enc_key,
master_pubkey,
}
},
},
// explicit keyfile, maybe default master key
(enc_key, None) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey: key::read_optional_default_master_pubkey()? },
// explicit keyfile and master key
(enc_key, master_pubkey) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey },
},
// explicitly disabled encryption
Some(CryptMode::None) => match (key, master_pubkey) {
// no keys => OK, no encryption
(None, None) => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
// --keyfile and --crypt-mode=none
(Some(_), _) => bail!("--keyfile/--keyfd and --crypt-mode=none are mutually exclusive"),
// --master-pubkey-file and --crypt-mode=none
(_, Some(_)) => bail!("--master-pubkey-file/--master-pubkey-fd and --crypt-mode=none are mutually exclusive"),
},
// explicitly enabled encryption
Some(mode) => match (key, master_pubkey) {
// no key, maybe master key
(None, master_pubkey) => match key::read_optional_default_encryption_key()? {
None => bail!("--crypt-mode without --keyfile and no default key file available"),
enc_key => {
eprintln!("Encrypting with default encryption key!");
let master_pubkey = match master_pubkey {
None => key::read_optional_default_master_pubkey()?,
master_pubkey => master_pubkey,
};
CryptoParams {
mode,
enc_key,
master_pubkey,
}
},
},
// --keyfile and --crypt-mode other than none
(enc_key, master_pubkey) => {
let master_pubkey = match master_pubkey {
None => key::read_optional_default_master_pubkey()?,
master_pubkey => master_pubkey,
};
CryptoParams { mode, enc_key, master_pubkey }
},
},
};
Ok(res)
}
#[test]
// WARNING: there must only be one test for crypto_parameters as the default key handling is not
// safe w.r.t. concurrency
fn test_crypto_parameters_handling() -> Result<(), Error> {
let some_key = vec![1;1];
let default_key = vec![2;1];
let some_master_key = vec![3;1];
let default_master_key = vec![4;1];
let keypath = "./target/testout/keyfile.test";
let master_keypath = "./target/testout/masterkeyfile.test";
let invalid_keypath = "./target/testout/invalid_keyfile.test";
let no_key_res = CryptoParams {
enc_key: None,
master_pubkey: None,
mode: CryptMode::None,
};
let some_key_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: None,
mode: CryptMode::Encrypt,
};
let some_key_some_master_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: Some(KeyWithSource::from_path(
master_keypath.to_string(),
some_master_key.clone(),
)),
mode: CryptMode::Encrypt,
};
let some_key_default_master_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: Some(KeyWithSource::from_default(default_master_key.clone())),
mode: CryptMode::Encrypt,
};
let some_key_sign_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: None,
mode: CryptMode::SignOnly,
};
let default_key_res = CryptoParams {
enc_key: Some(KeyWithSource::from_default(default_key.clone())),
master_pubkey: None,
mode: CryptMode::Encrypt,
};
let default_key_sign_res = CryptoParams {
enc_key: Some(KeyWithSource::from_default(default_key.clone())),
master_pubkey: None,
mode: CryptMode::SignOnly,
};
replace_file(&keypath, &some_key, CreateOptions::default())?;
replace_file(&master_keypath, &some_master_key, CreateOptions::default())?;
// no params, no default key == no key
let res = crypto_parameters(&json!({}));
assert_eq!(res.unwrap(), no_key_res);
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// crypt mode none == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt/sign-only, no keyfile, no default key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// invalid keyfile parameter always errors
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
// now set a default key
unsafe { key::set_test_encryption_key(Ok(Some(default_key.clone()))); }
// and repeat
// no params but default key == default key
let res = crypto_parameters(&json!({}));
assert_eq!(res.unwrap(), default_key_res);
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// crypt mode none == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt/sign-only, no keyfile, default key == default key with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only"}));
assert_eq!(res.unwrap(), default_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt"}));
assert_eq!(res.unwrap(), default_key_res);
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// invalid keyfile parameter always errors
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
// now make default key retrieval error
unsafe { key::set_test_encryption_key(Err(format_err!("test error"))); }
// and repeat
// no params, default key retrieval errors == Error
assert!(crypto_parameters(&json!({})).is_err());
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// crypt mode none == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt/sign-only, no keyfile, default key error == Error
assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// invalid keyfile parameter always errors
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
// now remove default key again
unsafe { key::set_test_encryption_key(Ok(None)); }
// set a default master key
unsafe { key::set_test_default_master_pubkey(Ok(Some(default_master_key.clone()))); }
// and use an explicit master key
assert!(crypto_parameters(&json!({"master-pubkey-file": master_keypath})).is_err());
// just a default == no key
let res = crypto_parameters(&json!({}));
assert_eq!(res.unwrap(), no_key_res);
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": master_keypath}));
assert_eq!(res.unwrap(), some_key_some_master_res);
// same with fallback to default master key
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_default_master_res);
// crypt mode none == error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "master-pubkey-file": master_keypath})).is_err());
// with just default master key == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt without enc key == error
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt", "master-pubkey-file": master_keypath})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath, "master-pubkey-file": master_keypath})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath, "master-pubkey-file": master_keypath}));
assert_eq!(res.unwrap(), some_key_some_master_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_default_master_res);
// invalid master keyfile parameter always errors when a key is passed, even with a valid
// default master key
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "encrypt"})).is_err());
Ok(())
}
#[api(
    input: {
        properties: {
@ -1160,7 +741,7 @@ async fn create_backup(
            );

            let (key, created, fingerprint) =
                decrypt_key(&key_with_source.key, &key::get_encryption_key_password)?;
                decrypt_key(&key_with_source.key, &get_encryption_key_password)?;
            println!("Encryption key fingerprint: {}", fingerprint);

            let crypt_config = CryptConfig::new(key)?;
@ -1510,7 +1091,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
        None => None,
        Some(ref key) => {
            let (key, _, _) =
                decrypt_key(&key.key, &key::get_encryption_key_password).map_err(|err| {
                decrypt_key(&key.key, &get_encryption_key_password).map_err(|err| {
                    eprintln!("{}", format_key_source(&key.source, "encryption"));
                    err
                })?;
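For context on the CatalogWriter change in this file: the catalog is produced by synchronous archive code but uploaded from async tasks, which is why the writer is now wrapped as TokioWriterAdapter<StdChannelWriter>. The following stand-alone sketch (the type and the main function are invented here, not the actual proxmox_backup::tools implementations) shows the underlying idea of a blocking Write implementation that forwards buffers over a channel to a consumer running elsewhere:

use std::io::Write;
use std::sync::mpsc::{sync_channel, SyncSender};

/// Blocking `Write` impl that hands each buffer to a channel, so a synchronous
/// producer (like a catalog encoder) can feed data to another task.
struct ChannelWriter(SyncSender<Vec<u8>>);

impl Write for ChannelWriter {
    fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
        self.0
            .send(buf.to_vec())
            .map_err(|_| std::io::Error::new(std::io::ErrorKind::BrokenPipe, "receiver disconnected"))?;
        Ok(buf.len())
    }

    fn flush(&mut self) -> std::io::Result<()> {
        Ok(())
    }
}

fn main() {
    let (tx, rx) = sync_channel(16);
    let writer_thread = std::thread::spawn(move || {
        let mut writer = ChannelWriter(tx);
        writer.write_all(b"catalog data").unwrap();
        // dropping the writer closes the channel and ends the consumer loop
    });

    // consumer side (in the real code this feeds the chunked upload stream)
    for chunk in rx {
        println!("received {} bytes", chunk.len());
    }
    writer_thread.join().unwrap();
}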

View File

@ -29,6 +29,7 @@ use proxmox_backup::{
    types::{
        Authid,
        DATASTORE_SCHEMA,
        DATASTORE_MAP_LIST_SCHEMA,
        DRIVE_NAME_SCHEMA,
        MEDIA_LABEL_SCHEMA,
        MEDIA_POOL_NAME_SCHEMA,
@ -783,8 +784,8 @@ async fn clean_drive(mut param: Value) -> Result<(), Error> {
    let mut client = connect_to_localhost()?;

    let path = format!("api2/json/tape/drive/{}/clean-drive", drive);
    let path = format!("api2/json/tape/drive/{}/clean", drive);
    let result = client.post(&path, Some(param)).await?;
    let result = client.put(&path, Some(param)).await?;

    view_task_result(&mut client, result, &output_format).await?;
@ -855,7 +856,7 @@ async fn backup(mut param: Value) -> Result<(), Error> {
    input: {
        properties: {
            store: {
                schema: DATASTORE_SCHEMA,
                schema: DATASTORE_MAP_LIST_SCHEMA,
            },
            drive: {
                schema: DRIVE_NAME_SCHEMA,
@ -910,6 +911,11 @@ async fn restore(mut param: Value) -> Result<(), Error> {
                type: bool,
                optional: true,
            },
            scan: {
                description: "Re-read the whole tape to reconstruct the catalog instead of restoring saved versions.",
                type: bool,
                optional: true,
            },
            verbose: {
                description: "Verbose mode - log all found chunks.",
                type: bool,

View File

@ -34,6 +34,8 @@ use crate::{
    connect,
};

use crate::proxmox_client_tools::key_source::get_encryption_key_password;

#[api()]
#[derive(Copy, Clone, Serialize)]
/// Speed test result
@ -152,7 +154,7 @@ pub async fn benchmark(
    let crypt_config = match keyfile {
        None => None,
        Some(path) => {
            let (key, _, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
            let (key, _, _) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
            let crypt_config = CryptConfig::new(key)?;
            Some(Arc::new(crypt_config))
        }

View File

@ -17,7 +17,6 @@ use crate::{
    extract_repository_from_value,
    format_key_source,
    record_repository,
    key::get_encryption_key_password,
    decrypt_key,
    api_datastore_latest_snapshot,
    complete_repository,
@ -38,6 +37,8 @@ use crate::{
    Shell,
};

use crate::proxmox_client_tools::key_source::get_encryption_key_password;

#[api(
    input: {
        properties: {

View File

@ -20,114 +20,10 @@ use proxmox_backup::{
    tools::paperkey::{generate_paper_key, PaperkeyFormat},
};

use crate::proxmox_client_tools::key_source::{
    find_default_encryption_key, find_default_master_pubkey, get_encryption_key_password,
    place_default_encryption_key, place_default_master_pubkey,
};

use crate::KeyWithSource;

pub const DEFAULT_ENCRYPTION_KEY_FILE_NAME: &str = "encryption-key.json";
pub const DEFAULT_MASTER_PUBKEY_FILE_NAME: &str = "master-public.pem";
pub fn find_default_master_pubkey() -> Result<Option<PathBuf>, Error> {
super::find_xdg_file(
DEFAULT_MASTER_PUBKEY_FILE_NAME,
"default master public key file",
)
}
pub fn place_default_master_pubkey() -> Result<PathBuf, Error> {
super::place_xdg_file(
DEFAULT_MASTER_PUBKEY_FILE_NAME,
"default master public key file",
)
}
pub fn find_default_encryption_key() -> Result<Option<PathBuf>, Error> {
super::find_xdg_file(
DEFAULT_ENCRYPTION_KEY_FILE_NAME,
"default encryption key file",
)
}
pub fn place_default_encryption_key() -> Result<PathBuf, Error> {
super::place_xdg_file(
DEFAULT_ENCRYPTION_KEY_FILE_NAME,
"default encryption key file",
)
}
#[cfg(not(test))]
pub(crate) fn read_optional_default_encryption_key() -> Result<Option<KeyWithSource>, Error> {
find_default_encryption_key()?
.map(|path| file_get_contents(path).map(KeyWithSource::from_default))
.transpose()
}
#[cfg(not(test))]
pub(crate) fn read_optional_default_master_pubkey() -> Result<Option<KeyWithSource>, Error> {
find_default_master_pubkey()?
.map(|path| file_get_contents(path).map(KeyWithSource::from_default))
.transpose()
}
#[cfg(test)]
static mut TEST_DEFAULT_ENCRYPTION_KEY: Result<Option<Vec<u8>>, Error> = Ok(None);
#[cfg(test)]
pub(crate) fn read_optional_default_encryption_key() -> Result<Option<KeyWithSource>, Error> {
// not safe when multiple concurrent test cases end up here!
unsafe {
match &TEST_DEFAULT_ENCRYPTION_KEY {
Ok(Some(key)) => Ok(Some(KeyWithSource::from_default(key.clone()))),
Ok(None) => Ok(None),
Err(_) => bail!("test error"),
}
}
}
#[cfg(test)]
// not safe when multiple concurrent test cases end up here!
pub(crate) unsafe fn set_test_encryption_key(value: Result<Option<Vec<u8>>, Error>) {
TEST_DEFAULT_ENCRYPTION_KEY = value;
}
#[cfg(test)]
static mut TEST_DEFAULT_MASTER_PUBKEY: Result<Option<Vec<u8>>, Error> = Ok(None);
#[cfg(test)]
pub(crate) fn read_optional_default_master_pubkey() -> Result<Option<KeyWithSource>, Error> {
// not safe when multiple concurrent test cases end up here!
unsafe {
match &TEST_DEFAULT_MASTER_PUBKEY {
Ok(Some(key)) => Ok(Some(KeyWithSource::from_default(key.clone()))),
Ok(None) => Ok(None),
Err(_) => bail!("test error"),
}
}
}
#[cfg(test)]
// not safe when multiple concurrent test cases end up here!
pub(crate) unsafe fn set_test_default_master_pubkey(value: Result<Option<Vec<u8>>, Error>) {
TEST_DEFAULT_MASTER_PUBKEY = value;
}
pub fn get_encryption_key_password() -> Result<Vec<u8>, Error> {
// fixme: implement other input methods
use std::env::VarError::*;
match std::env::var("PBS_ENCRYPTION_PASSWORD") {
Ok(p) => return Ok(p.as_bytes().to_vec()),
Err(NotUnicode(_)) => bail!("PBS_ENCRYPTION_PASSWORD contains bad characters"),
Err(NotPresent) => {
// Try another method
}
}
// If we're on a TTY, query the user for a password
if tty::stdin_isatty() {
return Ok(tty::read_password("Encryption Key Password: ")?);
}
bail!("no password input mechanism available");
}
#[api(
    input: {

View File

@ -1,5 +1,3 @@
use anyhow::{Context, Error};

mod benchmark;
pub use benchmark::*;
mod mount;
@ -13,29 +11,3 @@ pub use snapshot::*;
pub mod key;
pub fn base_directories() -> Result<xdg::BaseDirectories, Error> {
xdg::BaseDirectories::with_prefix("proxmox-backup").map_err(Error::from)
}
/// Convenience helper for better error messages:
pub fn find_xdg_file(
file_name: impl AsRef<std::path::Path>,
description: &'static str,
) -> Result<Option<std::path::PathBuf>, Error> {
let file_name = file_name.as_ref();
base_directories()
.map(|base| base.find_config_file(file_name))
.with_context(|| format!("error searching for {}", description))
}
pub fn place_xdg_file(
file_name: impl AsRef<std::path::Path>,
description: &'static str,
) -> Result<std::path::PathBuf, Error> {
let file_name = file_name.as_ref();
base_directories()
.and_then(|base| {
base.place_config_file(file_name).map_err(Error::from)
})
.with_context(|| format!("failed to place {} in xdg home", description))
}

View File

@ -43,6 +43,8 @@ use crate::{
    BufferedDynamicReadAt,
};

use crate::proxmox_client_tools::key_source::get_encryption_key_password;

#[sortable]
const API_METHOD_MOUNT: ApiMethod = ApiMethod::new(
    &ApiHandler::Sync(&mount),
@ -182,7 +184,7 @@ async fn mount_do(param: Value, pipe: Option<Fd>) -> Result<Value, Error> {
        None => None,
        Some(path) => {
            println!("Encryption key file: '{:?}'", path);
            let (key, _, fingerprint) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
            let (key, _, fingerprint) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
            println!("Encryption key fingerprint: '{}'", fingerprint);
            Some(Arc::new(CryptConfig::new(key)?))
        }

View File

@ -35,6 +35,8 @@ use crate::{
    record_repository,
};

use crate::proxmox_client_tools::key_source::get_encryption_key_password;

#[api(
    input: {
        properties: {
@ -239,7 +241,7 @@ async fn upload_log(param: Value) -> Result<Value, Error> {
    let crypt_config = match crypto.enc_key {
        None => None,
        Some(key) => {
            let (key, _created, _) = decrypt_key(&key.key, &crate::key::get_encryption_key_password)?;
            let (key, _created, _) = decrypt_key(&key.key, &get_encryption_key_password)?;
            let crypt_config = CryptConfig::new(key)?;
            Some(Arc::new(crypt_config))
        }

View File

@ -0,0 +1,572 @@
use std::convert::TryFrom;
use std::path::PathBuf;
use std::os::unix::io::{FromRawFd, RawFd};
use std::io::Read;
use anyhow::{bail, format_err, Error};
use serde_json::Value;
use proxmox::api::schema::*;
use proxmox::sys::linux::tty;
use proxmox::tools::fs::file_get_contents;
use proxmox_backup::backup::CryptMode;
pub const DEFAULT_ENCRYPTION_KEY_FILE_NAME: &str = "encryption-key.json";
pub const DEFAULT_MASTER_PUBKEY_FILE_NAME: &str = "master-public.pem";
pub const KEYFILE_SCHEMA: Schema =
StringSchema::new("Path to encryption key. All data will be encrypted using this key.")
.schema();
pub const KEYFD_SCHEMA: Schema =
IntegerSchema::new("Pass an encryption key via an already opened file descriptor.")
.minimum(0)
.schema();
pub const MASTER_PUBKEY_FILE_SCHEMA: Schema = StringSchema::new(
"Path to master public key. The encryption key used for a backup will be encrypted using this key and appended to the backup.")
.schema();
pub const MASTER_PUBKEY_FD_SCHEMA: Schema =
IntegerSchema::new("Pass a master public key via an already opened file descriptor.")
.minimum(0)
.schema();
#[derive(Clone, Debug, Eq, PartialEq)]
pub enum KeySource {
DefaultKey,
Fd,
Path(String),
}
pub fn format_key_source(source: &KeySource, key_type: &str) -> String {
match source {
KeySource::DefaultKey => format!("Using default {} key..", key_type),
KeySource::Fd => format!("Using {} key from file descriptor..", key_type),
KeySource::Path(path) => format!("Using {} key from '{}'..", key_type, path),
}
}
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct KeyWithSource {
pub source: KeySource,
pub key: Vec<u8>,
}
impl KeyWithSource {
pub fn from_fd(key: Vec<u8>) -> Self {
Self {
source: KeySource::Fd,
key,
}
}
pub fn from_default(key: Vec<u8>) -> Self {
Self {
source: KeySource::DefaultKey,
key,
}
}
pub fn from_path(path: String, key: Vec<u8>) -> Self {
Self {
source: KeySource::Path(path),
key,
}
}
}
#[derive(Debug, Eq, PartialEq)]
pub struct CryptoParams {
pub mode: CryptMode,
pub enc_key: Option<KeyWithSource>,
// FIXME switch to openssl::rsa::rsa<openssl::pkey::Public> once that is Eq?
pub master_pubkey: Option<KeyWithSource>,
}
pub fn crypto_parameters(param: &Value) -> Result<CryptoParams, Error> {
let keyfile = match param.get("keyfile") {
Some(Value::String(keyfile)) => Some(keyfile),
Some(_) => bail!("bad --keyfile parameter type"),
None => None,
};
let key_fd = match param.get("keyfd") {
Some(Value::Number(key_fd)) => Some(
RawFd::try_from(key_fd
.as_i64()
.ok_or_else(|| format_err!("bad key fd: {:?}", key_fd))?
)
.map_err(|err| format_err!("bad key fd: {:?}: {}", key_fd, err))?
),
Some(_) => bail!("bad --keyfd parameter type"),
None => None,
};
let master_pubkey_file = match param.get("master-pubkey-file") {
Some(Value::String(keyfile)) => Some(keyfile),
Some(_) => bail!("bad --master-pubkey-file parameter type"),
None => None,
};
let master_pubkey_fd = match param.get("master-pubkey-fd") {
Some(Value::Number(key_fd)) => Some(
RawFd::try_from(key_fd
.as_i64()
.ok_or_else(|| format_err!("bad master public key fd: {:?}", key_fd))?
)
.map_err(|err| format_err!("bad public master key fd: {:?}: {}", key_fd, err))?
),
Some(_) => bail!("bad --master-pubkey-fd parameter type"),
None => None,
};
let mode: Option<CryptMode> = match param.get("crypt-mode") {
Some(mode) => Some(serde_json::from_value(mode.clone())?),
None => None,
};
let key = match (keyfile, key_fd) {
(None, None) => None,
(Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
(Some(keyfile), None) => Some(KeyWithSource::from_path(
keyfile.clone(),
file_get_contents(keyfile)?,
)),
(None, Some(fd)) => {
let input = unsafe { std::fs::File::from_raw_fd(fd) };
let mut data = Vec::new();
let _len: usize = { input }.read_to_end(&mut data).map_err(|err| {
format_err!("error reading encryption key from fd {}: {}", fd, err)
})?;
Some(KeyWithSource::from_fd(data))
}
};
let master_pubkey = match (master_pubkey_file, master_pubkey_fd) {
(None, None) => None,
(Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
(Some(keyfile), None) => Some(KeyWithSource::from_path(
keyfile.clone(),
file_get_contents(keyfile)?,
)),
(None, Some(fd)) => {
let input = unsafe { std::fs::File::from_raw_fd(fd) };
let mut data = Vec::new();
let _len: usize = { input }
.read_to_end(&mut data)
.map_err(|err| format_err!("error reading master key from fd {}: {}", fd, err))?;
Some(KeyWithSource::from_fd(data))
}
};
let res = match mode {
// no crypt mode, enable encryption if keys are available
None => match (key, master_pubkey) {
// only default keys if available
(None, None) => match read_optional_default_encryption_key()? {
None => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
enc_key => {
let master_pubkey = read_optional_default_master_pubkey()?;
CryptoParams {
mode: CryptMode::Encrypt,
enc_key,
master_pubkey,
}
},
},
// explicit master key, default enc key needed
(None, master_pubkey) => match read_optional_default_encryption_key()? {
None => bail!("--master-pubkey-file/--master-pubkey-fd specified, but no key available"),
enc_key => {
CryptoParams {
mode: CryptMode::Encrypt,
enc_key,
master_pubkey,
}
},
},
// explicit keyfile, maybe default master key
(enc_key, None) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey: read_optional_default_master_pubkey()? },
// explicit keyfile and master key
(enc_key, master_pubkey) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey },
},
// explicitly disabled encryption
Some(CryptMode::None) => match (key, master_pubkey) {
// no keys => OK, no encryption
(None, None) => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
// --keyfile and --crypt-mode=none
(Some(_), _) => bail!("--keyfile/--keyfd and --crypt-mode=none are mutually exclusive"),
// --master-pubkey-file and --crypt-mode=none
(_, Some(_)) => bail!("--master-pubkey-file/--master-pubkey-fd and --crypt-mode=none are mutually exclusive"),
},
// explicitly enabled encryption
Some(mode) => match (key, master_pubkey) {
// no key, maybe master key
(None, master_pubkey) => match read_optional_default_encryption_key()? {
None => bail!("--crypt-mode without --keyfile and no default key file available"),
enc_key => {
eprintln!("Encrypting with default encryption key!");
let master_pubkey = match master_pubkey {
None => read_optional_default_master_pubkey()?,
master_pubkey => master_pubkey,
};
CryptoParams {
mode,
enc_key,
master_pubkey,
}
},
},
// --keyfile and --crypt-mode other than none
(enc_key, master_pubkey) => {
let master_pubkey = match master_pubkey {
None => read_optional_default_master_pubkey()?,
master_pubkey => master_pubkey,
};
CryptoParams { mode, enc_key, master_pubkey }
},
},
};
Ok(res)
}
pub fn find_default_master_pubkey() -> Result<Option<PathBuf>, Error> {
super::find_xdg_file(
DEFAULT_MASTER_PUBKEY_FILE_NAME,
"default master public key file",
)
}
pub fn place_default_master_pubkey() -> Result<PathBuf, Error> {
super::place_xdg_file(
DEFAULT_MASTER_PUBKEY_FILE_NAME,
"default master public key file",
)
}
pub fn find_default_encryption_key() -> Result<Option<PathBuf>, Error> {
super::find_xdg_file(
DEFAULT_ENCRYPTION_KEY_FILE_NAME,
"default encryption key file",
)
}
pub fn place_default_encryption_key() -> Result<PathBuf, Error> {
super::place_xdg_file(
DEFAULT_ENCRYPTION_KEY_FILE_NAME,
"default encryption key file",
)
}
#[cfg(not(test))]
pub(crate) fn read_optional_default_encryption_key() -> Result<Option<KeyWithSource>, Error> {
find_default_encryption_key()?
.map(|path| file_get_contents(path).map(KeyWithSource::from_default))
.transpose()
}
#[cfg(not(test))]
pub(crate) fn read_optional_default_master_pubkey() -> Result<Option<KeyWithSource>, Error> {
find_default_master_pubkey()?
.map(|path| file_get_contents(path).map(KeyWithSource::from_default))
.transpose()
}
#[cfg(test)]
static mut TEST_DEFAULT_ENCRYPTION_KEY: Result<Option<Vec<u8>>, Error> = Ok(None);
#[cfg(test)]
pub(crate) fn read_optional_default_encryption_key() -> Result<Option<KeyWithSource>, Error> {
// not safe when multiple concurrent test cases end up here!
unsafe {
match &TEST_DEFAULT_ENCRYPTION_KEY {
Ok(Some(key)) => Ok(Some(KeyWithSource::from_default(key.clone()))),
Ok(None) => Ok(None),
Err(_) => bail!("test error"),
}
}
}
#[cfg(test)]
// not safe when multiple concurrent test cases end up here!
pub(crate) unsafe fn set_test_encryption_key(value: Result<Option<Vec<u8>>, Error>) {
TEST_DEFAULT_ENCRYPTION_KEY = value;
}
#[cfg(test)]
static mut TEST_DEFAULT_MASTER_PUBKEY: Result<Option<Vec<u8>>, Error> = Ok(None);
#[cfg(test)]
pub(crate) fn read_optional_default_master_pubkey() -> Result<Option<KeyWithSource>, Error> {
// not safe when multiple concurrent test cases end up here!
unsafe {
match &TEST_DEFAULT_MASTER_PUBKEY {
Ok(Some(key)) => Ok(Some(KeyWithSource::from_default(key.clone()))),
Ok(None) => Ok(None),
Err(_) => bail!("test error"),
}
}
}
#[cfg(test)]
// not safe when multiple concurrent test cases end up here!
pub(crate) unsafe fn set_test_default_master_pubkey(value: Result<Option<Vec<u8>>, Error>) {
TEST_DEFAULT_MASTER_PUBKEY = value;
}
pub fn get_encryption_key_password() -> Result<Vec<u8>, Error> {
// fixme: implement other input methods
use std::env::VarError::*;
match std::env::var("PBS_ENCRYPTION_PASSWORD") {
Ok(p) => return Ok(p.as_bytes().to_vec()),
Err(NotUnicode(_)) => bail!("PBS_ENCRYPTION_PASSWORD contains bad characters"),
Err(NotPresent) => {
// Try another method
}
}
// If we're on a TTY, query the user for a password
if tty::stdin_isatty() {
return Ok(tty::read_password("Encryption Key Password: ")?);
}
bail!("no password input mechanism available");
}
#[test]
// WARNING: there must only be one test for crypto_parameters as the default key handling is not
// safe w.r.t. concurrency
fn test_crypto_parameters_handling() -> Result<(), Error> {
use serde_json::json;
use proxmox::tools::fs::{replace_file, CreateOptions};
let some_key = vec![1;1];
let default_key = vec![2;1];
let some_master_key = vec![3;1];
let default_master_key = vec![4;1];
let keypath = "./target/testout/keyfile.test";
let master_keypath = "./target/testout/masterkeyfile.test";
let invalid_keypath = "./target/testout/invalid_keyfile.test";
let no_key_res = CryptoParams {
enc_key: None,
master_pubkey: None,
mode: CryptMode::None,
};
let some_key_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: None,
mode: CryptMode::Encrypt,
};
let some_key_some_master_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: Some(KeyWithSource::from_path(
master_keypath.to_string(),
some_master_key.clone(),
)),
mode: CryptMode::Encrypt,
};
let some_key_default_master_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: Some(KeyWithSource::from_default(default_master_key.clone())),
mode: CryptMode::Encrypt,
};
let some_key_sign_res = CryptoParams {
enc_key: Some(KeyWithSource::from_path(
keypath.to_string(),
some_key.clone(),
)),
master_pubkey: None,
mode: CryptMode::SignOnly,
};
let default_key_res = CryptoParams {
enc_key: Some(KeyWithSource::from_default(default_key.clone())),
master_pubkey: None,
mode: CryptMode::Encrypt,
};
let default_key_sign_res = CryptoParams {
enc_key: Some(KeyWithSource::from_default(default_key.clone())),
master_pubkey: None,
mode: CryptMode::SignOnly,
};
replace_file(&keypath, &some_key, CreateOptions::default())?;
replace_file(&master_keypath, &some_master_key, CreateOptions::default())?;
// no params, no default key == no key
let res = crypto_parameters(&json!({}));
assert_eq!(res.unwrap(), no_key_res);
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// crypt mode none == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt/sign-only, no keyfile, no default key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// invalid keyfile parameter always errors
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
// now set a default key
unsafe { set_test_encryption_key(Ok(Some(default_key.clone()))); }
// and repeat
// no params but default key == default key
let res = crypto_parameters(&json!({}));
assert_eq!(res.unwrap(), default_key_res);
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// crypt mode none == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt/sign-only, no keyfile, default key == default key with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only"}));
assert_eq!(res.unwrap(), default_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt"}));
assert_eq!(res.unwrap(), default_key_res);
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// invalid keyfile parameter always errors
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
// now make default key retrieval error
unsafe { set_test_encryption_key(Err(format_err!("test error"))); }
// and repeat
// no params, default key retrieval errors == Error
assert!(crypto_parameters(&json!({})).is_err());
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// crypt mode none == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt/sign-only, no keyfile, default key error == Error
assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_sign_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_res);
// invalid keyfile parameter always errors
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
// now remove default key again
unsafe { set_test_encryption_key(Ok(None)); }
// set a default master key
unsafe { set_test_default_master_pubkey(Ok(Some(default_master_key.clone()))); }
// and use an explicit master key
assert!(crypto_parameters(&json!({"master-pubkey-file": master_keypath})).is_err());
// just a default == no key
let res = crypto_parameters(&json!({}));
assert_eq!(res.unwrap(), no_key_res);
// keyfile param == key from keyfile
let res = crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": master_keypath}));
assert_eq!(res.unwrap(), some_key_some_master_res);
// same with fallback to default master key
let res = crypto_parameters(&json!({"keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_default_master_res);
// crypt mode none == error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "master-pubkey-file": master_keypath})).is_err());
// with just default master key == no key
let res = crypto_parameters(&json!({"crypt-mode": "none"}));
assert_eq!(res.unwrap(), no_key_res);
// crypt mode encrypt without enc key == error
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt", "master-pubkey-file": master_keypath})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
// crypt mode none with explicit key == Error
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath, "master-pubkey-file": master_keypath})).is_err());
assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
// crypt mode encrypt with keyfile == key from keyfile with correct mode
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath, "master-pubkey-file": master_keypath}));
assert_eq!(res.unwrap(), some_key_some_master_res);
let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
assert_eq!(res.unwrap(), some_key_default_master_res);
// invalid master keyfile parameter always errors when a key is passed, even with a valid
// default master key
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath})).is_err());
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "none"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "sign-only"})).is_err());
assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "encrypt"})).is_err());
Ok(())
}

View File

@ -1,8 +1,7 @@
//! Shared tools useful for common CLI clients.
use std::collections::HashMap;

use anyhow::{bail, format_err, Error};
use anyhow::{bail, format_err, Context, Error};
use serde_json::{json, Value};
use xdg::BaseDirectories;
@ -17,6 +16,8 @@ use proxmox_backup::backup::BackupDir;
use proxmox_backup::client::*;
use proxmox_backup::tools;

pub mod key_source;

const ENV_VAR_PBS_FINGERPRINT: &str = "PBS_FINGERPRINT";
const ENV_VAR_PBS_PASSWORD: &str = "PBS_PASSWORD";
@ -25,24 +26,6 @@ pub const REPO_URL_SCHEMA: Schema = StringSchema::new("Repository URL.")
    .max_length(256)
    .schema();
pub const KEYFILE_SCHEMA: Schema =
StringSchema::new("Path to encryption key. All data will be encrypted using this key.")
.schema();
pub const KEYFD_SCHEMA: Schema =
IntegerSchema::new("Pass an encryption key via an already opened file descriptor.")
.minimum(0)
.schema();
pub const MASTER_PUBKEY_FILE_SCHEMA: Schema = StringSchema::new(
"Path to master public key. The encryption key used for a backup will be encrypted using this key and appended to the backup.")
.schema();
pub const MASTER_PUBKEY_FD_SCHEMA: Schema =
IntegerSchema::new("Pass a master public key via an already opened file descriptor.")
.minimum(0)
.schema();
pub const CHUNK_SIZE_SCHEMA: Schema = IntegerSchema::new("Chunk size in KB. Must be a power of 2.")
    .minimum(64)
    .maximum(4096)
@ -364,3 +347,28 @@ pub fn complete_backup_source(arg: &str, param: &HashMap<String, String>) -> Vec
    result
}
pub fn base_directories() -> Result<xdg::BaseDirectories, Error> {
xdg::BaseDirectories::with_prefix("proxmox-backup").map_err(Error::from)
}
/// Convenience helper for better error messages:
pub fn find_xdg_file(
file_name: impl AsRef<std::path::Path>,
description: &'static str,
) -> Result<Option<std::path::PathBuf>, Error> {
let file_name = file_name.as_ref();
base_directories()
.map(|base| base.find_config_file(file_name))
.with_context(|| format!("error searching for {}", description))
}
pub fn place_xdg_file(
file_name: impl AsRef<std::path::Path>,
description: &'static str,
) -> Result<std::path::PathBuf, Error> {
let file_name = file_name.as_ref();
base_directories()
.and_then(|base| base.place_config_file(file_name).map_err(Error::from))
.with_context(|| format!("failed to place {} in xdg home", description))
}
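As a usage illustration for the two XDG helpers added above (hypothetical caller code, not part of this changeset): the key_source module builds its default-key lookups on exactly this pattern, searching the proxmox-backup config directory first and only placing a new file when one has to be written.

// Hypothetical caller of the helpers above; the file name and description are
// only examples, the real defaults live in proxmox_client_tools::key_source.
fn locate_or_place_key_file() -> Result<std::path::PathBuf, anyhow::Error> {
    if let Some(path) = find_xdg_file("encryption-key.json", "default encryption key file")? {
        // an existing key file was found under the XDG config directory
        return Ok(path);
    }
    // no key yet: compute (and create directories for) the place to store one
    place_xdg_file("encryption-key.json", "default encryption key file")
}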

View File

@ -1,12 +1,12 @@
use std::collections::HashSet;
use std::os::unix::fs::OpenOptionsExt;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};

use anyhow::{bail, format_err, Error};
use futures::*;
use futures::stream::Stream;
use futures::future::AbortHandle;
use futures::stream::Stream;
use futures::*;
use serde_json::{json, Value};
use tokio::io::AsyncReadExt;
use tokio::sync::{mpsc, oneshot};
@ -14,11 +14,11 @@ use tokio_stream::wrappers::ReceiverStream;
use proxmox::tools::digest_to_hex;

use super::merge_known_chunks::{MergedChunkInfo, MergeKnownChunks};
use super::merge_known_chunks::{MergeKnownChunks, MergedChunkInfo};

use crate::backup::*;
use crate::tools::format::HumanByte;

use super::{HttpClient, H2Client};
use super::{H2Client, HttpClient};

pub struct BackupWriter {
    h2: H2Client,
@ -28,7 +28,6 @@ pub struct BackupWriter {
}

impl Drop for BackupWriter {
    fn drop(&mut self) {
        self.abort.abort();
    }
@ -48,13 +47,32 @@ pub struct UploadOptions {
    pub fixed_size: Option<u64>,
}

struct UploadStats {
    chunk_count: usize,
    chunk_reused: usize,
    size: usize,
    size_reused: usize,
    size_compressed: usize,
    duration: std::time::Duration,
    csum: [u8; 32],
}

type UploadQueueSender = mpsc::Sender<(MergedChunkInfo, Option<h2::client::ResponseFuture>)>;
type UploadResultReceiver = oneshot::Receiver<Result<(), Error>>;
impl BackupWriter {
    fn new(h2: H2Client, abort: AbortHandle, crypt_config: Option<Arc<CryptConfig>>, verbose: bool) -> Arc<Self> {
        Arc::new(Self { h2, abort, crypt_config, verbose })
    }
    fn new(
        h2: H2Client,
        abort: AbortHandle,
        crypt_config: Option<Arc<CryptConfig>>,
        verbose: bool,
    ) -> Arc<Self> {
        Arc::new(Self {
            h2,
            abort,
            crypt_config,
            verbose,
        })
    }
    // FIXME: extract into (flattened) parameter struct?
@ -67,9 +85,8 @@ impl BackupWriter {
        backup_id: &str,
        backup_time: i64,
        debug: bool,
        benchmark: bool
        benchmark: bool,
    ) -> Result<Arc<BackupWriter>, Error> {
        let param = json!({
            "backup-type": backup_type,
            "backup-id": backup_id,
@ -80,34 +97,30 @@ impl BackupWriter {
        });

        let req = HttpClient::request_builder(
            client.server(), client.port(), "GET", "/api2/json/backup", Some(param)).unwrap();
        let req = HttpClient::request_builder(
            client.server(),
            client.port(),
            "GET",
            "/api2/json/backup",
            Some(param),
        )
        .unwrap();

        let (h2, abort) = client.start_h2_connection(req, String::from(PROXMOX_BACKUP_PROTOCOL_ID_V1!())).await?;
        let (h2, abort) = client
            .start_h2_connection(req, String::from(PROXMOX_BACKUP_PROTOCOL_ID_V1!()))
            .await?;

        Ok(BackupWriter::new(h2, abort, crypt_config, debug))
    }
    pub async fn get(
        &self,
        path: &str,
        param: Option<Value>,
    ) -> Result<Value, Error> {
    pub async fn get(&self, path: &str, param: Option<Value>) -> Result<Value, Error> {
        self.h2.get(path, param).await
    }

    pub async fn put(
        &self,
        path: &str,
        param: Option<Value>,
    ) -> Result<Value, Error> {
    pub async fn put(&self, path: &str, param: Option<Value>) -> Result<Value, Error> {
        self.h2.put(path, param).await
    }

    pub async fn post(
        &self,
        path: &str,
        param: Option<Value>,
    ) -> Result<Value, Error> {
    pub async fn post(&self, path: &str, param: Option<Value>) -> Result<Value, Error> {
        self.h2.post(path, param).await
    }
@ -118,7 +131,9 @@ impl BackupWriter {
        content_type: &str,
        data: Vec<u8>,
    ) -> Result<Value, Error> {
        self.h2.upload("POST", path, param, content_type, data).await
        self.h2
            .upload("POST", path, param, content_type, data)
            .await
    }
    pub async fn send_upload_request(
@ -129,9 +144,13 @@ impl BackupWriter {
        content_type: &str,
        data: Vec<u8>,
    ) -> Result<h2::client::ResponseFuture, Error> {
        let request = H2Client::request_builder("localhost", method, path, param, Some(content_type)).unwrap();
        let response_future = self.h2.send_request(request, Some(bytes::Bytes::from(data.clone()))).await?;
        let request =
            H2Client::request_builder("localhost", method, path, param, Some(content_type))
                .unwrap();
        let response_future = self
            .h2
            .send_request(request, Some(bytes::Bytes::from(data.clone())))
            .await?;

        Ok(response_future)
    }
@ -171,7 +190,16 @@ impl BackupWriter {
let csum = openssl::sha::sha256(&raw_data); let csum = openssl::sha::sha256(&raw_data);
let param = json!({"encoded-size": raw_data.len(), "file-name": file_name }); let param = json!({"encoded-size": raw_data.len(), "file-name": file_name });
let size = raw_data.len() as u64; let size = raw_data.len() as u64;
let _value = self.h2.upload("POST", "blob", Some(param), "application/octet-stream", raw_data).await?; let _value = self
.h2
.upload(
"POST",
"blob",
Some(param),
"application/octet-stream",
raw_data,
)
.await?;
Ok(BackupStats { size, csum }) Ok(BackupStats { size, csum })
} }
@ -184,7 +212,9 @@ impl BackupWriter {
let blob = match (options.encrypt, &self.crypt_config) { let blob = match (options.encrypt, &self.crypt_config) {
(false, _) => DataBlob::encode(&data, None, options.compress)?, (false, _) => DataBlob::encode(&data, None, options.compress)?,
(true, None) => bail!("requested encryption without a crypt config"), (true, None) => bail!("requested encryption without a crypt config"),
(true, Some(crypt_config)) => DataBlob::encode(&data, Some(crypt_config), options.compress)?, (true, Some(crypt_config)) => {
DataBlob::encode(&data, Some(crypt_config), options.compress)?
}
}; };
let raw_data = blob.into_inner(); let raw_data = blob.into_inner();
@ -192,7 +222,16 @@ impl BackupWriter {
let csum = openssl::sha::sha256(&raw_data); let csum = openssl::sha::sha256(&raw_data);
let param = json!({"encoded-size": size, "file-name": file_name }); let param = json!({"encoded-size": size, "file-name": file_name });
let _value = self.h2.upload("POST", "blob", Some(param), "application/octet-stream", raw_data).await?; let _value = self
.h2
.upload(
"POST",
"blob",
Some(param),
"application/octet-stream",
raw_data,
)
.await?;
Ok(BackupStats { size, csum }) Ok(BackupStats { size, csum })
} }
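A hedged usage sketch for upload_blob_from_data: `writer` stands for an already-connected BackupWriter, and UploadOptions is assumed to implement Default (only the compress, encrypt and fixed_size fields are visible in this diff):

let options = UploadOptions {
    compress: true,
    encrypt: false,
    ..UploadOptions::default()
};
// the data is compressed (and optionally encrypted) into a DataBlob before upload
let stats = writer
    .upload_blob_from_data(b"example blob".to_vec(), "example.blob", options)
    .await?;
println!(
    "uploaded {} bytes, csum {}",
    stats.size,
    proxmox::tools::digest_to_hex(&stats.csum)
);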
@ -202,7 +241,6 @@ impl BackupWriter {
file_name: &str, file_name: &str,
options: UploadOptions, options: UploadOptions,
) -> Result<BackupStats, Error> { ) -> Result<BackupStats, Error> {
let src_path = src_path.as_ref(); let src_path = src_path.as_ref();
let mut file = tokio::fs::File::open(src_path) let mut file = tokio::fs::File::open(src_path)
@ -215,7 +253,8 @@ impl BackupWriter {
.await .await
.map_err(|err| format_err!("unable to read file {:?} - {}", src_path, err))?; .map_err(|err| format_err!("unable to read file {:?} - {}", src_path, err))?;
self.upload_blob_from_data(contents, file_name, options).await self.upload_blob_from_data(contents, file_name, options)
.await
} }
pub async fn upload_stream( pub async fn upload_stream(
@ -245,72 +284,118 @@ impl BackupWriter {
// try, but ignore errors // try, but ignore errors
match archive_type(archive_name) { match archive_type(archive_name) {
Ok(ArchiveType::FixedIndex) => { Ok(ArchiveType::FixedIndex) => {
let _ = self.download_previous_fixed_index(archive_name, &manifest, known_chunks.clone()).await; let _ = self
.download_previous_fixed_index(
archive_name,
&manifest,
known_chunks.clone(),
)
.await;
} }
Ok(ArchiveType::DynamicIndex) => { Ok(ArchiveType::DynamicIndex) => {
let _ = self.download_previous_dynamic_index(archive_name, &manifest, known_chunks.clone()).await; let _ = self
.download_previous_dynamic_index(
archive_name,
&manifest,
known_chunks.clone(),
)
.await;
} }
_ => { /* do nothing */ } _ => { /* do nothing */ }
} }
} }
let wid = self.h2.post(&index_path, Some(param)).await?.as_u64().unwrap(); let wid = self
.h2
.post(&index_path, Some(param))
.await?
.as_u64()
.unwrap();
let (chunk_count, chunk_reused, size, size_reused, duration, csum) = let upload_stats = Self::upload_chunk_info_stream(
Self::upload_chunk_info_stream(
self.h2.clone(), self.h2.clone(),
wid, wid,
stream, stream,
&prefix, &prefix,
known_chunks.clone(), known_chunks.clone(),
if options.encrypt { self.crypt_config.clone() } else { None }, if options.encrypt {
self.crypt_config.clone()
} else {
None
},
options.compress, options.compress,
self.verbose, self.verbose,
) )
.await?; .await?;
let uploaded = size - size_reused; let size_dirty = upload_stats.size - upload_stats.size_reused;
let vsize_h: HumanByte = size.into(); let size: HumanByte = upload_stats.size.into();
let archive = if self.verbose { let archive = if self.verbose {
archive_name.to_string() archive_name.to_string()
} else { } else {
crate::tools::format::strip_server_file_extension(archive_name) crate::tools::format::strip_server_file_extension(archive_name)
}; };
if archive_name != CATALOG_NAME { if archive_name != CATALOG_NAME {
let speed: HumanByte = ((uploaded * 1_000_000) / (duration.as_micros() as usize)).into(); let speed: HumanByte =
let uploaded: HumanByte = uploaded.into(); ((size_dirty * 1_000_000) / (upload_stats.duration.as_micros() as usize)).into();
println!("{}: had to upload {} of {} in {:.2}s, average speed {}/s).", archive, uploaded, vsize_h, duration.as_secs_f64(), speed); let size_dirty: HumanByte = size_dirty.into();
let size_compressed: HumanByte = upload_stats.size_compressed.into();
println!(
"{}: had to backup {} of {} (compressed {}) in {:.2}s",
archive,
size_dirty,
size,
size_compressed,
upload_stats.duration.as_secs_f64()
);
println!("{}: average backup speed: {}/s", archive, speed);
} else { } else {
println!("Uploaded backup catalog ({})", vsize_h); println!("Uploaded backup catalog ({})", size);
} }
if size_reused > 0 && size > 1024*1024 { if upload_stats.size_reused > 0 && upload_stats.size > 1024 * 1024 {
let reused_percent = size_reused as f64 * 100. / size as f64; let reused_percent = upload_stats.size_reused as f64 * 100. / upload_stats.size as f64;
let reused: HumanByte = size_reused.into(); let reused: HumanByte = upload_stats.size_reused.into();
println!("{}: backup was done incrementally, reused {} ({:.1}%)", archive, reused, reused_percent); println!(
"{}: backup was done incrementally, reused {} ({:.1}%)",
archive, reused, reused_percent
);
} }
if self.verbose && chunk_count > 0 { if self.verbose && upload_stats.chunk_count > 0 {
println!("{}: Reused {} from {} chunks.", archive, chunk_reused, chunk_count); println!(
println!("{}: Average chunk size was {}.", archive, HumanByte::from(size/chunk_count)); "{}: Reused {} from {} chunks.",
println!("{}: Average time per request: {} microseconds.", archive, (duration.as_micros())/(chunk_count as u128)); archive, upload_stats.chunk_reused, upload_stats.chunk_count
);
println!(
"{}: Average chunk size was {}.",
archive,
HumanByte::from(upload_stats.size / upload_stats.chunk_count)
);
println!(
"{}: Average time per request: {} microseconds.",
archive,
(upload_stats.duration.as_micros()) / (upload_stats.chunk_count as u128)
);
} }
let param = json!({ let param = json!({
"wid": wid , "wid": wid ,
"chunk-count": chunk_count, "chunk-count": upload_stats.chunk_count,
"size": size, "size": upload_stats.size,
"csum": proxmox::tools::digest_to_hex(&csum), "csum": proxmox::tools::digest_to_hex(&upload_stats.csum),
}); });
let _value = self.h2.post(&close_path, Some(param)).await?; let _value = self.h2.post(&close_path, Some(param)).await?;
Ok(BackupStats { Ok(BackupStats {
size: size as u64, size: upload_stats.size as u64,
csum, csum: upload_stats.csum,
}) })
} }
fn response_queue(verbose: bool) -> ( fn response_queue(
verbose: bool,
) -> (
mpsc::Sender<h2::client::ResponseFuture>, mpsc::Sender<h2::client::ResponseFuture>,
oneshot::Receiver<Result<(), Error>> oneshot::Receiver<Result<(), Error>>,
) { ) {
let (verify_queue_tx, verify_queue_rx) = mpsc::channel(100); let (verify_queue_tx, verify_queue_rx) = mpsc::channel(100);
let (verify_result_tx, verify_result_rx) = oneshot::channel(); let (verify_result_tx, verify_result_rx) = oneshot::channel();
@ -336,12 +421,16 @@ impl BackupWriter {
response response
.map_err(Error::from) .map_err(Error::from)
.and_then(H2Client::h2api_response) .and_then(H2Client::h2api_response)
.map_ok(move |result| if verbose { println!("RESPONSE: {:?}", result) }) .map_ok(move |result| {
if verbose {
println!("RESPONSE: {:?}", result)
}
})
.map_err(|err| format_err!("pipelined request failed: {}", err)) .map_err(|err| format_err!("pipelined request failed: {}", err))
}) })
.map(|result| { .map(|result| {
let _ignore_closed_channel = verify_result_tx.send(result); let _ignore_closed_channel = verify_result_tx.send(result);
}) }),
); );
(verify_queue_tx, verify_result_rx) (verify_queue_tx, verify_result_rx)
@ -420,7 +509,6 @@ impl BackupWriter {
manifest: &BackupManifest, manifest: &BackupManifest,
known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>, known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
) -> Result<FixedIndexReader, Error> { ) -> Result<FixedIndexReader, Error> {
let mut tmpfile = std::fs::OpenOptions::new() let mut tmpfile = std::fs::OpenOptions::new()
.write(true) .write(true)
.read(true) .read(true)
@ -428,10 +516,13 @@ impl BackupWriter {
.open("/tmp")?; .open("/tmp")?;
let param = json!({ "archive-name": archive_name }); let param = json!({ "archive-name": archive_name });
self.h2.download("previous", Some(param), &mut tmpfile).await?; self.h2
.download("previous", Some(param), &mut tmpfile)
.await?;
let index = FixedIndexReader::new(tmpfile) let index = FixedIndexReader::new(tmpfile).map_err(|err| {
.map_err(|err| format_err!("unable to read fixed index '{}' - {}", archive_name, err))?; format_err!("unable to read fixed index '{}' - {}", archive_name, err)
})?;
// Note: do not use values stored in index (not trusted) - instead, compute them again // Note: do not use values stored in index (not trusted) - instead, compute them again
let (csum, size) = index.compute_csum(); let (csum, size) = index.compute_csum();
manifest.verify_file(archive_name, &csum, size)?; manifest.verify_file(archive_name, &csum, size)?;
@ -443,7 +534,11 @@ impl BackupWriter {
} }
if self.verbose { if self.verbose {
println!("{}: known chunks list length is {}", archive_name, index.index_count()); println!(
"{}: known chunks list length is {}",
archive_name,
index.index_count()
);
} }
Ok(index) Ok(index)
@ -455,7 +550,6 @@ impl BackupWriter {
manifest: &BackupManifest, manifest: &BackupManifest,
known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>, known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
) -> Result<DynamicIndexReader, Error> { ) -> Result<DynamicIndexReader, Error> {
let mut tmpfile = std::fs::OpenOptions::new() let mut tmpfile = std::fs::OpenOptions::new()
.write(true) .write(true)
.read(true) .read(true)
@ -463,10 +557,13 @@ impl BackupWriter {
.open("/tmp")?; .open("/tmp")?;
let param = json!({ "archive-name": archive_name }); let param = json!({ "archive-name": archive_name });
self.h2.download("previous", Some(param), &mut tmpfile).await?; self.h2
.download("previous", Some(param), &mut tmpfile)
.await?;
let index = DynamicIndexReader::new(tmpfile) let index = DynamicIndexReader::new(tmpfile).map_err(|err| {
.map_err(|err| format_err!("unable to read dynmamic index '{}' - {}", archive_name, err))?; format_err!("unable to read dynmamic index '{}' - {}", archive_name, err)
})?;
// Note: do not use values stored in index (not trusted) - instead, compute them again // Note: do not use values stored in index (not trusted) - instead, compute them again
let (csum, size) = index.compute_csum(); let (csum, size) = index.compute_csum();
manifest.verify_file(archive_name, &csum, size)?; manifest.verify_file(archive_name, &csum, size)?;
@ -478,7 +575,11 @@ impl BackupWriter {
} }
if self.verbose { if self.verbose {
println!("{}: known chunks list length is {}", archive_name, index.index_count()); println!(
"{}: known chunks list length is {}",
archive_name,
index.index_count()
);
} }
Ok(index) Ok(index)
@ -487,23 +588,29 @@ impl BackupWriter {
/// Retrieve backup time of last backup /// Retrieve backup time of last backup
pub async fn previous_backup_time(&self) -> Result<Option<i64>, Error> { pub async fn previous_backup_time(&self) -> Result<Option<i64>, Error> {
let data = self.h2.get("previous_backup_time", None).await?; let data = self.h2.get("previous_backup_time", None).await?;
serde_json::from_value(data) serde_json::from_value(data).map_err(|err| {
.map_err(|err| format_err!("Failed to parse backup time value returned by server - {}", err)) format_err!(
"Failed to parse backup time value returned by server - {}",
err
)
})
} }
/// Download backup manifest (index.json) of last backup /// Download backup manifest (index.json) of last backup
pub async fn download_previous_manifest(&self) -> Result<BackupManifest, Error> { pub async fn download_previous_manifest(&self) -> Result<BackupManifest, Error> {
let mut raw_data = Vec::with_capacity(64 * 1024); let mut raw_data = Vec::with_capacity(64 * 1024);
let param = json!({ "archive-name": MANIFEST_BLOB_NAME }); let param = json!({ "archive-name": MANIFEST_BLOB_NAME });
self.h2.download("previous", Some(param), &mut raw_data).await?; self.h2
.download("previous", Some(param), &mut raw_data)
.await?;
let blob = DataBlob::load_from_reader(&mut &raw_data[..])?; let blob = DataBlob::load_from_reader(&mut &raw_data[..])?;
// no expected digest available // no expected digest available
let data = blob.decode(self.crypt_config.as_ref().map(Arc::as_ref), None)?; let data = blob.decode(self.crypt_config.as_ref().map(Arc::as_ref), None)?;
let manifest = BackupManifest::from_data(&data[..], self.crypt_config.as_ref().map(Arc::as_ref))?; let manifest =
BackupManifest::from_data(&data[..], self.crypt_config.as_ref().map(Arc::as_ref))?;
Ok(manifest) Ok(manifest)
} }
@ -521,8 +628,7 @@ impl BackupWriter {
crypt_config: Option<Arc<CryptConfig>>, crypt_config: Option<Arc<CryptConfig>>,
compress: bool, compress: bool,
verbose: bool, verbose: bool,
) -> impl Future<Output = Result<(usize, usize, usize, usize, std::time::Duration, [u8; 32]), Error>> { ) -> impl Future<Output = Result<UploadStats, Error>> {
let total_chunks = Arc::new(AtomicUsize::new(0)); let total_chunks = Arc::new(AtomicUsize::new(0));
let total_chunks2 = total_chunks.clone(); let total_chunks2 = total_chunks.clone();
let known_chunk_count = Arc::new(AtomicUsize::new(0)); let known_chunk_count = Arc::new(AtomicUsize::new(0));
@ -530,6 +636,8 @@ impl BackupWriter {
let stream_len = Arc::new(AtomicUsize::new(0)); let stream_len = Arc::new(AtomicUsize::new(0));
let stream_len2 = stream_len.clone(); let stream_len2 = stream_len.clone();
let compressed_stream_len = Arc::new(AtomicU64::new(0));
let compressed_stream_len2 = compressed_stream_len.clone();
let reused_len = Arc::new(AtomicUsize::new(0)); let reused_len = Arc::new(AtomicUsize::new(0));
let reused_len2 = reused_len.clone(); let reused_len2 = reused_len.clone();
@ -547,14 +655,12 @@ impl BackupWriter {
stream stream
.and_then(move |data| { .and_then(move |data| {
let chunk_len = data.len(); let chunk_len = data.len();
total_chunks.fetch_add(1, Ordering::SeqCst); total_chunks.fetch_add(1, Ordering::SeqCst);
let offset = stream_len.fetch_add(chunk_len, Ordering::SeqCst) as u64; let offset = stream_len.fetch_add(chunk_len, Ordering::SeqCst) as u64;
let mut chunk_builder = DataChunkBuilder::new(data.as_ref()) let mut chunk_builder = DataChunkBuilder::new(data.as_ref()).compress(compress);
.compress(compress);
if let Some(ref crypt_config) = crypt_config { if let Some(ref crypt_config) = crypt_config {
chunk_builder = chunk_builder.crypt_config(crypt_config); chunk_builder = chunk_builder.crypt_config(crypt_config);
@ -568,7 +674,9 @@ impl BackupWriter {
let chunk_end = offset + chunk_len as u64; let chunk_end = offset + chunk_len as u64;
if !is_fixed_chunk_size { csum.update(&chunk_end.to_le_bytes()); } if !is_fixed_chunk_size {
csum.update(&chunk_end.to_le_bytes());
}
csum.update(digest); csum.update(digest);
let chunk_is_known = known_chunks.contains(digest); let chunk_is_known = known_chunks.contains(digest);
@ -577,16 +685,17 @@ impl BackupWriter {
reused_len.fetch_add(chunk_len, Ordering::SeqCst); reused_len.fetch_add(chunk_len, Ordering::SeqCst);
future::ok(MergedChunkInfo::Known(vec![(offset, *digest)])) future::ok(MergedChunkInfo::Known(vec![(offset, *digest)]))
} else { } else {
let compressed_stream_len2 = compressed_stream_len.clone();
known_chunks.insert(*digest); known_chunks.insert(*digest);
future::ready(chunk_builder future::ready(chunk_builder.build().map(move |(chunk, digest)| {
.build() compressed_stream_len2.fetch_add(chunk.raw_size(), Ordering::SeqCst);
.map(move |(chunk, digest)| MergedChunkInfo::New(ChunkInfo { MergedChunkInfo::New(ChunkInfo {
chunk, chunk,
digest, digest,
chunk_len: chunk_len as u64, chunk_len: chunk_len as u64,
offset, offset,
})
})) }))
)
} }
}) })
.merge_known_chunks() .merge_known_chunks()
@ -614,20 +723,28 @@ impl BackupWriter {
}); });
let ct = "application/octet-stream"; let ct = "application/octet-stream";
let request = H2Client::request_builder("localhost", "POST", &upload_chunk_path, Some(param), Some(ct)).unwrap(); let request = H2Client::request_builder(
"localhost",
"POST",
&upload_chunk_path,
Some(param),
Some(ct),
)
.unwrap();
let upload_data = Some(bytes::Bytes::from(chunk_data)); let upload_data = Some(bytes::Bytes::from(chunk_data));
let new_info = MergedChunkInfo::Known(vec![(offset, digest)]); let new_info = MergedChunkInfo::Known(vec![(offset, digest)]);
future::Either::Left(h2 future::Either::Left(h2.send_request(request, upload_data).and_then(
.send_request(request, upload_data) move |response| async move {
.and_then(move |response| async move {
upload_queue upload_queue
.send((new_info, Some(response))) .send((new_info, Some(response)))
.await .await
.map_err(|err| format_err!("failed to send to upload queue: {}", err)) .map_err(|err| {
format_err!("failed to send to upload queue: {}", err)
}) })
) },
))
} else { } else {
future::Either::Right(async move { future::Either::Right(async move {
upload_queue upload_queue
@ -637,26 +754,32 @@ impl BackupWriter {
}) })
} }
}) })
.then(move |result| async move { .then(move |result| async move { upload_result.await?.and(result) }.boxed())
upload_result.await?.and(result)
}.boxed())
.and_then(move |_| { .and_then(move |_| {
let duration = start_time.elapsed(); let duration = start_time.elapsed();
let total_chunks = total_chunks2.load(Ordering::SeqCst); let chunk_count = total_chunks2.load(Ordering::SeqCst);
let known_chunk_count = known_chunk_count2.load(Ordering::SeqCst); let chunk_reused = known_chunk_count2.load(Ordering::SeqCst);
let stream_len = stream_len2.load(Ordering::SeqCst); let size = stream_len2.load(Ordering::SeqCst);
let reused_len = reused_len2.load(Ordering::SeqCst); let size_reused = reused_len2.load(Ordering::SeqCst);
let size_compressed = compressed_stream_len2.load(Ordering::SeqCst) as usize;
let mut guard = index_csum_2.lock().unwrap(); let mut guard = index_csum_2.lock().unwrap();
let csum = guard.take().unwrap().finish(); let csum = guard.take().unwrap().finish();
futures::future::ok((total_chunks, known_chunk_count, stream_len, reused_len, duration, csum)) futures::future::ok(UploadStats {
chunk_count,
chunk_reused,
size,
size_reused,
size_compressed,
duration,
csum,
})
}) })
} }
/// Upload speed test - prints result to stderr /// Upload speed test - prints result to stderr
pub async fn upload_speedtest(&self, verbose: bool) -> Result<f64, Error> { pub async fn upload_speedtest(&self, verbose: bool) -> Result<f64, Error> {
let mut data = vec![]; let mut data = vec![];
// generate pseudo random byte sequence // generate pseudo random byte sequence
for i in 0..1024 * 1024 { for i in 0..1024 * 1024 {
@ -680,9 +803,15 @@ impl BackupWriter {
break; break;
} }
if verbose { eprintln!("send test data ({} bytes)", data.len()); } if verbose {
let request = H2Client::request_builder("localhost", "POST", "speedtest", None, None).unwrap(); eprintln!("send test data ({} bytes)", data.len());
let request_future = self.h2.send_request(request, Some(bytes::Bytes::from(data.clone()))).await?; }
let request =
H2Client::request_builder("localhost", "POST", "speedtest", None, None).unwrap();
let request_future = self
.h2
.send_request(request, Some(bytes::Bytes::from(data.clone())))
.await?;
upload_queue.send(request_future).await?; upload_queue.send(request_future).await?;
} }
@ -691,9 +820,16 @@ impl BackupWriter {
let _ = upload_result.await?; let _ = upload_result.await?;
eprintln!("Uploaded {} chunks in {} seconds.", repeat, start_time.elapsed().as_secs()); eprintln!(
"Uploaded {} chunks in {} seconds.",
repeat,
start_time.elapsed().as_secs()
);
let speed = ((item_len * (repeat as usize)) as f64) / start_time.elapsed().as_secs_f64(); let speed = ((item_len * (repeat as usize)) as f64) / start_time.elapsed().as_secs_f64();
eprintln!("Time per request: {} microseconds.", (start_time.elapsed().as_micros())/(repeat as u128)); eprintln!(
"Time per request: {} microseconds.",
(start_time.elapsed().as_micros()) / (repeat as u128)
);
Ok(speed) Ok(speed)
} }


@ -13,6 +13,10 @@ use nix::fcntl::OFlag;
use nix::sys::stat::Mode; use nix::sys::stat::Mode;
use crate::backup::CatalogWriter; use crate::backup::CatalogWriter;
use crate::tools::{
StdChannelWriter,
TokioWriterAdapter,
};
/// Stream implementation to encode and upload .pxar archives. /// Stream implementation to encode and upload .pxar archives.
/// ///
@ -45,10 +49,10 @@ impl PxarBackupStream {
let error = Arc::new(Mutex::new(None)); let error = Arc::new(Mutex::new(None));
let error2 = Arc::clone(&error); let error2 = Arc::clone(&error);
let handler = async move { let handler = async move {
let writer = std::io::BufWriter::with_capacity( let writer = TokioWriterAdapter::new(std::io::BufWriter::with_capacity(
buffer_size, buffer_size,
crate::tools::StdChannelWriter::new(tx), StdChannelWriter::new(tx),
); ));
let verbose = options.verbose; let verbose = options.verbose;


@ -12,13 +12,12 @@ use hyper::client::Client;
use hyper::Body; use hyper::Body;
use pin_project::pin_project; use pin_project::pin_project;
use serde_json::Value; use serde_json::Value;
use tokio::io::{ReadBuf, AsyncRead, AsyncWrite, AsyncWriteExt}; use tokio::io::{AsyncRead, AsyncWrite, AsyncWriteExt, ReadBuf};
use tokio::net::UnixStream; use tokio::net::UnixStream;
use crate::tools; use crate::tools;
use proxmox::api::error::HttpError; use proxmox::api::error::HttpError;
/// Port below 1024 is privileged, this is intentional so only root (on host) can connect
pub const DEFAULT_VSOCK_PORT: u16 = 807; pub const DEFAULT_VSOCK_PORT: u16 = 807;
#[derive(Clone)] #[derive(Clone)]
@ -138,43 +137,48 @@ pub struct VsockClient {
client: Client<VsockConnector>, client: Client<VsockConnector>,
cid: i32, cid: i32,
port: u16, port: u16,
auth: Option<String>,
} }
impl VsockClient { impl VsockClient {
pub fn new(cid: i32, port: u16) -> Self { pub fn new(cid: i32, port: u16, auth: Option<String>) -> Self {
let conn = VsockConnector {}; let conn = VsockConnector {};
let client = Client::builder().build::<_, Body>(conn); let client = Client::builder().build::<_, Body>(conn);
Self { client, cid, port } Self {
client,
cid,
port,
auth,
}
} }
pub async fn get(&self, path: &str, data: Option<Value>) -> Result<Value, Error> { pub async fn get(&self, path: &str, data: Option<Value>) -> Result<Value, Error> {
let req = Self::request_builder(self.cid, self.port, "GET", path, data)?; let req = self.request_builder("GET", path, data)?;
self.api_request(req).await self.api_request(req).await
} }
pub async fn post(&mut self, path: &str, data: Option<Value>) -> Result<Value, Error> { pub async fn post(&self, path: &str, data: Option<Value>) -> Result<Value, Error> {
let req = Self::request_builder(self.cid, self.port, "POST", path, data)?; let req = self.request_builder("POST", path, data)?;
self.api_request(req).await self.api_request(req).await
} }
pub async fn download( pub async fn download(
&mut self, &self,
path: &str, path: &str,
data: Option<Value>, data: Option<Value>,
output: &mut (dyn AsyncWrite + Send + Unpin), output: &mut (dyn AsyncWrite + Send + Unpin),
) -> Result<(), Error> { ) -> Result<(), Error> {
let req = Self::request_builder(self.cid, self.port, "GET", path, data)?; let req = self.request_builder("GET", path, data)?;
let client = self.client.clone(); let client = self.client.clone();
let resp = client.request(req) let resp = client
.request(req)
.await .await
.map_err(|_| format_err!("vsock download request timed out"))?; .map_err(|_| format_err!("vsock download request timed out"))?;
let status = resp.status(); let status = resp.status();
if !status.is_success() { if !status.is_success() {
Self::api_response(resp) Self::api_response(resp).await.map(|_| ())?
.await
.map(|_| ())?
} else { } else {
resp.into_body() resp.into_body()
.map_err(Error::from) .map_err(Error::from)
@ -212,47 +216,43 @@ impl VsockClient {
.await .await
} }
pub fn request_builder( fn request_builder(
cid: i32, &self,
port: u16,
method: &str, method: &str,
path: &str, path: &str,
data: Option<Value>, data: Option<Value>,
) -> Result<Request<Body>, Error> { ) -> Result<Request<Body>, Error> {
let path = path.trim_matches('/'); let path = path.trim_matches('/');
let url: Uri = format!("vsock://{}:{}/{}", cid, port, path).parse()?; let url: Uri = format!("vsock://{}:{}/{}", self.cid, self.port, path).parse()?;
let make_builder = |content_type: &str, url: &Uri| {
let mut builder = Request::builder()
.method(method)
.uri(url)
.header(hyper::header::CONTENT_TYPE, content_type);
if let Some(auth) = &self.auth {
builder = builder.header(hyper::header::AUTHORIZATION, auth);
}
builder
};
if let Some(data) = data { if let Some(data) = data {
if method == "POST" { if method == "POST" {
let request = Request::builder() let builder = make_builder("application/json", &url);
.method(method) let request = builder.body(Body::from(data.to_string()))?;
.uri(url)
.header(hyper::header::CONTENT_TYPE, "application/json")
.body(Body::from(data.to_string()))?;
return Ok(request); return Ok(request);
} else { } else {
let query = tools::json_object_to_query(data)?; let query = tools::json_object_to_query(data)?;
let url: Uri = format!("vsock://{}:{}/{}?{}", cid, port, path, query).parse()?; let url: Uri =
let request = Request::builder() format!("vsock://{}:{}/{}?{}", self.cid, self.port, path, query).parse()?;
.method(method) let builder = make_builder("application/x-www-form-urlencoded", &url);
.uri(url) let request = builder.body(Body::empty())?;
.header(
hyper::header::CONTENT_TYPE,
"application/x-www-form-urlencoded",
)
.body(Body::empty())?;
return Ok(request); return Ok(request);
} }
} }
let request = Request::builder() let builder = make_builder("application/x-www-form-urlencoded", &url);
.method(method) let request = builder.body(Body::empty())?;
.uri(url)
.header(
hyper::header::CONTENT_TYPE,
"application/x-www-form-urlencoded",
)
.body(Body::empty())?;
Ok(request) Ok(request)
} }


@ -1006,6 +1006,7 @@ fn process_acl(
metadata.acl.users = acl_user; metadata.acl.users = acl_user;
metadata.acl.groups = acl_group; metadata.acl.groups = acl_group;
metadata.acl.group_obj = acl_group_obj;
} }
acl::ACL_TYPE_DEFAULT => { acl::ACL_TYPE_DEFAULT => {
if user_obj_permissions != None if user_obj_permissions != None
@ -1025,13 +1026,11 @@ fn process_acl(
metadata.acl.default_users = acl_user; metadata.acl.default_users = acl_user;
metadata.acl.default_groups = acl_group; metadata.acl.default_groups = acl_group;
metadata.acl.default = acl_default;
} }
_ => bail!("Unexpected ACL type encountered"), _ => bail!("Unexpected ACL type encountered"),
} }
metadata.acl.group_obj = acl_group_obj;
metadata.acl.default = acl_default;
Ok(()) Ok(())
} }


@ -1,6 +1,6 @@
use std::ffi::{OsStr, OsString}; use std::ffi::OsString;
use std::os::unix::io::{AsRawFd, RawFd}; use std::os::unix::io::{AsRawFd, RawFd};
use std::path::PathBuf; use std::path::{Path, PathBuf};
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use nix::dir::Dir; use nix::dir::Dir;
@ -78,10 +78,6 @@ impl PxarDir {
pub fn metadata(&self) -> &Metadata { pub fn metadata(&self) -> &Metadata {
&self.metadata &self.metadata
} }
pub fn file_name(&self) -> &OsStr {
&self.file_name
}
} }
pub struct PxarDirStack { pub struct PxarDirStack {
@ -159,4 +155,8 @@ impl PxarDirStack {
.try_as_borrowed_fd() .try_as_borrowed_fd()
.ok_or_else(|| format_err!("lost track of directory file descriptors")) .ok_or_else(|| format_err!("lost track of directory file descriptors"))
} }
pub fn path(&self) -> &Path {
&self.path
}
} }


@ -285,6 +285,8 @@ impl Extractor {
/// When done with a directory we can apply its metadata if it has been created. /// When done with a directory we can apply its metadata if it has been created.
pub fn leave_directory(&mut self) -> Result<(), Error> { pub fn leave_directory(&mut self) -> Result<(), Error> {
let path_info = self.dir_stack.path().to_owned();
let dir = self let dir = self
.dir_stack .dir_stack
.pop() .pop()
@ -296,7 +298,7 @@ impl Extractor {
self.feature_flags, self.feature_flags,
dir.metadata(), dir.metadata(),
fd.as_raw_fd(), fd.as_raw_fd(),
&CString::new(dir.file_name().as_bytes())?, &path_info,
&mut self.on_error, &mut self.on_error,
) )
.map_err(|err| format_err!("failed to apply directory metadata: {}", err))?; .map_err(|err| format_err!("failed to apply directory metadata: {}", err))?;
@ -329,6 +331,7 @@ impl Extractor {
metadata, metadata,
parent, parent,
file_name, file_name,
self.dir_stack.path(),
&mut self.on_error, &mut self.on_error,
) )
} }
@ -382,6 +385,7 @@ impl Extractor {
metadata, metadata,
parent, parent,
file_name, file_name,
self.dir_stack.path(),
&mut self.on_error, &mut self.on_error,
) )
} }
@ -437,7 +441,7 @@ impl Extractor {
self.feature_flags, self.feature_flags,
metadata, metadata,
file.as_raw_fd(), file.as_raw_fd(),
file_name, self.dir_stack.path(),
&mut self.on_error, &mut self.on_error,
) )
} }
@ -494,7 +498,7 @@ impl Extractor {
self.feature_flags, self.feature_flags,
metadata, metadata,
file.as_raw_fd(), file.as_raw_fd(),
file_name, self.dir_stack.path(),
&mut self.on_error, &mut self.on_error,
) )
} }


@ -1,5 +1,6 @@
use std::ffi::{CStr, CString}; use std::ffi::{CStr, CString};
use std::os::unix::io::{AsRawFd, FromRawFd, RawFd}; use std::os::unix::io::{AsRawFd, FromRawFd, RawFd};
use std::path::Path;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use nix::errno::Errno; use nix::errno::Errno;
@ -62,6 +63,7 @@ pub fn apply_at(
metadata: &Metadata, metadata: &Metadata,
parent: RawFd, parent: RawFd,
file_name: &CStr, file_name: &CStr,
path_info: &Path,
on_error: &mut (dyn FnMut(Error) -> Result<(), Error> + Send), on_error: &mut (dyn FnMut(Error) -> Result<(), Error> + Send),
) -> Result<(), Error> { ) -> Result<(), Error> {
let fd = proxmox::tools::fd::Fd::openat( let fd = proxmox::tools::fd::Fd::openat(
@ -71,7 +73,7 @@ pub fn apply_at(
Mode::empty(), Mode::empty(),
)?; )?;
apply(flags, metadata, fd.as_raw_fd(), file_name, on_error) apply(flags, metadata, fd.as_raw_fd(), path_info, on_error)
} }
pub fn apply_initial_flags( pub fn apply_initial_flags(
@ -94,7 +96,7 @@ pub fn apply(
flags: Flags, flags: Flags,
metadata: &Metadata, metadata: &Metadata,
fd: RawFd, fd: RawFd,
file_name: &CStr, path_info: &Path,
on_error: &mut (dyn FnMut(Error) -> Result<(), Error> + Send), on_error: &mut (dyn FnMut(Error) -> Result<(), Error> + Send),
) -> Result<(), Error> { ) -> Result<(), Error> {
let c_proc_path = CString::new(format!("/proc/self/fd/{}", fd)).unwrap(); let c_proc_path = CString::new(format!("/proc/self/fd/{}", fd)).unwrap();
@ -116,7 +118,7 @@ pub fn apply(
apply_xattrs(flags, c_proc_path.as_ptr(), metadata, &mut skip_xattrs) apply_xattrs(flags, c_proc_path.as_ptr(), metadata, &mut skip_xattrs)
.or_else(&mut *on_error)?; .or_else(&mut *on_error)?;
add_fcaps(flags, c_proc_path.as_ptr(), metadata, &mut skip_xattrs).or_else(&mut *on_error)?; add_fcaps(flags, c_proc_path.as_ptr(), metadata, &mut skip_xattrs).or_else(&mut *on_error)?;
apply_acls(flags, &c_proc_path, metadata) apply_acls(flags, &c_proc_path, metadata, path_info)
.map_err(|err| format_err!("failed to apply acls: {}", err)) .map_err(|err| format_err!("failed to apply acls: {}", err))
.or_else(&mut *on_error)?; .or_else(&mut *on_error)?;
apply_quota_project_id(flags, fd, metadata).or_else(&mut *on_error)?; apply_quota_project_id(flags, fd, metadata).or_else(&mut *on_error)?;
@ -147,7 +149,7 @@ pub fn apply(
Err(err) => { Err(err) => {
on_error(format_err!( on_error(format_err!(
"failed to restore mtime attribute on {:?}: {}", "failed to restore mtime attribute on {:?}: {}",
file_name, path_info,
err err
))?; ))?;
} }
@ -227,7 +229,12 @@ fn apply_xattrs(
Ok(()) Ok(())
} }
fn apply_acls(flags: Flags, c_proc_path: &CStr, metadata: &Metadata) -> Result<(), Error> { fn apply_acls(
flags: Flags,
c_proc_path: &CStr,
metadata: &Metadata,
path_info: &Path,
) -> Result<(), Error> {
if !flags.contains(Flags::WITH_ACL) || metadata.acl.is_empty() { if !flags.contains(Flags::WITH_ACL) || metadata.acl.is_empty() {
return Ok(()); return Ok(());
} }
@ -257,11 +264,17 @@ fn apply_acls(flags: Flags, c_proc_path: &CStr, metadata: &Metadata) -> Result<(
acl.add_entry_full(acl::ACL_GROUP_OBJ, None, group_obj.permissions.0)?; acl.add_entry_full(acl::ACL_GROUP_OBJ, None, group_obj.permissions.0)?;
} }
None => { None => {
acl.add_entry_full( let mode = acl::mode_group_to_acl_permissions(metadata.stat.mode);
acl::ACL_GROUP_OBJ,
None, acl.add_entry_full(acl::ACL_GROUP_OBJ, None, mode)?;
acl::mode_group_to_acl_permissions(metadata.stat.mode),
)?; if !metadata.acl.users.is_empty() || !metadata.acl.groups.is_empty() {
eprintln!(
"Warning: {:?}: Missing GROUP_OBJ entry in ACL, resetting to value of MASK",
path_info,
);
acl.add_entry_full(acl::ACL_MASK, None, mode)?;
}
} }
} }


@ -89,3 +89,5 @@ mod report;
pub use report::*; pub use report::*;
pub mod ticket; pub mod ticket;
pub mod auth;

src/server/auth.rs (new file, 101 lines)

@ -0,0 +1,101 @@
//! Provides authentication primitives for the HTTP server
use anyhow::{bail, format_err, Error};
use crate::tools::ticket::Ticket;
use crate::auth_helpers::*;
use crate::tools;
use crate::config::cached_user_info::CachedUserInfo;
use crate::api2::types::{Authid, Userid};
use hyper::header;
use percent_encoding::percent_decode_str;
pub struct UserAuthData {
ticket: String,
csrf_token: Option<String>,
}
pub enum AuthData {
User(UserAuthData),
ApiToken(String),
}
pub fn extract_auth_data(headers: &http::HeaderMap) -> Option<AuthData> {
if let Some(raw_cookie) = headers.get(header::COOKIE) {
if let Ok(cookie) = raw_cookie.to_str() {
if let Some(ticket) = tools::extract_cookie(cookie, "PBSAuthCookie") {
let csrf_token = match headers.get("CSRFPreventionToken").map(|v| v.to_str()) {
Some(Ok(v)) => Some(v.to_owned()),
_ => None,
};
return Some(AuthData::User(UserAuthData {
ticket,
csrf_token,
}));
}
}
}
match headers.get(header::AUTHORIZATION).map(|v| v.to_str()) {
Some(Ok(v)) => {
if v.starts_with("PBSAPIToken ") || v.starts_with("PBSAPIToken=") {
Some(AuthData::ApiToken(v["PBSAPIToken ".len()..].to_owned()))
} else {
None
}
},
_ => None,
}
}
pub fn check_auth(
method: &hyper::Method,
auth_data: &AuthData,
user_info: &CachedUserInfo,
) -> Result<Authid, Error> {
match auth_data {
AuthData::User(user_auth_data) => {
let ticket = user_auth_data.ticket.clone();
let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
let userid: Userid = Ticket::<super::ticket::ApiTicket>::parse(&ticket)?
.verify_with_time_frame(public_auth_key(), "PBS", None, -300..ticket_lifetime)?
.require_full()?;
let auth_id = Authid::from(userid.clone());
if !user_info.is_active_auth_id(&auth_id) {
bail!("user account disabled or expired.");
}
if method != hyper::Method::GET {
if let Some(csrf_token) = &user_auth_data.csrf_token {
verify_csrf_prevention_token(csrf_secret(), &userid, &csrf_token, -300, ticket_lifetime)?;
} else {
bail!("missing CSRF prevention token");
}
}
Ok(auth_id)
},
AuthData::ApiToken(api_token) => {
let mut parts = api_token.splitn(2, ':');
let tokenid = parts.next()
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokenid: Authid = tokenid.parse()?;
if !user_info.is_active_auth_id(&tokenid) {
bail!("user account or token disabled or expired.");
}
let tokensecret = parts.next()
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokensecret = percent_decode_str(tokensecret)
.decode_utf8()
.map_err(|_| format_err!("failed to decode API token header"))?;
crate::config::token_shadow::verify_secret(&tokenid, &tokensecret)?;
Ok(tokenid)
}
}
}
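A minimal sketch of how the extracted helpers are meant to be called from a request handler; `parts` (http::request::Parts) and `user_info` (&CachedUserInfo) are assumed to be in scope and error handling is simplified:

let auth_id = match extract_auth_data(&parts.headers) {
    Some(auth_data) => check_auth(&parts.method, &auth_data, user_info)?,
    None => bail!("no authentication credentials provided"),
};
// the resulting Authid can then be attached to the response extensions for
// access logging, which is where the REST server code below reads it back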


@ -1,10 +1,11 @@
use anyhow::Error; use anyhow::Error;
use serde_json::json; use serde_json::json;
use handlebars::{Handlebars, Helper, Context, RenderError, RenderContext, Output, HelperResult}; use handlebars::{Handlebars, Helper, Context, RenderError, RenderContext, Output, HelperResult, TemplateError};
use proxmox::tools::email::sendmail; use proxmox::tools::email::sendmail;
use proxmox::api::schema::parse_property_string; use proxmox::api::schema::parse_property_string;
use proxmox::try_block;
use crate::{ use crate::{
config::datastore::DataStoreConfig, config::datastore::DataStoreConfig,
@ -148,6 +149,14 @@ Datastore: {{job.store}}
Tape Pool: {{job.pool}} Tape Pool: {{job.pool}}
Tape Drive: {{job.drive}} Tape Drive: {{job.drive}}
{{#if snapshot-list ~}}
Snapshots included:
{{#each snapshot-list~}}
{{this}}
{{/each~}}
{{/if}}
Duration: {{duration}}
Tape Backup successful. Tape Backup successful.
@ -181,30 +190,48 @@ lazy_static::lazy_static!{
static ref HANDLEBARS: Handlebars<'static> = { static ref HANDLEBARS: Handlebars<'static> = {
let mut hb = Handlebars::new(); let mut hb = Handlebars::new();
let result: Result<(), TemplateError> = try_block!({
hb.set_strict_mode(true); hb.set_strict_mode(true);
hb.register_escape_fn(handlebars::no_escape);
hb.register_helper("human-bytes", Box::new(handlebars_humam_bytes_helper)); hb.register_helper("human-bytes", Box::new(handlebars_humam_bytes_helper));
hb.register_helper("relative-percentage", Box::new(handlebars_relative_percentage_helper)); hb.register_helper("relative-percentage", Box::new(handlebars_relative_percentage_helper));
hb.register_template_string("gc_ok_template", GC_OK_TEMPLATE).unwrap(); hb.register_template_string("gc_ok_template", GC_OK_TEMPLATE)?;
hb.register_template_string("gc_err_template", GC_ERR_TEMPLATE).unwrap(); hb.register_template_string("gc_err_template", GC_ERR_TEMPLATE)?;
hb.register_template_string("verify_ok_template", VERIFY_OK_TEMPLATE).unwrap(); hb.register_template_string("verify_ok_template", VERIFY_OK_TEMPLATE)?;
hb.register_template_string("verify_err_template", VERIFY_ERR_TEMPLATE).unwrap(); hb.register_template_string("verify_err_template", VERIFY_ERR_TEMPLATE)?;
hb.register_template_string("sync_ok_template", SYNC_OK_TEMPLATE).unwrap(); hb.register_template_string("sync_ok_template", SYNC_OK_TEMPLATE)?;
hb.register_template_string("sync_err_template", SYNC_ERR_TEMPLATE).unwrap(); hb.register_template_string("sync_err_template", SYNC_ERR_TEMPLATE)?;
hb.register_template_string("tape_backup_ok_template", TAPE_BACKUP_OK_TEMPLATE).unwrap(); hb.register_template_string("tape_backup_ok_template", TAPE_BACKUP_OK_TEMPLATE)?;
hb.register_template_string("tape_backup_err_template", TAPE_BACKUP_ERR_TEMPLATE).unwrap(); hb.register_template_string("tape_backup_err_template", TAPE_BACKUP_ERR_TEMPLATE)?;
hb.register_template_string("package_update_template", PACKAGE_UPDATES_TEMPLATE).unwrap(); hb.register_template_string("package_update_template", PACKAGE_UPDATES_TEMPLATE)?;
Ok(())
});
if let Err(err) = result {
eprintln!("error during template registration: {}", err);
}
hb hb
}; };
} }
/// Summary of a successful Tape Job
#[derive(Default)]
pub struct TapeBackupJobSummary {
/// The list of snapshots backed up
pub snapshot_list: Vec<String>,
/// The total time of the backup job
pub duration: std::time::Duration,
}
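An illustrative sketch of how the summary is filled and handed to the notification helper below; `start_time`, `email`, `setup` and `result` are assumed from the surrounding tape backup job code and the snapshot name is made up:

let mut summary = TapeBackupJobSummary::default();
summary.snapshot_list.push("vm/100/2021-04-01T10:00:00Z".to_string());
summary.duration = start_time.elapsed();
send_tape_backup_status(&email, Some("job-id"), &setup, &result, summary)?;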
fn send_job_status_mail( fn send_job_status_mail(
email: &str, email: &str,
subject: &str, subject: &str,
@ -402,14 +429,18 @@ pub fn send_tape_backup_status(
id: Option<&str>, id: Option<&str>,
job: &TapeBackupJobSetup, job: &TapeBackupJobSetup,
result: &Result<(), Error>, result: &Result<(), Error>,
summary: TapeBackupJobSummary,
) -> Result<(), Error> { ) -> Result<(), Error> {
let (fqdn, port) = get_server_url(); let (fqdn, port) = get_server_url();
let duration: crate::tools::systemd::time::TimeSpan = summary.duration.into();
let mut data = json!({ let mut data = json!({
"job": job, "job": job,
"fqdn": fqdn, "fqdn": fqdn,
"port": port, "port": port,
"id": id, "id": id,
"snapshot-list": summary.snapshot_list,
"duration": duration.to_string(),
}); });
let text = match result { let text = match result {
@ -600,3 +631,23 @@ fn handlebars_relative_percentage_helper(
} }
Ok(()) Ok(())
} }
#[test]
fn test_template_register() {
HANDLEBARS.get_helper("human-bytes").unwrap();
HANDLEBARS.get_helper("relative-percentage").unwrap();
assert!(HANDLEBARS.has_template("gc_ok_template"));
assert!(HANDLEBARS.has_template("gc_err_template"));
assert!(HANDLEBARS.has_template("verify_ok_template"));
assert!(HANDLEBARS.has_template("verify_err_template"));
assert!(HANDLEBARS.has_template("sync_ok_template"));
assert!(HANDLEBARS.has_template("sync_err_template"));
assert!(HANDLEBARS.has_template("tape_backup_ok_template"));
assert!(HANDLEBARS.has_template("tape_backup_err_template"));
assert!(HANDLEBARS.has_template("package_update_template"));
}
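Since the registration is now covered by a unit test, a template that fails to parse shows up when running the test suite (for example via cargo test test_template_register) instead of only when the first notification mail is rendered.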


@ -9,48 +9,41 @@ use std::task::{Context, Poll};
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use futures::future::{self, FutureExt, TryFutureExt}; use futures::future::{self, FutureExt, TryFutureExt};
use futures::stream::TryStreamExt; use futures::stream::TryStreamExt;
use hyper::header::{self, HeaderMap};
use hyper::body::HttpBody; use hyper::body::HttpBody;
use hyper::header::{self, HeaderMap};
use hyper::http::request::Parts; use hyper::http::request::Parts;
use hyper::{Body, Request, Response, StatusCode}; use hyper::{Body, Request, Response, StatusCode};
use lazy_static::lazy_static; use lazy_static::lazy_static;
use regex::Regex;
use serde_json::{json, Value}; use serde_json::{json, Value};
use tokio::fs::File; use tokio::fs::File;
use tokio::time::Instant; use tokio::time::Instant;
use percent_encoding::percent_decode_str;
use url::form_urlencoded; use url::form_urlencoded;
use regex::Regex;
use proxmox::http_err;
use proxmox::api::{
ApiHandler,
ApiMethod,
HttpError,
Permission,
RpcEnvironment,
RpcEnvironmentType,
check_api_permission,
};
use proxmox::api::schema::{ use proxmox::api::schema::{
ObjectSchemaType, parse_parameter_strings, parse_simple_value, verify_json_object, ObjectSchemaType,
ParameterSchema, ParameterSchema,
parse_parameter_strings,
parse_simple_value,
verify_json_object,
}; };
use proxmox::api::{
check_api_permission, ApiHandler, ApiMethod, HttpError, Permission, RpcEnvironment,
RpcEnvironmentType,
};
use proxmox::http_err;
use super::environment::RestEnvironment; use super::environment::RestEnvironment;
use super::formatter::*; use super::formatter::*;
use super::ApiConfig; use super::ApiConfig;
use super::auth::{check_auth, extract_auth_data};
use crate::auth_helpers::*;
use crate::api2::types::{Authid, Userid}; use crate::api2::types::{Authid, Userid};
use crate::auth_helpers::*;
use crate::config::cached_user_info::CachedUserInfo;
use crate::tools; use crate::tools;
use crate::tools::FileLogger; use crate::tools::FileLogger;
use crate::tools::ticket::Ticket;
use crate::config::cached_user_info::CachedUserInfo;
extern "C" { fn tzset(); } extern "C" {
fn tzset();
}
pub struct RestServer { pub struct RestServer {
pub api_config: Arc<ApiConfig>, pub api_config: Arc<ApiConfig>,
@ -59,13 +52,16 @@ pub struct RestServer {
const MAX_URI_QUERY_LENGTH: usize = 3072; const MAX_URI_QUERY_LENGTH: usize = 3072;
impl RestServer { impl RestServer {
pub fn new(api_config: ApiConfig) -> Self { pub fn new(api_config: ApiConfig) -> Self {
Self { api_config: Arc::new(api_config) } Self {
api_config: Arc::new(api_config),
}
} }
} }
impl tower_service::Service<&Pin<Box<tokio_openssl::SslStream<tokio::net::TcpStream>>>> for RestServer { impl tower_service::Service<&Pin<Box<tokio_openssl::SslStream<tokio::net::TcpStream>>>>
for RestServer
{
type Response = ApiService; type Response = ApiService;
type Error = Error; type Error = Error;
type Future = Pin<Box<dyn Future<Output = Result<ApiService, Error>> + Send>>; type Future = Pin<Box<dyn Future<Output = Result<ApiService, Error>> + Send>>;
@ -74,14 +70,17 @@ impl tower_service::Service<&Pin<Box<tokio_openssl::SslStream<tokio::net::TcpStr
Poll::Ready(Ok(())) Poll::Ready(Ok(()))
} }
fn call(&mut self, ctx: &Pin<Box<tokio_openssl::SslStream<tokio::net::TcpStream>>>) -> Self::Future { fn call(
&mut self,
ctx: &Pin<Box<tokio_openssl::SslStream<tokio::net::TcpStream>>>,
) -> Self::Future {
match ctx.get_ref().peer_addr() { match ctx.get_ref().peer_addr() {
Err(err) => { Err(err) => future::err(format_err!("unable to get peer address - {}", err)).boxed(),
future::err(format_err!("unable to get peer address - {}", err)).boxed() Ok(peer) => future::ok(ApiService {
} peer,
Ok(peer) => { api_config: self.api_config.clone(),
future::ok(ApiService { peer, api_config: self.api_config.clone() }).boxed() })
} .boxed(),
} }
} }
} }
@ -97,12 +96,12 @@ impl tower_service::Service<&tokio::net::TcpStream> for RestServer {
fn call(&mut self, ctx: &tokio::net::TcpStream) -> Self::Future { fn call(&mut self, ctx: &tokio::net::TcpStream) -> Self::Future {
match ctx.peer_addr() { match ctx.peer_addr() {
Err(err) => { Err(err) => future::err(format_err!("unable to get peer address - {}", err)).boxed(),
future::err(format_err!("unable to get peer address - {}", err)).boxed() Ok(peer) => future::ok(ApiService {
} peer,
Ok(peer) => { api_config: self.api_config.clone(),
future::ok(ApiService { peer, api_config: self.api_config.clone() }).boxed() })
} .boxed(),
} }
} }
} }
@ -122,8 +121,9 @@ impl tower_service::Service<&tokio::net::UnixStream> for RestServer {
let fake_peer = "0.0.0.0:807".parse().unwrap(); let fake_peer = "0.0.0.0:807".parse().unwrap();
future::ok(ApiService { future::ok(ApiService {
peer: fake_peer, peer: fake_peer,
api_config: self.api_config.clone() api_config: self.api_config.clone(),
}).boxed() })
.boxed()
} }
} }
@ -140,8 +140,9 @@ fn log_response(
resp: &Response<Body>, resp: &Response<Body>,
user_agent: Option<String>, user_agent: Option<String>,
) { ) {
if resp.extensions().get::<NoLogExtension>().is_some() {
if resp.extensions().get::<NoLogExtension>().is_some() { return; }; return;
};
// we also log URL-to-long requests, so avoid message bigger than PIPE_BUF (4k on Linux) // we also log URL-to-long requests, so avoid message bigger than PIPE_BUF (4k on Linux)
// to profit from atomicity guarantees for O_APPEND opened logfiles // to profit from atomicity guarantees for O_APPEND opened logfiles
@ -157,7 +158,15 @@ fn log_response(
message = &data.0; message = &data.0;
} }
log::error!("{} {}: {} {}: [client {}] {}", method.as_str(), path, status.as_str(), reason, peer, message); log::error!(
"{} {}: {} {}: [client {}] {}",
method.as_str(),
path,
status.as_str(),
reason,
peer,
message
);
} }
if let Some(logfile) = logfile { if let Some(logfile) = logfile {
let auth_id = match resp.extensions().get::<Authid>() { let auth_id = match resp.extensions().get::<Authid>() {
@ -169,10 +178,7 @@ fn log_response(
let datetime = proxmox::tools::time::strftime_local("%d/%m/%Y:%H:%M:%S %z", now) let datetime = proxmox::tools::time::strftime_local("%d/%m/%Y:%H:%M:%S %z", now)
.unwrap_or_else(|_| "-".to_string()); .unwrap_or_else(|_| "-".to_string());
logfile logfile.lock().unwrap().log(format!(
.lock()
.unwrap()
.log(format!(
"{} - {} [{}] \"{} {}\" {} {} {}", "{} - {} [{}] \"{} {}\" {} {} {}",
peer.ip(), peer.ip(),
auth_id, auth_id,
@ -208,11 +214,13 @@ fn get_proxied_peer(headers: &HeaderMap) -> Option<std::net::SocketAddr> {
fn get_user_agent(headers: &HeaderMap) -> Option<String> { fn get_user_agent(headers: &HeaderMap) -> Option<String> {
let agent = headers.get(header::USER_AGENT)?.to_str(); let agent = headers.get(header::USER_AGENT)?.to_str();
agent.map(|s| { agent
.map(|s| {
let mut s = s.to_owned(); let mut s = s.to_owned();
s.truncate(128); s.truncate(128);
s s
}).ok() })
.ok()
} }
impl tower_service::Service<Request<Body>> for ApiService { impl tower_service::Service<Request<Body>> for ApiService {
@ -260,7 +268,6 @@ fn parse_query_parameters<S: 'static + BuildHasher + Send>(
parts: &Parts, parts: &Parts,
uri_param: &HashMap<String, String, S>, uri_param: &HashMap<String, String, S>,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let mut param_list: Vec<(String, String)> = vec![]; let mut param_list: Vec<(String, String)> = vec![];
if !form.is_empty() { if !form.is_empty() {
@ -271,7 +278,9 @@ fn parse_query_parameters<S: 'static + BuildHasher + Send>(
if let Some(query_str) = parts.uri.query() { if let Some(query_str) = parts.uri.query() {
for (k, v) in form_urlencoded::parse(query_str.as_bytes()).into_owned() { for (k, v) in form_urlencoded::parse(query_str.as_bytes()).into_owned() {
if k == "_dc" { continue; } // skip extjs "disable cache" parameter if k == "_dc" {
continue;
} // skip extjs "disable cache" parameter
param_list.push((k, v)); param_list.push((k, v));
} }
} }
@ -291,7 +300,6 @@ async fn get_request_parameters<S: 'static + BuildHasher + Send>(
req_body: Body, req_body: Body,
uri_param: HashMap<String, String, S>, uri_param: HashMap<String, String, S>,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let mut is_json = false; let mut is_json = false;
if let Some(value) = parts.headers.get(header::CONTENT_TYPE) { if let Some(value) = parts.headers.get(header::CONTENT_TYPE) {
@ -306,19 +314,22 @@ async fn get_request_parameters<S: 'static + BuildHasher + Send>(
} }
} }
let body = req_body let body = TryStreamExt::map_err(req_body, |err| {
.map_err(|err| http_err!(BAD_REQUEST, "Promlems reading request body: {}", err)) http_err!(BAD_REQUEST, "Problems reading request body: {}", err)
})
.try_fold(Vec::new(), |mut acc, chunk| async move { .try_fold(Vec::new(), |mut acc, chunk| async move {
if acc.len() + chunk.len() < 64*1024 { //fimxe: max request body size? // FIXME: max request body size?
if acc.len() + chunk.len() < 64 * 1024 {
acc.extend_from_slice(&*chunk); acc.extend_from_slice(&*chunk);
Ok(acc) Ok(acc)
} else { } else {
Err(http_err!(BAD_REQUEST, "Request body too large")) Err(http_err!(BAD_REQUEST, "Request body too large"))
} }
}).await?; })
.await?;
let utf8_data = std::str::from_utf8(&body) let utf8_data =
.map_err(|err| format_err!("Request body not uft8: {}", err))?; std::str::from_utf8(&body).map_err(|err| format_err!("Request body not uft8: {}", err))?;
if is_json { if is_json {
let mut params: Value = serde_json::from_str(utf8_data)?; let mut params: Value = serde_json::from_str(utf8_data)?;
@ -342,7 +353,6 @@ async fn proxy_protected_request(
req_body: Body, req_body: Body,
peer: &std::net::SocketAddr, peer: &std::net::SocketAddr,
) -> Result<Response<Body>, Error> { ) -> Result<Response<Body>, Error> {
let mut uri_parts = parts.uri.clone().into_parts(); let mut uri_parts = parts.uri.clone().into_parts();
uri_parts.scheme = Some(http::uri::Scheme::HTTP); uri_parts.scheme = Some(http::uri::Scheme::HTTP);
@ -352,9 +362,10 @@ async fn proxy_protected_request(
parts.uri = new_uri; parts.uri = new_uri;
let mut request = Request::from_parts(parts, req_body); let mut request = Request::from_parts(parts, req_body);
request request.headers_mut().insert(
.headers_mut() header::FORWARDED,
.insert(header::FORWARDED, format!("for=\"{}\";", peer).parse().unwrap()); format!("for=\"{}\";", peer).parse().unwrap(),
);
let reload_timezone = info.reload_timezone; let reload_timezone = info.reload_timezone;
@ -367,7 +378,11 @@ async fn proxy_protected_request(
}) })
.await?; .await?;
if reload_timezone { unsafe { tzset(); } } if reload_timezone {
unsafe {
tzset();
}
}
Ok(resp) Ok(resp)
} }
@ -380,7 +395,6 @@ pub async fn handle_api_request<Env: RpcEnvironment, S: 'static + BuildHasher +
req_body: Body, req_body: Body,
uri_param: HashMap<String, String, S>, uri_param: HashMap<String, String, S>,
) -> Result<Response<Body>, Error> { ) -> Result<Response<Body>, Error> {
let delay_unauth_time = std::time::Instant::now() + std::time::Duration::from_millis(3000); let delay_unauth_time = std::time::Instant::now() + std::time::Duration::from_millis(3000);
let result = match info.handler { let result = match info.handler {
@ -389,12 +403,13 @@ pub async fn handle_api_request<Env: RpcEnvironment, S: 'static + BuildHasher +
(handler)(parts, req_body, params, info, Box::new(rpcenv)).await (handler)(parts, req_body, params, info, Box::new(rpcenv)).await
} }
ApiHandler::Sync(handler) => { ApiHandler::Sync(handler) => {
let params = get_request_parameters(info.parameters, parts, req_body, uri_param).await?; let params =
(handler)(params, info, &mut rpcenv) get_request_parameters(info.parameters, parts, req_body, uri_param).await?;
.map(|data| (formatter.format_data)(data, &rpcenv)) (handler)(params, info, &mut rpcenv).map(|data| (formatter.format_data)(data, &rpcenv))
} }
ApiHandler::Async(handler) => { ApiHandler::Async(handler) => {
let params = get_request_parameters(info.parameters, parts, req_body, uri_param).await?; let params =
get_request_parameters(info.parameters, parts, req_body, uri_param).await?;
(handler)(params, info, &mut rpcenv) (handler)(params, info, &mut rpcenv)
.await .await
.map(|data| (formatter.format_data)(data, &rpcenv)) .map(|data| (formatter.format_data)(data, &rpcenv))
@ -413,7 +428,11 @@ pub async fn handle_api_request<Env: RpcEnvironment, S: 'static + BuildHasher +
} }
}; };
if info.reload_timezone { unsafe { tzset(); } } if info.reload_timezone {
unsafe {
tzset();
}
}
Ok(resp) Ok(resp)
} }
@ -425,7 +444,6 @@ fn get_index(
api: &Arc<ApiConfig>, api: &Arc<ApiConfig>,
parts: Parts, parts: Parts,
) -> Response<Body> { ) -> Response<Body> {
let nodename = proxmox::tools::nodename(); let nodename = proxmox::tools::nodename();
let user = userid.as_ref().map(|u| u.as_str()).unwrap_or(""); let user = userid.as_ref().map(|u| u.as_str()).unwrap_or("");
@ -462,9 +480,7 @@ fn get_index(
let (ct, index) = match api.render_template(template_file, &data) { let (ct, index) = match api.render_template(template_file, &data) {
Ok(index) => ("text/html", index), Ok(index) => ("text/html", index),
Err(err) => { Err(err) => ("text/plain", format!("Error rendering template: {}", err)),
("text/plain", format!("Error rendering template: {}", err))
}
}; };
let mut resp = Response::builder() let mut resp = Response::builder()
@ -481,7 +497,6 @@ fn get_index(
} }
fn extension_to_content_type(filename: &Path) -> (&'static str, bool) { fn extension_to_content_type(filename: &Path) -> (&'static str, bool) {
if let Some(ext) = filename.extension().and_then(|osstr| osstr.to_str()) { if let Some(ext) = filename.extension().and_then(|osstr| osstr.to_str()) {
return match ext { return match ext {
"css" => ("text/css", false), "css" => ("text/css", false),
@ -510,7 +525,6 @@ fn extension_to_content_type(filename: &Path) -> (&'static str, bool) {
} }
async fn simple_static_file_download(filename: PathBuf) -> Result<Response<Body>, Error> { async fn simple_static_file_download(filename: PathBuf) -> Result<Response<Body>, Error> {
let (content_type, _nocomp) = extension_to_content_type(&filename); let (content_type, _nocomp) = extension_to_content_type(&filename);
use tokio::io::AsyncReadExt; use tokio::io::AsyncReadExt;
@ -527,7 +541,8 @@ async fn simple_static_file_download(filename: PathBuf) -> Result<Response<Body>
let mut response = Response::new(data.into()); let mut response = Response::new(data.into());
response.headers_mut().insert( response.headers_mut().insert(
header::CONTENT_TYPE, header::CONTENT_TYPE,
header::HeaderValue::from_static(content_type)); header::HeaderValue::from_static(content_type),
);
Ok(response) Ok(response)
} }
@ -542,17 +557,15 @@ async fn chuncked_static_file_download(filename: PathBuf) -> Result<Response<Bod
.map_ok(|bytes| bytes.freeze()); .map_ok(|bytes| bytes.freeze());
let body = Body::wrap_stream(payload); let body = Body::wrap_stream(payload);
// fixme: set other headers ? // FIXME: set other headers ?
Ok(Response::builder() Ok(Response::builder()
.status(StatusCode::OK) .status(StatusCode::OK)
.header(header::CONTENT_TYPE, content_type) .header(header::CONTENT_TYPE, content_type)
.body(body) .body(body)
.unwrap() .unwrap())
)
} }
async fn handle_static_file_download(filename: PathBuf) -> Result<Response<Body>, Error> { async fn handle_static_file_download(filename: PathBuf) -> Result<Response<Body>, Error> {
let metadata = tokio::fs::metadata(filename.clone()) let metadata = tokio::fs::metadata(filename.clone())
.map_err(|err| http_err!(BAD_REQUEST, "File access problems: {}", err)) .map_err(|err| http_err!(BAD_REQUEST, "File access problems: {}", err))
.await?; .await?;
@ -574,102 +587,11 @@ fn extract_lang_header(headers: &http::HeaderMap) -> Option<String> {
None None
} }
struct UserAuthData{
ticket: String,
csrf_token: Option<String>,
}
enum AuthData {
User(UserAuthData),
ApiToken(String),
}
fn extract_auth_data(headers: &http::HeaderMap) -> Option<AuthData> {
if let Some(raw_cookie) = headers.get(header::COOKIE) {
if let Ok(cookie) = raw_cookie.to_str() {
if let Some(ticket) = tools::extract_cookie(cookie, "PBSAuthCookie") {
let csrf_token = match headers.get("CSRFPreventionToken").map(|v| v.to_str()) {
Some(Ok(v)) => Some(v.to_owned()),
_ => None,
};
return Some(AuthData::User(UserAuthData {
ticket,
csrf_token,
}));
}
}
}
match headers.get(header::AUTHORIZATION).map(|v| v.to_str()) {
Some(Ok(v)) => {
if v.starts_with("PBSAPIToken ") || v.starts_with("PBSAPIToken=") {
Some(AuthData::ApiToken(v["PBSAPIToken ".len()..].to_owned()))
} else {
None
}
},
_ => None,
}
}
fn check_auth(
method: &hyper::Method,
auth_data: &AuthData,
user_info: &CachedUserInfo,
) -> Result<Authid, Error> {
match auth_data {
AuthData::User(user_auth_data) => {
let ticket = user_auth_data.ticket.clone();
let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
let userid: Userid = Ticket::<super::ticket::ApiTicket>::parse(&ticket)?
.verify_with_time_frame(public_auth_key(), "PBS", None, -300..ticket_lifetime)?
.require_full()?;
let auth_id = Authid::from(userid.clone());
if !user_info.is_active_auth_id(&auth_id) {
bail!("user account disabled or expired.");
}
if method != hyper::Method::GET {
if let Some(csrf_token) = &user_auth_data.csrf_token {
verify_csrf_prevention_token(csrf_secret(), &userid, &csrf_token, -300, ticket_lifetime)?;
} else {
bail!("missing CSRF prevention token");
}
}
Ok(auth_id)
},
AuthData::ApiToken(api_token) => {
let mut parts = api_token.splitn(2, ':');
let tokenid = parts.next()
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokenid: Authid = tokenid.parse()?;
if !user_info.is_active_auth_id(&tokenid) {
bail!("user account or token disabled or expired.");
}
let tokensecret = parts.next()
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokensecret = percent_decode_str(tokensecret)
.decode_utf8()
.map_err(|_| format_err!("failed to decode API token header"))?;
crate::config::token_shadow::verify_secret(&tokenid, &tokensecret)?;
Ok(tokenid)
}
}
}
async fn handle_request( async fn handle_request(
api: Arc<ApiConfig>, api: Arc<ApiConfig>,
req: Request<Body>, req: Request<Body>,
peer: &std::net::SocketAddr, peer: &std::net::SocketAddr,
) -> Result<Response<Body>, Error> { ) -> Result<Response<Body>, Error> {
let (parts, body) = req.into_parts(); let (parts, body) = req.into_parts();
let method = parts.method.clone(); let method = parts.method.clone();
let (path, components) = tools::normalize_uri_path(parts.uri.path())?; let (path, components) = tools::normalize_uri_path(parts.uri.path())?;
@ -695,9 +617,7 @@ async fn handle_request(
let access_forbidden_time = std::time::Instant::now() + std::time::Duration::from_millis(500); let access_forbidden_time = std::time::Instant::now() + std::time::Duration::from_millis(500);
if comp_len >= 1 && components[0] == "api2" { if comp_len >= 1 && components[0] == "api2" {
if comp_len >= 2 { if comp_len >= 2 {
let format = components[1]; let format = components[1];
let formatter = match format { let formatter = match format {
@ -725,8 +645,10 @@ async fn handle_request(
Ok(authid) => rpcenv.set_auth_id(Some(authid.to_string())), Ok(authid) => rpcenv.set_auth_id(Some(authid.to_string())),
Err(err) => { Err(err) => {
let peer = peer.ip(); let peer = peer.ip();
auth_logger()? auth_logger()?.log(format!(
.log(format!("authentication failure; rhost={} msg={}", peer, err)); "authentication failure; rhost={} msg={}",
peer, err
));
// always delay unauthorized calls by 3 seconds (from start of request) // always delay unauthorized calls by 3 seconds (from start of request)
let err = http_err!(UNAUTHORIZED, "authentication failed - {}", err); let err = http_err!(UNAUTHORIZED, "authentication failed - {}", err);
@ -743,7 +665,12 @@ async fn handle_request(
} }
Some(api_method) => { Some(api_method) => {
let auth_id = rpcenv.get_auth_id(); let auth_id = rpcenv.get_auth_id();
if !check_api_permission(api_method.access.permission, auth_id.as_deref(), &uri_param, user_info.as_ref()) { if !check_api_permission(
api_method.access.permission,
auth_id.as_deref(),
&uri_param,
user_info.as_ref(),
) {
let err = http_err!(FORBIDDEN, "permission check failed"); let err = http_err!(FORBIDDEN, "permission check failed");
tokio::time::sleep_until(Instant::from_std(access_forbidden_time)).await; tokio::time::sleep_until(Instant::from_std(access_forbidden_time)).await;
return Ok((formatter.format_error)(err)); return Ok((formatter.format_error)(err));
@ -752,7 +679,8 @@ async fn handle_request(
let result = if api_method.protected && env_type == RpcEnvironmentType::PUBLIC { let result = if api_method.protected && env_type == RpcEnvironmentType::PUBLIC {
proxy_protected_request(api_method, parts, body, peer).await proxy_protected_request(api_method, parts, body, peer).await
} else { } else {
handle_api_request(rpcenv, api_method, formatter, parts, body, uri_param).await handle_api_request(rpcenv, api_method, formatter, parts, body, uri_param)
.await
}; };
let mut response = match result { let mut response = match result {
@ -768,7 +696,6 @@ async fn handle_request(
return Ok(response); return Ok(response);
} }
} }
} }
} else { } else {
// not Auth required for accessing files! // not Auth required for accessing files!
@ -784,8 +711,14 @@ async fn handle_request(
Ok(auth_id) if !auth_id.is_token() => { Ok(auth_id) if !auth_id.is_token() => {
let userid = auth_id.user(); let userid = auth_id.user();
let new_csrf_token = assemble_csrf_prevention_token(csrf_secret(), userid); let new_csrf_token = assemble_csrf_prevention_token(csrf_secret(), userid);
return Ok(get_index(Some(userid.clone()), Some(new_csrf_token), language, &api, parts)); return Ok(get_index(
}, Some(userid.clone()),
Some(new_csrf_token),
language,
&api,
parts,
));
}
_ => { _ => {
tokio::time::sleep_until(Instant::from_std(delay_unauth_time)).await; tokio::time::sleep_until(Instant::from_std(delay_unauth_time)).await;
return Ok(get_index(None, None, language, &api, parts)); return Ok(get_index(None, None, language, &api, parts));


@ -28,6 +28,7 @@ use crate::{
SENSE_KEY_UNIT_ATTENTION, SENSE_KEY_UNIT_ATTENTION,
SENSE_KEY_NOT_READY, SENSE_KEY_NOT_READY,
InquiryInfo, InquiryInfo,
ScsiError,
scsi_ascii_to_string, scsi_ascii_to_string,
scsi_inquiry, scsi_inquiry,
}, },
@ -103,7 +104,7 @@ fn execute_scsi_command<F: AsRawFd>(
if !retry { if !retry {
bail!("{} failed: {}", error_prefix, err); bail!("{} failed: {}", error_prefix, err);
} }
if let Some(ref sense) = err.sense { if let ScsiError::Sense(ref sense) = err {
if sense.sense_key == SENSE_KEY_NO_SENSE || if sense.sense_key == SENSE_KEY_NO_SENSE ||
sense.sense_key == SENSE_KEY_RECOVERED_ERROR || sense.sense_key == SENSE_KEY_RECOVERED_ERROR ||


@ -242,32 +242,6 @@ impl LinuxTapeHandle {
Ok(()) Ok(())
} }
pub fn forward_space_count_files(&mut self, count: i32) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTFSF, mt_count: count, };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| {
format_err!("forward space {} files failed - {}", count, err)
})?;
Ok(())
}
pub fn backward_space_count_files(&mut self, count: i32) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTBSF, mt_count: count, };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| {
format_err!("backward space {} files failed - {}", count, err)
})?;
Ok(())
}
/// Set tape compression feature
pub fn set_compression(&self, on: bool) -> Result<(), Error> {
@ -467,6 +441,32 @@ impl TapeDriver for LinuxTapeHandle {
Ok(()) Ok(())
} }
fn forward_space_count_files(&mut self, count: usize) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTFSF, mt_count: i32::try_from(count)? };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| {
format_err!("forward space {} files failed - {}", count, err)
})?;
Ok(())
}
fn backward_space_count_files(&mut self, count: usize) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTBSF, mt_count: i32::try_from(count)? };
unsafe {
mtioctop(self.file.as_raw_fd(), &cmd)
}.map_err(|err| {
format_err!("backward space {} files failed - {}", count, err)
})?;
Ok(())
}
fn rewind(&mut self) -> Result<(), Error> { fn rewind(&mut self) -> Result<(), Error> {
let cmd = mtop { mt_op: MTCmd::MTREW, mt_count: 1, }; let cmd = mtop { mt_op: MTCmd::MTREW, mt_count: 1, };


@ -87,6 +87,26 @@ pub trait TapeDriver {
/// We assume this flushes the tape write buffer.
fn move_to_eom(&mut self) -> Result<(), Error>;
/// Move to last file
fn move_to_last_file(&mut self) -> Result<(), Error> {
self.move_to_eom()?;
if self.current_file_number()? == 0 {
bail!("move_to_last_file failed - media contains no data");
}
self.backward_space_count_files(2)?;
Ok(())
}
/// Forward space count files. The tape is positioned on the first block of the next file.
fn forward_space_count_files(&mut self, count: usize) -> Result<(), Error>;
/// Backward space count files. The tape is positioned on the last block of the previous file.
fn backward_space_count_files(&mut self, count: usize) -> Result<(), Error>;
/// Current file number
fn current_file_number(&mut self) -> Result<u64, Error>;


@ -296,6 +296,51 @@ impl TapeDriver for VirtualTapeHandle {
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?; .map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
*pos = index.files; *pos = index.files;
self.store_status(&status)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
Ok(())
}
None => bail!("drive is empty (no tape loaded)."),
}
}
fn forward_space_count_files(&mut self, count: usize) -> Result<(), Error> {
let mut status = self.load_status()?;
match status.current_tape {
Some(VirtualTapeStatus { ref name, ref mut pos }) => {
let index = self.load_tape_index(name)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
let new_pos = *pos + count;
if new_pos <= index.files {
*pos = new_pos;
} else {
bail!("forward_space_count_files failed: move beyond EOT");
}
self.store_status(&status)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
Ok(())
}
None => bail!("drive is empty (no tape loaded)."),
}
}
fn backward_space_count_files(&mut self, count: usize) -> Result<(), Error> {
let mut status = self.load_status()?;
match status.current_tape {
Some(VirtualTapeStatus { ref mut pos, .. }) => {
if count <= *pos {
*pos = *pos - count;
} else {
bail!("backward_space_count_files failed: move before BOT");
}
self.store_status(&status) self.store_status(&status)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?; .map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;


@ -0,0 +1,89 @@
use std::fs::File;
use std::io::Read;
use proxmox::{
sys::error::SysError,
tools::Uuid,
};
use crate::{
tape::{
TapeWrite,
file_formats::{
PROXMOX_TAPE_BLOCK_SIZE,
PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0,
MediaContentHeader,
CatalogArchiveHeader,
},
},
};
/// Write a media catalog to the tape
///
/// Returns `Ok(Some(content_uuid))` on success, and `Ok(None)` if
/// `LEOM` was detected before all data was written. The stream is
/// marked incomplete in that case and does not contain all data (The
/// backup task must rewrite the whole file on the next media).
///
pub fn tape_write_catalog<'a>(
writer: &mut (dyn TapeWrite + 'a),
uuid: &Uuid,
media_set_uuid: &Uuid,
seq_nr: usize,
file: &mut File,
) -> Result<Option<Uuid>, std::io::Error> {
let archive_header = CatalogArchiveHeader {
uuid: uuid.clone(),
media_set_uuid: media_set_uuid.clone(),
seq_nr: seq_nr as u64,
};
let header_data = serde_json::to_string_pretty(&archive_header)?.as_bytes().to_vec();
let header = MediaContentHeader::new(
PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0, header_data.len() as u32);
let content_uuid: Uuid = header.uuid.into();
let leom = writer.write_header(&header, &header_data)?;
if leom {
writer.finish(true)?; // mark as incomplete
return Ok(None);
}
let mut file_copy_buffer = proxmox::tools::vec::undefined(PROXMOX_TAPE_BLOCK_SIZE);
let result: Result<(), std::io::Error> = proxmox::try_block!({
let file_size = file.metadata()?.len();
let mut remaining = file_size;
while remaining != 0 {
let got = file.read(&mut file_copy_buffer[..])?;
if got as u64 > remaining {
proxmox::io_bail!("catalog '{}' changed while reading", uuid);
}
writer.write_all(&file_copy_buffer[..got])?;
remaining -= got as u64;
}
if remaining > 0 {
proxmox::io_bail!("catalog '{}' shrunk while reading", uuid);
}
Ok(())
});
match result {
Ok(()) => {
writer.finish(false)?;
Ok(Some(content_uuid))
}
Err(err) => {
if err.is_errno(nix::errno::Errno::ENOSPC) && writer.logical_end_of_media() {
writer.finish(true)?; // mark as incomplete
Ok(None)
} else {
Err(err)
}
}
}
}
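A minimal usage sketch of the return-value convention described above; the writer, uuids and catalog file are placeholders here, the real caller is the pool writer added later in this series:
match tape_write_catalog(writer.as_mut(), &uuid, &media_set_uuid, seq_nr, &mut catalog_file)? {
    Some(content_uuid) => println!("catalog written, content uuid {}", content_uuid),
    None => println!("hit LEOM - catalog incomplete, rewrite it on the next medium"),
}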


@ -13,6 +13,9 @@ pub use chunk_archive::*;
mod snapshot_archive; mod snapshot_archive;
pub use snapshot_archive::*; pub use snapshot_archive::*;
mod catalog_archive;
pub use catalog_archive::*;
mod multi_volume_writer; mod multi_volume_writer;
pub use multi_volume_writer::*; pub use multi_volume_writer::*;
@ -285,9 +288,12 @@ impl BlockHeader {
pub fn new() -> Box<Self> { pub fn new() -> Box<Self> {
use std::alloc::{alloc_zeroed, Layout}; use std::alloc::{alloc_zeroed, Layout};
// align to PAGESIZE, so that we can use it with SG_IO
let page_size = unsafe { libc::sysconf(libc::_SC_PAGESIZE) } as usize;
let mut buffer = unsafe { let mut buffer = unsafe {
let ptr = alloc_zeroed( let ptr = alloc_zeroed(
Layout::from_size_align(Self::SIZE, std::mem::align_of::<u64>()) Layout::from_size_align(Self::SIZE, page_size)
.unwrap(), .unwrap(),
); );
Box::from_raw( Box::from_raw(


@ -3,10 +3,30 @@
//! The Inventory persistently stores the list of known backup
//! media. A backup media is identified by its 'MediaId', which is the
//! MediaLabel/MediaSetLabel combination.
//!
//! Inventory Locking
//!
//! The inventory itself has several methods to update single entries,
//! but all of them can be considered atomic.
//!
//! Pool Locking
//!
//! To add/modify media assigned to a pool, we always do
//! lock_media_pool(). For unassigned media, we call
//! lock_unassigned_media_pool().
//!
//! MediaSet Locking
//!
//! To add/remove media from a media set, or to modify catalogs we
//! always do lock_media_set(). Also, we acquire this lock during
//! restore, to make sure it is not reused for backups.
//!
use std::collections::{HashMap, BTreeMap}; use std::collections::{HashMap, BTreeMap};
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use std::os::unix::io::AsRawFd; use std::os::unix::io::AsRawFd;
use std::fs::File;
use std::time::Duration;
use anyhow::{bail, Error}; use anyhow::{bail, Error};
use serde::{Serialize, Deserialize}; use serde::{Serialize, Deserialize};
@ -78,7 +98,8 @@ impl Inventory {
pub const MEDIA_INVENTORY_FILENAME: &'static str = "inventory.json"; pub const MEDIA_INVENTORY_FILENAME: &'static str = "inventory.json";
pub const MEDIA_INVENTORY_LOCKFILE: &'static str = ".inventory.lck"; pub const MEDIA_INVENTORY_LOCKFILE: &'static str = ".inventory.lck";
fn new(base_path: &Path) -> Self { /// Create empty instance, no data loaded
pub fn new(base_path: &Path) -> Self {
let mut inventory_path = base_path.to_owned(); let mut inventory_path = base_path.to_owned();
inventory_path.push(Self::MEDIA_INVENTORY_FILENAME); inventory_path.push(Self::MEDIA_INVENTORY_FILENAME);
@ -127,7 +148,7 @@ impl Inventory {
} }
/// Lock the database
pub fn lock(&self) -> Result<std::fs::File, Error> { fn lock(&self) -> Result<std::fs::File, Error> {
let file = open_file_locked(&self.lockfile_path, std::time::Duration::new(10, 0), true)?; let file = open_file_locked(&self.lockfile_path, std::time::Duration::new(10, 0), true)?;
if cfg!(test) { if cfg!(test) {
// We cannot use chown inside test environment (no permissions) // We cannot use chown inside test environment (no permissions)
@ -733,6 +754,57 @@ impl Inventory {
} }
/// Lock a media pool
pub fn lock_media_pool(base_path: &Path, name: &str) -> Result<File, Error> {
let mut path = base_path.to_owned();
path.push(format!(".pool-{}", name));
path.set_extension("lck");
let timeout = std::time::Duration::new(10, 0);
let lock = proxmox::tools::fs::open_file_locked(&path, timeout, true)?;
if cfg!(test) {
// We cannot use chown inside test environment (no permissions)
return Ok(lock);
}
let backup_user = crate::backup::backup_user()?;
fchown(lock.as_raw_fd(), Some(backup_user.uid), Some(backup_user.gid))?;
Ok(lock)
}
/// Lock for media not assigned to any pool
pub fn lock_unassigned_media_pool(base_path: &Path) -> Result<File, Error> {
// lock artificial "__UNASSIGNED__" pool to avoid races
lock_media_pool(base_path, "__UNASSIGNED__")
}
/// Lock a media set
///
/// Timeout is 10 seconds by default
pub fn lock_media_set(
base_path: &Path,
media_set_uuid: &Uuid,
timeout: Option<Duration>,
) -> Result<File, Error> {
let mut path = base_path.to_owned();
path.push(format!(".media-set-{}", media_set_uuid));
path.set_extension("lck");
let timeout = timeout.unwrap_or(Duration::new(10, 0));
let file = open_file_locked(&path, timeout, true)?;
if cfg!(test) {
// We cannot use chown inside test environment (no permissions)
return Ok(file);
}
let backup_user = crate::backup::backup_user()?;
fchown(file.as_raw_fd(), Some(backup_user.uid), Some(backup_user.gid))?;
Ok(file)
}
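A hedged sketch of the lock ordering described in the module notes above, using the helpers introduced here; the pool name and paths are placeholders:
fn example_locked_update(base_path: &Path, set_uuid: &Uuid) -> Result<(), Error> {
    // take the pool lock before touching media assigned to the pool
    let _pool_lock = lock_media_pool(base_path, "example-pool")?;
    // take the media-set lock before changing set membership or catalogs
    let _set_lock = lock_media_set(base_path, set_uuid, None)?;
    // ... modify the inventory / catalogs while both guards are alive ...
    Ok(())
}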
// shell completion helper // shell completion helper
/// List of known media uuids /// List of known media uuids


@ -116,6 +116,59 @@ impl MediaCatalog {
} }
} }
/// Destroy the media catalog if media_set uuid does not match
pub fn destroy_unrelated_catalog(
base_path: &Path,
media_id: &MediaId,
) -> Result<(), Error> {
let uuid = &media_id.label.uuid;
let mut path = base_path.to_owned();
path.push(uuid.to_string());
path.set_extension("log");
let file = match std::fs::OpenOptions::new().read(true).open(&path) {
Ok(file) => file,
Err(ref err) if err.kind() == std::io::ErrorKind::NotFound => {
return Ok(());
}
Err(err) => return Err(err.into()),
};
let mut file = BufReader::new(file);
let expected_media_set_id = match media_id.media_set_label {
None => {
std::fs::remove_file(path)?;
return Ok(())
},
Some(ref set) => &set.uuid,
};
let (found_magic_number, media_uuid, media_set_uuid) =
Self::parse_catalog_header(&mut file)?;
if !found_magic_number {
return Ok(());
}
if let Some(ref media_uuid) = media_uuid {
if media_uuid != uuid {
std::fs::remove_file(path)?;
return Ok(());
}
}
if let Some(ref media_set_uuid) = media_set_uuid {
if media_set_uuid != expected_media_set_id {
std::fs::remove_file(path)?;
}
}
Ok(())
}
/// Enable/Disable logging to stdout (disabled by default)
pub fn log_to_stdout(&mut self, enable: bool) {
self.log_to_stdout = enable; self.log_to_stdout = enable;
@ -172,7 +225,7 @@ impl MediaCatalog {
pending: Vec::new(), pending: Vec::new(),
}; };
let found_magic_number = me.load_catalog(&mut file, media_id.media_set_label.as_ref())?; let (found_magic_number, _) = me.load_catalog(&mut file, media_id.media_set_label.as_ref())?;
if !found_magic_number { if !found_magic_number {
me.pending.extend(&Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_1); me.pending.extend(&Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_1);
@ -189,6 +242,32 @@ impl MediaCatalog {
Ok(me) Ok(me)
} }
/// Creates a temporary empty catalog file
pub fn create_temporary_database_file(
base_path: &Path,
uuid: &Uuid,
) -> Result<File, Error> {
Self::create_basedir(base_path)?;
let mut tmp_path = base_path.to_owned();
tmp_path.push(uuid.to_string());
tmp_path.set_extension("tmp");
let file = std::fs::OpenOptions::new()
.read(true)
.write(true)
.create(true)
.truncate(true)
.open(&tmp_path)?;
let backup_user = crate::backup::backup_user()?;
fchown(file.as_raw_fd(), Some(backup_user.uid), Some(backup_user.gid))
.map_err(|err| format_err!("fchown failed - {}", err))?;
Ok(file)
}
/// Creates a temporary, empty catalog database
///
/// Creates a new catalog file using a ".tmp" file extension.
@ -206,18 +285,7 @@ impl MediaCatalog {
let me = proxmox::try_block!({ let me = proxmox::try_block!({
Self::create_basedir(base_path)?; let file = Self::create_temporary_database_file(base_path, uuid)?;
let file = std::fs::OpenOptions::new()
.read(true)
.write(true)
.create(true)
.truncate(true)
.open(&tmp_path)?;
let backup_user = crate::backup::backup_user()?;
fchown(file.as_raw_fd(), Some(backup_user.uid), Some(backup_user.gid))
.map_err(|err| format_err!("fchown failed - {}", err))?;
let mut me = Self { let mut me = Self {
uuid: uuid.clone(), uuid: uuid.clone(),
@ -313,6 +381,9 @@ impl MediaCatalog {
/// Conditionally commit if the pending data is large (> 1Mb)
pub fn commit_if_large(&mut self) -> Result<(), Error> {
if self.current_archive.is_some() {
bail!("can't commit catalog in the middle of an chunk archive");
}
if self.pending.len() > 1024*1024 { if self.pending.len() > 1024*1024 {
self.commit()?; self.commit()?;
} }
@ -621,17 +692,65 @@ impl MediaCatalog {
Ok(()) Ok(())
} }
/// Parse the catalog header
pub fn parse_catalog_header<R: Read>(
reader: &mut R,
) -> Result<(bool, Option<Uuid>, Option<Uuid>), Error> {
// read/check magic number
let mut magic = [0u8; 8];
if !reader.read_exact_or_eof(&mut magic)? {
/* EOF */
return Ok((false, None, None));
}
if magic == Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0 {
// only used in unreleased versions
bail!("old catalog format (v1.0) is no longer supported");
}
if magic != Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_1 {
bail!("wrong magic number");
}
let mut entry_type = [0u8; 1];
if !reader.read_exact_or_eof(&mut entry_type)? {
/* EOF */
return Ok((true, None, None));
}
if entry_type[0] != b'L' {
bail!("got unexpected entry type");
}
let entry0: LabelEntry = unsafe { reader.read_le_value()? };
let mut entry_type = [0u8; 1];
if !reader.read_exact_or_eof(&mut entry_type)? {
/* EOF */
return Ok((true, Some(entry0.uuid.into()), None));
}
if entry_type[0] != b'L' {
bail!("got unexpected entry type");
}
let entry1: LabelEntry = unsafe { reader.read_le_value()? };
Ok((true, Some(entry0.uuid.into()), Some(entry1.uuid.into())))
}
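A short sketch of how the new header parser might be used on an existing catalog file (the path is hypothetical):
let mut reader = BufReader::new(std::fs::File::open("/path/to/catalog.log")?);
let (found_magic, media_uuid, media_set_uuid) = MediaCatalog::parse_catalog_header(&mut reader)?;
if found_magic {
    println!("media {:?} belongs to media set {:?}", media_uuid, media_set_uuid);
}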
fn load_catalog( fn load_catalog(
&mut self, &mut self,
file: &mut File, file: &mut File,
media_set_label: Option<&MediaSetLabel>, media_set_label: Option<&MediaSetLabel>,
) -> Result<bool, Error> { ) -> Result<(bool, Option<Uuid>), Error> {
let mut file = BufReader::new(file); let mut file = BufReader::new(file);
let mut found_magic_number = false; let mut found_magic_number = false;
let mut media_set_uuid = None;
loop { loop {
let pos = file.seek(SeekFrom::Current(0))?; let pos = file.seek(SeekFrom::Current(0))?; // get current pos
if pos == 0 { // read/check magic number if pos == 0 { // read/check magic number
let mut magic = [0u8; 8]; let mut magic = [0u8; 8];
@ -741,6 +860,7 @@ impl MediaCatalog {
bail!("got unexpected media set sequence number"); bail!("got unexpected media set sequence number");
} }
} }
media_set_uuid = Some(uuid.clone());
} }
self.last_entry = Some((uuid, file_number)); self.last_entry = Some((uuid, file_number));
@ -752,7 +872,7 @@ impl MediaCatalog {
} }
Ok(found_magic_number) Ok((found_magic_number, media_set_uuid))
} }
} }


@ -8,6 +8,8 @@
//! //!
use std::path::{PathBuf, Path}; use std::path::{PathBuf, Path};
use std::fs::File;
use anyhow::{bail, Error}; use anyhow::{bail, Error};
use ::serde::{Deserialize, Serialize}; use ::serde::{Deserialize, Serialize};
@ -27,6 +29,9 @@ use crate::{
MediaId, MediaId,
MediaSet, MediaSet,
Inventory, Inventory,
lock_media_set,
lock_media_pool,
lock_unassigned_media_pool,
file_formats::{ file_formats::{
MediaLabel, MediaLabel,
MediaSetLabel, MediaSetLabel,
@ -34,9 +39,6 @@ use crate::{
} }
}; };
/// Media Pool lock guard
pub struct MediaPoolLockGuard(std::fs::File);
/// Media Pool /// Media Pool
pub struct MediaPool { pub struct MediaPool {
@ -49,11 +51,16 @@ pub struct MediaPool {
changer_name: Option<String>, changer_name: Option<String>,
force_media_availability: bool, force_media_availability: bool,
// Set this if you do not need to allocate writeable media - this
// is useful for list_media()
no_media_set_locking: bool,
encrypt_fingerprint: Option<Fingerprint>, encrypt_fingerprint: Option<Fingerprint>,
inventory: Inventory, inventory: Inventory,
current_media_set: MediaSet, current_media_set: MediaSet,
current_media_set_lock: Option<File>,
} }
impl MediaPool { impl MediaPool {
@ -72,8 +79,15 @@ impl MediaPool {
retention: RetentionPolicy, retention: RetentionPolicy,
changer_name: Option<String>, changer_name: Option<String>,
encrypt_fingerprint: Option<Fingerprint>, encrypt_fingerprint: Option<Fingerprint>,
no_media_set_locking: bool, // for list_media()
) -> Result<Self, Error> { ) -> Result<Self, Error> {
let _pool_lock = if no_media_set_locking {
None
} else {
Some(lock_media_pool(state_path, name)?)
};
let inventory = Inventory::load(state_path)?; let inventory = Inventory::load(state_path)?;
let current_media_set = match inventory.latest_media_set(name) { let current_media_set = match inventory.latest_media_set(name) {
@ -81,6 +95,12 @@ impl MediaPool {
None => MediaSet::new(), None => MediaSet::new(),
}; };
let current_media_set_lock = if no_media_set_locking {
None
} else {
Some(lock_media_set(state_path, current_media_set.uuid(), None)?)
};
Ok(MediaPool { Ok(MediaPool {
name: String::from(name), name: String::from(name),
state_path: state_path.to_owned(), state_path: state_path.to_owned(),
@ -89,8 +109,10 @@ impl MediaPool {
changer_name, changer_name,
inventory, inventory,
current_media_set, current_media_set,
current_media_set_lock,
encrypt_fingerprint, encrypt_fingerprint,
force_media_availability: false, force_media_availability: false,
no_media_set_locking,
}) })
} }
@ -101,9 +123,9 @@ impl MediaPool {
self.force_media_availability = true; self.force_media_availability = true;
} }
/// Returns the Uuid of the current media set /// Returns the current media set
pub fn current_media_set(&self) -> &Uuid { pub fn current_media_set(&self) -> &MediaSet {
self.current_media_set.uuid() &self.current_media_set
} }
/// Creates a new instance using the media pool configuration /// Creates a new instance using the media pool configuration
@ -111,6 +133,7 @@ impl MediaPool {
state_path: &Path, state_path: &Path,
config: &MediaPoolConfig, config: &MediaPoolConfig,
changer_name: Option<String>, changer_name: Option<String>,
no_media_set_locking: bool, // for list_media()
) -> Result<Self, Error> { ) -> Result<Self, Error> {
let allocation = config.allocation.clone().unwrap_or_else(|| String::from("continue")).parse()?; let allocation = config.allocation.clone().unwrap_or_else(|| String::from("continue")).parse()?;
@ -129,6 +152,7 @@ impl MediaPool {
retention, retention,
changer_name, changer_name,
encrypt_fingerprint, encrypt_fingerprint,
no_media_set_locking,
) )
} }
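A hedged example of the extended constructor call matching the signature above; the state path, config and changer values are placeholders:
let pool = MediaPool::with_config(
    Path::new("/var/lib/proxmox-backup/tape"), // hypothetical state path
    &pool_config,
    Some(String::from("changer0")),            // optional changer name
    false, // take the pool/media-set locks, since we intend to allocate media
)?;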
@ -239,7 +263,18 @@ impl MediaPool {
/// status, so this must not change persistent/saved state.
///
/// Returns the reason why we started a new media set (if we do)
pub fn start_write_session(&mut self, current_time: i64) -> Result<Option<String>, Error> { pub fn start_write_session(
&mut self,
current_time: i64,
) -> Result<Option<String>, Error> {
let _pool_lock = if self.no_media_set_locking {
None
} else {
Some(lock_media_pool(&self.state_path, &self.name)?)
};
self.inventory.reload()?;
let mut create_new_set = match self.current_set_usable() { let mut create_new_set = match self.current_set_usable() {
Err(err) => { Err(err) => {
@ -268,6 +303,14 @@ impl MediaPool {
if create_new_set.is_some() { if create_new_set.is_some() {
let media_set = MediaSet::new(); let media_set = MediaSet::new();
let current_media_set_lock = if self.no_media_set_locking {
None
} else {
Some(lock_media_set(&self.state_path, media_set.uuid(), None)?)
};
self.current_media_set_lock = current_media_set_lock;
self.current_media_set = media_set; self.current_media_set = media_set;
} }
@ -327,6 +370,10 @@ impl MediaPool {
fn add_media_to_current_set(&mut self, mut media_id: MediaId, current_time: i64) -> Result<(), Error> { fn add_media_to_current_set(&mut self, mut media_id: MediaId, current_time: i64) -> Result<(), Error> {
if self.current_media_set_lock.is_none() {
bail!("add_media_to_current_set: media set is not locked - internal error");
}
let seq_nr = self.current_media_set.media_list().len() as u64; let seq_nr = self.current_media_set.media_list().len() as u64;
let pool = self.name.clone(); let pool = self.name.clone();
@ -357,6 +404,10 @@ impl MediaPool {
/// Allocates a writable media to the current media set
pub fn alloc_writable_media(&mut self, current_time: i64) -> Result<Uuid, Error> {
if self.current_media_set_lock.is_none() {
bail!("alloc_writable_media: media set is not locked - internal error");
}
let last_is_writable = self.current_set_usable()?; let last_is_writable = self.current_set_usable()?;
if last_is_writable { if last_is_writable {
@ -367,6 +418,11 @@ impl MediaPool {
// try to find empty media in pool, add to media set // try to find empty media in pool, add to media set
{ // limit pool lock scope
let _pool_lock = lock_media_pool(&self.state_path, &self.name)?;
self.inventory.reload()?;
let media_list = self.list_media(); let media_list = self.list_media();
let mut empty_media = Vec::new(); let mut empty_media = Vec::new();
@ -431,19 +487,26 @@ impl MediaPool {
res res
}); });
if let Some(media) = expired_media.pop() { while let Some(media) = expired_media.pop() {
// check if we can modify the media-set (i.e. skip
// media used by a restore job)
if let Ok(_media_set_lock) = lock_media_set(
&self.state_path,
&media.media_set_label().unwrap().uuid,
Some(std::time::Duration::new(0, 0)), // do not wait
) {
println!("reuse expired media '{}'", media.label_text()); println!("reuse expired media '{}'", media.label_text());
let uuid = media.uuid().clone(); let uuid = media.uuid().clone();
self.add_media_to_current_set(media.into_id(), current_time)?; self.add_media_to_current_set(media.into_id(), current_time)?;
return Ok(uuid); return Ok(uuid);
} }
}
}
println!("no expired media in pool, try to find unassigned/free media"); println!("no expired media in pool, try to find unassigned/free media");
// try unassigned media // try unassigned media
let _lock = lock_unassigned_media_pool(&self.state_path)?;
// lock artificial "__UNASSIGNED__" pool to avoid races
let _lock = MediaPool::lock(&self.state_path, "__UNASSIGNED__")?;
self.inventory.reload()?; self.inventory.reload()?;
@ -563,17 +626,6 @@ impl MediaPool {
self.inventory.generate_media_set_name(media_set_uuid, template) self.inventory.generate_media_set_name(media_set_uuid, template)
} }
/// Lock the pool
pub fn lock(base_path: &Path, name: &str) -> Result<MediaPoolLockGuard, Error> {
let mut path = base_path.to_owned();
path.push(format!(".{}", name));
path.set_extension("lck");
let timeout = std::time::Duration::new(10, 0);
let lock = proxmox::tools::fs::open_file_locked(&path, timeout, true)?;
Ok(MediaPoolLockGuard(lock))
}
} }
/// Backup media /// Backup media


@ -0,0 +1,118 @@
use anyhow::{bail, Error};
use proxmox::tools::Uuid;
use crate::{
tape::{
MediaCatalog,
MediaSetCatalog,
},
};
/// Helper to build and query sets of catalogs
///
/// Similar to MediaSetCatalog, but allows modifying the last catalog.
pub struct CatalogSet {
// read only part
pub media_set_catalog: MediaSetCatalog,
// catalog to modify (latest in set)
pub catalog: Option<MediaCatalog>,
}
impl CatalogSet {
/// Create empty instance
pub fn new() -> Self {
Self {
media_set_catalog: MediaSetCatalog::new(),
catalog: None,
}
}
/// Add catalog to the read-only set
pub fn append_read_only_catalog(&mut self, catalog: MediaCatalog) -> Result<(), Error> {
self.media_set_catalog.append_catalog(catalog)
}
/// Test if the catalog already contains a snapshot
pub fn contains_snapshot(&self, store: &str, snapshot: &str) -> bool {
if let Some(ref catalog) = self.catalog {
if catalog.contains_snapshot(store, snapshot) {
return true;
}
}
self.media_set_catalog.contains_snapshot(store, snapshot)
}
/// Test if the catalog already contains a chunk
pub fn contains_chunk(&self, store: &str, digest: &[u8;32]) -> bool {
if let Some(ref catalog) = self.catalog {
if catalog.contains_chunk(store, digest) {
return true;
}
}
self.media_set_catalog.contains_chunk(store, digest)
}
/// Add a new catalog, move the old on to the read-only set
pub fn append_catalog(&mut self, new_catalog: MediaCatalog) -> Result<(), Error> {
// append current catalog to read-only set
if let Some(catalog) = self.catalog.take() {
self.media_set_catalog.append_catalog(catalog)?;
}
// remove read-only version from set (in case it is there)
self.media_set_catalog.remove_catalog(&new_catalog.uuid());
self.catalog = Some(new_catalog);
Ok(())
}
/// Register a snapshot
pub fn register_snapshot(
&mut self,
uuid: Uuid, // Uuid from MediaContentHeader
file_number: u64,
store: &str,
snapshot: &str,
) -> Result<(), Error> {
match self.catalog {
Some(ref mut catalog) => {
catalog.register_snapshot(uuid, file_number, store, snapshot)?;
}
None => bail!("no catalog loaded - internal error"),
}
Ok(())
}
/// Register a chunk archive
pub fn register_chunk_archive(
&mut self,
uuid: Uuid, // Uuid from MediaContentHeader
file_number: u64,
store: &str,
chunk_list: &[[u8; 32]],
) -> Result<(), Error> {
match self.catalog {
Some(ref mut catalog) => {
catalog.start_chunk_archive(uuid, file_number, store)?;
for digest in chunk_list {
catalog.register_chunk(digest)?;
}
catalog.end_chunk_archive()?;
}
None => bail!("no catalog loaded - internal error"),
}
Ok(())
}
/// Commit the catalog changes
pub fn commit(&mut self) -> Result<(), Error> {
if let Some(ref mut catalog) = self.catalog {
catalog.commit()?;
}
Ok(())
}
}
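A brief sketch of how a CatalogSet might be assembled and queried; the catalog values and the snapshot name are placeholders:
let mut catalogs = CatalogSet::new();
catalogs.append_read_only_catalog(previous_media_catalog)?; // finished media of the set
catalogs.append_catalog(current_media_catalog)?;            // writable catalog (newest medium)
if !catalogs.contains_snapshot("store1", "vm/100/2021-04-02T10:00:00Z") {
    // snapshot not yet on tape - back it up
}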


@ -1,9 +1,15 @@
use std::collections::HashSet; mod catalog_set;
pub use catalog_set::*;
mod new_chunks_iterator;
pub use new_chunks_iterator::*;
use std::path::Path; use std::path::Path;
use std::fs::File;
use std::time::SystemTime; use std::time::SystemTime;
use std::sync::{Arc, Mutex}; use std::sync::{Arc, Mutex};
use anyhow::{bail, format_err, Error}; use anyhow::{bail, Error};
use proxmox::tools::Uuid; use proxmox::tools::Uuid;
@ -11,7 +17,6 @@ use crate::{
task_log, task_log,
backup::{ backup::{
DataStore, DataStore,
DataBlob,
}, },
server::WorkerTask, server::WorkerTask,
tape::{ tape::{
@ -23,11 +28,11 @@ use crate::{
MediaPool, MediaPool,
MediaId, MediaId,
MediaCatalog, MediaCatalog,
MediaSetCatalog,
file_formats::{ file_formats::{
MediaSetLabel, MediaSetLabel,
ChunkArchiveWriter, ChunkArchiveWriter,
tape_write_snapshot_archive, tape_write_snapshot_archive,
tape_write_catalog,
}, },
drive::{ drive::{
TapeDriver, TapeDriver,
@ -39,184 +44,11 @@ use crate::{
config::tape_encryption_keys::load_key_configs, config::tape_encryption_keys::load_key_configs,
}; };
/// Helper to build and query sets of catalogs
pub struct CatalogBuilder {
// read only part
media_set_catalog: MediaSetCatalog,
// catalog to modify (latest in set)
catalog: Option<MediaCatalog>,
}
impl CatalogBuilder {
/// Test if the catalog already contains a snapshot
pub fn contains_snapshot(&self, store: &str, snapshot: &str) -> bool {
if let Some(ref catalog) = self.catalog {
if catalog.contains_snapshot(store, snapshot) {
return true;
}
}
self.media_set_catalog.contains_snapshot(store, snapshot)
}
/// Test if the catalog already contains a chunk
pub fn contains_chunk(&self, store: &str, digest: &[u8;32]) -> bool {
if let Some(ref catalog) = self.catalog {
if catalog.contains_chunk(store, digest) {
return true;
}
}
self.media_set_catalog.contains_chunk(store, digest)
}
/// Add a new catalog, move the old on to the read-only set
pub fn append_catalog(&mut self, new_catalog: MediaCatalog) -> Result<(), Error> {
// append current catalog to read-only set
if let Some(catalog) = self.catalog.take() {
self.media_set_catalog.append_catalog(catalog)?;
}
// remove read-only version from set (in case it is there)
self.media_set_catalog.remove_catalog(&new_catalog.uuid());
self.catalog = Some(new_catalog);
Ok(())
}
/// Register a snapshot
pub fn register_snapshot(
&mut self,
uuid: Uuid, // Uuid form MediaContentHeader
file_number: u64,
store: &str,
snapshot: &str,
) -> Result<(), Error> {
match self.catalog {
Some(ref mut catalog) => {
catalog.register_snapshot(uuid, file_number, store, snapshot)?;
}
None => bail!("no catalog loaded - internal error"),
}
Ok(())
}
/// Register a chunk archive
pub fn register_chunk_archive(
&mut self,
uuid: Uuid, // Uuid form MediaContentHeader
file_number: u64,
store: &str,
chunk_list: &[[u8; 32]],
) -> Result<(), Error> {
match self.catalog {
Some(ref mut catalog) => {
catalog.start_chunk_archive(uuid, file_number, store)?;
for digest in chunk_list {
catalog.register_chunk(digest)?;
}
catalog.end_chunk_archive()?;
}
None => bail!("no catalog loaded - internal error"),
}
Ok(())
}
/// Commit the catalog changes
pub fn commit(&mut self) -> Result<(), Error> {
if let Some(ref mut catalog) = self.catalog {
catalog.commit()?;
}
Ok(())
}
}
/// Chunk iterator which use a separate thread to read chunks
///
/// The iterator skips duplicate chunks and chunks already in the
/// catalog.
pub struct NewChunksIterator {
rx: std::sync::mpsc::Receiver<Result<Option<([u8; 32], DataBlob)>, Error>>,
}
impl NewChunksIterator {
/// Creates the iterator, spawning a new thread
///
/// Make sure to join() the returnd thread handle.
pub fn spawn(
datastore: Arc<DataStore>,
snapshot_reader: Arc<Mutex<SnapshotReader>>,
catalog_builder: Arc<Mutex<CatalogBuilder>>,
) -> Result<(std::thread::JoinHandle<()>, Self), Error> {
let (tx, rx) = std::sync::mpsc::sync_channel(3);
let reader_thread = std::thread::spawn(move || {
let snapshot_reader = snapshot_reader.lock().unwrap();
let mut chunk_index: HashSet<[u8;32]> = HashSet::new();
let datastore_name = snapshot_reader.datastore_name();
let result: Result<(), Error> = proxmox::try_block!({
let mut chunk_iter = snapshot_reader.chunk_iterator()?;
loop {
let digest = match chunk_iter.next() {
None => {
tx.send(Ok(None)).unwrap();
break;
}
Some(digest) => digest?,
};
if chunk_index.contains(&digest) {
continue;
}
if catalog_builder.lock().unwrap().contains_chunk(&datastore_name, &digest) {
continue;
};
let blob = datastore.load_chunk(&digest)?;
//println!("LOAD CHUNK {}", proxmox::tools::digest_to_hex(&digest));
tx.send(Ok(Some((digest, blob)))).unwrap();
chunk_index.insert(digest);
}
Ok(())
});
if let Err(err) = result {
tx.send(Err(err)).unwrap();
}
});
Ok((reader_thread, Self { rx }))
}
}
// We do not use Receiver::into_iter(). The manual implementation
// returns a simpler type.
impl Iterator for NewChunksIterator {
type Item = Result<([u8; 32], DataBlob), Error>;
fn next(&mut self) -> Option<Self::Item> {
match self.rx.recv() {
Ok(Ok(None)) => None,
Ok(Ok(Some((digest, blob)))) => Some(Ok((digest, blob))),
Ok(Err(err)) => Some(Err(err)),
Err(_) => Some(Err(format_err!("reader thread failed"))),
}
}
}
struct PoolWriterState { struct PoolWriterState {
drive: Box<dyn TapeDriver>, drive: Box<dyn TapeDriver>,
// Media Uuid from loaded media
media_uuid: Uuid,
// tell if we already moved to EOM
at_eom: bool,
// bytes written after the last tape flush/sync
@ -228,13 +60,18 @@ pub struct PoolWriter {
pool: MediaPool, pool: MediaPool,
drive_name: String, drive_name: String,
status: Option<PoolWriterState>, status: Option<PoolWriterState>,
catalog_builder: Arc<Mutex<CatalogBuilder>>, catalog_set: Arc<Mutex<CatalogSet>>,
notify_email: Option<String>, notify_email: Option<String>,
} }
impl PoolWriter { impl PoolWriter {
pub fn new(mut pool: MediaPool, drive_name: &str, worker: &WorkerTask, notify_email: Option<String>) -> Result<Self, Error> { pub fn new(
mut pool: MediaPool,
drive_name: &str,
worker: &WorkerTask,
notify_email: Option<String>,
) -> Result<Self, Error> {
let current_time = proxmox::tools::time::epoch_i64(); let current_time = proxmox::tools::time::epoch_i64();
@ -247,9 +84,10 @@ impl PoolWriter {
); );
} }
task_log!(worker, "media set uuid: {}", pool.current_media_set()); let media_set_uuid = pool.current_media_set().uuid();
task_log!(worker, "media set uuid: {}", media_set_uuid);
let mut media_set_catalog = MediaSetCatalog::new(); let mut catalog_set = CatalogSet::new();
// load all catalogs read-only at start // load all catalogs read-only at start
for media_uuid in pool.current_media_list()? { for media_uuid in pool.current_media_list()? {
@ -260,16 +98,14 @@ impl PoolWriter {
false, false,
false, false,
)?; )?;
media_set_catalog.append_catalog(media_catalog)?; catalog_set.append_read_only_catalog(media_catalog)?;
} }
let catalog_builder = CatalogBuilder { media_set_catalog, catalog: None };
Ok(Self { Ok(Self {
pool, pool,
drive_name: drive_name.to_string(), drive_name: drive_name.to_string(),
status: None, status: None,
catalog_builder: Arc::new(Mutex::new(catalog_builder)), catalog_set: Arc::new(Mutex::new(catalog_set)),
notify_email, notify_email,
}) })
} }
@ -285,7 +121,7 @@ impl PoolWriter {
} }
pub fn contains_snapshot(&self, store: &str, snapshot: &str) -> bool { pub fn contains_snapshot(&self, store: &str, snapshot: &str) -> bool {
self.catalog_builder.lock().unwrap().contains_snapshot(store, snapshot) self.catalog_set.lock().unwrap().contains_snapshot(store, snapshot)
} }
/// Eject media and drop PoolWriterState (close drive)
@ -354,14 +190,14 @@ impl PoolWriter {
if let Some(PoolWriterState {ref mut drive, .. }) = self.status { if let Some(PoolWriterState {ref mut drive, .. }) = self.status {
drive.sync()?; // sync all data to the tape drive.sync()?; // sync all data to the tape
} }
self.catalog_builder.lock().unwrap().commit()?; // then commit the catalog self.catalog_set.lock().unwrap().commit()?; // then commit the catalog
Ok(()) Ok(())
} }
/// Load a writable media into the drive
pub fn load_writable_media(&mut self, worker: &WorkerTask) -> Result<Uuid, Error> {
let last_media_uuid = match self.catalog_builder.lock().unwrap().catalog { let last_media_uuid = match self.status {
Some(ref catalog) => Some(catalog.uuid().clone()), Some(PoolWriterState { ref media_uuid, ..}) => Some(media_uuid.clone()),
None => None, None => None,
}; };
@ -404,14 +240,14 @@ impl PoolWriter {
} }
} }
let catalog = update_media_set_label( let (catalog, is_new_media) = update_media_set_label(
worker, worker,
drive.as_mut(), drive.as_mut(),
old_media_id.media_set_label, old_media_id.media_set_label,
media.id(), media.id(),
)?; )?;
self.catalog_builder.lock().unwrap().append_catalog(catalog)?; self.catalog_set.lock().unwrap().append_catalog(catalog)?;
let media_set = media.media_set_label().clone().unwrap(); let media_set = media.media_set_label().clone().unwrap();
@ -422,11 +258,162 @@ impl PoolWriter {
drive.set_encryption(encrypt_fingerprint)?; drive.set_encryption(encrypt_fingerprint)?;
self.status = Some(PoolWriterState { drive, at_eom: false, bytes_written: 0 }); self.status = Some(PoolWriterState {
drive,
media_uuid: media_uuid.clone(),
at_eom: false,
bytes_written: 0,
});
if is_new_media {
// add catalogs from previous media
self.append_media_set_catalogs(worker)?;
}
Ok(media_uuid) Ok(media_uuid)
} }
fn open_catalog_file(uuid: &Uuid) -> Result<File, Error> {
let status_path = Path::new(TAPE_STATUS_DIR);
let mut path = status_path.to_owned();
path.push(uuid.to_string());
path.set_extension("log");
let file = std::fs::OpenOptions::new()
.read(true)
.open(&path)?;
Ok(file)
}
// Check if a tape is loaded, then move to EOM (if not already there)
//
// Returns the tape position at EOM.
fn prepare_tape_write(
status: &mut PoolWriterState,
worker: &WorkerTask,
) -> Result<u64, Error> {
if !status.at_eom {
worker.log(String::from("moving to end of media"));
status.drive.move_to_eom()?;
status.at_eom = true;
}
let current_file_number = status.drive.current_file_number()?;
if current_file_number < 2 {
bail!("got strange file position number from drive ({})", current_file_number);
}
Ok(current_file_number)
}
/// Move to EOM (if not already there), then write the current
/// catalog to the tape. On success, this returns 'Ok(true)'.
/// Please note that this may fail when there is not enough space
/// on the media (return value 'Ok(false)'). In that case, the
/// archive is marked incomplete. The caller should mark the media
/// as full and try again using another media.
pub fn append_catalog_archive(
&mut self,
worker: &WorkerTask,
) -> Result<bool, Error> {
let status = match self.status {
Some(ref mut status) => status,
None => bail!("PoolWriter - no media loaded"),
};
Self::prepare_tape_write(status, worker)?;
let catalog_set = self.catalog_set.lock().unwrap();
let catalog = match catalog_set.catalog {
None => bail!("append_catalog_archive failed: no catalog - internal error"),
Some(ref catalog) => catalog,
};
let media_set = self.pool.current_media_set();
let media_list = media_set.media_list();
let uuid = match media_list.last() {
None => bail!("got empty media list - internal error"),
Some(None) => bail!("got incomplete media list - internal error"),
Some(Some(last_uuid)) => {
if last_uuid != catalog.uuid() {
bail!("got wrong media - internal error");
}
last_uuid
}
};
let seq_nr = media_list.len() - 1;
let mut writer: Box<dyn TapeWrite> = status.drive.write_file()?;
let mut file = Self::open_catalog_file(uuid)?;
let done = tape_write_catalog(
writer.as_mut(),
uuid,
media_set.uuid(),
seq_nr,
&mut file,
)?.is_some();
Ok(done)
}
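A hedged caller sketch for the "did not fit" case described in the doc comment above; the surrounding retry logic is only indicated:
let done = pool_writer.append_catalog_archive(&worker)?;
if !done {
    // catalog was only partially written (LEOM) - the caller should mark the
    // current medium as full and write the catalog again on the next medium
}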
// Append catalogs for all previous media in set (without last)
fn append_media_set_catalogs(
&mut self,
worker: &WorkerTask,
) -> Result<(), Error> {
let media_set = self.pool.current_media_set();
let mut media_list = &media_set.media_list()[..];
if media_list.len() < 2 {
return Ok(());
}
media_list = &media_list[..(media_list.len()-1)];
let status = match self.status {
Some(ref mut status) => status,
None => bail!("PoolWriter - no media loaded"),
};
Self::prepare_tape_write(status, worker)?;
for (seq_nr, uuid) in media_list.iter().enumerate() {
let uuid = match uuid {
None => bail!("got incomplete media list - internal error"),
Some(uuid) => uuid,
};
let mut writer: Box<dyn TapeWrite> = status.drive.write_file()?;
let mut file = Self::open_catalog_file(uuid)?;
task_log!(worker, "write catalog for previous media: {}", uuid);
if tape_write_catalog(
writer.as_mut(),
uuid,
media_set.uuid(),
seq_nr,
&mut file,
)?.is_none() {
bail!("got EOM while writing start catalog");
}
}
Ok(())
}
/// Move to EOM (if not already there), then create a new snapshot
/// archive writing the specified files (as .pxar) into it. On
/// success, this returns 'Ok(true)' and the media catalog gets
@ -448,23 +435,14 @@ impl PoolWriter {
None => bail!("PoolWriter - no media loaded"), None => bail!("PoolWriter - no media loaded"),
}; };
if !status.at_eom { let current_file_number = Self::prepare_tape_write(status, worker)?;
worker.log(String::from("moving to end of media"));
status.drive.move_to_eom()?;
status.at_eom = true;
}
let current_file_number = status.drive.current_file_number()?;
if current_file_number < 2 {
bail!("got strange file position number from drive ({})", current_file_number);
}
let (done, bytes_written) = { let (done, bytes_written) = {
let mut writer: Box<dyn TapeWrite> = status.drive.write_file()?; let mut writer: Box<dyn TapeWrite> = status.drive.write_file()?;
match tape_write_snapshot_archive(writer.as_mut(), snapshot_reader)? { match tape_write_snapshot_archive(writer.as_mut(), snapshot_reader)? {
Some(content_uuid) => { Some(content_uuid) => {
self.catalog_builder.lock().unwrap().register_snapshot( self.catalog_set.lock().unwrap().register_snapshot(
content_uuid, content_uuid,
current_file_number, current_file_number,
&snapshot_reader.datastore_name().to_string(), &snapshot_reader.datastore_name().to_string(),
@ -503,16 +481,8 @@ impl PoolWriter {
None => bail!("PoolWriter - no media loaded"), None => bail!("PoolWriter - no media loaded"),
}; };
if !status.at_eom { let current_file_number = Self::prepare_tape_write(status, worker)?;
worker.log(String::from("moving to end of media"));
status.drive.move_to_eom()?;
status.at_eom = true;
}
let current_file_number = status.drive.current_file_number()?;
if current_file_number < 2 {
bail!("got strange file position number from drive ({})", current_file_number);
}
let writer = status.drive.write_file()?; let writer = status.drive.write_file()?;
let start_time = SystemTime::now(); let start_time = SystemTime::now();
@ -538,7 +508,7 @@ impl PoolWriter {
let request_sync = status.bytes_written >= COMMIT_BLOCK_SIZE; let request_sync = status.bytes_written >= COMMIT_BLOCK_SIZE;
// register chunks in media_catalog // register chunks in media_catalog
self.catalog_builder.lock().unwrap() self.catalog_set.lock().unwrap()
.register_chunk_archive(content_uuid, current_file_number, store, &saved_chunks)?; .register_chunk_archive(content_uuid, current_file_number, store, &saved_chunks)?;
if leom || request_sync { if leom || request_sync {
@ -556,7 +526,7 @@ impl PoolWriter {
NewChunksIterator::spawn( NewChunksIterator::spawn(
datastore, datastore,
snapshot_reader, snapshot_reader,
Arc::clone(&self.catalog_builder), Arc::clone(&self.catalog_set),
) )
} }
} }
@ -618,7 +588,7 @@ fn update_media_set_label(
drive: &mut dyn TapeDriver, drive: &mut dyn TapeDriver,
old_set: Option<MediaSetLabel>, old_set: Option<MediaSetLabel>,
media_id: &MediaId, media_id: &MediaId,
) -> Result<MediaCatalog, Error> { ) -> Result<(MediaCatalog, bool), Error> {
let media_catalog; let media_catalog;
@ -641,11 +611,12 @@ fn update_media_set_label(
let status_path = Path::new(TAPE_STATUS_DIR); let status_path = Path::new(TAPE_STATUS_DIR);
match old_set { let new_media = match old_set {
None => { None => {
worker.log("wrinting new media set label".to_string()); worker.log("wrinting new media set label".to_string());
drive.write_media_set_label(new_set, key_config.as_ref())?; drive.write_media_set_label(new_set, key_config.as_ref())?;
media_catalog = MediaCatalog::overwrite(status_path, media_id, false)?; media_catalog = MediaCatalog::overwrite(status_path, media_id, false)?;
true
} }
Some(media_set_label) => { Some(media_set_label) => {
if new_set.uuid == media_set_label.uuid { if new_set.uuid == media_set_label.uuid {
@ -657,6 +628,10 @@ fn update_media_set_label(
bail!("detected changed encryption fingerprint - internal error"); bail!("detected changed encryption fingerprint - internal error");
} }
media_catalog = MediaCatalog::open(status_path, &media_id, true, false)?; media_catalog = MediaCatalog::open(status_path, &media_id, true, false)?;
// todo: verify last content/media_catalog somehow?
false
} else { } else {
worker.log( worker.log(
format!("wrinting new media set label (overwrite '{}/{}')", format!("wrinting new media set label (overwrite '{}/{}')",
@ -665,11 +640,10 @@ fn update_media_set_label(
drive.write_media_set_label(new_set, key_config.as_ref())?; drive.write_media_set_label(new_set, key_config.as_ref())?;
media_catalog = MediaCatalog::overwrite(status_path, media_id, false)?; media_catalog = MediaCatalog::overwrite(status_path, media_id, false)?;
true
} }
} }
} };
// todo: verify last content/media_catalog somehow? Ok((media_catalog, new_media))
Ok(media_catalog)
} }


@ -0,0 +1,99 @@
use std::collections::HashSet;
use std::sync::{Arc, Mutex};
use anyhow::{format_err, Error};
use crate::{
backup::{
DataStore,
DataBlob,
},
tape::{
CatalogSet,
SnapshotReader,
},
};
/// Chunk iterator which uses a separate thread to read chunks
///
/// The iterator skips duplicate chunks and chunks already in the
/// catalog.
pub struct NewChunksIterator {
rx: std::sync::mpsc::Receiver<Result<Option<([u8; 32], DataBlob)>, Error>>,
}
impl NewChunksIterator {
/// Creates the iterator, spawning a new thread
///
/// Make sure to join() the returned thread handle.
pub fn spawn(
datastore: Arc<DataStore>,
snapshot_reader: Arc<Mutex<SnapshotReader>>,
catalog_set: Arc<Mutex<CatalogSet>>,
) -> Result<(std::thread::JoinHandle<()>, Self), Error> {
let (tx, rx) = std::sync::mpsc::sync_channel(3);
let reader_thread = std::thread::spawn(move || {
let snapshot_reader = snapshot_reader.lock().unwrap();
let mut chunk_index: HashSet<[u8;32]> = HashSet::new();
let datastore_name = snapshot_reader.datastore_name();
let result: Result<(), Error> = proxmox::try_block!({
let mut chunk_iter = snapshot_reader.chunk_iterator()?;
loop {
let digest = match chunk_iter.next() {
None => {
tx.send(Ok(None)).unwrap();
break;
}
Some(digest) => digest?,
};
if chunk_index.contains(&digest) {
continue;
}
if catalog_set.lock().unwrap().contains_chunk(&datastore_name, &digest) {
continue;
};
let blob = datastore.load_chunk(&digest)?;
//println!("LOAD CHUNK {}", proxmox::tools::digest_to_hex(&digest));
tx.send(Ok(Some((digest, blob)))).unwrap();
chunk_index.insert(digest);
}
Ok(())
});
if let Err(err) = result {
tx.send(Err(err)).unwrap();
}
});
Ok((reader_thread, Self { rx }))
}
}
// We do not use Receiver::into_iter(). The manual implementation
// returns a simpler type.
impl Iterator for NewChunksIterator {
type Item = Result<([u8; 32], DataBlob), Error>;
fn next(&mut self) -> Option<Self::Item> {
match self.rx.recv() {
Ok(Ok(None)) => None,
Ok(Ok(Some((digest, blob)))) => Some(Ok((digest, blob))),
Ok(Err(err)) => Some(Err(err)),
Err(_) => Some(Err(format_err!("reader thread failed"))),
}
}
}
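A usage sketch under the assumption that datastore, snapshot_reader and catalog_set are already set up as shared handles:
let (reader_thread, chunk_iter) = NewChunksIterator::spawn(
    Arc::clone(&datastore),
    Arc::clone(&snapshot_reader),
    Arc::clone(&catalog_set),
)?;
for item in chunk_iter {
    let (digest, blob) = item?;
    // write the chunk (digest, blob) to tape ...
}
reader_thread.join().unwrap(); // always join the reader thread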


@ -42,6 +42,7 @@ fn test_alloc_writable_media_1() -> Result<(), Error> {
RetentionPolicy::KeepForever, RetentionPolicy::KeepForever,
None, None,
None, None,
false,
)?; )?;
ctime += 10; ctime += 10;
@ -71,6 +72,7 @@ fn test_alloc_writable_media_2() -> Result<(), Error> {
RetentionPolicy::KeepForever, RetentionPolicy::KeepForever,
None, None,
None, None,
false,
)?; )?;
let ctime = 10; let ctime = 10;
@ -110,6 +112,7 @@ fn test_alloc_writable_media_3() -> Result<(), Error> {
RetentionPolicy::KeepForever, RetentionPolicy::KeepForever,
None, None,
None, None,
false,
)?; )?;
let mut ctime = 10; let mut ctime = 10;
@ -156,6 +159,7 @@ fn test_alloc_writable_media_4() -> Result<(), Error> {
RetentionPolicy::ProtectFor(parse_time_span("12s")?), RetentionPolicy::ProtectFor(parse_time_span("12s")?),
None, None,
None, None,
false,
)?; )?;
let start_time = 10; let start_time = 10;


@ -69,6 +69,7 @@ fn test_compute_media_state() -> Result<(), Error> {
RetentionPolicy::KeepForever, RetentionPolicy::KeepForever,
None, None,
None, None,
false,
)?; )?;
// tape1 is free // tape1 is free
@ -116,6 +117,7 @@ fn test_media_expire_time() -> Result<(), Error> {
RetentionPolicy::ProtectFor(span), RetentionPolicy::ProtectFor(span),
None, None,
None, None,
false,
)?; )?;
assert_eq!(pool.lookup_media(&tape0_uuid)?.status(), &MediaStatus::Full); assert_eq!(pool.lookup_media(&tape0_uuid)?.status(), &MediaStatus::Full);


@ -49,6 +49,7 @@ fn test_current_set_usable_1() -> Result<(), Error> {
RetentionPolicy::KeepForever,
None,
None,
+ false,
)?;
assert_eq!(pool.current_set_usable()?, false);
@ -75,6 +76,7 @@ fn test_current_set_usable_2() -> Result<(), Error> {
RetentionPolicy::KeepForever,
None,
None,
+ false,
)?;
assert_eq!(pool.current_set_usable()?, false);
@ -103,6 +105,7 @@ fn test_current_set_usable_3() -> Result<(), Error> {
RetentionPolicy::KeepForever,
Some(String::from("changer1")),
None,
+ false,
)?;
assert_eq!(pool.current_set_usable()?, false);
@ -131,6 +134,7 @@ fn test_current_set_usable_4() -> Result<(), Error> {
RetentionPolicy::KeepForever,
None,
None,
+ false,
)?;
assert_eq!(pool.current_set_usable()?, true);
@ -161,6 +165,7 @@ fn test_current_set_usable_5() -> Result<(), Error> {
RetentionPolicy::KeepForever,
None,
None,
+ false,
)?;
assert_eq!(pool.current_set_usable()?, true);
@ -189,6 +194,7 @@ fn test_current_set_usable_6() -> Result<(), Error> {
RetentionPolicy::KeepForever,
None,
None,
+ false,
)?;
assert!(pool.current_set_usable().is_err());
@ -223,6 +229,7 @@ fn test_current_set_usable_7() -> Result<(), Error> {
RetentionPolicy::KeepForever,
None,
None,
+ false,
)?;
assert!(pool.current_set_usable().is_err());


@ -57,6 +57,9 @@ pub use async_channel_writer::AsyncChannelWriter;
mod std_channel_writer;
pub use std_channel_writer::StdChannelWriter;
+ mod tokio_writer_adapter;
+ pub use tokio_writer_adapter::TokioWriterAdapter;
mod process_locker;
pub use process_locker::{ProcessLocker, ProcessLockExclusiveGuard, ProcessLockSharedGuard};


@ -44,32 +44,38 @@ impl ToString for SenseInfo {
}
#[derive(Debug)]
- pub struct ScsiError {
- pub error: Error,
- pub sense: Option<SenseInfo>,
+ pub enum ScsiError {
+ Error(Error),
+ Sense(SenseInfo),
}
impl std::fmt::Display for ScsiError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
- write!(f, "{}", self.error)
+ match self {
+ ScsiError::Error(err) => write!(f, "{}", err),
+ ScsiError::Sense(sense) => write!(f, "{}", sense.to_string()),
+ }
}
}
impl std::error::Error for ScsiError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
- self.error.source()
+ match self {
+ ScsiError::Error(err) => err.source(),
+ ScsiError::Sense(_) => None,
+ }
}
}
impl From<anyhow::Error> for ScsiError {
fn from(error: anyhow::Error) -> Self {
- Self { error, sense: None }
+ Self::Error(error)
}
}
impl From<std::io::Error> for ScsiError {
fn from(error: std::io::Error) -> Self {
- Self { error: error.into(), sense: None }
+ Self::Error(error.into())
}
}
@ -483,10 +489,7 @@ impl <'a, F: AsRawFd> SgRaw<'a, F> {
}
};
- return Err(ScsiError {
- error: format_err!("{}", sense.to_string()),
- sense: Some(sense),
- });
+ return Err(ScsiError::Sense(sense));
}
SCSI_PT_RESULT_TRANSPORT_ERR => return Err(format_err!("scsi command failed: transport error").into()),
SCSI_PT_RESULT_OS_ERR => {
@ -506,7 +509,7 @@ impl <'a, F: AsRawFd> SgRaw<'a, F> {
}
if self.buffer.len() < 16 {
- return Err(format_err!("output buffer too small").into());
+ return Err(format_err!("input buffer too small").into());
}
let mut ptvp = self.create_scsi_pt_obj()?;
@ -530,6 +533,45 @@ impl <'a, F: AsRawFd> SgRaw<'a, F> {
Ok(&self.buffer[..data_len])
}
/// Run the specified RAW SCSI command, use data as input buffer
pub fn do_in_command<'b>(&mut self, cmd: &[u8], data: &'b mut [u8]) -> Result<&'b [u8], ScsiError> {
if !unsafe { sg_is_scsi_cdb(cmd.as_ptr(), cmd.len() as c_int) } {
return Err(format_err!("no valid SCSI command").into());
}
if data.len() == 0 {
return Err(format_err!("got zero-sized input buffer").into());
}
let mut ptvp = self.create_scsi_pt_obj()?;
unsafe {
set_scsi_pt_data_in(
ptvp.as_mut_ptr(),
data.as_mut_ptr(),
data.len() as c_int,
);
set_scsi_pt_cdb(
ptvp.as_mut_ptr(),
cmd.as_ptr(),
cmd.len() as c_int,
);
};
self.do_scsi_pt_checked(&mut ptvp)?;
let resid = unsafe { get_scsi_pt_resid(ptvp.as_ptr()) } as usize;
if resid > data.len() {
return Err(format_err!("do_scsi_pt failed - got strange resid (value too big)").into());
}
let data_len = data.len() - resid;
Ok(&data[..data_len])
}
/// Run dataout command
///
/// Note: use alloc_page_aligned_buffer to alloc data transfer buffer
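
A small sketch of how the new enum can be unpacked by callers (the helper name is hypothetical, not from this changeset):

// Assumed helper: collapse a ScsiError back into a plain anyhow::Error while
// keeping the decoded sense data readable in the message.
fn scsi_error_to_anyhow(err: ScsiError) -> Error {
    match err {
        ScsiError::Sense(sense) => format_err!("SCSI sense data: {}", sense.to_string()),
        ScsiError::Error(err) => err,
    }
}

Compared to the old struct, callers can now match on ScsiError::Sense directly instead of probing an Option<SenseInfo> field, and do_in_command hands back the prefix of the caller-provided buffer that was actually filled (data.len() minus the residual count).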


@ -141,6 +141,88 @@ impl From<TimeSpan> for f64 {
}
}
impl From<std::time::Duration> for TimeSpan {
fn from(duration: std::time::Duration) -> Self {
let mut duration = duration.as_nanos();
let nsec = (duration % 1000) as u64;
duration /= 1000;
let usec = (duration % 1000) as u64;
duration /= 1000;
let msec = (duration % 1000) as u64;
duration /= 1000;
let seconds = (duration % 60) as u64;
duration /= 60;
let minutes = (duration % 60) as u64;
duration /= 60;
let hours = (duration % 24) as u64;
duration /= 24;
let years = (duration as f64 / 365.25) as u64;
let ydays = (duration as f64 % 365.25) as u64;
let months = (ydays as f64 / 30.44) as u64;
let mdays = (ydays as f64 % 30.44) as u64;
let weeks = mdays / 7;
let days = mdays % 7;
Self {
nsec,
usec,
msec,
seconds,
minutes,
hours,
days,
weeks,
months,
years,
}
}
}
impl std::fmt::Display for TimeSpan {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> {
let mut first = true;
{ // block scope for mutable borrows
let mut do_write = |v: u64, unit: &str| -> Result<(), std::fmt::Error> {
if !first {
write!(f, " ")?;
}
first = false;
write!(f, "{}{}", v, unit)
};
if self.years > 0 {
do_write(self.years, "y")?;
}
if self.months > 0 {
do_write(self.months, "m")?;
}
if self.weeks > 0 {
do_write(self.weeks, "w")?;
}
if self.days > 0 {
do_write(self.days, "d")?;
}
if self.hours > 0 {
do_write(self.hours, "h")?;
}
if self.minutes > 0 {
do_write(self.minutes, "min")?;
}
}
if !first {
write!(f, " ")?;
}
let seconds = self.seconds as f64 + (self.msec as f64 / 1000.0);
if seconds >= 0.1 {
if seconds >= 1.0 || !first {
write!(f, "{:.0}s", seconds)?;
} else {
write!(f, "{:.1}s", seconds)?;
}
} else if first {
write!(f, "<0.1s")?;
}
Ok(())
}
}
pub fn verify_time_span(i: &str) -> Result<(), Error> {
parse_time_span(i)?;
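
A quick, test-style sketch of the two new impls in action (illustrative, not part of the changeset; the expected string follows from tracing the Display rules above):

#[test]
fn timespan_from_duration_roundtrip() {
    // 90_061 seconds = 1 day + 1 hour + 1 minute + 1 second
    let span: TimeSpan = std::time::Duration::from_secs(90_061).into();
    assert_eq!(span.to_string(), "1d 1h 1min 1s");
}

Months and years are approximated via the 30.44 and 365.25 day averages used in the From impl, so the rendering is meant for human-readable durations rather than exact calendar math.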


@ -0,0 +1,26 @@
use std::io::Write;
use tokio::task::block_in_place;
/// Wrapper around a writer which implements Write
///
/// Wraps each write with a 'block_in_place' so that
/// any (blocking) writer can be safely used in an async context in a
/// tokio runtime.
pub struct TokioWriterAdapter<W: Write>(W);
impl<W: Write> TokioWriterAdapter<W> {
pub fn new(writer: W) -> Self {
Self(writer)
}
}
impl<W: Write> Write for TokioWriterAdapter<W> {
fn write(&mut self, buf: &[u8]) -> Result<usize, std::io::Error> {
block_in_place(|| self.0.write(buf))
}
fn flush(&mut self) -> Result<(), std::io::Error> {
block_in_place(|| self.0.flush())
}
}
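
A minimal usage sketch, assuming a multi-threaded tokio runtime (block_in_place panics on the current-thread flavor); the file path and function name are placeholders:

use std::fs::File;
use std::io::Write;

async fn write_report(data: &[u8]) -> Result<(), std::io::Error> {
    // File::create and the writes below are blocking; the adapter runs each
    // Write call via block_in_place so it is safe to issue from async code.
    let file = block_in_place(|| File::create("/tmp/report.bin"))?;
    let mut writer = TokioWriterAdapter::new(file);
    writer.write_all(data)?;
    writer.flush()
}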


@ -3,6 +3,10 @@ const proxmoxOnlineHelpInfo = {
"link": "/docs/index.html", "link": "/docs/index.html",
"title": "Proxmox Backup Server Documentation Index" "title": "Proxmox Backup Server Documentation Index"
}, },
"client-repository": {
"link": "/docs/backup-client.html#client-repository",
"title": "Repository Locations"
},
"client-creating-backups": { "client-creating-backups": {
"link": "/docs/backup-client.html#client-creating-backups", "link": "/docs/backup-client.html#client-creating-backups",
"title": "Creating Backups" "title": "Creating Backups"


@ -369,30 +369,30 @@ Ext.define('PBS.Utils', {
// do whatever you want here
Proxmox.Utils.override_task_descriptions({
backup: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Backup')),
- "tape-backup": (type, id) => PBS.Utils.render_tape_backup_id(id, gettext('Tape Backup')),
- "tape-backup-job": (type, id) => PBS.Utils.render_tape_backup_id(id, gettext('Tape Backup Job')),
- "tape-restore": ['Datastore', gettext('Tape Restore')],
- "barcode-label-media": [gettext('Drive'), gettext('Barcode label media')],
+ 'barcode-label-media': [gettext('Drive'), gettext('Barcode-Label Media')],
+ 'catalog-media': [gettext('Drive'), gettext('Catalog Media')],
dircreate: [gettext('Directory Storage'), gettext('Create')],
dirremove: [gettext('Directory'), gettext('Remove')],
- "load-media": (type, id) => PBS.Utils.render_drive_load_media_id(id, gettext('Load media')),
- "unload-media": [gettext('Drive'), gettext('Unload media')],
- "eject-media": [gettext('Drive'), gettext('Eject media')],
- "erase-media": [gettext('Drive'), gettext('Erase media')],
- garbage_collection: ['Datastore', gettext('Garbage collect')],
- "inventory-update": [gettext('Drive'), gettext('Inventory update')],
- "label-media": [gettext('Drive'), gettext('Label media')],
- "catalog-media": [gettext('Drive'), gettext('Catalog media')],
+ 'eject-media': [gettext('Drive'), gettext('Eject Media')],
+ 'erase-media': [gettext('Drive'), gettext('Erase Media')],
+ garbage_collection: ['Datastore', gettext('Garbage Collect')],
+ 'inventory-update': [gettext('Drive'), gettext('Inventory Update')],
+ 'label-media': [gettext('Drive'), gettext('Label Media')],
+ 'load-media': (type, id) => PBS.Utils.render_drive_load_media_id(id, gettext('Load Media')),
logrotate: [null, gettext('Log Rotation')],
prune: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Prune')),
- reader: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Read objects')),
+ reader: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Read Objects')),
- "rewind-media": [gettext('Drive'), gettext('Rewind media')],
+ 'rewind-media': [gettext('Drive'), gettext('Rewind Media')],
sync: ['Datastore', gettext('Remote Sync')],
syncjob: [gettext('Sync Job'), gettext('Remote Sync')],
+ 'tape-backup': (type, id) => PBS.Utils.render_tape_backup_id(id, gettext('Tape Backup')),
+ 'tape-backup-job': (type, id) => PBS.Utils.render_tape_backup_id(id, gettext('Tape Backup Job')),
+ 'tape-restore': ['Datastore', gettext('Tape Restore')],
+ 'unload-media': [gettext('Drive'), gettext('Unload Media')],
+ verificationjob: [gettext('Verify Job'), gettext('Scheduled Verification')],
verify: ['Datastore', gettext('Verification')],
verify_group: ['Group', gettext('Verification')],
verify_snapshot: ['Snapshot', gettext('Verification')],
- verificationjob: [gettext('Verify Job'), gettext('Scheduled Verification')],
zfscreate: [gettext('ZFS Storage'), gettext('Create')],
});
},


@ -24,11 +24,18 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
return;
}
- let mediaset = selection[0].data.text;
- let uuid = selection[0].data['media-set-uuid'];
+ let node = selection[0];
+ let mediaset = node.data.text;
+ let uuid = node.data['media-set-uuid'];
+ let datastores = node.data.datastores;
+ while (!datastores && node.get('depth') > 2) {
+ node = node.parentNode;
+ datastores = node.data.datastores;
+ }
Ext.create('PBS.TapeManagement.TapeRestoreWindow', {
mediaset,
uuid,
+ datastores,
listeners: {
destroy: function() {
me.reload();
@ -185,6 +192,7 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
}
let storeList = Object.values(stores);
+ let storeNameList = Object.keys(stores);
let expand = storeList.length === 1;
for (const store of storeList) {
store.children = Object.values(store.tapes);
@ -198,6 +206,7 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
}
node.set('loaded', true);
+ node.set('datastores', storeNameList);
Proxmox.Utils.setErrorMask(view, false);
node.expand();
} catch (error) {


@ -305,7 +305,7 @@ Ext.define('PBS.TapeManagement.DriveStatusGrid', {
rows: {
'blocksize': {
required: true,
- header: gettext('Blocksize'),
+ header: gettext('Block Size'),
renderer: function(value) {
if (!value) {
return gettext('Dynamic');


@ -113,11 +113,11 @@ Ext.define('PBS.TapeManagement.PoolPanel', {
flex: 1,
},
{
- text: gettext('Allocation'),
+ text: gettext('Allocation Policy'),
dataIndex: 'allocation',
},
{
- text: gettext('Retention'),
+ text: gettext('Retention Policy'),
dataIndex: 'retention',
},
{


@ -27,11 +27,11 @@ Ext.define('PBS.TapeManagement.PoolSelector', {
dataIndex: 'drive',
},
{
- text: gettext('Allocation'),
+ text: gettext('Allocation Policy'),
dataIndex: 'allocation',
},
{
- text: gettext('Retention'),
+ text: gettext('Retention Policy'),
dataIndex: 'retention',
},
{


@ -3,6 +3,8 @@ Ext.define('PBS.TapeManagement.PoolEditWindow', {
alias: 'widget.pbsPoolEditWindow',
mixins: ['Proxmox.Mixin.CBind'],
+ onlineHelp: 'tape_media_pool_config',
isCreate: true,
isAdd: true,
subject: gettext('Media Pool'),
@ -33,7 +35,7 @@ Ext.define('PBS.TapeManagement.PoolEditWindow', {
},
},
{
- fieldLabel: gettext('Allocation'),
+ fieldLabel: gettext('Allocation Policy'),
xtype: 'pbsAllocationSelector',
name: 'allocation',
skipEmptyText: true,
@ -44,7 +46,7 @@ Ext.define('PBS.TapeManagement.PoolEditWindow', {
},
},
{
- fieldLabel: gettext('Retention'),
+ fieldLabel: gettext('Retention Policy'),
xtype: 'pbsRetentionSelector',
name: 'retention',
skipEmptyText: true,


@ -1,9 +1,9 @@
Ext.define('PBS.TapeManagement.TapeRestoreWindow', {
extend: 'Proxmox.window.Edit',
- alias: 'pbsTapeRestoreWindow',
+ alias: 'widget.pbsTapeRestoreWindow',
mixins: ['Proxmox.Mixin.CBind'],
- width: 400,
+ width: 800,
title: gettext('Restore Media Set'),
url: '/api2/extjs/tape/restore',
method: 'POST',
@ -14,7 +14,31 @@ Ext.define('PBS.TapeManagement.TapeRestoreWindow', {
labelWidth: 120,
},
+ referenceHolder: true,
items: [
{
xtype: 'inputpanel',
onGetValues: function(values) {
let me = this;
let datastores = [];
if (values.store && values.store !== "") {
datastores.push(values.store);
delete values.store;
}
if (values.mapping) {
datastores.push(values.mapping);
delete values.mapping;
}
values.store = datastores.join(',');
return values;
},
column1: [
{
xtype: 'displayfield',
fieldLabel: gettext('Media Set'),
@ -31,16 +55,14 @@ Ext.define('PBS.TapeManagement.TapeRestoreWindow', {
value: '{uuid}',
},
},
- {
- xtype: 'pbsDataStoreSelector',
- fieldLabel: gettext('Datastore'),
- name: 'store',
- },
{
xtype: 'pbsDriveSelector',
fieldLabel: gettext('Drive'),
name: 'drive',
},
+ ],
+ column2: [
{
xtype: 'pbsUserSelector',
name: 'notify-user',
@ -61,5 +83,203 @@ Ext.define('PBS.TapeManagement.TapeRestoreWindow', {
skipEmptyText: true,
renderer: Ext.String.htmlEncode,
},
{
xtype: 'pbsDataStoreSelector',
fieldLabel: gettext('Datastore'),
reference: 'defaultDatastore',
name: 'store',
listeners: {
change: function(field, value) {
let me = this;
let grid = me.up('window').lookup('mappingGrid');
grid.setNeedStores(!value);
},
},
},
],
columnB: [
{
fieldLabel: gettext('Datastore Mapping'),
labelWidth: 200,
hidden: true,
reference: 'mappingLabel',
xtype: 'displayfield',
},
{
xtype: 'pbsDataStoreMappingField',
reference: 'mappingGrid',
name: 'mapping',
defaultBindProperty: 'value',
hidden: true,
},
],
},
],
setDataStores: function(datastores) {
let me = this;
let label = me.lookup('mappingLabel');
let grid = me.lookup('mappingGrid');
let defaultField = me.lookup('defaultDatastore');
if (!datastores || datastores.length <= 1) {
label.setVisible(false);
grid.setVisible(false);
defaultField.setFieldLabel(gettext('Datastore'));
defaultField.setAllowBlank(false);
defaultField.setEmptyText("");
return;
}
label.setVisible(true);
defaultField.setFieldLabel(gettext('Default Datastore'));
defaultField.setAllowBlank(true);
defaultField.setEmptyText(Proxmox.Utils.NoneText);
grid.setDataStores(datastores);
grid.setVisible(true);
},
initComponent: function() {
let me = this;
me.callParent();
if (me.datastores) {
me.setDataStores(me.datastores);
} else {
// use timeout so that the window is rendered already
// for correct masking
setTimeout(function() {
Proxmox.Utils.API2Request({
waitMsgTarget: me,
url: `/tape/media/content?media-set=${me.uuid}`,
success: function(response, opt) {
let datastores = {};
for (const content of response.result.data) {
datastores[content.store] = true;
}
me.setDataStores(Object.keys(datastores));
},
failure: function() {
// ignore failing api call, maybe catalog is missing
me.setDataStores();
},
});
}, 10);
}
},
});
Ext.define('PBS.TapeManagement.DataStoreMappingGrid', {
extend: 'Ext.grid.Panel',
alias: 'widget.pbsDataStoreMappingField',
mixins: ['Ext.form.field.Field'],
getValue: function() {
let me = this;
let datastores = [];
me.getStore().each((rec) => {
let source = rec.data.source;
let target = rec.data.target;
if (target && target !== "") {
datastores.push(`${source}=${target}`);
}
});
return datastores.join(',');
},
// this determines if we need at least one valid mapping
needStores: false,
setNeedStores: function(needStores) {
let me = this;
me.needStores = needStores;
me.checkChange();
me.validate();
},
setValue: function(value) {
let me = this;
me.setDataStores(value);
return me;
},
getErrors: function(value) {
let me = this;
let error = false;
if (me.needStores) {
error = true;
me.getStore().each((rec) => {
if (rec.data.target) {
error = false;
}
});
}
if (error) {
me.addCls(['x-form-trigger-wrap-default', 'x-form-trigger-wrap-invalid']);
let errorMsg = gettext("Need at least one mapping");
me.getActionEl().dom.setAttribute('data-errorqtip', errorMsg);
return [errorMsg];
}
me.removeCls(['x-form-trigger-wrap-default', 'x-form-trigger-wrap-invalid']);
me.getActionEl().dom.setAttribute('data-errorqtip', "");
return [];
},
setDataStores: function(datastores) {
let me = this;
let store = me.getStore();
let data = [];
for (const datastore of datastores) {
data.push({
source: datastore,
target: '',
});
}
store.setData(data);
},
viewConfig: {
markDirty: false,
},
store: { data: [] },
columns: [
{
text: gettext('Source Datastore'),
dataIndex: 'source',
flex: 1,
},
{
text: gettext('Target Datastore'),
xtype: 'widgetcolumn',
dataIndex: 'target',
flex: 1,
widget: {
xtype: 'pbsDataStoreSelector',
allowBlank: true,
emptyText: Proxmox.Utils.NoneText,
listeners: {
change: function(selector, value) {
let me = this;
let rec = me.getWidgetRecord();
if (!rec) {
return;
}
rec.set('target', value);
me.up('grid').checkChange();
},
},
},
},
],
});


@ -9,7 +9,7 @@ Ext.define('PBS.window.VerifyJobEdit', {
isAdd: true,
- subject: gettext('VerifyJob'),
+ subject: gettext('Verification Job'),
fieldDefaults: { labelWidth: 120 },
defaultFocus: 'field[name="ignore-verified"]',