Compare commits


97 Commits

Author SHA1 Message Date
c5ac2b9ddd bump version to 0.8.10-1
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-11 15:47:30 +02:00
81f293513e backup: lock base snapshot and ensure existence on finish
To prevent the base snapshot of a running backup from being forgotten, and to
catch the case where it still happens anyway (e.g. via a manual rm), at least
error out instead of storing a potentially invalid backup.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-08-11 11:04:47 +02:00
8b5f72b176 Revert "backup: ensure base snapshots are still available after backup"
This reverts commit d53fbe2474.

The HashSet and "register" function are unnecessary, as we already know
which backup is the one we need to check: the last one, stored as
'last_backup'.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-08-11 11:03:53 +02:00
f23f75433f backup: flock snapshot on backup start
A flock on the snapshot dir itself is used in addition to the group dir
lock. The lock is used to avoid races with forget and prune, while
having more granularity than the group lock (i.e. the group lock is
necessary to prevent more than one backup per group, but the snapshot
lock still allows backups unrelated to the currently running one to be
forgotten/pruned).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-08-11 11:02:21 +02:00
6d6b4e72d3 datastore: prevent in-use deletion with locks instead of heuristic
Attempt to lock the backup directory that is to be deleted; if that succeeds,
keep the lock until the deletion is complete. This way we ensure that no other
locking operation (e.g. using a snapshot as base for another backup) can
happen concurrently.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-08-11 11:00:29 +02:00
e434258592 src/backup/backup_info.rs: remove BackupGroup lock()
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-08-11 10:58:35 +02:00
3dc1a2d5b6 src/tools/fs.rs: new helper lock_dir_noblock
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-08-11 10:57:48 +02:00
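
A minimal sketch of what such a directory-lock helper could look like, assuming
the nix crate's flock wrapper (names and error handling are illustrative, not
the actual proxmox-backup implementation):

    use std::fs::File;
    use std::os::unix::io::AsRawFd;
    use std::path::Path;

    use anyhow::{bail, Error};
    use nix::fcntl::{flock, FlockArg};

    /// Open a directory and take an exclusive, non-blocking flock on it.
    /// The lock is held for as long as the returned handle stays alive.
    fn lock_dir_noblock(path: &Path, what: &str) -> Result<File, Error> {
        let handle = File::open(path)?; // opening a directory read-only is fine on Linux
        if let Err(err) = flock(handle.as_raw_fd(), FlockArg::LockExclusiveNonblock) {
            bail!("unable to acquire lock on {} {:?} - {}", what, path, err);
        }
        Ok(handle)
    }
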
5d95558bae Makefile: build target - do not fail if control file does not exist
This can happen if a previous build failed ...
2020-08-11 10:47:23 +02:00
882c082369 mark signed manifests as such
for less-confusing display in the web interface

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-11 09:56:53 +02:00
9a38fa29c2 verify: also check chunk CryptMode
and in-line verify_stored_chunk to avoid double-loading each chunk.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-11 09:56:20 +02:00
14f6c9cb8b chunk readers: ensure chunk/index CryptMode matches
an encrypted Index should never reference a plain-text chunk, and an
unencrypted Index should never reference an encrypted chunk.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-11 09:54:22 +02:00
2d55beeca0 datastore api: verify blob/index csum from manifest
when downloading decoded files.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-11 09:52:45 +02:00
9238cdf50d datastore api: only decode unencrypted indices
these checks were already in place for regular downloading of backed-up
files; also do them when attempting to decode a catalog, or when
downloading decoded files referenced by a pxar index.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-11 09:51:20 +02:00
5d30f03826 impl PartialEq between Realm and RealmRef
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-10 12:23:36 +02:00
14263ef989 assert that Username does not impl PartialEq
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-10 12:21:12 +02:00
e7cb4dc50d introduce Username, Realm and Userid api types
and begin splitting up types.rs as it has grown quite large
already

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-10 12:05:01 +02:00
27d864210a d/control: proxmox 0.3.3
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-10 12:05:01 +02:00
f667f49dab bump proxmox dependency to 0.3.3 for serde helpers
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-10 11:32:01 +02:00
866c556faf move types.rs to types/mod.rs
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-10 10:32:31 +02:00
90d515c97d config.rs: sort modules
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-10 08:33:38 +02:00
4dbe129284 backup: only allow finished backups as base snapshot
If the datastore holds broken backups for some reason, do not attempt to
base following snapshots on those. This would lead to an error on
/previous, leaving the client no choice but to upload all chunks, even
though there might be potential for incremental savings.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-08-07 07:32:56 +02:00
747c3bc087 administration-guide.rst: move Encryption headline up one level 2020-08-07 07:10:12 +02:00
c23e257c5a administration-guide.rst: fix headline (avoid compile error) 2020-08-07 06:56:58 +02:00
16a18dadba admin-guide: add section explaining master keys
Adds a section under encryption which goes into detail on how to
use a master key to store and recover backup encryption keys.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-07 06:54:37 +02:00
5f76ac37b5 fix: master-key: upload RSA encoded key with backup
When uploading an RSA encoded key alongside the backup,
the backup would fail with the error message: "wrong blob
file extension".
Adding the '.blob' extension to rsa-encrypted.key before the
call to upload_blob_from_data(), rather than after, fixes
the issue.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-06 09:34:01 +02:00
d74edc3d89 finish_backup: mark backup as finished only after checks have passed
Commit 9fa55e09 "finish_backup: test/verify manifest at server side"
moved the finished-marking above some checks, which means if those fail
the backup would still be marked as successful on the server.

Revert that part and comment the line for the future.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-08-06 06:39:34 +02:00
2f57a433b1 fix #2909: handle missing chunks gracefully in garbage collection
instead of bailing and stopping the entire GC process, warn about the
missing chunks and continue.

this results in "TASK WARNINGS: X" as the status.

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-08-06 06:36:48 +02:00
df7f04364b d/control: bump proxmox to 0.3.2
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-04 11:34:58 +02:00
98c259b4c1 remove timer and lock functions, fix building with proxmox 0.3.2
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-04 11:33:02 +02:00
799b3d88bc bump proxmox dependency to 0.3.2 for timer / file locking
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-04 11:27:44 +02:00
db22e6b270 build: properly regenerate d/control
and commit the latest change

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-04 11:16:11 +02:00
16f0afbfb5 gui: user: fix #2898 add dialog to set password
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-08-04 10:21:00 +02:00
d3d566f7bd GC: use time pre phase1 to calculate min_atime in phase2
Used chunks are marked in phase1 of the garbage collection process by
using the atime property. Each used chunk gets touched so that the atime
gets updated (if older than 24h, see relatime).

Should there ever be a situation in which the phase1 in the GC run needs
a very long time to finish, it could happen that the grace period
calculated in phase2 is not long enough and thus the marking of the
chunks (atime) becomes invalid. This would result in the removal of
needed chunks.

Even though the likelihood of this happening is very low, using the
timestamp from right before phase1 starts to calculate the grace
period in phase2 avoids this situation.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-08-04 10:19:05 +02:00
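
An illustrative sketch of the idea (simplified, not the actual garbage-collection
code): record the timestamp before phase1 starts and derive the phase2 cutoff
from that, so a long-running phase1 cannot invalidate the atime marks.

    use std::time::{Duration, SystemTime};

    fn garbage_collect() {
        // taken *before* phase1, not when phase2 starts
        let phase1_start = SystemTime::now();

        // phase1: touch every chunk referenced by any index (updates atime) ...

        // phase2: only sweep chunks whose atime is older than the cutoff;
        // 24h accounts for relatime behaviour, the extra margin is illustrative
        let grace = Duration::from_secs(24 * 3600 + 300);
        let min_atime = phase1_start - grace;
        let _ = min_atime; // sweep_unused_chunks_older_than(min_atime) ...
    }
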
c96b0de48f datastore: allow browsing signed pxar files
just because we can't verify the signature does not mean the contents
are not accessible. it might make sense to make it obvious, with a hint
or click-through warning, that no signature verification can take place,
both for this and for downloading.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-04 07:27:56 +02:00
2ce159343b sync: verify size and checksum of pulled archives
and not just of previously synced ones.

we can't use BackupManifest::verify_file as the archive is still stored
under the tmp path at this point.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-04 07:27:56 +02:00
9e496ff6f1 sync: verify chunk size and digest, if possible
for encrypted chunks this is currently not possible, as we need the key
to decode the chunk.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-04 07:27:56 +02:00
8819d1f2f5 blobs: attempt to verify on decode when possible
regular chunks are only decoded when their contents are accessed, in
which case we need to have the key anyway and want to verify the digest.

for blobs we need to verify beforehand, since their checksums are always
calculated based on their raw content, and stored in the manifest.

manifests are also stored as blobs, but don't have a digest in the
traditional sense (they might have a signature covering parts of their
contents, but that is verified already when loading the manifest).

this commit does not cover pull/sync code which copies blobs and chunks
as-is without decoding them.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-04 07:27:56 +02:00
0f9218079a pxar/extract: fixup path stack for errors
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-03 12:20:30 +02:00
1cafbdc70d more whitespace fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-03 12:02:19 +02:00
a3eb7b2cea whitespace fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-03 12:00:59 +02:00
d9b8e2c795 pxar: better error handling on extract
Errors while applying metadata will not be considered fatal
by default when using `pxar extract`, unless `--strict` was passed,
in which case it will bail out immediately.

It'll still return an error exit status if something had
failed along the way.

Note that most other errors will still cause it to bail out
(eg. errors creating files, or I/O errors while writing
the contents).

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-03 09:40:55 +02:00
4bd2a9e42d worker_task: add getter for upid
sometimes we need the upid inside the worker itself, so provide a
possibility to get it

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-03 08:26:17 +02:00
cef03f4149 worker_task: refactor log text generator
we will need this elsewhere, so pull it out

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-03 08:23:13 +02:00
eeb19aeb2d systemd/time: fix weekday wrapping on month
the weekday does not change depending on the month, so remove that wrapping

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-03 08:18:42 +02:00
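
The underlying point is that days of the week form a cycle of seven that is
independent of month boundaries, so finding the next allowed weekday only needs
modulo-7 arithmetic. A purely illustrative helper (the real systemd-style
calendar code is more involved):

    /// Days to add until the weekday (0 = Sunday .. 6 = Saturday) is in `allowed`.
    fn days_until_allowed_weekday(current: u32, allowed: &[u32]) -> u32 {
        (0..7)
            .find(|offset| allowed.contains(&((current + offset) % 7)))
            .unwrap_or(0)
    }
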
6c96ec418d systemd/time: add tests for weekday month wrapping
this will fail for now; it gets fixed in the next commit

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-03 08:15:26 +02:00
5e4b32706c depend on proxmox 0.3.1 2020-08-02 12:02:21 +02:00
30c3c5d66c pxar: create: attempt to use O_NOATIME
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-31 11:46:53 +02:00
e51be33807 pxar: create: move common O_ flags to open_file
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-31 11:42:15 +02:00
70030b43d0 list_snapshots: Returns new "comment" property (first line from notes) 2020-07-31 11:34:42 +02:00
724de093dd build: track generated d/control in git
to track changes and allow bootstrap-installation of build dependencies.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-31 11:18:33 +02:00
ff86ef00a7 cleanup: manifest is always CryptMode::None 2020-07-31 10:25:30 +02:00
912b3f5bc9 src/api2/admin/datastore.rs: add API to get/set Notes for backups 2020-07-31 10:17:35 +02:00
a4acb6ef84 lock_file: return std::io::Error 2020-07-31 08:53:00 +02:00
d7ee07d838 src/api2/backup/environment.rs: remove debug code 2020-07-31 07:48:53 +02:00
53705acece src/api2/backup/environment.rs: remove debug code 2020-07-31 07:47:08 +02:00
c8fff67d88 finish_backup: add chunk_upload_stats to manifest 2020-07-31 07:45:47 +02:00
9fa55e09a7 finish_backup: test/verify manifest at server side
We want to make sure that the client uploaded a readable manifest.
2020-07-31 07:45:47 +02:00
e443902583 src/backup/datastore.rs: add helpers to load/store manifest
We want this to modify the manifest "unprotected" data, for example
to add upload statistics, notes, ...
2020-07-31 07:45:47 +02:00
32dc4c4604 introduction: language improvement (fix typos, grammar, wording)
Fix typos and grammatical errors.
Reword some sentences for better readability.
Clean up the list found under "Software Stack", so that it maintains a consistent
style throughout.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-30 12:02:54 +02:00
f39a900722 api2/node/termproxy: fix user in worker task
'username' here is without realm, but we really want to use user@realm

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-30 11:57:43 +02:00
1fc82c41f2 src/api2/backup.rs: acquire backup lock earlier in create_locked_backup_group() 2020-07-30 11:03:05 +02:00
d2b0c78e23 api2/node/termproxy: fix zombies on worker abort
tokio's kill_on_drop sometimes leaves zombies around, especially
when there is not another tokio::process::Command spawned afterwards

so instead of relying on the 'kill_on_drop' feature, we explicitly
kill the child on a worker abort. to be able to do this
we have to use 'tokio::select' instead of 'futures::select', since
the latter requires the future to be fused, which consumes the
child handle, leaving us no possibility to kill it after fusing.
(tokio::select does not need the futures to be fused, so we
can reuse the child future after the select again)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-30 10:38:14 +02:00
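
A rough sketch of the pattern (written against the tokio 1.x API with the
process/sync/macros features, purely for illustration; the commit itself
targets tokio 0.2 and the real worker code differs):

    use tokio::process::Command;
    use tokio::sync::oneshot;

    async fn run_child(mut abort_rx: oneshot::Receiver<()>) -> std::io::Result<()> {
        // the command name is a placeholder
        let mut child = Command::new("some-terminal-proxy").spawn()?;

        tokio::select! {
            status = child.wait() => {
                println!("child exited: {:?}", status?);
            }
            _ = &mut abort_rx => {
                // worker aborted: kill and reap the child ourselves instead of
                // relying on kill_on_drop, so no zombie process is left behind
                child.kill().await?;
            }
        }
        Ok(())
    }
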
adfdc36936 verify: keep track and log which dirs failed the verification
so that we can print a list at the end of the worker of which backups
are corrupt.

this is useful if there are many snapshots and some in between had an
error. Before this patch, the task log simply says to 'look in the logs',
but if the log is very long it is hard to see what exactly failed.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-30 09:39:37 +02:00
d8594d87f1 verify: keep also track of corrupt chunks
so that we do not have to verify a corrupt one multiple times

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-30 09:39:37 +02:00
f66f537da9 verify: check all chunks of an index, even if we encounter a corrupt one
this makes it easier to see which chunks are corrupt
(and enables us in the future to build a 'complete' list of
corrupt chunks)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-30 09:39:37 +02:00
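
As a hedged sketch of the idea (types and helper names are invented for
illustration, not the actual verify code): keep iterating over the whole index
instead of returning on the first bad chunk, and remember corrupt digests so
they are reported once and not re-verified.

    use std::collections::HashSet;

    fn verify_index_chunks(
        digests: &[[u8; 32]],
        corrupt: &mut HashSet<[u8; 32]>,
        verify_chunk: impl Fn(&[u8; 32]) -> Result<(), String>,
    ) -> usize {
        let mut errors = 0;
        for digest in digests {
            if corrupt.contains(digest) {
                errors += 1; // already known to be bad, no need to verify again
                continue;
            }
            if let Err(err) = verify_chunk(digest) {
                eprintln!("chunk {:02x?} failed verification: {}", digest, err);
                corrupt.insert(*digest);
                errors += 1;
                // keep going so the task log ends up with a complete list
            }
        }
        errors
    }
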
d44185c4a1 fix #2873: if --pattern is used, default to not extracting
The extraction algorithm has a state (bool) indicating
whether we're currently in a positive or negative match,
which has always been initialized to true at the beginning,
but when the user provides a `--pattern` argument we need to
start out with a negative match.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-30 09:33:30 +02:00
d53fbe2474 backup: ensure base snapshots are still available after backup
This should never trigger if everything else works correctly, but it is
still a very cheap check to avoid wrongly marking a backup as "OK" when
in fact some chunks might be missing.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-30 08:28:54 +02:00
95bda2f25d backup: use flock on backup group to forbid multiple backups at once
Multiple backups within one backup group don't really make sense, but
break all sorts of guarantees (e.g. a second backup started after a
first would use a "known-chunks" list from the previous unfinished one,
which would be empty - but using the list from the last finished one is
not a fix either, as that one could be deleted or pruned once the first
simultaneous backup is finished).

Fix it by only allowing one backup per backup group at one time. This is
done via a flock on the backup group directory, thus remaining intact
even after a reload.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-30 08:26:26 +02:00
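
Continuing the illustrative directory-lock sketch from further up, the group
lock boils down to holding such a guard for the entire backup task (names
hypothetical):

    // fails immediately if another backup in this group is already running
    let _group_guard = lock_dir_noblock(&group_path, "backup group")?;
    // ... run the backup; dropping the guard (or process exit) releases the
    // flock, so the exclusion also holds across a daemon reload as long as
    // the worker keeps running
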
c9756b40d1 datastore: prevent deletion of snaps in use as "previous backup"
To prevent a race with a background GC operation, do not allow deletion
of backups whose index might currently be referenced as the "known chunk
list" for successive backups. Otherwise the GC could delete chunks it
thinks are no longer referenced, while at the same time telling the
client that it doesn't need to upload said chunks because they already
exist.

Additionally, prevent deletion of whole backup groups, if there are
snapshots contained that appear to be currently in-progress. This is
currently unlikely to trigger, as that function is only used for sync
jobs, but it's a useful safeguard either way.

Deleting a single snapshot has a 'force' parameter, which is necessary
to allow deleting incomplete snapshots on an aborted backup. Pruning
also sets force=true to avoid the check, since it calculates which
snapshots to keep on its own.

To avoid code duplication, the is_finished method is factored out.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-30 08:26:01 +02:00
8cd29fb24a tools: add nonblocking mode to lock_file
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-30 08:18:10 +02:00
505c5f0f76 fix typo: avgerage to average
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-30 07:08:08 +02:00
2aaae9705e src/backup/verify.rs: try to verify chunks only once
We use a HashSet (per BackupGroup) to track already verified chunks.
2020-07-29 13:29:13 +02:00
8aa67ee758 bump proxmox to 0.3, cleanup http_err macro usage
Also swap the order of a couple of `.map_err().await` to
`.await.map_err()` since that's generally more efficient.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-29 09:38:36 +02:00
3865e27e96 src/api2/node.rs: 'mod' statement cleanup
split them into groups: `pub`, `pub(crate)` and non-pub

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-29 09:19:57 +02:00
f6c6e09a8a update to pxar 0.3 to support negative timestamps
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-29 08:31:37 +02:00
71282dd988 ui: fix in-progress snapshots always showing as "Encrypted"
We can't know if they are encrypted or not when they're not even
finished yet.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-29 07:13:25 +02:00
80db161e05 ui: fix error when reloading DataStoreContent
...when an entry is selected that doesn't exist after the reload.

E.g. when one selects a file within a snapshot and then clicks
the delete icon for said snapshot, focusRow would then fail and the
loading mask would stay on until a reload.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-29 07:13:12 +02:00
be10cdb122 fix #2856: also check whole device for device mapper
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-28 11:03:45 +02:00
7fde1a71ca upload_chunk: allow upload of empty blobs
a blob can be empty (e.g. an empty pct fw conf), so we
have to set the minimum size to the header size

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-28 11:03:36 +02:00
a83674ad48 administration-guide: fix typo that breaks example command
The ' ' (space) between 'etc/ **/*.txt' resulted in the example command's output
not matching the given example output. Removing this space fixes the command.
2020-07-28 10:59:53 +02:00
02f82148cf docs: pxar create: update docs to match current behavior
This removes parts of the previous explanation of the tool that are no longer
correct, and adds an explanation of the '--exclude' parameter instead.

Adds more clarity to the command, by using '/path/to/source' to signify the
source directory.

Specify that the pattern matching style of the exclude parameter is that of
gitignore's syntax.
2020-07-28 10:59:42 +02:00
39f18b30b6 src/backup/data_blob.rs: new load_from_reader(), which verifies the CRC
And make verify_crc private for now. We always call load_from_reader() to
verify the CRC.

Also add load_chunk() to datastore.rs (from chunk_store::read_chunk())
2020-07-28 10:23:16 +02:00
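
For illustration, a CRC check of this kind can be done with the crc32fast crate
(which is among the build dependencies); the blob layout here is simplified and
not the real DataBlob format:

    fn verify_blob_crc(payload: &[u8], stored_crc: u32) -> Result<(), String> {
        let mut hasher = crc32fast::Hasher::new();
        hasher.update(payload);
        let computed = hasher.finalize();
        if computed != stored_crc {
            return Err(format!(
                "blob CRC mismatch: stored {:08x}, computed {:08x}",
                stored_crc, computed
            ));
        }
        Ok(())
    }
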
69d970a658 ui: DataStoreContent: keep selection and expansion on reload
when clicking reload, we keep the existing selection
(if it still exists), and keep the previously expanded elements expanded

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-27 12:51:34 +02:00
6d55603dcc ui: add search box to DataStore content
which searches the whole tree (name & owner)

we do this by traversing the tree and marking elements as matches,
then afterwards apply a simple filter that matches on a boolean

the worst case cost of this is O(2n), since we have to traverse the
tree in the worst case one time, and the filter function does it again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-27 12:51:11 +02:00
3e395378bc ui: rework DataStore content Panel
instead of having the files as a column, put the files into the tree
as a third level

with this, we can move the actions into an action column and remove
the top buttons (except reload)

clicking the download action now downloads directly, so we would
not need the download window anymore

clicking the browse action opens the pxar browser like before,
but expands and selects (& focuses) the selected pxar file

also changes the icon of 'signed' to the locked one,
but color codes them (signed => greyed out, encrypted => green),
similar to what browsers do/did for certificates

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-27 12:47:51 +02:00
bccdc5fa04 src/backup/manifest.rs: cleanup - again, avoid recursive call to write_canonical_json
And use re-borrow instead of dyn trait casting.
2020-07-27 10:31:34 +02:00
0bf7ba6c92 src/backup/manifest.rs: cleanup - avoid recursive call to write_canonical_json 2020-07-27 08:48:11 +02:00
e6b599aa6c services: make reload safer and default to it in gui
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-25 20:23:12 +02:00
d757021f4c ui: acl: add improved permission selector
taken mostly from PVE, with adaptation to how PBS does things.
The main difference is that we do not have a resource store singleton
here which we can use, but for datastores we can already use the
always present datastore-list store. Register it to the store manager
with a "storeId" property (vs. our internal storeid one).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-25 20:10:11 +02:00
ee15af6bb8 api: service command: fix test for essential service
makes no sense to disallow reload or start (even if start cannot
really happen)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-24 19:35:19 +02:00
3da9b7e0dd followup: server/state: rename task_count to internal_task_count
so that the relation with spawn_internal_task is made more clear

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-24 12:11:39 +02:00
beaa683a52 bump version to 0.8.9-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-24 11:24:56 +02:00
33a88dafb9 server/state: add spawn_internal_task and use it for websockets
is a helper to spawn an internal tokio task without it showing up
in the task list

it is still tracked for reload and notifies the last_worker_listeners

this enables the console to survive a reload of proxmox-backup-proxy

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-24 11:17:33 +02:00
224c65f8de termproxy: let users stop the termproxy task
for that we have to do a select on the worker's abort_future

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-24 11:17:33 +02:00
f2b4b4b9fe fix #2885: bail on duplicate backup target
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-07-24 11:08:56 +02:00
ea9e559fc4 client: log archive upload duration more accurately, fix grammar
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-24 10:15:28 +02:00
0cf14984cc client: avoid division by zero in avg speed calculation, be more accurate
using micros vs. as_secs_f64 allows it to be calculated as usize
bytes, which is easier to handle; this was also how it was done when it
still lived in upload_chunk_info_stream

Co-authored-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-24 10:14:40 +02:00
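
As a worked example of the calculation mentioned above (illustrative only, not
the client's exact code): keep the elapsed time in microseconds and only
convert to a float at the end.

    use std::time::Instant;

    /// Average upload speed in MiB/s from a byte count and a start time.
    fn average_speed_mib_s(bytes: usize, start: Instant) -> f64 {
        let micros = start.elapsed().as_micros().max(1) as f64; // guard against division by zero
        (bytes as f64 / (1024.0 * 1024.0)) / (micros / 1_000_000.0)
    }
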
97 changed files with 2863 additions and 1431 deletions


@ -1,6 +1,6 @@
[package] [package]
name = "proxmox-backup" name = "proxmox-backup"
version = "0.8.8" version = "0.8.10"
authors = ["Dietmar Maurer <dietmar@proxmox.com>"] authors = ["Dietmar Maurer <dietmar@proxmox.com>"]
edition = "2018" edition = "2018"
license = "AGPL-3" license = "AGPL-3"
@ -39,11 +39,11 @@ pam-sys = "0.5"
percent-encoding = "2.1" percent-encoding = "2.1"
pin-utils = "0.1.0" pin-utils = "0.1.0"
pathpatterns = "0.1.2" pathpatterns = "0.1.2"
proxmox = { version = "0.2.1", features = [ "sortable-macro", "api-macro", "websocket" ] } proxmox = { version = "0.3.3", features = [ "sortable-macro", "api-macro", "websocket" ] }
#proxmox = { git = "ssh://gitolite3@proxdev.maurer-it.com/rust/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] } #proxmox = { git = "ssh://gitolite3@proxdev.maurer-it.com/rust/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] } #proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox-fuse = "0.1.0" proxmox-fuse = "0.1.0"
pxar = { version = "0.2.1", features = [ "tokio-io", "futures-io" ] } pxar = { version = "0.3.0", features = [ "tokio-io", "futures-io" ] }
#pxar = { path = "../pxar", features = [ "tokio-io", "futures-io" ] } #pxar = { path = "../pxar", features = [ "tokio-io", "futures-io" ] }
regex = "1.2" regex = "1.2"
rustyline = "6" rustyline = "6"


@ -69,10 +69,12 @@ doc:
.PHONY: build .PHONY: build
build: build:
rm -rf build rm -rf build
rm -f debian/control
debcargo package --config debian/debcargo.toml --changelog-ready --no-overlay-write-back --directory build proxmox-backup $(shell dpkg-parsechangelog -l debian/changelog -SVersion | sed -e 's/-.*//') debcargo package --config debian/debcargo.toml --changelog-ready --no-overlay-write-back --directory build proxmox-backup $(shell dpkg-parsechangelog -l debian/changelog -SVersion | sed -e 's/-.*//')
sed -e '1,/^$$/ ! d' build/debian/control > build/debian/control.src sed -e '1,/^$$/ ! d' build/debian/control > build/debian/control.src
cat build/debian/control.src build/debian/control.in > build/debian/control cat build/debian/control.src build/debian/control.in > build/debian/control
rm build/debian/control.in build/debian/control.src rm build/debian/control.in build/debian/control.src
cp build/debian/control debian/control
rm build/Cargo.lock rm build/Cargo.lock
find build/debian -name "*.hint" -delete find build/debian -name "*.hint" -delete
$(foreach i,$(SUBDIRS), \ $(foreach i,$(SUBDIRS), \

debian/changelog

@ -1,3 +1,86 @@
rust-proxmox-backup (0.8.10-1) unstable; urgency=medium
* ui: acl: add improved permission selector
* services: make reload safer and default to it in gui
* ui: rework DataStore content Panel
* ui: add search box to DataStore content
* ui: DataStoreContent: keep selection and expansion on reload
* upload_chunk: allow upload of empty blobs
* fix #2856: also check whole device for device mapper
* ui: fix error when reloading DataStoreContent
* ui: fix in-progress snapshots always showing as "Encrypted"
* update to pxar 0.3 to support negative timestamps
* fix #2873: if --pattern is used, default to not extracting
* finish_backup: test/verify manifest at server side
* finish_backup: add chunk_upload_stats to manifest
* src/api2/admin/datastore.rs: add API to get/set Notes for backups
* list_snapshots: Returns new "comment" property (first line from notes)
* pxar: create: attempt to use O_NOATIME
* systemd/time: fix weekday wrapping on month
* pxar: better error handling on extract
* pxar/extract: fixup path stack for errors
* datastore: allow browsing signed pxar files
* GC: use time pre phase1 to calculate min_atime in phase2
* gui: user: fix #2898 add dialog to set password
* fix #2909: handle missing chunks gracefully in garbage collection
* finish_backup: mark backup as finished only after checks have passed
* fix: master-key: upload RSA encoded key with backup
* admin-guide: add section explaining master keys
* backup: only allow finished backups as base snapshot
* datastore api: only decode unencrypted indices
* datastore api: verify blob/index csum from manifest
* sync, blobs and chunk readers: add more checks and verification
* verify: add more checks, don't fail on first error
* mark signed manifests as such
* backup/prune/forget: improve locking
* backup: ensure base snapshots are still available after backup
-- Proxmox Support Team <support@proxmox.com> Tue, 11 Aug 2020 15:37:29 +0200
rust-proxmox-backup (0.8.9-1) unstable; urgency=medium
* improve termproxy (console) behavior on updating proxmox-backup-server and
other daemon restarts
* client: improve upload log output and speed calculation
* fix #2885: client upload: bail on duplicate backup targets
-- Proxmox Support Team <support@proxmox.com> Fri, 24 Jul 2020 11:24:07 +0200
rust-proxmox-backup (0.8.8-1) unstable; urgency=medium rust-proxmox-backup (0.8.8-1) unstable; urgency=medium
* pxar: .pxarexclude: match behavior from absolute paths to the one described * pxar: .pxarexclude: match behavior from absolute paths to the one described

debian/control (new file)

@ -0,0 +1,132 @@
Source: rust-proxmox-backup
Section: admin
Priority: optional
Build-Depends: debhelper (>= 11),
dh-cargo (>= 18),
cargo:native,
rustc:native,
libstd-rust-dev,
librust-anyhow-1+default-dev,
librust-apt-pkg-native-0.3+default-dev (>= 0.3.1-~~),
librust-base64-0.12+default-dev,
librust-bitflags-1+default-dev (>= 1.2.1-~~),
librust-bytes-0.5+default-dev,
librust-chrono-0.4+default-dev,
librust-crc32fast-1+default-dev,
librust-endian-trait-0.6+arrays-dev,
librust-endian-trait-0.6+default-dev,
librust-futures-0.3+default-dev,
librust-h2-0.2+default-dev,
librust-h2-0.2+stream-dev,
librust-handlebars-3+default-dev,
librust-http-0.2+default-dev,
librust-hyper-0.13+default-dev,
librust-lazy-static-1+default-dev (>= 1.4-~~),
librust-libc-0.2+default-dev,
librust-log-0.4+default-dev,
librust-nix-0.16+default-dev,
librust-nom-5+default-dev (>= 5.1-~~),
librust-num-traits-0.2+default-dev,
librust-once-cell-1+default-dev (>= 1.3.1-~~),
librust-openssl-0.10+default-dev,
librust-pam-0.7+default-dev,
librust-pam-sys-0.5+default-dev,
librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
librust-percent-encoding-2+default-dev (>= 2.1-~~),
librust-pin-utils-0.1+default-dev,
librust-proxmox-0.3+api-macro-dev (>= 0.3.3-~~),
librust-proxmox-0.3+default-dev (>= 0.3.3-~~),
librust-proxmox-0.3+sortable-macro-dev (>= 0.3.3-~~),
librust-proxmox-0.3+websocket-dev (>= 0.3.3-~~),
librust-proxmox-fuse-0.1+default-dev,
librust-pxar-0.3+default-dev,
librust-pxar-0.3+futures-io-dev,
librust-pxar-0.3+tokio-io-dev,
librust-regex-1+default-dev (>= 1.2-~~),
librust-rustyline-6+default-dev,
librust-serde-1+default-dev,
librust-serde-1+derive-dev,
librust-serde-json-1+default-dev,
librust-siphasher-0.3+default-dev,
librust-syslog-4+default-dev,
librust-tokio-0.2+blocking-dev (>= 0.2.9-~~),
librust-tokio-0.2+default-dev (>= 0.2.9-~~),
librust-tokio-0.2+dns-dev (>= 0.2.9-~~),
librust-tokio-0.2+fs-dev (>= 0.2.9-~~),
librust-tokio-0.2+io-util-dev (>= 0.2.9-~~),
librust-tokio-0.2+macros-dev (>= 0.2.9-~~),
librust-tokio-0.2+process-dev (>= 0.2.9-~~),
librust-tokio-0.2+rt-threaded-dev (>= 0.2.9-~~),
librust-tokio-0.2+signal-dev (>= 0.2.9-~~),
librust-tokio-0.2+stream-dev (>= 0.2.9-~~),
librust-tokio-0.2+tcp-dev (>= 0.2.9-~~),
librust-tokio-0.2+time-dev (>= 0.2.9-~~),
librust-tokio-0.2+uds-dev (>= 0.2.9-~~),
librust-tokio-openssl-0.4+default-dev,
librust-tokio-util-0.3+codec-dev,
librust-tokio-util-0.3+default-dev,
librust-tower-service-0.3+default-dev,
librust-udev-0.4+default-dev | librust-udev-0.3+default-dev,
librust-url-2+default-dev (>= 2.1-~~),
librust-walkdir-2+default-dev,
librust-xdg-2+default-dev (>= 2.2-~~),
librust-zstd-0.4+bindgen-dev,
librust-zstd-0.4+default-dev,
libacl1-dev,
libfuse3-dev,
libsystemd-dev,
uuid-dev,
debhelper (>= 12~),
bash-completion,
python3-docutils,
python3-pygments,
rsync,
fonts-dejavu-core <!nodoc>,
fonts-lato <!nodoc>,
fonts-open-sans <!nodoc>,
graphviz <!nodoc>,
latexmk <!nodoc>,
python3-sphinx <!nodoc>,
texlive-fonts-extra <!nodoc>,
texlive-fonts-recommended <!nodoc>,
texlive-xetex <!nodoc>,
xindy <!nodoc>
Maintainer: Proxmox Support Team <support@proxmox.com>
Standards-Version: 4.4.1
Vcs-Git:
Vcs-Browser:
Homepage: https://www.proxmox.com
Package: proxmox-backup-server
Architecture: any
Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1),
libzstd1 (>= 1.3.8),
lvm2,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.2-4),
pve-xtermjs (>= 4.7.0-1),
smartmontools,
${misc:Depends},
${shlibs:Depends},
Recommends: zfsutils-linux,
Description: Proxmox Backup Server daemon with tools and GUI
This package contains the Proxmox Backup Server daemons and related
tools. This includes a web-based graphical user interface.
Package: proxmox-backup-client
Architecture: any
Depends: ${misc:Depends}, ${shlibs:Depends}
Description: Proxmox Backup Client tools
This package contains the Proxmox Backup client, which provides a
simple command line tool to create and restore backups.
Package: proxmox-backup-docs
Build-Profiles: <!nodoc>
Section: doc
Depends: libjs-extjs,
${misc:Depends},
Architecture: all
Description: Proxmox Backup Documentation
This package contains the Proxmox Backup Documentation files.


@ -660,7 +660,7 @@ Restoring this backup will result in:
. .. file2 . .. file2
Encryption Encryption
^^^^^^^^^^ ~~~~~~~~~~
Proxmox Backup supports client-side encryption with AES-256 in GCM_ Proxmox Backup supports client-side encryption with AES-256 in GCM_
mode. To set this up, you first need to create an encryption key: mode. To set this up, you first need to create an encryption key:
@ -677,6 +677,8 @@ extra protection, you can also create it without a password:
# proxmox-backup-client key create /path/to/my-backup.key --kdf none # proxmox-backup-client key create /path/to/my-backup.key --kdf none
Having created this key, it is now possible to create an encrypted backup, by
passing the ``--keyfile`` parameter, with the path to the key file.
.. code-block:: console .. code-block:: console
@ -685,12 +687,95 @@ extra protection, you can also create it without a password:
Encryption Key Password: ************** Encryption Key Password: **************
... ...
.. Note:: If you do not specify the name of the backup key, the key will be
created in the default location
``~/.config/proxmox-backup/encryption-key.json``. ``proxmox-backup-client``
will also search this location by default, in case the ``--keyfile``
parameter is not specified.
You can avoid entering the passwords by setting the environment You can avoid entering the passwords by setting the environment
variables ``PBS_PASSWORD`` and ``PBS_ENCRYPTION_PASSWORD``. variables ``PBS_PASSWORD`` and ``PBS_ENCRYPTION_PASSWORD``.
.. todo:: Explain master-key Using a master key to store and recover encryption keys
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can also use ``proxmox-backup-client key`` to create an RSA public/private
key pair, which can be used to store an encrypted version of the symmetric
backup encryption key alongside each backup and recover it later.
To set up a master key:
1. Create an encryption key for the backup:
.. code-block:: console
# proxmox-backup-client key create
creating default key at: "~/.config/proxmox-backup/encryption-key.json"
Encryption Key Password: **********
...
The resulting file will be saved to ``~/.config/proxmox-backup/encryption-key.json``.
2. Create an RSA public/private key pair:
.. code-block:: console
# proxmox-backup-client key create-master-key
Master Key Password: *********
...
This will create two files in your current directory, ``master-public.pem``
and ``master-private.pem``.
3. Import the newly created ``master-public.pem`` public certificate, so that
``proxmox-backup-client`` can find and use it upon backup.
.. code-block:: console
# proxmox-backup-client key import-master-pubkey /path/to/master-public.pem
Imported public master key to "~/.config/proxmox-backup/master-public.pem"
4. With all these files in place, run a backup job:
.. code-block:: console
# proxmox-backup-client backup etc.pxar:/etc
The key will be stored in your backup, under the name ``rsa-encrypted.key``.
.. Note:: The ``--keyfile`` parameter can be excluded, if the encryption key
is in the default path. If you specified another path upon creation, you
must pass the ``--keyfile`` parameter.
5. To test that everything worked, you can restore the key from the backup:
.. code-block:: console
# proxmox-backup-client restore /path/to/backup/ rsa-encrypted.key /path/to/target
.. Note:: You should not need an encryption key to extract this file. However, if
a key exists at the default location
(``~/.config/proxmox-backup/encryption-key.json``) the program will prompt
you for an encryption key password. Simply moving ``encryption-key.json``
out of this directory will fix this issue.
6. Then, use the previously generated master key to decrypt the file:
.. code-block:: console
# openssl rsautl -decrypt -inkey master-private.pem -in rsa-encrypted.key -out /path/to/target
Enter pass phrase for ./master-private.pem: *********
7. The target file will now contain the encryption key information in plain
text. The success of this can be confirmed by passing the resulting ``json``
file, with the ``--keyfile`` parameter, when decrypting files from the backup.
.. warning:: Without their key, backed up files will be inaccessible. Thus, you should
keep keys ordered and in a place that is separate from the contents being
backed up. It can happen, for example, that you back up an entire system, using
a key on that system. If the system then becomes inaccessible for any reason
and needs to be restored, this will not be possible as the encryption key will be
lost along with the broken system.
Restoring Data Restoring Data
~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~
@ -777,7 +862,7 @@ For example:
.. code-block:: console .. code-block:: console
pxar:/ > find etc/ **/*.txt --select pxar:/ > find etc/**/*.txt --select
"/etc/X11/rgb.txt" "/etc/X11/rgb.txt"
pxar:/ > list-selected pxar:/ > list-selected
etc/**/*.txt etc/**/*.txt


@ -15,7 +15,7 @@ encryption (AE_). Using :term:`Rust` as the implementation language guarantees h
performance, low resource usage, and a safe, high-quality codebase. performance, low resource usage, and a safe, high-quality codebase.
It features strong client-side encryption. Thus, it's possible to It features strong client-side encryption. Thus, it's possible to
backup data to not fully trusted targets. backup data to targets that are not fully trusted.
Architecture Architecture
@ -23,7 +23,7 @@ Architecture
Proxmox Backup Server uses a `client-server model`_. The server stores the Proxmox Backup Server uses a `client-server model`_. The server stores the
backup data and provides an API to create backups and restore data. With the backup data and provides an API to create backups and restore data. With the
API it's also possible to manage disks and other server side resources. API, it's also possible to manage disks and other server-side resources.
The backup client uses this API to access the backed up data. With the command The backup client uses this API to access the backed up data. With the command
line tool ``proxmox-backup-client`` you can create backups and restore data. line tool ``proxmox-backup-client`` you can create backups and restore data.
@ -32,7 +32,7 @@ For QEMU_ with `Proxmox Virtual Environment`_ we deliver an integrated client.
A single backup is allowed to contain several archives. For example, when you A single backup is allowed to contain several archives. For example, when you
backup a :term:`virtual machine`, each disk is stored as a separate archive backup a :term:`virtual machine`, each disk is stored as a separate archive
inside that backup. The VM configuration itself is stored as an extra file. inside that backup. The VM configuration itself is stored as an extra file.
This way, it is easy to access and restore only important parts of the backup This way, it's easy to access and restore only important parts of the backup,
without the need to scan the whole backup. without the need to scan the whole backup.
@ -44,29 +44,29 @@ Main Features
:term:`container`\ s. :term:`container`\ s.
:Performance: The whole software stack is written in :term:`Rust`, :Performance: The whole software stack is written in :term:`Rust`,
to provide high speed and memory efficiency. in order to provide high speed and memory efficiency.
:Deduplication: Periodic backups produce large amounts of duplicate :Deduplication: Periodic backups produce large amounts of duplicate
data. The deduplication layer avoids redundancy and minimizes the used data. The deduplication layer avoids redundancy and minimizes the storage
storage space. space used.
:Incremental backups: Changes between backups are typically low. Reading and :Incremental backups: Changes between backups are typically low. Reading and
sending only the delta reduces storage and network impact of backups. sending only the delta reduces the storage and network impact of backups.
:Data Integrity: The built-in `SHA-256`_ checksum algorithm assures the :Data Integrity: The built-in `SHA-256`_ checksum algorithm ensures accuracy and
accuracy and consistency of your backups. consistency in your backups.
:Remote Sync: It is possible to efficiently synchronize data to remote :Remote Sync: It is possible to efficiently synchronize data to remote
sites. Only deltas containing new data are transferred. sites. Only deltas containing new data are transferred.
:Compression: The ultra fast Zstandard_ compression is able to compress :Compression: The ultra-fast Zstandard_ compression is able to compress
several gigabytes of data per second. several gigabytes of data per second.
:Encryption: Backups can be encrypted on the client-side using AES-256 in :Encryption: Backups can be encrypted on the client-side, using AES-256 in
Galois/Counter Mode (GCM_) mode. This authenticated encryption (AE_) mode Galois/Counter Mode (GCM_) mode. This authenticated encryption (AE_) mode
provides very high performance on modern hardware. provides very high performance on modern hardware.
:Web interface: Manage the Proxmox Backup Server with the integrated web-based :Web interface: Manage the Proxmox Backup Server with the integrated, web-based
user interface. user interface.
:Open Source: No secrets. Proxmox Backup Server is free and open-source :Open Source: No secrets. Proxmox Backup Server is free and open-source
@ -80,11 +80,11 @@ Reasons for Data Backup?
------------------------ ------------------------
The main purpose of a backup is to protect against data loss. Data loss can be The main purpose of a backup is to protect against data loss. Data loss can be
caused by faulty hardware but also by human error. caused by both faulty hardware and human error.
A common mistake is to accidentally delete a file or folder which is still A common mistake is to accidentally delete a file or folder which is still
required. Virtualization can even amplify this problem; it easily happens that required. Virtualization can even amplify this problem, as deleting a whole
a whole virtual machine is deleted by just pressing a single button. virtual machine can be as easy as pressing a single button.
For administrators, backups can serve as a useful toolkit for temporarily For administrators, backups can serve as a useful toolkit for temporarily
storing data. For example, it is common practice to perform full backups before storing data. For example, it is common practice to perform full backups before
@ -104,16 +104,16 @@ Software Stack
Proxmox Backup Server consists of multiple components: Proxmox Backup Server consists of multiple components:
* server-daemon providing, among others, a RESTfull API, super-fast * A server-daemon providing, among other things, a RESTfull API, super-fast
asynchronous tasks, lightweight usage statistic collection, scheduling asynchronous tasks, lightweight usage statistic collection, scheduling
events, strict separation of privileged and unprivileged execution events, strict separation of privileged and unprivileged execution
environments, ... environments
* JavaScript management webinterface * A JavaScript management web interface
* management CLI tool for the server (`proxmox-backup-manager`) * A management CLI tool for the server (`proxmox-backup-manager`)
* client CLI tool (`proxmox-backup-client`) to access the server easily from * A client CLI tool (`proxmox-backup-client`) to access the server easily from
any `Linux amd64` environment. any `Linux amd64` environment
Everything outside of the web interface is written in the Rust programming Aside from the web interface, everything is written in the Rust programming
language. language.
"The Rust programming language helps you write faster, more reliable software. "The Rust programming language helps you write faster, more reliable software.


@ -18,7 +18,7 @@ Run the following command to create an archive of a folder named ``source``:
.. code-block:: console .. code-block:: console
# pxar create archive.pxar source # pxar create archive.pxar /path/to/source
This will create a new archive called ``archive.pxar`` with the contents of the This will create a new archive called ``archive.pxar`` with the contents of the
``source`` folder. ``source`` folder.
@ -35,35 +35,34 @@ To alter this behavior and follow device boundaries, use the
``--all-file-systems`` flag. ``--all-file-systems`` flag.
It is possible to exclude certain files and/or folders from the archive by It is possible to exclude certain files and/or folders from the archive by
passing glob match patterns as additional parameters. Whenever a file is matched passing the ``--exclude`` parameter with ``gitignore``\-style match patterns.
by one of the patterns, you will get a warning stating that this file is skipped
and therefore not included in the archive.
For example, you can exclude all files ending in ``.txt`` from the archive For example, you can exclude all files ending in ``.txt`` from the archive
by running: by running:
.. code-block:: console .. code-block:: console
# pxar create archive.pxar source '**/*.txt' # pxar create archive.pxar /path/to/source --exclude '**/*.txt'
Be aware that the shell itself will try to expand all of the glob patterns before Be aware that the shell itself will try to expand all of the glob patterns before
invoking ``pxar``. invoking ``pxar``.
In order to avoid this, all globs have to be quoted correctly. In order to avoid this, all globs have to be quoted correctly.
It is possible to pass a list of match patterns to fulfill more complex It is possible to pass the ``--exclude`` parameter multiple times, in order to
file exclusion/inclusion behavior, although it is recommended to use the match more than one pattern. This allows you to use more complex
file exclusion/inclusion behavior. However, it is recommended to use
``.pxarexclude`` files instead for such cases. ``.pxarexclude`` files instead for such cases.
For example you might want to exclude all ``.txt`` files except for a specific For example you might want to exclude all ``.txt`` files except for a specific
one from the archive. This is achieved via the negated match pattern, prefixed one from the archive. This is achieved via the negated match pattern, prefixed
by ``!``. by ``!``.
All the glob pattern are relative to the ``source`` directory. All the glob patterns are relative to the ``source`` directory.
.. code-block:: console .. code-block:: console
# pxar create archive.pxar source '**/*.txt' '!/folder/file.txt' # pxar create archive.pxar /path/to/source --exclude '**/*.txt' --exclude '!/folder/file.txt'
.. NOTE:: The order of the glob match patterns matters as later ones win over .. NOTE:: The order of the glob match patterns matters as later ones override
previous ones. Permutations of the same patterns lead to different results. previous ones. Permutations of the same patterns lead to different results.
``pxar`` will store the list of glob match patterns passed as parameters via the ``pxar`` will store the list of glob match patterns passed as parameters via the


@ -4,6 +4,7 @@ use anyhow::{Error};
use chrono::{DateTime, Utc}; use chrono::{DateTime, Utc};
use proxmox_backup::api2::types::Userid;
use proxmox_backup::client::{HttpClient, HttpClientOptions, BackupReader}; use proxmox_backup::client::{HttpClient, HttpClientOptions, BackupReader};
pub struct DummyWriter { pub struct DummyWriter {
@ -27,7 +28,7 @@ async fn run() -> Result<(), Error> {
let host = "localhost"; let host = "localhost";
let username = "root@pam"; let username = Userid::root_userid();
let options = HttpClientOptions::new() let options = HttpClientOptions::new()
.interactive(true) .interactive(true)


@ -1,5 +1,6 @@
use anyhow::{Error}; use anyhow::{Error};
use proxmox_backup::api2::types::Userid;
use proxmox_backup::client::*; use proxmox_backup::client::*;
async fn upload_speed() -> Result<f64, Error> { async fn upload_speed() -> Result<f64, Error> {
@ -7,7 +8,7 @@ async fn upload_speed() -> Result<f64, Error> {
let host = "localhost"; let host = "localhost";
let datastore = "store2"; let datastore = "store2";
let username = "root@pam"; let username = Userid::root_userid();
let options = HttpClientOptions::new() let options = HttpClientOptions::new()
.interactive(true) .interactive(true)


@ -2,7 +2,7 @@ use anyhow::{bail, format_err, Error};
use serde_json::{json, Value}; use serde_json::{json, Value};
use proxmox::api::{api, RpcEnvironment, Permission, UserInformation}; use proxmox::api::{api, RpcEnvironment, Permission};
use proxmox::api::router::{Router, SubdirMap}; use proxmox::api::router::{Router, SubdirMap};
use proxmox::{sortable, identity}; use proxmox::{sortable, identity};
use proxmox::{http_err, list_subdirs_api_method}; use proxmox::{http_err, list_subdirs_api_method};
@ -23,7 +23,7 @@ pub mod role;
/// returns Ok(true) if a ticket has to be created /// returns Ok(true) if a ticket has to be created
/// and Ok(false) if not /// and Ok(false) if not
fn authenticate_user( fn authenticate_user(
username: &str, userid: &Userid,
password: &str, password: &str,
path: Option<String>, path: Option<String>,
privs: Option<String>, privs: Option<String>,
@ -31,7 +31,7 @@ fn authenticate_user(
) -> Result<bool, Error> { ) -> Result<bool, Error> {
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
if !user_info.is_active_user(&username) { if !user_info.is_active_user(&userid) {
bail!("user account disabled or expired."); bail!("user account disabled or expired.");
} }
@ -39,10 +39,10 @@ fn authenticate_user(
if password.starts_with("PBS:") { if password.starts_with("PBS:") {
if let Ok((_age, Some(ticket_username))) = tools::ticket::verify_rsa_ticket(public_auth_key(), "PBS", password, None, -300, ticket_lifetime) { if let Ok((_age, Some(ticket_username))) = tools::ticket::verify_rsa_ticket(public_auth_key(), "PBS", password, None, -300, ticket_lifetime) {
if ticket_username == username { if *userid == ticket_username {
return Ok(true); return Ok(true);
} else { } else {
bail!("ticket login failed - wrong username"); bail!("ticket login failed - wrong userid");
} }
} }
} else if password.starts_with("PBSTERM:") { } else if password.starts_with("PBSTERM:") {
@ -55,7 +55,7 @@ fn authenticate_user(
let port = port.unwrap(); let port = port.unwrap();
if let Ok((_age, _data)) = if let Ok((_age, _data)) =
tools::ticket::verify_term_ticket(public_auth_key(), &username, &path, port, password) tools::ticket::verify_term_ticket(public_auth_key(), &userid, &path, port, password)
{ {
for (name, privilege) in PRIVILEGES { for (name, privilege) in PRIVILEGES {
if *name == privilege_name { if *name == privilege_name {
@ -66,7 +66,7 @@ fn authenticate_user(
} }
} }
user_info.check_privs(username, &path_vec, *privilege, false)?; user_info.check_privs(userid, &path_vec, *privilege, false)?;
return Ok(false); return Ok(false);
} }
} }
@ -75,7 +75,7 @@ fn authenticate_user(
} }
} }
let _ = crate::auth::authenticate_user(username, password)?; let _ = crate::auth::authenticate_user(userid, password)?;
Ok(true) Ok(true)
} }
@ -83,7 +83,7 @@ fn authenticate_user(
input: { input: {
properties: { properties: {
username: { username: {
schema: PROXMOX_USER_ID_SCHEMA, type: Userid,
}, },
password: { password: {
schema: PASSWORD_SCHEMA, schema: PASSWORD_SCHEMA,
@ -130,7 +130,7 @@ fn authenticate_user(
/// ///
/// Returns: An authentication ticket with additional infos. /// Returns: An authentication ticket with additional infos.
fn create_ticket( fn create_ticket(
username: String, username: Userid,
password: String, password: String,
path: Option<String>, path: Option<String>,
privs: Option<String>, privs: Option<String>,
@ -156,7 +156,7 @@ fn create_ticket(
Err(err) => { Err(err) => {
let client_ip = "unknown"; // $rpcenv->get_client_ip() || ''; let client_ip = "unknown"; // $rpcenv->get_client_ip() || '';
log::error!("authentication failure; rhost={} user={} msg={}", client_ip, username, err.to_string()); log::error!("authentication failure; rhost={} user={} msg={}", client_ip, username, err.to_string());
Err(http_err!(UNAUTHORIZED, "permission check failed.".into())) Err(http_err!(UNAUTHORIZED, "permission check failed."))
} }
} }
} }
@ -165,7 +165,7 @@ fn create_ticket(
input: { input: {
properties: { properties: {
userid: { userid: {
schema: PROXMOX_USER_ID_SCHEMA, type: Userid,
}, },
password: { password: {
schema: PASSWORD_SCHEMA, schema: PASSWORD_SCHEMA,
@ -183,13 +183,15 @@ fn create_ticket(
/// Each user is allowed to change his own password. Superuser /// Each user is allowed to change his own password. Superuser
/// can change all passwords. /// can change all passwords.
fn change_password( fn change_password(
userid: String, userid: Userid,
password: String, password: String,
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let current_user = rpcenv.get_user() let current_user: Userid = rpcenv
.ok_or_else(|| format_err!("unknown user"))?; .get_user()
.ok_or_else(|| format_err!("unknown user"))?
.parse()?;
let mut allowed = userid == current_user; let mut allowed = userid == current_user;
@ -205,9 +207,8 @@ fn change_password(
bail!("you are not authorized to change the password."); bail!("you are not authorized to change the password.");
} }
let (username, realm) = crate::auth::parse_userid(&userid)?; let authenticator = crate::auth::lookup_authenticator(userid.realm())?;
let authenticator = crate::auth::lookup_authenticator(&realm)?; authenticator.store_password(userid.name(), &password)?;
authenticator.store_password(&username, &password)?;
Ok(Value::Null) Ok(Value::Null)
} }


@ -2,6 +2,7 @@ use anyhow::{bail, Error};
use ::serde::{Deserialize, Serialize}; use ::serde::{Deserialize, Serialize};
use proxmox::api::{api, Router, RpcEnvironment, Permission}; use proxmox::api::{api, Router, RpcEnvironment, Permission};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*; use crate::api2::types::*;
use crate::config::acl; use crate::config::acl;
@ -141,7 +142,7 @@ pub fn read_acl(
}, },
userid: { userid: {
optional: true, optional: true,
schema: PROXMOX_USER_ID_SCHEMA, type: Userid,
}, },
group: { group: {
optional: true, optional: true,
@ -167,14 +168,14 @@ pub fn update_acl(
path: String, path: String,
role: String, role: String,
propagate: Option<bool>, propagate: Option<bool>,
userid: Option<String>, userid: Option<Userid>,
group: Option<String>, group: Option<String>,
delete: Option<bool>, delete: Option<bool>,
digest: Option<String>, digest: Option<String>,
_rpcenv: &mut dyn RpcEnvironment, _rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> { ) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(acl::ACL_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(acl::ACL_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut tree, expected_digest) = acl::config()?; let (mut tree, expected_digest) = acl::config()?;
@ -192,7 +193,7 @@ pub fn update_acl(
} else if let Some(ref userid) = userid { } else if let Some(ref userid) = userid {
if !delete { // Note: we allow to delete non-existent users if !delete { // Note: we allow to delete non-existent users
let user_cfg = crate::config::user::cached_config()?; let user_cfg = crate::config::user::cached_config()?;
if user_cfg.sections.get(userid).is_none() { if user_cfg.sections.get(&userid.to_string()).is_none() {
bail!("no such user."); bail!("no such user.");
} }
} }
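
Every configuration endpoint touched by this series follows the same locking discipline after the switch to `proxmox::tools::fs::open_file_locked`: take the lock file with a timeout, keep the returned guard alive across the whole read-modify-write cycle, and let it drop when the handler returns. A condensed sketch of that pattern for the ACL config; `acl::save_config` is assumed from the surrounding module and does not appear in these hunks:

    use std::time::Duration;

    use anyhow::Error;
    use proxmox::tools::fs::open_file_locked;

    use crate::config::acl;

    fn update_acl_sketch() -> Result<(), Error> {
        // the guard must live for the whole read-modify-write cycle
        let _lock = open_file_locked(acl::ACL_CFG_LOCKFILE, Duration::new(10, 0))?;

        let (mut tree, _expected_digest) = acl::config()?;
        // ... apply the requested changes to `tree` ...
        acl::save_config(&tree)?;

        Ok(())
    } // `_lock` dropped here, releasing the file lock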


@ -3,6 +3,7 @@ use serde_json::Value;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission}; use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
use proxmox::api::schema::{Schema, StringSchema}; use proxmox::api::schema::{Schema, StringSchema};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*; use crate::api2::types::*;
use crate::config::user; use crate::config::user;
@ -48,7 +49,7 @@ pub fn list_users(
input: { input: {
properties: { properties: {
userid: { userid: {
schema: PROXMOX_USER_ID_SCHEMA, type: Userid,
}, },
comment: { comment: {
schema: SINGLE_LINE_COMMENT_SCHEMA, schema: SINGLE_LINE_COMMENT_SCHEMA,
@ -87,25 +88,24 @@ pub fn list_users(
/// Create new user. /// Create new user.
pub fn create_user(password: Option<String>, param: Value) -> Result<(), Error> { pub fn create_user(password: Option<String>, param: Value) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let user: user::User = serde_json::from_value(param)?; let user: user::User = serde_json::from_value(param)?;
let (mut config, _digest) = user::config()?; let (mut config, _digest) = user::config()?;
if let Some(_) = config.sections.get(&user.userid) { if let Some(_) = config.sections.get(user.userid.as_str()) {
bail!("user '{}' already exists.", user.userid); bail!("user '{}' already exists.", user.userid);
} }
let (username, realm) = crate::auth::parse_userid(&user.userid)?; let authenticator = crate::auth::lookup_authenticator(&user.userid.realm())?;
let authenticator = crate::auth::lookup_authenticator(&realm)?;
config.set_data(&user.userid, "user", &user)?; config.set_data(user.userid.as_str(), "user", &user)?;
user::save_config(&config)?; user::save_config(&config)?;
if let Some(password) = password { if let Some(password) = password {
authenticator.store_password(&username, &password)?; authenticator.store_password(user.userid.name(), &password)?;
} }
Ok(()) Ok(())
@ -115,7 +115,7 @@ pub fn create_user(password: Option<String>, param: Value) -> Result<(), Error>
input: { input: {
properties: { properties: {
userid: { userid: {
schema: PROXMOX_USER_ID_SCHEMA, type: Userid,
}, },
}, },
}, },
@ -128,9 +128,9 @@ pub fn create_user(password: Option<String>, param: Value) -> Result<(), Error>
}, },
)] )]
/// Read user configuration data. /// Read user configuration data.
pub fn read_user(userid: String, mut rpcenv: &mut dyn RpcEnvironment) -> Result<user::User, Error> { pub fn read_user(userid: Userid, mut rpcenv: &mut dyn RpcEnvironment) -> Result<user::User, Error> {
let (config, digest) = user::config()?; let (config, digest) = user::config()?;
let user = config.lookup("user", &userid)?; let user = config.lookup("user", userid.as_str())?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into(); rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(user) Ok(user)
} }
@ -140,7 +140,7 @@ pub fn read_user(userid: String, mut rpcenv: &mut dyn RpcEnvironment) -> Result<
input: { input: {
properties: { properties: {
userid: { userid: {
schema: PROXMOX_USER_ID_SCHEMA, type: Userid,
}, },
comment: { comment: {
optional: true, optional: true,
@ -182,7 +182,7 @@ pub fn read_user(userid: String, mut rpcenv: &mut dyn RpcEnvironment) -> Result<
)] )]
/// Update user configuration. /// Update user configuration.
pub fn update_user( pub fn update_user(
userid: String, userid: Userid,
comment: Option<String>, comment: Option<String>,
enable: Option<bool>, enable: Option<bool>,
expire: Option<i64>, expire: Option<i64>,
@ -193,7 +193,7 @@ pub fn update_user(
digest: Option<String>, digest: Option<String>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, expected_digest) = user::config()?; let (mut config, expected_digest) = user::config()?;
@ -202,7 +202,7 @@ pub fn update_user(
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?; crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
} }
let mut data: user::User = config.lookup("user", &userid)?; let mut data: user::User = config.lookup("user", userid.as_str())?;
if let Some(comment) = comment { if let Some(comment) = comment {
let comment = comment.trim().to_string(); let comment = comment.trim().to_string();
@ -222,9 +222,8 @@ pub fn update_user(
} }
if let Some(password) = password { if let Some(password) = password {
let (username, realm) = crate::auth::parse_userid(&userid)?; let authenticator = crate::auth::lookup_authenticator(userid.realm())?;
let authenticator = crate::auth::lookup_authenticator(&realm)?; authenticator.store_password(userid.name(), &password)?;
authenticator.store_password(&username, &password)?;
} }
if let Some(firstname) = firstname { if let Some(firstname) = firstname {
@ -238,7 +237,7 @@ pub fn update_user(
data.email = if email.is_empty() { None } else { Some(email) }; data.email = if email.is_empty() { None } else { Some(email) };
} }
config.set_data(&userid, "user", &data)?; config.set_data(userid.as_str(), "user", &data)?;
user::save_config(&config)?; user::save_config(&config)?;
@ -250,7 +249,7 @@ pub fn update_user(
input: { input: {
properties: { properties: {
userid: { userid: {
schema: PROXMOX_USER_ID_SCHEMA, type: Userid,
}, },
digest: { digest: {
optional: true, optional: true,
@ -263,9 +262,9 @@ pub fn update_user(
}, },
)] )]
/// Remove a user from the configuration file. /// Remove a user from the configuration file.
pub fn delete_user(userid: String, digest: Option<String>) -> Result<(), Error> { pub fn delete_user(userid: Userid, digest: Option<String>) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, expected_digest) = user::config()?; let (mut config, expected_digest) = user::config()?;
@ -274,8 +273,8 @@ pub fn delete_user(userid: String, digest: Option<String>) -> Result<(), Error>
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?; crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
} }
match config.sections.get(&userid) { match config.sections.get(userid.as_str()) {
Some(_) => { config.sections.remove(&userid); }, Some(_) => { config.sections.remove(userid.as_str()); },
None => bail!("user '{}' does not exist.", userid), None => bail!("user '{}' does not exist.", userid),
} }
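
All password handling in this file (and in `change_password` above) makes the same substitution: instead of re-splitting the textual user id, realm and name are taken straight from the typed `Userid`. Condensed from the hunks above:

    // before: split the "name@realm" string manually
    let (username, realm) = crate::auth::parse_userid(&userid)?;
    let authenticator = crate::auth::lookup_authenticator(&realm)?;
    authenticator.store_password(&username, &password)?;

    // after: the typed Userid already knows its parts
    let authenticator = crate::auth::lookup_authenticator(userid.realm())?;
    authenticator.store_password(userid.name(), &password)?;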


@ -10,7 +10,8 @@ use serde_json::{json, Value};
use proxmox::api::{ use proxmox::api::{
api, ApiResponseFuture, ApiHandler, ApiMethod, Router, api, ApiResponseFuture, ApiHandler, ApiMethod, Router,
RpcEnvironment, RpcEnvironmentType, Permission, UserInformation}; RpcEnvironment, RpcEnvironmentType, Permission
};
use proxmox::api::router::SubdirMap; use proxmox::api::router::SubdirMap;
use proxmox::api::schema::*; use proxmox::api::schema::*;
use proxmox::tools::fs::{replace_file, CreateOptions}; use proxmox::tools::fs::{replace_file, CreateOptions};
@ -36,7 +37,11 @@ use crate::config::acl::{
PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_BACKUP,
}; };
fn check_backup_owner(store: &DataStore, group: &BackupGroup, userid: &str) -> Result<(), Error> { fn check_backup_owner(
store: &DataStore,
group: &BackupGroup,
userid: &Userid,
) -> Result<(), Error> {
let owner = store.get_owner(group)?; let owner = store.get_owner(group)?;
if &owner != userid { if &owner != userid {
bail!("backup owner check failed ({} != {})", userid, owner); bail!("backup owner check failed ({} != {})", userid, owner);
@ -44,9 +49,12 @@ fn check_backup_owner(store: &DataStore, group: &BackupGroup, userid: &str) -> R
Ok(()) Ok(())
} }
fn read_backup_index(store: &DataStore, backup_dir: &BackupDir) -> Result<Vec<BackupContent>, Error> { fn read_backup_index(
store: &DataStore,
backup_dir: &BackupDir,
) -> Result<(BackupManifest, Vec<BackupContent>), Error> {
let (manifest, manifest_crypt_mode, index_size) = store.load_manifest(backup_dir)?; let (manifest, index_size) = store.load_manifest(backup_dir)?;
let mut result = Vec::new(); let mut result = Vec::new();
for item in manifest.files() { for item in manifest.files() {
@ -59,18 +67,22 @@ fn read_backup_index(store: &DataStore, backup_dir: &BackupDir) -> Result<Vec<Ba
result.push(BackupContent { result.push(BackupContent {
filename: MANIFEST_BLOB_NAME.to_string(), filename: MANIFEST_BLOB_NAME.to_string(),
crypt_mode: Some(manifest_crypt_mode), crypt_mode: match manifest.signature {
Some(_) => Some(CryptMode::SignOnly),
None => Some(CryptMode::None),
},
size: Some(index_size), size: Some(index_size),
}); });
Ok(result) Ok((manifest, result))
} }
fn get_all_snapshot_files( fn get_all_snapshot_files(
store: &DataStore, store: &DataStore,
info: &BackupInfo, info: &BackupInfo,
) -> Result<Vec<BackupContent>, Error> { ) -> Result<(BackupManifest, Vec<BackupContent>), Error> {
let mut files = read_backup_index(&store, &info.backup_dir)?;
let (manifest, mut files) = read_backup_index(&store, &info.backup_dir)?;
let file_set = files.iter().fold(HashSet::new(), |mut acc, item| { let file_set = files.iter().fold(HashSet::new(), |mut acc, item| {
acc.insert(item.filename.clone()); acc.insert(item.filename.clone());
@ -86,7 +98,7 @@ fn get_all_snapshot_files(
}); });
} }
Ok(files) Ok((manifest, files))
} }
fn group_backups(backup_list: Vec<BackupInfo>) -> HashMap<String, Vec<BackupInfo>> { fn group_backups(backup_list: Vec<BackupInfo>) -> HashMap<String, Vec<BackupInfo>> {
@ -130,9 +142,9 @@ fn list_groups(
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<GroupListItem>, Error> { ) -> Result<Vec<GroupListItem>, Error> {
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&username, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
@ -153,7 +165,7 @@ fn list_groups(
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0; let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let owner = datastore.get_owner(group)?; let owner = datastore.get_owner(group)?;
if !list_all { if !list_all {
if owner != username { continue; } if owner != userid { continue; }
} }
let result_item = GroupListItem { let result_item = GroupListItem {
@ -211,20 +223,22 @@ pub fn list_snapshot_files(
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<BackupContent>, Error> { ) -> Result<Vec<BackupContent>, Error> {
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&username, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
let snapshot = BackupDir::new(backup_type, backup_id, backup_time); let snapshot = BackupDir::new(backup_type, backup_id, backup_time);
let allowed = (user_privs & (PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_READ)) != 0; let allowed = (user_privs & (PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_READ)) != 0;
if !allowed { check_backup_owner(&datastore, snapshot.group(), &username)?; } if !allowed { check_backup_owner(&datastore, snapshot.group(), &userid)?; }
let info = BackupInfo::new(&datastore.base_path(), snapshot)?; let info = BackupInfo::new(&datastore.base_path(), snapshot)?;
get_all_snapshot_files(&datastore, &info) let (_manifest, files) = get_all_snapshot_files(&datastore, &info)?;
Ok(files)
} }
#[api( #[api(
@ -261,18 +275,18 @@ fn delete_snapshot(
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&username, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let snapshot = BackupDir::new(backup_type, backup_id, backup_time); let snapshot = BackupDir::new(backup_type, backup_id, backup_time);
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
let allowed = (user_privs & PRIV_DATASTORE_MODIFY) != 0; let allowed = (user_privs & PRIV_DATASTORE_MODIFY) != 0;
if !allowed { check_backup_owner(&datastore, snapshot.group(), &username)?; } if !allowed { check_backup_owner(&datastore, snapshot.group(), &userid)?; }
datastore.remove_backup_dir(&snapshot)?; datastore.remove_backup_dir(&snapshot, false)?;
Ok(Value::Null) Ok(Value::Null)
} }
@ -317,9 +331,9 @@ pub fn list_snapshots (
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<SnapshotListItem>, Error> { ) -> Result<Vec<SnapshotListItem>, Error> {
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&username, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
@ -342,27 +356,36 @@ pub fn list_snapshots (
let owner = datastore.get_owner(group)?; let owner = datastore.get_owner(group)?;
if !list_all { if !list_all {
if owner != username { continue; } if owner != userid { continue; }
} }
let mut size = None; let mut size = None;
-        let files = match get_all_snapshot_files(&datastore, &info) {
-            Ok(files) => {
+        let (comment, files) = match get_all_snapshot_files(&datastore, &info) {
+            Ok((manifest, files)) => {
                 size = Some(files.iter().map(|x| x.size.unwrap_or(0)).sum());
-                files
+                // extract the first line from notes
+                let comment: Option<String> = manifest.unprotected["notes"]
+                    .as_str()
+                    .and_then(|notes| notes.lines().next())
+                    .map(String::from);
+                (comment, files)
             },
             Err(err) => {
                 eprintln!("error during snapshot file listing: '{}'", err);
-                info
-                    .files
-                    .iter()
-                    .map(|x| BackupContent {
-                        filename: x.to_string(),
-                        size: None,
-                        crypt_mode: None,
-                    })
-                    .collect()
+                (
+                    None,
+                    info
+                        .files
+                        .iter()
+                        .map(|x| BackupContent {
+                            filename: x.to_string(),
+                            size: None,
+                            crypt_mode: None,
+                        })
+                        .collect()
+                )
             },
         };
@ -370,6 +393,7 @@ pub fn list_snapshots (
backup_type: group.backup_type().to_string(), backup_type: group.backup_type().to_string(),
backup_id: group.backup_id().to_string(), backup_id: group.backup_id().to_string(),
backup_time: info.backup_dir.backup_time().timestamp(), backup_time: info.backup_dir.backup_time().timestamp(),
comment,
files, files,
size, size,
owner: Some(owner), owner: Some(owner),
@ -468,24 +492,38 @@ pub fn verify(
_ => bail!("parameters do not spefify a backup group or snapshot"), _ => bail!("parameters do not spefify a backup group or snapshot"),
} }
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false }; let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let upid_str = WorkerTask::new_thread( let upid_str = WorkerTask::new_thread(
"verify", Some(worker_id.clone()), &username, to_stdout, move |worker| "verify",
{ Some(worker_id.clone()),
let success = if let Some(backup_dir) = backup_dir { userid,
verify_backup_dir(&datastore, &backup_dir, &worker)? to_stdout,
move |worker| {
let failed_dirs = if let Some(backup_dir) = backup_dir {
let mut verified_chunks = HashSet::with_capacity(1024*16);
let mut corrupt_chunks = HashSet::with_capacity(64);
let mut res = Vec::new();
if !verify_backup_dir(&datastore, &backup_dir, &mut verified_chunks, &mut corrupt_chunks, &worker)? {
res.push(backup_dir.to_string());
}
res
} else if let Some(backup_group) = backup_group { } else if let Some(backup_group) = backup_group {
verify_backup_group(&datastore, &backup_group, &worker)? verify_backup_group(&datastore, &backup_group, &worker)?
} else { } else {
verify_all_backups(&datastore, &worker)? verify_all_backups(&datastore, &worker)?
}; };
if !success { if failed_dirs.len() > 0 {
worker.log("Failed to verify following snapshots:");
for dir in failed_dirs {
worker.log(format!("\t{}", dir));
}
bail!("verfication failed - please check the log for details"); bail!("verfication failed - please check the log for details");
} }
Ok(()) Ok(())
})?; },
)?;
Ok(json!(upid_str)) Ok(json!(upid_str))
} }
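
The verify task now reports the names of failed snapshots instead of a single boolean, and it threads shared `verified_chunks`/`corrupt_chunks` sets into `verify_backup_dir` so a chunk referenced by several indexes is only read once. The group and whole-datastore variants are not shown in these hunks; the following is a sketch of how a caller might reuse the same sets across a list of snapshots, assuming only the `verify_backup_dir` signature used above:

    use std::collections::HashSet;

    use anyhow::Error;

    // hypothetical helper, not part of this changeset
    fn verify_snapshots_sketch(
        datastore: &DataStore,
        snapshots: &[BackupDir], // however the group's snapshots were enumerated
        worker: &WorkerTask,
    ) -> Result<Vec<String>, Error> {
        let mut verified_chunks = HashSet::with_capacity(1024 * 16); // shared across snapshots
        let mut corrupt_chunks = HashSet::with_capacity(64);
        let mut failed_dirs = Vec::new();

        for backup_dir in snapshots {
            if !verify_backup_dir(datastore, backup_dir, &mut verified_chunks, &mut corrupt_chunks, worker)? {
                failed_dirs.push(backup_dir.to_string());
            }
        }
        Ok(failed_dirs)
    }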
@ -570,9 +608,9 @@ fn prune(
let backup_type = tools::required_string_param(&param, "backup-type")?; let backup_type = tools::required_string_param(&param, "backup-type")?;
let backup_id = tools::required_string_param(&param, "backup-id")?; let backup_id = tools::required_string_param(&param, "backup-id")?;
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&username, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let dry_run = param["dry-run"].as_bool().unwrap_or(false); let dry_run = param["dry-run"].as_bool().unwrap_or(false);
@ -581,7 +619,7 @@ fn prune(
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
let allowed = (user_privs & PRIV_DATASTORE_MODIFY) != 0; let allowed = (user_privs & PRIV_DATASTORE_MODIFY) != 0;
if !allowed { check_backup_owner(&datastore, &group, &username)?; } if !allowed { check_backup_owner(&datastore, &group, &userid)?; }
let prune_options = PruneOptions { let prune_options = PruneOptions {
keep_last: param["keep-last"].as_u64(), keep_last: param["keep-last"].as_u64(),
@ -623,7 +661,7 @@ fn prune(
// We use a WorkerTask just to have a task log, but run synchrounously // We use a WorkerTask just to have a task log, but run synchrounously
let worker = WorkerTask::new("prune", Some(worker_id), "root@pam", true)?; let worker = WorkerTask::new("prune", Some(worker_id), Userid::root_userid().clone(), true)?;
let result = try_block! { let result = try_block! {
if keep_all { if keep_all {
@ -660,7 +698,7 @@ fn prune(
})); }));
if !(dry_run || keep) { if !(dry_run || keep) {
datastore.remove_backup_dir(&info.backup_dir)?; datastore.remove_backup_dir(&info.backup_dir, true)?;
} }
} }
@ -705,11 +743,15 @@ fn start_garbage_collection(
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false }; let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let upid_str = WorkerTask::new_thread( let upid_str = WorkerTask::new_thread(
"garbage_collection", Some(store.clone()), "root@pam", to_stdout, move |worker| "garbage_collection",
{ Some(store.clone()),
Userid::root_userid().clone(),
to_stdout,
move |worker| {
worker.log(format!("starting garbage collection on store {}", store)); worker.log(format!("starting garbage collection on store {}", store));
datastore.garbage_collection(&worker) datastore.garbage_collection(&worker)
})?; },
)?;
Ok(json!(upid_str)) Ok(json!(upid_str))
} }
@ -773,13 +815,13 @@ fn get_datastore_list(
let (config, _digest) = datastore::config()?; let (config, _digest) = datastore::config()?;
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let mut list = Vec::new(); let mut list = Vec::new();
for (store, (_, data)) in &config.sections { for (store, (_, data)) in &config.sections {
let user_privs = user_info.lookup_privs(&username, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let allowed = (user_privs & (PRIV_DATASTORE_AUDIT| PRIV_DATASTORE_BACKUP)) != 0; let allowed = (user_privs & (PRIV_DATASTORE_AUDIT| PRIV_DATASTORE_BACKUP)) != 0;
if allowed { if allowed {
let mut entry = json!({ "store": store }); let mut entry = json!({ "store": store });
@ -824,9 +866,9 @@ fn download_file(
let store = tools::required_string_param(&param, "store")?; let store = tools::required_string_param(&param, "store")?;
let datastore = DataStore::lookup_datastore(store)?; let datastore = DataStore::lookup_datastore(store)?;
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&username, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let file_name = tools::required_string_param(&param, "file-name")?.to_owned(); let file_name = tools::required_string_param(&param, "file-name")?.to_owned();
@ -837,7 +879,7 @@ fn download_file(
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time);
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &username)?; } if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
println!("Download {} from {} ({}/{})", file_name, store, backup_dir, file_name); println!("Download {} from {} ({}/{})", file_name, store, backup_dir, file_name);
@ -846,8 +888,8 @@ fn download_file(
path.push(&file_name); path.push(&file_name);
let file = tokio::fs::File::open(&path) let file = tokio::fs::File::open(&path)
.map_err(|err| http_err!(BAD_REQUEST, format!("File open failed: {}", err))) .await
.await?; .map_err(|err| http_err!(BAD_REQUEST, "File open failed: {}", err))?;
let payload = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new()) let payload = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new())
.map_ok(|bytes| hyper::body::Bytes::from(bytes.freeze())) .map_ok(|bytes| hyper::body::Bytes::from(bytes.freeze()))
@ -897,9 +939,9 @@ fn download_file_decoded(
let store = tools::required_string_param(&param, "store")?; let store = tools::required_string_param(&param, "store")?;
let datastore = DataStore::lookup_datastore(store)?; let datastore = DataStore::lookup_datastore(store)?;
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&username, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let file_name = tools::required_string_param(&param, "file-name")?.to_owned(); let file_name = tools::required_string_param(&param, "file-name")?.to_owned();
@ -910,9 +952,9 @@ fn download_file_decoded(
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time);
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &username)?; } if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
let files = read_backup_index(&datastore, &backup_dir)?; let (manifest, files) = read_backup_index(&datastore, &backup_dir)?;
for file in files { for file in files {
if file.filename == file_name && file.crypt_mode == Some(CryptMode::Encrypt) { if file.filename == file_name && file.crypt_mode == Some(CryptMode::Encrypt) {
bail!("cannot decode '{}' - is encrypted", file_name); bail!("cannot decode '{}' - is encrypted", file_name);
@ -931,8 +973,10 @@ fn download_file_decoded(
"didx" => { "didx" => {
let index = DynamicIndexReader::open(&path) let index = DynamicIndexReader::open(&path)
.map_err(|err| format_err!("unable to read dynamic index '{:?}' - {}", &path, err))?; .map_err(|err| format_err!("unable to read dynamic index '{:?}' - {}", &path, err))?;
let (csum, size) = index.compute_csum();
manifest.verify_file(&file_name, &csum, size)?;
let chunk_reader = LocalChunkReader::new(datastore, None); let chunk_reader = LocalChunkReader::new(datastore, None, CryptMode::None);
let reader = AsyncIndexReader::new(index, chunk_reader); let reader = AsyncIndexReader::new(index, chunk_reader);
Body::wrap_stream(AsyncReaderStream::new(reader) Body::wrap_stream(AsyncReaderStream::new(reader)
.map_err(move |err| { .map_err(move |err| {
@ -944,7 +988,10 @@ fn download_file_decoded(
let index = FixedIndexReader::open(&path) let index = FixedIndexReader::open(&path)
.map_err(|err| format_err!("unable to read fixed index '{:?}' - {}", &path, err))?; .map_err(|err| format_err!("unable to read fixed index '{:?}' - {}", &path, err))?;
let chunk_reader = LocalChunkReader::new(datastore, None); let (csum, size) = index.compute_csum();
manifest.verify_file(&file_name, &csum, size)?;
let chunk_reader = LocalChunkReader::new(datastore, None, CryptMode::None);
let reader = AsyncIndexReader::new(index, chunk_reader); let reader = AsyncIndexReader::new(index, chunk_reader);
Body::wrap_stream(AsyncReaderStream::with_buffer_size(reader, 4*1024*1024) Body::wrap_stream(AsyncReaderStream::with_buffer_size(reader, 4*1024*1024)
.map_err(move |err| { .map_err(move |err| {
@ -954,7 +1001,9 @@ fn download_file_decoded(
}, },
"blob" => { "blob" => {
let file = std::fs::File::open(&path) let file = std::fs::File::open(&path)
.map_err(|err| http_err!(BAD_REQUEST, format!("File open failed: {}", err)))?; .map_err(|err| http_err!(BAD_REQUEST, "File open failed: {}", err))?;
// FIXME: load full blob to verify index checksum?
Body::wrap_stream( Body::wrap_stream(
WrappedReaderStream::new(DataBlobReader::new(file, None)?) WrappedReaderStream::new(DataBlobReader::new(file, None)?)
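
Every decoded-download path (the dynamic and fixed index arms above, the catalog and the pxar file download below) now runs the same guard sequence before streaming data: refuse encrypted files, open the index, recompute its checksum, and check it against the manifest. Condensed from those hunks:

    let (manifest, files) = read_backup_index(&datastore, &backup_dir)?;
    for file in files {
        if file.filename == file_name && file.crypt_mode == Some(CryptMode::Encrypt) {
            bail!("cannot decode '{}' - is encrypted", file_name);
        }
    }

    let index = DynamicIndexReader::open(&path)
        .map_err(|err| format_err!("unable to read dynamic index '{:?}' - {}", &path, err))?;

    // the index content has to match what the manifest promises
    let (csum, size) = index.compute_csum();
    manifest.verify_file(&file_name, &csum, size)?;

    // only then is the data handed to an unencrypted chunk reader
    let chunk_reader = LocalChunkReader::new(datastore, None, CryptMode::None);
    let reader = AsyncIndexReader::new(index, chunk_reader);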
@ -1015,8 +1064,8 @@ fn upload_backup_log(
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time);
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
check_backup_owner(&datastore, backup_dir.group(), &username)?; check_backup_owner(&datastore, backup_dir.group(), &userid)?;
let mut path = datastore.base_path(); let mut path = datastore.base_path();
path.push(backup_dir.relative_path()); path.push(backup_dir.relative_path());
@ -1037,11 +1086,10 @@ fn upload_backup_log(
}) })
.await?; .await?;
let blob = DataBlob::from_raw(data)?; // always verify blob/CRC at server side
// always verify CRC at server side let blob = DataBlob::load_from_reader(&mut &data[..])?;
blob.verify_crc()?;
let raw_data = blob.raw_data(); replace_file(&path, blob.raw_data(), CreateOptions::new())?;
replace_file(&path, raw_data, CreateOptions::new())?;
// fixme: use correct formatter // fixme: use correct formatter
Ok(crate::server::formatter::json_response(Ok(Value::Null))) Ok(crate::server::formatter::json_response(Ok(Value::Null)))
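
Blob uploads (here and in the backup writer environment further below) no longer build a `DataBlob` from raw bytes and verify the CRC as a separate step; `load_from_reader` validates the blob while reading it, so the check can no longer be forgotten:

    // before: two steps, with the verification easy to miss
    let blob = DataBlob::from_raw(data)?;
    blob.verify_crc()?;

    // after: format and CRC are checked while loading
    let blob = DataBlob::load_from_reader(&mut &data[..])?;
    replace_file(&path, blob.raw_data(), CreateOptions::new())?;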
@ -1086,23 +1134,35 @@ fn catalog(
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&username, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time);
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &username)?; } if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
let file_name = CATALOG_NAME;
let (manifest, files) = read_backup_index(&datastore, &backup_dir)?;
for file in files {
if file.filename == file_name && file.crypt_mode == Some(CryptMode::Encrypt) {
bail!("cannot decode '{}' - is encrypted", file_name);
}
}
let mut path = datastore.base_path(); let mut path = datastore.base_path();
path.push(backup_dir.relative_path()); path.push(backup_dir.relative_path());
path.push(CATALOG_NAME); path.push(file_name);
let index = DynamicIndexReader::open(&path) let index = DynamicIndexReader::open(&path)
.map_err(|err| format_err!("unable to read dynamic index '{:?}' - {}", &path, err))?; .map_err(|err| format_err!("unable to read dynamic index '{:?}' - {}", &path, err))?;
let chunk_reader = LocalChunkReader::new(datastore, None); let (csum, size) = index.compute_csum();
manifest.verify_file(&file_name, &csum, size)?;
let chunk_reader = LocalChunkReader::new(datastore, None, CryptMode::None);
let reader = BufferedDynamicReader::new(index, chunk_reader); let reader = BufferedDynamicReader::new(index, chunk_reader);
let mut catalog_reader = CatalogReader::new(reader); let mut catalog_reader = CatalogReader::new(reader);
@ -1185,9 +1245,9 @@ fn pxar_file_download(
let store = tools::required_string_param(&param, "store")?; let store = tools::required_string_param(&param, "store")?;
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&username, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let filepath = tools::required_string_param(&param, "filepath")?.to_owned(); let filepath = tools::required_string_param(&param, "filepath")?.to_owned();
@ -1198,10 +1258,7 @@ fn pxar_file_download(
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time);
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &username)?; } if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
let mut path = datastore.base_path();
path.push(backup_dir.relative_path());
let mut components = base64::decode(&filepath)?; let mut components = base64::decode(&filepath)?;
if components.len() > 0 && components[0] == '/' as u8 { if components.len() > 0 && components[0] == '/' as u8 {
@ -1209,15 +1266,26 @@ fn pxar_file_download(
} }
let mut split = components.splitn(2, |c| *c == '/' as u8); let mut split = components.splitn(2, |c| *c == '/' as u8);
let pxar_name = split.next().unwrap(); let pxar_name = std::str::from_utf8(split.next().unwrap())?;
let file_path = split.next().ok_or(format_err!("filepath looks strange '{}'", filepath))?; let file_path = split.next().ok_or(format_err!("filepath looks strange '{}'", filepath))?;
let (manifest, files) = read_backup_index(&datastore, &backup_dir)?;
for file in files {
if file.filename == pxar_name && file.crypt_mode == Some(CryptMode::Encrypt) {
bail!("cannot decode '{}' - is encrypted", pxar_name);
}
}
path.push(OsStr::from_bytes(&pxar_name)); let mut path = datastore.base_path();
path.push(backup_dir.relative_path());
path.push(pxar_name);
let index = DynamicIndexReader::open(&path) let index = DynamicIndexReader::open(&path)
.map_err(|err| format_err!("unable to read dynamic index '{:?}' - {}", &path, err))?; .map_err(|err| format_err!("unable to read dynamic index '{:?}' - {}", &path, err))?;
let chunk_reader = LocalChunkReader::new(datastore, None); let (csum, size) = index.compute_csum();
manifest.verify_file(&pxar_name, &csum, size)?;
let chunk_reader = LocalChunkReader::new(datastore, None, CryptMode::None);
let reader = BufferedDynamicReader::new(index, chunk_reader); let reader = BufferedDynamicReader::new(index, chunk_reader);
let archive_size = reader.archive_size(); let archive_size = reader.archive_size();
let reader = LocalDynamicReadAt::new(reader); let reader = LocalDynamicReadAt::new(reader);
@ -1293,6 +1361,108 @@ fn get_rrd_stats(
) )
} }
#[api(
input: {
properties: {
store: {
schema: DATASTORE_SCHEMA,
},
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
},
"backup-time": {
schema: BACKUP_TIME_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_READ | PRIV_DATASTORE_BACKUP, true),
},
)]
/// Get "notes" for a specific backup
fn get_notes(
store: String,
backup_type: String,
backup_id: String,
backup_time: i64,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time);
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
let manifest = datastore.load_manifest_json(&backup_dir)?;
let notes = manifest["unprotected"]["notes"]
.as_str()
.unwrap_or("");
Ok(String::from(notes))
}
#[api(
input: {
properties: {
store: {
schema: DATASTORE_SCHEMA,
},
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
},
"backup-time": {
schema: BACKUP_TIME_SCHEMA,
},
notes: {
description: "A multiline text.",
},
},
},
access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_MODIFY, true),
},
)]
/// Set "notes" for a specific backup
fn set_notes(
store: String,
backup_type: String,
backup_id: String,
backup_time: i64,
notes: String,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time);
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
let mut manifest = datastore.load_manifest_json(&backup_dir)?;
manifest["unprotected"]["notes"] = notes.into();
datastore.store_manifest(&backup_dir, manifest)?;
Ok(())
}
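
Notes live in the manifest's `unprotected` section, so they are meant to be editable without invalidating a signed manifest; the snapshot list only surfaces the first line as the `comment` column (see the `list_snapshots` hunk above). A small round-trip sketch using the calls shown here:

    // set_notes, condensed: store multi-line notes in the manifest
    let mut manifest = datastore.load_manifest_json(&backup_dir)?;
    manifest["unprotected"]["notes"] = "nightly VM backup\nverified manually".into();
    datastore.store_manifest(&backup_dir, manifest)?;

    // get_notes returns the full text ...
    let notes = datastore.load_manifest_json(&backup_dir)?["unprotected"]["notes"]
        .as_str()
        .unwrap_or("")
        .to_string();

    // ... while list_snapshots keeps only the first line as the comment
    let comment: Option<String> = notes.lines().next().map(String::from);
    assert_eq!(comment.as_deref(), Some("nightly VM backup"));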
#[sortable] #[sortable]
const DATASTORE_INFO_SUBDIRS: SubdirMap = &[ const DATASTORE_INFO_SUBDIRS: SubdirMap = &[
( (
@ -1326,6 +1496,12 @@ const DATASTORE_INFO_SUBDIRS: SubdirMap = &[
&Router::new() &Router::new()
.get(&API_METHOD_LIST_GROUPS) .get(&API_METHOD_LIST_GROUPS)
), ),
(
"notes",
&Router::new()
.get(&API_METHOD_GET_NOTES)
.put(&API_METHOD_SET_NOTES)
),
( (
"prune", "prune",
&Router::new() &Router::new()


@ -1,6 +1,7 @@
use std::collections::HashMap;
use anyhow::{Error}; use anyhow::{Error};
use serde_json::Value; use serde_json::Value;
use std::collections::HashMap;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment}; use proxmox::api::{api, ApiMethod, Router, RpcEnvironment};
use proxmox::api::router::SubdirMap; use proxmox::api::router::SubdirMap;
@ -92,16 +93,23 @@ async fn run_sync_job(
let (config, _digest) = sync::config()?; let (config, _digest) = sync::config()?;
let sync_job: SyncJobConfig = config.lookup("sync", &id)?; let sync_job: SyncJobConfig = config.lookup("sync", &id)?;
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let delete = sync_job.remove_vanished.unwrap_or(true); let delete = sync_job.remove_vanished.unwrap_or(true);
let (client, src_repo, tgt_store) = get_pull_parameters(&sync_job.store, &sync_job.remote, &sync_job.remote_store).await?; let (client, src_repo, tgt_store) = get_pull_parameters(&sync_job.store, &sync_job.remote, &sync_job.remote_store).await?;
let upid_str = WorkerTask::spawn("syncjob", Some(id.clone()), &username.clone(), false, move |worker| async move { let upid_str = WorkerTask::spawn("syncjob", Some(id.clone()), userid, false, move |worker| async move {
worker.log(format!("sync job '{}' start", &id)); worker.log(format!("sync job '{}' start", &id));
crate::client::pull::pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, String::from("backup@pam")).await?; crate::client::pull::pull_store(
&worker,
&client,
&src_repo,
tgt_store.clone(),
delete,
Userid::backup_userid().clone(),
).await?;
worker.log(format!("sync job '{}' end", &id)); worker.log(format!("sync job '{}' end", &id));


@ -16,6 +16,7 @@ use crate::backup::*;
use crate::api2::types::*; use crate::api2::types::*;
use crate::config::acl::PRIV_DATASTORE_BACKUP; use crate::config::acl::PRIV_DATASTORE_BACKUP;
use crate::config::cached_user_info::CachedUserInfo; use crate::config::cached_user_info::CachedUserInfo;
use crate::tools::fs::lock_dir_noblock;
mod environment; mod environment;
use environment::*; use environment::*;
@ -56,12 +57,12 @@ fn upgrade_to_backup_protocol(
async move { async move {
let debug = param["debug"].as_bool().unwrap_or(false); let debug = param["debug"].as_bool().unwrap_or(false);
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let store = tools::required_string_param(&param, "store")?.to_owned(); let store = tools::required_string_param(&param, "store")?.to_owned();
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
user_info.check_privs(&username, &["datastore", &store], PRIV_DATASTORE_BACKUP, false)?; user_info.check_privs(&userid, &["datastore", &store], PRIV_DATASTORE_BACKUP, false)?;
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
@ -88,30 +89,36 @@ async move {
let env_type = rpcenv.env_type(); let env_type = rpcenv.env_type();
let backup_group = BackupGroup::new(backup_type, backup_id); let backup_group = BackupGroup::new(backup_type, backup_id);
-    let owner = datastore.create_backup_group(&backup_group, &username)?;
+    // lock backup group to only allow one backup per group at a time
+    let (owner, _group_guard) = datastore.create_locked_backup_group(&backup_group, &userid)?;
     // permission check
-    if owner != username { // only the owner is allowed to create additional snapshots
-        bail!("backup owner check failed ({} != {})", username, owner);
+    if owner != userid { // only the owner is allowed to create additional snapshots
+        bail!("backup owner check failed ({} != {})", userid, owner);
     }
-    let last_backup = BackupInfo::last_backup(&datastore.base_path(), &backup_group).unwrap_or(None);
-    let backup_dir = BackupDir::new_with_group(backup_group, backup_time);
-    if let Some(last) = &last_backup {
+    let last_backup = BackupInfo::last_backup(&datastore.base_path(), &backup_group, true).unwrap_or(None);
+    let backup_dir = BackupDir::new_with_group(backup_group.clone(), backup_time);
+    let _last_guard = if let Some(last) = &last_backup {
         if backup_dir.backup_time() <= last.backup_dir.backup_time() {
             bail!("backup timestamp is older than last backup.");
         }
-        // fixme: abort if last backup is still running - howto test?
-        // Idea: write upid into a file inside snapshot dir. then test if
-        // it is still running here.
-    }
-    let (path, is_new) = datastore.create_backup_dir(&backup_dir)?;
+        // lock last snapshot to prevent forgetting/pruning it during backup
+        let full_path = datastore.snapshot_path(&last.backup_dir);
+        Some(lock_dir_noblock(&full_path, "snapshot", "base snapshot is already locked by another operation")?)
+    } else {
+        None
+    };
+    let (path, is_new, _snap_guard) = datastore.create_locked_backup_dir(&backup_dir)?;
     if !is_new { bail!("backup directory already exists."); }
-    WorkerTask::spawn("backup", Some(worker_id), &username.clone(), true, move |worker| {
+    WorkerTask::spawn("backup", Some(worker_id), userid.clone(), true, move |worker| {
         let mut env = BackupEnvironment::new(
-            env_type, username.clone(), worker.clone(), datastore, backup_dir);
+            env_type, userid, worker.clone(), datastore, backup_dir);
env.debug = debug; env.debug = debug;
env.last_backup = last_backup; env.last_backup = last_backup;
@ -144,6 +151,11 @@ async move {
.map(|_| Err(format_err!("task aborted"))); .map(|_| Err(format_err!("task aborted")));
async move { async move {
// keep flock until task ends
let _group_guard = _group_guard;
let _snap_guard = _snap_guard;
let _last_guard = _last_guard;
let res = select!{ let res = select!{
req = req_fut => req, req = req_fut => req,
abrt = abort_future => abrt, abrt = abort_future => abrt,

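The new `lock_dir_noblock` helper (added to `src/tools/fs.rs` by this series) provides the per-group and per-snapshot flocks used above; the returned handle is kept alive as `_group_guard`/`_snap_guard`/`_last_guard` until the backup task ends. Its actual implementation is not part of these hunks; a plausible sketch, assuming it opens the directory and takes a non-blocking exclusive flock:

    use std::fs::File;
    use std::os::unix::io::AsRawFd;
    use std::path::Path;

    use anyhow::{format_err, Error};
    use nix::fcntl::{flock, FlockArg};

    // Sketch only - the real helper may differ in signature and error handling.
    pub fn lock_dir_noblock_sketch(path: &Path, what: &str, msg: &str) -> Result<File, Error> {
        // opening a directory read-only is enough to flock() it on Linux
        let handle = File::open(path)
            .map_err(|err| format_err!("unable to open {} directory {:?} for locking - {}", what, path, err))?;

        flock(handle.as_raw_fd(), FlockArg::LockExclusiveNonblock)
            .map_err(|err| format_err!("unable to acquire lock on {} directory {:?} - {} - {}", what, path, msg, err))?;

        // the flock is released when the returned File is dropped
        Ok(handle)
    }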

@ -1,18 +1,21 @@
use anyhow::{bail, Error}; use anyhow::{bail, format_err, Error};
use std::sync::{Arc, Mutex}; use std::sync::{Arc, Mutex};
use std::collections::HashMap; use std::collections::HashMap;
use ::serde::{Serialize};
use serde_json::{json, Value}; use serde_json::{json, Value};
use proxmox::tools::digest_to_hex; use proxmox::tools::digest_to_hex;
use proxmox::tools::fs::{replace_file, CreateOptions}; use proxmox::tools::fs::{replace_file, CreateOptions};
use proxmox::api::{RpcEnvironment, RpcEnvironmentType}; use proxmox::api::{RpcEnvironment, RpcEnvironmentType};
use crate::server::WorkerTask; use crate::api2::types::Userid;
use crate::backup::*; use crate::backup::*;
use crate::server::WorkerTask;
use crate::server::formatter::*; use crate::server::formatter::*;
use hyper::{Body, Response}; use hyper::{Body, Response};
#[derive(Copy, Clone, Serialize)]
struct UploadStatistic { struct UploadStatistic {
count: u64, count: u64,
size: u64, size: u64,
@ -31,6 +34,19 @@ impl UploadStatistic {
} }
} }
impl std::ops::Add for UploadStatistic {
type Output = Self;
fn add(self, other: Self) -> Self {
Self {
count: self.count + other.count,
size: self.size + other.size,
compressed_size: self.compressed_size + other.compressed_size,
duplicates: self.duplicates + other.duplicates,
}
}
}
struct DynamicWriterState { struct DynamicWriterState {
name: String, name: String,
index: DynamicIndexWriter, index: DynamicIndexWriter,
@ -57,6 +73,8 @@ struct SharedBackupState {
dynamic_writers: HashMap<usize, DynamicWriterState>, dynamic_writers: HashMap<usize, DynamicWriterState>,
fixed_writers: HashMap<usize, FixedWriterState>, fixed_writers: HashMap<usize, FixedWriterState>,
known_chunks: HashMap<[u8;32], u32>, known_chunks: HashMap<[u8;32], u32>,
backup_size: u64, // sums up size of all files
backup_stat: UploadStatistic,
} }
impl SharedBackupState { impl SharedBackupState {
@ -82,7 +100,7 @@ impl SharedBackupState {
pub struct BackupEnvironment { pub struct BackupEnvironment {
env_type: RpcEnvironmentType, env_type: RpcEnvironmentType,
result_attributes: Value, result_attributes: Value,
user: String, user: Userid,
pub debug: bool, pub debug: bool,
pub formatter: &'static OutputFormatter, pub formatter: &'static OutputFormatter,
pub worker: Arc<WorkerTask>, pub worker: Arc<WorkerTask>,
@ -95,7 +113,7 @@ pub struct BackupEnvironment {
impl BackupEnvironment { impl BackupEnvironment {
pub fn new( pub fn new(
env_type: RpcEnvironmentType, env_type: RpcEnvironmentType,
user: String, user: Userid,
worker: Arc<WorkerTask>, worker: Arc<WorkerTask>,
datastore: Arc<DataStore>, datastore: Arc<DataStore>,
backup_dir: BackupDir, backup_dir: BackupDir,
@ -108,6 +126,8 @@ impl BackupEnvironment {
dynamic_writers: HashMap::new(), dynamic_writers: HashMap::new(),
fixed_writers: HashMap::new(), fixed_writers: HashMap::new(),
known_chunks: HashMap::new(), known_chunks: HashMap::new(),
backup_size: 0,
backup_stat: UploadStatistic::new(),
}; };
Self { Self {
@ -353,7 +373,6 @@ impl BackupEnvironment {
let expected_csum = data.index.close()?; let expected_csum = data.index.close()?;
println!("server checksum {:?} client: {:?}", expected_csum, csum);
if csum != expected_csum { if csum != expected_csum {
bail!("dynamic writer '{}' close failed - got unexpected checksum", data.name); bail!("dynamic writer '{}' close failed - got unexpected checksum", data.name);
} }
@ -361,6 +380,8 @@ impl BackupEnvironment {
self.log_upload_stat(&data.name, &csum, &uuid, size, chunk_count, &data.upload_stat); self.log_upload_stat(&data.name, &csum, &uuid, size, chunk_count, &data.upload_stat);
state.file_counter += 1; state.file_counter += 1;
state.backup_size += size;
state.backup_stat = state.backup_stat + data.upload_stat;
Ok(()) Ok(())
} }
@ -395,7 +416,6 @@ impl BackupEnvironment {
let uuid = data.index.uuid; let uuid = data.index.uuid;
let expected_csum = data.index.close()?; let expected_csum = data.index.close()?;
println!("server checksum: {:?} client: {:?} (incremental: {})", expected_csum, csum, data.incremental);
if csum != expected_csum { if csum != expected_csum {
bail!("fixed writer '{}' close failed - got unexpected checksum", data.name); bail!("fixed writer '{}' close failed - got unexpected checksum", data.name);
} }
@ -403,6 +423,8 @@ impl BackupEnvironment {
self.log_upload_stat(&data.name, &expected_csum, &uuid, size, chunk_count, &data.upload_stat); self.log_upload_stat(&data.name, &expected_csum, &uuid, size, chunk_count, &data.upload_stat);
state.file_counter += 1; state.file_counter += 1;
state.backup_size += size;
state.backup_stat = state.backup_stat + data.upload_stat;
Ok(()) Ok(())
} }
@ -416,9 +438,8 @@ impl BackupEnvironment {
let blob_len = data.len(); let blob_len = data.len();
let orig_len = data.len(); // fixme: let orig_len = data.len(); // fixme:
let blob = DataBlob::from_raw(data)?; // always verify blob/CRC at server side
// always verify CRC at server side let blob = DataBlob::load_from_reader(&mut &data[..])?;
blob.verify_crc()?;
let raw_data = blob.raw_data(); let raw_data = blob.raw_data();
replace_file(&path, raw_data, CreateOptions::new())?; replace_file(&path, raw_data, CreateOptions::new())?;
@ -427,6 +448,8 @@ impl BackupEnvironment {
let mut state = self.state.lock().unwrap(); let mut state = self.state.lock().unwrap();
state.file_counter += 1; state.file_counter += 1;
state.backup_size += orig_len as u64;
state.backup_stat.size += blob_len as u64;
Ok(()) Ok(())
} }
@ -446,6 +469,28 @@ impl BackupEnvironment {
bail!("backup does not contain valid files (file count == 0)"); bail!("backup does not contain valid files (file count == 0)");
} }
// check manifest
let mut manifest = self.datastore.load_manifest_json(&self.backup_dir)
.map_err(|err| format_err!("unable to load manifest blob - {}", err))?;
let stats = serde_json::to_value(state.backup_stat)?;
manifest["unprotected"]["chunk_upload_stats"] = stats;
self.datastore.store_manifest(&self.backup_dir, manifest)
.map_err(|err| format_err!("unable to store manifest blob - {}", err))?;
if let Some(base) = &self.last_backup {
let path = self.datastore.snapshot_path(&base.backup_dir);
if !path.exists() {
bail!(
"base snapshot {} was removed during backup, cannot finish as chunks might be missing",
base.backup_dir
);
}
}
// marks the backup as successful
state.finished = true; state.finished = true;
Ok(()) Ok(())
@ -480,7 +525,7 @@ impl BackupEnvironment {
let mut state = self.state.lock().unwrap(); let mut state = self.state.lock().unwrap();
state.finished = true; state.finished = true;
self.datastore.remove_backup_dir(&self.backup_dir)?; self.datastore.remove_backup_dir(&self.backup_dir, true)?;
Ok(()) Ok(())
} }
@ -505,7 +550,7 @@ impl RpcEnvironment for BackupEnvironment {
} }
fn get_user(&self) -> Option<String> { fn get_user(&self) -> Option<String> {
Some(self.user.clone()) Some(self.user.to_string())
} }
} }


@ -243,7 +243,7 @@ pub const API_METHOD_UPLOAD_BLOB: ApiMethod = ApiMethod::new(
&sorted!([ &sorted!([
("file-name", false, &crate::api2::types::BACKUP_ARCHIVE_NAME_SCHEMA), ("file-name", false, &crate::api2::types::BACKUP_ARCHIVE_NAME_SCHEMA),
("encoded-size", false, &IntegerSchema::new("Encoded blob size.") ("encoded-size", false, &IntegerSchema::new("Encoded blob size.")
.minimum((std::mem::size_of::<DataBlobHeader>() as isize) +1) .minimum(std::mem::size_of::<DataBlobHeader>() as isize)
.maximum(1024*1024*16+(std::mem::size_of::<EncryptedDataBlobHeader>() as isize)) .maximum(1024*1024*16+(std::mem::size_of::<EncryptedDataBlobHeader>() as isize))
.schema() .schema()
) )


@ -5,6 +5,7 @@ use serde_json::Value;
use ::serde::{Deserialize, Serialize}; use ::serde::{Deserialize, Serialize};
use proxmox::api::{api, Router, RpcEnvironment, Permission}; use proxmox::api::{api, Router, RpcEnvironment, Permission};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*; use crate::api2::types::*;
use crate::backup::*; use crate::backup::*;
@ -99,7 +100,7 @@ pub fn list_datastores(
/// Create new datastore config. /// Create new datastore config.
pub fn create_datastore(param: Value) -> Result<(), Error> { pub fn create_datastore(param: Value) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let datastore: datastore::DataStoreConfig = serde_json::from_value(param.clone())?; let datastore: datastore::DataStoreConfig = serde_json::from_value(param.clone())?;
@ -253,7 +254,7 @@ pub fn update_datastore(
digest: Option<String>, digest: Option<String>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
// pass/compare digest // pass/compare digest
let (mut config, expected_digest) = datastore::config()?; let (mut config, expected_digest) = datastore::config()?;
@ -327,7 +328,7 @@ pub fn update_datastore(
/// Remove a datastore configuration. /// Remove a datastore configuration.
pub fn delete_datastore(name: String, digest: Option<String>) -> Result<(), Error> { pub fn delete_datastore(name: String, digest: Option<String>) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, expected_digest) = datastore::config()?; let (mut config, expected_digest) = datastore::config()?;


@ -4,6 +4,7 @@ use ::serde::{Deserialize, Serialize};
use base64; use base64;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission}; use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*; use crate::api2::types::*;
use crate::config::remote; use crate::config::remote;
@ -60,7 +61,7 @@ pub fn list_remotes(
schema: DNS_NAME_OR_IP_SCHEMA, schema: DNS_NAME_OR_IP_SCHEMA,
}, },
userid: { userid: {
schema: PROXMOX_USER_ID_SCHEMA, type: Userid,
}, },
password: { password: {
schema: remote::REMOTE_PASSWORD_SCHEMA, schema: remote::REMOTE_PASSWORD_SCHEMA,
@ -78,7 +79,7 @@ pub fn list_remotes(
/// Create new remote. /// Create new remote.
pub fn create_remote(password: String, param: Value) -> Result<(), Error> { pub fn create_remote(password: String, param: Value) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let mut data = param.clone(); let mut data = param.clone();
data["password"] = Value::from(base64::encode(password.as_bytes())); data["password"] = Value::from(base64::encode(password.as_bytes()));
@ -154,7 +155,7 @@ pub enum DeletableProperty {
}, },
userid: { userid: {
optional: true, optional: true,
schema: PROXMOX_USER_ID_SCHEMA, type: Userid,
}, },
password: { password: {
optional: true, optional: true,
@ -187,14 +188,14 @@ pub fn update_remote(
name: String, name: String,
comment: Option<String>, comment: Option<String>,
host: Option<String>, host: Option<String>,
userid: Option<String>, userid: Option<Userid>,
password: Option<String>, password: Option<String>,
fingerprint: Option<String>, fingerprint: Option<String>,
delete: Option<Vec<DeletableProperty>>, delete: Option<Vec<DeletableProperty>>,
digest: Option<String>, digest: Option<String>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, expected_digest) = remote::config()?; let (mut config, expected_digest) = remote::config()?;
@ -255,7 +256,7 @@ pub fn update_remote(
/// Remove a remote from the configuration file. /// Remove a remote from the configuration file.
pub fn delete_remote(name: String, digest: Option<String>) -> Result<(), Error> { pub fn delete_remote(name: String, digest: Option<String>) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, expected_digest) = remote::config()?; let (mut config, expected_digest) = remote::config()?;


@ -3,6 +3,7 @@ use serde_json::Value;
use ::serde::{Deserialize, Serialize}; use ::serde::{Deserialize, Serialize};
use proxmox::api::{api, Router, RpcEnvironment}; use proxmox::api::{api, Router, RpcEnvironment};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*; use crate::api2::types::*;
use crate::config::sync::{self, SyncJobConfig}; use crate::config::sync::{self, SyncJobConfig};
@ -68,7 +69,7 @@ pub fn list_sync_jobs(
/// Create a new sync job. /// Create a new sync job.
pub fn create_sync_job(param: Value) -> Result<(), Error> { pub fn create_sync_job(param: Value) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let sync_job: sync::SyncJobConfig = serde_json::from_value(param.clone())?; let sync_job: sync::SyncJobConfig = serde_json::from_value(param.clone())?;
@ -184,7 +185,7 @@ pub fn update_sync_job(
digest: Option<String>, digest: Option<String>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
// pass/compare digest // pass/compare digest
let (mut config, expected_digest) = sync::config()?; let (mut config, expected_digest) = sync::config()?;
@ -247,7 +248,7 @@ pub fn update_sync_job(
/// Remove a sync job configuration /// Remove a sync job configuration
pub fn delete_sync_job(id: String, digest: Option<String>) -> Result<(), Error> { pub fn delete_sync_job(id: String, digest: Option<String>) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, expected_digest) = sync::config()?; let (mut config, expected_digest) = sync::config()?;

View File

@ -1,18 +1,19 @@
use std::path::PathBuf; use std::path::PathBuf;
use anyhow::Error; use anyhow::Error;
use futures::*; use futures::stream::TryStreamExt;
use hyper::{Body, Response, StatusCode, header}; use hyper::{Body, Response, StatusCode, header};
use proxmox::http_err;
use proxmox::http_bail;
pub async fn create_download_response(path: PathBuf) -> Result<Response<Body>, Error> { pub async fn create_download_response(path: PathBuf) -> Result<Response<Body>, Error> {
let file = tokio::fs::File::open(path.clone()) let file = match tokio::fs::File::open(path.clone()).await {
.map_err(move |err| { Ok(file) => file,
match err.kind() { Err(ref err) if err.kind() == std::io::ErrorKind::NotFound => {
std::io::ErrorKind::NotFound => http_err!(NOT_FOUND, format!("open file {:?} failed - not found", path.clone())), http_bail!(NOT_FOUND, "open file {:?} failed - not found", path);
_ => http_err!(BAD_REQUEST, format!("open file {:?} failed: {}", path.clone(), err)), }
} Err(err) => http_bail!(BAD_REQUEST, "open file {:?} failed: {}", path, err),
}) };
.await?;
let payload = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new()) let payload = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new())
.map_ok(|bytes| hyper::body::Bytes::from(bytes.freeze())); .map_ok(|bytes| hyper::body::Bytes::from(bytes.freeze()));

View File

@ -2,10 +2,7 @@ use std::net::TcpListener;
use std::os::unix::io::AsRawFd; use std::os::unix::io::AsRawFd;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use futures::{ use futures::future::{FutureExt, TryFutureExt};
future::{FutureExt, TryFutureExt},
try_join,
};
use hyper::body::Body; use hyper::body::Body;
use hyper::http::request::Parts; use hyper::http::request::Parts;
use hyper::upgrade::Upgraded; use hyper::upgrade::Upgraded;
@ -28,15 +25,17 @@ use crate::tools;
pub mod disks; pub mod disks;
pub mod dns; pub mod dns;
mod journal;
pub mod network; pub mod network;
pub mod tasks;
pub(crate) mod rrd; pub(crate) mod rrd;
mod apt;
mod journal;
mod services; mod services;
mod status; mod status;
mod subscription; mod subscription;
mod apt;
mod syslog; mod syslog;
pub mod tasks;
mod time; mod time;
pub const SHELL_CMD_SCHEMA: Schema = StringSchema::new("The command to run.") pub const SHELL_CMD_SCHEMA: Schema = StringSchema::new("The command to run.")
@ -91,12 +90,12 @@ async fn termproxy(
cmd: Option<String>, cmd: Option<String>,
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let userid = rpcenv let userid: Userid = rpcenv
.get_user() .get_user()
.ok_or_else(|| format_err!("unknown user"))?; .ok_or_else(|| format_err!("unknown user"))?
let (username, realm) = crate::auth::parse_userid(&userid)?; .parse()?;
if realm != "pam" { if userid.realm() != "pam" {
bail!("only pam users can use the console"); bail!("only pam users can use the console");
} }
@ -134,10 +133,11 @@ async fn termproxy(
_ => bail!("invalid command"), _ => bail!("invalid command"),
}; };
let username = userid.name().to_owned();
let upid = WorkerTask::spawn( let upid = WorkerTask::spawn(
"termproxy", "termproxy",
None, None,
&username, userid,
false, false,
move |worker| async move { move |worker| async move {
// move inside the worker so that it survives and does not close the port // move inside the worker so that it survives and does not close the port
@ -169,9 +169,9 @@ async fn termproxy(
let mut cmd = tokio::process::Command::new("/usr/bin/termproxy"); let mut cmd = tokio::process::Command::new("/usr/bin/termproxy");
cmd.args(&arguments); cmd.args(&arguments)
cmd.stdout(std::process::Stdio::piped()); .stdout(std::process::Stdio::piped())
cmd.stderr(std::process::Stdio::piped()); .stderr(std::process::Stdio::piped());
let mut child = cmd.spawn().expect("error executing termproxy"); let mut child = cmd.spawn().expect("error executing termproxy");
@ -184,7 +184,7 @@ async fn termproxy(
while let Some(line) = reader.next_line().await? { while let Some(line) = reader.next_line().await? {
worker_stdout.log(line); worker_stdout.log(line);
} }
Ok(()) Ok::<(), Error>(())
}; };
let worker_stderr = worker.clone(); let worker_stderr = worker.clone();
@ -193,21 +193,48 @@ async fn termproxy(
while let Some(line) = reader.next_line().await? { while let Some(line) = reader.next_line().await? {
worker_stderr.warn(line); worker_stderr.warn(line);
} }
Ok(()) Ok::<(), Error>(())
}; };
let (exit_code, _, _) = try_join!(child, stdout_fut, stderr_fut)?; let mut needs_kill = false;
if !exit_code.success() { let res = tokio::select!{
match exit_code.code() { res = &mut child => {
Some(code) => bail!("termproxy exited with {}", code), let exit_code = res?;
None => bail!("termproxy exited by signal"), if !exit_code.success() {
match exit_code.code() {
Some(code) => bail!("termproxy exited with {}", code),
None => bail!("termproxy exited by signal"),
}
}
Ok(())
},
res = stdout_fut => res,
res = stderr_fut => res,
res = worker.abort_future() => {
needs_kill = true;
res.map_err(Error::from)
}
};
if needs_kill {
if res.is_ok() {
child.kill()?;
child.await?;
return Ok(());
}
if let Err(err) = child.kill() {
worker.warn(format!("error killing termproxy: {}", err));
} else if let Err(err) = child.await {
worker.warn(format!("error awaiting termproxy: {}", err));
} }
} }
Ok(()) res
}, },
)?; )?;
// FIXME: We're returning the user NAME only?
Ok(json!({ Ok(json!({
"user": username, "user": username,
"ticket": ticket, "ticket": ticket,
@ -245,14 +272,14 @@ fn upgrade_to_websocket(
rpcenv: Box<dyn RpcEnvironment>, rpcenv: Box<dyn RpcEnvironment>,
) -> ApiResponseFuture { ) -> ApiResponseFuture {
async move { async move {
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let ticket = tools::required_string_param(&param, "vncticket")?.to_owned(); let ticket = tools::required_string_param(&param, "vncticket")?.to_owned();
let port: u16 = tools::required_integer_param(&param, "port")? as u16; let port: u16 = tools::required_integer_param(&param, "port")? as u16;
// will be checked again by termproxy // will be checked again by termproxy
tools::ticket::verify_term_ticket( tools::ticket::verify_term_ticket(
crate::auth_helpers::public_auth_key(), crate::auth_helpers::public_auth_key(),
&username, &userid,
&"/system", &"/system",
port, port,
&ticket, &ticket,
@ -260,7 +287,7 @@ fn upgrade_to_websocket(
let (ws, response) = WebSocket::new(parts.headers)?; let (ws, response) = WebSocket::new(parts.headers)?;
tokio::spawn(async move { crate::server::spawn_internal_task(async move {
let conn: Upgraded = match req_body.on_upgrade().map_err(Error::from).await { let conn: Upgraded = match req_body.on_upgrade().map_err(Error::from).await {
Ok(upgraded) => upgraded, Ok(upgraded) => upgraded,
_ => bail!("error"), _ => bail!("error"),

View File

@ -9,7 +9,7 @@ use proxmox::api::router::{Router, SubdirMap};
use crate::server::WorkerTask; use crate::server::WorkerTask;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY}; use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::api2::types::{APTUpdateInfo, NODE_SCHEMA, UPID_SCHEMA}; use crate::api2::types::{APTUpdateInfo, NODE_SCHEMA, Userid, UPID_SCHEMA};
const_regex! { const_regex! {
VERSION_EPOCH_REGEX = r"^\d+:"; VERSION_EPOCH_REGEX = r"^\d+:";
@ -233,11 +233,11 @@ pub fn apt_update_database(
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> { ) -> Result<String, Error> {
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false }; let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let quiet = quiet.unwrap_or(API_METHOD_APT_UPDATE_DATABASE_PARAM_DEFAULT_QUIET); let quiet = quiet.unwrap_or(API_METHOD_APT_UPDATE_DATABASE_PARAM_DEFAULT_QUIET);
let upid_str = WorkerTask::new_thread("aptupdate", None, &username.clone(), to_stdout, move |worker| { let upid_str = WorkerTask::new_thread("aptupdate", None, userid, to_stdout, move |worker| {
if !quiet { worker.log("starting apt-get update") } if !quiet { worker.log("starting apt-get update") }
// TODO: set proxy /etc/apt/apt.conf.d/76pbsproxy like PVE // TODO: set proxy /etc/apt/apt.conf.d/76pbsproxy like PVE

View File

@ -13,7 +13,7 @@ use crate::tools::disks::{
}; };
use crate::server::WorkerTask; use crate::server::WorkerTask;
use crate::api2::types::{UPID_SCHEMA, NODE_SCHEMA, BLOCKDEVICE_NAME_SCHEMA}; use crate::api2::types::{Userid, UPID_SCHEMA, NODE_SCHEMA, BLOCKDEVICE_NAME_SCHEMA};
pub mod directory; pub mod directory;
pub mod zfs; pub mod zfs;
@ -140,7 +140,7 @@ pub fn initialize_disk(
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false }; let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let info = get_disk_usage_info(&disk, true)?; let info = get_disk_usage_info(&disk, true)?;
@ -149,7 +149,7 @@ pub fn initialize_disk(
} }
let upid_str = WorkerTask::new_thread( let upid_str = WorkerTask::new_thread(
"diskinit", Some(disk.clone()), &username.clone(), to_stdout, move |worker| "diskinit", Some(disk.clone()), userid, to_stdout, move |worker|
{ {
worker.log(format!("initialize disk {}", disk)); worker.log(format!("initialize disk {}", disk));

View File

@ -133,7 +133,7 @@ pub fn create_datastore_disk(
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false }; let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let info = get_disk_usage_info(&disk, true)?; let info = get_disk_usage_info(&disk, true)?;
@ -142,7 +142,7 @@ pub fn create_datastore_disk(
} }
let upid_str = WorkerTask::new_thread( let upid_str = WorkerTask::new_thread(
"dircreate", Some(name.clone()), &username.clone(), to_stdout, move |worker| "dircreate", Some(name.clone()), userid, to_stdout, move |worker|
{ {
worker.log(format!("create datastore '{}' on disk {}", name, disk)); worker.log(format!("create datastore '{}' on disk {}", name, disk));

View File

@ -254,7 +254,7 @@ pub fn create_zpool(
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false }; let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let add_datastore = add_datastore.unwrap_or(false); let add_datastore = add_datastore.unwrap_or(false);
@ -314,7 +314,7 @@ pub fn create_zpool(
} }
let upid_str = WorkerTask::new_thread( let upid_str = WorkerTask::new_thread(
"zfscreate", Some(name.clone()), &username.clone(), to_stdout, move |worker| "zfscreate", Some(name.clone()), userid, to_stdout, move |worker|
{ {
worker.log(format!("create {:?} zpool '{}' on devices '{}'", raidlevel, name, devices_text)); worker.log(format!("create {:?} zpool '{}' on devices '{}'", raidlevel, name, devices_text));

View File

@ -4,6 +4,7 @@ use ::serde::{Deserialize, Serialize};
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission}; use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
use proxmox::api::schema::parse_property_string; use proxmox::api::schema::parse_property_string;
use proxmox::tools::fs::open_file_locked;
use crate::config::network::{self, NetworkConfig}; use crate::config::network::{self, NetworkConfig};
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY}; use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
@ -230,7 +231,7 @@ pub fn create_interface(
let interface_type = crate::tools::required_string_param(&param, "type")?; let interface_type = crate::tools::required_string_param(&param, "type")?;
let interface_type: NetworkInterfaceType = serde_json::from_value(interface_type.into())?; let interface_type: NetworkInterfaceType = serde_json::from_value(interface_type.into())?;
let _lock = crate::tools::open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, _digest) = network::config()?; let (mut config, _digest) = network::config()?;
@ -463,7 +464,7 @@ pub fn update_interface(
param: Value, param: Value,
) -> Result<(), Error> { ) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, expected_digest) = network::config()?; let (mut config, expected_digest) = network::config()?;
@ -586,7 +587,7 @@ pub fn update_interface(
/// Remove network interface configuration. /// Remove network interface configuration.
pub fn delete_interface(iface: String, digest: Option<String>) -> Result<(), Error> { pub fn delete_interface(iface: String, digest: Option<String>) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, expected_digest) = network::config()?; let (mut config, expected_digest) = network::config()?;
@ -624,9 +625,9 @@ pub async fn reload_network_config(
network::assert_ifupdown2_installed()?; network::assert_ifupdown2_installed()?;
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let upid_str = WorkerTask::spawn("srvreload", Some(String::from("networking")), &username.clone(), true, |_worker| async { let upid_str = WorkerTask::spawn("srvreload", Some(String::from("networking")), userid, true, |_worker| async {
let _ = std::fs::rename(network::NETWORK_INTERFACES_NEW_FILENAME, network::NETWORK_INTERFACES_FILENAME); let _ = std::fs::rename(network::NETWORK_INTERFACES_NEW_FILENAME, network::NETWORK_INTERFACES_FILENAME);

View File

@ -185,13 +185,14 @@ fn run_service_command(service: &str, cmd: &str) -> Result<Value, Error> {
// fixme: run background worker (fork_worker) ??? // fixme: run background worker (fork_worker) ???
match cmd { let cmd = match cmd {
"start"|"stop"|"restart"|"reload" => {}, "start"|"stop"|"restart"=> cmd,
"reload" => "try-reload-or-restart", // some services do not implement reload
_ => bail!("unknown service command '{}'", cmd), _ => bail!("unknown service command '{}'", cmd),
} };
if service == "proxmox-backup" && cmd != "restart" { if service == "proxmox-backup" && cmd == "stop" {
bail!("invalid service cmd '{} {}'", service, cmd); bail!("invalid service cmd '{} {}' cannot stop essential service!", service, cmd);
} }
let real_service_name = real_service_name(service); let real_service_name = real_service_name(service);

View File

@ -4,7 +4,7 @@ use std::io::{BufRead, BufReader};
use anyhow::{Error}; use anyhow::{Error};
use serde_json::{json, Value}; use serde_json::{json, Value};
use proxmox::api::{api, Router, RpcEnvironment, Permission, UserInformation}; use proxmox::api::{api, Router, RpcEnvironment, Permission};
use proxmox::api::router::SubdirMap; use proxmox::api::router::SubdirMap;
use proxmox::{identity, list_subdirs_api_method, sortable}; use proxmox::{identity, list_subdirs_api_method, sortable};
@ -84,11 +84,11 @@ async fn get_task_status(
let upid = extract_upid(&param)?; let upid = extract_upid(&param)?;
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
if username != upid.username { if userid != upid.userid {
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
user_info.check_privs(&username, &["system", "tasks"], PRIV_SYS_AUDIT, false)?; user_info.check_privs(&userid, &["system", "tasks"], PRIV_SYS_AUDIT, false)?;
} }
let mut result = json!({ let mut result = json!({
@ -99,7 +99,7 @@ async fn get_task_status(
"starttime": upid.starttime, "starttime": upid.starttime,
"type": upid.worker_type, "type": upid.worker_type,
"id": upid.worker_id, "id": upid.worker_id,
"user": upid.username, "user": upid.userid,
}); });
if crate::server::worker_is_active(&upid).await? { if crate::server::worker_is_active(&upid).await? {
@ -161,11 +161,11 @@ async fn read_task_log(
let upid = extract_upid(&param)?; let upid = extract_upid(&param)?;
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
if username != upid.username { if userid != upid.userid {
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
user_info.check_privs(&username, &["system", "tasks"], PRIV_SYS_AUDIT, false)?; user_info.check_privs(&userid, &["system", "tasks"], PRIV_SYS_AUDIT, false)?;
} }
let test_status = param["test-status"].as_bool().unwrap_or(false); let test_status = param["test-status"].as_bool().unwrap_or(false);
@ -234,11 +234,11 @@ fn stop_task(
let upid = extract_upid(&param)?; let upid = extract_upid(&param)?;
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
if username != upid.username { if userid != upid.userid {
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
user_info.check_privs(&username, &["system", "tasks"], PRIV_SYS_MODIFY, false)?; user_info.check_privs(&userid, &["system", "tasks"], PRIV_SYS_MODIFY, false)?;
} }
server::abort_worker_async(upid); server::abort_worker_async(upid);
@ -281,7 +281,7 @@ fn stop_task(
default: false, default: false,
}, },
userfilter: { userfilter: {
optional:true, optional: true,
type: String, type: String,
description: "Only list tasks from this user.", description: "Only list tasks from this user.",
}, },
@ -307,9 +307,9 @@ pub fn list_tasks(
mut rpcenv: &mut dyn RpcEnvironment, mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<TaskListItem>, Error> { ) -> Result<Vec<TaskListItem>, Error> {
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&username, &["system", "tasks"]); let user_privs = user_info.lookup_privs(&userid, &["system", "tasks"]);
let list_all = (user_privs & PRIV_SYS_AUDIT) != 0; let list_all = (user_privs & PRIV_SYS_AUDIT) != 0;
@ -324,11 +324,11 @@ pub fn list_tasks(
let mut count = 0; let mut count = 0;
for info in list { for info in list {
if !list_all && info.upid.username != username { continue; } if !list_all && info.upid.userid != userid { continue; }
if let Some(username) = userfilter { if let Some(userid) = userfilter {
if !info.upid.username.contains(username) { continue; } if !info.upid.userid.as_str().contains(userid) { continue; }
} }
if let Some(store) = store { if let Some(store) = store {

View File

@ -18,7 +18,7 @@ use crate::config::{
pub fn check_pull_privs( pub fn check_pull_privs(
username: &str, userid: &Userid,
store: &str, store: &str,
remote: &str, remote: &str,
remote_store: &str, remote_store: &str,
@ -27,11 +27,11 @@ pub fn check_pull_privs(
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
user_info.check_privs(username, &["datastore", store], PRIV_DATASTORE_BACKUP, false)?; user_info.check_privs(userid, &["datastore", store], PRIV_DATASTORE_BACKUP, false)?;
user_info.check_privs(username, &["remote", remote, remote_store], PRIV_REMOTE_READ, false)?; user_info.check_privs(userid, &["remote", remote, remote_store], PRIV_REMOTE_READ, false)?;
if delete { if delete {
user_info.check_privs(username, &["datastore", store], PRIV_DATASTORE_PRUNE, false)?; user_info.check_privs(userid, &["datastore", store], PRIV_DATASTORE_PRUNE, false)?;
} }
Ok(()) Ok(())
@ -99,19 +99,19 @@ async fn pull (
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> { ) -> Result<String, Error> {
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let delete = remove_vanished.unwrap_or(true); let delete = remove_vanished.unwrap_or(true);
check_pull_privs(&username, &store, &remote, &remote_store, delete)?; check_pull_privs(&userid, &store, &remote, &remote_store, delete)?;
let (client, src_repo, tgt_store) = get_pull_parameters(&store, &remote, &remote_store).await?; let (client, src_repo, tgt_store) = get_pull_parameters(&store, &remote, &remote_store).await?;
// fixme: set to_stdout to false? // fixme: set to_stdout to false?
let upid_str = WorkerTask::spawn("sync", Some(store.clone()), &username.clone(), true, move |worker| async move { let upid_str = WorkerTask::spawn("sync", Some(store.clone()), userid.clone(), true, move |worker| async move {
worker.log(format!("sync datastore '{}' start", store)); worker.log(format!("sync datastore '{}' start", store));
pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, username).await?; pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, userid).await?;
worker.log(format!("sync datastore '{}' end", store)); worker.log(format!("sync datastore '{}' end", store));

View File

@ -55,11 +55,11 @@ fn upgrade_to_backup_reader_protocol(
async move { async move {
let debug = param["debug"].as_bool().unwrap_or(false); let debug = param["debug"].as_bool().unwrap_or(false);
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let store = tools::required_string_param(&param, "store")?.to_owned(); let store = tools::required_string_param(&param, "store")?.to_owned();
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
user_info.check_privs(&username, &["datastore", &store], PRIV_DATASTORE_READ, false)?; user_info.check_privs(&userid, &["datastore", &store], PRIV_DATASTORE_READ, false)?;
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
@ -90,9 +90,14 @@ fn upgrade_to_backup_reader_protocol(
let worker_id = format!("{}_{}_{}_{:08X}", store, backup_type, backup_id, backup_dir.backup_time().timestamp()); let worker_id = format!("{}_{}_{}_{:08X}", store, backup_type, backup_id, backup_dir.backup_time().timestamp());
WorkerTask::spawn("reader", Some(worker_id), &username.clone(), true, move |worker| { WorkerTask::spawn("reader", Some(worker_id), userid.clone(), true, move |worker| {
let mut env = ReaderEnvironment::new( let mut env = ReaderEnvironment::new(
env_type, username.clone(), worker.clone(), datastore, backup_dir); env_type,
userid,
worker.clone(),
datastore,
backup_dir,
);
env.debug = debug; env.debug = debug;
@ -225,8 +230,8 @@ fn download_chunk(
env.debug(format!("download chunk {:?}", path)); env.debug(format!("download chunk {:?}", path));
let data = tokio::fs::read(path) let data = tokio::fs::read(path)
.map_err(move |err| http_err!(BAD_REQUEST, format!("reading file {:?} failed: {}", path2, err))) .await
.await?; .map_err(move |err| http_err!(BAD_REQUEST, "reading file {:?} failed: {}", path2, err))?;
let body = Body::from(data); let body = Body::from(data);
@ -260,7 +265,7 @@ fn download_chunk_old(
let path3 = path.clone(); let path3 = path.clone();
let response_future = tokio::fs::File::open(path) let response_future = tokio::fs::File::open(path)
.map_err(move |err| http_err!(BAD_REQUEST, format!("open file {:?} failed: {}", path2, err))) .map_err(move |err| http_err!(BAD_REQUEST, "open file {:?} failed: {}", path2, err))
.and_then(move |file| { .and_then(move |file| {
env2.debug(format!("download chunk {:?}", path3)); env2.debug(format!("download chunk {:?}", path3));
let payload = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new()) let payload = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new())

View File

@ -5,9 +5,10 @@ use serde_json::{json, Value};
use proxmox::api::{RpcEnvironment, RpcEnvironmentType}; use proxmox::api::{RpcEnvironment, RpcEnvironmentType};
use crate::server::WorkerTask; use crate::api2::types::Userid;
use crate::backup::*; use crate::backup::*;
use crate::server::formatter::*; use crate::server::formatter::*;
use crate::server::WorkerTask;
//use proxmox::tools; //use proxmox::tools;
@ -16,7 +17,7 @@ use crate::server::formatter::*;
pub struct ReaderEnvironment { pub struct ReaderEnvironment {
env_type: RpcEnvironmentType, env_type: RpcEnvironmentType,
result_attributes: Value, result_attributes: Value,
user: String, user: Userid,
pub debug: bool, pub debug: bool,
pub formatter: &'static OutputFormatter, pub formatter: &'static OutputFormatter,
pub worker: Arc<WorkerTask>, pub worker: Arc<WorkerTask>,
@ -28,7 +29,7 @@ pub struct ReaderEnvironment {
impl ReaderEnvironment { impl ReaderEnvironment {
pub fn new( pub fn new(
env_type: RpcEnvironmentType, env_type: RpcEnvironmentType,
user: String, user: Userid,
worker: Arc<WorkerTask>, worker: Arc<WorkerTask>,
datastore: Arc<DataStore>, datastore: Arc<DataStore>,
backup_dir: BackupDir, backup_dir: BackupDir,
@ -77,7 +78,7 @@ impl RpcEnvironment for ReaderEnvironment {
} }
fn get_user(&self) -> Option<String> { fn get_user(&self) -> Option<String> {
Some(self.user.clone()) Some(self.user.to_string())
} }
} }

View File

@ -10,14 +10,14 @@ use proxmox::api::{
Router, Router,
RpcEnvironment, RpcEnvironment,
SubdirMap, SubdirMap,
UserInformation,
}; };
use crate::api2::types::{ use crate::api2::types::{
DATASTORE_SCHEMA, DATASTORE_SCHEMA,
RRDMode, RRDMode,
RRDTimeFrameResolution, RRDTimeFrameResolution,
TaskListItem TaskListItem,
Userid,
}; };
use crate::server; use crate::server;
@ -84,13 +84,13 @@ fn datastore_status(
let (config, _digest) = datastore::config()?; let (config, _digest) = datastore::config()?;
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let mut list = Vec::new(); let mut list = Vec::new();
for (store, (_, _)) in &config.sections { for (store, (_, _)) in &config.sections {
let user_privs = user_info.lookup_privs(&username, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let allowed = (user_privs & (PRIV_DATASTORE_AUDIT| PRIV_DATASTORE_BACKUP)) != 0; let allowed = (user_privs & (PRIV_DATASTORE_AUDIT| PRIV_DATASTORE_BACKUP)) != 0;
if !allowed { if !allowed {
continue; continue;
@ -202,9 +202,9 @@ pub fn list_tasks(
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<TaskListItem>, Error> { ) -> Result<Vec<TaskListItem>, Error> {
let username = rpcenv.get_user().unwrap(); let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&username, &["system", "tasks"]); let user_privs = user_info.lookup_privs(&userid, &["system", "tasks"]);
let list_all = (user_privs & PRIV_SYS_AUDIT) != 0; let list_all = (user_privs & PRIV_SYS_AUDIT) != 0;
@ -212,7 +212,7 @@ pub fn list_tasks(
let list: Vec<TaskListItem> = server::read_task_list()? let list: Vec<TaskListItem> = server::read_task_list()?
.into_iter() .into_iter()
.map(TaskListItem::from) .map(TaskListItem::from)
.filter(|entry| list_all || entry.user == username) .filter(|entry| list_all || entry.user == userid)
.collect(); .collect();
Ok(list.into()) Ok(list.into())

src/api2/types/macros.rs (new file, 4 lines)
View File

@ -0,0 +1,4 @@
//! Macros exported from api2::types.
#[macro_export]
macro_rules! PROXMOX_SAFE_ID_REGEX_STR { () => (r"(?:[A-Za-z0-9_][A-Za-z0-9._\-]*)") }

View File

@ -1,5 +1,5 @@
use anyhow::{bail}; use anyhow::bail;
use ::serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use proxmox::api::{api, schema::*}; use proxmox::api::{api, schema::*};
use proxmox::const_regex; use proxmox::const_regex;
@ -7,6 +7,16 @@ use proxmox::{IPRE, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};
use crate::backup::CryptMode; use crate::backup::CryptMode;
#[macro_use]
mod macros;
#[macro_use]
mod userid;
pub use userid::{Realm, RealmRef};
pub use userid::{Username, UsernameRef};
pub use userid::Userid;
pub use userid::PROXMOX_GROUP_ID_SCHEMA;
// File names: may not contain slashes, may not start with "." // File names: may not contain slashes, may not start with "."
pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| { pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {
if name.starts_with('.') { if name.starts_with('.') {
@ -21,19 +31,6 @@ pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {
macro_rules! DNS_LABEL { () => (r"(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)") } macro_rules! DNS_LABEL { () => (r"(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)") }
macro_rules! DNS_NAME { () => (concat!(r"(?:", DNS_LABEL!() , r"\.)*", DNS_LABEL!())) } macro_rules! DNS_NAME { () => (concat!(r"(?:", DNS_LABEL!() , r"\.)*", DNS_LABEL!())) }
// we only allow a limited set of characters
// colon is not allowed, because we store usernames in
// colon separated lists)!
// slash is not allowed because it is used as pve API delimiter
// also see "man useradd"
macro_rules! USER_NAME_REGEX_STR { () => (r"(?:[^\s:/[:cntrl:]]+)") }
macro_rules! GROUP_NAME_REGEX_STR { () => (USER_NAME_REGEX_STR!()) }
macro_rules! USER_ID_REGEX_STR { () => (concat!(USER_NAME_REGEX_STR!(), r"@", PROXMOX_SAFE_ID_REGEX_STR!())) }
#[macro_export]
macro_rules! PROXMOX_SAFE_ID_REGEX_STR { () => (r"(?:[A-Za-z0-9_][A-Za-z0-9._\-]*)") }
macro_rules! CIDR_V4_REGEX_STR { () => (concat!(r"(?:", IPV4RE!(), r"/\d{1,2})$")) } macro_rules! CIDR_V4_REGEX_STR { () => (concat!(r"(?:", IPV4RE!(), r"/\d{1,2})$")) }
macro_rules! CIDR_V6_REGEX_STR { () => (concat!(r"(?:", IPV6RE!(), r"/\d{1,3})$")) } macro_rules! CIDR_V6_REGEX_STR { () => (concat!(r"(?:", IPV6RE!(), r"/\d{1,3})$")) }
@ -67,12 +64,8 @@ const_regex!{
pub DNS_NAME_OR_IP_REGEX = concat!(r"^", DNS_NAME!(), "|", IPRE!(), r"$"); pub DNS_NAME_OR_IP_REGEX = concat!(r"^", DNS_NAME!(), "|", IPRE!(), r"$");
pub PROXMOX_USER_ID_REGEX = concat!(r"^", USER_ID_REGEX_STR!(), r"$");
pub BACKUP_REPO_URL_REGEX = concat!(r"^^(?:(?:(", USER_ID_REGEX_STR!(), ")@)?(", DNS_NAME!(), "|", IPRE!() ,"):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$"); pub BACKUP_REPO_URL_REGEX = concat!(r"^^(?:(?:(", USER_ID_REGEX_STR!(), ")@)?(", DNS_NAME!(), "|", IPRE!() ,"):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$");
pub PROXMOX_GROUP_ID_REGEX = concat!(r"^", GROUP_NAME_REGEX_STR!(), r"$");
pub CERT_FINGERPRINT_SHA256_REGEX = r"^(?:[0-9a-fA-F][0-9a-fA-F])(?::[0-9a-fA-F][0-9a-fA-F]){31}$"; pub CERT_FINGERPRINT_SHA256_REGEX = r"^(?:[0-9a-fA-F][0-9a-fA-F])(?::[0-9a-fA-F][0-9a-fA-F]){31}$";
pub ACL_PATH_REGEX = concat!(r"^(?:/|", r"(?:/", PROXMOX_SAFE_ID_REGEX_STR!(), ")+", r")$"); pub ACL_PATH_REGEX = concat!(r"^(?:/|", r"(?:/", PROXMOX_SAFE_ID_REGEX_STR!(), ")+", r")$");
@ -115,12 +108,6 @@ pub const DNS_NAME_FORMAT: ApiStringFormat =
pub const DNS_NAME_OR_IP_FORMAT: ApiStringFormat = pub const DNS_NAME_OR_IP_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DNS_NAME_OR_IP_REGEX); ApiStringFormat::Pattern(&DNS_NAME_OR_IP_REGEX);
pub const PROXMOX_USER_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_USER_ID_REGEX);
pub const PROXMOX_GROUP_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_GROUP_ID_REGEX);
pub const PASSWORD_FORMAT: ApiStringFormat = pub const PASSWORD_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PASSWORD_REGEX); ApiStringFormat::Pattern(&PASSWORD_REGEX);
@ -343,24 +330,6 @@ pub const DNS_NAME_OR_IP_SCHEMA: Schema = StringSchema::new("DNS name or IP addr
.format(&DNS_NAME_OR_IP_FORMAT) .format(&DNS_NAME_OR_IP_FORMAT)
.schema(); .schema();
pub const PROXMOX_AUTH_REALM_SCHEMA: Schema = StringSchema::new("Authentication domain ID")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const PROXMOX_USER_ID_SCHEMA: Schema = StringSchema::new("User ID")
.format(&PROXMOX_USER_ID_FORMAT)
.min_length(3)
.max_length(64)
.schema();
pub const PROXMOX_GROUP_ID_SCHEMA: Schema = StringSchema::new("Group ID")
.format(&PROXMOX_GROUP_ID_FORMAT)
.min_length(3)
.max_length(64)
.schema();
pub const BLOCKDEVICE_NAME_SCHEMA: Schema = StringSchema::new("Block device name (/sys/block/<name>).") pub const BLOCKDEVICE_NAME_SCHEMA: Schema = StringSchema::new("Block device name (/sys/block/<name>).")
.format(&BLOCKDEVICE_NAME_FORMAT) .format(&BLOCKDEVICE_NAME_FORMAT)
.min_length(3) .min_length(3)
@ -388,6 +357,10 @@ pub const BLOCKDEVICE_NAME_SCHEMA: Schema = StringSchema::new("Block device name
schema: BACKUP_ARCHIVE_NAME_SCHEMA schema: BACKUP_ARCHIVE_NAME_SCHEMA
}, },
}, },
owner: {
type: Userid,
optional: true,
},
}, },
)] )]
#[derive(Serialize, Deserialize)] #[derive(Serialize, Deserialize)]
@ -403,7 +376,7 @@ pub struct GroupListItem {
pub files: Vec<String>, pub files: Vec<String>,
/// The owner of group /// The owner of group
#[serde(skip_serializing_if="Option::is_none")] #[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<String>, pub owner: Option<Userid>,
} }
#[api( #[api(
@ -422,6 +395,10 @@ pub struct GroupListItem {
schema: BACKUP_ARCHIVE_NAME_SCHEMA schema: BACKUP_ARCHIVE_NAME_SCHEMA
}, },
}, },
owner: {
type: Userid,
optional: true,
},
}, },
)] )]
#[derive(Serialize, Deserialize)] #[derive(Serialize, Deserialize)]
@ -431,6 +408,9 @@ pub struct SnapshotListItem {
pub backup_type: String, // enum pub backup_type: String, // enum
pub backup_id: String, pub backup_id: String,
pub backup_time: i64, pub backup_time: i64,
/// The first line from manifest "notes"
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
/// List of contained archive files. /// List of contained archive files.
pub files: Vec<BackupContent>, pub files: Vec<BackupContent>,
/// Overall snapshot size (sum of all archive sizes). /// Overall snapshot size (sum of all archive sizes).
@ -438,7 +418,7 @@ pub struct SnapshotListItem {
pub size: Option<u64>, pub size: Option<u64>,
/// The owner of the snapshots group /// The owner of the snapshots group
#[serde(skip_serializing_if="Option::is_none")] #[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<String>, pub owner: Option<Userid>,
} }
#[api( #[api(
@ -581,7 +561,8 @@ pub struct StorageStatus {
#[api( #[api(
properties: { properties: {
"upid": { schema: UPID_SCHEMA }, upid: { schema: UPID_SCHEMA },
user: { type: Userid },
}, },
)] )]
#[derive(Serialize, Deserialize)] #[derive(Serialize, Deserialize)]
@ -601,7 +582,7 @@ pub struct TaskListItem {
/// Worker ID (arbitrary ASCII string) /// Worker ID (arbitrary ASCII string)
pub worker_id: Option<String>, pub worker_id: Option<String>,
/// The user who started the task /// The user who started the task
pub user: String, pub user: Userid,
/// The task end time (Epoch) /// The task end time (Epoch)
#[serde(skip_serializing_if="Option::is_none")] #[serde(skip_serializing_if="Option::is_none")]
pub endtime: Option<i64>, pub endtime: Option<i64>,
@ -624,7 +605,7 @@ impl From<crate::server::TaskListInfo> for TaskListItem {
starttime: info.upid.starttime, starttime: info.upid.starttime,
worker_type: info.upid.worker_type, worker_type: info.upid.worker_type,
worker_id: info.upid.worker_id, worker_id: info.upid.worker_id,
user: info.upid.username, user: info.upid.userid,
endtime, endtime,
status, status,
} }
@ -890,9 +871,6 @@ fn test_cert_fingerprint_schema() -> Result<(), anyhow::Error> {
#[test] #[test]
fn test_proxmox_user_id_schema() -> Result<(), anyhow::Error> { fn test_proxmox_user_id_schema() -> Result<(), anyhow::Error> {
let schema = PROXMOX_USER_ID_SCHEMA;
let invalid_user_ids = [ let invalid_user_ids = [
"x", // too short "x", // too short
"xx", // too short "xx", // too short
@ -906,7 +884,7 @@ fn test_proxmox_user_id_schema() -> Result<(), anyhow::Error> {
]; ];
for name in invalid_user_ids.iter() { for name in invalid_user_ids.iter() {
if let Ok(_) = parse_simple_value(name, &schema) { if let Ok(_) = parse_simple_value(name, &Userid::API_SCHEMA) {
bail!("test userid '{}' failed - got Ok() while exception an error.", name); bail!("test userid '{}' failed - got Ok() while exception an error.", name);
} }
} }
@ -920,7 +898,7 @@ fn test_proxmox_user_id_schema() -> Result<(), anyhow::Error> {
]; ];
for name in valid_user_ids.iter() { for name in valid_user_ids.iter() {
let v = match parse_simple_value(name, &schema) { let v = match parse_simple_value(name, &Userid::API_SCHEMA) {
Ok(v) => v, Ok(v) => v,
Err(err) => { Err(err) => {
bail!("unable to parse userid '{}' - {}", name, err); bail!("unable to parse userid '{}' - {}", name, err);

src/api2/types/userid.rs (new file, 420 lines)
View File

@ -0,0 +1,420 @@
//! Types for user handling.
//!
//! We have [`Username`]s and [`Realm`]s. To uniquely identify a user, they must be combined into a [`Userid`].
//!
//! Since they're all string types, they're organized as follows:
//!
//! * [`Username`]: an owned user name. Internally a `String`.
//! * [`UsernameRef`]: a borrowed user name. Pairs with a `Username` the same way a `str` pairs
//! with `String`, meaning you can only make references to it.
//! * [`Realm`]: an owned realm (`String` equivalent).
//! * [`RealmRef`]: a borrowed realm (`str` equivalent).
//! * [`Userid`]: an owned user id (`"user@realm"`). Note that this does not have a separate
//! borrowed type.
//!
//! Note that `Username`s are not unique, therefore they do not implement `Eq` and cannot be
//! compared directly. If a direct comparison is really required, they can be compared as strings
//! via the `as_str()` method. [`Realm`]s and [`Userid`]s on the other hand can be compared with
//! each other, as in those two cases the comparison has meaning.
use std::borrow::Borrow;
use std::convert::TryFrom;
use std::fmt;
use anyhow::{bail, format_err, Error};
use lazy_static::lazy_static;
use serde::{Deserialize, Serialize};
use proxmox::api::api;
use proxmox::api::schema::{ApiStringFormat, Schema, StringSchema};
use proxmox::const_regex;
// we only allow a limited set of characters
// colon is not allowed, because we store usernames in
// colon separated lists)!
// slash is not allowed because it is used as pve API delimiter
// also see "man useradd"
macro_rules! USER_NAME_REGEX_STR { () => (r"(?:[^\s:/[:cntrl:]]+)") }
macro_rules! GROUP_NAME_REGEX_STR { () => (USER_NAME_REGEX_STR!()) }
macro_rules! USER_ID_REGEX_STR { () => (concat!(USER_NAME_REGEX_STR!(), r"@", PROXMOX_SAFE_ID_REGEX_STR!())) }
const_regex! {
pub PROXMOX_USER_NAME_REGEX = concat!(r"^", USER_NAME_REGEX_STR!(), r"$");
pub PROXMOX_USER_ID_REGEX = concat!(r"^", USER_ID_REGEX_STR!(), r"$");
pub PROXMOX_GROUP_ID_REGEX = concat!(r"^", GROUP_NAME_REGEX_STR!(), r"$");
}
pub const PROXMOX_USER_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_USER_NAME_REGEX);
pub const PROXMOX_USER_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_USER_ID_REGEX);
pub const PROXMOX_GROUP_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_GROUP_ID_REGEX);
pub const PROXMOX_GROUP_ID_SCHEMA: Schema = StringSchema::new("Group ID")
.format(&PROXMOX_GROUP_ID_FORMAT)
.min_length(3)
.max_length(64)
.schema();
pub const PROXMOX_AUTH_REALM_STRING_SCHEMA: StringSchema =
StringSchema::new("Authentication domain ID")
.format(&super::PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32);
pub const PROXMOX_AUTH_REALM_SCHEMA: Schema = PROXMOX_AUTH_REALM_STRING_SCHEMA.schema();
#[api(
type: String,
format: &PROXMOX_USER_NAME_FORMAT,
)]
/// The user name part of a user id.
///
/// This alone does NOT uniquely identify the user and therefore does not implement `Eq`. In order
/// to compare user names directly, they need to be explicitly compared as strings by calling
/// `.as_str()`.
///
/// ```compile_fail
/// fn test(a: Username, b: Username) -> bool {
/// a == b // illegal and does not compile
/// }
/// ```
#[derive(Clone, Debug, Hash, Deserialize, Serialize)]
pub struct Username(String);
/// A reference to a user name part of a user id. This alone does NOT uniquely identify the user.
///
/// This is like a `str` to the `String` of a [`Username`].
#[derive(Debug, Hash)]
pub struct UsernameRef(str);
#[doc(hidden)]
/// ```compile_fail
/// let a: Username = unsafe { std::mem::zeroed() };
/// let b: Username = unsafe { std::mem::zeroed() };
/// let _ = <Username as PartialEq>::eq(&a, &b);
/// ```
///
/// ```compile_fail
/// let a: &UsernameRef = unsafe { std::mem::zeroed() };
/// let b: &UsernameRef = unsafe { std::mem::zeroed() };
/// let _ = <&UsernameRef as PartialEq>::eq(a, b);
/// ```
///
/// ```compile_fail
/// let a: &UsernameRef = unsafe { std::mem::zeroed() };
/// let b: &UsernameRef = unsafe { std::mem::zeroed() };
/// let _ = <&UsernameRef as PartialEq>::eq(&a, &b);
/// ```
struct _AssertNoEqImpl;
impl UsernameRef {
fn new(s: &str) -> &Self {
unsafe { &*(s as *const str as *const UsernameRef) }
}
pub fn as_str(&self) -> &str {
&self.0
}
}
impl std::ops::Deref for Username {
type Target = UsernameRef;
fn deref(&self) -> &UsernameRef {
self.borrow()
}
}
impl Borrow<UsernameRef> for Username {
fn borrow(&self) -> &UsernameRef {
UsernameRef::new(self.as_str())
}
}
impl AsRef<UsernameRef> for Username {
fn as_ref(&self) -> &UsernameRef {
UsernameRef::new(self.as_str())
}
}
impl ToOwned for UsernameRef {
type Owned = Username;
fn to_owned(&self) -> Self::Owned {
Username(self.0.to_owned())
}
}
impl TryFrom<String> for Username {
type Error = Error;
fn try_from(s: String) -> Result<Self, Error> {
if !PROXMOX_USER_NAME_REGEX.is_match(&s) {
bail!("invalid user name");
}
Ok(Self(s))
}
}
impl<'a> TryFrom<&'a str> for &'a UsernameRef {
type Error = Error;
fn try_from(s: &'a str) -> Result<&'a UsernameRef, Error> {
if !PROXMOX_USER_NAME_REGEX.is_match(s) {
bail!("invalid name in user id");
}
Ok(UsernameRef::new(s))
}
}
#[api(schema: PROXMOX_AUTH_REALM_SCHEMA)]
/// An authentication realm.
#[derive(Clone, Debug, Eq, PartialEq, Hash, Deserialize, Serialize)]
pub struct Realm(String);
/// A reference to an authentication realm.
///
/// This is like a `str` to the `String` of a `Realm`.
#[derive(Debug, Hash, Eq, PartialEq)]
pub struct RealmRef(str);
impl RealmRef {
fn new(s: &str) -> &Self {
unsafe { &*(s as *const str as *const RealmRef) }
}
pub fn as_str(&self) -> &str {
&self.0
}
}
impl std::ops::Deref for Realm {
type Target = RealmRef;
fn deref(&self) -> &RealmRef {
self.borrow()
}
}
impl Borrow<RealmRef> for Realm {
fn borrow(&self) -> &RealmRef {
RealmRef::new(self.as_str())
}
}
impl AsRef<RealmRef> for Realm {
fn as_ref(&self) -> &RealmRef {
RealmRef::new(self.as_str())
}
}
impl ToOwned for RealmRef {
type Owned = Realm;
fn to_owned(&self) -> Self::Owned {
Realm(self.0.to_owned())
}
}
impl TryFrom<String> for Realm {
type Error = Error;
fn try_from(s: String) -> Result<Self, Error> {
PROXMOX_AUTH_REALM_STRING_SCHEMA.check_constraints(&s)
.map_err(|_| format_err!("invalid realm"))?;
Ok(Self(s))
}
}
impl<'a> TryFrom<&'a str> for &'a RealmRef {
type Error = Error;
fn try_from(s: &'a str) -> Result<&'a RealmRef, Error> {
PROXMOX_AUTH_REALM_STRING_SCHEMA.check_constraints(s)
.map_err(|_| format_err!("invalid realm"))?;
Ok(RealmRef::new(s))
}
}
impl PartialEq<str> for Realm {
fn eq(&self, rhs: &str) -> bool {
self.0 == rhs
}
}
impl PartialEq<&str> for Realm {
fn eq(&self, rhs: &&str) -> bool {
self.0 == *rhs
}
}
impl PartialEq<str> for RealmRef {
fn eq(&self, rhs: &str) -> bool {
self.0 == *rhs
}
}
impl PartialEq<&str> for RealmRef {
fn eq(&self, rhs: &&str) -> bool {
self.0 == **rhs
}
}
impl PartialEq<RealmRef> for Realm {
fn eq(&self, rhs: &RealmRef) -> bool {
self.0 == &rhs.0
}
}
impl PartialEq<Realm> for RealmRef {
fn eq(&self, rhs: &Realm) -> bool {
self.0 == rhs.0
}
}
impl PartialEq<Realm> for &RealmRef {
fn eq(&self, rhs: &Realm) -> bool {
(*self).0 == rhs.0
}
}
/// A complete user id consisting of a user name and a realm.
#[derive(Clone, Debug, Hash)]
pub struct Userid {
data: String,
name_len: usize,
//name: Username,
//realm: Realm,
}
impl Userid {
pub const API_SCHEMA: Schema = StringSchema::new("User ID")
.format(&PROXMOX_USER_ID_FORMAT)
.min_length(3)
.max_length(64)
.schema();
const fn new(data: String, name_len: usize) -> Self {
Self { data, name_len }
}
pub fn name(&self) -> &UsernameRef {
UsernameRef::new(&self.data[..self.name_len])
}
pub fn realm(&self) -> &RealmRef {
RealmRef::new(&self.data[(self.name_len + 1)..])
}
pub fn as_str(&self) -> &str {
&self.data
}
/// Get the "backup@pam" user id.
pub fn backup_userid() -> &'static Self {
&*BACKUP_USERID
}
/// Get the "root@pam" user id.
pub fn root_userid() -> &'static Self {
&*ROOT_USERID
}
}
lazy_static! {
pub static ref BACKUP_USERID: Userid = Userid::new("backup@pam".to_string(), 6);
pub static ref ROOT_USERID: Userid = Userid::new("root@pam".to_string(), 4);
}
impl Eq for Userid {}
impl PartialEq for Userid {
fn eq(&self, rhs: &Self) -> bool {
self.data == rhs.data && self.name_len == rhs.name_len
}
}
impl From<(Username, Realm)> for Userid {
fn from(parts: (Username, Realm)) -> Self {
Self::from((parts.0.as_ref(), parts.1.as_ref()))
}
}
impl From<(&UsernameRef, &RealmRef)> for Userid {
fn from(parts: (&UsernameRef, &RealmRef)) -> Self {
let data = format!("{}@{}", parts.0.as_str(), parts.1.as_str());
let name_len = parts.0.as_str().len();
Self { data, name_len }
}
}
impl fmt::Display for Userid {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
self.data.fmt(f)
}
}
impl std::str::FromStr for Userid {
type Err = Error;
fn from_str(id: &str) -> Result<Self, Error> {
let (name, realm) = match id.as_bytes().iter().rposition(|&b| b == b'@') {
Some(pos) => (&id[..pos], &id[(pos + 1)..]),
None => bail!("not a valid user id"),
};
PROXMOX_AUTH_REALM_STRING_SCHEMA.check_constraints(realm)
.map_err(|_| format_err!("invalid realm in user id"))?;
Ok(Self::from((UsernameRef::new(name), RealmRef::new(realm))))
}
}
impl TryFrom<String> for Userid {
type Error = Error;
fn try_from(data: String) -> Result<Self, Error> {
let name_len = data
.as_bytes()
.iter()
.rposition(|&b| b == b'@')
.ok_or_else(|| format_err!("not a valid user id"))?;
PROXMOX_AUTH_REALM_STRING_SCHEMA.check_constraints(&data[(name_len + 1)..])
.map_err(|_| format_err!("invalid realm in user id"))?;
Ok(Self { data, name_len })
}
}
impl PartialEq<str> for Userid {
fn eq(&self, rhs: &str) -> bool {
rhs.len() > self.name_len + 2 // make sure range access below is allowed
&& rhs.starts_with(self.name().as_str())
&& rhs.as_bytes()[self.name_len] == b'@'
&& &rhs[(self.name_len + 1)..] == self.realm().as_str()
}
}
impl PartialEq<&str> for Userid {
fn eq(&self, rhs: &&str) -> bool {
*self == **rhs
}
}
impl PartialEq<String> for Userid {
fn eq(&self, rhs: &String) -> bool {
self == rhs.as_str()
}
}
proxmox::forward_deserialize_to_from_str!(Userid);
proxmox::forward_serialize_to_display!(Userid);
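As a quick orientation for reviewers, here is a minimal usage sketch of the new types, using only the API introduced in this file (FromStr parsing, the name()/realm() accessors, the string comparisons and the From/TryFrom conversions); the helper function and the example user are made up for illustration:

    use std::convert::TryFrom;
    use crate::api2::types::{Realm, Userid, Username};

    fn userid_demo() -> Result<(), anyhow::Error> {
        // Parsing validates the "name@realm" shape and the realm format.
        let userid: Userid = "john@pbs".parse()?;
        assert_eq!(userid.name().as_str(), "john");
        assert!(userid.realm() == "pbs");

        // Userid can be compared against plain strings ...
        assert!(userid == "john@pbs");

        // ... but Username deliberately has no Eq/PartialEq, so user names
        // are compared explicitly via as_str() when that is really needed.
        let name = Username::try_from("john".to_string())?;
        let realm = Realm::try_from("pbs".to_string())?;
        assert_eq!(Userid::from((name, realm)), userid);
        Ok(())
    }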

View File

@ -10,39 +10,54 @@ use base64;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use serde_json::json; use serde_json::json;
use crate::api2::types::{Userid, UsernameRef, RealmRef};
pub trait ProxmoxAuthenticator { pub trait ProxmoxAuthenticator {
fn authenticate_user(&self, username: &str, password: &str) -> Result<(), Error>; fn authenticate_user(&self, username: &UsernameRef, password: &str) -> Result<(), Error>;
fn store_password(&self, username: &str, password: &str) -> Result<(), Error>; fn store_password(&self, username: &UsernameRef, password: &str) -> Result<(), Error>;
} }
pub struct PAM(); pub struct PAM();
impl ProxmoxAuthenticator for PAM { impl ProxmoxAuthenticator for PAM {
fn authenticate_user(&self, username: &str, password: &str) -> Result<(), Error> { fn authenticate_user(&self, username: &UsernameRef, password: &str) -> Result<(), Error> {
let mut auth = pam::Authenticator::with_password("proxmox-backup-auth").unwrap(); let mut auth = pam::Authenticator::with_password("proxmox-backup-auth").unwrap();
auth.get_handler().set_credentials(username, password); auth.get_handler().set_credentials(username.as_str(), password);
auth.authenticate()?; auth.authenticate()?;
return Ok(()); return Ok(());
} }
fn store_password(&self, username: &str, password: &str) -> Result<(), Error> { fn store_password(&self, username: &UsernameRef, password: &str) -> Result<(), Error> {
let mut child = Command::new("passwd") let mut child = Command::new("passwd")
.arg(username) .arg(username.as_str())
.stdin(Stdio::piped()) .stdin(Stdio::piped())
.stderr(Stdio::piped()) .stderr(Stdio::piped())
.spawn() .spawn()
.or_else(|err| Err(format_err!("unable to set password for '{}' - execute passwd failed: {}", username, err)))?; .map_err(|err| format_err!(
"unable to set password for '{}' - execute passwd failed: {}",
username.as_str(),
err,
))?;
// Note: passwd reads password twice from stdin (for verify) // Note: passwd reads password twice from stdin (for verify)
writeln!(child.stdin.as_mut().unwrap(), "{}\n{}", password, password)?; writeln!(child.stdin.as_mut().unwrap(), "{}\n{}", password, password)?;
let output = child.wait_with_output() let output = child
.or_else(|err| Err(format_err!("unable to set password for '{}' - wait failed: {}", username, err)))?; .wait_with_output()
.map_err(|err| format_err!(
"unable to set password for '{}' - wait failed: {}",
username.as_str(),
err,
))?;
if !output.status.success() { if !output.status.success() {
bail!("unable to set password for '{}' - {}", username, String::from_utf8_lossy(&output.stderr)); bail!(
"unable to set password for '{}' - {}",
username.as_str(),
String::from_utf8_lossy(&output.stderr),
);
} }
Ok(()) Ok(())
@ -90,23 +105,23 @@ pub fn verify_crypt_pw(password: &str, enc_password: &str) -> Result<(), Error>
Ok(()) Ok(())
} }
const SHADOW_CONFIG_FILENAME: &str = "/etc/proxmox-backup/shadow.json"; const SHADOW_CONFIG_FILENAME: &str = configdir!("/shadow.json");
impl ProxmoxAuthenticator for PBS { impl ProxmoxAuthenticator for PBS {
fn authenticate_user(&self, username: &str, password: &str) -> Result<(), Error> { fn authenticate_user(&self, username: &UsernameRef, password: &str) -> Result<(), Error> {
let data = proxmox::tools::fs::file_get_json(SHADOW_CONFIG_FILENAME, Some(json!({})))?; let data = proxmox::tools::fs::file_get_json(SHADOW_CONFIG_FILENAME, Some(json!({})))?;
match data[username].as_str() { match data[username.as_str()].as_str() {
None => bail!("no password set"), None => bail!("no password set"),
Some(enc_password) => verify_crypt_pw(password, enc_password)?, Some(enc_password) => verify_crypt_pw(password, enc_password)?,
} }
Ok(()) Ok(())
} }
fn store_password(&self, username: &str, password: &str) -> Result<(), Error> { fn store_password(&self, username: &UsernameRef, password: &str) -> Result<(), Error> {
let enc_password = encrypt_pw(password)?; let enc_password = encrypt_pw(password)?;
let mut data = proxmox::tools::fs::file_get_json(SHADOW_CONFIG_FILENAME, Some(json!({})))?; let mut data = proxmox::tools::fs::file_get_json(SHADOW_CONFIG_FILENAME, Some(json!({})))?;
data[username] = enc_password.into(); data[username.as_str()] = enc_password.into();
let mode = nix::sys::stat::Mode::from_bits_truncate(0o0600); let mode = nix::sys::stat::Mode::from_bits_truncate(0o0600);
let options = proxmox::tools::fs::CreateOptions::new() let options = proxmox::tools::fs::CreateOptions::new()
@ -121,28 +136,18 @@ impl ProxmoxAuthenticator for PBS {
} }
} }
pub fn parse_userid(userid: &str) -> Result<(String, String), Error> {
let data: Vec<&str> = userid.rsplitn(2, '@').collect();
if data.len() != 2 {
bail!("userid '{}' has no realm", userid);
}
Ok((data[1].to_owned(), data[0].to_owned()))
}
/// Look up the authenticator for the specified realm /// Look up the authenticator for the specified realm
pub fn lookup_authenticator(realm: &str) -> Result<Box<dyn ProxmoxAuthenticator>, Error> { pub fn lookup_authenticator(realm: &RealmRef) -> Result<Box<dyn ProxmoxAuthenticator>, Error> {
match realm { match realm.as_str() {
"pam" => Ok(Box::new(PAM())), "pam" => Ok(Box::new(PAM())),
"pbs" => Ok(Box::new(PBS())), "pbs" => Ok(Box::new(PBS())),
_ => bail!("unknown realm '{}'", realm), _ => bail!("unknown realm '{}'", realm.as_str()),
} }
} }
/// Authenticate users /// Authenticate users
pub fn authenticate_user(userid: &str, password: &str) -> Result<(), Error> { pub fn authenticate_user(userid: &Userid, password: &str) -> Result<(), Error> {
let (username, realm) = parse_userid(userid)?;
lookup_authenticator(&realm)? lookup_authenticator(userid.realm())?
.authenticate_user(&username, password) .authenticate_user(userid.name(), password)
} }
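A short sketch of how the reworked entry point is called now that it takes a typed Userid; the login() wrapper is hypothetical, only the Userid parsing and authenticate_user() come from this series:

    use crate::api2::types::Userid;

    fn login(raw_userid: &str, password: &str) -> Result<(), anyhow::Error> {
        // Parsing rejects ids without a realm, so the old parse_userid()
        // helper and its ad-hoc error handling are no longer needed here.
        let userid: Userid = raw_userid.parse()?;
        // Dispatches to the PAM or PBS authenticator based on userid.realm().
        crate::auth::authenticate_user(&userid, password)
    }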

View File

@ -10,16 +10,17 @@ use std::path::PathBuf;
use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions}; use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
use proxmox::try_block; use proxmox::try_block;
use crate::api2::types::Userid;
use crate::tools::epoch_now_u64; use crate::tools::epoch_now_u64;
fn compute_csrf_secret_digest( fn compute_csrf_secret_digest(
timestamp: i64, timestamp: i64,
secret: &[u8], secret: &[u8],
username: &str, userid: &Userid,
) -> String { ) -> String {
let mut hasher = sha::Sha256::new(); let mut hasher = sha::Sha256::new();
let data = format!("{:08X}:{}:", timestamp, username); let data = format!("{:08X}:{}:", timestamp, userid);
hasher.update(data.as_bytes()); hasher.update(data.as_bytes());
hasher.update(secret); hasher.update(secret);
@ -28,19 +29,19 @@ fn compute_csrf_secret_digest(
pub fn assemble_csrf_prevention_token( pub fn assemble_csrf_prevention_token(
secret: &[u8], secret: &[u8],
username: &str, userid: &Userid,
) -> String { ) -> String {
let epoch = epoch_now_u64().unwrap() as i64; let epoch = epoch_now_u64().unwrap() as i64;
let digest = compute_csrf_secret_digest(epoch, secret, username); let digest = compute_csrf_secret_digest(epoch, secret, userid);
format!("{:08X}:{}", epoch, digest) format!("{:08X}:{}", epoch, digest)
} }
pub fn verify_csrf_prevention_token( pub fn verify_csrf_prevention_token(
secret: &[u8], secret: &[u8],
username: &str, userid: &Userid,
token: &str, token: &str,
min_age: i64, min_age: i64,
max_age: i64, max_age: i64,
@ -62,7 +63,7 @@ pub fn verify_csrf_prevention_token(
let ttime = i64::from_str_radix(timestamp, 16). let ttime = i64::from_str_radix(timestamp, 16).
map_err(|err| format_err!("timestamp format error - {}", err))?; map_err(|err| format_err!("timestamp format error - {}", err))?;
let digest = compute_csrf_secret_digest(ttime, secret, username); let digest = compute_csrf_secret_digest(ttime, secret, userid);
if digest != sig { if digest != sig {
bail!("invalid signature."); bail!("invalid signature.");

View File

@ -173,7 +173,7 @@ impl std::str::FromStr for BackupGroup {
/// Uniquely identify a Backup (relative to data store) /// Uniquely identify a Backup (relative to data store)
/// ///
/// We also call this a backup snapshot. /// We also call this a backup snapshot.
#[derive(Debug, Clone)] #[derive(Debug, Eq, PartialEq, Clone)]
pub struct BackupDir { pub struct BackupDir {
/// Backup group /// Backup group
group: BackupGroup, group: BackupGroup,
@ -272,9 +272,13 @@ impl BackupInfo {
} }
/// Finds the latest backup inside a backup group /// Finds the latest backup inside a backup group
pub fn last_backup(base_path: &Path, group: &BackupGroup) -> Result<Option<BackupInfo>, Error> { pub fn last_backup(base_path: &Path, group: &BackupGroup, only_finished: bool)
-> Result<Option<BackupInfo>, Error>
{
let backups = group.list_backups(base_path)?; let backups = group.list_backups(base_path)?;
Ok(backups.into_iter().max_by_key(|item| item.backup_dir.backup_time())) Ok(backups.into_iter()
.filter(|item| !only_finished || item.is_finished())
.max_by_key(|item| item.backup_dir.backup_time()))
} }
pub fn sort_list(list: &mut Vec<BackupInfo>, ascendending: bool) { pub fn sort_list(list: &mut Vec<BackupInfo>, ascendending: bool) {
@ -317,6 +321,11 @@ impl BackupInfo {
})?; })?;
Ok(list) Ok(list)
} }
pub fn is_finished(&self) -> bool {
// backup is considered unfinished if there is no manifest
self.files.iter().any(|name| name == super::MANIFEST_BLOB_NAME)
}
} }
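A hypothetical call site for the extended helper, showing why the only_finished flag exists: when picking the newest snapshot of a group as the base of an incremental backup, snapshots without a manifest must never qualify (datastore and backup_group are placeholder names):

let base_path = datastore.base_path();
if let Some(info) = BackupInfo::last_backup(&base_path, &backup_group, true)? {
    // only snapshots that already contain a manifest are considered finished
    println!("using {} as base snapshot", info.backup_dir);
}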
fn list_backup_files<P: ?Sized + nix::NixPath>(dirfd: RawFd, path: &P) -> Result<Vec<String>, Error> { fn list_backup_files<P: ?Sized + nix::NixPath>(dirfd: RawFd, path: &P) -> Result<Vec<String>, Error> {

View File

@ -3,7 +3,7 @@ use std::ffi::{CStr, CString, OsStr, OsString};
use std::future::Future; use std::future::Future;
use std::io::Write; use std::io::Write;
use std::mem; use std::mem;
use std::os::unix::ffi::OsStrExt; use std::os::unix::ffi::{OsStrExt, OsStringExt};
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use std::pin::Pin; use std::pin::Pin;
@ -1073,6 +1073,7 @@ impl<'a> ExtractorState<'a> {
} }
self.path.extend(&entry.name); self.path.extend(&entry.name);
self.extractor.set_path(OsString::from_vec(self.path.clone()));
self.handle_entry(entry).await?; self.handle_entry(entry).await?;
} }

View File

@ -184,22 +184,6 @@ impl ChunkStore {
Ok(true) Ok(true)
} }
pub fn read_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
let (chunk_path, digest_str) = self.chunk_path(digest);
let mut file = std::fs::File::open(&chunk_path)
.map_err(|err| {
format_err!(
"store '{}', unable to read chunk '{}' - {}",
self.name,
digest_str,
err,
)
})?;
DataBlob::load(&mut file)
}
pub fn get_chunk_iterator( pub fn get_chunk_iterator(
&self, &self,
) -> Result< ) -> Result<
@ -291,14 +275,13 @@ impl ChunkStore {
pub fn sweep_unused_chunks( pub fn sweep_unused_chunks(
&self, &self,
oldest_writer: i64, oldest_writer: i64,
phase1_start_time: i64,
status: &mut GarbageCollectionStatus, status: &mut GarbageCollectionStatus,
worker: &WorkerTask, worker: &WorkerTask,
) -> Result<(), Error> { ) -> Result<(), Error> {
use nix::sys::stat::fstatat; use nix::sys::stat::fstatat;
let now = unsafe { libc::time(std::ptr::null_mut()) }; let mut min_atime = phase1_start_time - 3600*24; // at least 24h (see mount option relatime)
let mut min_atime = now - 3600*24; // at least 24h (see mount option relatime)
if oldest_writer < min_atime { if oldest_writer < min_atime {
min_atime = oldest_writer; min_atime = oldest_writer;
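Passing the phase-1 start time into the sweep (instead of re-reading the clock in phase 2) keeps the cutoff consistent: chunks touched during phase 1 stay above min_atime even if phase 2 runs much later. A small illustration of the cutoff arithmetic with made-up timestamps:

let phase1_start_time: i64 = 1_597_140_000;          // placeholder epoch value
let oldest_writer: i64 = phase1_start_time - 7_200;  // e.g. a backup writer running for 2h
let mut min_atime = phase1_start_time - 3600 * 24;   // at least 24h back (relatime)
if oldest_writer < min_atime {
    min_atime = oldest_writer;
}
assert_eq!(min_atime, phase1_start_time - 86_400);   // 2h < 24h, so the 24h floor wins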

View File

@ -36,6 +36,11 @@ impl DataBlob {
&self.raw_data &self.raw_data
} }
/// Returns raw_data size
pub fn raw_size(&self) -> u64 {
self.raw_data.len() as u64
}
/// Consume self and returns raw_data /// Consume self and returns raw_data
pub fn into_inner(self) -> Vec<u8> { pub fn into_inner(self) -> Vec<u8> {
self.raw_data self.raw_data
@ -66,8 +71,8 @@ impl DataBlob {
hasher.finalize() hasher.finalize()
} }
/// verify the CRC32 checksum // verify the CRC32 checksum
pub fn verify_crc(&self) -> Result<(), Error> { fn verify_crc(&self) -> Result<(), Error> {
let expected_crc = self.compute_crc(); let expected_crc = self.compute_crc();
if expected_crc != self.crc() { if expected_crc != self.crc() {
bail!("Data blob has wrong CRC checksum."); bail!("Data blob has wrong CRC checksum.");
@ -180,16 +185,23 @@ impl DataBlob {
} }
/// Decode blob data /// Decode blob data
pub fn decode(&self, config: Option<&CryptConfig>) -> Result<Vec<u8>, Error> { pub fn decode(&self, config: Option<&CryptConfig>, digest: Option<&[u8; 32]>) -> Result<Vec<u8>, Error> {
let magic = self.magic(); let magic = self.magic();
if magic == &UNCOMPRESSED_BLOB_MAGIC_1_0 { if magic == &UNCOMPRESSED_BLOB_MAGIC_1_0 {
let data_start = std::mem::size_of::<DataBlobHeader>(); let data_start = std::mem::size_of::<DataBlobHeader>();
Ok(self.raw_data[data_start..].to_vec()) let data = self.raw_data[data_start..].to_vec();
if let Some(digest) = digest {
Self::verify_digest(&data, None, digest)?;
}
Ok(data)
} else if magic == &COMPRESSED_BLOB_MAGIC_1_0 { } else if magic == &COMPRESSED_BLOB_MAGIC_1_0 {
let data_start = std::mem::size_of::<DataBlobHeader>(); let data_start = std::mem::size_of::<DataBlobHeader>();
let data = zstd::block::decompress(&self.raw_data[data_start..], MAX_BLOB_SIZE)?; let data = zstd::block::decompress(&self.raw_data[data_start..], MAX_BLOB_SIZE)?;
if let Some(digest) = digest {
Self::verify_digest(&data, None, digest)?;
}
Ok(data) Ok(data)
} else if magic == &ENCR_COMPR_BLOB_MAGIC_1_0 || magic == &ENCRYPTED_BLOB_MAGIC_1_0 { } else if magic == &ENCR_COMPR_BLOB_MAGIC_1_0 || magic == &ENCRYPTED_BLOB_MAGIC_1_0 {
let header_len = std::mem::size_of::<EncryptedDataBlobHeader>(); let header_len = std::mem::size_of::<EncryptedDataBlobHeader>();
@ -203,6 +215,9 @@ impl DataBlob {
} else { } else {
config.decode_uncompressed_chunk(&self.raw_data[header_len..], &head.iv, &head.tag)? config.decode_uncompressed_chunk(&self.raw_data[header_len..], &head.iv, &head.tag)?
}; };
if let Some(digest) = digest {
Self::verify_digest(&data, Some(config), digest)?;
}
Ok(data) Ok(data)
} else { } else {
bail!("unable to decrypt blob - missing CryptConfig"); bail!("unable to decrypt blob - missing CryptConfig");
@ -212,13 +227,17 @@ impl DataBlob {
} }
} }
/// Load blob from ``reader`` /// Load blob from ``reader``, verify CRC
pub fn load(reader: &mut dyn std::io::Read) -> Result<Self, Error> { pub fn load_from_reader(reader: &mut dyn std::io::Read) -> Result<Self, Error> {
let mut data = Vec::with_capacity(1024*1024); let mut data = Vec::with_capacity(1024*1024);
reader.read_to_end(&mut data)?; reader.read_to_end(&mut data)?;
Self::from_raw(data) let blob = Self::from_raw(data)?;
blob.verify_crc()?;
Ok(blob)
} }
/// Create Instance from raw data /// Create Instance from raw data
@ -254,7 +273,7 @@ impl DataBlob {
/// To do that, we need to decompress data first. Please note that /// To do that, we need to decompress data first. Please note that
/// this is not possible for encrypted chunks. This function simply returns Ok /// this is not possible for encrypted chunks. This function simply returns Ok
/// for encrypted chunks. /// for encrypted chunks.
/// Note: This does not call verify_crc /// Note: This does not call verify_crc, because this is usually done in load
pub fn verify_unencrypted( pub fn verify_unencrypted(
&self, &self,
expected_chunk_size: usize, expected_chunk_size: usize,
@ -267,12 +286,26 @@ impl DataBlob {
return Ok(()); return Ok(());
} }
let data = self.decode(None)?; // verifies digest!
let data = self.decode(None, Some(expected_digest))?;
if expected_chunk_size != data.len() { if expected_chunk_size != data.len() {
bail!("detected chunk with wrong length ({} != {})", expected_chunk_size, data.len()); bail!("detected chunk with wrong length ({} != {})", expected_chunk_size, data.len());
} }
let digest = openssl::sha::sha256(&data);
Ok(())
}
fn verify_digest(
data: &[u8],
config: Option<&CryptConfig>,
expected_digest: &[u8; 32],
) -> Result<(), Error> {
let digest = match config {
Some(config) => config.compute_digest(data),
None => openssl::sha::sha256(&data),
};
if &digest != expected_digest { if &digest != expected_digest {
bail!("detected chunk with wrong digest."); bail!("detected chunk with wrong digest.");
} }
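A hedged usage sketch of the extended decode() (datastore, digest and crypt_config are placeholder names): the caller passes the digest recorded in the index, and decode() checks the decoded data against it, using the keyed digest from the CryptConfig for encrypted chunks as verify_digest above shows:

let chunk = datastore.load_chunk(&digest)?;   // CRC is already checked by load_from_reader
let data = chunk.decode(crypt_config.as_ref().map(std::sync::Arc::as_ref), Some(&digest))?;
// `data` is only returned if the recomputed digest matched `digest`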

View File

@ -7,6 +7,9 @@ use std::convert::TryFrom;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use lazy_static::lazy_static; use lazy_static::lazy_static;
use chrono::{DateTime, Utc}; use chrono::{DateTime, Utc};
use serde_json::Value;
use proxmox::tools::fs::{replace_file, CreateOptions};
use super::backup_info::{BackupGroup, BackupDir}; use super::backup_info::{BackupGroup, BackupDir};
use super::chunk_store::ChunkStore; use super::chunk_store::ChunkStore;
@ -15,11 +18,11 @@ use super::fixed_index::{FixedIndexReader, FixedIndexWriter};
use super::manifest::{MANIFEST_BLOB_NAME, CLIENT_LOG_BLOB_NAME, BackupManifest}; use super::manifest::{MANIFEST_BLOB_NAME, CLIENT_LOG_BLOB_NAME, BackupManifest};
use super::index::*; use super::index::*;
use super::{DataBlob, ArchiveType, archive_type}; use super::{DataBlob, ArchiveType, archive_type};
use crate::backup::CryptMode;
use crate::config::datastore; use crate::config::datastore;
use crate::server::WorkerTask; use crate::server::WorkerTask;
use crate::tools; use crate::tools;
use crate::api2::types::GarbageCollectionStatus; use crate::tools::fs::{lock_dir_noblock, DirLockGuard};
use crate::api2::types::{GarbageCollectionStatus, Userid};
lazy_static! { lazy_static! {
static ref DATASTORE_MAP: Mutex<HashMap<String, Arc<DataStore>>> = Mutex::new(HashMap::new()); static ref DATASTORE_MAP: Mutex<HashMap<String, Arc<DataStore>>> = Mutex::new(HashMap::new());
@ -197,6 +200,8 @@ impl DataStore {
let full_path = self.group_path(backup_group); let full_path = self.group_path(backup_group);
let _guard = tools::fs::lock_dir_noblock(&full_path, "backup group", "possible running backup")?;
log::info!("removing backup group {:?}", full_path); log::info!("removing backup group {:?}", full_path);
std::fs::remove_dir_all(&full_path) std::fs::remove_dir_all(&full_path)
.map_err(|err| { .map_err(|err| {
@ -211,10 +216,15 @@ impl DataStore {
} }
/// Remove a backup directory including all content /// Remove a backup directory including all content
pub fn remove_backup_dir(&self, backup_dir: &BackupDir) -> Result<(), Error> { pub fn remove_backup_dir(&self, backup_dir: &BackupDir, force: bool) -> Result<(), Error> {
let full_path = self.snapshot_path(backup_dir); let full_path = self.snapshot_path(backup_dir);
let _guard;
if !force {
_guard = lock_dir_noblock(&full_path, "snapshot", "possibly running or used as base")?;
}
log::info!("removing backup snapshot {:?}", full_path); log::info!("removing backup snapshot {:?}", full_path);
std::fs::remove_dir_all(&full_path) std::fs::remove_dir_all(&full_path)
.map_err(|err| { .map_err(|err| {
@ -246,16 +256,21 @@ impl DataStore {
/// Returns the backup owner. /// Returns the backup owner.
/// ///
/// The backup owner is the user who first created the backup group. /// The backup owner is the user who first created the backup group.
pub fn get_owner(&self, backup_group: &BackupGroup) -> Result<String, Error> { pub fn get_owner(&self, backup_group: &BackupGroup) -> Result<Userid, Error> {
let mut full_path = self.base_path(); let mut full_path = self.base_path();
full_path.push(backup_group.group_path()); full_path.push(backup_group.group_path());
full_path.push("owner"); full_path.push("owner");
let owner = proxmox::tools::fs::file_read_firstline(full_path)?; let owner = proxmox::tools::fs::file_read_firstline(full_path)?;
Ok(owner.trim_end().to_string()) // remove trailing newline Ok(owner.trim_end().parse()?) // remove trailing newline
} }
/// Set the backup owner. /// Set the backup owner.
pub fn set_owner(&self, backup_group: &BackupGroup, userid: &str, force: bool) -> Result<(), Error> { pub fn set_owner(
&self,
backup_group: &BackupGroup,
userid: &Userid,
force: bool,
) -> Result<(), Error> {
let mut path = self.base_path(); let mut path = self.base_path();
path.push(backup_group.group_path()); path.push(backup_group.group_path());
path.push("owner"); path.push("owner");
@ -279,12 +294,17 @@ impl DataStore {
Ok(()) Ok(())
} }
/// Create a backup group if it does not already exist. /// Create (if it does not already exist) and lock a backup group
/// ///
/// And set the owner to 'userid'. If the group already exists, it returns the /// And set the owner to 'userid'. If the group already exists, it returns the
/// current owner (instead of setting the owner). /// current owner (instead of setting the owner).
pub fn create_backup_group(&self, backup_group: &BackupGroup, userid: &str) -> Result<String, Error> { ///
/// This also acquires an exclusive lock on the directory and returns the lock guard.
pub fn create_locked_backup_group(
&self,
backup_group: &BackupGroup,
userid: &Userid,
) -> Result<(Userid, DirLockGuard), Error> {
// create intermediate path first: // create intermediate path first:
let base_path = self.base_path(); let base_path = self.base_path();
@ -297,13 +317,15 @@ impl DataStore {
// create the last component now // create the last component now
match std::fs::create_dir(&full_path) { match std::fs::create_dir(&full_path) {
Ok(_) => { Ok(_) => {
let guard = lock_dir_noblock(&full_path, "backup group", "another backup is already running")?;
self.set_owner(backup_group, userid, false)?; self.set_owner(backup_group, userid, false)?;
let owner = self.get_owner(backup_group)?; // just to be sure let owner = self.get_owner(backup_group)?; // just to be sure
Ok(owner) Ok((owner, guard))
} }
Err(ref err) if err.kind() == io::ErrorKind::AlreadyExists => { Err(ref err) if err.kind() == io::ErrorKind::AlreadyExists => {
let guard = lock_dir_noblock(&full_path, "backup group", "another backup is already running")?;
let owner = self.get_owner(backup_group)?; // just to be sure let owner = self.get_owner(backup_group)?; // just to be sure
Ok(owner) Ok((owner, guard))
} }
Err(err) => bail!("unable to create backup group {:?} - {}", full_path, err), Err(err) => bail!("unable to create backup group {:?} - {}", full_path, err),
} }
@ -312,15 +334,20 @@ impl DataStore {
/// Creates a new backup snapshot inside a BackupGroup /// Creates a new backup snapshot inside a BackupGroup
/// ///
/// The BackupGroup directory needs to exist. /// The BackupGroup directory needs to exist.
pub fn create_backup_dir(&self, backup_dir: &BackupDir) -> Result<(PathBuf, bool), io::Error> { pub fn create_locked_backup_dir(&self, backup_dir: &BackupDir)
-> Result<(PathBuf, bool, DirLockGuard), Error>
{
let relative_path = backup_dir.relative_path(); let relative_path = backup_dir.relative_path();
let mut full_path = self.base_path(); let mut full_path = self.base_path();
full_path.push(&relative_path); full_path.push(&relative_path);
let lock = ||
lock_dir_noblock(&full_path, "snapshot", "internal error - tried creating snapshot that's already in use");
match std::fs::create_dir(&full_path) { match std::fs::create_dir(&full_path) {
Ok(_) => Ok((relative_path, true)), Ok(_) => Ok((relative_path, true, lock()?)),
Err(ref e) if e.kind() == io::ErrorKind::AlreadyExists => Ok((relative_path, false)), Err(ref e) if e.kind() == io::ErrorKind::AlreadyExists => Ok((relative_path, false, lock()?)),
Err(e) => Err(e) Err(e) => Err(e.into())
} }
} }
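All of the guards above come from the new lock_dir_noblock helper, whose implementation is not part of this hunk. A minimal sketch of such a helper (an assumption, not the repository's exact code) could take a non-blocking exclusive flock on the directory and hand back a guard that releases the lock on drop:

use std::fs::File;
use std::os::unix::io::AsRawFd;
use std::path::Path;
use anyhow::{format_err, Error};

fn lock_dir_noblock_sketch(path: &Path, what: &str, would_block_msg: &str) -> Result<File, Error> {
    // opening a directory read-only is enough to flock() it on Linux
    let handle = File::open(path)
        .map_err(|err| format_err!("unable to open {} directory {:?} - {}", what, path, err))?;
    let rc = unsafe { libc::flock(handle.as_raw_fd(), libc::LOCK_EX | libc::LOCK_NB) };
    if rc != 0 {
        let err = std::io::Error::last_os_error();
        return Err(format_err!("unable to lock {} directory {:?} - {} ({})", what, path, would_block_msg, err));
    }
    // keeping the File alive keeps the lock; dropping it releases it
    Ok(handle)
}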
@ -391,8 +418,8 @@ impl DataStore {
tools::fail_on_shutdown()?; tools::fail_on_shutdown()?;
let digest = index.index_digest(pos).unwrap(); let digest = index.index_digest(pos).unwrap();
if let Err(err) = self.chunk_store.touch_chunk(digest) { if let Err(err) = self.chunk_store.touch_chunk(digest) {
bail!("unable to access chunk {}, required by {:?} - {}", worker.warn(&format!("warning: unable to access chunk {}, required by {:?} - {}",
proxmox::tools::digest_to_hex(digest), file_name, err); proxmox::tools::digest_to_hex(digest), file_name, err));
} }
} }
Ok(()) Ok(())
@ -447,7 +474,7 @@ impl DataStore {
self.mark_used_chunks(&mut gc_status, &worker)?; self.mark_used_chunks(&mut gc_status, &worker)?;
worker.log("Start GC phase2 (sweep unused chunks)"); worker.log("Start GC phase2 (sweep unused chunks)");
self.chunk_store.sweep_unused_chunks(oldest_writer, &mut gc_status, &worker)?; self.chunk_store.sweep_unused_chunks(oldest_writer, now, &mut gc_status, &worker)?;
worker.log(&format!("Removed bytes: {}", gc_status.removed_bytes)); worker.log(&format!("Removed bytes: {}", gc_status.removed_bytes));
worker.log(&format!("Removed chunks: {}", gc_status.removed_chunks)); worker.log(&format!("Removed chunks: {}", gc_status.removed_chunks));
@ -498,31 +525,69 @@ impl DataStore {
self.chunk_store.insert_chunk(chunk, digest) self.chunk_store.insert_chunk(chunk, digest)
} }
pub fn verify_stored_chunk(&self, digest: &[u8; 32], expected_chunk_size: u64) -> Result<(), Error> { pub fn load_blob(&self, backup_dir: &BackupDir, filename: &str) -> Result<DataBlob, Error> {
let blob = self.chunk_store.read_chunk(digest)?;
blob.verify_crc()?;
blob.verify_unencrypted(expected_chunk_size as usize, digest)?;
Ok(())
}
pub fn load_blob(&self, backup_dir: &BackupDir, filename: &str) -> Result<(DataBlob, u64), Error> {
let mut path = self.base_path(); let mut path = self.base_path();
path.push(backup_dir.relative_path()); path.push(backup_dir.relative_path());
path.push(filename); path.push(filename);
let raw_data = proxmox::tools::fs::file_get_contents(&path)?; proxmox::try_block!({
let raw_size = raw_data.len() as u64; let mut file = std::fs::File::open(&path)?;
let blob = DataBlob::from_raw(raw_data)?; DataBlob::load_from_reader(&mut file)
Ok((blob, raw_size)) }).map_err(|err| format_err!("unable to load blob '{:?}' - {}", path, err))
} }
pub fn load_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
let (chunk_path, digest_str) = self.chunk_store.chunk_path(digest);
proxmox::try_block!({
let mut file = std::fs::File::open(&chunk_path)?;
DataBlob::load_from_reader(&mut file)
}).map_err(|err| format_err!(
"store '{}', unable to load chunk '{}' - {}",
self.name(),
digest_str,
err,
))
}
pub fn load_manifest( pub fn load_manifest(
&self, &self,
backup_dir: &BackupDir, backup_dir: &BackupDir,
) -> Result<(BackupManifest, CryptMode, u64), Error> { ) -> Result<(BackupManifest, u64), Error> {
let (blob, raw_size) = self.load_blob(backup_dir, MANIFEST_BLOB_NAME)?; let blob = self.load_blob(backup_dir, MANIFEST_BLOB_NAME)?;
let crypt_mode = blob.crypt_mode()?; let raw_size = blob.raw_size();
let manifest = BackupManifest::try_from(blob)?; let manifest = BackupManifest::try_from(blob)?;
Ok((manifest, crypt_mode, raw_size)) Ok((manifest, raw_size))
}
pub fn load_manifest_json(
&self,
backup_dir: &BackupDir,
) -> Result<Value, Error> {
let blob = self.load_blob(backup_dir, MANIFEST_BLOB_NAME)?;
// no expected digest available
let manifest_data = blob.decode(None, None)?;
let manifest: Value = serde_json::from_slice(&manifest_data[..])?;
Ok(manifest)
}
pub fn store_manifest(
&self,
backup_dir: &BackupDir,
manifest: Value,
) -> Result<(), Error> {
let manifest = serde_json::to_string_pretty(&manifest)?;
let blob = DataBlob::encode(manifest.as_bytes(), None, true)?;
let raw_data = blob.raw_data();
let mut path = self.base_path();
path.push(backup_dir.relative_path());
path.push(MANIFEST_BLOB_NAME);
replace_file(&path, raw_data, CreateOptions::new())?;
Ok(())
} }
} }
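The load_manifest_json/store_manifest pair is meant for updating only the unprotected part of a (possibly signed) manifest in place. A hedged sketch of such an update, using an illustrative key name that is not taken from the repository:

let mut manifest = datastore.load_manifest_json(&backup_dir)?;
// touch only the unprotected object; the signed portion stays untouched
manifest["unprotected"]["note"] = serde_json::json!("illustrative value");
datastore.store_manifest(&backup_dir, manifest)?;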

View File

@ -49,6 +49,20 @@ pub struct FileInfo {
pub csum: [u8; 32], pub csum: [u8; 32],
} }
impl FileInfo {
/// Return expected CryptMode of referenced chunks
///
/// Encrypted Indices should only reference encrypted chunks, while signed or plain indices
/// should only reference plain chunks.
pub fn chunk_crypt_mode (&self) -> CryptMode {
match self.crypt_mode {
CryptMode::Encrypt => CryptMode::Encrypt,
CryptMode::SignOnly | CryptMode::None => CryptMode::None,
}
}
}
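A hedged usage sketch of the new helper (client, crypt_config, archive_name and most_used are placeholders taken from the call sites further down): the chunk reader is constructed with the CryptMode the index's chunks must have, so mismatching chunks can be rejected at read time:

let file_info = manifest.lookup_file_info(&archive_name)?;
let chunk_reader = RemoteChunkReader::new(
    client.clone(),
    crypt_config.clone(),
    file_info.chunk_crypt_mode(), // Encrypt for encrypted indices, otherwise None
    most_used,
);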
#[derive(Serialize, Deserialize)] #[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")] #[serde(rename_all="kebab-case")]
pub struct BackupManifest { pub struct BackupManifest {
@ -58,6 +72,7 @@ pub struct BackupManifest {
files: Vec<FileInfo>, files: Vec<FileInfo>,
#[serde(default="empty_value")] // to be compatible with < 0.8.0 backups #[serde(default="empty_value")] // to be compatible with < 0.8.0 backups
pub unprotected: Value, pub unprotected: Value,
pub signature: Option<String>,
} }
#[derive(PartialEq)] #[derive(PartialEq)]
@ -91,6 +106,7 @@ impl BackupManifest {
backup_time: snapshot.backup_time().timestamp(), backup_time: snapshot.backup_time().timestamp(),
files: Vec::new(), files: Vec::new(),
unprotected: json!({}), unprotected: json!({}),
signature: None,
} }
} }
@ -160,12 +176,12 @@ impl BackupManifest {
keys.sort(); keys.sort();
let mut iter = keys.into_iter(); let mut iter = keys.into_iter();
if let Some(key) = iter.next() { if let Some(key) = iter.next() {
Self::write_canonical_json(&key.into(), output)?; serde_json::to_writer(&mut *output, &key)?;
output.push(b':'); output.push(b':');
Self::write_canonical_json(&map[key], output)?; Self::write_canonical_json(&map[key], output)?;
for key in iter { for key in iter {
output.push(b','); output.push(b',');
Self::write_canonical_json(&key.into(), output)?; serde_json::to_writer(&mut *output, &key)?;
output.push(b':'); output.push(b':');
Self::write_canonical_json(&map[key], output)?; Self::write_canonical_json(&map[key], output)?;
} }
@ -238,7 +254,8 @@ impl TryFrom<super::DataBlob> for BackupManifest {
type Error = Error; type Error = Error;
fn try_from(blob: super::DataBlob) -> Result<Self, Error> { fn try_from(blob: super::DataBlob) -> Result<Self, Error> {
let data = blob.decode(None) // no expected digest available
let data = blob.decode(None, None)
.map_err(|err| format_err!("decode backup manifest blob failed - {}", err))?; .map_err(|err| format_err!("decode backup manifest blob failed - {}", err))?;
let json: Value = serde_json::from_slice(&data[..]) let json: Value = serde_json::from_slice(&data[..])
.map_err(|err| format_err!("unable to parse backup manifest json - {}", err))?; .map_err(|err| format_err!("unable to parse backup manifest json - {}", err))?;

View File

@ -53,7 +53,7 @@ fn remove_incomplete_snapshots(
let mut keep_unfinished = true; let mut keep_unfinished = true;
for info in list.iter() { for info in list.iter() {
// backup is considered unfinished if there is no manifest // backup is considered unfinished if there is no manifest
if info.files.iter().any(|name| name == super::MANIFEST_BLOB_NAME) { if info.is_finished() {
// There is a new finished backup, so there is no need // There is a new finished backup, so there is no need
// to keep older unfinished backups. // to keep older unfinished backups.
keep_unfinished = false; keep_unfinished = false;

View File

@ -2,9 +2,9 @@ use std::future::Future;
use std::pin::Pin; use std::pin::Pin;
use std::sync::Arc; use std::sync::Arc;
use anyhow::Error; use anyhow::{bail, Error};
use super::crypt_config::CryptConfig; use super::crypt_config::{CryptConfig, CryptMode};
use super::data_blob::DataBlob; use super::data_blob::DataBlob;
use super::datastore::DataStore; use super::datastore::DataStore;
@ -21,33 +21,47 @@ pub trait ReadChunk {
pub struct LocalChunkReader { pub struct LocalChunkReader {
store: Arc<DataStore>, store: Arc<DataStore>,
crypt_config: Option<Arc<CryptConfig>>, crypt_config: Option<Arc<CryptConfig>>,
crypt_mode: CryptMode,
} }
impl LocalChunkReader { impl LocalChunkReader {
pub fn new(store: Arc<DataStore>, crypt_config: Option<Arc<CryptConfig>>) -> Self { pub fn new(store: Arc<DataStore>, crypt_config: Option<Arc<CryptConfig>>, crypt_mode: CryptMode) -> Self {
Self { Self {
store, store,
crypt_config, crypt_config,
crypt_mode,
}
}
fn ensure_crypt_mode(&self, chunk_mode: CryptMode) -> Result<(), Error> {
match self.crypt_mode {
CryptMode::Encrypt => {
match chunk_mode {
CryptMode::Encrypt => Ok(()),
CryptMode::SignOnly | CryptMode::None => bail!("Index and chunk CryptMode don't match."),
}
},
CryptMode::SignOnly | CryptMode::None => {
match chunk_mode {
CryptMode::Encrypt => bail!("Index and chunk CryptMode don't match."),
CryptMode::SignOnly | CryptMode::None => Ok(()),
}
},
} }
} }
} }
impl ReadChunk for LocalChunkReader { impl ReadChunk for LocalChunkReader {
fn read_raw_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> { fn read_raw_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
let (path, _) = self.store.chunk_path(digest); let chunk = self.store.load_chunk(digest)?;
let raw_data = proxmox::tools::fs::file_get_contents(&path)?; self.ensure_crypt_mode(chunk.crypt_mode()?)?;
let chunk = DataBlob::from_raw(raw_data)?;
chunk.verify_crc()?;
Ok(chunk) Ok(chunk)
} }
fn read_chunk(&self, digest: &[u8; 32]) -> Result<Vec<u8>, Error> { fn read_chunk(&self, digest: &[u8; 32]) -> Result<Vec<u8>, Error> {
let chunk = ReadChunk::read_raw_chunk(self, digest)?; let chunk = ReadChunk::read_raw_chunk(self, digest)?;
let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref))?; let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref), Some(digest))?;
// fixme: verify digest?
Ok(raw_data) Ok(raw_data)
} }
@ -76,8 +90,9 @@ impl AsyncReadChunk for LocalChunkReader {
let (path, _) = self.store.chunk_path(digest); let (path, _) = self.store.chunk_path(digest);
let raw_data = tokio::fs::read(&path).await?; let raw_data = tokio::fs::read(&path).await?;
let chunk = DataBlob::from_raw(raw_data)?;
chunk.verify_crc()?; let chunk = DataBlob::load_from_reader(&mut &raw_data[..])?;
self.ensure_crypt_mode(chunk.crypt_mode()?)?;
Ok(chunk) Ok(chunk)
}) })
@ -90,7 +105,7 @@ impl AsyncReadChunk for LocalChunkReader {
Box::pin(async move { Box::pin(async move {
let chunk = AsyncReadChunk::read_raw_chunk(self, digest).await?; let chunk = AsyncReadChunk::read_raw_chunk(self, digest).await?;
let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref))?; let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref), Some(digest))?;
// fixme: verify digest? // fixme: verify digest?

View File

@ -1,58 +1,118 @@
use std::collections::HashSet;
use anyhow::{bail, Error}; use anyhow::{bail, Error};
use crate::server::WorkerTask; use crate::server::WorkerTask;
use super::{ use super::{
DataStore, BackupGroup, BackupDir, BackupInfo, IndexFile, DataStore, BackupGroup, BackupDir, BackupInfo, IndexFile,
ENCR_COMPR_BLOB_MAGIC_1_0, ENCRYPTED_BLOB_MAGIC_1_0, CryptMode,
FileInfo, ArchiveType, archive_type, FileInfo, ArchiveType, archive_type,
}; };
fn verify_blob(datastore: &DataStore, backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> { fn verify_blob(datastore: &DataStore, backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> {
let (blob, raw_size) = datastore.load_blob(backup_dir, &info.filename)?; let blob = datastore.load_blob(backup_dir, &info.filename)?;
let csum = openssl::sha::sha256(blob.raw_data()); let raw_size = blob.raw_size();
if raw_size != info.size { if raw_size != info.size {
bail!("wrong size ({} != {})", info.size, raw_size); bail!("wrong size ({} != {})", info.size, raw_size);
} }
let csum = openssl::sha::sha256(blob.raw_data());
if csum != info.csum { if csum != info.csum {
bail!("wrong index checksum"); bail!("wrong index checksum");
} }
blob.verify_crc()?; match blob.crypt_mode()? {
CryptMode::Encrypt => Ok(()),
let magic = blob.magic(); CryptMode::None => {
// digest already verified above
if magic == &ENCR_COMPR_BLOB_MAGIC_1_0 || magic == &ENCRYPTED_BLOB_MAGIC_1_0 { blob.decode(None, None)?;
return Ok(()); Ok(())
},
CryptMode::SignOnly => bail!("Invalid CryptMode for blob"),
} }
blob.decode(None)?;
Ok(())
} }
fn verify_index_chunks( fn verify_index_chunks(
datastore: &DataStore, datastore: &DataStore,
index: Box<dyn IndexFile>, index: Box<dyn IndexFile>,
verified_chunks: &mut HashSet<[u8;32]>,
corrupt_chunks: &mut HashSet<[u8; 32]>,
crypt_mode: CryptMode,
worker: &WorkerTask, worker: &WorkerTask,
) -> Result<(), Error> { ) -> Result<(), Error> {
let mut errors = 0;
for pos in 0..index.index_count() { for pos in 0..index.index_count() {
worker.fail_on_abort()?; worker.fail_on_abort()?;
let info = index.chunk_info(pos).unwrap(); let info = index.chunk_info(pos).unwrap();
let size = info.range.end - info.range.start; let size = info.range.end - info.range.start;
datastore.verify_stored_chunk(&info.digest, size)?;
let chunk = match datastore.load_chunk(&info.digest) {
Err(err) => {
corrupt_chunks.insert(info.digest);
worker.log(format!("can't verify chunk, load failed - {}", err));
errors += 1;
continue;
},
Ok(chunk) => chunk,
};
let chunk_crypt_mode = match chunk.crypt_mode() {
Err(err) => {
corrupt_chunks.insert(info.digest);
worker.log(format!("can't verify chunk, unknown CryptMode - {}", err));
errors += 1;
continue;
},
Ok(mode) => mode,
};
if chunk_crypt_mode != crypt_mode {
worker.log(format!(
"chunk CryptMode {:?} does not match index CryptMode {:?}",
chunk_crypt_mode,
crypt_mode
));
errors += 1;
}
if !verified_chunks.contains(&info.digest) {
if !corrupt_chunks.contains(&info.digest) {
if let Err(err) = chunk.verify_unencrypted(size as usize, &info.digest) {
corrupt_chunks.insert(info.digest);
worker.log(format!("{}", err));
errors += 1;
} else {
verified_chunks.insert(info.digest);
}
} else {
let digest_str = proxmox::tools::digest_to_hex(&info.digest);
worker.log(format!("chunk {} was marked as corrupt", digest_str));
errors += 1;
}
}
}
if errors > 0 {
bail!("chunks could not be verified");
} }
Ok(()) Ok(())
} }
fn verify_fixed_index(datastore: &DataStore, backup_dir: &BackupDir, info: &FileInfo, worker: &WorkerTask) -> Result<(), Error> { fn verify_fixed_index(
datastore: &DataStore,
backup_dir: &BackupDir,
info: &FileInfo,
verified_chunks: &mut HashSet<[u8;32]>,
corrupt_chunks: &mut HashSet<[u8;32]>,
worker: &WorkerTask,
) -> Result<(), Error> {
let mut path = backup_dir.relative_path(); let mut path = backup_dir.relative_path();
path.push(&info.filename); path.push(&info.filename);
@ -68,10 +128,18 @@ fn verify_fixed_index(datastore: &DataStore, backup_dir: &BackupDir, info: &File
bail!("wrong index checksum"); bail!("wrong index checksum");
} }
verify_index_chunks(datastore, Box::new(index), worker) verify_index_chunks(datastore, Box::new(index), verified_chunks, corrupt_chunks, info.chunk_crypt_mode(), worker)
} }
fn verify_dynamic_index(datastore: &DataStore, backup_dir: &BackupDir, info: &FileInfo, worker: &WorkerTask) -> Result<(), Error> { fn verify_dynamic_index(
datastore: &DataStore,
backup_dir: &BackupDir,
info: &FileInfo,
verified_chunks: &mut HashSet<[u8;32]>,
corrupt_chunks: &mut HashSet<[u8;32]>,
worker: &WorkerTask,
) -> Result<(), Error> {
let mut path = backup_dir.relative_path(); let mut path = backup_dir.relative_path();
path.push(&info.filename); path.push(&info.filename);
@ -86,7 +154,7 @@ fn verify_dynamic_index(datastore: &DataStore, backup_dir: &BackupDir, info: &Fi
bail!("wrong index checksum"); bail!("wrong index checksum");
} }
verify_index_chunks(datastore, Box::new(index), worker) verify_index_chunks(datastore, Box::new(index), verified_chunks, corrupt_chunks, info.chunk_crypt_mode(), worker)
} }
/// Verify a single backup snapshot /// Verify a single backup snapshot
@ -98,10 +166,16 @@ fn verify_dynamic_index(datastore: &DataStore, backup_dir: &BackupDir, info: &Fi
/// - Ok(true) if verify is successful /// - Ok(true) if verify is successful
/// - Ok(false) if there were verification errors /// - Ok(false) if there were verification errors
/// - Err(_) if task was aborted /// - Err(_) if task was aborted
pub fn verify_backup_dir(datastore: &DataStore, backup_dir: &BackupDir, worker: &WorkerTask) -> Result<bool, Error> { pub fn verify_backup_dir(
datastore: &DataStore,
backup_dir: &BackupDir,
verified_chunks: &mut HashSet<[u8;32]>,
corrupt_chunks: &mut HashSet<[u8;32]>,
worker: &WorkerTask
) -> Result<bool, Error> {
let manifest = match datastore.load_manifest(&backup_dir) { let manifest = match datastore.load_manifest(&backup_dir) {
Ok((manifest, _crypt_mode, _)) => manifest, Ok((manifest, _)) => manifest,
Err(err) => { Err(err) => {
worker.log(format!("verify {}:{} - manifest load error: {}", datastore.name(), backup_dir, err)); worker.log(format!("verify {}:{} - manifest load error: {}", datastore.name(), backup_dir, err));
return Ok(false); return Ok(false);
@ -116,8 +190,24 @@ pub fn verify_backup_dir(datastore: &DataStore, backup_dir: &BackupDir, worker:
let result = proxmox::try_block!({ let result = proxmox::try_block!({
worker.log(format!(" check {}", info.filename)); worker.log(format!(" check {}", info.filename));
match archive_type(&info.filename)? { match archive_type(&info.filename)? {
ArchiveType::FixedIndex => verify_fixed_index(&datastore, &backup_dir, info, worker), ArchiveType::FixedIndex =>
ArchiveType::DynamicIndex => verify_dynamic_index(&datastore, &backup_dir, info, worker), verify_fixed_index(
&datastore,
&backup_dir,
info,
verified_chunks,
corrupt_chunks,
worker
),
ArchiveType::DynamicIndex =>
verify_dynamic_index(
&datastore,
&backup_dir,
info,
verified_chunks,
corrupt_chunks,
worker
),
ArchiveType::Blob => verify_blob(&datastore, &backup_dir, info), ArchiveType::Blob => verify_blob(&datastore, &backup_dir, info),
} }
}); });
@ -138,31 +228,32 @@ pub fn verify_backup_dir(datastore: &DataStore, backup_dir: &BackupDir, worker:
/// Errors are logged to the worker log. /// Errors are logged to the worker log.
/// ///
/// Returns /// Returns
/// - Ok(true) if verify is successful /// - Ok(failed_dirs) where failed_dirs had verification errors
/// - Ok(false) if there were verification errors
/// - Err(_) if task was aborted /// - Err(_) if task was aborted
pub fn verify_backup_group(datastore: &DataStore, group: &BackupGroup, worker: &WorkerTask) -> Result<bool, Error> { pub fn verify_backup_group(datastore: &DataStore, group: &BackupGroup, worker: &WorkerTask) -> Result<Vec<String>, Error> {
let mut errors = Vec::new();
let mut list = match group.list_backups(&datastore.base_path()) { let mut list = match group.list_backups(&datastore.base_path()) {
Ok(list) => list, Ok(list) => list,
Err(err) => { Err(err) => {
worker.log(format!("verify group {}:{} - unable to list backups: {}", datastore.name(), group, err)); worker.log(format!("verify group {}:{} - unable to list backups: {}", datastore.name(), group, err));
return Ok(false); return Ok(errors);
} }
}; };
worker.log(format!("verify group {}:{}", datastore.name(), group)); worker.log(format!("verify group {}:{}", datastore.name(), group));
let mut error_count = 0; let mut verified_chunks = HashSet::with_capacity(1024*16); // start with 16384 chunks (up to 65GB)
let mut corrupt_chunks = HashSet::with_capacity(64); // start with 64 chunks since we assume there are few corrupt ones
BackupInfo::sort_list(&mut list, false); // newest first BackupInfo::sort_list(&mut list, false); // newest first
for info in list { for info in list {
if !verify_backup_dir(datastore, &info.backup_dir, worker)? { if !verify_backup_dir(datastore, &info.backup_dir, &mut verified_chunks, &mut corrupt_chunks, worker)?{
error_count += 1; errors.push(info.backup_dir.to_string());
} }
} }
Ok(error_count == 0) Ok(errors)
} }
/// Verify all backups inside a datastore /// Verify all backups inside a datastore
@ -170,27 +261,26 @@ pub fn verify_backup_group(datastore: &DataStore, group: &BackupGroup, worker: &
/// Errors are logged to the worker log. /// Errors are logged to the worker log.
/// ///
/// Returns /// Returns
/// - Ok(true) if verify is successful /// - Ok(failed_dirs) where failed_dirs had verification errors
/// - Ok(false) if there were verification errors
/// - Err(_) if task was aborted /// - Err(_) if task was aborted
pub fn verify_all_backups(datastore: &DataStore, worker: &WorkerTask) -> Result<bool, Error> { pub fn verify_all_backups(datastore: &DataStore, worker: &WorkerTask) -> Result<Vec<String>, Error> {
let mut errors = Vec::new();
let list = match BackupGroup::list_groups(&datastore.base_path()) { let list = match BackupGroup::list_groups(&datastore.base_path()) {
Ok(list) => list, Ok(list) => list,
Err(err) => { Err(err) => {
worker.log(format!("verify datastore {} - unable to list backups: {}", datastore.name(), err)); worker.log(format!("verify datastore {} - unable to list backups: {}", datastore.name(), err));
return Ok(false); return Ok(errors);
} }
}; };
worker.log(format!("verify datastore {}", datastore.name())); worker.log(format!("verify datastore {}", datastore.name()));
let mut error_count = 0;
for group in list { for group in list {
if !verify_backup_group(datastore, &group, worker)? { let mut group_errors = verify_backup_group(datastore, &group, worker)?;
error_count += 1; errors.append(&mut group_errors);
}
} }
Ok(error_count == 0) Ok(errors)
} }
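Returning the list of failed snapshots (instead of a single bool) lets the caller report exactly which snapshots need attention. A hedged sketch of a call site, with datastore and worker as placeholders:

let failed_dirs = verify_all_backups(&datastore, &worker)?;
if failed_dirs.is_empty() {
    worker.log("verify OK");
} else {
    worker.log("Failed to verify following snapshots:");
    for dir in failed_dirs {
        worker.log(format!("\t{}", dir));
    }
}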

View File

@ -184,7 +184,7 @@ pub fn complete_repository(_arg: &str, _param: &HashMap<String, String>) -> Vec<
result result
} }
fn connect(server: &str, userid: &str) -> Result<HttpClient, Error> { fn connect(server: &str, userid: &Userid) -> Result<HttpClient, Error> {
let fingerprint = std::env::var(ENV_VAR_PBS_FINGERPRINT).ok(); let fingerprint = std::env::var(ENV_VAR_PBS_FINGERPRINT).ok();
@ -935,12 +935,18 @@ async fn create_backup(
} }
let mut upload_list = vec![]; let mut upload_list = vec![];
let mut target_set = HashSet::new();
for backupspec in backupspec_list { for backupspec in backupspec_list {
let spec = parse_backup_specification(backupspec.as_str().unwrap())?; let spec = parse_backup_specification(backupspec.as_str().unwrap())?;
let filename = &spec.config_string; let filename = &spec.config_string;
let target = &spec.archive_name; let target = &spec.archive_name;
if target_set.contains(target) {
bail!("got target twice: '{}'", target);
}
target_set.insert(target.to_string());
use std::os::unix::fs::FileTypeExt; use std::os::unix::fs::FileTypeExt;
let metadata = std::fs::metadata(filename) let metadata = std::fs::metadata(filename)
@ -1114,12 +1120,12 @@ async fn create_backup(
} }
if let Some(rsa_encrypted_key) = rsa_encrypted_key { if let Some(rsa_encrypted_key) = rsa_encrypted_key {
let target = "rsa-encrypted.key"; let target = "rsa-encrypted.key.blob";
println!("Upload RSA encoded key to '{:?}' as {}", repo, target); println!("Upload RSA encoded key to '{:?}' as {}", repo, target);
let stats = client let stats = client
.upload_blob_from_data(rsa_encrypted_key, target, false, false) .upload_blob_from_data(rsa_encrypted_key, target, false, false)
.await?; .await?;
manifest.add_file(format!("{}.blob", target), stats.size, stats.csum, crypt_mode)?; manifest.add_file(target.to_string(), stats.size, stats.csum, crypt_mode)?;
// openssl rsautl -decrypt -inkey master-private.pem -in rsa-encrypted.key -out t // openssl rsautl -decrypt -inkey master-private.pem -in rsa-encrypted.key -out t
/* /*
@ -1130,7 +1136,6 @@ async fn create_backup(
println!("TEST {} {:?}", len, buffer2); println!("TEST {} {:?}", len, buffer2);
*/ */
} }
// create manifest (index.json) // create manifest (index.json)
// manifests are never encrypted, but include a signature // manifests are never encrypted, but include a signature
let manifest = manifest.to_string(crypt_config.as_ref().map(Arc::as_ref)) let manifest = manifest.to_string(crypt_config.as_ref().map(Arc::as_ref))
@ -1177,6 +1182,7 @@ fn complete_backup_source(arg: &str, param: &HashMap<String, String>) -> Vec<Str
async fn dump_image<W: Write>( async fn dump_image<W: Write>(
client: Arc<BackupReader>, client: Arc<BackupReader>,
crypt_config: Option<Arc<CryptConfig>>, crypt_config: Option<Arc<CryptConfig>>,
crypt_mode: CryptMode,
index: FixedIndexReader, index: FixedIndexReader,
mut writer: W, mut writer: W,
verbose: bool, verbose: bool,
@ -1184,7 +1190,7 @@ async fn dump_image<W: Write>(
let most_used = index.find_most_used_chunks(8); let most_used = index.find_most_used_chunks(8);
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, most_used); let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, crypt_mode, most_used);
// Note: we avoid using BufferedFixedReader, because that adds an additional buffer/copy // Note: we avoid using BufferedFixedReader, because that adds an additional buffer/copy
// and thus slows down reading. Instead, directly use RemoteChunkReader // and thus slows down reading. Instead, directly use RemoteChunkReader
@ -1335,7 +1341,12 @@ async fn restore(param: Value) -> Result<Value, Error> {
.map_err(|err| format_err!("unable to pipe data - {}", err))?; .map_err(|err| format_err!("unable to pipe data - {}", err))?;
} }
} else if archive_type == ArchiveType::Blob { return Ok(Value::Null);
}
let file_info = manifest.lookup_file_info(&archive_name)?;
if archive_type == ArchiveType::Blob {
let mut reader = client.download_blob(&manifest, &archive_name).await?; let mut reader = client.download_blob(&manifest, &archive_name).await?;
@ -1360,7 +1371,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
let most_used = index.find_most_used_chunks(8); let most_used = index.find_most_used_chunks(8);
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, most_used); let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, file_info.chunk_crypt_mode(), most_used);
let mut reader = BufferedDynamicReader::new(index, chunk_reader); let mut reader = BufferedDynamicReader::new(index, chunk_reader);
@ -1369,6 +1380,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
pxar::decoder::Decoder::from_std(reader)?, pxar::decoder::Decoder::from_std(reader)?,
Path::new(target), Path::new(target),
&[], &[],
true,
proxmox_backup::pxar::Flags::DEFAULT, proxmox_backup::pxar::Flags::DEFAULT,
allow_existing_dirs, allow_existing_dirs,
|path| { |path| {
@ -1376,6 +1388,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
println!("{:?}", path); println!("{:?}", path);
} }
}, },
None,
) )
.map_err(|err| format_err!("error extracting archive - {}", err))?; .map_err(|err| format_err!("error extracting archive - {}", err))?;
} else { } else {
@ -1405,7 +1418,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
.map_err(|err| format_err!("unable to open /dev/stdout - {}", err))? .map_err(|err| format_err!("unable to open /dev/stdout - {}", err))?
}; };
dump_image(client.clone(), crypt_config.clone(), index, &mut writer, verbose).await?; dump_image(client.clone(), crypt_config.clone(), file_info.chunk_crypt_mode(), index, &mut writer, verbose).await?;
} }
Ok(Value::Null) Ok(Value::Null)

View File

@ -59,12 +59,17 @@ fn connect() -> Result<HttpClient, Error> {
.verify_cert(false); // not required for connection to localhost .verify_cert(false); // not required for connection to localhost
let client = if uid.is_root() { let client = if uid.is_root() {
let ticket = assemble_rsa_ticket(private_auth_key(), "PBS", Some("root@pam"), None)?; let ticket = assemble_rsa_ticket(
private_auth_key(),
"PBS",
Some(Userid::root_userid()),
None,
)?;
options = options.password(Some(ticket)); options = options.password(Some(ticket));
HttpClient::new("localhost", "root@pam", options)? HttpClient::new("localhost", Userid::root_userid(), options)?
} else { } else {
options = options.ticket_cache(true).interactive(true); options = options.ticket_cache(true).interactive(true);
HttpClient::new("localhost", "root@pam", options)? HttpClient::new("localhost", Userid::root_userid(), options)?
}; };
Ok(client) Ok(client)

View File

@ -9,6 +9,7 @@ use openssl::ssl::{SslMethod, SslAcceptor, SslFiletype};
use proxmox::try_block; use proxmox::try_block;
use proxmox::api::RpcEnvironmentType; use proxmox::api::RpcEnvironmentType;
use proxmox_backup::api2::types::Userid;
use proxmox_backup::configdir; use proxmox_backup::configdir;
use proxmox_backup::buildcfg; use proxmox_backup::buildcfg;
use proxmox_backup::server; use proxmox_backup::server;
@ -318,7 +319,7 @@ async fn schedule_datastore_garbage_collection() {
if let Err(err) = WorkerTask::new_thread( if let Err(err) = WorkerTask::new_thread(
worker_type, worker_type,
Some(store.clone()), Some(store.clone()),
"backup@pam", Userid::backup_userid().clone(),
false, false,
move |worker| { move |worker| {
worker.log(format!("starting garbage collection on store {}", store)); worker.log(format!("starting garbage collection on store {}", store));
@ -429,7 +430,7 @@ async fn schedule_datastore_prune() {
if let Err(err) = WorkerTask::new_thread( if let Err(err) = WorkerTask::new_thread(
worker_type, worker_type,
Some(store.clone()), Some(store.clone()),
"backup@pam", Userid::backup_userid().clone(),
false, false,
move |worker| { move |worker| {
worker.log(format!("Starting datastore prune on store \"{}\"", store)); worker.log(format!("Starting datastore prune on store \"{}\"", store));
@ -455,7 +456,7 @@ async fn schedule_datastore_prune() {
BackupDir::backup_time_to_string(info.backup_dir.backup_time()))); BackupDir::backup_time_to_string(info.backup_dir.backup_time())));
if !keep { if !keep {
datastore.remove_backup_dir(&info.backup_dir)?; datastore.remove_backup_dir(&info.backup_dir, true)?;
} }
} }
} }
@ -568,14 +569,14 @@ async fn schedule_datastore_sync_jobs() {
} }
}; };
let username = String::from("backup@pam"); let userid = Userid::backup_userid().clone();
let delete = job_config.remove_vanished.unwrap_or(true); let delete = job_config.remove_vanished.unwrap_or(true);
if let Err(err) = WorkerTask::spawn( if let Err(err) = WorkerTask::spawn(
worker_type, worker_type,
Some(job_id.clone()), Some(job_id.clone()),
&username.clone(), userid.clone(),
false, false,
move |worker| async move { move |worker| async move {
worker.log(format!("Starting datastore sync job '{}'", job_id)); worker.log(format!("Starting datastore sync job '{}'", job_id));
@ -594,7 +595,7 @@ async fn schedule_datastore_sync_jobs() {
let src_repo = BackupRepository::new(Some(remote.userid), Some(remote.host), job_config.remote_store); let src_repo = BackupRepository::new(Some(remote.userid), Some(remote.host), job_config.remote_store);
pull_store(&worker, &client, &src_repo, tgt_store, delete, username).await?; pull_store(&worker, &client, &src_repo, tgt_store, delete, userid).await?;
Ok(()) Ok(())
} }

View File

@ -97,7 +97,9 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
let most_used = index.find_most_used_chunks(8); let most_used = index.find_most_used_chunks(8);
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, most_used); let file_info = manifest.lookup_file_info(&CATALOG_NAME)?;
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, file_info.chunk_crypt_mode(), most_used);
let mut reader = BufferedDynamicReader::new(index, chunk_reader); let mut reader = BufferedDynamicReader::new(index, chunk_reader);
@ -200,7 +202,9 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
let index = client.download_dynamic_index(&manifest, &server_archive_name).await?; let index = client.download_dynamic_index(&manifest, &server_archive_name).await?;
let most_used = index.find_most_used_chunks(8); let most_used = index.find_most_used_chunks(8);
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config.clone(), most_used);
let file_info = manifest.lookup_file_info(&server_archive_name)?;
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config.clone(), file_info.chunk_crypt_mode(), most_used);
let reader = BufferedDynamicReader::new(index, chunk_reader); let reader = BufferedDynamicReader::new(index, chunk_reader);
let archive_size = reader.archive_size(); let archive_size = reader.archive_size();
let reader: proxmox_backup::pxar::fuse::Reader = let reader: proxmox_backup::pxar::fuse::Reader =
@ -216,7 +220,9 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
manifest.verify_file(CATALOG_NAME, &csum, size)?; manifest.verify_file(CATALOG_NAME, &csum, size)?;
let most_used = index.find_most_used_chunks(8); let most_used = index.find_most_used_chunks(8);
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, most_used);
let file_info = manifest.lookup_file_info(&CATALOG_NAME)?;
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, file_info.chunk_crypt_mode(), most_used);
let mut reader = BufferedDynamicReader::new(index, chunk_reader); let mut reader = BufferedDynamicReader::new(index, chunk_reader);
let mut catalogfile = std::fs::OpenOptions::new() let mut catalogfile = std::fs::OpenOptions::new()
.write(true) .write(true)

View File

@ -141,10 +141,12 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
let (manifest, _) = client.download_manifest().await?; let (manifest, _) = client.download_manifest().await?;
let file_info = manifest.lookup_file_info(&archive_name)?;
if server_archive_name.ends_with(".didx") { if server_archive_name.ends_with(".didx") {
let index = client.download_dynamic_index(&manifest, &server_archive_name).await?; let index = client.download_dynamic_index(&manifest, &server_archive_name).await?;
let most_used = index.find_most_used_chunks(8); let most_used = index.find_most_used_chunks(8);
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, most_used); let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, file_info.chunk_crypt_mode(), most_used);
let reader = BufferedDynamicReader::new(index, chunk_reader); let reader = BufferedDynamicReader::new(index, chunk_reader);
let archive_size = reader.archive_size(); let archive_size = reader.archive_size();
let reader: proxmox_backup::pxar::fuse::Reader = let reader: proxmox_backup::pxar::fuse::Reader =

View File

@ -3,8 +3,10 @@ use std::ffi::OsStr;
use std::fs::OpenOptions; use std::fs::OpenOptions;
use std::os::unix::fs::OpenOptionsExt; use std::os::unix::fs::OpenOptionsExt;
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use anyhow::{format_err, Error}; use anyhow::{bail, format_err, Error};
use futures::future::FutureExt; use futures::future::FutureExt;
use futures::select; use futures::select;
use tokio::signal::unix::{signal, SignalKind}; use tokio::signal::unix::{signal, SignalKind};
@ -24,11 +26,14 @@ fn extract_archive_from_reader<R: std::io::Read>(
allow_existing_dirs: bool, allow_existing_dirs: bool,
verbose: bool, verbose: bool,
match_list: &[MatchEntry], match_list: &[MatchEntry],
extract_match_default: bool,
on_error: Option<Box<dyn FnMut(Error) -> Result<(), Error> + Send>>,
) -> Result<(), Error> { ) -> Result<(), Error> {
proxmox_backup::pxar::extract_archive( proxmox_backup::pxar::extract_archive(
pxar::decoder::Decoder::from_std(reader)?, pxar::decoder::Decoder::from_std(reader)?,
Path::new(target), Path::new(target),
&match_list, &match_list,
extract_match_default,
feature_flags, feature_flags,
allow_existing_dirs, allow_existing_dirs,
|path| { |path| {
@ -36,6 +41,7 @@ fn extract_archive_from_reader<R: std::io::Read>(
println!("{:?}", path); println!("{:?}", path);
} }
}, },
on_error,
) )
} }
@ -102,6 +108,11 @@ fn extract_archive_from_reader<R: std::io::Read>(
optional: true, optional: true,
default: false, default: false,
}, },
strict: {
description: "Stop on errors. Otherwise most errors will simply warn.",
optional: true,
default: false,
},
}, },
}, },
)] )]
@ -119,6 +130,7 @@ fn extract_archive(
no_device_nodes: bool, no_device_nodes: bool,
no_fifos: bool, no_fifos: bool,
no_sockets: bool, no_sockets: bool,
strict: bool,
) -> Result<(), Error> { ) -> Result<(), Error> {
let mut feature_flags = Flags::DEFAULT; let mut feature_flags = Flags::DEFAULT;
if no_xattrs { if no_xattrs {
@ -162,6 +174,22 @@ fn extract_archive(
); );
} }
let extract_match_default = match_list.is_empty();
let was_ok = Arc::new(AtomicBool::new(true));
let on_error = if strict {
// by default errors are propagated up
None
} else {
let was_ok = Arc::clone(&was_ok);
// otherwise we want to log them but not act on them
Some(Box::new(move |err| {
was_ok.store(false, Ordering::Release);
eprintln!("error: {}", err);
Ok(())
}) as Box<dyn FnMut(Error) -> Result<(), Error> + Send>)
};
if archive == "-" { if archive == "-" {
let stdin = std::io::stdin(); let stdin = std::io::stdin();
let mut reader = stdin.lock(); let mut reader = stdin.lock();
@ -172,6 +200,8 @@ fn extract_archive(
allow_existing_dirs, allow_existing_dirs,
verbose, verbose,
&match_list, &match_list,
extract_match_default,
on_error,
)?; )?;
} else { } else {
if verbose { if verbose {
@ -186,9 +216,15 @@ fn extract_archive(
allow_existing_dirs, allow_existing_dirs,
verbose, verbose,
&match_list, &match_list,
extract_match_default,
on_error,
)?; )?;
} }
if !was_ok.load(Ordering::Acquire) {
bail!("there were errors");
}
Ok(()) Ok(())
} }

View File

@ -129,9 +129,9 @@ impl BackupReader {
let mut raw_data = Vec::with_capacity(64 * 1024); let mut raw_data = Vec::with_capacity(64 * 1024);
self.download(MANIFEST_BLOB_NAME, &mut raw_data).await?; self.download(MANIFEST_BLOB_NAME, &mut raw_data).await?;
let blob = DataBlob::from_raw(raw_data)?; let blob = DataBlob::load_from_reader(&mut &raw_data[..])?;
blob.verify_crc()?; // no expected digest available
let data = blob.decode(None)?; let data = blob.decode(None, None)?;
let manifest = BackupManifest::from_data(&data[..], self.crypt_config.as_ref().map(Arc::as_ref))?; let manifest = BackupManifest::from_data(&data[..], self.crypt_config.as_ref().map(Arc::as_ref))?;

View File

@ -1,3 +1,4 @@
use std::convert::TryFrom;
use std::fmt; use std::fmt;
use anyhow::{format_err, Error}; use anyhow::{format_err, Error};
@ -15,7 +16,7 @@ pub const BACKUP_REPO_URL: ApiStringFormat = ApiStringFormat::Pattern(&BACKUP_RE
#[derive(Debug)] #[derive(Debug)]
pub struct BackupRepository { pub struct BackupRepository {
/// The user name used for Authentication /// The user name used for Authentication
user: Option<String>, user: Option<Userid>,
/// The host name or IP address /// The host name or IP address
host: Option<String>, host: Option<String>,
/// The name of the datastore /// The name of the datastore
@ -24,15 +25,15 @@ pub struct BackupRepository {
impl BackupRepository { impl BackupRepository {
pub fn new(user: Option<String>, host: Option<String>, store: String) -> Self { pub fn new(user: Option<Userid>, host: Option<String>, store: String) -> Self {
Self { user, host, store } Self { user, host, store }
} }
pub fn user(&self) -> &str { pub fn user(&self) -> &Userid {
if let Some(ref user) = self.user { if let Some(ref user) = self.user {
return user; return &user;
} }
"root@pam" Userid::root_userid()
} }
pub fn host(&self) -> &str { pub fn host(&self) -> &str {
@ -73,7 +74,7 @@ impl std::str::FromStr for BackupRepository {
.ok_or_else(|| format_err!("unable to parse repository url '{}'", url))?; .ok_or_else(|| format_err!("unable to parse repository url '{}'", url))?;
Ok(Self { Ok(Self {
user: cap.get(1).map(|m| m.as_str().to_owned()), user: cap.get(1).map(|m| Userid::try_from(m.as_str().to_owned())).transpose()?,
host: cap.get(2).map(|m| m.as_str().to_owned()), host: cap.get(2).map(|m| m.as_str().to_owned()),
store: cap[3].to_owned(), store: cap[3].to_owned(),
}) })

View File

@ -264,9 +264,9 @@ impl BackupWriter {
crate::tools::format::strip_server_file_expenstion(archive_name.clone()) crate::tools::format::strip_server_file_expenstion(archive_name.clone())
}; };
if archive_name != CATALOG_NAME { if archive_name != CATALOG_NAME {
let speed: HumanByte = (uploaded / (duration.as_secs() as usize)).into(); let speed: HumanByte = ((uploaded * 1_000_000) / (duration.as_micros() as usize)).into();
let uploaded: HumanByte = uploaded.into(); let uploaded: HumanByte = uploaded.into();
println!("{}: had to upload {} from {} in {}s, avgerage speed {}/s).", archive, uploaded, vsize_h, duration.as_secs(), speed); println!("{}: had to upload {} of {} in {:.2}s, average speed {}/s).", archive, uploaded, vsize_h, duration.as_secs_f64(), speed);
} else { } else {
println!("Uploaded backup catalog ({})", vsize_h); println!("Uploaded backup catalog ({})", vsize_h);
} }
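Computing the speed from microseconds instead of whole seconds avoids truncation (and a division by zero) for uploads that finish in under a second. A small arithmetic illustration with made-up numbers:

let uploaded: usize = 8 * 1024 * 1024;        // 8 MiB actually uploaded
let micros: usize = 400_000;                  // upload took 0.4 s
let speed = (uploaded * 1_000_000) / micros;  // bytes per second
assert_eq!(speed, 20_971_520);                // ~20 MiB/s; seconds-based math would divide by 0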
@ -479,9 +479,9 @@ impl BackupWriter {
let param = json!({ "archive-name": MANIFEST_BLOB_NAME }); let param = json!({ "archive-name": MANIFEST_BLOB_NAME });
self.h2.download("previous", Some(param), &mut raw_data).await?; self.h2.download("previous", Some(param), &mut raw_data).await?;
let blob = DataBlob::from_raw(raw_data)?; let blob = DataBlob::load_from_reader(&mut &raw_data[..])?;
blob.verify_crc()?; // no expected digest available
let data = blob.decode(self.crypt_config.as_ref().map(Arc::as_ref))?; let data = blob.decode(self.crypt_config.as_ref().map(Arc::as_ref), None)?;
let manifest = BackupManifest::from_data(&data[..], self.crypt_config.as_ref().map(Arc::as_ref))?; let manifest = BackupManifest::from_data(&data[..], self.crypt_config.as_ref().map(Arc::as_ref))?;

View File

@@ -24,6 +24,7 @@ use proxmox::{
 };
 use super::pipe_to_stream::PipeToSendStream;
+use crate::api2::types::Userid;
 use crate::tools::async_io::EitherStream;
 use crate::tools::{self, BroadcastFuture, DEFAULT_ENCODE_SET};
@@ -104,7 +105,7 @@ pub struct HttpClient {
 }
 /// Delete stored ticket data (logout)
-pub fn delete_ticket_info(prefix: &str, server: &str, username: &str) -> Result<(), Error> {
+pub fn delete_ticket_info(prefix: &str, server: &str, username: &Userid) -> Result<(), Error> {
     let base = BaseDirectories::with_prefix(prefix)?;
@@ -116,7 +117,7 @@ pub fn delete_ticket_info(prefix: &str, server: &str, username: &str) -> Result<
     let mut data = file_get_json(&path, Some(json!({})))?;
     if let Some(map) = data[server].as_object_mut() {
-        map.remove(username);
+        map.remove(username.as_str());
     }
     replace_file(path, data.to_string().as_bytes(), CreateOptions::new().perm(mode))?;
@@ -223,7 +224,7 @@ fn store_ticket_info(prefix: &str, server: &str, username: &str, ticket: &str, t
     Ok(())
 }
-fn load_ticket_info(prefix: &str, server: &str, username: &str) -> Option<(String, String)> {
+fn load_ticket_info(prefix: &str, server: &str, userid: &Userid) -> Option<(String, String)> {
     let base = BaseDirectories::with_prefix(prefix).ok()?;
     // usually /run/user/<uid>/...
@@ -231,7 +232,7 @@ fn load_ticket_info(prefix: &str, server: &str, username: &str) -> Option<(Strin
     let data = file_get_json(&path, None).ok()?;
     let now = Utc::now().timestamp();
     let ticket_lifetime = tools::ticket::TICKET_LIFETIME - 60;
-    let uinfo = data[server][username].as_object()?;
+    let uinfo = data[server][userid.as_str()].as_object()?;
     let timestamp = uinfo["timestamp"].as_i64()?;
     let age = now - timestamp;
@@ -245,8 +246,11 @@ fn load_ticket_info(prefix: &str, server: &str, username: &str) -> Option<(Strin
 }
 impl HttpClient {
-    pub fn new(server: &str, username: &str, mut options: HttpClientOptions) -> Result<Self, Error> {
+    pub fn new(
+        server: &str,
+        userid: &Userid,
+        mut options: HttpClientOptions,
+    ) -> Result<Self, Error> {
         let verified_fingerprint = Arc::new(Mutex::new(None));
@@ -306,20 +310,20 @@ impl HttpClient {
         } else {
             let mut ticket_info = None;
             if use_ticket_cache {
-                ticket_info = load_ticket_info(options.prefix.as_ref().unwrap(), server, username);
+                ticket_info = load_ticket_info(options.prefix.as_ref().unwrap(), server, userid);
             }
             if let Some((ticket, _token)) = ticket_info {
                 ticket
             } else {
-                Self::get_password(&username, options.interactive)?
+                Self::get_password(userid, options.interactive)?
             }
         };
         let login_future = Self::credentials(
             client.clone(),
             server.to_owned(),
-            username.to_owned(),
-            password,
+            userid.to_owned(),
+            password.to_owned(),
         ).map_ok({
             let server = server.to_string();
             let prefix = options.prefix.clone();
@@ -355,7 +359,7 @@ impl HttpClient {
         (*self.fingerprint.lock().unwrap()).clone()
     }
-    fn get_password(username: &str, interactive: bool) -> Result<String, Error> {
+    fn get_password(username: &Userid, interactive: bool) -> Result<String, Error> {
         // If we're on a TTY, query the user for a password
         if interactive && tty::stdin_isatty() {
             let msg = format!("Password for \"{}\": ", username);
@@ -579,7 +583,7 @@ impl HttpClient {
     async fn credentials(
         client: Client<HttpsConnector>,
         server: String,
-        username: String,
+        username: Userid,
         password: String,
     ) -> Result<AuthInfo, Error> {
         let data = json!({ "username": username, "password": password });
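Several of the hunks above pass the typed userid as username.as_str() when touching the cached-ticket JSON, because serde_json objects are keyed by plain strings. A small illustration of that detail, using an ordinary String in place of the real Userid type:

use serde_json::json;

fn main() {
    let mut data = json!({ "server.example.org": { "user1@pbs": { "timestamp": 0 } } });
    let userid = String::from("user1@pbs"); // stands in for Userid::as_str()

    // JSON objects are string-keyed, so the typed id is passed as &str.
    if let Some(map) = data["server.example.org"].as_object_mut() {
        map.remove(userid.as_str());
    }
    assert!(data["server.example.org"].get("user1@pbs").is_none());
}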


@@ -27,16 +27,18 @@ async fn pull_index_chunks<I: IndexFile>(
     for pos in 0..index.index_count() {
-        let digest = index.index_digest(pos).unwrap();
-        let chunk_exists = target.cond_touch_chunk(digest, false)?;
+        let info = index.chunk_info(pos).unwrap();
+        let chunk_exists = target.cond_touch_chunk(&info.digest, false)?;
         if chunk_exists {
             //worker.log(format!("chunk {} exists {}", pos, proxmox::tools::digest_to_hex(digest)));
             continue;
         }
         //worker.log(format!("sync {} chunk {}", pos, proxmox::tools::digest_to_hex(digest)));
-        let chunk = chunk_reader.read_raw_chunk(&digest).await?;
-        target.insert_chunk(&chunk, &digest)?;
+        let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
+        chunk.verify_unencrypted(info.size() as usize, &info.digest)?;
+        target.insert_chunk(&chunk, &info.digest)?;
     }
     Ok(())
@@ -60,15 +62,32 @@ async fn download_manifest(
     Ok(tmp_manifest_file)
 }
+fn verify_archive(
+    info: &FileInfo,
+    csum: &[u8; 32],
+    size: u64,
+) -> Result<(), Error> {
+    if size != info.size {
+        bail!("wrong size for file '{}' ({} != {})", info.filename, info.size, size);
+    }
+    if csum != &info.csum {
+        bail!("wrong checksum for file '{}'", info.filename);
+    }
+    Ok(())
+}
 async fn pull_single_archive(
     worker: &WorkerTask,
     reader: &BackupReader,
     chunk_reader: &mut RemoteChunkReader,
     tgt_store: Arc<DataStore>,
     snapshot: &BackupDir,
-    archive_name: &str,
+    archive_info: &FileInfo,
 ) -> Result<(), Error> {
+    let archive_name = &archive_info.filename;
     let mut path = tgt_store.base_path();
     path.push(snapshot.relative_path());
     path.push(archive_name);
@@ -89,16 +108,23 @@ async fn pull_single_archive(
         ArchiveType::DynamicIndex => {
             let index = DynamicIndexReader::new(tmpfile)
                 .map_err(|err| format_err!("unable to read dynamic index {:?} - {}", tmp_path, err))?;
+            let (csum, size) = index.compute_csum();
+            verify_archive(archive_info, &csum, size)?;
             pull_index_chunks(worker, chunk_reader, tgt_store.clone(), index).await?;
         }
         ArchiveType::FixedIndex => {
             let index = FixedIndexReader::new(tmpfile)
                 .map_err(|err| format_err!("unable to read fixed index '{:?}' - {}", tmp_path, err))?;
+            let (csum, size) = index.compute_csum();
+            verify_archive(archive_info, &csum, size)?;
             pull_index_chunks(worker, chunk_reader, tgt_store.clone(), index).await?;
         }
-        ArchiveType::Blob => { /* nothing to do */ }
+        ArchiveType::Blob => {
+            let (csum, size) = compute_file_csum(&mut tmpfile)?;
+            verify_archive(archive_info, &csum, size)?;
+        }
     }
     if let Err(err) = std::fs::rename(&tmp_path, &path) {
         bail!("Atomic rename file {:?} failed - {}", path, err);
@@ -174,16 +200,14 @@ async fn pull_snapshot(
         };
     },
     };
-    let tmp_manifest_blob = DataBlob::load(&mut tmp_manifest_file)?;
-    tmp_manifest_blob.verify_crc()?;
+    let tmp_manifest_blob = DataBlob::load_from_reader(&mut tmp_manifest_file)?;
     if manifest_name.exists() {
         let manifest_blob = proxmox::try_block!({
             let mut manifest_file = std::fs::File::open(&manifest_name)
                 .map_err(|err| format_err!("unable to open local manifest {:?} - {}", manifest_name, err))?;
-            let manifest_blob = DataBlob::load(&mut manifest_file)?;
-            manifest_blob.verify_crc()?;
+            let manifest_blob = DataBlob::load_from_reader(&mut manifest_file)?;
             Ok(manifest_blob)
         }).map_err(|err: Error| {
             format_err!("unable to read local manifest {:?} - {}", manifest_name, err)
@@ -200,8 +224,6 @@ async fn pull_snapshot(
     let manifest = BackupManifest::try_from(tmp_manifest_blob)?;
-    let mut chunk_reader = RemoteChunkReader::new(reader.clone(), None, HashMap::new());
     for item in manifest.files() {
         let mut path = tgt_store.base_path();
         path.push(snapshot.relative_path());
@@ -242,13 +264,15 @@ async fn pull_snapshot(
             }
         }
+        let mut chunk_reader = RemoteChunkReader::new(reader.clone(), None, item.chunk_crypt_mode(), HashMap::new());
         pull_single_archive(
             worker,
             &reader,
             &mut chunk_reader,
             tgt_store.clone(),
             snapshot,
-            &item.filename,
+            &item,
         ).await?;
     }
@@ -273,13 +297,13 @@ pub async fn pull_snapshot_from(
     snapshot: &BackupDir,
 ) -> Result<(), Error> {
-    let (_path, is_new) = tgt_store.create_backup_dir(&snapshot)?;
+    let (_path, is_new, _snap_lock) = tgt_store.create_locked_backup_dir(&snapshot)?;
     if is_new {
         worker.log(format!("sync snapshot {:?}", snapshot.relative_path()));
         if let Err(err) = pull_snapshot(worker, reader, tgt_store.clone(), &snapshot).await {
-            if let Err(cleanup_err) = tgt_store.remove_backup_dir(&snapshot) {
+            if let Err(cleanup_err) = tgt_store.remove_backup_dir(&snapshot, true) {
                 worker.log(format!("cleanup error - {}", cleanup_err));
             }
             return Err(err);
@@ -364,7 +388,7 @@ pub async fn pull_group(
         let backup_time = info.backup_dir.backup_time();
         if remote_snapshots.contains(&backup_time) { continue; }
         worker.log(format!("delete vanished snapshot {:?}", info.backup_dir.relative_path()));
-        tgt_store.remove_backup_dir(&info.backup_dir)?;
+        tgt_store.remove_backup_dir(&info.backup_dir, false)?;
     }
 }
@@ -377,7 +401,7 @@ pub async fn pull_store(
     src_repo: &BackupRepository,
     tgt_store: Arc<DataStore>,
     delete: bool,
-    username: String,
+    userid: Userid,
 ) -> Result<(), Error> {
     // explicit create shared lock to prevent GC on newly created chunks
@@ -408,11 +432,11 @@ pub async fn pull_store(
     for item in list {
         let group = BackupGroup::new(&item.backup_type, &item.backup_id);
-        let owner = tgt_store.create_backup_group(&group, &username)?;
+        let (owner, _lock_guard) = tgt_store.create_locked_backup_group(&group, &userid)?;
         // permission check
-        if owner != username { // only the owner is allowed to create additional snapshots
+        if userid != owner { // only the owner is allowed to create additional snapshots
             worker.log(format!("sync group {}/{} failed - owner check failed ({} != {})",
-                item.backup_type, item.backup_id, username, owner));
+                item.backup_type, item.backup_id, userid, owner));
             errors = true;
             continue; // do not stop here, instead continue
         }
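The new verify_archive helper above rejects a pulled archive whose size or checksum disagrees with the manifest before it is renamed into place. A reduced, self-contained sketch of the same check (FileInfo here is a minimal stand-in for the manifest entry type):

// Minimal stand-in for the manifest's per-file entry.
struct FileInfo {
    filename: String,
    size: u64,
    csum: [u8; 32],
}

// Same shape as the helper above: compare what was actually downloaded
// against the size/checksum recorded in the manifest.
fn verify_archive(info: &FileInfo, csum: &[u8; 32], size: u64) -> Result<(), String> {
    if size != info.size {
        return Err(format!("wrong size for file '{}' ({} != {})", info.filename, info.size, size));
    }
    if csum != &info.csum {
        return Err(format!("wrong checksum for file '{}'", info.filename));
    }
    Ok(())
}

fn main() {
    let info = FileInfo { filename: "root.pxar.didx".into(), size: 4, csum: [0u8; 32] };
    assert!(verify_archive(&info, &[0u8; 32], 4).is_ok());
    assert!(verify_archive(&info, &[0u8; 32], 5).is_err());
}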


@@ -3,10 +3,10 @@ use std::collections::HashMap;
 use std::pin::Pin;
 use std::sync::{Arc, Mutex};
-use anyhow::Error;
+use anyhow::{bail, Error};
 use super::BackupReader;
-use crate::backup::{AsyncReadChunk, CryptConfig, DataBlob, ReadChunk};
+use crate::backup::{AsyncReadChunk, CryptConfig, CryptMode, DataBlob, ReadChunk};
 use crate::tools::runtime::block_on;
 /// Read chunks from remote host using ``BackupReader``
@@ -14,6 +14,7 @@ use crate::tools::runtime::block_on;
 pub struct RemoteChunkReader {
     client: Arc<BackupReader>,
     crypt_config: Option<Arc<CryptConfig>>,
+    crypt_mode: CryptMode,
     cache_hint: HashMap<[u8; 32], usize>,
     cache: Arc<Mutex<HashMap<[u8; 32], Vec<u8>>>>,
 }
@@ -25,16 +26,20 @@ impl RemoteChunkReader {
     pub fn new(
         client: Arc<BackupReader>,
         crypt_config: Option<Arc<CryptConfig>>,
+        crypt_mode: CryptMode,
         cache_hint: HashMap<[u8; 32], usize>,
     ) -> Self {
         Self {
             client,
             crypt_config,
+            crypt_mode,
             cache_hint,
             cache: Arc::new(Mutex::new(HashMap::new())),
         }
     }
+    /// Downloads raw chunk. This only verifies the (untrusted) CRC32, use
+    /// DataBlob::verify_unencrypted or DataBlob::decode before storing/processing further.
     pub async fn read_raw_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
         let mut chunk_data = Vec::with_capacity(4 * 1024 * 1024);
@@ -42,10 +47,22 @@ impl RemoteChunkReader {
             .download_chunk(&digest, &mut chunk_data)
             .await?;
-        let chunk = DataBlob::from_raw(chunk_data)?;
-        chunk.verify_crc()?;
-        Ok(chunk)
+        let chunk = DataBlob::load_from_reader(&mut &chunk_data[..])?;
+        match self.crypt_mode {
+            CryptMode::Encrypt => {
+                match chunk.crypt_mode()? {
+                    CryptMode::Encrypt => Ok(chunk),
+                    CryptMode::SignOnly | CryptMode::None => bail!("Index and chunk CryptMode don't match."),
+                }
+            },
+            CryptMode::SignOnly | CryptMode::None => {
+                match chunk.crypt_mode()? {
+                    CryptMode::Encrypt => bail!("Index and chunk CryptMode don't match."),
+                    CryptMode::SignOnly | CryptMode::None => Ok(chunk),
+                }
+            },
+        }
     }
 }
@@ -61,9 +78,7 @@ impl ReadChunk for RemoteChunkReader {
         let chunk = ReadChunk::read_raw_chunk(self, digest)?;
-        let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref))?;
-        // fixme: verify digest?
+        let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref), Some(digest))?;
         let use_cache = self.cache_hint.contains_key(digest);
         if use_cache {
@@ -93,9 +108,7 @@ impl AsyncReadChunk for RemoteChunkReader {
         let chunk = Self::read_raw_chunk(self, digest).await?;
-        let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref))?;
-        // fixme: verify digest?
+        let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref), Some(digest))?;
         let use_cache = self.cache_hint.contains_key(digest);
         if use_cache {
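The match added in read_raw_chunk above enforces that an encrypted index only ever references encrypted chunks and a plain or signed index only plain ones. The core rule, boiled down to a self-contained sketch (CryptMode here is a local stand-in enum):

#[derive(Clone, Copy, Debug, PartialEq)]
enum CryptMode { None, SignOnly, Encrypt }

// Index and chunk must agree on whether the data is encrypted; a mismatch in
// either direction means the chunk does not belong to this index.
fn crypt_modes_match(index: CryptMode, chunk: CryptMode) -> bool {
    match (index, chunk) {
        (CryptMode::Encrypt, CryptMode::Encrypt) => true,
        (CryptMode::Encrypt, _) | (_, CryptMode::Encrypt) => false,
        _ => true,
    }
}

fn main() {
    assert!(crypt_modes_match(CryptMode::Encrypt, CryptMode::Encrypt));
    assert!(!crypt_modes_match(CryptMode::None, CryptMode::Encrypt));
    assert!(!crypt_modes_match(CryptMode::Encrypt, CryptMode::SignOnly));
    assert!(crypt_modes_match(CryptMode::SignOnly, CryptMode::None));
}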


@@ -15,13 +15,13 @@ use proxmox::try_block;
 use crate::buildcfg;
-pub mod datastore;
-pub mod remote;
-pub mod user;
 pub mod acl;
 pub mod cached_user_info;
+pub mod datastore;
 pub mod network;
+pub mod remote;
 pub mod sync;
+pub mod user;
 /// Check configuration directory permissions
 ///


@@ -15,6 +15,8 @@ use proxmox::tools::{fs::replace_file, fs::CreateOptions};
 use proxmox::constnamemap;
 use proxmox::api::{api, schema::*};
+use crate::api2::types::Userid;
 // define Privilege bitfield
 constnamemap! {
@@ -224,7 +226,7 @@ pub struct AclTree {
 }
 pub struct AclTreeNode {
-    pub users: HashMap<String, HashMap<String, bool>>,
+    pub users: HashMap<Userid, HashMap<String, bool>>,
     pub groups: HashMap<String, HashMap<String, bool>>,
     pub children: BTreeMap<String, AclTreeNode>,
 }
@@ -239,7 +241,7 @@ impl AclTreeNode {
         }
     }
-    pub fn extract_roles(&self, user: &str, all: bool) -> HashSet<String> {
+    pub fn extract_roles(&self, user: &Userid, all: bool) -> HashSet<String> {
         let user_roles = self.extract_user_roles(user, all);
         if !user_roles.is_empty() {
             // user privs always override group privs
@@ -249,7 +251,7 @@ impl AclTreeNode {
         self.extract_group_roles(user, all)
     }
-    pub fn extract_user_roles(&self, user: &str, all: bool) -> HashSet<String> {
+    pub fn extract_user_roles(&self, user: &Userid, all: bool) -> HashSet<String> {
         let mut set = HashSet::new();
@@ -273,7 +275,7 @@ impl AclTreeNode {
         set
     }
-    pub fn extract_group_roles(&self, _user: &str, all: bool) -> HashSet<String> {
+    pub fn extract_group_roles(&self, _user: &Userid, all: bool) -> HashSet<String> {
         let mut set = HashSet::new();
@@ -305,7 +307,7 @@ impl AclTreeNode {
         roles.remove(role);
     }
-    pub fn delete_user_role(&mut self, userid: &str, role: &str) {
+    pub fn delete_user_role(&mut self, userid: &Userid, role: &str) {
         let roles = match self.users.get_mut(userid) {
             Some(r) => r,
             None => return,
@@ -324,7 +326,7 @@ impl AclTreeNode {
         }
     }
-    pub fn insert_user_role(&mut self, user: String, role: String, propagate: bool) {
+    pub fn insert_user_role(&mut self, user: Userid, role: String, propagate: bool) {
         let map = self.users.entry(user).or_insert_with(|| HashMap::new());
         if role == ROLE_NAME_NO_ACCESS {
             map.clear();
@@ -376,7 +378,7 @@ impl AclTree {
         node.delete_group_role(group, role);
     }
-    pub fn delete_user_role(&mut self, path: &str, userid: &str, role: &str) {
+    pub fn delete_user_role(&mut self, path: &str, userid: &Userid, role: &str) {
         let path = split_acl_path(path);
         let node = match self.get_node(&path) {
             Some(n) => n,
@@ -391,10 +393,10 @@ impl AclTree {
         node.insert_group_role(group.to_string(), role.to_string(), propagate);
     }
-    pub fn insert_user_role(&mut self, path: &str, user: &str, role: &str, propagate: bool) {
+    pub fn insert_user_role(&mut self, path: &str, user: &Userid, role: &str, propagate: bool) {
         let path = split_acl_path(path);
         let node = self.get_or_insert_node(&path);
-        node.insert_user_role(user.to_string(), role.to_string(), propagate);
+        node.insert_user_role(user.to_owned(), role.to_string(), propagate);
     }
     fn write_node_config(
@@ -521,7 +523,7 @@ impl AclTree {
         let group = &user_or_group[1..];
         node.insert_group_role(group.to_string(), role.to_string(), propagate);
     } else {
-        node.insert_user_role(user_or_group.to_string(), role.to_string(), propagate);
+        node.insert_user_role(user_or_group.parse()?, role.to_string(), propagate);
     }
 }
 }
 }
@@ -569,7 +571,7 @@ impl AclTree {
     Ok(tree)
 }
-    pub fn roles(&self, userid: &str, path: &[&str]) -> HashSet<String> {
+    pub fn roles(&self, userid: &Userid, path: &[&str]) -> HashSet<String> {
         let mut node = &self.root;
         let mut role_set = node.extract_roles(userid, path.is_empty());
@@ -665,13 +667,14 @@ pub fn save_config(acl: &AclTree) -> Result<(), Error> {
 #[cfg(test)]
 mod test {
     use anyhow::{Error};
     use super::AclTree;
+    use crate::api2::types::Userid;
     fn check_roles(
         tree: &AclTree,
-        user: &str,
+        user: &Userid,
         path: &str,
         expected_roles: &str,
     ) {
@@ -686,22 +689,23 @@ mod test {
     }
     #[test]
-    fn test_acl_line_compression() -> Result<(), Error> {
-        let tree = AclTree::from_raw(r###"
-acl:0:/store/store2:user1:Admin
-acl:0:/store/store2:user2:Admin
-acl:0:/store/store2:user1:DatastoreBackup
-acl:0:/store/store2:user2:DatastoreBackup
-"###)?;
+    fn test_acl_line_compression() {
+        let tree = AclTree::from_raw(
+            "\
acl:0:/store/store2:user1@pbs:Admin\n\
acl:0:/store/store2:user2@pbs:Admin\n\
acl:0:/store/store2:user1@pbs:DatastoreBackup\n\
acl:0:/store/store2:user2@pbs:DatastoreBackup\n\
+            ",
+        )
+        .expect("failed to parse acl tree");
         let mut raw: Vec<u8> = Vec::new();
-        tree.write_config(&mut raw)?;
-        let raw = std::str::from_utf8(&raw)?;
+        tree.write_config(&mut raw).expect("failed to write acl tree");
+        let raw = std::str::from_utf8(&raw).expect("acl tree is not valid utf8");
-        assert_eq!(raw, "acl:0:/store/store2:user1,user2:Admin,DatastoreBackup\n");
-        Ok(())
+        assert_eq!(raw, "acl:0:/store/store2:user1@pbs,user2@pbs:Admin,DatastoreBackup\n");
     }
     #[test]
@@ -712,15 +716,17 @@ acl:1:/storage:user1@pbs:Admin
 acl:1:/storage/store1:user1@pbs:DatastoreBackup
 acl:1:/storage/store2:user2@pbs:DatastoreBackup
 "###)?;
-        check_roles(&tree, "user1@pbs", "/", "");
-        check_roles(&tree, "user1@pbs", "/storage", "Admin");
-        check_roles(&tree, "user1@pbs", "/storage/store1", "DatastoreBackup");
-        check_roles(&tree, "user1@pbs", "/storage/store2", "Admin");
+        let user1: Userid = "user1@pbs".parse()?;
+        check_roles(&tree, &user1, "/", "");
+        check_roles(&tree, &user1, "/storage", "Admin");
+        check_roles(&tree, &user1, "/storage/store1", "DatastoreBackup");
+        check_roles(&tree, &user1, "/storage/store2", "Admin");
-        check_roles(&tree, "user2@pbs", "/", "");
-        check_roles(&tree, "user2@pbs", "/storage", "");
-        check_roles(&tree, "user2@pbs", "/storage/store1", "");
-        check_roles(&tree, "user2@pbs", "/storage/store2", "DatastoreBackup");
+        let user2: Userid = "user2@pbs".parse()?;
+        check_roles(&tree, &user2, "/", "");
+        check_roles(&tree, &user2, "/storage", "");
+        check_roles(&tree, &user2, "/storage/store1", "");
+        check_roles(&tree, &user2, "/storage/store2", "DatastoreBackup");
         Ok(())
     }
@@ -733,22 +739,23 @@ acl:1:/:user1@pbs:Admin
 acl:1:/storage:user1@pbs:NoAccess
 acl:1:/storage/store1:user1@pbs:DatastoreBackup
 "###)?;
-        check_roles(&tree, "user1@pbs", "/", "Admin");
-        check_roles(&tree, "user1@pbs", "/storage", "NoAccess");
-        check_roles(&tree, "user1@pbs", "/storage/store1", "DatastoreBackup");
-        check_roles(&tree, "user1@pbs", "/storage/store2", "NoAccess");
-        check_roles(&tree, "user1@pbs", "/system", "Admin");
+        let user1: Userid = "user1@pbs".parse()?;
+        check_roles(&tree, &user1, "/", "Admin");
+        check_roles(&tree, &user1, "/storage", "NoAccess");
+        check_roles(&tree, &user1, "/storage/store1", "DatastoreBackup");
+        check_roles(&tree, &user1, "/storage/store2", "NoAccess");
+        check_roles(&tree, &user1, "/system", "Admin");
         let tree = AclTree::from_raw(r###"
 acl:1:/:user1@pbs:Admin
 acl:0:/storage:user1@pbs:NoAccess
 acl:1:/storage/store1:user1@pbs:DatastoreBackup
 "###)?;
-        check_roles(&tree, "user1@pbs", "/", "Admin");
-        check_roles(&tree, "user1@pbs", "/storage", "NoAccess");
-        check_roles(&tree, "user1@pbs", "/storage/store1", "DatastoreBackup");
-        check_roles(&tree, "user1@pbs", "/storage/store2", "Admin");
-        check_roles(&tree, "user1@pbs", "/system", "Admin");
+        check_roles(&tree, &user1, "/", "Admin");
+        check_roles(&tree, &user1, "/storage", "NoAccess");
+        check_roles(&tree, &user1, "/storage/store1", "DatastoreBackup");
+        check_roles(&tree, &user1, "/storage/store2", "Admin");
+        check_roles(&tree, &user1, "/system", "Admin");
         Ok(())
     }
@@ -758,13 +765,15 @@ acl:1:/storage/store1:user1@pbs:DatastoreBackup
         let mut tree = AclTree::new();
-        tree.insert_user_role("/", "user1@pbs", "Admin", true);
-        tree.insert_user_role("/", "user1@pbs", "Audit", true);
-        check_roles(&tree, "user1@pbs", "/", "Admin,Audit");
-        tree.insert_user_role("/", "user1@pbs", "NoAccess", true);
-        check_roles(&tree, "user1@pbs", "/", "NoAccess");
+        let user1: Userid = "user1@pbs".parse()?;
+        tree.insert_user_role("/", &user1, "Admin", true);
+        tree.insert_user_role("/", &user1, "Audit", true);
+        check_roles(&tree, &user1, "/", "Admin,Audit");
+        tree.insert_user_role("/", &user1, "NoAccess", true);
+        check_roles(&tree, &user1, "/", "NoAccess");
         let mut raw: Vec<u8> = Vec::new();
         tree.write_config(&mut raw)?;
@@ -780,20 +789,21 @@ acl:1:/storage/store1:user1@pbs:DatastoreBackup
         let mut tree = AclTree::new();
-        tree.insert_user_role("/storage", "user1@pbs", "NoAccess", true);
-        check_roles(&tree, "user1@pbs", "/storage", "NoAccess");
-        tree.insert_user_role("/storage", "user1@pbs", "Admin", true);
-        tree.insert_user_role("/storage", "user1@pbs", "Audit", true);
-        check_roles(&tree, "user1@pbs", "/storage", "Admin,Audit");
-        tree.insert_user_role("/storage", "user1@pbs", "NoAccess", true);
-        check_roles(&tree, "user1@pbs", "/storage", "NoAccess");
+        let user1: Userid = "user1@pbs".parse()?;
+        tree.insert_user_role("/storage", &user1, "NoAccess", true);
+        check_roles(&tree, &user1, "/storage", "NoAccess");
+        tree.insert_user_role("/storage", &user1, "Admin", true);
+        tree.insert_user_role("/storage", &user1, "Audit", true);
+        check_roles(&tree, &user1, "/storage", "Admin,Audit");
+        tree.insert_user_role("/storage", &user1, "NoAccess", true);
+        check_roles(&tree, &user1, "/storage", "NoAccess");
         Ok(())
     }
 }


@@ -10,6 +10,7 @@ use proxmox::api::UserInformation;
 use super::acl::{AclTree, ROLE_NAMES, ROLE_ADMIN};
 use super::user::User;
+use crate::api2::types::Userid;
 /// Cache User/Group/Acl configuration data for fast permission tests
 pub struct CachedUserInfo {
@@ -57,8 +58,8 @@ impl CachedUserInfo {
     }
     /// Test if a user account is enabled and not expired
-    pub fn is_active_user(&self, userid: &str) -> bool {
-        if let Ok(info) = self.user_cfg.lookup::<User>("user", &userid) {
+    pub fn is_active_user(&self, userid: &Userid) -> bool {
+        if let Ok(info) = self.user_cfg.lookup::<User>("user", userid.as_str()) {
             if !info.enable.unwrap_or(true) {
                 return false;
             }
@@ -77,12 +78,12 @@ impl CachedUserInfo {
     pub fn check_privs(
         &self,
-        userid: &str,
+        userid: &Userid,
         path: &[&str],
         required_privs: u64,
         partial: bool,
     ) -> Result<(), Error> {
-        let user_privs = self.lookup_privs(userid, path);
+        let user_privs = self.lookup_privs(&userid, path);
         let allowed = if partial {
             (user_privs & required_privs) != 0
         } else {
@@ -97,18 +98,20 @@ impl CachedUserInfo {
     }
 }
-impl UserInformation for CachedUserInfo {
-    fn is_superuser(&self, userid: &str) -> bool {
+impl CachedUserInfo {
+    pub fn is_superuser(&self, userid: &Userid) -> bool {
         userid == "root@pam"
     }
-    fn is_group_member(&self, _userid: &str, _group: &str) -> bool {
+    pub fn is_group_member(&self, _userid: &Userid, _group: &str) -> bool {
         false
     }
-    fn lookup_privs(&self, userid: &str, path: &[&str]) -> u64 {
-        if self.is_superuser(userid) { return ROLE_ADMIN; }
+    pub fn lookup_privs(&self, userid: &Userid, path: &[&str]) -> u64 {
+        if self.is_superuser(userid) {
+            return ROLE_ADMIN;
+        }
         let roles = self.acl_tree.roles(userid, path);
         let mut privs: u64 = 0;
@@ -120,3 +123,20 @@ impl UserInformation for CachedUserInfo {
         privs
     }
 }
+impl UserInformation for CachedUserInfo {
+    fn is_superuser(&self, userid: &str) -> bool {
+        userid == "root@pam"
+    }
+    fn is_group_member(&self, _userid: &str, _group: &str) -> bool {
+        false
+    }
+    fn lookup_privs(&self, userid: &str, path: &[&str]) -> u64 {
+        match userid.parse::<Userid>() {
+            Ok(userid) => Self::lookup_privs(self, &userid, path),
+            Err(_) => 0,
+        }
+    }
+}
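The UserInformation impl above keeps the &str-based trait interface but parses into the typed Userid and falls back to zero privileges when parsing fails. A stripped-down sketch of that delegation pattern with stand-in types:

use std::str::FromStr;

// Stand-in typed userid.
struct Userid(String);

impl FromStr for Userid {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        if s.contains('@') { Ok(Userid(s.to_owned())) } else { Err("invalid userid".into()) }
    }
}

struct UserInfo;

impl UserInfo {
    // Typed entry point used internally.
    fn lookup_privs(&self, userid: &Userid, _path: &[&str]) -> u64 {
        if userid.0 == "root@pam" { u64::MAX } else { 0 }
    }

    // String-based entry point (as the external trait requires): parse, then delegate.
    fn lookup_privs_str(&self, userid: &str, path: &[&str]) -> u64 {
        match userid.parse::<Userid>() {
            Ok(userid) => self.lookup_privs(&userid, path),
            Err(_) => 0,
        }
    }
}

fn main() {
    let info = UserInfo;
    assert_eq!(info.lookup_privs_str("root@pam", &[]), u64::MAX);
    assert_eq!(info.lookup_privs_str("not-a-userid", &[]), 0);
}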


@@ -40,7 +40,7 @@ pub const REMOTE_PASSWORD_SCHEMA: Schema = StringSchema::new("Password or auth t
         schema: DNS_NAME_OR_IP_SCHEMA,
     },
     userid: {
-        schema: PROXMOX_USER_ID_SCHEMA,
+        type: Userid,
     },
     password: {
         schema: REMOTE_PASSWORD_SCHEMA,
@@ -58,7 +58,7 @@ pub struct Remote {
     #[serde(skip_serializing_if="Option::is_none")]
     pub comment: Option<String>,
     pub host: String,
-    pub userid: String,
+    pub userid: Userid,
     #[serde(skip_serializing_if="String::is_empty")]
     #[serde(with = "proxmox::tools::serde::string_as_base64")]
     pub password: String,


@@ -56,7 +56,7 @@ pub const EMAIL_SCHEMA: Schema = StringSchema::new("E-Mail Address.")
 #[api(
     properties: {
         userid: {
-            schema: PROXMOX_USER_ID_SCHEMA,
+            type: Userid,
         },
         comment: {
             optional: true,
@@ -87,7 +87,7 @@ pub const EMAIL_SCHEMA: Schema = StringSchema::new("E-Mail Address.")
 #[derive(Serialize,Deserialize)]
 /// User properties.
 pub struct User {
-    pub userid: String,
+    pub userid: Userid,
     #[serde(skip_serializing_if="Option::is_none")]
     pub comment: Option<String>,
     #[serde(skip_serializing_if="Option::is_none")]
@@ -109,7 +109,7 @@ fn init() -> SectionConfig {
     };
     let plugin = SectionConfigPlugin::new("user".to_string(), Some("userid".to_string()), obj_schema);
-    let mut config = SectionConfig::new(&PROXMOX_USER_ID_SCHEMA);
+    let mut config = SectionConfig::new(&Userid::API_SCHEMA);
     config.register_plugin(plugin);
@@ -129,7 +129,7 @@ pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
     if data.sections.get("root@pam").is_none() {
         let user: User = User {
-            userid: "root@pam".to_string(),
+            userid: Userid::root_userid().clone(),
             comment: Some("Superuser".to_string()),
             enable: None,
             expire: None,


@@ -1,5 +1,4 @@
 use std::collections::{HashSet, HashMap};
-use std::convert::TryFrom;
 use std::ffi::{CStr, CString, OsStr};
 use std::fmt;
 use std::io::{self, Read, Write};
@@ -259,34 +258,40 @@ impl<'a, 'b> Archiver<'a, 'b> {
         oflags: OFlag,
         existed: bool,
     ) -> Result<Option<Fd>, Error> {
-        match Fd::openat(
-            &unsafe { RawFdNum::from_raw_fd(parent) },
-            file_name,
-            oflags,
-            Mode::empty(),
-        ) {
-            Ok(fd) => Ok(Some(fd)),
-            Err(nix::Error::Sys(Errno::ENOENT)) => {
-                if existed {
-                    self.report_vanished_file()?;
-                }
-                Ok(None)
-            }
-            Err(nix::Error::Sys(Errno::EACCES)) => {
-                writeln!(self.errors, "failed to open file: {:?}: access denied", file_name)?;
-                Ok(None)
-            }
-            Err(other) => Err(Error::from(other)),
-        }
+        // common flags we always want to use:
+        let oflags = oflags | OFlag::O_CLOEXEC | OFlag::O_NOCTTY;
+        let mut noatime = OFlag::O_NOATIME;
+        loop {
+            return match Fd::openat(
+                &unsafe { RawFdNum::from_raw_fd(parent) },
+                file_name,
+                oflags | noatime,
+                Mode::empty(),
+            ) {
+                Ok(fd) => Ok(Some(fd)),
+                Err(nix::Error::Sys(Errno::ENOENT)) => {
+                    if existed {
+                        self.report_vanished_file()?;
+                    }
+                    Ok(None)
+                }
+                Err(nix::Error::Sys(Errno::EACCES)) => {
+                    writeln!(self.errors, "failed to open file: {:?}: access denied", file_name)?;
+                    Ok(None)
+                }
+                Err(nix::Error::Sys(Errno::EPERM)) if !noatime.is_empty() => {
+                    // Retry without O_NOATIME:
+                    noatime = OFlag::empty();
+                    continue;
+                }
+                Err(other) => Err(Error::from(other)),
+            }
+        }
     }
     fn read_pxar_excludes(&mut self, parent: RawFd) -> Result<(), Error> {
-        let fd = self.open_file(
-            parent,
-            c_str!(".pxarexclude"),
-            OFlag::O_RDONLY | OFlag::O_CLOEXEC | OFlag::O_NOCTTY,
-            false,
-        )?;
+        let fd = self.open_file(parent, c_str!(".pxarexclude"), OFlag::O_RDONLY, false)?;
         let old_pattern_count = self.patterns.len();
@@ -480,7 +485,7 @@ impl<'a, 'b> Archiver<'a, 'b> {
         let fd = self.open_file(
             parent,
             c_file_name,
-            open_mode | OFlag::O_RDONLY | OFlag::O_NOFOLLOW | OFlag::O_CLOEXEC | OFlag::O_NOCTTY,
+            open_mode | OFlag::O_RDONLY | OFlag::O_NOFOLLOW,
             true,
         )?;
@@ -696,16 +701,16 @@ fn get_metadata(fd: RawFd, stat: &FileStat, flags: Flags, fs_magic: i64) -> Resu
     // required for some of these
     let proc_path = Path::new("/proc/self/fd/").join(fd.to_string());
-    let mtime = u64::try_from(stat.st_mtime * 1_000_000_000 + stat.st_mtime_nsec)
-        .map_err(|_| format_err!("file with negative mtime"))?;
     let mut meta = Metadata {
         stat: pxar::Stat {
             mode: u64::from(stat.st_mode),
             flags: 0,
             uid: stat.st_uid,
             gid: stat.st_gid,
-            mtime,
+            mtime: pxar::format::StatxTimestamp {
+                secs: stat.st_mtime,
+                nanos: stat.st_mtime_nsec as u32,
+            },
         },
         ..Default::default()
     };
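The open_file rewrite above adds a retry: O_NOATIME is only permitted for the file's owner (or with CAP_FOWNER), so an EPERM triggers one retry without the flag instead of aborting the archive run. A reduced sketch of the same fallback using the nix crate (the nix::Error::Sys form matches the nix version used in this code base):

use nix::errno::Errno;
use nix::fcntl::{openat, OFlag};
use nix::sys::stat::Mode;
use std::os::unix::io::RawFd;

// Try to open with O_NOATIME first; if the kernel refuses with EPERM
// (we are not the owner), retry the same open without it.
fn open_noatime_fallback(parent: RawFd, name: &str, oflags: OFlag) -> nix::Result<RawFd> {
    let oflags = oflags | OFlag::O_CLOEXEC | OFlag::O_NOCTTY;
    match openat(parent, name, oflags | OFlag::O_NOATIME, Mode::empty()) {
        Err(nix::Error::Sys(Errno::EPERM)) => openat(parent, name, oflags, Mode::empty()),
        other => other,
    }
}

fn main() -> nix::Result<()> {
    // AT_FDCWD makes the path relative to the current directory.
    let fd = open_noatime_fallback(nix::libc::AT_FDCWD, ".", OFlag::O_RDONLY)?;
    nix::unistd::close(fd)
}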


@@ -6,6 +6,7 @@ use std::io;
 use std::os::unix::ffi::OsStrExt;
 use std::os::unix::io::{AsRawFd, FromRawFd, RawFd};
 use std::path::Path;
+use std::sync::{Arc, Mutex};
 use anyhow::{bail, format_err, Error};
 use nix::dir::Dir;
@@ -20,16 +21,18 @@ use proxmox::c_result;
 use proxmox::tools::fs::{create_path, CreateOptions};
 use crate::pxar::dir_stack::PxarDirStack;
-use crate::pxar::Flags;
 use crate::pxar::metadata;
+use crate::pxar::Flags;
 pub fn extract_archive<T, F>(
     mut decoder: pxar::decoder::Decoder<T>,
     destination: &Path,
     match_list: &[MatchEntry],
+    extract_match_default: bool,
     feature_flags: Flags,
     allow_existing_dirs: bool,
     mut callback: F,
+    on_error: Option<Box<dyn FnMut(Error) -> Result<(), Error> + Send>>,
 ) -> Result<(), Error>
 where
     T: pxar::decoder::SeqRead,
@@ -68,8 +71,13 @@ where
         feature_flags,
     );
+    if let Some(on_error) = on_error {
+        extractor.on_error(on_error);
+    }
     let mut match_stack = Vec::new();
-    let mut current_match = true;
+    let mut err_path_stack = vec![OsString::from("/")];
+    let mut current_match = extract_match_default;
     while let Some(entry) = decoder.next() {
         use pxar::EntryKind;
@@ -87,6 +95,8 @@ where
         let metadata = entry.metadata();
+        extractor.set_path(entry.path().as_os_str().to_owned());
         let match_result = match_list.matches(
             entry.path().as_os_str().as_bytes(),
             Some(metadata.file_type() as u32),
@@ -102,17 +112,32 @@ where
             callback(entry.path());
             let create = current_match && match_result != Some(MatchType::Exclude);
-            extractor.enter_directory(file_name_os.to_owned(), metadata.clone(), create)?;
+            extractor
+                .enter_directory(file_name_os.to_owned(), metadata.clone(), create)
+                .map_err(|err| format_err!("error at entry {:?}: {}", file_name_os, err))?;
             // We're starting a new directory, push our old matching state and replace it with
             // our new one:
             match_stack.push(current_match);
             current_match = did_match;
+            // When we hit the goodbye table we'll try to apply metadata to the directory, but
+            // the Goodbye entry will not contain the path, so push it to our path stack for
+            // error messages:
+            err_path_stack.push(extractor.clone_path());
             Ok(())
         }
         (_, EntryKind::GoodbyeTable) => {
             // go up a directory
+            extractor.set_path(err_path_stack.pop().ok_or_else(|| {
+                format_err!(
+                    "error at entry {:?}: unexpected end of directory",
+                    file_name_os
+                )
+            })?);
             extractor
                 .leave_directory()
                 .map_err(|err| format_err!("error at entry {:?}: {}", file_name_os, err))?;
@@ -181,6 +206,13 @@ pub(crate) struct Extractor {
     feature_flags: Flags,
     allow_existing_dirs: bool,
     dir_stack: PxarDirStack,
+    /// For better error output we need to track the current path in the Extractor state.
+    current_path: Arc<Mutex<OsString>>,
+    /// Error callback. Includes `current_path` in the reformatted error, should return `Ok` to
+    /// continue extracting or the passed error as `Err` to bail out.
+    on_error: Box<dyn FnMut(Error) -> Result<(), Error> + Send>,
 }
 impl Extractor {
@@ -195,9 +227,30 @@ impl Extractor {
             dir_stack: PxarDirStack::new(root_dir, metadata),
             allow_existing_dirs,
             feature_flags,
+            current_path: Arc::new(Mutex::new(OsString::new())),
+            on_error: Box::new(|err| Err(err)),
         }
     }
+    /// We call this on errors. The error will be reformatted to include `current_path`. The
+    /// callback should decide whether this error was fatal (simply return it) to bail out early,
+    /// or log/remember/accumulate errors somewhere and return `Ok(())` in its place to continue
+    /// extracting.
+    pub fn on_error(&mut self, mut on_error: Box<dyn FnMut(Error) -> Result<(), Error> + Send>) {
+        let path = Arc::clone(&self.current_path);
+        self.on_error = Box::new(move |err: Error| -> Result<(), Error> {
+            on_error(format_err!("error at {:?}: {}", path.lock().unwrap(), err))
+        });
+    }
+    pub fn set_path(&mut self, path: OsString) {
+        *self.current_path.lock().unwrap() = path;
+    }
+    pub fn clone_path(&self) -> OsString {
+        self.current_path.lock().unwrap().clone()
+    }
     /// When encountering a directory during extraction, this is used to keep track of it. If
     /// `create` is true it is immediately created and its metadata will be updated once we leave
     /// it. If `create` is false it will only be created if it is going to have any actual content.
@@ -216,7 +269,7 @@ impl Extractor {
         Ok(())
     }
-    /// When done with a directory we need to make sure we're
+    /// When done with a directory we can apply its metadata if it has been created.
     pub fn leave_directory(&mut self) -> Result<(), Error> {
         let dir = self
             .dir_stack
@@ -230,6 +283,7 @@ impl Extractor {
                 dir.metadata(),
                 fd,
                 &CString::new(dir.file_name().as_bytes())?,
+                &mut self.on_error,
             )
             .map_err(|err| format_err!("failed to apply directory metadata: {}", err))?;
         }
@@ -255,14 +309,16 @@ impl Extractor {
     ) -> Result<(), Error> {
         let parent = self.parent_fd()?;
         nix::unistd::symlinkat(link, Some(parent), file_name)?;
-        metadata::apply_at(self.feature_flags, metadata, parent, file_name)
+        metadata::apply_at(
+            self.feature_flags,
+            metadata,
+            parent,
+            file_name,
+            &mut self.on_error,
+        )
     }
-    pub fn extract_hardlink(
-        &mut self,
-        file_name: &CStr,
-        link: &OsStr,
-    ) -> Result<(), Error> {
+    pub fn extract_hardlink(&mut self, file_name: &CStr, link: &OsStr) -> Result<(), Error> {
         crate::pxar::tools::assert_relative_path(link)?;
         let parent = self.parent_fd()?;
@@ -306,7 +362,13 @@ impl Extractor {
         unsafe { c_result!(libc::mknodat(parent, file_name.as_ptr(), mode, device)) }
             .map_err(|err| format_err!("failed to create device node: {}", err))?;
-        metadata::apply_at(self.feature_flags, metadata, parent, file_name)
+        metadata::apply_at(
+            self.feature_flags,
+            metadata,
+            parent,
+            file_name,
+            &mut self.on_error,
+        )
     }
     pub fn extract_file(
@@ -318,16 +380,23 @@ impl Extractor {
     ) -> Result<(), Error> {
         let parent = self.parent_fd()?;
         let mut file = unsafe {
-            std::fs::File::from_raw_fd(nix::fcntl::openat(
-                parent,
-                file_name,
-                OFlag::O_CREAT | OFlag::O_EXCL | OFlag::O_WRONLY | OFlag::O_CLOEXEC,
-                Mode::from_bits(0o600).unwrap(),
-            )
-            .map_err(|err| format_err!("failed to create file {:?}: {}", file_name, err))?)
+            std::fs::File::from_raw_fd(
+                nix::fcntl::openat(
+                    parent,
+                    file_name,
+                    OFlag::O_CREAT | OFlag::O_EXCL | OFlag::O_WRONLY | OFlag::O_CLOEXEC,
+                    Mode::from_bits(0o600).unwrap(),
+                )
+                .map_err(|err| format_err!("failed to create file {:?}: {}", file_name, err))?,
+            )
         };
-        metadata::apply_initial_flags(self.feature_flags, metadata, file.as_raw_fd())?;
+        metadata::apply_initial_flags(
+            self.feature_flags,
+            metadata,
+            file.as_raw_fd(),
+            &mut self.on_error,
+        )?;
         let extracted = io::copy(&mut *contents, &mut file)
            .map_err(|err| format_err!("failed to copy file contents: {}", err))?;
@@ -335,7 +404,13 @@ impl Extractor {
             bail!("extracted {} bytes of a file of {} bytes", extracted, size);
         }
-        metadata::apply(self.feature_flags, metadata, file.as_raw_fd(), file_name)
+        metadata::apply(
+            self.feature_flags,
+            metadata,
+            file.as_raw_fd(),
+            file_name,
+            &mut self.on_error,
+        )
     }
     pub async fn async_extract_file<T: tokio::io::AsyncRead + Unpin>(
@@ -347,16 +422,23 @@ impl Extractor {
     ) -> Result<(), Error> {
         let parent = self.parent_fd()?;
         let mut file = tokio::fs::File::from_std(unsafe {
-            std::fs::File::from_raw_fd(nix::fcntl::openat(
-                parent,
-                file_name,
-                OFlag::O_CREAT | OFlag::O_EXCL | OFlag::O_WRONLY | OFlag::O_CLOEXEC,
-                Mode::from_bits(0o600).unwrap(),
-            )
-            .map_err(|err| format_err!("failed to create file {:?}: {}", file_name, err))?)
+            std::fs::File::from_raw_fd(
+                nix::fcntl::openat(
+                    parent,
+                    file_name,
+                    OFlag::O_CREAT | OFlag::O_EXCL | OFlag::O_WRONLY | OFlag::O_CLOEXEC,
+                    Mode::from_bits(0o600).unwrap(),
+                )
+                .map_err(|err| format_err!("failed to create file {:?}: {}", file_name, err))?,
+            )
         });
-        metadata::apply_initial_flags(self.feature_flags, metadata, file.as_raw_fd())?;
+        metadata::apply_initial_flags(
+            self.feature_flags,
+            metadata,
+            file.as_raw_fd(),
+            &mut self.on_error,
+        )?;
         let extracted = tokio::io::copy(&mut *contents, &mut file)
             .await
@@ -365,6 +447,12 @@ impl Extractor {
             bail!("extracted {} bytes of a file of {} bytes", extracted, size);
         }
-        metadata::apply(self.feature_flags, metadata, file.as_raw_fd(), file_name)
+        metadata::apply(
+            self.feature_flags,
+            metadata,
+            file.as_raw_fd(),
+            file_name,
+            &mut self.on_error,
+        )
     }
 }
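extract_archive now accepts an optional on_error callback, and the Extractor wraps it so every reported error carries the current path. A hypothetical caller-side callback matching that signature, which logs and collects errors instead of aborting (anyhow is already used throughout this code):

use std::sync::{Arc, Mutex};
use anyhow::Error;

// Collects non-fatal extraction errors; returning Ok(()) tells the extractor
// to keep going, returning Err(err) would abort at that entry.
fn make_error_collector(
    errors: Arc<Mutex<Vec<String>>>,
) -> Box<dyn FnMut(Error) -> Result<(), Error> + Send> {
    Box::new(move |err: Error| -> Result<(), Error> {
        eprintln!("{}", err);
        errors.lock().unwrap().push(err.to_string());
        Ok(())
    })
}

fn main() {
    let errors = Arc::new(Mutex::new(Vec::new()));
    let mut on_error = make_error_collector(Arc::clone(&errors));
    on_error(anyhow::format_err!("failed to restore xattrs")).unwrap();
    assert_eq!(errors.lock().unwrap().len(), 1);
}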


@@ -673,11 +673,6 @@ fn to_stat(inode: u64, entry: &pxar::Entry) -> Result<libc::stat, Error> {
     let metadata = entry.metadata();
-    let time = i64::try_from(metadata.stat.mtime)
-        .map_err(|_| format_err!("mtime does not fit into a signed 64 bit integer"))?;
-    let sec = time / 1_000_000_000;
-    let nsec = time % 1_000_000_000;
     let mut stat: libc::stat = unsafe { mem::zeroed() };
     stat.st_ino = inode;
     stat.st_nlink = nlink;
@@ -687,11 +682,11 @@ fn to_stat(inode: u64, entry: &pxar::Entry) -> Result<libc::stat, Error> {
         .map_err(|err| format_err!("size does not fit into st_size field: {}", err))?;
     stat.st_uid = metadata.stat.uid;
     stat.st_gid = metadata.stat.gid;
-    stat.st_atime = sec;
-    stat.st_atime_nsec = nsec;
-    stat.st_mtime = sec;
-    stat.st_mtime_nsec = nsec;
-    stat.st_ctime = sec;
-    stat.st_ctime_nsec = nsec;
+    stat.st_atime = metadata.stat.mtime.secs;
+    stat.st_atime_nsec = metadata.stat.mtime.nanos as _;
+    stat.st_mtime = metadata.stat.mtime.secs;
+    stat.st_mtime_nsec = metadata.stat.mtime.nanos as _;
+    stat.st_ctime = metadata.stat.mtime.secs;
+    stat.st_ctime_nsec = metadata.stat.mtime.nanos as _;
     Ok(stat)
 }


@@ -37,26 +37,20 @@ fn allow_notsupp_remember<E: SysError>(err: E, not_supp: &mut bool) -> Result<()
     }
 }
-fn nsec_to_update_timespec(mtime_nsec: u64) -> [libc::timespec; 2] {
+fn timestamp_to_update_timespec(mtime: &pxar::format::StatxTimestamp) -> [libc::timespec; 2] {
     // restore mtime
     const UTIME_OMIT: i64 = (1 << 30) - 2;
-    const NANOS_PER_SEC: i64 = 1_000_000_000;
-    let sec = (mtime_nsec as i64) / NANOS_PER_SEC;
-    let nsec = (mtime_nsec as i64) % NANOS_PER_SEC;
-    let times: [libc::timespec; 2] = [
+    [
         libc::timespec {
             tv_sec: 0,
             tv_nsec: UTIME_OMIT,
         },
         libc::timespec {
-            tv_sec: sec,
-            tv_nsec: nsec,
+            tv_sec: mtime.secs,
+            tv_nsec: mtime.nanos as _,
         },
-    ];
-    times
+    ]
 }
 //
@@ -68,6 +62,7 @@ pub fn apply_at(
     metadata: &Metadata,
     parent: RawFd,
     file_name: &CStr,
+    on_error: &mut (dyn FnMut(Error) -> Result<(), Error> + Send),
 ) -> Result<(), Error> {
     let fd = proxmox::tools::fd::Fd::openat(
         &unsafe { RawFdNum::from_raw_fd(parent) },
@@ -76,20 +71,32 @@ pub fn apply_at(
         Mode::empty(),
     )?;
-    apply(flags, metadata, fd.as_raw_fd(), file_name)
+    apply(flags, metadata, fd.as_raw_fd(), file_name, on_error)
 }
 pub fn apply_initial_flags(
     flags: Flags,
     metadata: &Metadata,
     fd: RawFd,
+    on_error: &mut (dyn FnMut(Error) -> Result<(), Error> + Send),
 ) -> Result<(), Error> {
     let entry_flags = Flags::from_bits_truncate(metadata.stat.flags);
-    apply_chattr(fd, entry_flags.to_initial_chattr(), flags.to_initial_chattr())?;
+    apply_chattr(
+        fd,
+        entry_flags.to_initial_chattr(),
+        flags.to_initial_chattr(),
+    )
+    .or_else(on_error)?;
     Ok(())
 }
-pub fn apply(flags: Flags, metadata: &Metadata, fd: RawFd, file_name: &CStr) -> Result<(), Error> {
+pub fn apply(
+    flags: Flags,
+    metadata: &Metadata,
+    fd: RawFd,
+    file_name: &CStr,
+    on_error: &mut (dyn FnMut(Error) -> Result<(), Error> + Send),
+) -> Result<(), Error> {
     let c_proc_path = CString::new(format!("/proc/self/fd/{}", fd)).unwrap();
     unsafe {
@@ -101,15 +108,18 @@ pub fn apply(flags: Flags, metadata: &Metadata, fd: RawFd, file_name: &CStr) ->
         ))
         .map(drop)
         .or_else(allow_notsupp)
-        .map_err(|err| format_err!("failed to set ownership: {}", err))?;
+        .map_err(|err| format_err!("failed to set ownership: {}", err))
+        .or_else(&mut *on_error)?;
     }
     let mut skip_xattrs = false;
-    apply_xattrs(flags, c_proc_path.as_ptr(), metadata, &mut skip_xattrs)?;
-    add_fcaps(flags, c_proc_path.as_ptr(), metadata, &mut skip_xattrs)?;
+    apply_xattrs(flags, c_proc_path.as_ptr(), metadata, &mut skip_xattrs)
+        .or_else(&mut *on_error)?;
+    add_fcaps(flags, c_proc_path.as_ptr(), metadata, &mut skip_xattrs).or_else(&mut *on_error)?;
     apply_acls(flags, &c_proc_path, metadata)
-        .map_err(|err| format_err!("failed to apply acls: {}", err))?;
-    apply_quota_project_id(flags, fd, metadata)?;
+        .map_err(|err| format_err!("failed to apply acls: {}", err))
+        .or_else(&mut *on_error)?;
+    apply_quota_project_id(flags, fd, metadata).or_else(&mut *on_error)?;
     // Finally mode and time. We may lose access with mode, but the changing the mode also
     // affects times.
@@ -119,31 +129,32 @@ pub fn apply(flags: Flags, metadata: &Metadata, fd: RawFd, file_name: &CStr) ->
         })
         .map(drop)
         .or_else(allow_notsupp)
-        .map_err(|err| format_err!("failed to change file mode: {}", err))?;
+        .map_err(|err| format_err!("failed to change file mode: {}", err))
+        .or_else(&mut *on_error)?;
     }
     if metadata.stat.flags != 0 {
-        apply_flags(flags, fd, metadata.stat.flags)?;
+        apply_flags(flags, fd, metadata.stat.flags).or_else(&mut *on_error)?;
     }
     let res = c_result!(unsafe {
         libc::utimensat(
             libc::AT_FDCWD,
             c_proc_path.as_ptr(),
-            nsec_to_update_timespec(metadata.stat.mtime).as_ptr(),
+            timestamp_to_update_timespec(&metadata.stat.mtime).as_ptr(),
             0,
         )
     });
     match res {
         Ok(_) => (),
         Err(ref err) if err.is_errno(Errno::EOPNOTSUPP) => (),
-        Err(ref err) if err.is_errno(Errno::EPERM) => {
-            println!(
-                "failed to restore mtime attribute on {:?}: {}",
-                file_name, err
-            );
+        Err(err) => {
+            on_error(format_err!(
+                "failed to restore mtime attribute on {:?}: {}",
+                file_name,
+                err
+            ))?;
         }
-        Err(err) => return Err(err.into()),
     }
     Ok(())
@@ -195,7 +206,7 @@ fn apply_xattrs(
     }
     if !xattr::is_valid_xattr_name(xattr.name()) {
-        println!("skipping invalid xattr named {:?}", xattr.name());
+        eprintln!("skipping invalid xattr named {:?}", xattr.name());
         continue;
     }
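The timespec helper above relies on utimensat() taking a two-element [atime, mtime] array, where a tv_nsec of UTIME_OMIT leaves that timestamp untouched; only the archived mtime is restored. A self-contained sketch of that array construction (libc is assumed as a dependency, as in the code above):

// UTIME_OMIT in tv_nsec tells utimensat() to leave that timestamp alone.
const UTIME_OMIT: i64 = (1 << 30) - 2;

// [atime, mtime]: skip atime, restore mtime from the archived seconds/nanos.
fn mtime_only(secs: i64, nanos: u32) -> [libc::timespec; 2] {
    [
        libc::timespec { tv_sec: 0, tv_nsec: UTIME_OMIT },
        libc::timespec { tv_sec: secs, tv_nsec: nanos as i64 },
    ]
}

fn main() {
    let times = mtime_only(1_600_000_000, 500_000_000);
    assert_eq!(times[1].tv_sec, 1_600_000_000);
    assert_eq!(times[0].tv_nsec, UTIME_OMIT);
}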


@@ -120,8 +120,7 @@ pub fn format_single_line_entry(entry: &Entry) -> String {
     let mode_string = mode_string(entry);
     let meta = entry.metadata();
-    let mtime = meta.mtime_as_duration();
-    let mtime = chrono::Local.timestamp(mtime.as_secs() as i64, mtime.subsec_nanos());
+    let mtime = chrono::Local.timestamp(meta.stat.mtime.secs, meta.stat.mtime.nanos);
     let (size, link) = match entry.kind() {
         EntryKind::File { size, .. } => (format!("{}", *size), String::new()),
@@ -148,8 +147,7 @@ pub fn format_multi_line_entry(entry: &Entry) -> String {
     let mode_string = mode_string(entry);
     let meta = entry.metadata();
-    let mtime = meta.mtime_as_duration();
-    let mtime = chrono::Local.timestamp(mtime.as_secs() as i64, mtime.subsec_nanos());
+    let mtime = chrono::Local.timestamp(meta.stat.mtime.secs, meta.stat.mtime.nanos);
     let (size, link, type_name) = match entry.kind() {
         EntryKind::File { size, .. } => (format!("{}", *size), String::new(), "file"),


@@ -44,7 +44,7 @@ impl <E: RpcEnvironment + Clone> H2Service<E> {
         let (path, components) = match tools::normalize_uri_path(parts.uri.path()) {
             Ok((p,c)) => (p, c),
-            Err(err) => return future::err(http_err!(BAD_REQUEST, err.to_string())).boxed(),
+            Err(err) => return future::err(http_err!(BAD_REQUEST, "{}", err)).boxed(),
         };
         self.debug(format!("{} {}", method, path));
@@ -55,7 +55,7 @@ impl <E: RpcEnvironment + Clone> H2Service<E> {
         match self.router.find_method(&components, method, &mut uri_param) {
             None => {
-                let err = http_err!(NOT_FOUND, format!("Path '{}' not found.", path).to_string());
+                let err = http_err!(NOT_FOUND, "Path '{}' not found.", path);
                 future::ok((formatter.format_error)(err)).boxed()
             }
             Some(api_method) => {


@@ -27,6 +27,7 @@ use super::formatter::*;
 use super::ApiConfig;
 use crate::auth_helpers::*;
+use crate::api2::types::Userid;
 use crate::tools;
 use crate::config::cached_user_info::CachedUserInfo;
@@ -204,13 +205,13 @@ async fn get_request_parameters<S: 'static + BuildHasher + Send>(
     }
     let body = req_body
-        .map_err(|err| http_err!(BAD_REQUEST, format!("Promlems reading request body: {}", err)))
+        .map_err(|err| http_err!(BAD_REQUEST, "Promlems reading request body: {}", err))
         .try_fold(Vec::new(), |mut acc, chunk| async move {
             if acc.len() + chunk.len() < 64*1024 { //fimxe: max request body size?
                 acc.extend_from_slice(&*chunk);
                 Ok(acc)
             } else {
-                Err(http_err!(BAD_REQUEST, "Request body too large".to_string()))
+                Err(http_err!(BAD_REQUEST, "Request body too large"))
             }
         }).await?;
@@ -311,10 +312,10 @@ pub async fn handle_api_request<Env: RpcEnvironment, S: 'static + BuildHasher +
     Ok(resp)
 }
-fn get_index(username: Option<String>, token: Option<String>, api: &Arc<ApiConfig>, parts: Parts) -> Response<Body> {
+fn get_index(userid: Option<Userid>, token: Option<String>, api: &Arc<ApiConfig>, parts: Parts) -> Response<Body> {
     let nodename = proxmox::tools::nodename();
-    let username = username.unwrap_or_else(|| String::from(""));
+    let userid = userid.as_ref().map(|u| u.as_str()).unwrap_or("");
     let token = token.unwrap_or_else(|| String::from(""));
@@ -333,7 +334,7 @@ fn get_index(username: Option<String>, token: Option<String>, api: &Arc<ApiConfi
     let data = json!({
         "NodeName": nodename,
-        "UserName": username,
+        "UserName": userid,
         "CSRFPreventionToken": token,
         "debug": debug,
     });
@@ -392,12 +393,12 @@ async fn simple_static_file_download(filename: PathBuf) -> Result<Response<Body>
     let mut file = File::open(filename)
         .await
-        .map_err(|err| http_err!(BAD_REQUEST, format!("File open failed: {}", err)))?;
+        .map_err(|err| http_err!(BAD_REQUEST, "File open failed: {}", err))?;
     let mut data: Vec<u8> = Vec::new();
     file.read_to_end(&mut data)
         .await
-        .map_err(|err| http_err!(BAD_REQUEST, format!("File read failed: {}", err)))?;
+        .map_err(|err| http_err!(BAD_REQUEST, "File read failed: {}", err))?;
     let mut response = Response::new(data.into());
     response.headers_mut().insert(
@@ -411,7 +412,7 @@ async fn chuncked_static_file_download(filename: PathBuf) -> Result<Response<Bod
     let file = File::open(filename)
         .await
-        .map_err(|err| http_err!(BAD_REQUEST, format!("File open failed: {}", err)))?;
+        .map_err(|err| http_err!(BAD_REQUEST, "File open failed: {}", err))?;
     let payload = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new())
         .map_ok(|bytes| hyper::body::Bytes::from(bytes.freeze()));
@@ -429,7 +430,7 @@ async fn chuncked_static_file_download(filename: PathBuf) -> Result<Response<Bod
 async fn handle_static_file_download(filename: PathBuf) -> Result<Response<Body>, Error> {
     let metadata = tokio::fs::metadata(filename.clone())
-        .map_err(|err| http_err!(BAD_REQUEST, format!("File access problems: {}", err)))
+        .map_err(|err| http_err!(BAD_REQUEST, "File access problems: {}", err))
         .await?;
     if metadata.len() < 1024*32 {
@@ -461,33 +462,33 @@ fn check_auth(
     ticket: &Option<String>,
     token: &Option<String>,
     user_info: &CachedUserInfo,
-) -> Result<String, Error> {
+) -> Result<Userid, Error> {
     let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
-    let username = match ticket {
+    let userid = match ticket {
         Some(ticket) => match tools::ticket::verify_rsa_ticket(public_auth_key(), "PBS", &ticket, None, -300, ticket_lifetime) {
-            Ok((_age, Some(username))) => username.to_owned(),
+            Ok((_age, Some(userid))) => userid,
             Ok((_, None)) => bail!("ticket without username."),
             Err(err) => return Err(err),
         }
         None => bail!("missing ticket"),
     };
-    if !user_info.is_active_user(&username) {
+    if !user_info.is_active_user(&userid) {
         bail!("user account disabled or expired.");
     }
     if method != hyper::Method::GET {
         if let Some(token) = token {
             println!("CSRF prevention token: {:?}", token);
-            verify_csrf_prevention_token(csrf_secret(), &username, &token, -300, ticket_lifetime)?;
+            verify_csrf_prevention_token(csrf_secret(), &userid, &token, -300, ticket_lifetime)?;
} else { } else {
bail!("missing CSRF prevention token"); bail!("missing CSRF prevention token");
} }
} }
Ok(username) Ok(userid)
} }
pub async fn handle_request(api: Arc<ApiConfig>, req: Request<Body>) -> Result<Response<Body>, Error> { pub async fn handle_request(api: Arc<ApiConfig>, req: Request<Body>) -> Result<Response<Body>, Error> {
@ -532,10 +533,10 @@ pub async fn handle_request(api: Arc<ApiConfig>, req: Request<Body>) -> Result<R
} else { } else {
let (ticket, token) = extract_auth_data(&parts.headers); let (ticket, token) = extract_auth_data(&parts.headers);
match check_auth(&method, &ticket, &token, &user_info) { match check_auth(&method, &ticket, &token, &user_info) {
Ok(username) => rpcenv.set_user(Some(username)), Ok(userid) => rpcenv.set_user(Some(userid.to_string())),
Err(err) => { Err(err) => {
// always delay unauthorized calls by 3 seconds (from start of request) // always delay unauthorized calls by 3 seconds (from start of request)
let err = http_err!(UNAUTHORIZED, format!("authentication failed - {}", err)); let err = http_err!(UNAUTHORIZED, "authentication failed - {}", err);
tokio::time::delay_until(Instant::from_std(delay_unauth_time)).await; tokio::time::delay_until(Instant::from_std(delay_unauth_time)).await;
return Ok((formatter.format_error)(err)); return Ok((formatter.format_error)(err));
} }
@ -544,13 +545,13 @@ pub async fn handle_request(api: Arc<ApiConfig>, req: Request<Body>) -> Result<R
match api.find_method(&components[2..], method, &mut uri_param) { match api.find_method(&components[2..], method, &mut uri_param) {
None => { None => {
let err = http_err!(NOT_FOUND, format!("Path '{}' not found.", path).to_string()); let err = http_err!(NOT_FOUND, "Path '{}' not found.", path);
return Ok((formatter.format_error)(err)); return Ok((formatter.format_error)(err));
} }
Some(api_method) => { Some(api_method) => {
let user = rpcenv.get_user(); let user = rpcenv.get_user();
if !check_api_permission(api_method.access.permission, user.as_deref(), &uri_param, user_info.as_ref()) { if !check_api_permission(api_method.access.permission, user.as_deref(), &uri_param, user_info.as_ref()) {
let err = http_err!(FORBIDDEN, format!("permission check failed")); let err = http_err!(FORBIDDEN, "permission check failed");
tokio::time::delay_until(Instant::from_std(access_forbidden_time)).await; tokio::time::delay_until(Instant::from_std(access_forbidden_time)).await;
return Ok((formatter.format_error)(err)); return Ok((formatter.format_error)(err));
} }
@ -580,9 +581,9 @@ pub async fn handle_request(api: Arc<ApiConfig>, req: Request<Body>) -> Result<R
let (ticket, token) = extract_auth_data(&parts.headers); let (ticket, token) = extract_auth_data(&parts.headers);
if ticket != None { if ticket != None {
match check_auth(&method, &ticket, &token, &user_info) { match check_auth(&method, &ticket, &token, &user_info) {
Ok(username) => { Ok(userid) => {
let new_token = assemble_csrf_prevention_token(csrf_secret(), &username); let new_token = assemble_csrf_prevention_token(csrf_secret(), &userid);
return Ok(get_index(Some(username), Some(new_token), &api, parts)); return Ok(get_index(Some(userid), Some(new_token), &api, parts));
} }
_ => { _ => {
tokio::time::delay_until(Instant::from_std(delay_unauth_time)).await; tokio::time::delay_until(Instant::from_std(delay_unauth_time)).await;
@ -598,5 +599,5 @@ pub async fn handle_request(api: Arc<ApiConfig>, req: Request<Body>) -> Result<R
} }
} }
Err(http_err!(NOT_FOUND, format!("Path '{}' not found.", path).to_string())) Err(http_err!(NOT_FOUND, "Path '{}' not found.", path))
} }

View File

@ -19,6 +19,7 @@ pub struct ServerState {
pub shutdown_listeners: BroadcastData<()>, pub shutdown_listeners: BroadcastData<()>,
pub last_worker_listeners: BroadcastData<()>, pub last_worker_listeners: BroadcastData<()>,
pub worker_count: usize, pub worker_count: usize,
pub internal_task_count: usize,
pub reload_request: bool, pub reload_request: bool,
} }
@ -28,6 +29,7 @@ lazy_static! {
shutdown_listeners: BroadcastData::new(), shutdown_listeners: BroadcastData::new(),
last_worker_listeners: BroadcastData::new(), last_worker_listeners: BroadcastData::new(),
worker_count: 0, worker_count: 0,
internal_task_count: 0,
reload_request: false, reload_request: false,
}); });
} }
@ -101,20 +103,40 @@ pub fn last_worker_future() -> impl Future<Output = Result<(), Error>> {
} }
pub fn set_worker_count(count: usize) { pub fn set_worker_count(count: usize) {
let mut data = SERVER_STATE.lock().unwrap(); SERVER_STATE.lock().unwrap().worker_count = count;
data.worker_count = count;
if !(data.mode == ServerMode::Shutdown && data.worker_count == 0) { return; } check_last_worker();
data.last_worker_listeners.notify_listeners(Ok(()));
} }
pub fn check_last_worker() { pub fn check_last_worker() {
let mut data = SERVER_STATE.lock().unwrap(); let mut data = SERVER_STATE.lock().unwrap();
if !(data.mode == ServerMode::Shutdown && data.worker_count == 0) { return; } if !(data.mode == ServerMode::Shutdown && data.worker_count == 0 && data.internal_task_count == 0) { return; }
data.last_worker_listeners.notify_listeners(Ok(())); data.last_worker_listeners.notify_listeners(Ok(()));
} }
/// Spawns a tokio task that will be tracked for reload
/// and if it is finished, notify the last_worker_listener if we
/// are in shutdown mode
pub fn spawn_internal_task<T>(task: T)
where
T: Future + Send + 'static,
T::Output: Send + 'static,
{
let mut data = SERVER_STATE.lock().unwrap();
data.internal_task_count += 1;
tokio::spawn(async move {
let _ = tokio::spawn(task).await; // ignore errors
{ // drop mutex
let mut data = SERVER_STATE.lock().unwrap();
if data.internal_task_count > 0 {
data.internal_task_count -= 1;
}
}
check_last_worker();
});
}
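The counter added here ties internal helper tasks into the same shutdown accounting as regular workers: `spawn_internal_task` bumps `internal_task_count`, and `check_last_worker` only notifies `last_worker_listeners` once both counts reach zero in shutdown mode. A minimal usage sketch, assuming the function is exported from the `server` module and a tokio 0.2 runtime (as the `delay_until` calls elsewhere in this diff suggest):

```rust
use std::time::Duration;

// Hypothetical one-shot background job tracked across a graceful shutdown:
// it is counted in internal_task_count while running, so last_worker_future()
// will not resolve until it has finished.
server::spawn_internal_task(async move {
    tokio::time::delay_for(Duration::from_secs(30)).await; // tokio 0.2 timer API
    println!("housekeeping finished");
});
```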

View File

@ -1,19 +1,21 @@
use anyhow::{bail, Error};
use lazy_static::lazy_static;
use regex::Regex;
use chrono::Local;
use std::sync::atomic::{AtomicUsize, Ordering}; use std::sync::atomic::{AtomicUsize, Ordering};
use anyhow::{bail, Error};
use chrono::Local;
use lazy_static::lazy_static;
use regex::Regex;
use proxmox::sys::linux::procfs; use proxmox::sys::linux::procfs;
use crate::api2::types::Userid;
/// Unique Process/Task Identifier /// Unique Process/Task Identifier
/// ///
/// We use this to uniquely identify worker task. UPIDs have a short /// We use this to uniquely identify worker task. UPIDs have a short
/// string repesentaion, which gives additional information about the /// string repesentaion, which gives additional information about the
/// type of the task. for example: /// type of the task. for example:
/// ```text /// ```text
/// UPID:{node}:{pid}:{pstart}:{task_id}:{starttime}:{worker_type}:{worker_id}:{username}: /// UPID:{node}:{pid}:{pstart}:{task_id}:{starttime}:{worker_type}:{worker_id}:{userid}:
/// UPID:elsa:00004F37:0039E469:00000000:5CA78B83:garbage_collection::root@pam: /// UPID:elsa:00004F37:0039E469:00000000:5CA78B83:garbage_collection::root@pam:
/// ``` /// ```
/// Please note that we use tokio, so a single thread can run multiple /// Please note that we use tokio, so a single thread can run multiple
@ -33,7 +35,7 @@ pub struct UPID {
/// Worker ID (arbitrary ASCII string) /// Worker ID (arbitrary ASCII string)
pub worker_id: Option<String>, pub worker_id: Option<String>,
/// The user who started the task /// The user who started the task
pub username: String, pub userid: Userid,
/// The node name. /// The node name.
pub node: String, pub node: String,
} }
@ -41,7 +43,11 @@ pub struct UPID {
impl UPID { impl UPID {
/// Create a new UPID /// Create a new UPID
pub fn new(worker_type: &str, worker_id: Option<String>, username: &str) -> Result<Self, Error> { pub fn new(
worker_type: &str,
worker_id: Option<String>,
userid: Userid,
) -> Result<Self, Error> {
let pid = unsafe { libc::getpid() }; let pid = unsafe { libc::getpid() };
@ -67,7 +73,7 @@ impl UPID {
task_id, task_id,
worker_type: worker_type.to_owned(), worker_type: worker_type.to_owned(),
worker_id, worker_id,
username: username.to_owned(), userid,
node: proxmox::tools::nodename().to_owned(), node: proxmox::tools::nodename().to_owned(),
}) })
} }
@ -91,7 +97,7 @@ impl std::str::FromStr for UPID {
static ref REGEX: Regex = Regex::new(concat!( static ref REGEX: Regex = Regex::new(concat!(
r"^UPID:(?P<node>[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?):(?P<pid>[0-9A-Fa-f]{8}):", r"^UPID:(?P<node>[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?):(?P<pid>[0-9A-Fa-f]{8}):",
r"(?P<pstart>[0-9A-Fa-f]{8,9}):(?P<task_id>[0-9A-Fa-f]{8,16}):(?P<starttime>[0-9A-Fa-f]{8}):", r"(?P<pstart>[0-9A-Fa-f]{8,9}):(?P<task_id>[0-9A-Fa-f]{8,16}):(?P<starttime>[0-9A-Fa-f]{8}):",
r"(?P<wtype>[^:\s]+):(?P<wid>[^:\s]*):(?P<username>[^:\s]+):$" r"(?P<wtype>[^:\s]+):(?P<wid>[^:\s]*):(?P<userid>[^:\s]+):$"
)).unwrap(); )).unwrap();
} }
@ -104,7 +110,7 @@ impl std::str::FromStr for UPID {
task_id: usize::from_str_radix(&cap["task_id"], 16).unwrap(), task_id: usize::from_str_radix(&cap["task_id"], 16).unwrap(),
worker_type: cap["wtype"].to_string(), worker_type: cap["wtype"].to_string(),
worker_id: if cap["wid"].is_empty() { None } else { Some(cap["wid"].to_string()) }, worker_id: if cap["wid"].is_empty() { None } else { Some(cap["wid"].to_string()) },
username: cap["username"].to_string(), userid: cap["userid"].parse()?,
node: cap["node"].to_string(), node: cap["node"].to_string(),
}) })
} else { } else {
@ -124,6 +130,6 @@ impl std::fmt::Display for UPID {
// more that 8 characters for pstart // more that 8 characters for pstart
write!(f, "UPID:{}:{:08X}:{:08X}:{:08X}:{:08X}:{}:{}:{}:", write!(f, "UPID:{}:{:08X}:{:08X}:{:08X}:{:08X}:{}:{}:{}:",
self.node, self.pid, self.pstart, self.task_id, self.starttime, self.worker_type, wid, self.username) self.node, self.pid, self.pstart, self.task_id, self.starttime, self.worker_type, wid, self.userid)
} }
} }
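Taken together, the `FromStr` and `Display` impls round-trip the UPID string with the user id as the last field. A short sketch, assuming `UPID` is re-exported as `proxmox_backup::server::UPID` and reusing the example string from the doc comment above:

```rust
use proxmox_backup::server::UPID;

// Parse the documented example and inspect the now strongly-typed userid field.
let upid: UPID =
    "UPID:elsa:00004F37:0039E469:00000000:5CA78B83:garbage_collection::root@pam:".parse()?;
assert_eq!(upid.worker_type, "garbage_collection");
assert_eq!(upid.userid.as_str(), "root@pam");

// Display re-serializes to the same canonical string.
assert_eq!(
    upid.to_string(),
    "UPID:elsa:00004F37:0039E469:00000000:5CA78B83:garbage_collection::root@pam:"
);
```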

View File

@ -15,11 +15,12 @@ use tokio::sync::oneshot;
use proxmox::sys::linux::procfs; use proxmox::sys::linux::procfs;
use proxmox::try_block; use proxmox::try_block;
use proxmox::tools::fs::{create_path, replace_file, CreateOptions}; use proxmox::tools::fs::{create_path, open_file_locked, replace_file, CreateOptions};
use super::UPID; use super::UPID;
use crate::tools::FileLogger; use crate::tools::FileLogger;
use crate::api2::types::Userid;
macro_rules! PROXMOX_BACKUP_VAR_RUN_DIR_M { () => ("/run/proxmox-backup") } macro_rules! PROXMOX_BACKUP_VAR_RUN_DIR_M { () => ("/run/proxmox-backup") }
macro_rules! PROXMOX_BACKUP_LOG_DIR_M { () => ("/var/log/proxmox-backup") } macro_rules! PROXMOX_BACKUP_LOG_DIR_M { () => ("/var/log/proxmox-backup") }
@ -247,7 +248,7 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
let backup_user = crate::backup::backup_user()?; let backup_user = crate::backup::backup_user()?;
let lock = crate::tools::open_file_locked(PROXMOX_BACKUP_TASK_LOCK_FN, std::time::Duration::new(10, 0))?; let lock = open_file_locked(PROXMOX_BACKUP_TASK_LOCK_FN, std::time::Duration::new(10, 0))?;
nix::unistd::chown(PROXMOX_BACKUP_TASK_LOCK_FN, Some(backup_user.uid), Some(backup_user.gid))?; nix::unistd::chown(PROXMOX_BACKUP_TASK_LOCK_FN, Some(backup_user.uid), Some(backup_user.gid))?;
let reader = match File::open(PROXMOX_BACKUP_ACTIVE_TASK_FN) { let reader = match File::open(PROXMOX_BACKUP_ACTIVE_TASK_FN) {
@ -394,10 +395,10 @@ impl Drop for WorkerTask {
impl WorkerTask { impl WorkerTask {
pub fn new(worker_type: &str, worker_id: Option<String>, username: &str, to_stdout: bool) -> Result<Arc<Self>, Error> { pub fn new(worker_type: &str, worker_id: Option<String>, userid: Userid, to_stdout: bool) -> Result<Arc<Self>, Error> {
println!("register worker"); println!("register worker");
let upid = UPID::new(worker_type, worker_id, username)?; let upid = UPID::new(worker_type, worker_id, userid)?;
let task_id = upid.task_id; let task_id = upid.task_id;
let mut path = std::path::PathBuf::from(PROXMOX_BACKUP_TASK_DIR); let mut path = std::path::PathBuf::from(PROXMOX_BACKUP_TASK_DIR);
@ -442,14 +443,14 @@ impl WorkerTask {
pub fn spawn<F, T>( pub fn spawn<F, T>(
worker_type: &str, worker_type: &str,
worker_id: Option<String>, worker_id: Option<String>,
username: &str, userid: Userid,
to_stdout: bool, to_stdout: bool,
f: F, f: F,
) -> Result<String, Error> ) -> Result<String, Error>
where F: Send + 'static + FnOnce(Arc<WorkerTask>) -> T, where F: Send + 'static + FnOnce(Arc<WorkerTask>) -> T,
T: Send + 'static + Future<Output = Result<(), Error>>, T: Send + 'static + Future<Output = Result<(), Error>>,
{ {
let worker = WorkerTask::new(worker_type, worker_id, username, to_stdout)?; let worker = WorkerTask::new(worker_type, worker_id, userid, to_stdout)?;
let upid_str = worker.upid.to_string(); let upid_str = worker.upid.to_string();
let f = f(worker.clone()); let f = f(worker.clone());
tokio::spawn(async move { tokio::spawn(async move {
@ -464,7 +465,7 @@ impl WorkerTask {
pub fn new_thread<F>( pub fn new_thread<F>(
worker_type: &str, worker_type: &str,
worker_id: Option<String>, worker_id: Option<String>,
username: &str, userid: Userid,
to_stdout: bool, to_stdout: bool,
f: F, f: F,
) -> Result<String, Error> ) -> Result<String, Error>
@ -474,7 +475,7 @@ impl WorkerTask {
let (p, c) = oneshot::channel::<()>(); let (p, c) = oneshot::channel::<()>();
let worker = WorkerTask::new(worker_type, worker_id, username, to_stdout)?; let worker = WorkerTask::new(worker_type, worker_id, userid, to_stdout)?;
let upid_str = worker.upid.to_string(); let upid_str = worker.upid.to_string();
let _child = std::thread::Builder::new().name(upid_str.clone()).spawn(move || { let _child = std::thread::Builder::new().name(upid_str.clone()).spawn(move || {
@ -502,17 +503,23 @@ impl WorkerTask {
Ok(upid_str) Ok(upid_str)
} }
/// Log task result, remove task from running list /// get the Text of the result
pub fn log_result(&self, result: &Result<(), Error>) { pub fn get_log_text(&self, result: &Result<(), Error>) -> String {
let warn_count = self.data.lock().unwrap().warn_count; let warn_count = self.data.lock().unwrap().warn_count;
if let Err(err) = result { if let Err(err) = result {
self.log(&format!("TASK ERROR: {}", err)); format!("ERROR: {}", err)
} else if warn_count > 0 { } else if warn_count > 0 {
self.log(format!("TASK WARNINGS: {}", warn_count)); format!("WARNINGS: {}", warn_count)
} else { } else {
self.log("TASK OK"); "OK".to_string()
} }
}
/// Log task result, remove task from running list
pub fn log_result(&self, result: &Result<(), Error>) {
self.log(format!("TASK {}", self.get_log_text(result)));
WORKER_TASK_LIST.lock().unwrap().remove(&self.upid.task_id); WORKER_TASK_LIST.lock().unwrap().remove(&self.upid.task_id);
let _ = update_active_workers(None); let _ = update_active_workers(None);
@ -583,4 +590,8 @@ impl WorkerTask {
} }
rx rx
} }
pub fn upid(&self) -> &UPID {
&self.upid
}
} }
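With the `Userid`-typed signatures above, starting a tracked worker looks roughly like the sketch below (module paths assumed; worker type, id and closure body are purely illustrative):

```rust
use proxmox_backup::api2::types::Userid;
use proxmox_backup::server::WorkerTask;

// Spawn an async worker owned by root@pam; the returned string is its UPID.
let upid_str = WorkerTask::spawn(
    "example_job",                 // worker_type (illustrative)
    Some(String::from("store1")),  // worker_id (illustrative)
    Userid::root_userid().clone(), // now a typed Userid instead of &str
    false,                         // to_stdout
    |worker| async move {
        worker.log("doing some work");
        Ok(())
    },
)?;
println!("started task {}", upid_str);
```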

View File

@ -4,9 +4,9 @@
use std::any::Any; use std::any::Any;
use std::collections::HashMap; use std::collections::HashMap;
use std::hash::BuildHasher; use std::hash::BuildHasher;
use std::fs::{File, OpenOptions}; use std::fs::File;
use std::io::{self, BufRead, ErrorKind, Read}; use std::io::{self, BufRead, ErrorKind, Read};
use std::os::unix::io::{AsRawFd, RawFd}; use std::os::unix::io::RawFd;
use std::path::Path; use std::path::Path;
use std::time::Duration; use std::time::Duration;
use std::time::{SystemTime, SystemTimeError, UNIX_EPOCH}; use std::time::{SystemTime, SystemTimeError, UNIX_EPOCH};
@ -31,7 +31,6 @@ pub mod format;
pub mod lru_cache; pub mod lru_cache;
pub mod runtime; pub mod runtime;
pub mod ticket; pub mod ticket;
pub mod timer;
pub mod statistics; pub mod statistics;
pub mod systemd; pub mod systemd;
pub mod nom; pub mod nom;
@ -89,60 +88,6 @@ pub fn map_struct_mut<T>(buffer: &mut [u8]) -> Result<&mut T, Error> {
Ok(unsafe { &mut *(buffer.as_ptr() as *mut T) }) Ok(unsafe { &mut *(buffer.as_ptr() as *mut T) })
} }
/// Create a file lock using fntl. This function allows you to specify
/// a timeout if you want to avoid infinite blocking.
pub fn lock_file<F: AsRawFd>(
file: &mut F,
exclusive: bool,
timeout: Option<Duration>,
) -> Result<(), Error> {
let lockarg = if exclusive {
nix::fcntl::FlockArg::LockExclusive
} else {
nix::fcntl::FlockArg::LockShared
};
let timeout = match timeout {
None => {
nix::fcntl::flock(file.as_raw_fd(), lockarg)?;
return Ok(());
}
Some(t) => t,
};
// unblock the timeout signal temporarily
let _sigblock_guard = timer::unblock_timeout_signal();
// setup a timeout timer
let mut timer = timer::Timer::create(
timer::Clock::Realtime,
timer::TimerEvent::ThisThreadSignal(timer::SIGTIMEOUT),
)?;
timer.arm(
timer::TimerSpec::new()
.value(Some(timeout))
.interval(Some(Duration::from_millis(10))),
)?;
nix::fcntl::flock(file.as_raw_fd(), lockarg)?;
Ok(())
}
/// Open or create a lock file (append mode). Then try to
/// acquire a lock using `lock_file()`.
pub fn open_file_locked<P: AsRef<Path>>(path: P, timeout: Duration) -> Result<File, Error> {
let path = path.as_ref();
let mut file = match OpenOptions::new().create(true).append(true).open(path) {
Ok(file) => file,
Err(err) => bail!("Unable to open lock {:?} - {}", path, err),
};
match lock_file(&mut file, true, Some(timeout)) {
Ok(_) => Ok(file),
Err(err) => bail!("Unable to acquire lock {:?} - {}", path, err),
}
}
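These fcntl/timer based helpers are dropped in favour of the proxmox crate's implementation (see the switch to `proxmox::tools::fs::open_file_locked` in the worker_task diff above). A minimal sketch of the replacement call; the lock path is purely illustrative:

```rust
use std::time::Duration;
use proxmox::tools::fs::open_file_locked;

// Open (or create) the lock file and take an exclusive flock,
// giving up after ten seconds instead of blocking forever.
let _lock = open_file_locked("/run/proxmox-backup/example.lck", Duration::new(10, 0))?;
// ... critical section while `_lock` stays in scope ...
```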
/// Split a file into equal sized chunks. The last chunk may be /// Split a file into equal sized chunks. The last chunk may be
/// smaller. Note: We cannot implement an `Iterator`, because iterators /// smaller. Note: We cannot implement an `Iterator`, because iterators
/// cannot return a borrowed buffer ref (we want zero-copy) /// cannot return a borrowed buffer ref (we want zero-copy)

View File

@ -825,6 +825,10 @@ pub fn get_disks(
}; };
} }
if usage == DiskUsageType::Unused && disk.has_holders()? {
usage = DiskUsageType::DeviceMapper;
}
let mut status = SmartStatus::Unknown; let mut status = SmartStatus::Unknown;
let mut wearout = None; let mut wearout = None;

View File

@ -8,7 +8,7 @@ use lazy_static::lazy_static;
lazy_static!{ lazy_static!{
static ref LVM_UUIDS: HashSet<&'static str> = { static ref LVM_UUIDS: HashSet<&'static str> = {
let mut set = HashSet::new(); let mut set = HashSet::new();
set.insert("e6d6d379-f507-44c2-a23c-238f2a3df928"); set.insert("e6d6d379-f507-44c2-a23c-238f2a3df928");
set set
}; };
} }

View File

@ -155,7 +155,7 @@ pub fn get_smart_data(
if let Some(list) = output["nvme_smart_health_information_log"].as_object() { if let Some(list) = output["nvme_smart_health_information_log"].as_object() {
for (name, value) in list { for (name, value) in list {
if name == "percentage_used" { if name == "percentage_used" {
// extract wearout from nvme text, allow for decimal values // extract wearout from nvme text, allow for decimal values
if let Some(v) = value.as_f64() { if let Some(v) = value.as_f64() {
if v <= 100.0 { if v <= 100.0 {
wearout = Some(100.0 - v); wearout = Some(100.0 - v);

View File

@ -10,8 +10,8 @@ use super::*;
lazy_static!{ lazy_static!{
static ref ZFS_UUIDS: HashSet<&'static str> = { static ref ZFS_UUIDS: HashSet<&'static str> = {
let mut set = HashSet::new(); let mut set = HashSet::new();
set.insert("6a898cc3-1dd2-11b2-99a6-080020736631"); // apple set.insert("6a898cc3-1dd2-11b2-99a6-080020736631"); // apple
set.insert("516e7cba-6ecf-11d6-8ff8-00022d09712b"); // bsd set.insert("516e7cba-6ecf-11d6-8ff8-00022d09712b"); // bsd
set set
}; };
} }

View File

@ -7,10 +7,18 @@ use std::os::unix::io::{AsRawFd, RawFd};
use anyhow::{format_err, Error}; use anyhow::{format_err, Error};
use nix::dir; use nix::dir;
use nix::dir::Dir; use nix::dir::Dir;
use nix::fcntl::OFlag;
use nix::sys::stat::Mode;
use regex::Regex; use regex::Regex;
use proxmox::sys::error::SysError;
use crate::tools::borrow::Tied; use crate::tools::borrow::Tied;
pub type DirLockGuard = Dir;
/// This wraps nix::dir::Entry with the parent directory's file descriptor. /// This wraps nix::dir::Entry with the parent directory's file descriptor.
pub struct ReadDirEntry { pub struct ReadDirEntry {
entry: dir::Entry, entry: dir::Entry,
@ -94,9 +102,6 @@ impl Iterator for ReadDir {
/// Create an iterator over sub directory entries. /// Create an iterator over sub directory entries.
/// This uses `openat` on `dirfd`, so `path` can be relative to that or an absolute path. /// This uses `openat` on `dirfd`, so `path` can be relative to that or an absolute path.
pub fn read_subdir<P: ?Sized + nix::NixPath>(dirfd: RawFd, path: &P) -> nix::Result<ReadDir> { pub fn read_subdir<P: ?Sized + nix::NixPath>(dirfd: RawFd, path: &P) -> nix::Result<ReadDir> {
use nix::fcntl::OFlag;
use nix::sys::stat::Mode;
let dir = Dir::openat(dirfd, path, OFlag::O_RDONLY, Mode::empty())?; let dir = Dir::openat(dirfd, path, OFlag::O_RDONLY, Mode::empty())?;
let fd = dir.as_raw_fd(); let fd = dir.as_raw_fd();
let iter = Tied::new(dir, |dir| { let iter = Tied::new(dir, |dir| {
@ -259,3 +264,31 @@ impl Default for FSXAttr {
} }
} }
} }
pub fn lock_dir_noblock(
path: &std::path::Path,
what: &str,
would_block_msg: &str,
) -> Result<DirLockGuard, Error> {
let mut handle = Dir::open(path, OFlag::O_RDONLY, Mode::empty())
.map_err(|err| {
format_err!("unable to open {} directory {:?} for locking - {}", what, path, err)
})?;
// acquire in non-blocking mode, no point in waiting here since other
// backups could still take a very long time
proxmox::tools::fs::lock_file(&mut handle, true, Some(std::time::Duration::from_nanos(0)))
.map_err(|err| {
format_err!(
"unable to acquire lock on {} directory {:?} - {}", what, path,
if err.would_block() {
String::from(would_block_msg)
} else {
err.to_string()
}
)
})?;
Ok(handle)
}
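A sketch of how such a guard would typically be held over a snapshot operation (the caller, path and messages are illustrative, not taken from this changeset):

```rust
use std::path::Path;

// Hold the directory lock while deleting a snapshot; the guard is just the
// open Dir handle, so the flock is released when it goes out of scope.
let snapshot_path = Path::new("/path/to/datastore/vm/100/snapshot-dir"); // illustrative
let _guard = lock_dir_noblock(snapshot_path, "snapshot", "possibly running backup")?;
std::fs::remove_dir_all(snapshot_path)?;
drop(_guard);
```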

View File

@ -301,6 +301,9 @@ mod test {
const THURSDAY_00_00: i64 = make_test_time(0, 0, 0); const THURSDAY_00_00: i64 = make_test_time(0, 0, 0);
const THURSDAY_15_00: i64 = make_test_time(0, 15, 0); const THURSDAY_15_00: i64 = make_test_time(0, 15, 0);
const JUL_31_2020: i64 = 1596153600; // Friday, 2020-07-31 00:00:00
const DEC_31_2020: i64 = 1609372800; // Thursday, 2020-12-31 00:00:00
test_value("*:0", THURSDAY_00_00, THURSDAY_00_00 + HOUR)?; test_value("*:0", THURSDAY_00_00, THURSDAY_00_00 + HOUR)?;
test_value("*:*", THURSDAY_00_00, THURSDAY_00_00 + MIN)?; test_value("*:*", THURSDAY_00_00, THURSDAY_00_00 + MIN)?;
test_value("*:*:*", THURSDAY_00_00, THURSDAY_00_00 + 1)?; test_value("*:*:*", THURSDAY_00_00, THURSDAY_00_00 + 1)?;
@ -317,6 +320,24 @@ mod test {
test_value("sat", THURSDAY_00_00, THURSDAY_00_00 + 2*DAY)?; test_value("sat", THURSDAY_00_00, THURSDAY_00_00 + 2*DAY)?;
test_value("sun", THURSDAY_00_00, THURSDAY_00_00 + 3*DAY)?; test_value("sun", THURSDAY_00_00, THURSDAY_00_00 + 3*DAY)?;
// test month wrapping
test_value("sat", JUL_31_2020, JUL_31_2020 + 1*DAY)?;
test_value("sun", JUL_31_2020, JUL_31_2020 + 2*DAY)?;
test_value("mon", JUL_31_2020, JUL_31_2020 + 3*DAY)?;
test_value("tue", JUL_31_2020, JUL_31_2020 + 4*DAY)?;
test_value("wed", JUL_31_2020, JUL_31_2020 + 5*DAY)?;
test_value("thu", JUL_31_2020, JUL_31_2020 + 6*DAY)?;
test_value("fri", JUL_31_2020, JUL_31_2020 + 7*DAY)?;
// test year wrapping
test_value("fri", DEC_31_2020, DEC_31_2020 + 1*DAY)?;
test_value("sat", DEC_31_2020, DEC_31_2020 + 2*DAY)?;
test_value("sun", DEC_31_2020, DEC_31_2020 + 3*DAY)?;
test_value("mon", DEC_31_2020, DEC_31_2020 + 4*DAY)?;
test_value("tue", DEC_31_2020, DEC_31_2020 + 5*DAY)?;
test_value("wed", DEC_31_2020, DEC_31_2020 + 6*DAY)?;
test_value("thu", DEC_31_2020, DEC_31_2020 + 7*DAY)?;
test_value("daily", THURSDAY_00_00, THURSDAY_00_00 + DAY)?; test_value("daily", THURSDAY_00_00, THURSDAY_00_00 + DAY)?;
test_value("daily", THURSDAY_00_00+1, THURSDAY_00_00 + DAY)?; test_value("daily", THURSDAY_00_00+1, THURSDAY_00_00 + DAY)?;

View File

@ -123,7 +123,6 @@ impl TmEditor {
if self.t.tm_mday < days_in_mon { break; } if self.t.tm_mday < days_in_mon { break; }
// Wrap one month // Wrap one month
self.t.tm_mday -= days_in_mon; self.t.tm_mday -= days_in_mon;
self.t.tm_wday += 7 - (days_in_mon % 7);
self.t.tm_mon += 1; self.t.tm_mon += 1;
self.changes.insert(TMChanges::MDAY|TMChanges::WDAY|TMChanges::MON); self.changes.insert(TMChanges::MDAY|TMChanges::WDAY|TMChanges::MON);
} }
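The removed line tried to keep `tm_wday` consistent by hand whenever the day-of-month wrapped into the next month; the new month/year wrapping tests above pin the corrected behaviour instead. A sketch of what they assert, assuming the module's `parse_calendar_event`/`compute_next_event` helpers (signatures assumed):

```rust
// Friday 2020-07-31 00:00:00 UTC: the next "sat" must be Aug 1, exactly one day later.
let event = parse_calendar_event("sat")?;
let next = compute_next_event(&event, JUL_31_2020, true)?; // utc = true
assert_eq!(next, JUL_31_2020 + 24 * 3600);
```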

View File

@ -7,6 +7,7 @@ use openssl::pkey::{PKey, Public, Private};
use openssl::sign::{Signer, Verifier}; use openssl::sign::{Signer, Verifier};
use openssl::hash::MessageDigest; use openssl::hash::MessageDigest;
use crate::api2::types::Userid;
use crate::tools::epoch_now_u64; use crate::tools::epoch_now_u64;
pub const TICKET_LIFETIME: i64 = 3600*2; // 2 hours pub const TICKET_LIFETIME: i64 = 3600*2; // 2 hours
@ -15,7 +16,7 @@ const TERM_PREFIX: &str = "PBSTERM";
pub fn assemble_term_ticket( pub fn assemble_term_ticket(
keypair: &PKey<Private>, keypair: &PKey<Private>,
username: &str, userid: &Userid,
path: &str, path: &str,
port: u16, port: u16,
) -> Result<String, Error> { ) -> Result<String, Error> {
@ -23,22 +24,22 @@ pub fn assemble_term_ticket(
keypair, keypair,
TERM_PREFIX, TERM_PREFIX,
None, None,
Some(&format!("{}{}{}", username, path, port)), Some(&format!("{}{}{}", userid, path, port)),
) )
} }
pub fn verify_term_ticket( pub fn verify_term_ticket(
keypair: &PKey<Public>, keypair: &PKey<Public>,
username: &str, userid: &Userid,
path: &str, path: &str,
port: u16, port: u16,
ticket: &str, ticket: &str,
) -> Result<(i64, Option<String>), Error> { ) -> Result<(i64, Option<Userid>), Error> {
verify_rsa_ticket( verify_rsa_ticket(
keypair, keypair,
TERM_PREFIX, TERM_PREFIX,
ticket, ticket,
Some(&format!("{}{}{}", username, path, port)), Some(&format!("{}{}{}", userid, path, port)),
-300, -300,
TICKET_LIFETIME, TICKET_LIFETIME,
) )
@ -47,7 +48,7 @@ pub fn verify_term_ticket(
pub fn assemble_rsa_ticket( pub fn assemble_rsa_ticket(
keypair: &PKey<Private>, keypair: &PKey<Private>,
prefix: &str, prefix: &str,
data: Option<&str>, data: Option<&Userid>,
secret_data: Option<&str>, secret_data: Option<&str>,
) -> Result<String, Error> { ) -> Result<String, Error> {
@ -59,7 +60,8 @@ pub fn assemble_rsa_ticket(
plain.push(':'); plain.push(':');
if let Some(data) = data { if let Some(data) = data {
plain.push_str(data); use std::fmt::Write;
write!(plain, "{}", data)?;
plain.push(':'); plain.push(':');
} }
@ -87,7 +89,7 @@ pub fn verify_rsa_ticket(
secret_data: Option<&str>, secret_data: Option<&str>,
min_age: i64, min_age: i64,
max_age: i64, max_age: i64,
) -> Result<(i64, Option<String>), Error> { ) -> Result<(i64, Option<Userid>), Error> {
use std::collections::VecDeque; use std::collections::VecDeque;
@ -145,5 +147,5 @@ pub fn verify_rsa_ticket(
bail!("invalid ticket - timestamp too old."); bail!("invalid ticket - timestamp too old.");
} }
Ok((age, data)) Ok((age, data.map(|s| s.parse()).transpose()?))
} }
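End to end, a ticket round-trip now carries a typed `Userid`. A rough sketch, assuming the auth_helpers keypair accessors (`public_auth_key` appears in the rest-server diff above, `private_auth_key` is assumed here):

```rust
use proxmox_backup::api2::types::Userid;

let userid: Userid = "root@pam".parse()?;

// Sign a "PBS" ticket containing the user id ...
let ticket = assemble_rsa_ticket(private_auth_key(), "PBS", Some(&userid), None)?;

// ... and verify it later; the embedded data parses back into a Userid.
let (_age, data) =
    verify_rsa_ticket(public_auth_key(), "PBS", &ticket, None, -300, TICKET_LIFETIME)?;
assert_eq!(data.as_ref().map(|u| u.as_str()), Some("root@pam"));
```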

View File

@ -1,370 +0,0 @@
//! POSIX per-process timer interface.
//!
//! This module provides a wrapper around POSIX timers (see `timer_create(2)`) and utilities to
//! setup thread-targeted signaling and signal masks.
use std::mem::MaybeUninit;
use std::time::Duration;
use std::{io, mem};
use libc::{c_int, clockid_t, pid_t};
/// Timers can use various clocks. See `timer_create(2)`.
pub enum Clock {
/// Use `CLOCK_REALTIME` for the timer.
Realtime,
/// Use `CLOCK_MONOTONIC` for the timer.
Monotonic,
}
/// Strong thread-id type to prevent accidental conversion of pid_t.
pub struct Tid(pid_t);
/// Convenience helper to get the current thread ID suitable to pass to a
/// `TimerEvent::ThreadSignal` entry.
pub fn gettid() -> Tid {
Tid(unsafe { libc::syscall(libc::SYS_gettid) } as pid_t)
}
/// Strong signal type which is more advanced than nix::sys::signal::Signal as
/// it doesn't prevent you from using signals that the nix crate is unaware
/// of...!
pub struct Signal(c_int);
impl Into<c_int> for Signal {
fn into(self) -> c_int {
self.0
}
}
impl From<c_int> for Signal {
fn from(v: c_int) -> Signal {
Signal(v)
}
}
/// When instantiating a Timer, it needs to have an event type associated with
/// it to be fired whenever the timer expires. Most of the time this will be a
/// `Signal`. Sometimes we need to be able to send signals to specific threads.
pub enum TimerEvent {
/// This will act like passing `NULL` to `timer_create()`, which maps to
/// using the same as `Signal(SIGALRM)`.
None,
/// When the timer expires, send a specific signal to the current process.
Signal(Signal),
/// When the timer expires, send a specific signal to a specific thread.
ThreadSignal(Tid, Signal),
/// Convenience value to send a signal to the current thread. This is
/// equivalent to using `ThreadSignal(gettid(), signal)`.
ThisThreadSignal(Signal),
}
// timer_t is a pointer type, so we create a strongly typed internal handle
// type for it
#[repr(C)]
struct InternalTimerT(u32);
type TimerT = *mut InternalTimerT;
// These wrappers are defined in -lrt.
#[link(name = "rt")]
extern "C" {
fn timer_create(clockid: clockid_t, evp: *mut libc::sigevent, timer: *mut TimerT) -> c_int;
fn timer_delete(timer: TimerT) -> c_int;
fn timer_settime(
timerid: TimerT,
flags: c_int,
new_value: *const libc::itimerspec,
old_value: *mut libc::itimerspec,
) -> c_int;
}
/// Represents a POSIX per-process timer as created via `timer_create(2)`.
pub struct Timer {
timer: TimerT,
}
/// Timer specification used to arm a `Timer`.
#[derive(Default)]
pub struct TimerSpec {
/// The timeout to the next timer event.
pub value: Option<Duration>,
/// When a timer expires, it may be automatically rearmed with another
/// timeout. This will keep happening until this is explicitly disabled
/// or the timer deleted.
pub interval: Option<Duration>,
}
// Helpers to convert between libc::timespec and Option<Duration>
fn opt_duration_to_timespec(v: Option<Duration>) -> libc::timespec {
match v {
None => libc::timespec {
tv_sec: 0,
tv_nsec: 0,
},
Some(value) => libc::timespec {
tv_sec: value.as_secs() as i64,
tv_nsec: value.subsec_nanos() as i64,
},
}
}
fn timespec_to_opt_duration(v: libc::timespec) -> Option<Duration> {
if v.tv_sec == 0 && v.tv_nsec == 0 {
None
} else {
Some(Duration::new(v.tv_sec as u64, v.tv_nsec as u32))
}
}
impl TimerSpec {
// Helpers to convert between TimerSpec and libc::itimerspec
fn to_itimerspec(&self) -> libc::itimerspec {
libc::itimerspec {
it_value: opt_duration_to_timespec(self.value),
it_interval: opt_duration_to_timespec(self.interval),
}
}
fn from_itimerspec(ts: libc::itimerspec) -> Self {
TimerSpec {
value: timespec_to_opt_duration(ts.it_value),
interval: timespec_to_opt_duration(ts.it_interval),
}
}
/// Create an empty timer specification representing a disabled timer.
pub fn new() -> Self {
TimerSpec {
value: None,
interval: None,
}
}
/// Change the specification to have a specific value.
pub fn value(self, value: Option<Duration>) -> Self {
TimerSpec {
value,
interval: self.interval,
}
}
/// Change the specification to have a specific interval.
pub fn interval(self, interval: Option<Duration>) -> Self {
TimerSpec {
value: self.value,
interval,
}
}
}
impl Timer {
/// Create a Timer object governing a POSIX timer.
pub fn create(clock: Clock, event: TimerEvent) -> io::Result<Timer> {
// Map from our clock type to the libc id
let clkid = match clock {
Clock::Realtime => libc::CLOCK_REALTIME,
Clock::Monotonic => libc::CLOCK_MONOTONIC,
} as clockid_t;
// Map the TimerEvent to libc::sigevent
let mut ev: libc::sigevent = unsafe { mem::zeroed() };
match event {
TimerEvent::None => ev.sigev_notify = libc::SIGEV_NONE,
TimerEvent::Signal(signo) => {
ev.sigev_signo = signo.0;
ev.sigev_notify = libc::SIGEV_SIGNAL;
}
TimerEvent::ThreadSignal(tid, signo) => {
ev.sigev_signo = signo.0;
ev.sigev_notify = libc::SIGEV_THREAD_ID;
ev.sigev_notify_thread_id = tid.0;
}
TimerEvent::ThisThreadSignal(signo) => {
ev.sigev_signo = signo.0;
ev.sigev_notify = libc::SIGEV_THREAD_ID;
ev.sigev_notify_thread_id = gettid().0;
}
}
// Create the timer
let mut timer: TimerT = unsafe { mem::zeroed() };
let rc = unsafe { timer_create(clkid, &mut ev, &mut timer) };
if rc != 0 {
Err(io::Error::last_os_error())
} else {
Ok(Timer { timer })
}
}
/// Arm a timer. This returns the previous timer specification.
pub fn arm(&mut self, spec: TimerSpec) -> io::Result<TimerSpec> {
let newspec = spec.to_itimerspec();
let mut oldspec = MaybeUninit::<libc::itimerspec>::uninit();
let rc = unsafe { timer_settime(self.timer, 0, &newspec, &mut *oldspec.as_mut_ptr()) };
if rc != 0 {
return Err(io::Error::last_os_error());
}
Ok(TimerSpec::from_itimerspec(unsafe { oldspec.assume_init() }))
}
}
impl Drop for Timer {
fn drop(&mut self) {
unsafe {
timer_delete(self.timer);
}
}
}
/// This is the signal number we use in our timeout implementations. We expect
/// the signal handler for this signal to never be replaced by some other
/// library. If this does happen, we need to find another signal. There should
/// be plenty.
/// Currently this is SIGRTMIN+4, the 5th real-time signal. glibc reserves the
/// first two for pthread internals.
pub const SIGTIMEOUT: Signal = Signal(32 + 4);
// Our timeout handler does exactly nothing. We only need it to interrupt
// system calls.
extern "C" fn sig_timeout_handler(_: c_int) {}
// See setup_timeout_handler().
fn do_setup_timeout_handler() -> io::Result<()> {
// Unfortunately nix::sys::signal::Signal cannot represent real time
// signals, so we need to use libc instead...
//
// This WOULD be a nicer impl though:
//nix::sys::signal::sigaction(
// SIGTIMEOUT,
// nix::sys::signal::SigAction::new(
// nix::sys::signal::SigHandler::Handler(sig_timeout_handler),
// nix::sys::signal::SaFlags::empty(),
// nix::sys::signal::SigSet::all()))
// .map(|_|())
unsafe {
let mut sa_mask = MaybeUninit::<libc::sigset_t>::uninit();
if libc::sigemptyset(&mut *sa_mask.as_mut_ptr()) != 0
|| libc::sigaddset(&mut *sa_mask.as_mut_ptr(), SIGTIMEOUT.0) != 0
{
return Err(io::Error::last_os_error());
}
let sa = libc::sigaction {
sa_sigaction:
// libc::sigaction uses `usize` for the function pointer...
sig_timeout_handler as *const extern "C" fn(i32) as usize,
sa_mask: sa_mask.assume_init(),
sa_flags: 0,
sa_restorer: None,
};
if libc::sigaction(SIGTIMEOUT.0, &sa, std::ptr::null_mut()) != 0 {
return Err(io::Error::last_os_error());
}
}
Ok(())
}
// The first time we unblock SIGTIMEOUT should cause approprate initialization:
static SETUP_TIMEOUT_HANDLER: std::sync::Once = std::sync::Once::new();
/// Setup our timeout-signal workflow. This establishes the signal handler for
/// our `SIGTIMEOUT` and should be called once during initialization.
#[inline]
pub fn setup_timeout_handler() {
SETUP_TIMEOUT_HANDLER.call_once(|| {
// We unwrap here.
// If setting up this handler fails you have other problems already,
// plus, if setting up fails you can't *use* it either, so everything
// goes to die.
do_setup_timeout_handler().unwrap();
});
}
/// This guards the state of the timeout signal: We want it blocked usually.
pub struct TimeoutBlockGuard(bool);
impl Drop for TimeoutBlockGuard {
fn drop(&mut self) {
if self.0 {
block_timeout_signal();
} else {
unblock_timeout_signal().forget();
}
}
}
impl TimeoutBlockGuard {
/// Convenience helper to "forget" to restore the signal block mask.
#[inline(always)]
pub fn forget(self) {
std::mem::forget(self);
}
/// Convenience helper to trigger the guard behavior immediately.
#[inline(always)]
pub fn trigger(self) {
std::mem::drop(self); // be explicit here...
}
}
/// Unblock the timeout signal for the current thread. By default we block the
/// signal this behavior should be restored when done using timeouts, therefor this
/// returns a guard:
#[inline(always)]
pub fn unblock_timeout_signal() -> TimeoutBlockGuard {
// This calls std::sync::Once:
setup_timeout_handler();
//let mut set = nix::sys::signal::SigSet::empty();
//set.add(SIGTIMEOUT.0);
//set.thread_unblock()?;
//Ok(TimeoutBlockGuard{})
// Again, nix crate and its signal limitations...
// NOTE:
// sigsetops(3) and pthread_sigmask(3) can only fail if invalid memory is
// passed to the kernel, or signal numbers are "invalid", since we know
// neither is the case we will panic on error...
let was_blocked = unsafe {
let mut mask = MaybeUninit::<libc::sigset_t>::uninit();
let mut oldset = MaybeUninit::<libc::sigset_t>::uninit();
if libc::sigemptyset(&mut *mask.as_mut_ptr()) != 0
|| libc::sigaddset(&mut *mask.as_mut_ptr(), SIGTIMEOUT.0) != 0
|| libc::pthread_sigmask(
libc::SIG_UNBLOCK,
&mask.assume_init(),
&mut *oldset.as_mut_ptr(),
) != 0
{
panic!("Impossibly failed to unblock SIGTIMEOUT");
//return Err(io::Error::last_os_error());
}
libc::sigismember(&oldset.assume_init(), SIGTIMEOUT.0) == 1
};
TimeoutBlockGuard(was_blocked)
}
/// Block the timeout signal for the current thread. This is the default.
#[inline(always)]
pub fn block_timeout_signal() {
//let mut set = nix::sys::signal::SigSet::empty();
//set.add(SIGTIMEOUT);
//set.thread_block()
unsafe {
let mut mask = MaybeUninit::<libc::sigset_t>::uninit();
if libc::sigemptyset(&mut *mask.as_mut_ptr()) != 0
|| libc::sigaddset(&mut *mask.as_mut_ptr(), SIGTIMEOUT.0) != 0
|| libc::pthread_sigmask(libc::SIG_BLOCK, &mask.assume_init(), std::ptr::null_mut())
!= 0
{
panic!("Impossibly failed to block SIGTIMEOUT");
//return Err(io::Error::last_os_error());
}
}
}
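For reference, this is roughly how the deleted module was driven by the old `lock_file()` further up: unblock the signal, arm a per-thread timer, and let the pending signal interrupt the blocking syscall.

```rust
use std::time::Duration;

// Keep SIGTIMEOUT deliverable while waiting for the lock ...
let _sigblock_guard = unblock_timeout_signal();

// ... and arm a per-thread timer so a blocked flock() is interrupted.
let mut timer = Timer::create(
    Clock::Realtime,
    TimerEvent::ThisThreadSignal(SIGTIMEOUT),
)?;
timer.arm(
    TimerSpec::new()
        .value(Some(Duration::from_secs(10)))
        .interval(Some(Duration::from_millis(10))),
)?;

// Any long-blocking syscall on this thread now fails with EINTR after ~10s.
```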

View File

@ -21,9 +21,13 @@ lazy_static! {
let key = [1u8; 32]; let key = [1u8; 32];
Arc::new(CryptConfig::new(key).unwrap()) Arc::new(CryptConfig::new(key).unwrap())
}; };
static ref TEST_DIGEST_PLAIN: [u8; 32] = [83, 154, 96, 195, 167, 204, 38, 142, 204, 224, 130, 201, 24, 71, 2, 188, 130, 155, 177, 6, 162, 100, 61, 238, 38, 219, 63, 240, 191, 132, 87, 238];
static ref TEST_DIGEST_ENC: [u8; 32] = [50, 162, 191, 93, 255, 132, 9, 14, 127, 23, 92, 39, 246, 102, 245, 204, 130, 104, 4, 106, 182, 239, 218, 14, 80, 17, 150, 188, 239, 253, 198, 117];
} }
fn verify_test_blob(mut cursor: Cursor<Vec<u8>>) -> Result<(), Error> { fn verify_test_blob(mut cursor: Cursor<Vec<u8>>, digest: &[u8; 32]) -> Result<(), Error> {
// run read tests with different buffer sizes // run read tests with different buffer sizes
for size in [1, 3, 64*1024].iter() { for size in [1, 3, 64*1024].iter() {
@ -50,10 +54,9 @@ fn verify_test_blob(mut cursor: Cursor<Vec<u8>>) -> Result<(), Error> {
let raw_data = cursor.into_inner(); let raw_data = cursor.into_inner();
let blob = DataBlob::from_raw(raw_data)?; let blob = DataBlob::load_from_reader(&mut &raw_data[..])?;
blob.verify_crc()?;
let data = blob.decode(Some(&CRYPT_CONFIG))?; let data = blob.decode(Some(&CRYPT_CONFIG), Some(digest))?;
if data != *TEST_DATA { if data != *TEST_DATA {
bail!("blob data is wrong (decode)"); bail!("blob data is wrong (decode)");
} }
@ -66,7 +69,7 @@ fn test_uncompressed_blob_writer() -> Result<(), Error> {
let mut blob_writer = DataBlobWriter::new_uncompressed(tmp)?; let mut blob_writer = DataBlobWriter::new_uncompressed(tmp)?;
blob_writer.write_all(&TEST_DATA)?; blob_writer.write_all(&TEST_DATA)?;
verify_test_blob(blob_writer.finish()?) verify_test_blob(blob_writer.finish()?, &*TEST_DIGEST_PLAIN)
} }
#[test] #[test]
@ -75,7 +78,7 @@ fn test_compressed_blob_writer() -> Result<(), Error> {
let mut blob_writer = DataBlobWriter::new_compressed(tmp)?; let mut blob_writer = DataBlobWriter::new_compressed(tmp)?;
blob_writer.write_all(&TEST_DATA)?; blob_writer.write_all(&TEST_DATA)?;
verify_test_blob(blob_writer.finish()?) verify_test_blob(blob_writer.finish()?, &*TEST_DIGEST_PLAIN)
} }
#[test] #[test]
@ -84,7 +87,7 @@ fn test_encrypted_blob_writer() -> Result<(), Error> {
let mut blob_writer = DataBlobWriter::new_encrypted(tmp, CRYPT_CONFIG.clone())?; let mut blob_writer = DataBlobWriter::new_encrypted(tmp, CRYPT_CONFIG.clone())?;
blob_writer.write_all(&TEST_DATA)?; blob_writer.write_all(&TEST_DATA)?;
verify_test_blob(blob_writer.finish()?) verify_test_blob(blob_writer.finish()?, &*TEST_DIGEST_ENC)
} }
#[test] #[test]
@ -93,5 +96,5 @@ fn test_encrypted_compressed_blob_writer() -> Result<(), Error> {
let mut blob_writer = DataBlobWriter::new_encrypted_compressed(tmp, CRYPT_CONFIG.clone())?; let mut blob_writer = DataBlobWriter::new_encrypted_compressed(tmp, CRYPT_CONFIG.clone())?;
blob_writer.write_all(&TEST_DATA)?; blob_writer.write_all(&TEST_DATA)?;
verify_test_blob(blob_writer.finish()?) verify_test_blob(blob_writer.finish()?, &*TEST_DIGEST_ENC)
} }
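In the updated read path of these tests, the explicit `verify_crc()` call goes away with `load_from_reader`, and `decode()` now takes the expected digest. A minimal sketch of just that part, with the types used above:

```rust
// raw_data: Vec<u8> produced by one of the DataBlobWriter variants above.
let blob = DataBlob::load_from_reader(&mut &raw_data[..])?;   // replaces from_raw() + verify_crc()
let data = blob.decode(Some(&CRYPT_CONFIG), Some(&*TEST_DIGEST_PLAIN))?; // digest is checked here
assert_eq!(&data[..], &TEST_DATA[..]);
```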

View File

@ -54,21 +54,27 @@ fn worker_task_abort() -> Result<(), Error> {
} }
let errmsg = errmsg1.clone(); let errmsg = errmsg1.clone();
let res = server::WorkerTask::new_thread("garbage_collection", None, "root@pam", true, move |worker| { let res = server::WorkerTask::new_thread(
println!("WORKER {}", worker); "garbage_collection",
None,
proxmox_backup::api2::types::Userid::root_userid().clone(),
true,
move |worker| {
println!("WORKER {}", worker);
let result = garbage_collection(&worker); let result = garbage_collection(&worker);
tools::request_shutdown(); tools::request_shutdown();
if let Err(err) = result { if let Err(err) = result {
println!("got expected error: {}", err); println!("got expected error: {}", err);
} else { } else {
let mut data = errmsg.lock().unwrap(); let mut data = errmsg.lock().unwrap();
*data = Some(String::from("thread finished - seems abort did not work as expected")); *data = Some(String::from("thread finished - seems abort did not work as expected"));
} }
Ok(()) Ok(())
}); },
);
match res { match res {
Err(err) => { Err(err) => {

View File

@ -35,7 +35,12 @@ Ext.define('pbs-data-store-snapshots', {
return PBS.Utils.calculateCryptMode(crypt); return PBS.Utils.calculateCryptMode(crypt);
} }
} },
{
name: 'matchesFilter',
type: 'boolean',
defaultValue: true,
},
] ]
}); });
@ -126,6 +131,7 @@ Ext.define('PBS.DataStoreContent', {
}, },
onLoad: function(store, records, success, operation) { onLoad: function(store, records, success, operation) {
let me = this;
let view = this.getView(); let view = this.getView();
if (!success) { if (!success) {
@ -135,6 +141,30 @@ Ext.define('PBS.DataStoreContent', {
let groups = this.getRecordGroups(records); let groups = this.getRecordGroups(records);
let selected;
let expanded = {};
view.getSelection().some(function(item) {
let id = item.data.text;
if (item.data.leaf) {
id = item.parentNode.data.text + id;
}
selected = id;
return true;
});
view.getRootNode().cascadeBy({
before: item => {
if (item.isExpanded() && !item.data.leaf) {
let id = item.data.text;
expanded[id] = true;
return true;
}
return false;
},
after: () => {},
});
for (const item of records) { for (const item of records) {
let group = item.data["backup-type"] + "/" + item.data["backup-id"]; let group = item.data["backup-type"] + "/" + item.data["backup-id"];
let children = groups[group].children; let children = groups[group].children;
@ -142,14 +172,27 @@ Ext.define('PBS.DataStoreContent', {
let data = item.data; let data = item.data;
data.text = group + '/' + PBS.Utils.render_datetime_utc(data["backup-time"]); data.text = group + '/' + PBS.Utils.render_datetime_utc(data["backup-time"]);
data.leaf = true; data.leaf = false;
data.cls = 'no-leaf-icons'; data.cls = 'no-leaf-icons';
data.matchesFilter = true;
data.expanded = !!expanded[data.text];
data.children = [];
for (const file of data.files) {
file.text = file.filename,
file['crypt-mode'] = PBS.Utils.cryptmap.indexOf(file['crypt-mode']);
file.leaf = true;
file.matchesFilter = true;
data.children.push(file);
}
children.push(data); children.push(data);
} }
let children = []; let children = [];
for (const [_key, group] of Object.entries(groups)) { for (const [name, group] of Object.entries(groups)) {
let last_backup = 0; let last_backup = 0;
let crypt = { let crypt = {
none: 0, none: 0,
@ -169,8 +212,10 @@ Ext.define('PBS.DataStoreContent', {
} }
group.count = group.children.length; group.count = group.children.length;
group.matchesFilter = true;
crypt.count = group.count; crypt.count = group.count;
group['crypt-mode'] = PBS.Utils.calculateCryptMode(crypt); group['crypt-mode'] = PBS.Utils.calculateCryptMode(crypt);
group.expanded = !!expanded[name];
children.push(group); children.push(group);
} }
@ -178,16 +223,35 @@ Ext.define('PBS.DataStoreContent', {
expanded: true, expanded: true,
children: children children: children
}); });
if (selected !== undefined) {
let selection = view.getRootNode().findChildBy(function(item) {
let id = item.data.text;
if (item.data.leaf) {
id = item.parentNode.data.text + id;
}
return selected === id;
}, undefined, true);
if (selection) {
view.setSelection(selection);
view.getView().focusRow(selection);
}
}
Proxmox.Utils.setErrorMask(view, false); Proxmox.Utils.setErrorMask(view, false);
if (view.getStore().getFilters().length > 0) {
let searchBox = me.lookup("searchbox");
let searchvalue = searchBox.getValue();;
me.search(searchBox, searchvalue);
}
}, },
onPrune: function() { onPrune: function(view, rI, cI, item, e, rec) {
var view = this.getView(); var view = this.getView();
let rec = view.selModel.getSelection()[0];
if (!(rec && rec.data)) return; if (!(rec && rec.data)) return;
let data = rec.data; let data = rec.data;
if (data.leaf) return; if (rec.parentNode.id !== 'root') return;
if (!view.datastore) return; if (!view.datastore) return;
@ -200,18 +264,17 @@ Ext.define('PBS.DataStoreContent', {
win.show(); win.show();
}, },
onVerify: function() { onVerify: function(view, rI, cI, item, e, rec) {
var view = this.getView(); var view = this.getView();
if (!view.datastore) return; if (!view.datastore) return;
let rec = view.selModel.getSelection()[0];
if (!(rec && rec.data)) return; if (!(rec && rec.data)) return;
let data = rec.data; let data = rec.data;
let params; let params;
if (data.leaf) { if (rec.parentNode.id !== 'root') {
params = { params = {
"backup-type": data["backup-type"], "backup-type": data["backup-type"],
"backup-id": data["backup-id"], "backup-id": data["backup-id"],
@ -239,75 +302,77 @@ Ext.define('PBS.DataStoreContent', {
}); });
}, },
onForget: function() { onForget: function(view, rI, cI, item, e, rec) {
let me = this;
var view = this.getView(); var view = this.getView();
let rec = view.selModel.getSelection()[0];
if (!(rec && rec.data)) return; if (!(rec && rec.data)) return;
let data = rec.data; let data = rec.data;
if (!data.leaf) return;
if (!view.datastore) return; if (!view.datastore) return;
console.log(data); Ext.Msg.show({
title: gettext('Confirm'),
icon: Ext.Msg.WARNING,
message: Ext.String.format(gettext('Are you sure you want to remove snapshot {0}'), `'${data.text}'`),
buttons: Ext.Msg.YESNO,
defaultFocus: 'no',
callback: function(btn) {
if (btn !== 'yes') {
return;
}
Proxmox.Utils.API2Request({ Proxmox.Utils.API2Request({
params: { params: {
"backup-type": data["backup-type"], "backup-type": data["backup-type"],
"backup-id": data["backup-id"], "backup-id": data["backup-id"],
"backup-time": (data['backup-time'].getTime()/1000).toFixed(0), "backup-time": (data['backup-time'].getTime()/1000).toFixed(0),
},
url: `/admin/datastore/${view.datastore}/snapshots`,
method: 'DELETE',
waitMsgTarget: view,
failure: function(response, opts) {
Ext.Msg.alert(gettext('Error'), response.htmlStatus);
},
callback: me.reload.bind(me),
});
}, },
url: `/admin/datastore/${view.datastore}/snapshots`,
method: 'DELETE',
waitMsgTarget: view,
failure: function(response, opts) {
Ext.Msg.alert(gettext('Error'), response.htmlStatus);
},
callback: this.reload.bind(this),
}); });
}, },
openBackupFileDownloader: function() { downloadFile: function(tV, rI, cI, item, e, rec) {
let me = this; let me = this;
let view = me.getView(); let view = me.getView();
let rec = view.selModel.getSelection()[0];
if (!(rec && rec.data)) return; if (!(rec && rec.data)) return;
let data = rec.data; let data = rec.parentNode.data;
Ext.create('PBS.window.BackupFileDownloader', { let file = rec.data.filename;
baseurl: `/api2/json/admin/datastore/${view.datastore}`, let params = {
params: { 'backup-id': data['backup-id'],
'backup-id': data['backup-id'], 'backup-type': data['backup-type'],
'backup-type': data['backup-type'], 'backup-time': (data['backup-time'].getTime()/1000).toFixed(0),
'backup-time': (data['backup-time'].getTime()/1000).toFixed(0), 'file-name': file,
}, };
files: data.files,
}).show();
},
openPxarBrowser: function() { let idx = file.lastIndexOf('.');
let me = this; let filename = file.slice(0, idx);
let view = me.getView(); let atag = document.createElement('a');
params['file-name'] = file;
let rec = view.selModel.getSelection()[0]; atag.download = filename;
if (!(rec && rec.data)) return; let url = new URL(`/api2/json/admin/datastore/${view.datastore}/download-decoded`, window.location.origin);
let data = rec.data; for (const [key, value] of Object.entries(params)) {
url.searchParams.append(key, value);
let encrypted = false;
data.files.forEach(file => {
if (file.filename === 'catalog.pcat1.didx' && file['crypt-mode'] === 'encrypt') {
encrypted = true;
}
});
if (encrypted) {
Ext.Msg.alert(
gettext('Cannot open Catalog'),
gettext('Only unencrypted Backups can be opened on the server. Please use the client with the decryption key instead.'),
);
return;
} }
atag.href = url.href;
atag.click();
},
openPxarBrowser: function(tv, rI, Ci, item, e, rec) {
let me = this;
let view = me.getView();
if (!(rec && rec.data)) return;
let data = rec.parentNode.data;
let id = data['backup-id']; let id = data['backup-id'];
let time = data['backup-time']; let time = data['backup-time'];
@ -320,8 +385,73 @@ Ext.define('PBS.DataStoreContent', {
'backup-id': id, 'backup-id': id,
'backup-time': (time.getTime()/1000).toFixed(0), 'backup-time': (time.getTime()/1000).toFixed(0),
'backup-type': type, 'backup-type': type,
archive: rec.data.filename,
}).show(); }).show();
} },
filter: function(item, value) {
if (item.data.text.indexOf(value) !== -1) {
return true;
}
if (item.data.owner && item.data.owner.indexOf(value) !== -1) {
return true;
}
return false;
},
search: function(tf, value) {
let me = this;
let view = me.getView();
let store = view.getStore();
if (!value && value !== 0) {
store.clearFilter();
store.getRoot().collapseChildren(true);
tf.triggers.clear.setVisible(false);
return;
}
tf.triggers.clear.setVisible(true);
if (value.length < 2) return;
Proxmox.Utils.setErrorMask(view, true);
// we do it a little bit later for the error mask to work
setTimeout(function() {
store.clearFilter();
store.getRoot().collapseChildren(true);
store.beginUpdate();
store.getRoot().cascadeBy({
before: function(item) {
if(me.filter(item, value)) {
item.set('matchesFilter', true);
if (item.parentNode && item.parentNode.id !== 'root') {
item.parentNode.childmatches = true;
}
return false;
}
return true;
},
after: function(item) {
if (me.filter(item, value) || item.id === 'root' || item.childmatches) {
item.set('matchesFilter', true);
if (item.parentNode && item.parentNode.id !== 'root') {
item.parentNode.childmatches = true;
}
if (item.childmatches) {
item.expand();
}
} else {
item.set('matchesFilter', false);
}
delete item.childmatches;
},
});
store.endUpdate();
store.filter((item) => !!item.get('matchesFilter'));
Proxmox.Utils.setErrorMask(view, false);
}, 10);
},
}, },
columns: [ columns: [
@ -331,6 +461,55 @@ Ext.define('PBS.DataStoreContent', {
dataIndex: 'text', dataIndex: 'text',
flex: 1 flex: 1
}, },
{
header: gettext('Actions'),
xtype: 'actioncolumn',
dataIndex: 'text',
items: [
{
handler: 'onVerify',
tooltip: gettext('Verify'),
getClass: (v, m, rec) => rec.data.leaf ? 'pmx-hidden' : 'fa fa-search',
isDisabled: (v, r, c, i, rec) => !!rec.data.leaf,
},
{
handler: 'onPrune',
tooltip: gettext('Prune'),
getClass: (v, m, rec) => rec.parentNode.id ==='root' ? 'fa fa-scissors' : 'pmx-hidden',
isDisabled: (v, r, c, i, rec) => rec.parentNode.id !=='root',
},
{
handler: 'onForget',
tooltip: gettext('Forget Snapshot'),
getClass: (v, m, rec) => !rec.data.leaf && rec.parentNode.id !== 'root' ? 'fa critical fa-trash-o' : 'pmx-hidden',
isDisabled: (v, r, c, i, rec) => rec.data.leaf || rec.parentNode.id === 'root',
},
{
handler: 'downloadFile',
tooltip: gettext('Download'),
getClass: (v, m, rec) => rec.data.leaf && rec.data.filename ? 'fa fa-download' : 'pmx-hidden',
isDisabled: (v, r, c, i, rec) => !rec.data.leaf || !rec.data.filename || rec.data['crypt-mode'] > 2,
},
{
handler: 'openPxarBrowser',
tooltip: gettext('Browse'),
getClass: (v, m, rec) => {
let data = rec.data;
if (data.leaf && data.filename && data.filename.endsWith('pxar.didx')) {
return 'fa fa-folder-open-o';
}
return 'pmx-hidden';
},
isDisabled: (v, r, c, i, rec) => {
let data = rec.data;
return !(data.leaf &&
data.filename &&
data.filename.endsWith('pxar.didx') &&
data['crypt-mode'] < 3);
}
},
]
},
{ {
xtype: 'datecolumn', xtype: 'datecolumn',
header: gettext('Backup Time'), header: gettext('Backup Time'),
@ -344,6 +523,9 @@ Ext.define('PBS.DataStoreContent', {
sortable: true, sortable: true,
dataIndex: 'size', dataIndex: 'size',
renderer: (v, meta, record) => { renderer: (v, meta, record) => {
if (record.data.text === 'client.log.blob' && v === undefined) {
return '';
}
if (v === undefined || v === null) {
meta.tdCls = "x-grid-row-loading";
return '';
@@ -366,28 +548,20 @@ Ext.define('PBS.DataStoreContent', {
{
header: gettext('Encrypted'),
dataIndex: 'crypt-mode',
-renderer: value => PBS.Utils.cryptText[value] || Proxmox.Utils.unknownText,
-},
-{
-header: gettext("Files"),
-sortable: false,
-dataIndex: 'files',
-renderer: function(files) {
-return files.map((file) => {
-let icon = '';
-let size = '';
-let mode = PBS.Utils.cryptmap.indexOf(file['crypt-mode']);
-let iconCls = PBS.Utils.cryptIconCls[mode] || '';
-if (iconCls !== '') {
-icon = `<i class="fa fa-${iconCls}"></i> `;
-}
-if (file.size) {
-size = ` (${Proxmox.Utils.format_size(file.size)})`;
-}
-return `${icon}${file.filename}${size}`;
-}).join(', ');
-},
-flex: 2
+renderer: (v, meta, record) => {
+if (record.data.size === undefined || record.data.size === null) {
+return '';
+}
+if (v === -1) {
+return '';
+}
+let iconCls = PBS.Utils.cryptIconCls[v] || '';
+let iconTxt = "";
+if (iconCls) {
+iconTxt = `<i class="fa fa-fw fa-${iconCls}"></i> `;
+}
+return (iconTxt + PBS.Utils.cryptText[v]) || Proxmox.Utils.unknownText
+}
},
],
@@ -397,54 +571,30 @@ Ext.define('PBS.DataStoreContent', {
iconCls: 'fa fa-refresh',
handler: 'reload',
},
-'-',
-{
-xtype: 'proxmoxButton',
-text: gettext('Verify'),
-disabled: true,
-parentXType: 'pbsDataStoreContent',
-enableFn: (rec) => !!rec.data && rec.data.size !== null,
-handler: 'onVerify',
-},
-{
-xtype: 'proxmoxButton',
-text: gettext('Prune'),
-disabled: true,
-parentXType: 'pbsDataStoreContent',
-enableFn: (rec) => !rec.data.leaf,
-handler: 'onPrune',
-},
-{
-xtype: 'proxmoxButton',
-text: gettext('Forget'),
-disabled: true,
-parentXType: 'pbsDataStoreContent',
-handler: 'onForget',
-dangerous: true,
-confirmMsg: function(record) {
-//console.log(record);
-let name = record.data.text;
-return Ext.String.format(gettext('Are you sure you want to remove snapshot {0}'), `'${name}'`);
-},
-enableFn: (rec) => !!rec.data.leaf && rec.data.size !== null,
-},
-'-',
-{
-xtype: 'proxmoxButton',
-text: gettext('Download Files'),
-disabled: true,
-parentXType: 'pbsDataStoreContent',
-handler: 'openBackupFileDownloader',
-enableFn: (rec) => !!rec.data.leaf && rec.data.size !== null,
-},
-{
-xtype: "proxmoxButton",
-text: gettext('PXAR File Browser'),
-disabled: true,
-handler: 'openPxarBrowser',
-parentXType: 'pbsDataStoreContent',
-enableFn: function(record) {
-return !!record.data.leaf && record.size !== null && record.data.files.some(el => el.filename.endsWith('pxar.didx'));
-},
-}
+'->',
+{
+xtype: 'tbtext',
+html: gettext('Search'),
+},
+{
+xtype: 'textfield',
+reference: 'searchbox',
+triggers: {
+clear: {
+cls: 'pmx-clear-trigger',
+weight: -1,
+hidden: true,
+handler: function() {
+this.triggers.clear.setVisible(false);
+this.setValue('');
+},
+}
+},
+listeners: {
+change: {
+fn: 'search',
+buffer: 500,
+},
+},
+}
],
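Note on the search handler added near the top of this file: it calls a filter(item, value) method on the same view controller, which is not part of the visible hunks. A minimal sketch of what such a matcher could look like, assuming it only compares against the tree node's visible text (the name `filter` is taken from the call site, the body is assumed):

// Hypothetical controller method; the real implementation is not shown in this diff.
filter: function(item, value) {
    if (item.data.text === undefined) {
        return false;
    }
    // case-insensitive substring match on the tree node's text
    return item.data.text.toLowerCase().indexOf(value.toLowerCase()) !== -1;
},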


@@ -9,6 +9,7 @@ JSSRC= \
form/RemoteSelector.js \
form/DataStoreSelector.js \
form/CalendarEvent.js \
form/PermissionPathSelector.js \
data/RunningTasksStore.js \
button/TaskButton.js \
config/UserView.js \
@@ -17,6 +18,7 @@ JSSRC= \
config/SyncView.js \
config/DataStoreConfig.js \
window/UserEdit.js \
window/UserPassword.js \
window/RemoteEdit.js \
window/SyncJobEdit.js \
window/ACLEdit.js \


@@ -102,6 +102,7 @@ Ext.define('PBS.view.main.NavigationTree', {
view.rstore = Ext.create('Proxmox.data.UpdateStore', {
autoStart: true,
interval: 15 * 1000,
storeId: 'pbs-datastore-list',
storeid: 'pbs-datastore-list',
model: 'pbs-datastore-list'
});


@@ -26,6 +26,7 @@ Ext.define('PBS.ServerAdministration', {
xtype: 'proxmoxNodeServiceView',
title: gettext('Services'),
itemId: 'services',
restartCommand: 'reload', // avoid disruptions
startOnlyServices: {
syslog: true,
'proxmox-backup': true,


@@ -30,8 +30,8 @@ Ext.define('PBS.Utils', {
cryptIconCls: [
'',
'',
-'certificate',
-'lock',
+'lock faded',
+'lock good',
],
calculateCryptMode: function(data) {
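The two new entries replace the 'certificate'/'lock' icons for the sign-only and encrypt modes; together with the renderer change in DataStoreContent.js above, a crypt-mode value is turned into icon markup roughly as follows. The index-to-mode mapping shown in the comment is assumed for illustration, not spelled out in this changeset:

// Sketch: cryptIconCls and cryptText are assumed to be parallel arrays
// indexed by the numeric crypt-mode (0 = none, 1 = mixed, 2 = sign-only, 3 = encrypt).
let mode = 3; // e.g. a fully encrypted index
let iconCls = PBS.Utils.cryptIconCls[mode] || '';   // 'lock good'
let text = PBS.Utils.cryptText[mode] || Proxmox.Utils.unknownText;
let html = iconCls ? `<i class="fa fa-fw fa-${iconCls}"></i> ${text}` : text;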


@@ -51,6 +51,18 @@ Ext.define('PBS.config.UserView', {
}).show();
},
setPassword: function() {
let me = this;
let view = me.getView();
let selection = view.getSelection();
if (selection.length < 1) return;
Ext.create('PBS.window.UserPassword', {
url: '/api2/extjs/access/users/' + selection[0].data.userid,
}).show();
},
renderUsername: function(userid) {
return Ext.String.htmlEncode(userid.match(/^(.+)@([^@]+)$/)[1]);
},
@@ -98,6 +110,12 @@ Ext.define('PBS.config.UserView', {
handler: 'editUser',
disabled: true,
},
{
xtype: 'proxmoxButton',
text: gettext('Password'),
handler: 'setPassword',
disabled: true,
},
{
xtype: 'proxmoxStdRemoveButton',
baseurl: '/access/users/',
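The new Password button simply opens PBS.window.UserPassword (defined at the end of this changeset) for the selected user. Since that window is a Proxmox.window.Edit with method PUT, submitting it boils down to a password update against the user API; a rough equivalent as a direct call (sketch, placeholder values and callbacks assumed):

// Hypothetical direct request with the same effect as submitting the window.
let userid = 'john@pbs';             // placeholder user
let newPassword = 'example-secret';  // placeholder value
Proxmox.Utils.API2Request({
    url: '/api2/extjs/access/users/' + userid,
    method: 'PUT',
    params: { password: newPassword },
    success: function() {
        Ext.Msg.alert(gettext('Success'), gettext('Password updated'));
    },
    failure: function(response) {
        Ext.Msg.alert(gettext('Error'), response.htmlStatus);
    },
});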


@@ -208,3 +208,19 @@ p.logs {
.pmx-button-badge.active {
background-color: #464d4d;
}
.pmx-hidden {
cursor: default;
}
.x-action-col-icon.good:before {
color: #21BF4B;
}
.x-action-col-icon.warning:before {
color: #fc0;
}
.x-action-col-icon.critical:before {
color: #FF6C59;
}


@@ -0,0 +1,73 @@
Ext.define('PBS.data.PermissionPathsStore', {
extend: 'Ext.data.Store',
alias: 'store.pbsPermissionPaths',
fields: ['value'],
autoLoad: false,
data: [
{ 'value': '/' },
{ 'value': '/access' },
{ 'value': '/access/acl' },
{ 'value': '/access/users' },
{ 'value': '/datastore' },
{ 'value': '/remote' },
{ 'value': '/system' },
{ 'value': '/system/disks' },
{ 'value': '/system/log' },
{ 'value': '/system/network' },
{ 'value': '/system/network/dns' },
{ 'value': '/system/network/interfaces' },
{ 'value': '/system/services' },
{ 'value': '/system/status' },
{ 'value': '/system/tasks' },
{ 'value': '/system/time' },
],
constructor: function(config) {
let me = this;
config = config || {};
me.callParent([config]);
// TODO: this is but a HACK until we have some sort of resource
// storage like PVE
let datastores = Ext.data.StoreManager.lookup('pbs-datastore-list');
if (datastores) {
let donePaths = {};
me.suspendEvents();
datastores.each(function(record) {
let path = `/datastore/${record.data.store}`;
if (path !== undefined && !donePaths[path]) {
me.add({ value: path });
donePaths[path] = 1;
}
});
me.resumeEvents();
me.fireEvent('refresh', me);
me.fireEvent('datachanged', me);
}
me.sort({
property: 'value',
direction: 'ASC',
});
},
});
Ext.define('PBS.form.PermissionPathSelector', {
extend: 'Ext.form.field.ComboBox',
xtype: 'pbsPermissionPathSelector',
valueField: 'value',
displayField: 'value',
typeAhead: true,
anyMatch: true,
queryMode: 'local',
store: {
type: 'pbsPermissionPaths',
},
regexText: gettext('Invalid permission path.'),
regex: /\/((access|datastore|remote|system)\/.*)?/,
});
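The selector feeds off the 'pbs-datastore-list' store that NavigationTree.js now registers via storeId, so datastore paths only appear once that store exists. Embedding the combo box in a form is straightforward; a minimal usage sketch along the lines of the ACLEdit change below (field name assumed):

// Sketch: using the new combo box in an arbitrary edit form.
{
    xtype: 'pbsPermissionPathSelector',
    fieldLabel: gettext('Path'),
    name: 'path',
    allowBlank: false,
},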


@@ -7,6 +7,7 @@ Ext.define('PBS.window.ACLEdit', {
method: 'PUT',
isAdd: true,
isCreate: true,
width: 450,
// caller can give a static path
path: undefined,
@@ -25,7 +26,7 @@ Ext.define('PBS.window.ACLEdit', {
items: [
{
-xtype: 'pmxDisplayEditField',
+xtype: 'pbsPermissionPathSelector',
fieldLabel: gettext('Path'),
cbind: {
editable: '{!path}',


@@ -145,7 +145,16 @@ Ext.define("PBS.window.FileBrowser", {
store.load(() => {
let root = store.getRoot();
root.expand(); // always expand invisible root node
-if (root.childNodes.length === 1) {
+if (view.archive) {
let child = root.findChild('text', view.archive);
if (child) {
child.expand();
setTimeout(function() {
tree.setSelection(child);
tree.getView().focusRow(child);
}, 10);
}
} else if (root.childNodes.length === 1) {
root.firstChild.expand();
}
});
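The new branch assumes the caller passes an archive config when creating the file browser. The openPxarBrowser handler referenced in the datastore content's action column is not part of these hunks; it would presumably create the window along these lines (sketch, config names and handler signature assumed):

// Hypothetical caller; the real openPxarBrowser handler is not shown in this diff.
openPxarBrowser: function(tableView, rowIndex, colIndex, item, e, rec) {
    let view = this.getView();
    let data = rec.data;
    Ext.create('PBS.window.FileBrowser', {
        datastore: view.datastore,
        'backup-type': data['backup-type'],
        'backup-id': data['backup-id'],
        'backup-time': data['backup-time'],
        archive: data.filename, // e.g. 'root.pxar.didx'
    }).show();
},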


@@ -0,0 +1,41 @@
Ext.define('PBS.window.UserPassword', {
extend: 'Proxmox.window.Edit',
alias: 'widget.pbsUserPassword',
userid: undefined,
method: 'PUT',
subject: gettext('User Password'),
fieldDefaults: { labelWidth: 120 },
items: [
{
xtype: 'textfield',
inputType: 'password',
fieldLabel: gettext('Password'),
minLength: 5,
allowBlank: false,
name: 'password',
listeners: {
change: function(field) {
field.next().validate();
},
blur: function(field) {
field.next().validate();
},
},
},
{
xtype: 'textfield',
inputType: 'password',
fieldLabel: gettext('Confirm password'),
name: 'verifypassword',
vtype: 'password',
initialPassField: 'password',
allowBlank: false,
submitValue: false,
},
],
});
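The confirm field relies on the widget toolkit's 'password' vtype together with initialPassField to check that both inputs match; conceptually, that validator works roughly like this (sketch from memory, not the verbatim toolkit code):

// Conceptual equivalent of the vtype used by the 'Confirm password' field above.
Ext.apply(Ext.form.field.VTypes, {
    password: function(value, field) {
        if (field.initialPassField) {
            let pwd = field.up('form').down(`[name=${field.initialPassField}]`);
            return value === pwd.getValue();
        }
        return true;
    },
    passwordText: gettext('Passwords do not match!'),
});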