Compare commits

c5ac2b9ddd bump version to 0.8.10-1
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-11 15:47:30 +02:00
81f293513e backup: lock base snapshot and ensure existence on finish
To prevent the base snapshot of a running backup from being forgotten,
and to catch the case where that still happens (e.g. via a manual rm),
at least error out instead of storing a potentially invalid backup.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-08-11 11:04:47 +02:00
8b5f72b176 Revert "backup: ensure base snapshots are still available after backup"
This reverts commit d53fbe2474.

The HashSet and "register" function are unnecessary, as we already know
which backup is the one we need to check: the last one, stored as
'last_backup'.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-08-11 11:03:53 +02:00
f23f75433f backup: flock snapshot on backup start
An flock on the snapshot dir itself is used in addition to the group dir
lock. The lock is used to avoid races with forget and prune, while
having more granularity than the group lock (i.e. the group lock is
necessary to prevent more than one backup per group, but the snapshot
lock still allows backups unrelated to the currently running one to be
forgotten/pruned).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-08-11 11:02:21 +02:00
6d6b4e72d3 datastore: prevent in-use deletion with locks instead of heuristic
Attempt to lock the backup directory to be deleted, if it works keep the
lock until the deletion is complete. This way we ensure that no other
locking operation (e.g. using a snapshot as base for another backup) can
happen concurrently.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-08-11 11:00:29 +02:00
e434258592 src/backup/backup_info.rs: remove BackupGroup lock()
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-08-11 10:58:35 +02:00
3dc1a2d5b6 src/tools/fs.rs: new helper lock_dir_noblock
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-08-11 10:57:48 +02:00
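
A minimal sketch of what such a non-blocking directory lock helper can look like (assuming the `nix` and `anyhow` crates; the actual `lock_dir_noblock` in src/tools/fs.rs may differ in signature and details):

```rust
use anyhow::{bail, format_err, Error};
use nix::dir::Dir;
use nix::fcntl::{flock, FlockArg, OFlag};
use nix::sys::stat::Mode;
use std::os::unix::io::AsRawFd;
use std::path::Path;

/// Open a directory and take an exclusive flock on it without blocking.
/// The lock is held for as long as the returned `Dir` handle is alive.
pub fn lock_dir_noblock(path: &Path, what: &str) -> Result<Dir, Error> {
    let dir = Dir::open(path, OFlag::O_RDONLY, Mode::empty())
        .map_err(|err| format_err!("unable to open {} dir {:?} - {}", what, path, err))?;

    // non-blocking: fail immediately if another operation holds the lock
    if let Err(err) = flock(dir.as_raw_fd(), FlockArg::LockExclusiveNonblock) {
        bail!("unable to acquire lock on {} dir {:?} - {}", what, path, err);
    }

    Ok(dir)
}
```
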
5d95558bae Makefile: build target - do not fail if control file does not exist
This can happen if a previous build failed ...
2020-08-11 10:47:23 +02:00
882c082369 mark signed manifests as such
for less-confusing display in the web interface

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-11 09:56:53 +02:00
9a38fa29c2 verify: also check chunk CryptMode
and in-line verify_stored_chunk to avoid double-loading each chunk.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-11 09:56:20 +02:00
14f6c9cb8b chunk readers: ensure chunk/index CryptMode matches
an encrypted Index should never reference a plain-text chunk, and an
unencrypted Index should never reference an encrypted chunk.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-11 09:54:22 +02:00
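
A sketch of the kind of consistency check described above (the `CryptMode` variants match the enum introduced by f28d9088ed further down the log; function and variable names are illustrative):

```rust
use anyhow::{bail, Error};

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum CryptMode { None, Encrypt, SignOnly }

/// Reject chunks whose crypt mode contradicts the index referencing them.
fn check_chunk_mode(index_mode: CryptMode, chunk_mode: CryptMode) -> Result<(), Error> {
    match (index_mode, chunk_mode) {
        // an encrypted index must only reference encrypted chunks
        (CryptMode::Encrypt, CryptMode::Encrypt) => Ok(()),
        // plain or sign-only indices must only reference plain-text chunks
        (CryptMode::None, CryptMode::None)
        | (CryptMode::SignOnly, CryptMode::None) => Ok(()),
        (idx, chunk) => bail!("crypt mode mismatch: index {:?} vs chunk {:?}", idx, chunk),
    }
}
```
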
2d55beeca0 datastore api: verify blob/index csum from manifest
when downloading decoded files.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-11 09:52:45 +02:00
9238cdf50d datastore api: only decode unencrypted indices
these checks were already in place for regular downloading of backed up
files, also do them when attempting to decode a catalog, or when
downloading decoded files referenced by a pxar index.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-11 09:51:20 +02:00
5d30f03826 impl PartialEq between Realm and RealmRef
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-10 12:23:36 +02:00
14263ef989 assert that Username does not impl PartialEq
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-10 12:21:12 +02:00
e7cb4dc50d introduce Username, Realm and Userid api types
and begin splitting up types.rs as it has grown quite large
already

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-10 12:05:01 +02:00
27d864210a d/control: proxmox 0.3.3
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-10 12:05:01 +02:00
f667f49dab bump proxmox dependency to 0.3.3 for serde helpers
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-10 11:32:01 +02:00
866c556faf move types.rs to types/mod.rs
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-10 10:32:31 +02:00
90d515c97d config.rs: sort modules
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-10 08:33:38 +02:00
4dbe129284 backup: only allow finished backups as base snapshot
If the datastore holds broken backups for some reason, do not attempt to
base following snapshots on those. This would lead to an error on
/previous, leaving the client no choice but to upload all chunks, even
though there might be potential for incremental savings.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-08-07 07:32:56 +02:00
747c3bc087 administration-guide.rst: move Encryption headline up one level 2020-08-07 07:10:12 +02:00
c23e257c5a administration-guide.rst: fix headline (avoid compile error) 2020-08-07 06:56:58 +02:00
16a18dadba admin-guide: add section explaining master keys
Adds a section under encryption which goes into detail on how to
use a master key to store and recover backup encryption keys.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-07 06:54:37 +02:00
5f76ac37b5 fix: master-key: upload RSA encoded key with backup
When uploading an RSA encoded key alongside the backup,
the backup would fail with the error message: "wrong blob
file extension".
Adding the '.blob' extension to rsa-encrypted.key before the
the call to upload_blob_from_data(), rather than after, fixes
the issue.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-06 09:34:01 +02:00
d74edc3d89 finish_backup: mark backup as finished only after checks have passed
Commit 9fa55e09 "finish_backup: test/verify manifest at server side"
moved the finished-marking above some checks, which means if those fail
the backup would still be marked as successful on the server.

Revert that part and comment the line for the future.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-08-06 06:39:34 +02:00
2f57a433b1 fix #2909: handle missing chunks gracefully in garbage collection
instead of bailing and stopping the entire GC process, warn about the
missing chunks and continue.

this results in "TASK WARNINGS: X" as the status.

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-08-06 06:36:48 +02:00
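
An illustrative sketch of the changed behavior (names and the stand-in metadata call are assumptions, not the actual GC code, which touches the chunk's atime):

```rust
use std::io::ErrorKind;
use std::path::Path;

/// Phase 1 marking: a missing chunk is downgraded from a hard error to a
/// warning so the GC run can continue (ending in "TASK WARNINGS: X").
fn mark_used_chunk(chunk_path: &Path, warnings: &mut u64) -> Result<(), std::io::Error> {
    match std::fs::metadata(chunk_path) {
        Ok(_) => Ok(()),
        Err(err) if err.kind() == ErrorKind::NotFound => {
            eprintln!("WARN: chunk {:?} is missing", chunk_path);
            *warnings += 1;
            Ok(())
        }
        Err(err) => Err(err),
    }
}
```
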
df7f04364b d/control: bump proxmox to 0.3.2
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-04 11:34:58 +02:00
98c259b4c1 remove timer and lock functions, fix building with proxmox 0.3.2
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-04 11:33:02 +02:00
799b3d88bc bump proxmox dependency to 0.3.2 for timer / file locking
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-04 11:27:44 +02:00
db22e6b270 build: properly regenerate d/control
and commit the latest change

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-04 11:16:11 +02:00
16f0afbfb5 gui: user: fix #2898 add dialog to set password
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-08-04 10:21:00 +02:00
d3d566f7bd GC: use time pre phase1 to calculate min_atime in phase2
Used chunks are marked in phase1 of the garbage collection process by
using the atime property. Each used chunk gets touched so that the atime
gets updated (if older than 24h, see relatime).

Should there ever be a situation in which the phase1 in the GC run needs
a very long time to finish, it could happen that the grace period
calculated in phase2 is not long enough and thus the marking of the
chunks (atime) becomes invalid. This would result in the removal of
needed chunks.

Even though the likelihood of this happening is very low, using the
timestamp from right before phase1 is started to calculate the grace
period in phase2 avoids this situation.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-08-04 10:19:05 +02:00
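
A sketch of the idea (the concrete grace period and variable names are assumptions):

```rust
use std::time::{Duration, SystemTime};

fn garbage_collect() {
    // capture the timestamp *before* phase 1 starts ...
    let phase1_start = SystemTime::now();

    // phase 1: mark all referenced chunks by touching them (atime update)
    // ...

    // phase 2: sweep. The grace period counts from before phase 1, so a
    // long-running phase 1 can no longer invalidate the markings.
    let grace_period = Duration::from_secs(24 * 3600 + 300); // assumption: ~24h5m
    let min_atime = phase1_start - grace_period;
    // remove a chunk only if its atime is older than `min_atime` ...
    let _ = min_atime;
}
```
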
c96b0de48f datastore: allow browsing signed pxar files
just because we can't verify the signature does not mean the contents
are not accessible. it might make sense to make it obvious with a hint
or click-through warning that no signature verification can take place,
for both this and downloading.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-04 07:27:56 +02:00
2ce159343b sync: verify size and checksum of pulled archives
and not just of previously synced ones.

we can't use BackupManifest::verify_file as the archive is still stored
under the tmp path at this point.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-04 07:27:56 +02:00
9e496ff6f1 sync: verify chunk size and digest, if possible
for encrypted chunks this is currently not possible, as we need the key
to decode the chunk.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-04 07:27:56 +02:00
8819d1f2f5 blobs: attempt to verify on decode when possible
regular chunks are only decoded when their contents are accessed, in
which case we need to have the key anyway and want to verify the digest.

for blobs we need to verify beforehand, since their checksums are always
calculated based on their raw content, and stored in the manifest.

manifests are also stored as blobs, but don't have a digest in the
traditional sense (they might have a signature covering parts of their
contents, but that is verified already when loading the manifest).

this commit does not cover pull/sync code which copies blobs and chunks
as-is without decoding them.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-08-04 07:27:56 +02:00
0f9218079a pxar/extract: fixup path stack for errors
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-03 12:20:30 +02:00
1cafbdc70d more whitespace fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-03 12:02:19 +02:00
a3eb7b2cea whitespace fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-03 12:00:59 +02:00
d9b8e2c795 pxar: better error handling on extract
Errors while applying metadata will not be considered fatal
by default using `pxar extract` unless `--strict` was passed
in which case it'll bail out immediately.

It'll still return an error exit status if something had
failed along the way.

Note that most other errors will still cause it to bail out
(eg. errors creating files, or I/O errors while writing
the contents).

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-03 09:40:55 +02:00
4bd2a9e42d worker_task: add getter for upid
sometimes we need the upid inside the worker itself, so provide a
possibility to get it

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-03 08:26:17 +02:00
cef03f4149 worker_task: refactor log text generator
we will need this elsewhere, so pull it out

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-03 08:23:13 +02:00
eeb19aeb2d systemd/time: fix weekday wrapping on month
the weekday does not change depending on the month, so remove that wrapping

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-03 08:18:42 +02:00
6c96ec418d systemd/time: add tests for weekday month wrapping
this will fail for now, gets fixed in the next commit

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-03 08:15:26 +02:00
5e4b32706c depend on proxmox 0.3.1 2020-08-02 12:02:21 +02:00
30c3c5d66c pxar: create: attempt to use O_NOATIME
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-31 11:46:53 +02:00
e51be33807 pxar: create: move common O_ flags to open_file
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-31 11:42:15 +02:00
70030b43d0 list_snapshots: Returns new "comment" property (first line from notes) 2020-07-31 11:34:42 +02:00
724de093dd build: track generated d/control in git
to track changes and allow bootstrap-installation of build dependencies.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-31 11:18:33 +02:00
ff86ef00a7 cleanup: manifest is always CryptMode::None 2020-07-31 10:25:30 +02:00
912b3f5bc9 src/api2/admin/datastore.rs: add API to get/set Notes for backups 2020-07-31 10:17:35 +02:00
a4acb6ef84 lock_file: return std::io::Error 2020-07-31 08:53:00 +02:00
d7ee07d838 src/api2/backup/environment.rs: remove debug code 2020-07-31 07:48:53 +02:00
53705acece src/api2/backup/environment.rs: remove debug code 2020-07-31 07:47:08 +02:00
c8fff67d88 finish_backup: add chunk_upload_stats to manifest 2020-07-31 07:45:47 +02:00
9fa55e09a7 finish_backup: test/verify manifest at server side
We want to make sure that the client uploaded a readable manifest.
2020-07-31 07:45:47 +02:00
e443902583 src/backup/datastore.rs: add helpers to load/store manifest
We want this to modify the manifest "unprotected" data, for example
to add upload statistics, notes, ...
2020-07-31 07:45:47 +02:00
32dc4c4604 introduction: language improvement (fix typos, grammar, wording)
Fix typos and grammatical errors.
Reword some sentences for better readability.
Clean up the list found under "Software Stack", so that it maintains a consistent
style throughout.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-30 12:02:54 +02:00
f39a900722 api2/node/termproxy: fix user in worker task
'username' here is without realm, but we really want to use user@realm

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-30 11:57:43 +02:00
1fc82c41f2 src/api2/backup.rs: acquire backup lock earlier in create_locked_backup_group() 2020-07-30 11:03:05 +02:00
d2b0c78e23 api2/node/termproxy: fix zombies on worker abort
tokio's kill_on_drop sometimes leaves zombies around, especially
when no other tokio::process::Command is spawned afterwards

so instead of relying on the 'kill_on_drop' feature, we explicitly
kill the child on a worker abort. to be able to do this
we have to use 'tokio::select' instead of 'futures::select' since
the latter requires the future to be fused, which consumes the
child handle, leaving us no possibility to kill it after fusing.
(tokio::select does not need the futures to be fused, so we
can reuse the child future after the select again)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-30 10:38:14 +02:00
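
A minimal sketch of the described pattern (written against the current tokio 1.x API, which differs slightly from the crate version used at the time):

```rust
use anyhow::Error;
use std::future::Future;
use tokio::process::Command;

/// Run a child and kill it explicitly if the worker's abort future fires,
/// instead of relying on kill_on_drop. tokio::select! does not require the
/// futures to be fused, so the child handle stays usable after the select.
async fn run_child_with_abort(abort: impl Future<Output = ()>) -> Result<(), Error> {
    let mut child = Command::new("termproxy").spawn()?;

    tokio::select! {
        status = child.wait() => {
            println!("child exited: {:?}", status?);
            return Ok(());
        }
        _ = abort => {} // worker abort requested
    }

    // abort path: kill and reap the child so no zombie is left behind
    child.kill().await?;
    Ok(())
}
```
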
adfdc36936 verify: keep track and log which dirs failed the verification
so that we can print a list at the end of the worker which backups
are corrupt.

this is useful if there are many snapshots and some in between had an
error. Before this patch, the task log simply says to 'look in the logs'
but if the log is very long it makes it hard to see what exactly failed.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-30 09:39:37 +02:00
d8594d87f1 verify: keep also track of corrupt chunks
so that we do not have to verify a corrupt one multiple times

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-30 09:39:37 +02:00
f66f537da9 verify: check all chunks of an index, even if we encounter a corrupt one
this makes it easier to see which chunks are corrupt
(and enables us in the future to build a 'complete' list of
corrupt chunks)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-30 09:39:37 +02:00
d44185c4a1 fix #2873: if --pattern is used, default to not extracting
The extraction algorithm has a state (bool) indicating
whether we're currently in a positive or negative match
which has always been initialized to true at the beginning,
but when the user provides a `--pattern` argument we need to
start out with a negative match.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-30 09:33:30 +02:00
d53fbe2474 backup: ensure base snapshots are still available after backup
This should never trigger if everything else works correctly, but it is
still a very cheap check to avoid wrongly marking a backup as "OK" when
in fact some chunks might be missing.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-30 08:28:54 +02:00
95bda2f25d backup: use flock on backup group to forbid multiple backups at once
Multiple backups within one backup group don't really make sense, but
break all sorts of guarantees (e.g. a second backup started after a
first would use a "known-chunks" list from the previous unfinished one,
which would be empty - but using the list from the last finished one is
not a fix either, as that one could be deleted or pruned once the first
simultaneous backup is finished).

Fix it by only allowing one backup per backup group at one time. This is
done via a flock on the backup group directory, thus remaining intact
even after a reload.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-30 08:26:26 +02:00
c9756b40d1 datastore: prevent deletion of snaps in use as "previous backup"
To prevent a race with a background GC operation, do not allow deletion
of backups whose index might currently be referenced as the "known chunk
list" for successive backups. Otherwise the GC could delete chunks it
thinks are no longer referenced, while at the same time telling the
client that it doesn't need to upload said chunks because they already
exist.

Additionally, prevent deletion of whole backup groups, if there are
snapshots contained that appear to be currently in-progress. This is
currently unlikely to trigger, as that function is only used for sync
jobs, but it's a useful safeguard either way.

Deleting a single snapshot has a 'force' parameter, which is necessary
to allow deleting incomplete snapshots on an aborted backup. Pruning
also sets force=true to avoid the check, since it calculates which
snapshots to keep on its own.

To avoid code duplication, the is_finished method is factored out.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-30 08:26:01 +02:00
8cd29fb24a tools: add nonblocking mode to lock_file
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-30 08:18:10 +02:00
505c5f0f76 fix typo: avgerage to average
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-30 07:08:08 +02:00
2aaae9705e src/backup/verify.rs: try to verify chunks only once
We use a HashSet (per BackupGroup) to track already verified chunks.
2020-07-29 13:29:13 +02:00
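
Together with the corrupt-chunk tracking from d8594d87f1 above, the bookkeeping can be sketched like this (illustrative names, not the actual verify code):

```rust
use std::collections::HashSet;

/// Per backup group bookkeeping so each chunk is loaded and checked at most
/// once; corrupt chunks are skipped on re-encounter and collected for a
/// summary at the end of the task.
struct VerifyState {
    verified_chunks: HashSet<[u8; 32]>,
    corrupt_chunks: HashSet<[u8; 32]>,
}

impl VerifyState {
    fn needs_check(&self, digest: &[u8; 32]) -> bool {
        !self.verified_chunks.contains(digest) && !self.corrupt_chunks.contains(digest)
    }

    fn record(&mut self, digest: [u8; 32], ok: bool) {
        if ok {
            self.verified_chunks.insert(digest);
        } else {
            self.corrupt_chunks.insert(digest);
        }
    }
}
```
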
8aa67ee758 bump proxmox to 0.3, cleanup http_err macro usage
Also swap the order of a couple of `.map_err().await` to
`.await.map_err()` since that's generally more efficient.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-29 09:38:36 +02:00
3865e27e96 src/api2/node.rs: 'mod' statement cleanup
split them into groups: `pub`, `pub(crate)` and non-pub

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-29 09:19:57 +02:00
f6c6e09a8a update to pxar 0.3 to support negative timestamps
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-29 08:31:37 +02:00
71282dd988 ui: fix in-progress snapshots always showing as "Encrypted"
We can't know if they are encrypted or not when they're not even
finished yet.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-29 07:13:25 +02:00
80db161e05 ui: fix error when reloading DataStoreContent
...when an entry is selected that doesn't exist after the reload.

E.g. when one selects a file within a snapshot and then clicks
the delete icon for said snapshot, focusRow would fail and the
loading mask stay on until a reload.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-29 07:13:12 +02:00
be10cdb122 fix #2856: also check whole device for device mapper
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-28 11:03:45 +02:00
7fde1a71ca upload_chunk: allow upload of empty blobs
a blob can be empty (e.g. an empty pct fw conf), so we
have to set the minimum size to the header size

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-28 11:03:36 +02:00
a83674ad48 administration-guide: fix typo that breaks example command
The ' ' (space) between 'etc/ **/*.txt' resulted in the example command's output
not matching the given example output. Removing this space fixes the command.
2020-07-28 10:59:53 +02:00
02f82148cf docs: pxar create: update docs to match current behavior
This removes parts of the previous explanation of the tool that are no longer
correct, and adds an explanation of '--exclude' parameter, instead.

Adds more clarity to the command, by use of '/path/to/source' to signify
source directory.

Specify that the pattern matching style of the exclude parameter is that of
gitignore's syntax.
2020-07-28 10:59:42 +02:00
39f18b30b6 src/backup/data_blob.rs: new load_from_reader(), which verifies the CRC
And make verify_crc private for now. We always call load_from_reader() to
verify the CRC.

Also add load_chunk() to datastore.rs (from chunk_store::read_chunk())
2020-07-28 10:23:16 +02:00
69d970a658 ui: DataStoreContent: keep selection and expansion on reload
when clicking reload, we keep the existing selection
(if it still exists), and the previous expanded elements expanded

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-27 12:51:34 +02:00
6d55603dcc ui: add search box to DataStore content
which searches the whole tree (name & owner)

we do this by traversing the tree and marking elements as matches,
then afterwards make a simple filter that matches on a boolean

worst case cost of this is O(2n) since we have to traverse the
tree (in the worst case) one time, and the filter function does it again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-27 12:51:11 +02:00
3e395378bc ui: rework DataStore content Panel
instead of having the files as a column, put the files into the tree
as a third level

with this, we can move the actions into an action column and remove
the top buttons (except reload)

clicking the download action now downloads directly, so we would
not need the download window anymore

clicking the browse action, opens the pxar browser like before,
but expands and selects (&focus) the selected pxar file

also changes the icon of 'signed' to the locked one,
but color codes them (signed => greyed out, encrypted => green),
similar to what browsers do/did for certificates

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-27 12:47:51 +02:00
bccdc5fa04 src/backup/manifest.rs: cleanup - again, avoid recursive call to write_canonical_json
And use re-borrow instead of dyn trait casting.
2020-07-27 10:31:34 +02:00
0bf7ba6c92 src/backup/manifest.rs: cleanup - avoid recursive call to write_canonical_json 2020-07-27 08:48:11 +02:00
e6b599aa6c services: make reload safer and default to it in gui
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-25 20:23:12 +02:00
d757021f4c ui: acl: add improved permission selector
taken mostly from PVE, with adaption to how PBS does things.
Main difference is that we do not have a resource store singleton
here which we can use, but for datastores we can already use the
always present datastore-list store. Register it to the store manager
with a "storeId" property (vs. our internal storeid one).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-25 20:10:11 +02:00
ee15af6bb8 api: service command: fix test for essential service
makes no sense to disallow reload or start (even if start cannot
really happen)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-24 19:35:19 +02:00
3da9b7e0dd followup: server/state: rename task_count to internal_task_count
so that the relation with spawn_internal_task is made more clear

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-24 12:11:39 +02:00
beaa683a52 bump version to 0.8.9-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-24 11:24:56 +02:00
33a88dafb9 server/state: add spawn_internal_task and use it for websockets
is a helper to spawn an internal tokio task without it showing up
in the task list

it is still tracked for reload and notifies the last_worker_listeners

this enables the console to survive a reload of proxmox-backup-proxy

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-24 11:17:33 +02:00
224c65f8de termproxy: let users stop the termproxy task
for that we have to do a select on the workers abort_future

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-24 11:17:33 +02:00
f2b4b4b9fe fix 2885: bail on duplicate backup target
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-07-24 11:08:56 +02:00
ea9e559fc4 client: log archive upload duration more accurate, fix grammar
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-24 10:15:28 +02:00
0cf14984cc client: avoid division by zero in avg speed calculation, be more accurate
using micros vs. as_secs_f64 allows to have it calculated as usize
bytes, easier to handle - this was also used when it still lived in
upload_chunk_info_stream

Co-authored-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-24 10:14:40 +02:00
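
A sketch of the micros-based integer calculation (illustrative, with an explicit clamp against division by zero):

```rust
use std::time::Duration;

/// Average upload speed in bytes/s from elapsed microseconds, using integer
/// math; clamping to 1µs avoids a division by zero for very short uploads.
fn avg_speed_bytes_per_sec(bytes: u64, elapsed: Duration) -> u64 {
    let micros = elapsed.as_micros().max(1) as u64;
    bytes.saturating_mul(1_000_000) / micros
}
```
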
7d07b73def bump version to 0.8.8-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 13:12:18 +02:00
3d3670d786 termproxy: cmd: support upgrade
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 13:12:18 +02:00
14291179ce d/control: add dependency for pve-xtermjs
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 12:57:11 +02:00
e744de0eb0 api: termproxy: fix ACL as /nodes is /system
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 12:57:11 +02:00
98b1733760 api: apt: use schema default const for quiet param
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 12:25:28 +02:00
fdac28fcec update proxmox crate to get latest websocket implementation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 12:15:49 +02:00
653e2031d2 ui: add Console Button
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-23 12:06:38 +02:00
01ca99da2d server/rest: add console to index
register the console template and render it when the 'console' parameter
is given

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-23 12:06:38 +02:00
1c2f842a98 api2/nodes: add termproxy and vncwebsocket api calls
Even though it has nothing to do with vnc, we keep the name of the api
call for compatibility with our xtermjs client.

termproxy:
verifies that the user is allowed to open a console and starts
termproxy with the correct parameters

starts a TcpListener on "localhost:0" so that the kernel decides the
port (instead of trying to reserve one like in pve). Then it
leaves the fd open for termproxy, passes the fd number as the port,
and tells it via '--port-as-fd' to interpret it as an open fd

the vncwebsocket api call checks the 'vncticket' (name for compatibility)
and connects the remote side (after an Upgrade) with a local TcpStream
connecting to the port given via WebSocket from the proxmox crate

to make sure that only the client that called termproxy can connect,
and that no one can connect to an arbitrary port on the host, we have
to include the port in the ticket data

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-23 12:06:38 +02:00
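
The listener/fd handoff described above can be sketched as follows (illustrative; the real code additionally has to keep the fd inheritable for the child):

```rust
use std::net::TcpListener;
use std::os::unix::io::AsRawFd;
use std::process::Command;

fn spawn_termproxy() -> std::io::Result<()> {
    // port 0: let the kernel pick a free port
    let listener = TcpListener::bind("localhost:0")?;
    let port = listener.local_addr()?.port(); // goes into the ticket data

    let fd = listener.as_raw_fd();
    // NOTE: the fd must stay open and must not be close-on-exec, so the
    // child can actually use it via --port-as-fd
    let _child = Command::new("termproxy")
        .arg("--port-as-fd")
        .arg(fd.to_string())
        .spawn()?;

    println!("termproxy bound to port {}", port);
    Ok(())
}
```
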
a4d1675513 api2/access: implement term ticket
modeled after pve's/pmg's vncticket (with vnc substituted by term)
by putting the path and username as secret data in the ticket

when sending the ticket to /access/ticket it only verifies it,
checks the privs on the path and does not generate a new ticket

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-23 11:55:00 +02:00
2ab5acac5a server/config: add mechanism to update template
instead of exposing handlebars itself, offer a register_template and
a render_template ourselves.

render_template checks if the template file was modified since
the last render and reloads it when necessary

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-23 11:55:00 +02:00
27fde64794 api: apt update must run protected
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 11:45:52 +02:00
fa3f0584bb api: apt: support refreshing package index
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 11:21:54 +02:00
d12720c796 docs: epilog: point "Proxmox Backup" hyperlink to pbs wiki
This changes the "Proxmox Backup" hyperlink, which is referred to throughout the
Proxmox Backup Server documentation. Following this patch, it now points to the
pbs wiki page, rather than the unpublished product page.

*Note: This change is only a temporary measure, while the product page
(https://www.proxmox.com/proxmox-backup) is in development.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-07-23 10:43:17 +02:00
a4e86972a4 add .../apt/update API call
Depends on patched apt-pkg-native-rs. Changelog-URL detection is
inspired by PVE perl code for now, though marked with fixme to use 'apt
changelog' later on, if/when our repos have APT-compatible changelogs
set up.

list_installed_apt_packages iterates all packages and creates an
APTUpdateInfo with detailed information for every package matched by the
given filter Fn.

Sadly, libapt-pkg has some questionable design choices regarding their
use of 'iterators', which means quite a bit of nesting...

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-23 10:41:14 +02:00
3a3af6e2b6 backup manifest: make lookup_file_info public
useful to get info like whether the previous snapshot was encrypted,
e.g. in libproxmox-backup-qemu

Requested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 10:39:21 +02:00
482409641f docs: remove duplicate feature
Signed-off-by: Moayad Almalat <m.almalat@proxmox.com>
2020-07-23 10:29:08 +02:00
9688f6de0f client: log index.json upload only when verbose
The user expects that we know which archives, fidx or didx, are
in a backup, so this is internal info and should not be logged by
default

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 10:28:38 +02:00
5b32820e93 client: don't use debug format for printing BackupRepository
It implements the fmt::Display trait after all

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 10:28:38 +02:00
f40b4fb05a client writer: do not output chunklist for now on verbose true
Verbosity needs to be a non-binary level, as this is now just
debug/development info, normally too much for end users.

We want to have it available, but with a much higher verbosity level.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 10:28:38 +02:00
6e1deb158a client: rework logging upload size, bandwidth, ... info
Track reused size and chunk counts.
Log reused size and use pretty print for all sizes and bandwidth
metrics.
Calculate speed over the actually uploaded size, as otherwise it can be
skewed really badly (showing something like terabytes per second)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 10:28:38 +02:00
50ec1a8712 tools/format: add struct to pretty print bytes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 09:36:02 +02:00
a74b026baa systemd/time: document CalendarEvent struct and add TODOs
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 07:55:42 +02:00
7e42ccdaf2 fixed index: chunk_from_offset: avoid slow modulo operation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-22 17:46:07 +02:00
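
One plausible shape of such an optimization (illustrative; the actual implementation may differ):

```rust
/// For a fixed-size chunk index: map a byte offset to (chunk number, offset
/// within that chunk) using one division plus a multiply-subtract instead
/// of an additional modulo operation.
fn chunk_from_offset(chunk_size: u64, offset: u64) -> (u64, u64) {
    let chunk_idx = offset / chunk_size;
    let offset_in_chunk = offset - chunk_idx * chunk_size; // == offset % chunk_size
    (chunk_idx, offset_in_chunk)
}
```
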
e713ee5c56 remove BufferedFixedReader interface
replaced by AsyncIndexReader

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-22 17:28:49 +02:00
ec5f9d3525 implement AsyncSeek for AsyncIndexReader
Requires updating the AsyncRead implementation to cope with byte-wise
seeks to intra-chunk positions.

Uses chunk_from_offset to get locations within chunks, but tries to
avoid it for sequential read to not reduce performance from before.

AsyncSeek needs to use the temporary seek_to_pos to avoid changing the
position in case an invalid seek is given and it needs to error in
poll_complete.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-22 17:28:49 +02:00
d0463b67ca add and implement chunk_from_offset for IndexFile
Necessary for byte-wise seeking through chunks in an index.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-22 17:28:49 +02:00
2ff4c2cd5f datastore/chunker: fix comment typos
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-22 16:12:49 +02:00
c3b090ac8a backup: list images: handle walkdir error, catch "lost+found"
We support using an ext4 mountpoint directly as datastore and even do
so ourself when creating one through the disk manage code.

Such ext4 mountpoints have a lost+found directory which only root can
traverse into. As the GC's list-images step runs as the backup:backup
user, walkdir gets an error.

We cannot just ignore all permission errors, as they could lead to
missing some backup indexes and thus possibly sweeping more chunks
than desired. While *normally* that should not happen through our
stack, we already had user reports of rsyncs being used to move a
datastore from an old to a new server with the permissions ending up wrong.

So for now be still very strict, only allow a "lost+found" directory
as immediate child of the datastore base directory, nothing else.

If deemed safe, this can always be made less strict. Possibly by
filtering the known backup-types on the highest level first.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-22 16:01:55 +02:00
c47e294ea7 datastore: fix typo
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-22 15:04:14 +02:00
25455bd06d fix #2871: close FDs when scanning backup group
otherwise we leak those descriptors and run into EMFILE when a backup
group contains many snapshots.

fcntl::openat and Dir::openat are not the same ;)

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-22 09:19:29 +02:00
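
A sketch of the difference (assuming the `nix` crate): `fcntl::openat` hands back a raw fd that the caller must close, while `Dir::openat` returns an owning handle that closes on drop:

```rust
use nix::dir::Dir;
use nix::fcntl::OFlag;
use nix::sys::stat::Mode;
use std::os::unix::io::RawFd;

fn open_snapshot_dir(parent_fd: RawFd) -> Result<Dir, nix::Error> {
    // nix::fcntl::openat would return a bare RawFd, which is *not* closed
    // on drop - forgetting nix::unistd::close() leaks one fd per snapshot
    // and eventually runs into EMFILE.
    //
    // Dir::openat returns an owning handle that closes its fd on drop:
    Dir::openat(parent_fd, "snapshot", OFlag::O_RDONLY, Mode::empty())
}
```
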
c1c4a18f48 fix #2865: detect and skip vanished snapshots
also when they have been removed/forgotten since we retrieved the
snapshot list for the currently syncing backup group.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-22 09:19:29 +02:00
91f5594c08 api: translate ENOTFOUND to 404 for downloads
and percolate the HttpError back up on the client side

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-22 09:19:29 +02:00
86f6f74114 fix #2860: skip in-progress snapshots when syncing
they don't have a final manifest yet and are not done, so they can't be
synced either.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-22 09:19:29 +02:00
13d9fe3a6c .gitignore: add build directory
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-22 09:19:29 +02:00
41e4388005 ui: add calendar event selector
modelled after the PVE one, but we are not 1:1 compatible and need
deleteEmpty support. For now let's just have some duplicate code, but
we should try to move this to widget toolkit ASAP.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-21 19:33:52 +02:00
06a94edcf6 ui: sync job: default to false for "remove-vanished"
can be enabled later on easily, and restoring deleted snapshots
isn't easy.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-21 19:33:52 +02:00
ef496e2c20 ui: sync job: group remote fields and use "Source" in labels
Using "Source" helps to understand that this is a "pull from remote"
sync, not a "push to remote" one.

https://forum.proxmox.com/threads/suggestions-regarding-configurations-terminology.73272/

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-21 19:33:52 +02:00
113c9b5981 move subscription API path to /nodes
This aligns it with PVE and allows the widget toolkit's update window
"refresh" to work without modifications once POST /apt/update is
implemented.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-21 19:33:52 +02:00
956295cefe parse_calendar_event: support the weekly special expression
While we do not yet support the date specs for CalendarEvent, the left
out "weekly" special expression [0] does not require that support.
It is specified to be equivalent to `Mon *-*-* 00:00:00` [0] and
this can be implemented with the weekday and time support we already
have.

[0]: https://www.freedesktop.org/software/systemd/man/systemd.time.html#Calendar%20Events

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-21 13:24:51 +02:00
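
A sketch of such an expansion, per the systemd.time equivalences (the "daily"/"hourly" lines are assumptions following the same spec, not necessarily part of this commit):

```rust
/// Translate special calendar event expressions into the explicit forms
/// given in systemd.time(7); only "weekly" is needed for this commit.
fn expand_special_event(event: &str) -> Option<&'static str> {
    match event {
        "weekly" => Some("Mon *-*-* 00:00:00"),
        "daily" => Some("*-*-* 00:00:00"),  // assumption: handled analogously
        "hourly" => Some("*-*-* *:00:00"),  // assumption: handled analogously
        _ => None,
    }
}
```
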
a26c27c8e6 api2/status: fix estimation bug
when a datastore has enough data to calculate the estimated full date,
but always has exactly the same usage, the factor b of the regression
is '0'

return 0 for that case so that the gui can show 'never' instead of
'not enough data'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-21 13:02:08 +02:00
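
The underlying arithmetic, sketched (with usage modeled as y = a + b*t; names are illustrative):

```rust
/// Usage is fitted as y = a + b*t (least squares). Solving total = a + b*t
/// for t needs a division by the slope b; a perfectly constant usage
/// history gives b == 0, which is reported as 0 so the GUI shows "never".
fn estimate_time_of_full(a: f64, b: f64, total: f64) -> f64 {
    if b == 0.0 {
        return 0.0; // "never" instead of "not enough data"
    }
    (total - a) / b
}
```
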
0c1c492d48 docs: fix some typos
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-21 13:01:21 +02:00
255ed62166 docs: GC followup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-21 12:58:47 +02:00
b96b11cdb7 chunk_store: Fix typo in bail message
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-07-21 12:51:41 +02:00
faa8e6948a backup: Fix typos and grammar
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-07-21 12:51:41 +02:00
8314ca9c10 docs: fix #2851 Add note about GC grace period
Adding a note about the garbage collection's grace period due to the
default atime behavior should help to avoid confusion as to why space is
not freed immediately.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-07-21 12:51:41 +02:00
538c2b6dcf followup: fixup the directory number, refactor
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-20 14:39:02 +02:00
e9b44bec01 docs: add note on supported filesystems
certain filesystems cannot be used as chunkstores, because they don't
support 2^16 subdirectories (e.g. ext4 with certain features disabled
or ext3 - see ext4(5))

reported via our community forum:
https://forum.proxmox.com/threads/emlink-too-many-links.73108/

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-07-20 14:10:39 +02:00
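
For illustration, the 2^16 requirement comes from sharding chunks by digest prefix (a sketch; the exact on-disk layout here is an assumption based on this description):

```rust
use std::path::{Path, PathBuf};

/// Chunks are spread over 65536 (2^16) subdirectories named after the first
/// two bytes of their digest - hence the filesystem requirement above.
fn chunk_path(store_base: &Path, digest: &[u8; 32]) -> PathBuf {
    let prefix = format!("{:02x}{:02x}", digest[0], digest[1]);
    let hex: String = digest.iter().map(|b| format!("{:02x}", b)).collect();
    store_base.join(".chunks").join(prefix).join(hex)
}
```
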
65418a0763 docs: introduction: rewording and fixing of minor errors
Reworded one sentence for improved readability.
Fixed some minor language errors.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-07-20 14:10:39 +02:00
aef4976801 docs: admin guide: fix grammatical errors and improve English
Mostly fixed typos and grammatical errors.
Improved wording in some sections to make instructions/advice clearer.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-07-20 14:10:39 +02:00
295d4f4116 bump udev build-dependency
0.4 contains a fix for C chars on non-x86 architectures.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-20 12:11:54 +02:00
c47a900ceb build: run tests on build (again)
now that all examples and tests are fixed again.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-20 11:37:53 +02:00
1b1110581a manifest: revert canonicalization to old behaviour
JSON keys MUST be quoted. this is a one-time break in signature
validation for backups created with the broken canonicalization code.
QEMU backups are not affected, as libproxmox-backup-qemu never linked
the broken versions.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-20 11:37:53 +02:00
eb13d9151a examples/upload-speed: adapt to change
commit 323b2f3dd6 changed the signature of upload_speedtest;
adapt the example accordingly

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-20 10:22:42 +02:00
449e4a66fe tools/xattr: a char from C is not universally a rust i8
Make it actually do the correct cast by using `libc::c_char`.

Fixes issues when building on other platforms, e.g., the aarch64
client-only build on Arch Linux ARM that I tested in my free time.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-19 19:46:27 +02:00
217c22c754 server: add path value to NOT_FOUND http error
Especially helpful for requests not coming from browsers (where the
URL is normally easy to find out).

Makes it easier to detect if one triggered a request with an old
client, or so..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-16 12:46:51 +02:00
ba5b8a3e76 bump pxar dependency to 0.2.1
Contains a fix for the check for the maximum allowed size of
acl group object entries.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-16 11:48:22 +02:00
ac5e9e770b catalog_shell: add exit command
it is nice to have a command to exit from the shell instead of
only allowing ctrl+d or ctrl+c

the api method is just for documentation/help purposes and does nothing
by itself, the real logic is directly in the read loop

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-15 12:19:57 +02:00
b25deec0be pxar: .pxarexclude: absolute paths and byte based paths
Change the .pxarexclude parser to byte based parsing with
`.split(b'\n')` instead of `.lines()`, to not panic on
non-utf8 paths.

Specially deal with absolute paths by prefixing them with
the current directory.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-15 11:55:48 +02:00
cdf1da2872 tools: add strip_ascii_whitespace for byte slices
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-15 11:55:48 +02:00
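
A sketch combining both commits: byte-based line splitting plus an ASCII-whitespace trimmer for byte slices (illustrative, not the actual parser):

```rust
/// Trim leading/trailing ASCII whitespace from a byte slice.
fn strip_ascii_whitespace(line: &[u8]) -> &[u8] {
    let start = line.iter().position(|b| !b.is_ascii_whitespace()).unwrap_or(line.len());
    let end = line.iter().rposition(|b| !b.is_ascii_whitespace()).map_or(start, |p| p + 1);
    &line[start..end]
}

/// Parse .pxarexclude content bytewise: splitting on b'\n' instead of using
/// str::lines() means non-UTF-8 path patterns cannot cause a panic.
fn parse_pxarexclude(content: &[u8]) -> Vec<Vec<u8>> {
    content
        .split(|&b| b == b'\n')
        .map(strip_ascii_whitespace)
        .filter(|line| !line.is_empty() && !line.starts_with(b"#"))
        .map(|line| line.to_vec())
        .collect()
}
```
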
3cfc56f5c2 cached user info: check_privs: print privilege path in error message
As else this is really user unfriendly, and it not printing it has no
advantage. If one doesn't want to leak resource existence they just
need to *always* check permissions before checking if the requested
resource exists, if that's not done one can leak information also
without getting the path returned (as the system will either print
"resource doesn't exists" or "no permissions" respectively)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-15 08:55:58 +02:00
37e53b4c07 buildsys: fix targets to not run dpkg-buildpackage 4 times
and add a deb-all target

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-14 12:31:20 +02:00
77d634710e bump version to 0.8.7-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-14 12:05:34 +02:00
5c5181a252 d/lintian-overrides: ignore systemd-service-file-refers-to-unusual-wantedby-target
proxmox-backup-banner.service needs getty.target

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-14 11:08:36 +02:00
67042466e8 ui: datastore edit: avoid an extra indentation level
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-14 10:56:36 +02:00
757d0ccc76 warning fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-14 10:37:14 +02:00
4a55fa87d5 bump version to 0.8.7-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-14 10:25:53 +02:00
032cd1b862 pxar: restore file attributes, improve errors
and use the correct integer types for these operations

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-14 10:25:45 +02:00
ec2434fe3c ui: buildsys: add lint target
not yet automatically called on build, as it still fails.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-14 07:43:01 +02:00
34389132d9 docs: installation: add note where to find the webinterface
As the 8007 vs 8006 port is new and could confuse people, especially
if they did not use the PBS installer.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-14 07:35:59 +02:00
78ee20d72d docs: fix typo s/PBS_REPOSTOR/PBS_REPOSITOR/
Reported-by: Piotr Paszkowski aka patefoniQ
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-13 19:23:50 +02:00
601e42ac35 ui: running tasks: update limit to 100
else we'll never see the 99+ tasks ..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-11 12:53:32 +02:00
e1897b363b docs: add secure-apt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 14:12:51 +02:00
cf063c1973 bump version to 0.8.6-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 11:35:04 +02:00
f58233a73a src/backup/data_blob_reader.rs: avoid unwrap() - return error instead 2020-07-10 11:28:19 +02:00
d257c2ecbd ui: fingerprint: add icon to copy button
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 11:17:20 +02:00
e4ee7b7ac8 ui: fingerprint: add copy button
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 11:13:54 +02:00
1f0d23f792 ui: add show fingerprint button to dashboard
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-10 11:08:53 +02:00
bfcef26a99 api2/node/status: add fingerprint
and rename get_usage to get_status (since it's not usage only anymore)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-10 11:08:53 +02:00
ec01eeadc6 refactor CertInfo to tools
we want to reuse some of the functionality elsewhere

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-10 11:08:53 +02:00
660a34892d update proxmox crate to 0.2.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-10 11:08:27 +02:00
d86034afec src/bin/proxmox_backup_client/catalog.rs: fix keyfile handling 2020-07-10 10:36:45 +02:00
62593aba1e src/backup/manifest.rs: fix signature (exclude 'signature' property) 2020-07-10 10:36:45 +02:00
0eaef8eb84 client: show key path when creating/changing default key
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-10 09:58:24 +02:00
e39974afbf client: add simple version command
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 09:34:07 +02:00
dde18bbb85 proxmox-backup-client benchmark: improve output format 2020-07-10 09:13:52 +02:00
a40e1b0e8b src/server/rest.rs: avoid compiler warning 2020-07-10 09:13:52 +02:00
a0eb0cd372 ui: running task: increase active limit we show in badge to 99
Two digits fit nicely, and the extra plus for the >99 case doesn't
take that much space either. That, and the fact that 9 is just
really low, makes me bump this to 99 as the cut-off value.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 21:56:46 +02:00
41067870c6 ui: tune badge styling a bit
the idea is to blend in when no task is running, thus no
background-color there. When tasks are running use the proxmox
branding guideline dark-grey; it isn't used as often, so it should
catch one's eye when changing, but it has some use so it doesn't
seem out of place.

Reduce the border radius by a lot, so that it seems similar to the
one our ExtJS theme uses for the buttons outside - the original
border radius seems like it comes from the time where this was
intended to be a floating badge, where it'd make sense; but as an
integrated button this fits the style much better.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 21:51:25 +02:00
33a87bc39a docs: reference PDF variant in HTML output
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 21:31:38 +02:00
bed3e15f16 debian/proxmox-backup-docs.links: fix name and target
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 21:23:41 +02:00
c687da9e8e datastore: chown base dir on creation
When creating a new datastore the basedir is only owned by the backup
user if it did not exist beforehand (create_path chowns only if it
creates the directory, and returns false if it did not).

This improves the experience when adding a new datastore on a fresh
disk or existing directory (not owned by backup) - backups/pulls can
be run instead of terminating with EPERM.

Tested on my local test install with a new disk and an existing directory:

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-07-09 18:20:16 +02:00
be30e7d269 ui: dashboard/TaskSummary: fade icons if count is zero
so that users can see the relevant counts faster

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 17:10:47 +02:00
106603c58f ui: fix crypt mode calculation
also include 'mixed' in the calculation of the overall mode of a
snapshot and group

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 17:09:56 +02:00
7ba2c1c386 docs: add initial basic software stack definition
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 17:09:05 +02:00
4327a8462a proxmox-backup-client benchmark: add more speed tests 2020-07-09 17:07:22 +02:00
e193544b8e src/server/rest.rs: disable debug logs 2020-07-09 16:18:14 +02:00
323b2f3dd6 proxmox-backup-client benchmark: add --verbose flag 2020-07-09 16:16:39 +02:00
7884e7ef4f bump version to 0.8.5-1 2020-07-09 15:35:07 +02:00
fae11693f0 fix cross process task listing
it does not make sense to check if the worker is running if we already
have an endtime and state

our 'worker_is_active_local' heuristic returns true for non
process-local tasks, so we got 'running' for all tasks that were not
started by 'our' pid and were still running

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 15:30:52 +02:00
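
The fixed ordering can be sketched as (illustrative signature):

```rust
/// A task with a recorded end time and state is finished, full stop; only
/// otherwise fall back to the process-local "is active" heuristic, which
/// wrongly reports true for tasks started by other pids.
fn task_is_running(endtime: Option<i64>, state: Option<&str>, active_local: bool) -> bool {
    if endtime.is_some() && state.is_some() {
        return false;
    }
    active_local
}
```
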
22231524e2 docs: expand datastore documentation
document retention settings and schedules per datastore with
some minimal examples.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-07-09 15:04:26 +02:00
9634ca07db docs: add remotes and sync-jobs and schedules
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-07-09 15:04:26 +02:00
62f6a7e3d9 bump pathpatterns to 0.1.2
Fixes `**/foo` not matching "foo" without slashes.
(`**/lost+found` now matches the `lost+found` dir at the
root of our tree properly).

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-09 14:34:10 +02:00
86443141b5 ui: align version and user-menu spacing with pve/pmg
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 14:31:21 +02:00
f6e964b96e ui: make username a menu-button
like we did in PVE and PMG

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 14:31:21 +02:00
c8bed1b4d7 bump version to 0.8.4-1 2020-07-09 14:28:44 +02:00
a3970d6c1e ui: add TaskButton in header
opens a grid with the running tasks and a shortcut to the node tasks

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 14:26:57 +02:00
cc83c13660 ui: add RunningTasksStore
so that we have a global store for running tasks

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 14:26:57 +02:00
bf7e2a4648 simpler lost+found pattern
the **/ is not required and currently also mistakenly
doesn't match /lost+found which is probably buggy on the
pathpatterns crate side and needs fixing there

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-09 14:06:42 +02:00
e284073e4a bump version to 0.8.3-1 2020-07-09 13:55:15 +02:00
3ec99affc8 get_disks: don't fail on zfs_devices
zfs does not have to be installed, so simply log an error and
continue; users still get an error when clicking directly on
ZFS

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:47:31 +02:00
a9649ddc44 disks/zpool_status: add test for pool with special character
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:37:31 +02:00
4f9096a211 disks/zpool_list: allow some more characters for pool list
not exhaustive of what zfs allows (space is missing), but this
can be done easily without problems

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:37:31 +02:00
c3a4b5e2e1 zpool_list: add tests for special pool names
those names are allowed for zpools

these will fail for now, but it will be fixed in the next commit

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:37:31 +02:00
7957fabff2 api: add ZPOOL_NAME_SCHEMA and regex
pool names can contain spaces and some other special characters

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:37:31 +02:00
20a4e4e252 minor optimization to 'to_canonical_json'
* don't clone hash keys, just use references
* we don't need a String, stick to Vec<u8> and use
  serde_json::to_writer to avoid temporary strings
  altogether

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-09 13:32:11 +02:00
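
A sketch of a canonical-JSON writer along these lines: sorted, quoted keys, key references instead of clones, and serde_json::to_writer straight into a Vec<u8> (illustrative, not the exact implementation):

```rust
use serde_json::Value;

fn write_canonical_json(value: &Value, out: &mut Vec<u8>) -> Result<(), serde_json::Error> {
    match value {
        Value::Object(map) => {
            out.push(b'{');
            // reference the keys instead of cloning them, then sort
            let mut keys: Vec<&str> = map.keys().map(|k| k.as_str()).collect();
            keys.sort_unstable();
            for (i, key) in keys.iter().enumerate() {
                if i > 0 { out.push(b','); }
                serde_json::to_writer(&mut *out, key)?; // keys MUST be quoted
                out.push(b':');
                write_canonical_json(&map[*key], out)?;
            }
            out.push(b'}');
        }
        Value::Array(list) => {
            out.push(b'[');
            for (i, item) in list.iter().enumerate() {
                if i > 0 { out.push(b','); }
                write_canonical_json(item, out)?;
            }
            out.push(b']');
        }
        // scalars go straight into the Vec<u8>, no temporary String
        other => serde_json::to_writer(&mut *out, other)?,
    }
    Ok(())
}
```
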
2774566b03 ui: adapt for new sign-only crypt mode
we can now show 'none', 'encrypted', 'signed' or 'mixed' for
the crypt mode

also adds a different icon for signed files, and adds a hint that
signatures cannot be verified on the server

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:28:55 +02:00
4459ffe30e src/backup/manifest.rs: add default to make it compatible with older backups 2020-07-09 13:25:38 +02:00
d16ed66c88 bump version to 0.8.2-1 2020-07-09 11:59:10 +02:00
3ec6e249b3 buildsys: also upload debug packages
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 11:39:10 +02:00
dfa517ad6c src/backup/manifest.rs: rename into_string -> to_string
And do not consume self.
2020-07-09 11:28:05 +02:00
8b2ad84a25 bump version to 0.8.1-1 2020-07-09 10:01:31 +02:00
3dacedce71 src/backup/manifest.rs: use serde_json::from_value() to deserialize data
Also modified from_data to compute the signature directly from JSON.
2020-07-09 09:50:28 +02:00
512d50a455 typos
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-09 09:34:58 +02:00
b53f637914 src/backup/manifest.rs: cleanup signature generation 2020-07-09 09:20:49 +02:00
152a926149 tests/blob_writer.rs: make it work again 2020-07-09 09:15:15 +02:00
7f388acea8 ship pbstest repo as sources.list.d file for beta
NOTE: the repo url is not yet working at time of commit, this is a
preparatory step.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 19:09:31 +02:00
b2bfb46835 docs: package repos: drop non-tests for now
they won't work and thus just confuse people, re-add them once we're
releasing final.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 18:17:55 +02:00
24406ebc0c docs: move host sysadmin out to own chapter, fix ZFS one
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 18:15:33 +02:00
1f24d9114c docs: add missing todos
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 18:14:17 +02:00
859fe9c1fb add local-zfs.rst
content is > 90% same as local-zfs.adoc in pve-docs.

adapted the format for .rst

fixed some typos and worded some parts slightly differently.

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-07-08 16:49:40 +02:00
2107a5aebc src/backup/manifest.rs: include signature inside the manifest
This is more flexible, because we can choose which fields we want to sign.
2020-07-08 16:23:26 +02:00
3638341aa4 src/backup/file_formats.rs: remove signed chunks
We can include signature in the manifest instead (patch will follow).
2020-07-08 16:23:26 +02:00
067fe514e6 docs: fix repo paths
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 15:41:09 +02:00
8c6e5ce23c improve administration guide
fixing some typos and grammar errors.

added example file layout for datastores.

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-07-08 14:20:49 +02:00
0351f23ba4 client: introduce --keyfd parameter
This is a more convenient way to pass along the key when
creating encrypted backups of unprivileged containers in PVE
where the unprivileged user namespace cannot access
`/etc/pve/priv`.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 13:56:38 +02:00
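
A sketch of reading a key from an inherited fd (illustrative; the real client wires this into its key-lookup logic):

```rust
use std::fs::File;
use std::io::Read;
use std::os::unix::io::{FromRawFd, RawFd};

/// Read the encryption key from an inherited file descriptor, for callers
/// (e.g. unprivileged containers) that cannot read the key file themselves.
fn read_key_from_fd(fd: RawFd) -> std::io::Result<Vec<u8>> {
    // SAFETY: the caller passes a valid, open fd and transfers ownership
    let mut input = unsafe { File::from_raw_fd(fd) };
    let mut key = Vec::new();
    input.read_to_end(&mut key)?;
    Ok(key)
}
```
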
c1ff544eff src/backup/crypt_config.rs - compute_digest: make it more secure 2020-07-08 12:53:04 +02:00
69e5d71961 ui: ds/content: disable some button for in-progress backup
We cannot verify, download, file-browse backups which are currently
in progress.

'Forget' could work but is probably not desirable?

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 12:22:00 +02:00
48e22a8900 ui: ds/content: do not count in-progress backups for last made one
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 12:11:27 +02:00
a7a5f56daa ui: ds/content: show spinner for backups in progress
use the fact that they do not have a size property at all

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 12:09:21 +02:00
05389a0109 more xdg cleanup and encryption parameter improvements
Have a single common function to get the BaseDirectories
instance and a wrapper for `find()` and `place()` which
wrap the error with some context.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 10:57:28 +02:00
b65390ebc9 client: xdg usage: place() vs find()
place() is used when creating a file, as it will create
intermediate directories, only use it when actually placing
a new file.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 10:57:28 +02:00
3bad3e6e52 src/client/backup_writer.rs - upload_stream: add crypt_mode 2020-07-08 10:43:28 +02:00
24be37e3f6 client: fix schema to include --crypt-mode parameter
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 10:09:15 +02:00
1008a69a13 pxar: less confusing logic
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 09:58:29 +02:00
521a0acb2e DataStore::load_manifest: also return CryptMode
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 09:19:53 +02:00
3b66040de6 add DataBlob::crypt_mode
and move use statements up

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 09:19:53 +02:00
af3a0ae7b1 remove CryptMode::sign_only special method
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 09:19:53 +02:00
4e36f78438 src/backup/manifest.rs: support old encrypted property
Just to avoid confusion.
2020-07-08 08:52:27 +02:00
f28d9088ed introduce a CryptMode enum
This also replaces the recently introduced --encryption
parameter on the client with a --crypt-mode parameter.

This can be "none", "encrypt" or "sign-only".

Note that this introduces various changes in the API types
which previously did not take the above distinction into
account properly:

Both `BackupContent` and the manifest's `FileInfo`:
    lose `encryption: Option<bool>`
    gain `crypt_mode: Option<CryptMode>`

Within the backup manifest itself, the "crypt-mode" property
will always be set.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-07 15:24:19 +02:00
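A minimal sketch of such an enum; the variant set follows the commit message, while the serde details (kebab-case renaming to get the "sign-only" wire value) are an assumption:

```rust
use serde::{Deserialize, Serialize};

#[derive(Clone, Copy, Debug, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
enum CryptMode {
    None,     // "none"
    Encrypt,  // "encrypt"
    SignOnly, // "sign-only"
}

fn main() {
    // "sign-only" round-trips through serde_json via the kebab-case rename.
    let mode: CryptMode = serde_json::from_str("\"sign-only\"").unwrap();
    assert_eq!(mode, CryptMode::SignOnly);
    println!("{}", serde_json::to_string(&mode).unwrap());
}
```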
56b814e378 docs: add getting help section
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-07 13:24:39 +02:00
0c136efe30 docs: features: minor wording
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-07 13:23:17 +02:00
cdead6cd12 docs: drop initial out of context sentence
the footer mentions sphinx and this feels weird to read as a user
(who doesn't really care what language/format the source of the
docs is in)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-07 13:22:03 +02:00
c950826e46 bump version to 0.8.0-1 2020-07-07 10:15:44 +02:00
f91d58e157 src/tools/runtime.rs: implement get_runtime_with_builder 2020-07-07 10:11:04 +02:00
1ff840ffad bump version to 0.7.0-1 2020-07-07 07:40:22 +02:00
7443a6e092 src/client/remote_chunk_reader.rs: implement clone for RemoteChunkReader 2020-07-07 07:34:58 +02:00
3a9988638b docs: move todolist to own document, don't link in release build
It is always built for html, but not linked if the devbuild tag isn't
set. This tag is set in the Makefile if the $(BUILD_MODE) variable
isn't "release".

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-06 14:44:53 +02:00
96ee857752 client: add --encryption boolean parameter
This can be used to explicitly disable encryption even if a
default key file exists in ~/.config.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-06 14:36:04 +02:00
887018bb79 client: use default encryption key if it is available
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-06 14:36:04 +02:00
9696f5193b client: move key management into separate module
and use api macro for methods and Kdf type

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-06 14:36:04 +02:00
e13c4f66bb minor style & whitespace fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-06 10:55:25 +02:00
8a25809573 docs: sync up copyright years
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-04 17:57:47 +02:00
d87b193b0b docs: todo: avoid leaking build details, link only
One can just search for them... If really wanted, we could set it to
true for dev builds (i.e., no DEB_VERSION defined)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-04 17:54:00 +02:00
ea5289e869 d/rules: do not compress .pdf files
as otherwise the docs .pdf is a PITA to use for some end users.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-04 17:53:04 +02:00
1f6a4f587a docs: do not hardcode version
use the Debian package ones; if not defined, we're doing a dev build

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-04 17:51:58 +02:00
705b2293ec d/control: add missing dependencies for lvm, smartmontools and ZFS
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 19:37:43 +02:00
d2c7ef09ba docs: rework and add a bit to introduction
Contributed-by: Daniela Häsler <daniela@proxmox.com>
[ discussed and edited some parts live with me, Thomas ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 18:58:17 +02:00
27f86f997e docs: fix index title
Contributed-by: Daniela Häsler <daniela@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 18:57:04 +02:00
fc93d38076 ui: ZFS create: set name-field minLength to 3 to match backend
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 18:03:51 +02:00
a5a85d41ff ui: ZFS create: use correct typeParameter name for disk selector
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 18:00:12 +02:00
08cb2038bd api: disks: indentation fixup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 17:59:30 +02:00
6f711c1737 ui: ZFS list: fix details top-bar button handler
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 17:20:33 +02:00
42ec9f577f ui: buildsys: actually include PBS.window.ZFSCreate component in source
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 17:19:59 +02:00
9de69cdb1a src/bin/proxmox_backup_client/catalog.rs: split out catalog code 2020-07-03 16:45:47 +02:00
bd260569d3 ui: fix glitch on some zoom steps
if the baseCls is not 'x-plain' the background of the flex
element is white, and on some zoom steps it gets taller
than one pixel and appears as a white line

making it have the plain baseCls, so it does not get any
background color and is always invisible

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-03 13:05:19 +02:00
36cb4b30ef add beta text with link to bugtracker
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-03 13:05:08 +02:00
4e717240bf bump version to 0.6.0-1 2020-07-03 09:46:19 +02:00
e9764238df make ReadChunk not require mutable self.
That way we can reduce lock contention because we lock for much shorter
times.
2020-07-03 07:37:29 +02:00
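A sketch of the signature change this describes; the trait and method names follow the commit message, the dummy implementation is purely illustrative:

```rust
use std::sync::Arc;

trait ReadChunk {
    // Before: `fn read_chunk(&mut self, ...)` forced exclusive access;
    // `&self` lets callers share the reader (e.g. behind an Arc) and
    // keep any internal locking short.
    fn read_chunk(&self, digest: &[u8; 32]) -> std::io::Result<Vec<u8>>;
}

struct DummyReader;

impl ReadChunk for DummyReader {
    fn read_chunk(&self, _digest: &[u8; 32]) -> std::io::Result<Vec<u8>> {
        Ok(Vec::new())
    }
}

fn main() {
    let reader = Arc::new(DummyReader);
    let data = reader.read_chunk(&[0u8; 32]).unwrap();
    assert!(data.is_empty());
}
```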
26f499b17b ui: increase timeout for snapshot listing
the api call can take a very long time (for now); until we can
improve that, increase the timeout from the default of 30s

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-03 06:14:21 +02:00
cc7995ac40 src/bin/proxmox_backup_client/task.rs: split out task command 2020-07-02 18:04:29 +02:00
43abba4b4f src/bin/proxmox_backup_client/mount.rs: split out mount code 2020-07-02 17:49:59 +02:00
58f950c546 ui: consistently spell Datastore without space between words
No hard feelings on 'Datastore' vs. 'Data Store', but consistency
is desired in such names.
Talked briefly with Dominik, who also slightly favored the one
without the space - so just go for that one.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-02 17:20:41 +02:00
c426e65893 ui: disk create: sync and improve 'add-datastore' checkbox label
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-02 17:06:37 +02:00
caea8d611f proxmox-backup-client: add benchmark command
This is just a start; we need to add more useful things here...
2020-07-02 14:01:57 +02:00
7d0754a6d2 pxar: fixup 'vanished-file' logic a bit
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-30 14:41:42 +02:00
5afa0755ea pxar: fix missing newlines in warnings
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-30 14:37:20 +02:00
40b63186a6 DataStoreConfig.js: add verify button 2020-06-30 13:28:42 +02:00
8f6088c130 DataStoreContent.js: add verify button 2020-06-30 13:22:02 +02:00
2162e2c15d src/api2/admin/datastore.rs: avoid slash in UPID strings 2020-06-30 13:11:22 +02:00
0d5ab04a90 bump version to 0.5.0-1 2020-06-29 13:01:11 +02:00
4059285649 fix typo 2020-06-29 12:59:25 +02:00
2e079b8bf2 partially revert commit 1f82f9b7b5
make it backward compatible. Also, the code was wrong because FixedIndexWriter
still computed old-style csums...
2020-06-29 12:44:45 +02:00
4ff2c9b832 ui: allow to Forget (delete) backup snapshots. 2020-06-26 15:58:06 +02:00
a8e2940ff3 pxar: deal with files changing size during archiving
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-26 11:49:51 +02:00
d5d5f2174e bump version to 0.4.0-1 2020-06-26 10:43:52 +02:00
2311238450 depend on proxmox 0.1.41 2020-06-26 10:40:47 +02:00
2ea501ffdf ui: add ZFS management
adds a ZFSList and ZFSCreate class, modeled after the one in pve

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 10:33:23 +02:00
4eb4e94918 fix test output
field separator for pools is always a tab when using -H

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 10:31:11 +02:00
817bcda848 src/backup/verify.rs: do not stop on server shutdown
This is a read-only task, so there is no need to stop.
2020-06-26 09:45:59 +02:00
f6de2c7359 WorkerTask: add warnings and count them
so that we have one more level between errors and OK

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 09:42:11 +02:00
3f0b9c10ec ui: dashboard: remove 'wobbling' of tasks that have the same duration
by sorting them by upid after sorting by duration

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 09:13:33 +02:00
2b66abbfab ui: dashboard: use last value for holes in history graph
it is only designed to be a quick overview, so having holes there
is not really pretty; since we do not even show any date
for the points, we can simply reuse the last value for holes

the 'real' graph with holes is still available on the
DataStoreStatistics panel

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 09:13:16 +02:00
402c8861d8 fix typo
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 09:12:29 +02:00
3f683799a8 improve 'debug' parameter
instead of checking for '1' or 'true', check that it is present and not
'0' or 'false'. This allows simply using

https://foo:8007/?debug

instead of

https://foo:8007/?debug=1

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 09:12:14 +02:00
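A small sketch of the described check (the function name is hypothetical):

```rust
// Enabled when the parameter is present, unless explicitly '0' or 'false'.
fn debug_enabled(param: Option<&str>) -> bool {
    match param {
        Some(value) => value != "0" && value != "false",
        None => false,
    }
}

fn main() {
    assert!(debug_enabled(Some("")));   // ?debug
    assert!(debug_enabled(Some("1")));  // ?debug=1
    assert!(!debug_enabled(Some("0"))); // ?debug=0
    assert!(!debug_enabled(None));      // parameter absent
}
```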
573bcd9a92 ui: automatically add 'localhost' as nodename for all panels
this will make refactoring easier for panels that are reused from pve
(where we always have a hostname)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 09:11:36 +02:00
90779237ae ui: show proper loadMask for DataStoreContent
we have to use the correct store, and we have to manually show the
error (since monStoreErrors only works for Proxmox Proxies)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 09:11:10 +02:00
1f82f9b7b5 src/backup/index.rs: add compute_csum
And use it for fixed and dynamic indexes. Please note that this
changes checksums for fixed indexes, so restoring older backups
will fail now (not backward compatible).
2020-06-26 09:00:34 +02:00
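A hedged sketch of such an index checksum, here with the sha2 crate; that the hash covers chunk end offsets as well as digests is an assumption based on the incompatibility note above:

```rust
use sha2::{Digest, Sha256};

// Hash each chunk's end offset together with its digest, so the csum
// also covers the layout of the index, not just the chunk contents.
fn compute_csum(chunks: &[(u64, [u8; 32])]) -> [u8; 32] {
    let mut hasher = Sha256::new();
    for (end_offset, digest) in chunks {
        hasher.update(end_offset.to_le_bytes());
        hasher.update(digest);
    }
    hasher.finalize().into()
}

fn main() {
    let csum = compute_csum(&[(4096, [7u8; 32]), (8192, [9u8; 32])]);
    println!("{:02x?}", &csum[..8]);
}
```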
19b5c3c43e examples/upload-speed.rs: fix compile error 2020-06-26 08:59:51 +02:00
fe3e65c3ea src/api2/backup.rs: call register_chunk in previous download api 2020-06-26 08:22:46 +02:00
fdaab0df4e src/backup/index.rs: add chunk_info method 2020-06-26 08:14:45 +02:00
b957aa81bd update backup api for incremental backup
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-06-26 07:17:08 +02:00
8ea00f6e49 allow to abort verify jobs
And improve job description rendering on gui.
2020-06-25 12:56:36 +02:00
4bd789b0fa ui: file browser: expand child node if only one archive present
Get the first visible node through the Ext.data.NodeInterface defined
"firstChild" element and expand that if there's only one archive
present.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-06-25 12:52:42 +02:00
2f050cf2ed ui: file browser: adapt height for 4:3 instead of weird 2:1 ratio
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-06-25 11:59:23 +02:00
e22f4882e7 extract create_download_response API helper
and put it into a new "api2::helpers" module.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-06-25 11:57:37 +02:00
c65bc99a41 [chore] bump to using pxar 0.2.0
This breaks all previously created pxar archives!

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-25 09:46:56 +02:00
355c055e81 src/bin/proxmox-backup-manager.rs: implement verify 2020-06-24 13:35:21 +02:00
c2009e5309 src/api2/admin/datastore.rs: add verify api 2020-06-24 13:35:21 +02:00
23f74c190e src/backup/backup_info.rs: impl Display for BackupGroup 2020-06-24 13:35:21 +02:00
a6f8728339 update to pxar 0.1.9, update ReadAt implementations
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-24 11:57:12 +02:00
c1769a749c bump version to 0.3.0-1 2020-06-24 10:13:56 +02:00
facd9801cf add incremental backup support
To support incremental backups (where not all chunks are sent to the
server), a new parameter "reuse-csum" is introduced on the
"create_fixed_index" API call. When set and equal to last backups'
checksum, the backup writer clones the data from the last index of this
archive file, and only updates chunks it actually receives.

In incremental mode some checks usually done on closing an index cannot
be made, since they would be inaccurate.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-06-24 10:01:25 +02:00
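An illustrative sketch (not the actual server code) of the reuse-csum idea: when the client-supplied checksum matches the last backup, start the new fixed index as a clone of the old one and let uploads overwrite only the changed chunks:

```rust
#[derive(Clone)]
struct FixedIndex {
    csum: [u8; 32],
    chunk_digests: Vec<[u8; 32]>,
}

fn create_fixed_index(last: Option<&FixedIndex>, reuse_csum: Option<[u8; 32]>) -> FixedIndex {
    match (last, reuse_csum) {
        // Incremental: clone the previous index; the client only
        // re-registers the chunks that actually changed.
        (Some(old), Some(csum)) if old.csum == csum => old.clone(),
        // Full mode: start from an empty index.
        _ => FixedIndex { csum: [0u8; 32], chunk_digests: Vec::new() },
    }
}

fn main() {
    let old = FixedIndex { csum: [1u8; 32], chunk_digests: vec![[2u8; 32]] };
    let incremental = create_fixed_index(Some(&old), Some([1u8; 32]));
    assert_eq!(incremental.chunk_digests.len(), 1); // data carried over
}
```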
21302088de remove debug println 2020-06-24 09:15:13 +02:00
8268c9d161 fix overflow panic during upload
if *only* data chunks are registered (high chance during incremental
backup), then chunk_count might be one lower than upload_stat.count
because of the zero chunk being unconditionally uploaded but not used.
Thus when subtracting the two, an overflow would occur.

In general, don't let the client make the server panic, instead just set
duplicates to 0.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-06-24 09:07:22 +02:00
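A sketch of the underflow and the described fix; with overflow checks enabled, the plain subtraction panics, while saturating_sub pins the result at 0:

```rust
fn main() {
    let chunk_count: u64 = 9;   // chunks referenced by the index
    let upload_count: u64 = 10; // includes the always-uploaded zero chunk

    // let duplicates = chunk_count - upload_count; // would panic/underflow

    let duplicates = chunk_count.saturating_sub(upload_count);
    assert_eq!(duplicates, 0); // clamped instead of crashing the server
}
```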
b91b7d9ffd api2/node/disks/zfs: check if default zfs mount path exists
and if it does, bail, because otherwise we would get an
error on mounting and end up with a zpool that is not imported
and disks that are in use

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-24 07:22:39 +02:00
6e1f0c138f ui: fix missing deleteEmpty on SyncJobEdit
else we cannot delete those fields

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-24 07:21:59 +02:00
8567c0d29c ui: add pxar FileBrowser
for unencrypted backups, enables browsing the pxar archives and
downloading single files from them

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-24 07:21:33 +02:00
d33d8f4e6a api2/admin/datastore: add pxar-file-download api call
streams a file from a pxar file of an unencrypted backup

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-24 07:21:15 +02:00
5b1cfa01f1 api2/admin/datastore: add 'catalog' api call
returns the dir listing of the given filepath of the backup snapshot
the filepath has to be base64 encoded or 'root'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-24 07:16:12 +02:00
05d18b907a add From<&DirEntryAttribute> for CatalogEntryType and make it pub(crate)
we want to get a string representation of the DirEntryAttribute
like 'f' for file, etc. and since we have such a mapping already
in the CatalogEntryType, use that

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-24 07:08:50 +02:00
e44fe0c9f5 derive Clone for the LocalChunkReader
this will be necessary for accessing local pxar behind didx files

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-24 07:07:28 +02:00
4cf0ced950 add LocalDynamicReadAt
mostly copied from BufferedDynamicReadAt from proxmox-backup-client
but the reader is wrapped in an Arc in addition to the Mutex

we will use this for local access to a pxar behind a didx file

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-24 07:05:31 +02:00
98425309b0 ui: add BackupFileDownloader
enables downloading whole files from the backup (e.g.
the decoded didx/fidx/blobs) for unencrypted backups

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-24 07:01:56 +02:00
7b1e26699d ui: fix sorting of backup snapshots
we have to sort the treestore, not the original store where we get the data

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-24 07:01:09 +02:00
676b0fde49 ui: fix encrypted column
do not use two different gettexts for groups and single backups
and correct logic for backup groups

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-24 07:00:32 +02:00
60f9a6ea8f src/backup/datastore.rs: add new helpers to load blobs and verify chunks 2020-06-24 06:58:14 +02:00
1090fd4424 src/backup/data_blob.rs: cleanup - improve code reuse 2020-06-24 06:56:48 +02:00
92c3fd2e22 src/backup/chunk_store.rs: allow to read name()
This is helpful for logging ...
2020-06-24 06:54:21 +02:00
e3efaa1972 ui: fix undefined data for iodelay
if ios are 0 and io_ticks are defined, the io_delay is zero

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-23 11:40:08 +02:00
0cf2b6441e tests/prune.rs: fix compile error 2020-06-23 11:32:53 +02:00
d6d3b353be cleanup: implement FromStr for BackupGroup 2020-06-23 08:16:56 +02:00
a67f7d0a07 cleanup: implement FromStr for BackupDir 2020-06-23 08:09:52 +02:00
c8137518fe src/bin/proxmox_backup_manager/disk.rs: add renderer for wearout
So that we display the same value as the GUI.
2020-06-23 07:44:09 +02:00
cbef49bf4f remove absolute paths when executing binaries
we set the paths manually, so this is ok

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-23 07:09:06 +02:00
0b99e5aebc remove debug prints
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-23 06:33:58 +02:00
29c55e5fc4 remove tokio:main from download-speed example
we use proxmox_backup::tools::runtime::main already

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-23 06:33:48 +02:00
f386f512d0 add AsyncReaderStream
and replace AsyncIndexReader's stream implementation with that

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-23 06:33:31 +02:00
3ddb14889a src/tools/daemon.rs: reopen STDOUT/STDERR journald streams to get correct PID in logs 2020-06-22 13:06:53 +02:00
00c2327564 bump pxar dep to 0.1.8
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-22 11:10:22 +02:00
d79926795a debian/postinst: use try-reload-or-restart for both services 2020-06-22 10:59:13 +02:00
c08fac4d69 tools::daemon: sync with child after MainPid message
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-22 10:58:04 +02:00
c40440092d tools: add socketpair helper
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-22 10:49:56 +02:00
dc2ef2b54f tools::daemon: fetch exe name in the beginning
We get the path to our executable via a readlink() on
"/proc/self/exe", which appends a " (deleted)" during
package reloads.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-22 10:31:54 +02:00
b28253d650 try to reload proxmox-backup-proxy on package upgrade 2020-06-22 09:39:37 +02:00
f28cfb322a avoid compiler warnings 2020-06-20 07:24:02 +02:00
3bbe291c51 zpool_status.rs - indented_list_to_tree: do not set name property
This is not necessary. We only touch/set 'children' and 'leaf' properties.
2020-06-20 07:19:25 +02:00
42d19fdf69 src/api2/node/disks/zfs.rs: always set pool name 2020-06-20 07:15:32 +02:00
215968e033 src/tools/disks/zpool_status.rs: add 'leaf' attribute to root node, rename 'prev' into 'parent' 2020-06-20 06:49:06 +02:00
eddd1a1b9c src/tools/disks/zpool_status.rs: move use clause top of file 2020-06-20 06:17:22 +02:00
d2ce211899 fixup for previous commit 2020-06-20 06:15:26 +02:00
1cb46c6f65 src/tools/disks/zpool_status.rs - cleanup: use struct StackItem instead of tuple 2020-06-19 18:58:57 +02:00
5d88c3a1c8 src/tools/disks/zpool_status.rs: remove unnecessary checks
Those things can never happen, so simply use unwrap().
2020-06-19 18:27:39 +02:00
07fb504943 src/tools/disks/zpool_status.rs: simplify code by using serde_json::to_value 2020-06-19 17:51:13 +02:00
f675c5e978 src/tools/disks/zpool_status.rs - add all attributes to the tree 2020-06-19 16:55:28 +02:00
4e37d9ce67 add general indented_list_to_tree implementation
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-19 14:37:40 +02:00
e303077132 lru_cache: restrict and annotate Send impl
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-19 09:37:34 +02:00
6ef9bb59eb api2/admin/datastore: add download-decoded endpoint
similar to 'download', but streams the decoded file
when it is not encrypted

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-19 08:39:15 +02:00
eeaa2c212b impl Sync for DataBlobReader
this is safe for the reason explained in the comment

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-19 08:37:47 +02:00
4a3adc3de8 add AsyncIndexReader
implements AsyncRead as well as Stream for an IndexFile and a store
that implements AsyncReadChunk

we can use this to asyncread or stream the content of a FixedIndex or
DynamicIndex

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-19 08:32:33 +02:00
abdb976340 add Display trait to BackupDir
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-19 08:28:35 +02:00
3b62116ce6 implement AsyncReadChunk for LocalChunkReader
same as the sync ReadChunk but uses tokio::fs::read instead
of file_get_contents

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-19 07:54:23 +02:00
e005f953d9 ui: add encryption info to snapshot list
show which backups/files are encrypted in the snapshot list

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-19 07:43:10 +02:00
1c090810f5 api2/admin/datastore/snapshots: show encrypted and size info per file
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-19 07:39:56 +02:00
e181d2f6da add encrypted info to Manifest
we want to save if a file of a backup is encrypted, so that we can
* show that info on the gui
* later decide if we need to decrypt the backup

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-19 07:35:39 +02:00
16021f6ab7 use the existing async method for read_raw_chunk
does the same, except for the manual drop, but that's handled there by
letting the value go out of scope

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-19 07:23:17 +02:00
ba694720fc api2/admin/datastore: log stream error during file download
the client cannot get an error during a chunked http transfer, so at
least log it server-side

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-19 06:58:55 +02:00
bde8e243cf remove unsafe copy code
copy_nonoverlapping is basically a memcpy which can also be done
via copy_from_slice which is not unsafe
(copy_from_slice uses copy_nonoverlapping internally)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-19 06:56:15 +02:00
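A sketch of the safe replacement described above:

```rust
fn main() {
    let src = [1u8, 2, 3, 4];
    let mut dst = [0u8; 4];

    // Before (unsafe, same memcpy under the hood):
    // unsafe { std::ptr::copy_nonoverlapping(src.as_ptr(), dst.as_mut_ptr(), 4) };

    // After (safe; panics on length mismatch instead of corrupting memory):
    dst.copy_from_slice(&src);
    assert_eq!(dst, src);
}
```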
3352ee5656 parse_zpool_status_field: handle tabs without copying input 2020-06-18 19:40:01 +02:00
b29cbc414d parse_zpool_status_vdev: consider tabs as 8 spaces 2020-06-18 18:38:56 +02:00
026dc1d11f src/api2/node/disks/zfs.rs: add zpool_details api 2020-06-18 15:04:46 +02:00
9438aca6c9 src/tools/disks/zpool_status.rs: improve parser 2020-06-18 14:55:22 +02:00
547f0c97e4 src/tools/nom.rs: new helper parse_complete_line() for single line parsers
Like parse_complete(), but generates simpler error messages.
2020-06-18 12:57:55 +02:00
177a2de992 src/tools/nom.rs: move nom helpers into separate file 2020-06-18 12:41:13 +02:00
0686b1f4db src/tools/disks/zpool_list.rs: split code into separate file 2020-06-18 10:31:07 +02:00
0727e56a06 src/tools/disks/zpool_status.rs: parse zpool status output 2020-06-18 10:23:15 +02:00
2fd3d57490 src/tools/disks/zfs.rs: rename ZFSPoolStatus into ZFSPoolInfo, fix error message 2020-06-17 09:08:26 +02:00
3f851d1321 src/api2/node/disks/directory.rs: add early check if disk is unused 2020-06-17 08:31:11 +02:00
1aef491e24 src/bin/proxmox_backup_manager/disk.rs: add cli to create mounted disks 2020-06-17 08:07:54 +02:00
d0eccae37d avoid compiler warning 2020-06-17 08:07:42 +02:00
a34154d900 src/tools/disks/zfs.rs: cleanup parse_pool_header 2020-06-17 07:47:11 +02:00
c2cc32b4dd src/tools/disks/zfs.rs: add more parser tests 2020-06-17 07:38:19 +02:00
46405fa35d src/tools/disks/zfs.rs: add comment 2020-06-17 07:14:26 +02:00
66af7f51bc src/tools/disks/zfs.rs: make zfs list parser private 2020-06-17 07:00:54 +02:00
c72ccd4e33 src/tools/disks/zfs.rs: add regression tests for parse_zfs_list 2020-06-16 18:14:35 +02:00
902b2cc278 src/tools/disks/zfs.rs: simplify code 2020-06-16 17:51:17 +02:00
8ecd7c9c21 move api dump binaries back to src/bin for package building
they're required for docs

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-16 14:48:01 +02:00
7f17f7444a ui: add DiskList and DirectoryList
this also contains an adapted CreateDirectory window
for now this is mostly copied, since refactoring was not that
straightforward (changed parameters, etc.)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-16 13:36:32 +02:00
fb5a066500 src/api2/node/disks.rs: expose directory api 2020-06-16 13:36:32 +02:00
d19c96d507 move test binaries to examples/
These aren't installed and are only used for manual testing,
so there's no reason to force them to be built all the time.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-16 13:32:24 +02:00
929a13b357 src/api2/node/disks/zfs.rs: add zpool api 2020-06-16 13:25:53 +02:00
36c65ee0b0 src/tools/disks/zfs.rs: cleanup (rename usage properties)
And allow parsing zpool list output without the -v flag.
2020-06-16 13:25:53 +02:00
3378fd9fe5 src/tools/disks/zfs.rs: parse more infos (dedup, fragmentation, health) 2020-06-16 13:25:53 +02:00
58c51cf3d9 avoid compiler warnings 2020-06-16 13:25:53 +02:00
5509b199fb use new run_command helper 2020-06-16 13:25:53 +02:00
bb59df9134 catalog: don't panic on invalid file mtimes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-16 11:25:54 +02:00
2564b0834f fix file timestamps in catalog
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-16 11:25:31 +02:00
9321bbd1f5 pxar: fix missing subdirectories in catalogs
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-16 11:04:56 +02:00
4264e52220 reuse some extractor code in catalog shell
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-16 10:54:54 +02:00
6988b29bdc use O_EXCL when creating files during extraction
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-16 10:33:27 +02:00
98c54240e6 pxar: make extractor state more reusable
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-16 10:32:13 +02:00
d30c192589 AsyncReadChunk: require Send
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-16 09:50:29 +02:00
67908b47fa require pxar 0.1.7
fixes some hardlink reading issues in random-accessor

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-15 10:39:09 +02:00
ac7513e368 src/tools.rs: add setup_safe_path_env() 2020-06-15 10:38:30 +02:00
fbbcd85839 src/api2/node/disks/directory.rs: implement add-datastore feature 2020-06-15 10:01:50 +02:00
7a6b549270 dynamic index: make it hard to mess up endianess
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-15 09:19:35 +02:00
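One hedged sketch of making endianness hard to mess up: a wrapper type (name hypothetical, not the actual code) that always stores the little-endian representation, so every access goes through an explicit conversion:

```rust
#[derive(Clone, Copy)]
#[repr(transparent)]
struct LeU64(u64); // always holds the little-endian representation

impl LeU64 {
    fn from_native(v: u64) -> Self {
        LeU64(v.to_le())
    }
    fn to_native(self) -> u64 {
        u64::from_le(self.0)
    }
}

fn main() {
    // Round-trips on any host; the on-disk bytes are fixed little-endian.
    let offset = LeU64::from_native(0x1122_3344_5566_7788);
    assert_eq!(offset.to_native(), 0x1122_3344_5566_7788);
}
```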
0196b9bf5b remove unnecessary .into
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-12 14:57:58 +02:00
739a51459a ui: Dashboard: implement subscription panel
and make it nicer

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-12 14:52:22 +02:00
195d7c90ce ui: Dashboard: show LongestTask/RunningTask/TaskSummary
by querying the new /status/task api every 15 seconds

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[w.bumiller@proxmox.com: fixup from d.csapak@proxmox.com]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-12 14:51:15 +02:00
6f3146c08c ui: add Task Panels for dashboard
LongestTasks:
grid that shows tasks sorted by duration in descending order

RunningTasks:
grid that shows all running tasks

TaskSummary:
an overview of backup, prune, gc and sync tasks (error/warning/ok)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-12 14:50:22 +02:00
4b12879289 ui: css: improve look of fa icons
with these changes, fa icons in actioncolumns
have the same layout as <i> elements on the same line
(they were slightly bigger and offset before)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-12 14:50:20 +02:00
20b3094bcb api2/status: add task list endpoint
for now mostly copy/paste from nodes/nodename/tasks
(without the parameters)
but we should replace the 'read_task_list' with a method
that gives us the tasks since some timestamp

so that we can get a longer list of tasks than for the node
(we could of course embed this then in the nodes/node/task api call and
remove this again as long as the api is not stable)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-12 14:50:17 +02:00
df528ee6fa implement From<TaskListInfo> for TaskListItem
and use it where it's convenient

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-12 14:50:12 +02:00
57e50fb906 use new Mmap helper for dynamic index
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-12 13:57:56 +02:00
3136792c95 bump proxmox dep to 0.1.40
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-12 13:57:35 +02:00
3d571d5509 some internal combinator-influenced api cleanup
The download methods used to take the destination by value
and return them again, since this was required when using
combinators before we had `async fn`.
But this is just an ugly left-over now.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-12 11:46:42 +02:00
8e6e18b77c client: make dump_image async, use async chunk reader
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-12 11:40:18 +02:00
4d16badf6f add an AsyncReadChunk trait
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-12 11:38:21 +02:00
a609cf210e more cleanup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-12 11:01:04 +02:00
1498659b4e cleanup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-12 10:59:34 +02:00
4482f3fe11 pxar, acl: cleanup acl helper usage
use NixPath for Acl::set_file to avoid memduping the c
string

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-12 10:52:18 +02:00
5d85847f91 client: only start catalog upload if we have one
else we start a dynamic writer and never close it, leading to a backup error

this fixes an issue with backing up vm templates
(and possibly vms without disks)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-12 10:38:25 +02:00
476b4acadc BackupEnvironment: do not set finished flag prematurely
we check if all dynamic_writers are closed and if the backup contains
any valid files; we can only mark the backup finished after those
checks, else the backup task gets marked as OK even though it
is not finished and no cleanups run

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-12 10:37:52 +02:00
cf1bd08131 pxar: fcaps in fuse
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-12 10:37:48 +02:00
ec8f042459 api2/status: use new rrd::extract_cached_data
and drop the now unused extract_lists function

this also fixes a bug where we did not add the datastore to the list at
all when there was no rrd data

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-10 13:31:16 +02:00
431cc7b185 rrd: move creation of serde value into api
there is now an 'extract_cached_data' which just returns
the data of the specified field, and an api function that converts
a list of fields to the correct serde value

this way we do not have to create a serde value in rrd/cache.rs
(makes for a better interface)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-10 13:31:14 +02:00
e693818afc refactor time functions to tools
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-10 13:31:10 +02:00
3d68536fc2 pxar: support .pxarexclude files, error report updates
Report vanished files (instead of erroring out on them),
also only warn about files inaccessible due to permissions
instead of bailing out.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-10 12:22:18 +02:00
26e78a2efb downgrade some FIXMEs to TODOs
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-10 11:09:23 +02:00
5444fa940b turn pxar::flags into bitflags, pxar::Flags
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-10 11:05:53 +02:00
d4f2397d4c add api to format disks and create datastores 2020-06-10 11:03:36 +02:00
fab2413741 catalog: remove unused SenderWriter
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-10 10:42:42 +02:00
669c137fec src/tools/systemd.rs: implement daemon_reload, start_unit, stop_unit and enable_unit 2020-06-10 08:56:04 +02:00
fc6047fcb1 pxar: don't skip list+found by default
This used to be default-off and was accidentally set to
on-by-default with the pxar crate update.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-10 08:53:10 +02:00
3014088684 pxar: sort .pxarexclude-cli file and fix its mode
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-10 08:49:14 +02:00
144006fade src/tools.rs: add new run_command helper 2020-06-10 07:16:47 +02:00
b9cf6ee797 src/tools/systemd/types.rs: add Mount config 2020-06-09 18:47:10 +02:00
cdde66d277 statistics: covariance(): avoid allocation
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-09 13:57:27 +02:00
239e49f927 pxar: create .pxarexclude-cli file
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-09 13:17:59 +02:00
ae66873ce9 ui: add datastore usages to dashboard
shows an overview of the datastores, including a small chart of the
past month and an estimate of when each will be full

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-09 12:20:43 +02:00
bda48e04da api2: add status/datastore-usages api call
returns a list of the datastores and their usages, a list of usages of
the past month (for the gui) and an estimate of when each will be full
(using linear regression)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-09 12:20:25 +02:00
ba97479848 add statistics module
provides some basic statistics functions (sum, mean, etc.)
and a function to return the parameters of the linear regression of
two variables

implemented using num_traits to be more flexible for the types

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-09 12:19:51 +02:00
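A minimal least-squares sketch matching this description; the real module is generic over num_traits, while f64 keeps the example short:

```rust
// Returns (intercept, slope) of the regression line through (x, y) pairs.
fn linear_regression(x: &[f64], y: &[f64]) -> Option<(f64, f64)> {
    if x.is_empty() || x.len() != y.len() {
        return None;
    }
    let n = x.len() as f64;
    let mean_x = x.iter().sum::<f64>() / n;
    let mean_y = y.iter().sum::<f64>() / n;

    let mut cov = 0.0; // covariance of x and y (unnormalized)
    let mut var = 0.0; // variance of x (unnormalized)
    for (&xi, &yi) in x.iter().zip(y) {
        cov += (xi - mean_x) * (yi - mean_y);
        var += (xi - mean_x) * (xi - mean_x);
    }
    if var == 0.0 {
        return None;
    }
    let slope = cov / var;
    Some((mean_y - slope * mean_x, slope))
}

fn main() {
    // Usage grows by 2 units per time step: y = 1 + 2x.
    let t = [0.0, 1.0, 2.0, 3.0];
    let usage = [1.0, 3.0, 5.0, 7.0];
    let (a, b) = linear_regression(&t, &usage).unwrap();
    // Estimate when usage hits 100 ("full"): t = (100 - a) / b.
    println!("intercept {a}, slope {b}, full at t = {}", (100.0 - a) / b);
}
```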
6cad8ce4ce rrd: add 'extract_lists'
this is an interface to simply get the Vec<Option<f64>> out of rrd
without going through serde values

we return a list of timestamps and a HashMap with the lists we could find
(otherwise it is not in the map)

if no lists could be extracted, the time list is also empty

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-09 12:19:06 +02:00
34020b929e ui: show root disk usage
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-09 12:07:06 +02:00
33070956af let disk_usage return StorageStatus and use it for datastores/nodes
disk_usage returned the same values as defined in StorageStatus,
so simply use that

with that we can replace the logic of the datastore status with that
function and also use it for root disk usage of the nodes

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-09 12:05:39 +02:00
da84cc52f4 src/tools/systemd.rs: implement escape_unit and unescape_unit 2020-06-09 11:52:06 +02:00
9825748e5e Cargo.toml: readd commented-out proxmox crate via path for convenience
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-09 11:16:28 +02:00
2179359f40 move src/pxar.rs -> src/pxar/mod.rs
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-09 10:49:59 +02:00
9bb161c881 src/tools/disks.rs: add create_single_linux_partition and create_file_system 2020-06-08 17:43:01 +02:00
297e600730 cleanup comment 2020-06-08 17:43:01 +02:00
ed7b3a7de2 src/tools/disks.rs: add get_fs_uuid helper 2020-06-08 17:43:01 +02:00
0f358204bd src/tools/disks.rs: add helper to list partitions 2020-06-08 17:43:01 +02:00
ca6124d5fa src/tools/disks.rs: make helpers pub
So that I can use them with my test code.
2020-06-08 17:43:01 +02:00
7eacdc765b pxar: split assert_relative_path
the check for a single component is only required in the dir
stack atm

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-08 15:02:52 +02:00
c443f58b09 switch to external pxar and fuse crates
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-08 13:56:58 +02:00
ab1092392f Cargo.toml: pathpatterns, pxar, proxmox-fuse
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-08 13:56:58 +02:00
1e3d9b103d xattr: make xattr_name_fcaps public
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-08 13:56:58 +02:00
386990ba09 tools: add file_get_non_comment_lines
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-08 13:56:58 +02:00
bc853b028f src/tools/disks.rs: cleanup, remove unused DiskUse bitflag type 2020-06-08 09:43:07 +02:00
d406de299b src/tools/disks.rs: use dev_t to index zfs/lvm device sets 2020-06-08 09:01:34 +02:00
dfb31de8f0 proxmox_backup_manager disk list: display gpt column 2020-06-08 07:35:44 +02:00
7c3aa258f8 src/tools/disks/zfs.rs: allow empty zpool list output 2020-06-08 07:23:04 +02:00
044055062c src/tools/disks.rs: new helper to reread partition table 2020-06-08 07:22:06 +02:00
2b388026f8 src/api2/node/disks.rs: correctly use disk_by_name insteadf of disk_by_node 2020-06-08 07:20:59 +02:00
707974fdb3 src/api2/node/disks.rs: implement initgpt API 2020-06-07 10:30:34 +02:00
9069debcd8 src/api2/types.rs: define BLOCKDEVICE_NAME_SCHEMA 2020-06-07 07:20:25 +02:00
fa2bdc1309 src/config/acl.rs: add /system/disks to valid acl paths 2020-06-06 15:48:15 +02:00
8e40aa63c1 src/bin/proxmox-backup-manager.rs: add disk subcommand 2020-06-06 15:40:28 +02:00
d2522b2db6 src/tools/disks.rs: fix disk size, add completion helper 2020-06-06 15:39:25 +02:00
ce8e3de401 move disks api to /node/<node>/disks 2020-06-06 14:43:36 +02:00
7fa2779559 src/api2/disks.rs: implement smart api 2020-06-06 12:23:11 +02:00
042afd6e52 src/tools/disks.rs: new helper disk_by_name() 2020-06-06 12:22:38 +02:00
ff30caeaf8 src/api2/disks.rs - list-disks: add usage-type filter 2020-06-06 11:48:58 +02:00
553cd12ba6 src/api2/disks.rs: start disks api 2020-06-06 11:38:47 +02:00
de1e1a9d95 src/tools/disks.rs: use api macro so that we can use those types with the api 2020-06-06 11:37:24 +02:00
91960d6162 src/tools/disks.rs - get_disks: query smart status 2020-06-06 09:18:20 +02:00
4c24a48eb3 src/tools/disks/smart.rs: use model.to_string_lossy() to simplify code 2020-06-06 09:05:22 +02:00
484e761dab src/tools/disks/smart.rs: try to get correct wearout for ATA devices 2020-06-06 09:01:15 +02:00
059b7a252e src/tools/disks/smart.rs - get_smart_data: use &Disk instead of &str
So that we can query other device infos easily (model, vendor, ..)
2020-06-06 08:24:58 +02:00
1278aeec36 ui: add gc/prune schedule and options available in the ui
by adding them as columns for the config view,
and as a separate tab on the edit window (this is done to not show
too many options at once)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-06 07:58:05 +02:00
e53a4c4577 ui: make DataStore configuration editable
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-06 07:57:39 +02:00
98ad58fbd2 ui: refactor DataStoreConfig and Edit
split them into two files and put them into the respective directory

refactor the DataStoreConfigPanel to controller/view
and the DataStoreEdit window/inputpanel to simply an editwindow
(there is no need to have a seperate inputpanel) which also
prepares the window for edit (by using pmxDisplayEditFields)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-06 07:57:19 +02:00
98bb3b9016 ui: replace DataStoreStatus with DataStoreConfig
We will show an overall status of the DataStores in the Dashboard, so there
is no need for a separate DataStores Status.

This means we can move the Config to where the Status was, and remove
the duplicated entry in the NavigationTree, reducing confusion for users.

We can still separate the permissions by simply showing a permission
denied error, or simply leaving the list empty

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-06 07:56:35 +02:00
eb80aac288 src/tools/disks/smart.rs: parse output from smartctl 2020-06-05 18:30:06 +02:00
c26aad405f src/tools/disks.rs: implement get_disks (similar to the one in PVE::Diskmanage)
But no ceph support for now. Also no support for old cciss block devices.
2020-06-05 10:33:53 +02:00
f03a0e509e src/tools/disks.rs; use correct subdir to check holders 2020-06-05 10:33:53 +02:00
4c1e8855cc src/tools/disks.rs: fix disk type detection, remove newline from vendor string 2020-06-05 08:09:52 +02:00
85a9a5b68c depend on proxmox 0.1.39 2020-06-05 08:08:40 +02:00
f856e0774e ui: fix prune button
the remove button did not get the selModel since the xtype was not
'grid', so give it the correct xtype

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-04 12:59:48 +02:00
43ba913977 bump version to 0.2.3-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-06-04 10:39:15 +02:00
a720894ff0 rrd: fix off-by-one in save interval calculation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-06-04 10:30:47 +02:00
a95a3fb893 fix csum calculation of not 'chunk_size' aligned images
the last chunk does not have to be as big as the chunk_size,
just use the already available 'chunk_end' function which does the
correct thing

this fixes restoration of images whose sizes are not a multiple of
'chunk_size' as well

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-04 10:18:30 +02:00
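A sketch of the chunk_end logic this refers to (the signature is assumed): the end offset must be clamped to the image size, since the last chunk may be shorter than chunk_size:

```rust
fn chunk_end(idx: u64, chunk_size: u64, image_size: u64) -> u64 {
    ((idx + 1) * chunk_size).min(image_size)
}

fn main() {
    // 10 MiB image with 4 MiB chunks: chunks end at 4, 8 and 10 MiB.
    let (chunk, size) = (4 * 1024 * 1024u64, 10 * 1024 * 1024u64);
    assert_eq!(chunk_end(0, chunk, size), 4 * 1024 * 1024);
    assert_eq!(chunk_end(2, chunk, size), 10 * 1024 * 1024); // clamped
}
```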
620911b426 src/tools/disks/lvm.rs: implement get_lvm_devices() 2020-06-04 09:12:19 +02:00
5c264c8d80 src/tools/disks.rs: add/use get_partition_type_info 2020-06-04 07:48:22 +02:00
8d78589969 improve display of 'next run' for sync jobs
if the last sync job is too far in the past (or there was none at all
for now) we run it at the next iteration, so we want to show that

we now calculate the next_run by using either the real last endtime
as time or 0

then in the frontend, we check if the next_run is < now and show 'pending'
(we do it this way also for replication on pve)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-04 07:03:54 +02:00
eed8a5ad79 tools/systemd/time: fix compute_next_event for weekdays
two things were wrong here:
* the range (x..y) does not include y, so the range
  (day_num+1..6) goes from (day_num+1) to 5 (but sunday is 6)

* WeekDays.bits() does not return the 'day_num' of that day, but
  the bit value (e.g. 64 for SUNDAY), yet it was treated as the index
  of the day of the week;
  to fix this, we drop the map to WeekDays and use the 'indices'
  directly

this patch makes the test work again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-04 07:02:33 +02:00
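A small sketch of the range bug in the first point: `a..b` excludes `b`, so iterating day_num+1..6 never reaches Sunday (index 6); an inclusive range fixes it:

```rust
fn main() {
    let day_num: u32 = 4; // Friday, with Monday = 0 .. Sunday = 6

    let exclusive: Vec<u32> = (day_num + 1..6).collect();
    assert_eq!(exclusive, vec![5]); // Sunday (6) is missing

    let inclusive: Vec<u32> = (day_num + 1..=6).collect();
    assert_eq!(inclusive, vec![5, 6]); // Saturday and Sunday
}
```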
538b9c1c27 systemd/time: add tests for all weekdays
this fails for now

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-04 07:02:23 +02:00
55919bf141 verify_file: add missing closing parenthesis in error message
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-06-03 19:10:01 +02:00
456ad0c478 src/tools/disks/zfs.rs: add parser for zpool list output 2020-06-03 12:16:08 +02:00
c76c7f8303 bump version to 0.2.2-1 2020-06-03 10:37:46 +02:00
c48aa39f3b src/bin/proxmox-backup-client.rs: implement quiet flag 2020-06-03 10:11:37 +02:00
2d32fe2c04 client restore: don't add server file ending if already specified
If one executes a client command like
 # proxmox-backup-client files <snapshot> --repository ...
the files shown already have the '.fidx' or '.blob' file ending, so
if a user just copy-pastes one of them, the client would always add
.blob, and the server would not find that file.

So avoid adding file endings if it is already a known OK one.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-06-03 07:03:55 +02:00
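A sketch of the described behavior (helper name hypothetical): only append ".blob" when the given name does not already end in a known server file ending:

```rust
fn server_file_name(name: &str) -> String {
    for ending in [".fidx", ".didx", ".blob"] {
        if name.ends_with(ending) {
            return name.to_string(); // already a known OK ending
        }
    }
    format!("{}.blob", name) // fall back to .blob as before
}

fn main() {
    assert_eq!(server_file_name("index.json"), "index.json.blob");
    assert_eq!(server_file_name("root.pxar.didx"), "root.pxar.didx"); // kept
}
```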
dc155e9bd7 client restore: factor out archive/type parsing
will be extended in a later patch.

Also drop a dead else branch; it can never get hit as we always add
.blob as a fallback

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-06-03 07:03:12 +02:00
4e14781aec fix typo 2020-06-03 06:59:43 +02:00
a595f0fee0 client: improve connection/new fingerprint query
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-02 10:40:31 +02:00
add5861e8d typo fixes all over the place
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-30 16:39:08 +02:00
1610c45a86 src/client/pull.rs: also download client.log.blob 2020-05-30 14:51:33 +02:00
b2387eaa45 avoid compiler warnings 2020-05-30 14:05:33 +02:00
96d65fbcd0 cleanup: define/use const for predefined blob file names. 2020-05-30 14:04:15 +02:00
7cc3473a4e src/client/backup_specification.rs: split code into extra file 2020-05-30 10:54:38 +02:00
4856a21836 src/client/pull.rs: more verbose logging 2020-05-30 08:12:43 +02:00
a0153b02c9 ui: use Proxmox.Utils.setAuthData
this uses different parameters which we want to be the same for
all products (e.g. secure cookie)

leave the PBS.Utils.updateLoginData for the case that we want to do
something more here (as in pve for example)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-30 07:24:27 +02:00
04b0ca8b59 add owner to group and snapshot listings
while touching it, make columns and tbar in DataStoreContent.js
declarative members and remove the (now) unnecessary initComponent

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-30 07:24:12 +02:00
86e432b0b8 ui: add SyncView
shows a nice overview of sync jobs (incl status of last run, estimated
next run, etc.) with options to add/edit/remove and also show the
log of the last run and manually run it now

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:32:40 +02:00
f0ed6a218c ui: add SyncJobEdit window
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:32:13 +02:00
709584719d ui: add RemoteSelector and DataStoreSelector
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:31:54 +02:00
d43f86f3f3 api2: add admin/sync endpoint
this returns the list of syncjobs with status, as opposed to
config/sync (which is just the config)

also adds an api call where users can run the job manually under
/admin/sync/$ID/run

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:31:32 +02:00
997d7e19fc config/sync: add SyncJobStatus Struct/Schema
contains the config + status

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:29:39 +02:00
c67b1fa72f syncjob: change worker type for sync jobs
'sync' is used for manually pulling a remote datastore
changing it for a scheduled sync to 'syncjob' so that we can
differentiate between both types of syncs

this also adds a separate task description for it

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:28:04 +02:00
268687ddf0 api2/pull: refactor priv checking and creating pull parameters
we want to reuse those in the api call for manually running a sync job

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:27:43 +02:00
426c1e353b api2/config/sync: fix id parameter
'name' is not the correct parameter for get/post

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:24:54 +02:00
2888b27f4c create SYNC_SCHEDULE_SCHEMA to adapt description for sync jobs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:24:25 +02:00
f5d00373f3 ui: add missing comment field to remote model
when using a diffstore, we have to add all used columns to the model,
else they will not refresh on a load

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:17:15 +02:00
934f5bb8ac src/bin/proxmox-backup-proxy.rs: cleanup, move code to src/tools/disks.rs
And simplify find_mounted_device by using stat.st_dev
2020-05-29 11:13:36 +02:00
9857472211 fix removing of remotes
we have to save the remote config after removing the section

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 10:48:26 +02:00
013fa7bbcb rrd: reduce io by saving data only once a minute 2020-05-29 09:16:13 +02:00
a8d7033cb2 src/bin/proxmox-backup-proxy.rs: add test if last prune job is still running 2020-05-29 08:06:48 +02:00
04ad7bc436 src/bin/proxmox-backup-proxy.rs: test if last sync job is still running 2020-05-29 08:06:48 +02:00
77ebbefc1a src/server/worker_task.rs: make worker_is_active_local pub 2020-05-29 08:06:48 +02:00
750252ba2f src/tools/systemd/time.rs: add test for "daily" schedule 2020-05-29 07:52:09 +02:00
dc58194ebe src/bin/proxmox-backup-proxy.rs: use correct id to lookup sync jobs 2020-05-29 07:50:59 +02:00
c6887a8a4d remote config gui: add comment field 2020-05-29 06:46:56 +02:00
090decbe76 BACKUP_REPO_URL_REGEX: move to api2::types and allow all valid data store names
The repo URL consists of
* optional userid
* optional host
* datastore name

All three have defined regex or format, but none of that is used, so
for example not all valid datastore names are accepted.

Move definition of the regex over to api2::types where we can access
all required regexes easily.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-29 06:29:23 +02:00
c32186595e api2::types: factor out USER_ID regex
allows for better reuse in a next patch

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-29 06:27:38 +02:00
947f45252d www/ServerStatus.js: use term "IO wait" for CPU iowait
Because we already use "IO delay" for the storage statistics.
2020-05-29 06:12:49 +02:00
c94e1f655e rrd stats: improve io delay stats 2020-05-28 19:12:13 +02:00
d80d1f9a2b bump version to 0.2.1-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-28 17:39:41 +02:00
6161ac18a4 ui: remotes: fix remote remove buttons base url
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-28 17:29:54 +02:00
6bba120d14 ui: fix RemoteEdit password change
we have to remove the password from the submitvalues if it did not
change

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-28 17:24:06 +02:00
91e5bb49f5 src/bin/proxmox-backup-proxy.rs: simplify code
and gather all stats for the root disk
2020-05-28 12:30:54 +02:00
547e3c2f6c src/tools/disks/zfs.rs: use wtime + rtime (wait + run time) 2020-05-28 11:45:34 +02:00
4bf26be3bb www/DataStoreStatistic.js: add transfer rate 2020-05-28 10:20:29 +02:00
25c550bc28 src/bin/proxmox-backup-proxy.rs: gather zpool io stats 2020-05-28 10:09:13 +02:00
0146133b4b src/tools/disks/zfs.rs: helper to read zfs pool io stats 2020-05-28 10:07:52 +02:00
3eeba68785 depend on proxmox 0.1.38, use new fs helper functions 2020-05-28 10:06:44 +02:00
f5056656b2 use the sync id for the scheduled sync worker task
this way, multiple sync jobs with the same local store can get scheduled

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-28 06:26:03 +02:00
8c87743642 fix 'remove_vanished' cli arg again
since the target side wants this to be a boolean and
serde interprets a None value as 'null', we have to only
add this when it is really set via the cli

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-28 06:25:30 +02:00
05d755b282 fix inserting of worker tasks
when starting a new task, we do two things to keep track of tasks
(in that order):
* updating the 'active' file with a list of tasks with
  'update_active_workers'
* updating the WORKER_TASK_LIST

the second also updates the status of running tasks in the file by
checking if it is still running by checking the WORKER_TASK_LIST

since those two things are not locked, it can happend that
we update the file, and before updating the WORKER_TASK_LIST,
another thread calls update_active_workers and tries to
get the status from the task log, which won't have any data yet
so the status is 'unknown'

(we do not update that status ever, likely for performance reasons,
so we have to fix this here)

by switching the order of the two operations, we make sure that only
tasks reach the 'active' file which are inserted in the WORKER_TASK_LIST

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-28 06:24:42 +02:00
143b654550 src/tools.rs - command_output: add parameter to check exit code 2020-05-27 07:25:39 +02:00
97fab7aa11 src/tools.rs: new helper to handle command_output (std::process::Output) 2020-05-27 06:53:25 +02:00
ed216fd773 ui: acl view: only update if component is activated
Avoid triggering non-required background updates while browsing a
datastore's content or statistics panels. They're not expensive, but I
do not like such behavior at all (having traveled on trains with
spotty network too often)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-26 18:58:21 +02:00
0f13623443 ui: tasks: add sync description
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-26 18:36:58 +02:00
dbd959d43f ui: tasks: render reader with full info
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-26 18:36:58 +02:00
f68ae22cc0 ui: factor out render_datetime_utc
will be reused in the next patch

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-26 18:36:48 +02:00
06c3dc8a8e ui: task: improve rendering of backup/prune worker entries
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-26 13:37:57 +02:00
a6fbbd03c8 depend on proxmox 0.1.37 2020-05-26 13:00:34 +02:00
26956d73a2 ui: datastore prune: remove debug logging
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-26 12:50:06 +02:00
3f98b34705 ui: rework datastore content panel controller
Mostly refactoring, but actually fixes an issue where one seldom ran
into an undefined dereference due to the store onLoad callback getting
triggered after part of the component was destroyed - on quickly
switching through the datastores.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-26 12:46:48 +02:00
40dc103103 fix cli pull api call
there is no 'delete' parameter, only 'remove-vanished', so fix that

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 12:39:19 +02:00
12710fd3c3 ui: add missing monStoreErrors
to actually show api errors on the list call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 12:38:57 +02:00
9e2a4653b4 ui: add crud for remotes
listing/adding/editing/removing

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 12:38:39 +02:00
de4db62c57 remotes: save passwords as base64
to avoid having arbitrary characters in the config (e.g. newlines);
note that this breaks existing configs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 12:38:06 +02:00
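A sketch of the described encoding, using the base64 crate's classic encode/decode helpers, which keeps newlines and other arbitrary characters out of the section config:

```rust
fn main() {
    let password = "s3cret\nwith-newline";

    let encoded = base64::encode(password); // safe single-line config value
    let decoded = base64::decode(&encoded).unwrap();

    assert_eq!(decoded, password.as_bytes());
    println!("stored as: {}", encoded);
}
```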
1a0d3d11d2 src/api2/admin/datastore.rs: add rrd api 2020-05-26 12:26:14 +02:00
8c03041a2c src/bin/proxmox-backup-proxy.rs: gather block device stats on datastore 2020-05-26 11:20:59 +02:00
3fcc4b4e5c src/tools/disks.rs: add helper to read block device stats 2020-05-26 11:20:22 +02:00
3ed07ed2cd src/tools/disks.rs: export read_sys 2020-05-26 09:49:13 +02:00
75410d65ef d/control: proxmox-backup-server: depend on proxmox-backup-docs
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-26 09:37:03 +02:00
83fd4b3b1b remote: try to use Struct for api
with a catch: the password is in the struct, but we do not want it returned
via the api, so we only 'serialize' it when the string is not empty
(this can only happen when the format is not checked by us, i.e.
when it's returned from the api), and we set it manually to ""
when we return remotes from the api

this way we can still use the type but do not return the password

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 08:55:07 +02:00
bfa0146c00 ui: acls: include roleid into id and sort by it
this fixes missing acls on the gui

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 08:49:59 +02:00
5dcdcea293 api2/config/remote: remove password from read_remote
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 08:49:12 +02:00
99f443c6ae api2/config/remote: lock and use digest for removal
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 08:48:45 +02:00
4f966d0592 api2/config/remote: use rpcenv for digest for read_remote
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 08:48:28 +02:00
db0c228719 config/remote: add 'name' to Remote struct
and use it as section id, like with User

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 08:48:05 +02:00
880fa939d1 gui: move system stat RRDs to ServerStatus panel. 2020-05-26 07:33:00 +02:00
052aaeb5e9 re-bump to 0.2.0-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-25 20:10:38 +02:00
5f249127b2 docs: sync version with the package versions
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-25 20:10:38 +02:00
8277f4ace5 ui: navigation: sort datastores entries
adding a new one after load will still append it at the end, though.
But datastores are not something that gets added frequently after the
initial setup, so we don't care about that for now.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-25 19:46:47 +02:00
9b1aa424b9 ui: add some task log description mappings
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-25 19:06:52 +02:00
fef2b3e04c css: fix load mask background image path
We're not using the exact same paths as in PVE/PMG here.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-25 18:41:14 +02:00
7cebe5a1f4 ui: system config: reorder big panel to bottom
Gives a better look and feel if the flex'd big panel is at the bottom

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-25 16:41:54 +02:00
309ef20d6d src/bin/proxmox-backup-proxy.rs: simplify code 2020-05-25 16:20:32 +02:00
d0833a70f7 src/bin/proxmox-backup-proxy.rs: gather datastore usage stats 2020-05-25 16:20:32 +02:00
dda246403c ui: index: load widget toolkit CSS
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-25 15:44:39 +02:00
16e0dd65f1 d/control: proxmox-widget-toolkit depend on 2.2-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-25 12:50:55 +02:00
c5a516918f bump version to 0.2.0 2020-05-25 12:48:07 +02:00
cc275e8f93 depend on proxmox 0.1.36 2020-05-25 12:13:45 +02:00
d8dc281992 www/DataStoreStatus.js: display loadavg stats 2020-05-25 11:54:15 +02:00
2c66a590c0 src/bin/proxmox-backup-proxy.rs: gather iowait stats 2020-05-25 11:54:15 +02:00
485841da2c src/bin/proxmox-backup-proxy.rs: gather loadavg stats 2020-05-25 11:40:20 +02:00
e8f5810aa1 depend on proxmox 0.1.35 2020-05-25 11:34:34 +02:00
3e930f2bdc www/DataStoreStatus.js: display root disk stats 2020-05-25 11:34:24 +02:00
dd15c0aa3b src/bin/proxmox-backup-proxy.rs: gather root disk stats 2020-05-25 11:10:07 +02:00
c1b24fbf0b www/DataStoreStatus.js: display swap stats 2020-05-25 10:39:54 +02:00
3f23b17298 src/rrd/rrd.rs: do not wrap error and return ErrorKind::NotFound 2020-05-25 10:30:04 +02:00
c25c9d8dd1 src/bin/proxmox-backup-proxy.rs: gather swap usage stats 2020-05-25 10:25:58 +02:00
84dc6adcc1 src/rrd/cache.rs: display/log error when RRD load fails 2020-05-25 10:18:53 +02:00
0c4344650d src/rrd/rrd.rs: store/verify magic number 2020-05-25 09:21:54 +02:00
4f9513996c src/bin/proxmox-backup-proxy.rs: use block_in_place for rrd update 2020-05-25 08:30:59 +02:00
736edc7a7e src/rrd/rrd.rs: implement DST_COUNTER 2020-05-25 08:14:30 +02:00
2b55de407e src/rrd/rrd.rs: correctly compute derived values
use f64 for time.
2020-05-25 07:02:04 +02:00
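
An illustrative sketch of such a derived-value computation with `f64` time (not the actual `src/rrd/rrd.rs` code):

    // Compute a per-second rate from two counter samples; f64 time avoids
    // the integer truncation this commit fixes.
    fn derive_rate(prev_value: f64, prev_time: f64, value: f64, time: f64) -> f64 {
        let delta = time - prev_time;
        if delta <= 0.0 {
            return f64::NAN; // NAN doubles as UNKNOWN, see the commit below
        }
        (value - prev_value) / delta
    }
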
a608806f65 www/DataStoreStatus.js: display netin/netout 2020-05-24 19:02:35 +02:00
8f0cec2642 src/bin/proxmox-backup-proxy.rs: gather netin/netout stats 2020-05-24 19:02:35 +02:00
0ed9a2b3ae src/config/network.rs: implement is_physical_nic() helper 2020-05-24 19:02:35 +02:00
58edd33d2b src/rrd/rrd.rs: implement DST_DERIVE 2020-05-24 19:02:35 +02:00
4fb05fde17 src/rrd/rrd.rs: restructure whole code 2020-05-24 16:51:28 +02:00
daca4f7888 src/rrd/rrd.rs: reduce size by using f64::NAN as UNKNOWN 2020-05-24 09:09:09 +02:00
4e6585839b src/rrd/rrd.rs: simplify and fix old value deletion 2020-05-24 06:44:06 +02:00
8c981ae379 rrd: fix display interval, try to avoid numeric errors 2020-05-23 16:03:43 +02:00
803ab12ad4 rrd: simplify code 2020-05-23 15:37:17 +02:00
a4a3f7ca5e rrd: pack multiple rrd values into the stat list 2020-05-23 14:03:44 +02:00
ba1c249eec www/DataStoreStatus.js: add test for RRD api 2020-05-23 11:52:26 +02:00
a2f862eed6 add experimental rrd api to get cpu stats 2020-05-23 11:50:53 +02:00
eaeda365e0 start gathering stats using new rrd module 2020-05-23 10:43:08 +02:00
6359dc891a add simple rrd implementation 2020-05-23 10:42:48 +02:00
07ad6470ca src/client/pull.rs: split out pull related code 2020-05-22 08:04:20 +02:00
a6160cdfeb src/bin/proxmox-backup-proxy.rs: schedule sync jobs 2020-05-22 07:50:59 +02:00
183125d576 src/api2/pull.rs: aquire try_shared_chunk_store_lock inside pull_store 2020-05-22 07:24:17 +02:00
a3016d6583 proxmox-backup-manager: add sync-job cli 2020-05-21 11:44:45 +02:00
b29d046e89 proxmox-backup-manager: split out cert.rs 2020-05-21 11:22:20 +02:00
380bd7df97 proxmox-backup-manager: split out datastore.rs 2020-05-21 11:14:34 +02:00
ea6f404e55 proxmox-backup-manager: split out dns.rs 2020-05-21 11:10:58 +02:00
a35a211d9e proxmox-backup-manager: split out network.rs 2020-05-21 11:08:38 +02:00
53e14507c1 proxmox-backup-manager: split out acl.rs 2020-05-21 10:56:46 +02:00
6fa39e53e0 proxmox-backup-manager: split out users.rs 2020-05-21 10:53:06 +02:00
a220a4564a proxmox-backup-manager: start splitting command into several files 2020-05-21 10:46:07 +02:00
6f652b1b3a rename 'job' to 'sync' 2020-05-21 10:29:25 +02:00
b1d4edc769 src/api2/config/job.rs: add job api 2020-05-21 10:16:35 +02:00
b4900286ce src/config/jobs.rs: use SectionConfig for jobs 2020-05-21 10:16:35 +02:00
c681885227 src/bin/proxmox-backup-manager.rs: format output of show commands 2020-05-20 16:47:37 +02:00
ee8b464466 src/tools/systemd.rs: avoid compiler warnings 2020-05-20 16:47:08 +02:00
51c63475e1 ui: add '.' to path regex
since we use the path for datastore ids, which can contain a '.'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-20 16:33:26 +02:00
ce55db66d6 proxmox-backup-manager: add show command for remote and datastore
to show the data for a single item

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-20 16:33:07 +02:00
2882c881e9 api2/access/acl: add path and exact parameter to list_acl
so that we can get only a subset of the acls, filtered by the backend;
also return the digest here

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-20 13:44:36 +02:00
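
An illustrative filter for the new `path`/`exact` parameters (types simplified, not the actual `extract_acl_node_data` code):

    struct AclEntry {
        path: String,
        // role, user, ... elided
    }

    // With 'exact' set, only entries on the path itself are returned;
    // otherwise the whole subtree below it is included as well.
    fn filter_acls(entries: Vec<AclEntry>, path: &str, exact: bool) -> Vec<AclEntry> {
        let prefix = format!("{}/", path);
        entries
            .into_iter()
            .filter(|e| e.path == path || (!exact && e.path.starts_with(&prefix)))
            .collect()
    }
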
c0ac207453 ui: add ACL panel to datastores
by introducing a datastorepanel (a TabPanel) which holds the content
and acl panel for now.

to be able to handle this in the router, we have to change the logic
of how to select the datastore from using the subpath to putting it
into the path (and extracting it when necessary)

if we need this again (e.g. possibly for remotes), we can further
refactor this logic to be more generic

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-20 13:27:13 +02:00
ee1458b61d fixup 2020-05-20 13:27:13 +02:00
0542cfdf4f ui: add ACL panel to Configuration
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-20 13:23:00 +02:00
12e3895399 api2/access/acl: make update_acl a protected api call
since we want to set the owner of the acl config to 'root'
which is only possible when using a protected api call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-20 13:22:41 +02:00
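
A purely conceptual sketch of what "protected" means here (not the proxmox-rs dispatch code): protected calls are handled by the api daemon running as root rather than by the unprivileged proxy, so the handler can write config files owned by root:

    // Conceptual only: the unprivileged proxy forwards protected calls to
    // the privileged api daemon instead of handling them itself.
    enum Dispatch {
        Local,
        ForwardToPrivilegedDaemon,
    }

    fn dispatch(protected: bool) -> Dispatch {
        if protected {
            // only the root daemon may write an acl.cfg owned by 'root'
            Dispatch::ForwardToPrivilegedDaemon
        } else {
            Dispatch::Local
        }
    }
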
11b6391c83 add 'exact' parameter to extract_acl_node_data
so that we can return acls for a single path

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-20 13:22:10 +02:00
2072aeaee6 ui: add UserSelector
this has to be different from pve for now, since the default of
'enabled' is reversed (pve: default disabled, pbs: default enabled)

if we decide to change this either here or in pve, we can refactor
it to the widget-toolkit

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-20 13:22:01 +02:00
b05672579e api2/roles: change return field of role to roleid
to be compatible with the pve api
with this, we can reuse the ui parts (RoleSelector)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-20 13:21:47 +02:00
5160c0e986 api2/acl: add privs array to roles
so that an admin can see which roles have which privileges

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-20 13:21:37 +02:00
1ad9dd08f4 acls: use constnamemap macro for privileges
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-20 13:21:28 +02:00
4d3369cb9a depend on proxmox 0.1.34 2020-05-20 13:21:01 +02:00
25829a879b src/bin/proxmox-backup-proxy.rs: schedule prune jobs 2020-05-20 13:00:53 +02:00
872062ee9f src/config/datastore.rs: change prune types from i64 to u64 2020-05-20 13:00:13 +02:00
67f7ffd0db src/config/datastore.rs: add prune settings 2020-05-20 11:29:59 +02:00
0fafac2492 src/api2/access/user.rs: remove useless description
The description is not used at all if we refer to a type.
2020-05-20 11:27:58 +02:00
49ff10921c src/api2/types.rs: define PRUNE_SCHEMA_KEEP_* 2020-05-20 10:13:38 +02:00
479e4932b5 src/tools/systemd/parse_time.rs: improve error message 2020-05-20 09:43:16 +02:00
dd7a7eae8f src/bin/proxmox-backup-manager.rs: add completion helper for gc-schedule 2020-05-20 09:42:51 +02:00
8545480a31 src/bin/proxmox-backup-proxy.rs: add simple task scheduler for garbage collection 2020-05-20 08:59:45 +02:00
d6c28ddf84 src/tools/systemd/time.rs: export parse/verify 2020-05-20 08:38:39 +02:00
42fdbe5112 src/config/datastore.rs: add gc-schedule property 2020-05-20 08:38:10 +02:00
a67b70c154 depend on proxmox 0.1.33 2020-05-20 06:29:06 +02:00
9c5c383bff user: create default root user as typed struct
we added a userid attribute to the User struct, but missed that we
created the default user without that attribute via the json! macro,
which led to a runtime panic on deserialization

by using the struct directly, such errors will be caught by the compiler
in the future

with this change, we can remove the serde_json import here

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-20 06:09:08 +02:00
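
A sketch of the difference (the `User` fields here are simplified stand-ins for the real struct in src/config/user.rs): constructing the default user as a typed struct instead of via `json!` turns a forgotten field into a compile error:

    struct User {
        userid: String,
        comment: Option<String>,
        enable: Option<bool>,
    }

    fn default_root_user() -> User {
        // Leaving out 'userid' here fails to compile, instead of panicking
        // at deserialization time as the json! version did.
        User {
            userid: "root@pam".to_string(),
            comment: None,
            enable: Some(true),
        }
    }
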
7d4e362993 depend on proxmox 0.1.32, src/api2/access/user.rs: simplify code 2020-05-19 12:58:46 +02:00
88acc86129 ui: add UserManagement panel
to add/edit users

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-19 09:34:41 +02:00
1d8ef0dcf7 ui: use Logo/RealmComboBox from widget-toolkit
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-19 09:34:19 +02:00
522c0da0a0 use new 'id_property' for user::User and use it in api calls
this allows us to return a user::User (or Vec<> of it)
instead of a generic serde value

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-19 09:33:56 +02:00
16c75c580b adapt to changes of SectionConfigPlugin
it now requires an Option<String> for the optional id_property

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-19 09:28:45 +02:00
07ce44a633 avoid compiler warnings 2020-05-19 07:03:41 +02:00
6c5024b050 Cargo.toml: remove native-tls
it's not used anymore.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-05-18 13:52:56 +02:00
b2c9c793ad debcargo.toml: add missing doc build-dependencies
and mark them accordingly.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-05-18 13:48:16 +02:00
79166b3935 debcargo.toml: reflow dependencies
to make changes easier to track

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-05-18 13:08:10 +02:00
e8d1da6a15 depend on proxmox 0.1.31 - use Value to store result metadata 2020-05-18 09:57:35 +02:00
2e686e0a63 update dependencies
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-05-18 09:08:09 +02:00
7a314d18f7 src/tools/systemd/parse_time.rs: check max values 2020-05-16 13:13:50 +02:00
2d08c97ae2 CalendarEvent - compute_next_value: use change tracking to avoid repeated testing 2020-05-16 10:32:27 +02:00
50ce1f987d CalendarEvent - compute_next_value: support seconds 2020-05-16 10:21:24 +02:00
d1a5ffdf78 src/tools/systemd/tm_editor.rs: new helper class 2020-05-16 10:09:41 +02:00
99baf7afcc CalendarEvent: test and fix repeated values 2020-05-16 07:43:51 +02:00
fed270bf3f CalendarEvent: speedup/simplify repetition tests 2020-05-16 07:09:53 +02:00
e05b637c73 src/tools/systemd/parse_time.rs: move parser into separate file 2020-05-16 06:53:15 +02:00
2ee6b3fdb9 src/tools/systemd/time.rs: implement compute_next_event 2020-05-16 06:33:03 +02:00
f3a96b2cdb renamed: src/tools/systemd/parser.rs -> src/tools/systemd/config.rs 2020-05-16 06:32:28 +02:00
a260c74a12 src/tools/systemd/time.rs: add helpers to compute CalendarEvents 2020-05-15 17:55:54 +02:00
52c70f3f5e depend on proxmox 0.1.30 2020-05-15 17:51:52 +02:00
30f577248b src/api2/node/time.rs: avoid custom unsafe readlink implementations 2020-05-15 06:50:07 +02:00
00491c0230 src/tools/systemd/parser.rs: use different setups for service and timer files, code cleanup 2020-05-14 13:55:13 +02:00
2ebdbac1c4 depend on nom, add parser for systemd calendar events and time spans 2020-05-14 12:18:30 +02:00
b4a85a3fa8 update pin-utils dep to stable version
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-05-14 11:46:05 +02:00
f486e9e50e add systemd configuration file parser/writer, start job configuration 2020-05-12 13:07:49 +02:00
7f5a27d302 depend on proxmox 0.1.29 2020-05-12 13:03:09 +02:00
40a36bcc57 depend on proxmox 0.1.28 2020-05-12 09:35:57 +02:00
b61d344f01 TODO.rst: update 2020-05-12 09:35:37 +02:00
65dab0266c proxmox-backup-manager: add completion helper for port list 2020-05-08 17:28:04 +02:00
525008f7ad proxmox-backup-manager - network list: render ports/slaves
And render the interface name as the first column.
2020-05-08 16:07:23 +02:00
5bef0f43da src/config/network.rs - check_bridge_ports: correctly check vlan ports 2020-05-08 15:51:47 +02:00
0f6bdbb01f src/config/network.rs - write_config: add more consistency checks 2020-05-08 14:31:38 +02:00
a4ccb46176 src/config/network.rs: avoid duplicate port usage 2020-05-08 11:15:00 +02:00
80bf084876 src/config/network.rs: do not combine entries
It is unclear when and how to write combined entries ...
2020-05-08 10:20:57 +02:00
db5672e83e src/config/network.rs: always write bridge_ports and bond_slaves
So that we can reliably detect the interface type.
2020-05-08 09:58:03 +02:00
86a5d56c4e proxmox-backup-manager: add network create command 2020-05-08 09:55:56 +02:00
3dd27a3bf8 src/api2/node/network.rs: add protected flag to revert 2020-05-08 09:30:25 +02:00
3aedb73816 src/api2/node/network.rs: pass bridge_ports and slaves as property strings
To make it compatible with pve.
2020-05-08 08:49:17 +02:00
bab5d18c3d src/config/network.rs: implement bond_mode
and rename bond_slaves to slaves to make it compatible with pve.
2020-05-07 14:07:45 +02:00
c2ffc68554 src/api2/node/network.rs: cleanup - factor out check_duplicate_gateway 2020-05-07 11:26:30 +02:00
9651833130 src/api2/node/network.rs: allow to create bridge and bond 2020-05-07 11:09:12 +02:00
7b22acd0c2 src/config/network.rs: make it compatible with pve
and depend on proxmox 0.1.26
2020-05-07 09:28:25 +02:00
5751e49566 src/server/worker_task.rs: implement and use status command 2020-05-07 09:27:33 +02:00
197de83ffa src/server/command_socket.rs: do not abort loop on client errors, allow backup gid 2020-05-07 09:27:33 +02:00
10effc9849 add tools/disks.rs (work in progress...)
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-05-05 10:14:42 +02:00
139f891087 TODO.rst: update 2020-05-05 09:22:01 +02:00
99641a6bbb garbage_collect: call fail_on_abort to abort GC when requested. 2020-05-05 09:06:34 +02:00
74f7240b8d src/bin/proxmox-backup-client.rs: add human readable date to prune list 2020-05-05 07:33:58 +02:00
a66d5898a1 docs/administration-guide.rst: fix prune command output 2020-05-05 07:24:27 +02:00
db1e061dcb src/bin/proxmox-backup-client.rs: correctly format prune result list. 2020-05-05 06:45:37 +02:00
96feecd621 administration-guide.rst: update snapshot list output 2020-05-04 13:14:03 +02:00
f9dcfa4149 administration-guide.rst: add section "Proxmox VE integration" 2020-05-04 12:41:38 +02:00
25cf09065f docs: use OpenSans as main font
Most people also read PDFs online ...
2020-05-04 10:48:09 +02:00
fc598cdbe1 docs: use better fonts for PDFs
The fonts XCharter and Lato have better quality.
2020-05-04 10:15:27 +02:00
bca294a17c docs/conf.py: avoid font scale option 2020-05-04 09:02:10 +02:00
a02e8b1e95 docs/conf.py: fix baselineskip in code-blocks with scaled font 2020-05-03 09:22:01 +02:00
26d29e0ec7 pdf docs: scale down monospace font 2020-05-03 08:23:35 +02:00
15d74eaaf4 use xetex to generate pdf
To correctly handle unicode art in code blocks...
2020-05-03 07:48:55 +02:00
8df51d4852 administration-guide.rst: add role definitions 2020-05-02 16:40:20 +02:00
8f3b3cc1f9 administration-guide.rst: add example to disable/remove a user 2020-05-02 11:21:05 +02:00
17ec699d79 administration-guide.rst: start user management docs 2020-05-02 11:11:36 +02:00
b080583ba8 src/bin/proxmox-backup-manager.rs: improve user list output 2020-05-01 16:22:50 +02:00
32d83bb34c TODO: update 2020-05-01 09:02:36 +02:00
e325dbd4a3 www/Dashboard.js: fix status url 2020-04-30 12:58:41 +02:00
ecb53af6d9 add ServerStatus.js GUI with Reboot and Shutdown buttons 2020-04-30 12:12:20 +02:00
ed751dc2ab src/api2/node/status.rs: rework api, implement reboot and shutdown 2020-04-30 11:52:40 +02:00
ca9dfe5fa4 src/api2/node/tasks.rs: use api macro features for default values 2020-04-30 11:51:56 +02:00
720af9f69b src/api2/node/tasks.rs: allow users to list/access their own tasks 2020-04-30 10:05:50 +02:00
f1490da82a use reasonable acl paths (fixup) 2020-04-30 09:32:13 +02:00
74c08a5782 use reasonable acl paths 2020-04-30 09:30:00 +02:00
7f402dafb7 TODO.rst: update 2020-04-30 07:42:57 +02:00
bd88dc4116 cached_config: avoid parsing non-existent files multiple times 2020-04-30 07:04:23 +02:00
ebe556d0e7 www/DataStoreStatus.js: define Model for datastore list
We want to use the admin/datastore api (instead of config/datastore),
to get the restricted list of datastores.
2020-04-30 06:50:45 +02:00
f9e3b1104e change index to templates using handlebars
using a handlebars instance in ApiConfig to cache the templates as
long as possible; this is currently ok, as the index template can
only change when the whole package changes

if we split this in the future, we will have to trigger a reload of
the daemon on gui package upgrades (so that the template gets reloaded)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-04-29 17:05:53 +02:00
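
A sketch of the caching idea with the `handlebars` crate (a declared dependency); the template string and names are placeholders, not the real `index.hbs` handling:

    use handlebars::Handlebars;

    // Register (and thereby parse) the index template once at startup and
    // keep the registry alive for the daemon's lifetime.
    fn make_registry() -> Result<Handlebars<'static>, handlebars::TemplateError> {
        let mut registry = Handlebars::new();
        registry.register_template_string("index", "<html>{{NodeName}}</html>")?;
        Ok(registry)
    }

    // Each request then renders from the cached, pre-parsed template.
    fn render_index(registry: &Handlebars) -> Result<String, handlebars::RenderError> {
        registry.render("index", &serde_json::json!({ "NodeName": "pbs" }))
    }
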
bc0d03885c use proxmox 0.1.25, use new EnumEntry feature 2020-04-29 13:01:24 +02:00
acb428cdec add DataStoreStatus.js dummy 2020-04-29 11:22:05 +02:00
de1f8f1d36 Revert "gui: display DataStoreConfig above DataStoreContent"
This reverts commit 555dfe7b8e.
2020-04-29 11:09:35 +02:00
b9f2f761bb avoid problems with missing acl.cfg and user.cfg 2020-04-29 10:40:42 +02:00
30fb602578 src/api2/admin/datastore.rs - get_datastore_list: only return name and comment
We don't want to leak the full configuration to users with limited access permissions.
Please use the api2::config::datastore api to get the full configuration.
2020-04-29 09:21:34 +02:00
0a00f6e01c src/api2/config/datastore.rs: add delete property to update method 2020-04-29 09:09:59 +02:00
30003baaa4 src/api2/config/remote.rs: fix white space 2020-04-29 09:09:39 +02:00
5211705ff1 src/api2/config/remote.rs: add delete parameter to update method 2020-04-29 09:04:17 +02:00
ec67af9af3 src/api2/pull.rs: require Datastore.Prune if delete flag is set. 2020-04-29 07:19:32 +02:00
8247db5b39 src/config/acl.rs: introduce privileges and roles for remotes 2020-04-29 07:03:44 +02:00
409f44247b fix api2::types::ACL_ROLE_SCHEMA
make sure we list all roles ...
2020-04-28 13:25:02 +02:00
dd335b77f5 src/config/acl.rs - fix regression tests 2020-04-28 11:16:15 +02:00
6f6aa95abb add Datastore.Backup, Datastore.PowerUser and Datastore.Reader role 2020-04-28 11:07:25 +02:00
54552dda59 implement backup ownership, improve datastore access permissions 2020-04-28 10:22:25 +02:00
21690bfaef depend on proxmox 0.1.24 2020-04-28 08:23:41 +02:00
1347b1152d src/config/cached_user_info.rs - lookup_privs: correctly handle superuser 2020-04-27 13:22:03 +02:00
d00e1a216f src/config/acl.rs: introduce more/better datastore privileges 2020-04-27 07:13:50 +02:00
9c7fe29dfc src/config/acl.rs: rename PRIV_DATASTORE_ALLOCATE to PRIV_DATASTORE_MODIFY 2020-04-27 06:50:35 +02:00
14627d671a src/bin/proxmox-backup-manager.rs: add dns sub command
Also improved the DNS api, added a --delete option.
2020-04-26 08:23:23 +02:00
76227a6acd src/bin/proxmox-backup-manager.rs: fix node parameter handling 2020-04-25 17:20:22 +02:00
6830608855 depend on proxmox 0.1.23 2020-04-25 17:12:15 +02:00
26d9aebc28 move src/api2/config/network.rs to src/api2/node/network.rs
So that we have the same api path for network config as pve.
2020-04-25 17:00:38 +02:00
1ca540a63b src/config/network.rs: auto-add lo, and implement a few regression tests 2020-04-24 12:57:11 +02:00
9094186a57 xattr: cleanup: don't use pxar types in the API
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-04-24 11:23:48 +02:00
27a3decbfe xattr: api cleanup
Make `flistxattr()` return a `ListXAttr` helper which
provides an iterator over `&CStr`.

This exposes the property that xattr names are a
zero-terminated string without simply being an opaque
"byte vector". Using &[u8] as a type here is too lax.

Also let `fgetxattr` take a `CStr`. While this may be a
burden on the caller, we usually already have
zero-terminated strings on the call site. Currently we only
use this method coming from `flistxattr` after all.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-04-24 10:56:52 +02:00
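
A sketch of the `ListXAttr` idea (not the actual implementation): the kernel returns the names as one buffer of zero-terminated strings back to back, which can then be handed out as `&CStr` values:

    use std::ffi::CStr;

    struct ListXAttr {
        data: Vec<u8>, // raw flistxattr(2) output: name\0name\0...
    }

    impl ListXAttr {
        fn names(&self) -> Vec<&CStr> {
            let mut names = Vec::new();
            let mut rest = &self.data[..];
            // Split after each terminating NUL and keep it, so every chunk
            // is a valid zero-terminated C string.
            while let Some(pos) = rest.iter().position(|&b| b == 0) {
                let (name, tail) = rest.split_at(pos + 1);
                names.push(CStr::from_bytes_with_nul(name).expect("no interior NUL"));
                rest = tail;
            }
            names
        }
    }
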
9af76ef075 xattr: use checked_mul to increase size
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-04-24 10:56:52 +02:00
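
The pattern in question, roughly (buffer handling simplified, assuming a non-empty initial allocation):

    // Grow a buffer with overflow-checked doubling instead of a plain
    // 'size * 2' that could silently wrap around.
    fn grow_xattr_buffer(buf: &mut Vec<u8>) -> Result<(), &'static str> {
        let new_len = buf.len().checked_mul(2).ok_or("xattr buffer size overflow")?;
        buf.resize(new_len, 0);
        Ok(())
    }
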
00ec8d1685 tools: pub use Fd from proxmox crate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-04-24 10:56:52 +02:00
fd7c0979b4 src/bin/proxmox-backup-manager.rs: implement network revert 2020-04-24 10:45:49 +02:00
c67bc9c35c src/bin/proxmox-backup-manager.rs: new command to show pending network changes 2020-04-24 10:27:43 +02:00
3181f9b625 src/bin/proxmox-backup-manager.rs: only show pending changes with "text" format 2020-04-24 10:16:57 +02:00
2eefd9aee1 src/config/network.rs: implement network reload, set "changes" attribute 2020-04-24 09:55:46 +02:00
8a6b86b8a7 src/config/network.rs: use a simple String for comments 2020-04-24 07:46:08 +02:00
96d9478668 src/config/network/parser.rs: correctly detect vanished interfaces 2020-04-24 07:26:54 +02:00
10a9be45bd src/api2/config/network.rs: implement update/delete comments 2020-04-23 16:08:35 +02:00
5f60a58fd5 src/config/network.rs: support interface comments, cleanups 2020-04-23 15:54:30 +02:00
659c3be3d5 src/config/network.rs: avoid newline after family options 2020-04-23 11:30:41 +02:00
5e4e88e83f src/api2/config/network.rs: implement update/delete for bridge_ports and bond_slaves 2020-04-23 11:21:27 +02:00
339965d720 src/api2/config/network.rs: only allow one default gateway 2020-04-23 10:37:40 +02:00
c38b4bb8b2 src/config/network.rs: do not allow to change interface type 2020-04-23 09:43:38 +02:00
42fbe91a34 src/config/network.rs: parse bond-slaves 2020-04-23 09:31:10 +02:00
1d9a68c2fc src/config/network.rs: parse bridge-ports 2020-04-23 09:24:17 +02:00
02269f3dba src/config/network.rs: introduce NetworkInterfaceType 2020-04-23 08:45:03 +02:00
d5ca9bd5df src/config/network.rs: cleanup (new helper combine_entry) 2020-04-23 07:54:12 +02:00
02e36d96ad src/config/network.rs: write changes to interfaces.new 2020-04-23 07:19:29 +02:00
2c18efd902 src/config/network.rs: use a single mtu setting (instead of mtu_v4 and mtu_v6) 2020-04-23 07:07:14 +02:00
4cb6bd894c src/bin/proxmox-backup-manager.rs: improve network list output format 2020-04-23 06:44:55 +02:00
b1564af25a src/bin/proxmox-backup-manager.rs: format datastore list output 2020-04-22 17:37:20 +02:00
bf004ecd87 src/bin/proxmox-backup-manager.rs: format network list output 2020-04-22 17:14:52 +02:00
f1026a5aa9 src/api2/config/network.rs: allow to update 'auto' flag 2020-04-22 16:46:46 +02:00
3fce3bc36e src/config/network/parser.rs: parse MTU settings 2020-04-22 13:44:51 +02:00
f8e7ac686a src/config/network.rs: only save attributes used by the configuration method 2020-04-22 12:42:09 +02:00
c016482c7a src/api2/config/network.rs: implement delete property 2020-04-22 12:19:31 +02:00
27f2c23049 src/api2/config/network.rs: allow to update configuration method 2020-04-22 11:32:36 +02:00
df6bb03d0e src/api2/config/network.rs: improve network api 2020-04-22 10:54:07 +02:00
e2d940b949 src/config/network/parser.rs: remove debug println 2020-04-22 10:53:26 +02:00
0c226bc173 src/config/network/helper.rs: fix CIDR regex 2020-04-22 10:52:31 +02:00
76cf5208cf src/api2/types.rs: add schemas for IP/CIDR 2020-04-22 10:28:53 +02:00
2ea7bf1b3d src/api2/config/datastore.rs: fix method docs 2020-04-22 08:53:16 +02:00
8b57cd4441 src/config/network.rs: remove netmask support
rely on cidr instead.
2020-04-22 08:45:13 +02:00
68da20bf62 src/api2/types.rs: define NETWORK_INTERFACE_NAME_SCHEMA 2020-04-21 17:54:52 +02:00
c357260d09 src/config/network.rs: move type definitions to src/api2/types.rs 2020-04-21 17:25:05 +02:00
7e02d08cd0 rename ConfigMethod to NetworkConfigMethod 2020-04-21 17:17:57 +02:00
ca0e534796 src/api2/config/network.rs: start network configuration api 2020-04-21 14:28:26 +02:00
904e988667 src/config/network.rs: implement load/save 2020-04-21 12:55:33 +02:00
3f129233be src/config/network.rs: add Interface flags 'exists' and 'active' 2020-04-21 11:46:56 +02:00
a9bb491e35 src/config/network.rs: cleanup autostart flag handling 2020-04-21 11:06:22 +02:00
1ec7f8a0dd src/config/network/helper.rs: new helper get_network_interfaces() 2020-04-21 10:32:54 +02:00
92310d585c src/config/network.rs: simplify code 2020-04-20 18:10:15 +02:00
f34d4401f7 src/config/network.rs: read/write /etc/network/interfaces
Start implementing a recursive descent parser.
2020-04-20 14:15:57 +02:00
6e695960ca src/config/cached_user_info.rs: cache it up to 5 seconds 2020-04-18 08:49:20 +02:00
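
A sketch of such a time-bounded cache (the cached type is a placeholder; the real code caches the parsed user/acl info):

    use std::time::{Duration, Instant};

    struct CachedConfig {
        entry: Option<(Instant, String)>, // (load time, parsed data placeholder)
    }

    impl CachedConfig {
        // Reuse the cached value for up to 5 seconds before reloading.
        fn get(&mut self, load: impl Fn() -> String) -> String {
            if let Some((stamp, value)) = &self.entry {
                if stamp.elapsed() < Duration::from_secs(5) {
                    return value.clone();
                }
            }
            let value = load();
            self.entry = Some((Instant::now(), value.clone()));
            value
        }
    }
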
365f0f720c fix permission tests using non-uri parameters
We need to do those tests inside the function body instead...
2020-04-18 08:23:04 +02:00
a737179eb4 src/config/cached_user_info.rs: new check_privs helper 2020-04-18 08:09:34 +02:00
bb072ba49c src/api2/access.rs: cleanup 2020-04-18 07:28:25 +02:00
ff329f970b src/api2/types.rs: use anyhow::Error in test cases 2020-04-18 07:05:31 +02:00
f7d4e4b506 switch from failure to anyhow
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-04-17 18:43:30 +02:00
404d78c41e src/api2/pull.rs: add access permission 2020-04-17 15:27:04 +02:00
1bfc1efa50 src/api2/subscription.rs: add access permissions 2020-04-17 15:14:28 +02:00
73ce1d1146 src/api2/reader.rs: add access permissions 2020-04-17 15:01:56 +02:00
70e5f2461d src/api2/config/remote.rs: add access permissions 2020-04-17 14:57:26 +02:00
c0ef209aeb src/api2/config/datastore.rs: impl digest check for delete, add access permissions 2020-04-17 14:51:29 +02:00
9f9f7eefa3 src/api2/backup.rs: add access permissions 2020-04-17 14:40:20 +02:00
bb34b58910 src/api2/admin/datastore.rs: add access permissions - first try
We need to refine this later (introduce backup owner concept?)
2020-04-17 14:36:27 +02:00
5972def5ec acl: change path "storage" to "datastore" 2020-04-17 14:15:44 +02:00
aa90ced3bf src/api2/access/role.rs: use schema ACL_ROLE_SCHEMA 2020-04-17 14:14:06 +02:00
ca257c8097 move type defs from src/api2/access/acl.rs to src/api2/types.rs 2020-04-17 14:13:15 +02:00
3fff55b293 src/api2/access/role.rs: new api to list roles 2020-04-17 14:03:24 +02:00
4f66423fcc src/api2/access/user.rs: add access permissions 2020-04-17 11:04:36 +02:00
d4f020f4c5 src/api2/access/user.rs: add access permissions 2020-04-17 10:08:45 +02:00
d28ddb8e04 src/api2/access/acl.rs: add access permissions 2020-04-17 10:03:09 +02:00
83b6a7cf71 src/api2/node/tasks.rs: use api macro, implement access permissions 2020-04-16 17:47:21 +02:00
e4681f9f71 src/api2/node/syslog.rs: add access permissions 2020-04-16 17:08:19 +02:00
b5037fa8ed src/api2/node/status.rs: add access permissions 2020-04-16 17:05:09 +02:00
9989d2c4e9 src/server/rest.rs: reduce delay for permission error to 500ms 2020-04-16 12:56:34 +02:00
1cf7bbf412 src/api2/node/services.rs: add access permissions 2020-04-16 12:47:16 +02:00
68ed0c629d src/api2/node/journal.rs: add access permissions 2020-04-16 12:47:16 +02:00
4b40148caa start impl. access permissions 2020-04-16 12:47:16 +02:00
423e656163 src/config/cached_user_info.rs: new helper class 2020-04-16 10:05:16 +02:00
1ce8a5d0b7 depend on proxmox 0.1.21 2020-04-16 10:04:00 +02:00
109d7817cd src/config/user.rs - cached_config: do not store/return digest 2020-04-15 11:35:57 +02:00
5354511fd0 src/config/acl.rs: implement cached_config 2020-04-15 11:30:47 +02:00
bd098a7f77 src/api2/node/dns.rs: use api macro (cleanup) 2020-04-15 10:09:18 +02:00
8d048af2bf acl: improve NoAccess handling 2020-04-15 08:11:43 +02:00
4f3db187cf Docu: first proof reading
This is a first proof reading of the currently existing documentation.

fixes (hopefully all):
* spelling
* grammar

Tries to increase readability and ease of understanding by simplifying
and restructuring some sentences and paragraphs. Filler words which add
to the cognitive load but don't add anything are removed
(most notably `also`).
2020-04-15 06:52:59 +02:00
9a328319dd pxar extract: remove pattern from arg_param, add target instead 2020-04-15 06:41:37 +02:00
7e3d2e5b41 pxar create: remove exclude from arg_param 2020-04-15 06:31:46 +02:00
9c06f6c292 fix previous commit - use result. 2020-04-14 17:48:10 +02:00
9f4e47dd93 acl update: check path 2020-04-14 17:23:48 +02:00
d83175dd69 acl update: check if user exist. 2020-04-14 13:46:27 +02:00
68ccdf09a4 src/config/user.rs: implement user config cache 2020-04-14 13:45:45 +02:00
9765092ede acl api: implement update 2020-04-14 10:16:49 +02:00
ed3e60ae69 start ACL api 2020-04-13 11:09:44 +02:00
a83eab3c4d acl: use BTreeMap and BTreeSet to avoid sort() 2020-04-12 17:13:53 +02:00
0815ec7e65 acl: implement roles(), add regression tests. 2020-04-12 13:06:50 +02:00
5c6cdf9815 add acl config 2020-04-11 12:24:26 +02:00
9abcae1b0e gui: improve login view (use realms) 2020-04-09 13:37:14 +02:00
b88f9c5b1e PASSWORD_SCHEMA: set max_length to 1024 (for tickets) 2020-04-09 13:35:58 +02:00
879546aff6 api: add default property to domain list 2020-04-09 13:35:08 +02:00
73b40e9b46 api: correctly sort access subdirmap 2020-04-09 13:34:07 +02:00
708db4b3ae api: add list_domains 2020-04-09 11:36:45 +02:00
685e13347e api: move config/user to access/users, implement change_password
To make it similar to the pve api
2020-04-09 10:21:24 +02:00
7d817b0358 implement auth framework 2020-04-08 14:06:15 +02:00
579728c641 add user configuration 2020-04-08 14:06:15 +02:00
cf459b1982 gc: log pending removals 2020-04-06 09:50:40 +02:00
d16122cd87 gui: preview prune selection 2020-04-01 14:14:44 +02:00
dda7015497 prune api: return a usable result (we run synchronous anyways) 2020-04-01 12:24:28 +02:00
5b5ca60a07 fix 'keep-monthly' field name
else the backend complains about a non-existent parameter

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-03-31 08:46:52 +02:00
aeee4329b0 gui - DataStoreContent: avoid useless icons, display file path 2020-03-26 18:01:04 +01:00
5f44899207 gui - DataStoreContent: move control code into controller (cleanup) 2020-03-26 17:23:51 +01:00
b1127fd0d0 gui: add prune dialog 2020-03-26 13:23:28 +01:00
4299ca727c src/server/rest.rs: use correct formatter 2020-03-26 12:54:20 +01:00
3383973532 gui: cleanup DataStoreContent.js 2020-03-26 11:17:15 +01:00
555dfe7b8e gui: display DataStoreConfig above DataStoreContent 2020-03-26 08:38:35 +01:00
e8f0ad19af gui: use a tree panel for DataStoreContent 2020-03-25 15:17:28 +01:00
a83ee10c49 depend on proxmox 0.1.20 2020-03-25 15:17:16 +01:00
9abc1166b0 bump proxmox dependency to 0.1.19
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-03-19 10:12:33 +01:00
99c287861e add 'rsync' to build_depends
a 'make deb' fails without rsync installed (a pxar test needs it)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-03-18 16:24:33 +01:00
6650a242fb rewrite future select in upgrade_to_backup_protocol using select macro
and handle all ok/err cases with at least logging

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-03-18 11:33:59 +01:00
66b4593b04 fix typo
s/Nuber/Number/
s/backups/Backups/

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-03-17 16:13:54 +01:00
0e7ab0567c buildsys: add missing dependency
required for the docs build when building the deb packages

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-03-16 14:54:24 +01:00
10426c1750 Makefile - upload: upload to correct product repos 2020-03-03 10:56:07 +01:00
271 changed files with 33151 additions and 12174 deletions

1
.gitignore vendored
View File

@ -3,3 +3,4 @@ local.mak
**/*.rs.bk
/etc/proxmox-backup.service
/etc/proxmox-backup-proxy.service
build/

View File

@ -1,6 +1,6 @@
[package]
name = "proxmox-backup"
version = "0.1.3"
version = "0.8.10"
authors = ["Dietmar Maurer <dietmar@proxmox.com>"]
edition = "2018"
license = "AGPL-3"
@ -14,44 +14,54 @@ name = "proxmox_backup"
path = "src/lib.rs"
[dependencies]
base64 = "0.10"
apt-pkg-native = "0.3.1" # custom patched version
base64 = "0.12"
bitflags = "1.2.1"
bytes = "0.5"
chrono = "0.4" # Date and time library for Rust
crc32fast = "1"
endian_trait = { version = "0.6", features = ["arrays"] }
failure = "0.1"
anyhow = "1.0"
futures = "0.3"
h2 = { version = "0.2", features = ["stream"] }
handlebars = "3.0"
http = "0.2"
hyper = "0.13"
lazy_static = "1.4"
libc = "0.2"
log = "0.4"
native-tls = "0.2"
nix = "0.16"
num-traits = "0.2"
once_cell = "1.3.1"
openssl = "0.10"
pam = "0.7"
pam-sys = "0.5"
percent-encoding = "2.1"
pin-utils = "0.1.0-alpha"
proxmox = { version = "0.1.18", features = [ "sortable-macro", "api-macro" ] }
pin-utils = "0.1.0"
pathpatterns = "0.1.2"
proxmox = { version = "0.3.3", features = [ "sortable-macro", "api-macro", "websocket" ] }
#proxmox = { git = "ssh://gitolite3@proxdev.maurer-it.com/rust/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox-fuse = "0.1.0"
pxar = { version = "0.3.0", features = [ "tokio-io", "futures-io" ] }
#pxar = { path = "../pxar", features = [ "tokio-io", "futures-io" ] }
regex = "1.2"
rustyline = "5.0.5"
rustyline = "6"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
siphasher = "0.3"
syslog = "4.0"
tokio = { version = "0.2.9", features = [ "blocking", "fs", "io-util", "macros", "rt-threaded", "signal", "stream", "tcp", "time", "uds" ] }
tokio = { version = "0.2.9", features = [ "blocking", "fs", "dns", "io-util", "macros", "process", "rt-threaded", "signal", "stream", "tcp", "time", "uds" ] }
tokio-openssl = "0.4.0"
tokio-util = { version = "0.2.0", features = [ "codec" ] }
tokio-util = { version = "0.3", features = [ "codec" ] }
tower-service = "0.3.0"
udev = ">= 0.3, <0.5"
url = "2.1"
#valgrind_request = { git = "https://github.com/edef1c/libvalgrind_request", version = "1.1.0", optional = true }
walkdir = "2"
xdg = "2.2"
zstd = { version = "0.4", features = [ "bindgen" ] }
nom = "5.1"
[features]
default = []

View File

@ -37,10 +37,16 @@ CARGO ?= cargo
COMPILED_BINS := \
$(addprefix $(COMPILEDIR)/,$(USR_BIN) $(USR_SBIN) $(SERVICE_BIN))
DEBS= ${PACKAGE}-server_${DEB_VERSION}_${ARCH}.deb ${PACKAGE}-client_${DEB_VERSION}_${ARCH}.deb
export DEB_VERSION DEB_VERSION_UPSTREAM
SERVER_DEB=${PACKAGE}-server_${DEB_VERSION}_${ARCH}.deb
SERVER_DBG_DEB=${PACKAGE}-server-dbgsym_${DEB_VERSION}_${ARCH}.deb
CLIENT_DEB=${PACKAGE}-client_${DEB_VERSION}_${ARCH}.deb
CLIENT_DBG_DEB=${PACKAGE}-client-dbgsym_${DEB_VERSION}_${ARCH}.deb
DOC_DEB=${PACKAGE}-docs_${DEB_VERSION}_all.deb
DEBS=${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB}
DSC = rust-${PACKAGE}_${DEB_VERSION}.dsc
DESTDIR=
@ -63,10 +69,12 @@ doc:
.PHONY: build
build:
rm -rf build
rm -f debian/control
debcargo package --config debian/debcargo.toml --changelog-ready --no-overlay-write-back --directory build proxmox-backup $(shell dpkg-parsechangelog -l debian/changelog -SVersion | sed -e 's/-.*//')
sed -e '1,/^$$/ ! d' build/debian/control > build/debian/control.src
cat build/debian/control.src build/debian/control.in > build/debian/control
rm build/debian/control.in build/debian/control.src
cp build/debian/control debian/control
rm build/Cargo.lock
find build/debian -name "*.hint" -delete
$(foreach i,$(SUBDIRS), \
@ -74,18 +82,21 @@ build:
.PHONY: proxmox-backup-docs
proxmox-backup-docs: $(DOC_DEB)
$(DOC_DEB): build
$(DOC_DEB) $(DEBS): proxmox-backup-docs
proxmox-backup-docs: build
cd build; dpkg-buildpackage -b -us -uc --no-pre-clean
lintian $(DOC_DEB)
# copy the local target/ dir as a build-cache
.PHONY: deb
deb: $(DEBS)
$(DEBS): build
$(DEBS): deb
deb: build
cd build; dpkg-buildpackage -b -us -uc --no-pre-clean --build-profiles=nodoc
lintian $(DEBS)
.PHONY: deb-all
deb-all: $(DOC_DEB) $(DEBS)
.PHONY: dsc
dsc: $(DSC)
$(DSC): build
@ -135,7 +146,8 @@ install: $(COMPILED_BINS)
$(MAKE) -C docs install
.PHONY: upload
upload: ${DEBS}
upload: ${SERVER_DEB} ${CLIENT_DEB} ${DOC_DEB}
# check if working directory is clean
git diff --exit-code --stat && git diff --exit-code --stat --staged
tar cf - ${DEBS} | ssh -X repoman@repo.proxmox.com upload --product pbs --dist buster
tar cf - ${SERVER_DEB} ${SERVER_DBG_DEB} ${DOC_DEB} | ssh -X repoman@repo.proxmox.com upload --product pbs --dist buster
tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pbs,pve" --dist buster

View File

@ -1,22 +1,37 @@
TODO list for Proxmox Backup
============================
* user management api
* disk management api
* start writing server GUI
* improve catalog shell commands
* improve user documentation
GUI
===
* fix network/dns GUI (network/dns api changed)
* user/acl/permission management GUI
* implement GUI to configure remotes
* implement fancy DatastoreStatus.js dashboard
* implement PVE GUI to add PBS storage (with convenient copy/paste
functionality, like we have for cluster join)
Chores:
=======
* move tools/xattr.rs and tools/acl.rs to proxmox/sys/linux/
* remove pbs-* systemd timers and services on package purge
Suggestions
===========

335
debian/changelog vendored
View File

@ -1,3 +1,338 @@
rust-proxmox-backup (0.8.10-1) unstable; urgency=medium
* ui: acl: add improved permission selector
* services: make reload safer and default to it in gui
* ui: rework DataStore content Panel
* ui: add search box to DataStore content
* ui: DataStoreContent: keep selection and expansion on reload
* upload_chunk: allow upload of empty blobs
* fix #2856: also check whole device for device mapper
* ui: fix error when reloading DataStoreContent
* ui: fix in-progress snapshots always showing as "Encrypted"
* update to pxar 0.3 to support negative timestamps
* fix #2873: if --pattern is used, default to not extracting
* finish_backup: test/verify manifest at server side
* finish_backup: add chunk_upload_stats to manifest
* src/api2/admin/datastore.rs: add API to get/set Notes for backups
* list_snapshots: Returns new "comment" property (first line from notes)
* pxar: create: attempt to use O_NOATIME
* systemd/time: fix weekday wrapping on month
* pxar: better error handling on extract
* pxar/extract: fixup path stack for errors
* datastore: allow browsing signed pxar files
* GC: use time pre phase1 to calculate min_atime in phase2
* gui: user: fix #2898 add dialog to set password
* fix #2909: handle missing chunks gracefully in garbage collection
* finish_backup: mark backup as finished only after checks have passed
* fix: master-key: upload RSA encoded key with backup
* admin-guide: add section explaining master keys
* backup: only allow finished backups as base snapshot
* datastore api: only decode unencrypted indices
* datastore api: verify blob/index csum from manifest
* sync, blobs and chunk readers: add more checks and verification
* verify: add more checks, don't fail on first error
* mark signed manifests as such
* backup/prune/forget: improve locking
* backup: ensure base snapshots are still available after backup
-- Proxmox Support Team <support@proxmox.com> Tue, 11 Aug 2020 15:37:29 +0200
rust-proxmox-backup (0.8.9-1) unstable; urgency=medium
* improve termproxy (console) behavior on updating proxmox-backup-server and
other daemon restarts
* client: improve upload log output and speed calculation
* fix #2885: client upload: bail on duplicate backup targets
-- Proxmox Support Team <support@proxmox.com> Fri, 24 Jul 2020 11:24:07 +0200
rust-proxmox-backup (0.8.8-1) unstable; urgency=medium
* pxar: .pxarexclude: match behavior from absolute paths to the one described
in the documentation and use byte based paths
* catalog shell: add exit command
* manifest: revert signature canonicalization to old behaviour. Fallout from
encrypted older backups is expected and was ignored due to the beta status
of Proxmox Backup.
* documentation: various improvements and additions
* cached user info: print privilege path in error message
* docs: fix #2851 Add note about GC grace period
* api2/status: fix datastore full estimation bug if there was (almost) no
change for several days
* schedules, calendar event: support the 'weekly' special expression
* ui: sync job: group remote fields and use "Source" in labels
* ui: add calendar event selector
* ui: sync job: change default to false for "remove-vanished" for new jobs
* fix #2860: skip in-progress snapshots when syncing
* fix #2865: detect and skip vanished snapshots
* fix #2871: close FDs when scanning backup group, avoid leaking
* backup: list images: handle walkdir error, catch "lost+found" special
directory
* implement AsyncSeek for AsyncIndexReader
* client: rework logging upload info like size or bandwidth
* client writer: do not output chunklist for now on verbose=true
* add initial API for listing available updates and updating the APT
database
* ui: add xterm.js console implementation
-- Proxmox Support Team <support@proxmox.com> Thu, 23 Jul 2020 12:16:05 +0200
rust-proxmox-backup (0.8.7-2) unstable; urgency=medium
* support restoring file attributes from pxar archives
* docs: additions and fixes
* ui: running tasks: update limit to 100
-- Proxmox Support Team <support@proxmox.com> Tue, 14 Jul 2020 12:05:25 +0200
rust-proxmox-backup (0.8.6-1) unstable; urgency=medium
* ui: add button for easily showing the server fingerprint dashboard
* proxmox-backup-client benchmark: add --verbose flag and improve output
format
* docs: reference PDF variant in HTML output
* proxmox-backup-client: add simple version command
* improve keyfile and signature handling in catalog and manifest
-- Proxmox Support Team <support@proxmox.com> Fri, 10 Jul 2020 11:34:14 +0200
rust-proxmox-backup (0.8.5-1) unstable; urgency=medium
* fix cross process task listing
* docs: expand datastore documentation
* docs: add remotes and sync-jobs and schedules
* bump pathpatterns to 0.1.2
* ui: align version and user-menu spacing with pve/pmg
* ui: make username a menu-button
-- Proxmox Support Team <support@proxmox.com> Thu, 09 Jul 2020 15:32:39 +0200
rust-proxmox-backup (0.8.4-1) unstable; urgency=medium
* add TaskButton in header
* simpler lost+found pattern
-- Proxmox Support Team <support@proxmox.com> Thu, 09 Jul 2020 14:28:24 +0200
rust-proxmox-backup (0.8.3-1) unstable; urgency=medium
* get_disks: don't fail on zfs_devices
* allow some more characters for zpool list
* ui: adapt for new sign-only crypt mode
-- Proxmox Support Team <support@proxmox.com> Thu, 09 Jul 2020 13:55:06 +0200
rust-proxmox-backup (0.8.2-1) unstable; urgency=medium
* buildsys: also upload debug packages
* src/backup/manifest.rs: rename into_string -> to_string
-- Proxmox Support Team <support@proxmox.com> Thu, 09 Jul 2020 11:58:51 +0200
rust-proxmox-backup (0.8.1-1) unstable; urgency=medium
* remove authenticated data blobs (not needed)
* add signature to manifest
* improve docs
* client: introduce --keyfd parameter
* ui improvements
-- Proxmox Support Team <support@proxmox.com> Thu, 09 Jul 2020 10:01:25 +0200
rust-proxmox-backup (0.8.0-1) unstable; urgency=medium
* implement get_runtime_with_builder
-- Proxmox Support Team <support@proxmox.com> Tue, 07 Jul 2020 10:15:26 +0200
rust-proxmox-backup (0.7.0-1) unstable; urgency=medium
* implement clone for RemoteChunkReader
* improve docs
* client: add --encryption boolean parameter
* client: use default encryption key if it is available
* d/rules: do not compress .pdf files
* ui: various fixes
* add beta text with link to bugtracker
-- Proxmox Support Team <support@proxmox.com> Tue, 07 Jul 2020 07:40:05 +0200
rust-proxmox-backup (0.6.0-1) unstable; urgency=medium
* make ReadChunk not require mutable self.
* ui: increase timeout for snapshot listing
* ui: consistently spell Datastore without space between words
* ui: disk create: sync and improve 'add-datastore' checkbox label
* proxmox-backup-client: add benchmark command
* pxar: fixup 'vanished-file' logic a bit
* ui: add verify button
-- Proxmox Support Team <support@proxmox.com> Fri, 03 Jul 2020 09:45:52 +0200
rust-proxmox-backup (0.5.0-1) unstable; urgency=medium
* partially revert commit 1f82f9b7b5d231da22a541432d5617cb303c0000
* ui: allow to Forget (delete) backup snapshots
* pxar: deal with files changing size during archiving
-- Proxmox Support Team <support@proxmox.com> Mon, 29 Jun 2020 13:00:54 +0200
rust-proxmox-backup (0.4.0-1) unstable; urgency=medium
* change api for incremental backups mode
* zfs disk management gui
-- Proxmox Support Team <support@proxmox.com> Fri, 26 Jun 2020 10:43:27 +0200
rust-proxmox-backup (0.3.0-1) unstable; urgency=medium
* support incremental backups mode
* new disk management
* single file restore for container backups
-- Proxmox Support Team <support@proxmox.com> Wed, 24 Jun 2020 10:12:57 +0200
rust-proxmox-backup (0.2.3-1) unstable; urgency=medium
* tools/systemd/time: fix compute_next_event for weekdays
* improve display of 'next run' for sync jobs
* fix csum calculation for images which do not have a 'chunk_size' aligned
size
* add parser for zpool list output
-- Proxmox Support Team <support@proxmox.com> Thu, 04 Jun 2020 10:39:06 +0200
rust-proxmox-backup (0.2.2-1) unstable; urgency=medium
* proxmox-backup-client.rs: implement quiet flag
* client restore: don't add server file ending if already specified
* src/client/pull.rs: also download client.log.blob
* src/client/pull.rs: more verbose logging
* gui improvements
-- Proxmox Support Team <support@proxmox.com> Wed, 03 Jun 2020 10:37:12 +0200
rust-proxmox-backup (0.2.1-1) unstable; urgency=medium
* ui: move server RRD statistics to 'Server Status' panel
* ui/api: add more server statistics
* ui/api: add per-datastore usage and performance statistics over time
* ui: add initial remote config management panel
* remotes: save passwords as base64
* gather zpool io stats
* various fixes/improvements
-- Proxmox Support Team <support@proxmox.com> Thu, 28 May 2020 17:39:33 +0200
rust-proxmox-backup (0.2.0-1) unstable; urgency=medium
* see git changelog (too many changes)
-- Proxmox Support Team <support@proxmox.com> Mon, 25 May 2020 19:17:03 +0200
rust-proxmox-backup (0.1.3-1) unstable; urgency=medium
* use SectionConfig from proxmox 0.1.18-1

132
debian/control vendored Normal file
View File

@ -0,0 +1,132 @@
Source: rust-proxmox-backup
Section: admin
Priority: optional
Build-Depends: debhelper (>= 11),
dh-cargo (>= 18),
cargo:native,
rustc:native,
libstd-rust-dev,
librust-anyhow-1+default-dev,
librust-apt-pkg-native-0.3+default-dev (>= 0.3.1-~~),
librust-base64-0.12+default-dev,
librust-bitflags-1+default-dev (>= 1.2.1-~~),
librust-bytes-0.5+default-dev,
librust-chrono-0.4+default-dev,
librust-crc32fast-1+default-dev,
librust-endian-trait-0.6+arrays-dev,
librust-endian-trait-0.6+default-dev,
librust-futures-0.3+default-dev,
librust-h2-0.2+default-dev,
librust-h2-0.2+stream-dev,
librust-handlebars-3+default-dev,
librust-http-0.2+default-dev,
librust-hyper-0.13+default-dev,
librust-lazy-static-1+default-dev (>= 1.4-~~),
librust-libc-0.2+default-dev,
librust-log-0.4+default-dev,
librust-nix-0.16+default-dev,
librust-nom-5+default-dev (>= 5.1-~~),
librust-num-traits-0.2+default-dev,
librust-once-cell-1+default-dev (>= 1.3.1-~~),
librust-openssl-0.10+default-dev,
librust-pam-0.7+default-dev,
librust-pam-sys-0.5+default-dev,
librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
librust-percent-encoding-2+default-dev (>= 2.1-~~),
librust-pin-utils-0.1+default-dev,
librust-proxmox-0.3+api-macro-dev (>= 0.3.3-~~),
librust-proxmox-0.3+default-dev (>= 0.3.3-~~),
librust-proxmox-0.3+sortable-macro-dev (>= 0.3.3-~~),
librust-proxmox-0.3+websocket-dev (>= 0.3.3-~~),
librust-proxmox-fuse-0.1+default-dev,
librust-pxar-0.3+default-dev,
librust-pxar-0.3+futures-io-dev,
librust-pxar-0.3+tokio-io-dev,
librust-regex-1+default-dev (>= 1.2-~~),
librust-rustyline-6+default-dev,
librust-serde-1+default-dev,
librust-serde-1+derive-dev,
librust-serde-json-1+default-dev,
librust-siphasher-0.3+default-dev,
librust-syslog-4+default-dev,
librust-tokio-0.2+blocking-dev (>= 0.2.9-~~),
librust-tokio-0.2+default-dev (>= 0.2.9-~~),
librust-tokio-0.2+dns-dev (>= 0.2.9-~~),
librust-tokio-0.2+fs-dev (>= 0.2.9-~~),
librust-tokio-0.2+io-util-dev (>= 0.2.9-~~),
librust-tokio-0.2+macros-dev (>= 0.2.9-~~),
librust-tokio-0.2+process-dev (>= 0.2.9-~~),
librust-tokio-0.2+rt-threaded-dev (>= 0.2.9-~~),
librust-tokio-0.2+signal-dev (>= 0.2.9-~~),
librust-tokio-0.2+stream-dev (>= 0.2.9-~~),
librust-tokio-0.2+tcp-dev (>= 0.2.9-~~),
librust-tokio-0.2+time-dev (>= 0.2.9-~~),
librust-tokio-0.2+uds-dev (>= 0.2.9-~~),
librust-tokio-openssl-0.4+default-dev,
librust-tokio-util-0.3+codec-dev,
librust-tokio-util-0.3+default-dev,
librust-tower-service-0.3+default-dev,
librust-udev-0.4+default-dev | librust-udev-0.3+default-dev,
librust-url-2+default-dev (>= 2.1-~~),
librust-walkdir-2+default-dev,
librust-xdg-2+default-dev (>= 2.2-~~),
librust-zstd-0.4+bindgen-dev,
librust-zstd-0.4+default-dev,
libacl1-dev,
libfuse3-dev,
libsystemd-dev,
uuid-dev,
debhelper (>= 12~),
bash-completion,
python3-docutils,
python3-pygments,
rsync,
fonts-dejavu-core <!nodoc>,
fonts-lato <!nodoc>,
fonts-open-sans <!nodoc>,
graphviz <!nodoc>,
latexmk <!nodoc>,
python3-sphinx <!nodoc>,
texlive-fonts-extra <!nodoc>,
texlive-fonts-recommended <!nodoc>,
texlive-xetex <!nodoc>,
xindy <!nodoc>
Maintainer: Proxmox Support Team <support@proxmox.com>
Standards-Version: 4.4.1
Vcs-Git:
Vcs-Browser:
Homepage: https://www.proxmox.com
Package: proxmox-backup-server
Architecture: any
Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1),
libzstd1 (>= 1.3.8),
lvm2,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.2-4),
pve-xtermjs (>= 4.7.0-1),
smartmontools,
${misc:Depends},
${shlibs:Depends},
Recommends: zfsutils-linux,
Description: Proxmox Backup Server daemon with tools and GUI
This package contains the Proxmox Backup Server daemons and related
tools. This includes a web-based graphical user interface.
Package: proxmox-backup-client
Architecture: any
Depends: ${misc:Depends}, ${shlibs:Depends}
Description: Proxmox Backup Client tools
This package contains the Proxmox Backup client, which provides a
simple command line tool to create and restore backups.
Package: proxmox-backup-docs
Build-Profiles: <!nodoc>
Section: doc
Depends: libjs-extjs,
${misc:Depends},
Architecture: all
Description: Proxmox Backup Documentation
This package contains the Proxmox Backup Documentation files.

7
debian/control.in vendored
View File

@ -3,10 +3,15 @@ Architecture: any
Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1),
libzstd1 (>= 1.3.8),
lvm2,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit,
proxmox-widget-toolkit (>= 2.2-4),
pve-xtermjs (>= 4.7.0-1),
smartmontools,
${misc:Depends},
${shlibs:Depends},
Recommends: zfsutils-linux,
Description: Proxmox Backup Server daemon with tools and GUI
This package contains the Proxmox Backup Server daemons and related
tools. This includes a web-based graphical user interface.

29
debian/debcargo.toml vendored
View File

@ -11,8 +11,31 @@ vcs_git = ""
vcs_browser = ""
maintainer = "Proxmox Support Team <support@proxmox.com>"
section = "admin"
build_depends = [ "debhelper (>= 12~)", "bash-completion" ]
build_depends_excludes = [ "debhelper (>=11)" ]
build_depends = [
"debhelper (>= 12~)",
"bash-completion",
"python3-docutils",
"python3-pygments",
"rsync",
"fonts-dejavu-core <!nodoc>",
"fonts-lato <!nodoc>",
"fonts-open-sans <!nodoc>",
"graphviz <!nodoc>",
"latexmk <!nodoc>",
"python3-sphinx <!nodoc>",
"texlive-fonts-extra <!nodoc>",
"texlive-fonts-recommended <!nodoc>",
"texlive-xetex <!nodoc>",
"xindy <!nodoc>",
]
build_depends_excludes = [
"debhelper (>=11)",
]
[packages.lib]
depends = [ "libacl1-dev", "libsystemd-dev", "libfuse3-dev", "uuid-dev" ]
depends = [
"libacl1-dev",
"libfuse3-dev",
"libsystemd-dev",
"uuid-dev",
]

2
debian/lintian-overrides vendored Normal file
View File

@ -0,0 +1,2 @@
proxmox-backup-server: package-installs-apt-sources etc/apt/sources.list.d/pbstest-beta.list
proxmox-backup-server: systemd-service-file-refers-to-unusual-wantedby-target lib/systemd/system/proxmox-backup-banner.service getty.target

28
debian/postinst vendored Normal file
View File

@ -0,0 +1,28 @@
#!/bin/sh
set -e
#DEBHELPER#
case "$1" in
configure)
# modeled after dh_systemd_start output
systemctl --system daemon-reload >/dev/null || true
if [ -n "$2" ]; then
_dh_action=try-reload-or-restart
else
_dh_action=start
fi
deb-systemd-invoke $_dh_action proxmox-backup.service proxmox-backup-proxy.service >/dev/null || true
;;
abort-upgrade|abort-remove|abort-deconfigure)
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
exit 0

10
debian/prerm vendored Normal file
View File

@ -0,0 +1,10 @@
#!/bin/sh
set -e
#DEBHELPER#
# modeled after dh_systemd_start output
if [ -d /run/systemd/system ] && [ "$1" = remove ]; then
deb-systemd-invoke stop 'proxmox-backup-banner.service' 'proxmox-backup-proxy.service' 'proxmox-backup.service' >/dev/null || true
fi

1
debian/proxmox-backup-docs.links vendored Normal file
View File

@ -0,0 +1 @@
/usr/share/doc/proxmox-backup/proxmox-backup.pdf /usr/share/doc/proxmox-backup/html/proxmox-backup.pdf

View File

@ -1,10 +1,12 @@
etc/proxmox-backup-proxy.service /lib/systemd/system/
etc/proxmox-backup.service /lib/systemd/system/
etc/proxmox-backup-banner.service /lib/systemd/system/
etc/pbstest-beta.list /etc/apt/sources.list.d/
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-api
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-proxy
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-banner
usr/sbin/proxmox-backup-manager
usr/share/javascript/proxmox-backup/index.hbs
usr/share/javascript/proxmox-backup/css/ext6-pbs.css
usr/share/javascript/proxmox-backup/images/logo-128.png
usr/share/javascript/proxmox-backup/images/proxmox_logo.png

9
debian/rules vendored
View File

@ -37,11 +37,14 @@ override_dh_auto_install:
PROXY_USER=backup \
LIBDIR=/usr/lib/$(DEB_HOST_MULTIARCH)
override_dh_installinit:
dh_installinit
dh_installinit --name proxmox-backup-proxy
override_dh_installsystemd:
# note: we start/try-reload-restart services manually in postinst
dh_installsystemd --no-start --no-restart-after-upgrade
# workaround https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=933541
# TODO: remove once available (Debian 11 ?)
override_dh_dwz:
dh_dwz --no-dwz-multifile
override_dh_compress:
dh_compress -X.pdf

View File

@ -1,11 +1,5 @@
include ../defines.mk
ifeq ($(BUILD_MODE), release)
COMPILEDIR := ../target/release
else
COMPILEDIR := ../target/debug
endif
GENERATED_SYNOPSIS := \
proxmox-backup-client/synopsis.rst \
proxmox-backup-client/catalog-shell-synopsis.rst \
@ -26,6 +20,15 @@ SPHINXOPTS =
SPHINXBUILD = sphinx-build
BUILDDIR = output
ifeq ($(BUILD_MODE), release)
COMPILEDIR := ../target/release
SPHINXOPTS += -t release
else
COMPILEDIR := ../target/debug
SPHINXOPTS += -t devbuild
endif
# Sphinx internal variables.
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(SPHINXOPTS) .
@ -73,10 +76,11 @@ html: ${GENERATED_SYNOPSIS}
.PHONY: latexpdf
latexpdf: ${GENERATED_SYNOPSIS}
@echo "Requires python3-sphinx, texlive-xetex, xindy and texlive-fonts-extra"
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
@echo "Running LaTeX files through xelatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
@echo "xelatex finished; the PDF files are in $(BUILDDIR)/latex."
.PHONY: epub3
epub3: ${GENERATED_SYNOPSIS}

File diff suppressed because it is too large

View File

@ -17,10 +17,25 @@
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
# -- Implement custom formatter for code-blocks ---------------------------
#
# * use smaller font
# * avoid space between lines to nicely format utf8 tables
from sphinx.highlighting import PygmentsBridge
from pygments.formatters.latex import LatexFormatter
class CustomLatexFormatter(LatexFormatter):
def __init__(self, **options):
super(CustomLatexFormatter, self).__init__(**options)
self.verboptions = r"formatcom=\footnotesize\relax\let\strut\empty"
PygmentsBridge.latex_formatter = CustomLatexFormatter
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
@ -30,8 +45,11 @@
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ["sphinx.ext.graphviz", "sphinx.ext.todo"]
todo_link_only = True
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@ -53,7 +71,7 @@ rst_epilog = epilog_file.read()
# General information about the project.
project = 'Proxmox Backup'
copyright = '2019, Proxmox Support Team'
copyright = '2019-2020, Proxmox Support Team'
author = 'Proxmox Support Team'
# The version info for the project you're documenting, acts as replacement for
@ -61,9 +79,11 @@ author = 'Proxmox Support Team'
# built documents.
#
# The short X.Y version.
version = '1.0'
vstr = lambda s: '<devbuild>' if s is None else str(s)
version = vstr(os.getenv('DEB_VERSION_UPSTREAM'))
# The full version, including alpha/beta/rc tags.
release = '1.0-1'
release = vstr(os.getenv('DEB_VERSION'))
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@ -92,7 +112,7 @@ exclude_patterns = [
'pxar/man1.rst',
'epilog.rst',
'pbs-copyright.rst',
'sysadmin.rst',
'local-zfs.rst',
'package-repositories.rst',
]
@ -251,14 +271,24 @@ htmlhelp_basename = 'ProxmoxBackupdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_engine = 'xelatex'
latex_elements = {
'fontenc': '\\usepackage{fontspec}',
# The paper size ('letterpaper' or 'a4paper').
#
'papersize': 'a4paper',
# The font size ('10pt', '11pt' or '12pt').
#
'pointsize': '12pt',
'pointsize': '10pt',
'fontpkg': r'''
\setmainfont{Open Sans}
\setsansfont{Lato}
\setmonofont{DejaVu Sans Mono}
''',
# Additional stuff for the LaTeX preamble.
#

View File

@ -11,8 +11,10 @@
.. _Container: https://en.wikipedia.org/wiki/Container_(virtualization)
.. _Zstandard: https://en.wikipedia.org/wiki/Zstandard
.. _Proxmox: https://www.proxmox.com
.. _Proxmox Community Forum: https://forum.proxmox.com
.. _Proxmox Virtual Environment: https://www.proxmox.com/proxmox-ve
.. _Proxmox Backup: https://www.proxmox.com/proxmox-backup
.. _Proxmox Backup: https://pbs.proxmox.com/wiki/index.php/Main_Page // FIXME
.. _PBS Development List: https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
.. _reStructuredText: https://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html
.. _Rust: https://www.rust-lang.org/
.. _SHA-256: https://en.wikipedia.org/wiki/SHA-2

View File

@ -5,24 +5,23 @@ Glossary
`Virtual machine`_
A Virtual machine is a program that can execute an entire
operatin system inside an emulated hardware environment.
A virtual machine is a program that can execute an entire
operating system inside an emulated hardware environment.
`Container`_
A Container is an isolated user space. Programs runs directly on
the hosts kernel, but with limited access to the host resources.
A container is an isolated user space. Programs run directly on
the host's kernel, but with limited access to the host resources.
Datastore
A place to store backups. The current implemenation is
file-system based, so this refers to a directory containing the
backup data.
A place to store backups. A directory which contains the backup data.
The current implementation is file-system based.
`Rust`_
Rust is a new, fast and memory-efficient system programming
language, with no runtime or garbage collector. Rusts rich type
language. It has no runtime or garbage collector. Rust's rich type
system and ownership model guarantee memory-safety and
thread-safety. It can eliminate many classes of bugs
at compile-time.
@ -31,11 +30,9 @@ Glossary
Is a tool that makes it easy to create intelligent and
beautiful documentation. It was originally created for the
Python documentation, and it has excellent facilities for the
documentation of the Python programming language. It has excellent facilities for the
documentation of software projects in a range of languages.
`reStructuredText`_
Is an easy-to-read, what-you-see-is-what-you-get plaintext
@ -44,8 +41,24 @@ Glossary
`FUSE`
Filesystem in Userspace (`FUSE <https://en.wikipedia.org/wiki/Filesystem_in_Userspace>`_)
defines an interface which allows to implement a filesystem in
defines an interface which makes it possible to implement a filesystem in
userspace as opposed to implementing it in the kernel. The fuse
kernel driver handles filesystem requests and sends them to an
userspace application for reply.
kernel driver handles filesystem requests and sends them to a
userspace application.
Remote
A remote Proxmox Backup Server installation and credentials for a user on it.
You can pull datastores from a remote to a local datastore in order to
have redundant backups.
Schedule
Certain tasks, for example pruning and garbage collection, need to be
performed on a regular basis. Proxmox Backup Server uses a subset of the
`systemd Time and Date Specification
<https://www.freedesktop.org/software/systemd/man/systemd.time.html#>`_.
The subset currently supports time of day specifications and weekdays, in
addition to the shorthand expressions 'minutely', 'hourly', 'daily'.
There is no support for specifying timezones; the tasks are run in the
timezone configured on the server.
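For example, under this subset the following specifications would be plausible
(the values are illustrative and not taken from this changeset):

.. code-block:: console

daily
21:05
sat 02:30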

View File

@ -1,19 +1,20 @@
.. Proxmox Backup documentation master file
Welcome to Proxmox Backup's documentation!
==========================================
Welcome to the Proxmox Backup documentation!
============================================
Copyright (C) 2019 Proxmox Server Solutions GmbH
Copyright (C) 2019-2020 Proxmox Server Solutions GmbH
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with no
Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A
copy of the license is included in the section entitled "GNU Free
Documentation License".
Permission is granted to copy, distribute and/or modify this document under the
terms of the GNU Free Documentation License, Version 1.3 or any later version
published by the Free Software Foundation; with no Invariant Sections, no
Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included
in the section entitled "GNU Free Documentation License".
.. todolist::
.. only:: html
A `PDF` version of the documentation is `also available here <./proxmox-backup.pdf>`_
.. toctree::
:maxdepth: 3
@ -22,6 +23,7 @@ Documentation License".
introduction.rst
installation.rst
administration-guide.rst
sysadmin.rst
.. raw:: latex
@ -37,5 +39,14 @@ Documentation License".
glossary.rst
GFDL.rst
.. only:: html and devbuild
.. toctree::
:maxdepth: 2
:caption: Developer Appendix
todos.rst
* :ref:`genindex`

View File

@ -1,55 +1,50 @@
Installation
============
`Proxmox Backup`_ is split into a server part and a client part. The
server part comes with it's own graphical installer, but we also
ship Debian_ package repositories, so you can easily install those
packages on any Debian_ based system.
`Proxmox Backup`_ is split into a server and client part. The server part
can either be installed with a graphical installer or on top of
Debian_ from the provided package repository.
.. include:: package-repositories.rst
Server installation
-------------------
The backup server stores the actual backup data, but also provides a
web based GUI for various management tasks, for example disk
management.
The backup server stores the actual backed up data and provides a web based GUI
for various management tasks such as disk management.
.. note:: You always need a backup server. It is not possible to use
`Proxmox Backup`_ without the server part.
The server is based on Debian, therefore the disk image (ISO file) provided
by us includes a complete Debian system ("buster" for version 1.x) as
well as all necessary backup packages.
The disk image (ISO file) provided by Proxmox includes a complete Debian system
("buster" for version 1.x) as well as all necessary packages for the `Proxmox Backup`_ server.
Using the installer will guide you through the setup, allowing
The installer guides you through the setup process and allows
you to partition the local disk(s), apply basic system configurations
(e.g. timezone, language, network) and install all required packages.
Using the provided ISO will get you started in just a few minutes,
that's why we recommend this method for new and existing users.
(e.g. timezone, language, network), and install all required packages.
The provided ISO will get you started in just a few minutes, and is the
recommended method for new and existing users.
Alternatively, `Proxmox Backup`_ server can be installed on top of an
existing Debian system.
Using the `Proxmox Backup`_ Installer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install `Proxmox Backup`_ with the Installer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can download the ISO from |DOWNLOADS|.
Download the ISO from |DOWNLOADS|.
It includes the following:
* The `Proxmox Backup`_ server installer, which partitions the local
disk(s) with ext4, ext3, xfs or ZFS, and installs the operating
system.
* Complete operating system (Debian Linux, 64-bit)
* The `Proxmox Backup`_ server installer, which partitions the local
disk(s) with ext4, ext3, xfs or ZFS and installs the operating
system.
* Our Linux kernel with ZFS support.
* Complete toolset for administering backups and all necessary
resources
* Complete tool-set to administer backups and all necessary resources
* Web based management interface for using the toolset
* Web-based graphical management interface
.. note:: During the installation process, the complete server
is used by default and all existing data is removed.
@ -58,8 +53,8 @@ It includes the following:
Install `Proxmox Backup`_ server on Debian
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Proxmox ships as a set of Debian packages, so you can install it on
top of a standard Debian installation. After configuring the
Proxmox ships as a set of Debian packages which can be installed on top of a
standard Debian installation. After configuring the
:ref:`sysadmin_package_repositories`, you need to run:
.. code-block:: console
@ -67,7 +62,7 @@ top of a standard Debian installation. After configuring the
# apt-get update
# apt-get install proxmox-backup-server
Above code keeps the current (Debian) kernel and installs a minimal
The commands above keep the current (Debian) kernel and install a minimal
set of required packages.
If you want to install the same set of packages as the installer
@ -78,16 +73,19 @@ does, please use the following:
# apt-get update
# apt-get install proxmox-backup
This installs all required packages, the Proxmox kernel with ZFS_
support, and a set of commonly useful packages.
This will install all required packages, the Proxmox kernel with ZFS_
support, and a set of common and useful packages.
Installing on top of an existing Debian_ installation looks easy, but
it presumes that you have correctly installed the base system, and you
know how you want to configure and use the local storage. Network
configuration is also completely up to you.
Installing `Proxmox Backup`_ on top of an existing Debian_ installation looks easy, but
it presumes that the base system and local storage have been set up correctly.
In general, this is not trivial, especially when you use LVM_ or
ZFS_.
In general this is not trivial, especially when LVM_ or ZFS_ is used.
The network configuration is completely up to you as well.
.. note:: You can access the web interface of the Proxmox Backup Server with
your web browser, using HTTPS on port 8007. For example, at
``https://<ip-or-dns-name>:8007``
Install Proxmox Backup server on `Proxmox VE`_
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -101,9 +99,13 @@ After configuring the
# apt-get install proxmox-backup-server
.. caution:: Installing the backup server directly on the hypervisor
is not recommended. It is more secure to use a separate physical
server to store backups. If the hypervisor server fails, you can
still access your backups.
is not recommended. It is safer to use a separate physical
server to store backups. Should the hypervisor server fail, you can
still access the backups.
.. note:: You can access the web interface of the Proxmox Backup Server with
your web browser, using HTTPS on port 8007. For example, at
``https://<ip-or-dns-name>:8007``
Client installation
-------------------
@ -111,8 +113,8 @@ Client installation
Install `Proxmox Backup`_ client on Debian
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Proxmox ships as a set of Debian packages, so you can install it on
top of a standard Debian installation. After configuring the
Proxmox ships as a set of Debian packages to be installed on
top of a standard Debian installation. After configuring the
:ref:`sysadmin_package_repositories`, you need to run:
.. code-block:: console

View File

@ -1,120 +1,169 @@
Introduction
============
This documentationm is written in :term:`reStructuredText` and formatted with :term:`Sphinx`.
What is Proxmox Backup Server
-----------------------------
Proxmox Backup Server is an enterprise-class, client-server backup software
package that backs up :term:`virtual machine`\ s, :term:`container`\ s, and
physical hosts. It is specially optimized for the `Proxmox Virtual Environment`_
platform and allows you to back up your data securely, even between remote
sites, providing easy management with a web-based user interface.
What is Proxmox Backup
----------------------
Proxmox Backup Server supports deduplication, compression, and authenticated
encryption (AE_). Using :term:`Rust` as the implementation language guarantees high
performance, low resource usage, and a safe, high-quality codebase.
Proxmox Backup is an enterprise class client-server backup software,
specially optimized for `Proxmox Virtual Environment`_ to backup
:term:`virtual machine`\ s and :term:`container`\ s. It is also
possible to backup physical hosts.
It supports deduplication, compression and authenticated encryption
(AE_). Using :term:`Rust` as implementation language guarantees high
performance, low resource usage, and a safe, high quality code base.
Encryption is done at the client side. This makes backups to not fully
trusted targets possible.
It features strong client-side encryption. Thus, it's possible to
back up data to targets that are not fully trusted.
Architecture
------------
Proxmox Backup uses a `Client-server model`_. The server is
responsible to store the backup data, and provides an API to create
backups and restore data. It is also possible to manage disks and
other server side resources using this API.
Proxmox Backup Server uses a `client-server model`_. The server stores the
backup data and provides an API to create backups and restore data. With the
API, it's also possible to manage disks and other server-side resources.
A backup client uses this API to access the backed up data,
i.e. ``proxmox-backup-client`` is a command line tool to create
backups and restore data. We also deliver an integrated client for
QEMU_ with `Proxmox Virtual Environment`_.
The backup client uses this API to access the backed up data. With the command
line tool ``proxmox-backup-client`` you can create backups and restore data.
For QEMU_ with `Proxmox Virtual Environment`_ we deliver an integrated client.
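For example, a backup invocation from the client side might look like the
following (the repository and archive names are illustrative):

.. code-block:: console

# proxmox-backup-client backup root.pxar:/ --repository backup-server:store1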
A single backup is allowed to contain several archives. For example,
when you backup a :term:`virtual machine`, each disk is stored as a
separate archive inside that backup. The VM configuration also gets an
extra file. This way, it is easy to access and restore important parts
of the backup, without having to scan the whole backup.
A single backup is allowed to contain several archives. For example, when you
backup a :term:`virtual machine`, each disk is stored as a separate archive
inside that backup. The VM configuration itself is stored as an extra file.
This way, it's easy to access and restore only important parts of the backup,
without the need to scan the whole backup.
Main features
Main Features
-------------
:Proxmox VE: The `Proxmox Virtual Environment`_ is fully
supported. You can backup :term:`virtual machine`\ s and
:Support for Proxmox VE: The `Proxmox Virtual Environment`_ is fully
supported and you can easily backup :term:`virtual machine`\ s and
:term:`container`\ s.
:GUI: We provide a graphical, web based user interface.
:Deduplication: Incremental backup produces large amounts of duplicate
data. The deduplication layer removes that redundancy and makes
inkremental backup small and space efficient.
:Data Integrity: The built in `SHA-256`_ checksum algorithm assures the
accuracy and consistency of your backups.
:Remote Sync: It is possible to efficently synchronize data to remote
sites. Only deltas containing new data are transfered.
:Performance: The whole software stack is written in :term:`Rust`,
which provides high speed and memory efficiency.
in order to provide high speed and memory efficiency.
:Compression: Ultra fast Zstandard_ compression is able to compress
:Deduplication: Periodic backups produce large amounts of duplicate
data. The deduplication layer avoids redundancy and minimizes the storage
space used.
:Incremental backups: Changes between backups are typically low. Reading and
sending only the delta reduces the storage and network impact of backups.
:Data Integrity: The built-in `SHA-256`_ checksum algorithm ensures accuracy and
consistency in your backups.
:Remote Sync: It is possible to efficiently synchronize data to remote
sites. Only deltas containing new data are transferred.
:Compression: The ultra-fast Zstandard_ compression is able to compress
several gigabytes of data per second.
:Encryption: Backups can be encrypted at client side using AES-256 in
GCM_ mode. This authenticated encryption mode (AE_) provides very
high performance on modern hardware.
:Encryption: Backups can be encrypted on the client-side, using AES-256 in
Galois/Counter Mode (GCM_) mode. This authenticated encryption (AE_) mode
provides very high performance on modern hardware.
:Open Source: No secrets. You have access to the whole source tree.
:Web interface: Manage the Proxmox Backup Server with the integrated, web-based
user interface.
:Support: Commercial support options available from `Proxmox`_.
:Open Source: No secrets. Proxmox Backup Server is free and open-source
software. The source code is licensed under AGPL, v3.
:Support: Enterprise support will be available from `Proxmox`_ once the beta
phase is over.
Why Backup?
-----------
Reasons for Data Backup?
------------------------
The primary purpose of backup is to protect against data loss. Data
loss can happen because of faulty hardware, but also by human errors.
The main purpose of a backup is to protect against data loss. Data loss can be
caused by both faulty hardware and human error.
A common mistake is to delete a file or folder which is still
required. Virtualization can amplify this problem, because it is now
easy to delete a whole virtual machine by a single button press.
A common mistake is to accidentally delete a file or folder which is still
required. Virtualization can even amplify this problem, as deleting a whole
virtual machine can be as easy as pressing a single button.
Backups can also serve as a toolkit for administrators to temporarily
store data. For example, it is common practice to perform full backups
before installing major software updates. If something goes wrong, you
can just restore the previous state.
For administrators, backups can serve as a useful toolkit for temporarily
storing data. For example, it is common practice to perform full backups before
installing major software updates. If something goes wrong, you can easily
restore the previous state.
Another reason for backups are legal requirements. Some data must be
kept in a safe place for several years so that you can access it if
required by law.
Another reason for backups are legal requirements. Some data, especially
business records, must be kept in a safe place for several years by law, so
that they can be accessed if required.
Data loss can be very costly as it can severely restrict your
business. Therefore, make sure that you regularly perform a backup
and run restore tests.
In general, data loss is very costly as it can severely damage your business.
Therefore, ensure that you perform regular backups and run restore tests.
Software Stack
--------------
.. todo:: Eplain why we use Rust (and Flutter)
Proxmox Backup Server consists of multiple components:
* A server daemon providing, among other things, a RESTful API, super-fast
asynchronous tasks, lightweight usage statistic collection, scheduling
events, strict separation of privileged and unprivileged execution
environments
* A JavaScript management web interface
* A management CLI tool for the server (`proxmox-backup-manager`)
* A client CLI tool (`proxmox-backup-client`) to access the server easily from
any `Linux amd64` environment
Aside from the web interface, everything is written in the Rust programming
language.
"The Rust programming language helps you write faster, more reliable software.
High-level ergonomics and low-level control are often at odds in programming
language design; Rust challenges that conflict. Through balancing powerful
technical capacity and a great developer experience, Rust gives you the option
to control low-level details (such as memory usage) without all the hassle
traditionally associated with such control."
-- `The Rust Programming Language <https://doc.rust-lang.org/book/ch00-00-introduction.html>`_
.. todo:: further explain the software stack
Getting Help
------------
Community Support Forum
~~~~~~~~~~~~~~~~~~~~~~~
We always encourage our users to discuss and share their knowledge using the
`Proxmox Community Forum`_. The forum is moderated by the Proxmox support team.
The large user base is spread out all over the world. Needless to say, such
a large forum is a great place to get information.
Mailing Lists
~~~~~~~~~~~~~
Proxmox Backup Server is fully open-source and contributions are welcome! Here
is the primary communication channel for developers:
:Mailing list for developers: `PBS Development List`_
Bug Tracker
~~~~~~~~~~~
Proxmox runs a public bug tracker at `<https://bugzilla.proxmox.com>`_. If an
issue appears, file your report there. An issue can be a bug as well as a
request for a new feature or enhancement. The bug tracker helps to keep track
of the issue and will send a notification once it has been solved.
License
-------
Copyright (C) 2019 Proxmox Server Solutions GmbH
Copyright (C) 2019-2020 Proxmox Server Solutions GmbH
This software is written by Proxmox Server Solutions GmbH <support@proxmox.com>
Proxmox Backup is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.
Proxmox Backup Server is free and open source software: you can use it,
redistribute it, and/or modify it under the terms of the GNU Affero General
Public License as published by the Free Software Foundation, either version 3
of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but
``WITHOUT ANY WARRANTY``; without even the implied warranty of

400
docs/local-zfs.rst Normal file
View File

@ -0,0 +1,400 @@
ZFS on Linux
------------
ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. There is no need to manually compile ZFS modules - all
packages are included.
By using ZFS, it's possible to achieve enterprise-grade features with
low-budget hardware, as well as high-performance systems by leveraging
SSD caching or even SSD-only setups. ZFS can replace cost-intensive
hardware RAID cards with moderate CPU and memory load, combined with
easy management.
General ZFS advantages
* Easy configuration and management with GUI and CLI.
* Reliable
* Protection against data corruption
* Data compression on file system level
* Snapshots
* Copy-on-write clone
* Various raid levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3
* Can use SSD for cache
* Self healing
* Continuous integrity checking
* Designed for high storage capacities
* Asynchronous replication over network
* Open Source
* Encryption
Hardware
~~~~~~~~~
ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much as you can get for your hardware/budget. To prevent
data corruption, we recommend the use of high quality ECC RAM.
If you use a dedicated cache and/or log disk, you should use an
enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
increase the overall performance significantly.
IMPORTANT: Do not use ZFS on top of a hardware controller which has its
own cache management. ZFS needs to communicate directly with the disks. An
HBA adapter is the way to go, or something like an LSI controller flashed
in ``IT`` mode.
ZFS Administration
~~~~~~~~~~~~~~~~~~
This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are `zfs` and `zpool`. Both commands come with great
manual pages, which can be read with:
.. code-block:: console
# man zpool
# man zfs
Create a new zpool
^^^^^^^^^^^^^^^^^^
To create a new pool, at least one disk is needed. The `ashift` value
should match the sector size of the underlying disk (2 to the power of
`ashift` bytes), or be larger.
.. code-block:: console
# zpool create -f -o ashift=12 <pool> <device>
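For instance, ``ashift=12`` corresponds to 4096-byte sectors (2^12 = 4096). To
check the physical sector size of a disk before choosing a value (the device
name below is illustrative):

.. code-block:: console

# cat /sys/block/sda/queue/physical_block_size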
Create a new pool with RAID-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Minimum 1 disk
.. code-block:: console
# zpool create -f -o ashift=12 <pool> <device1> <device2>
Create a new pool with RAID-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Minimum 2 disks
.. code-block:: console
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2>
Create a new pool with RAID-10
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Minimum 4 disks
.. code-block:: console
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
Create a new pool with RAIDZ-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Minimum 3 disks
.. code-block:: console
# zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>
Create a new pool with RAIDZ-2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Minimum 4 disks
.. code-block:: console
# zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
Create a new pool with cache (L2ARC)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is possible to use a dedicated cache drive partition to increase
the performance (use an SSD).
For `<device>`, it is possible to use multiple devices, as shown in
"Create a new pool with RAID*".
.. code-block:: console
# zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
Create a new pool with log (ZIL)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is possible to use a dedicated log drive partition to increase
the performance (use an SSD).
For `<device>`, it is possible to use multiple devices, as shown in
"Create a new pool with RAID*".
.. code-block:: console
# zpool create -f -o ashift=12 <pool> <device> log <log_device>
Add cache and log to an existing pool
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you have a pool without cache and log, first partition the SSD into
two partitions with `parted` or `gdisk`.
.. important:: Always use GPT partition tables.
The maximum size of a log device should be about half the size of
physical memory, so this is usually quite small. The rest of the SSD
can be used as cache.
.. code-block:: console
# zpool add -f <pool> log <device-part1> cache <device-part2>
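A minimal sketch of the partitioning step itself (the device name and sizes
are illustrative, not taken from this changeset):

.. code-block:: console

# parted /dev/sdX mklabel gpt
# parted /dev/sdX mkpart log 1MiB 9GiB
# parted /dev/sdX mkpart cache 9GiB 100%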
Changing a failed device
^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console
# zpool replace -f <pool> <old device> <new device>
Changing a failed bootable device
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Depending on how Proxmox Backup was installed, it uses either `grub` or
`systemd-boot` as the bootloader.
The first steps of copying the partition table, reissuing GUIDs and replacing
the ZFS partition are the same. To make the system bootable from the new disk,
different steps are needed which depend on the bootloader in use.
.. code-block:: console
# sgdisk <healthy bootable device> -R <new device>
# sgdisk -G <new device>
# zpool replace -f <pool> <old zfs partition> <new zfs partition>
.. NOTE:: Use the `zpool status -v` command to monitor how far the resilvering process of the new disk has progressed.
With `systemd-boot`:
.. code-block:: console
# pve-efiboot-tool format <new disk's ESP>
# pve-efiboot-tool init <new disk's ESP>
.. NOTE:: `ESP` stands for EFI System Partition, which is set up as partition #2 on
bootable disks set up by the Proxmox installer since version 5.4. For details, see
the section on setting up a new partition for use as a synced ESP.
With `grub`:
Usually `grub.cfg` is located in `/boot/grub/grub.cfg`
.. code-block:: console
# grub-install <new disk>
# grub-mkconfig -o /path/to/grub.cfg
Activate E-Mail Notification
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ZFS comes with an event daemon, which monitors events generated by the
ZFS kernel module. The daemon can also send emails on ZFS events like
pool errors. Newer ZFS packages ship the daemon in a separate package,
and you can install it using `apt-get`:
.. code-block:: console
# apt-get install zfs-zed
To activate the daemon it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:
.. code-block:: console
ZED_EMAIL_ADDR="root"
Please note that Proxmox Backup forwards mails sent to `root` to the email
address configured for the root user.
IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
other settings are optional.
Limit ZFS Memory Usage
^^^^^^^^^^^^^^^^^^^^^^
It is good practice to use at most 50 percent (which is the default) of
the system memory for the ZFS ARC, to prevent performance degradation of
the host. Use your preferred editor to change the configuration in
`/etc/modprobe.d/zfs.conf` and insert:
.. code-block:: console
options zfs zfs_arc_max=8589934592
This example setting limits the usage to 8 GiB (8 * 1024^3 = 8589934592 bytes).
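To read back the value currently in effect at runtime (the module parameter
path below is the usual ZFS on Linux location, stated here as an assumption):

.. code-block:: console

# cat /sys/module/zfs/parameters/zfs_arc_max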
.. IMPORTANT:: If your root file system is ZFS you must update your initramfs every time this value changes:
.. code-block:: console
# update-initramfs -u
SWAP on ZFS
^^^^^^^^^^^
Swap space created on a zvol may cause problems, such as blocking the
server or generating a high I/O load, often seen when starting a backup
to external storage.
We strongly recommend using enough memory, so that you normally do not
run into low-memory situations. Should you need or want to add swap, it
is preferable to create a partition on a physical disk and use it as a
swap device.
You can leave some space free for this purpose in the advanced options of the
installer. Additionally, you can lower the `swappiness` value.
A good value for servers is 10:
.. code-block:: console
# sysctl -w vm.swappiness=10
To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:
.. code-block:: console
vm.swappiness = 10
.. table:: Linux kernel `swappiness` parameter values
:widths: auto
==================== ===============================================================
Value Strategy
==================== ===============================================================
vm.swappiness = 0 The kernel will swap only to avoid an 'out of memory' condition
vm.swappiness = 1 Minimum amount of swapping without disabling it entirely.
vm.swappiness = 10 Sometimes recommended to improve performance when sufficient memory exists in a system.
vm.swappiness = 60 The default value.
vm.swappiness = 100 The kernel will swap aggressively.
==================== ===============================================================
ZFS Compression
^^^^^^^^^^^^^^^
To activate compression:
.. code-block:: console
# zpool set compression=lz4 <pool>
We recommend using the `lz4` algorithm, since it adds very little CPU overhead.
Other algorithms such as `lzjb` and `gzip-N` (where `N` is an integer from `1` to `9`,
with `1` being fastest and `9` giving the best compression) are also available.
Depending on the algorithm and how compressible the data is, having compression enabled can even increase
I/O performance.
You can disable compression at any time with:
.. code-block:: console
# zfs set compression=off <dataset>
Only new blocks will be affected by this change.
ZFS Special Device
^^^^^^^^^^^^^^^^^^
Since version 0.8.0 ZFS supports `special` devices. A `special` device in a
pool is used to store metadata, deduplication tables, and optionally small
file blocks.
A `special` device can improve the speed of a pool consisting of slow spinning
hard disks with a lot of metadata changes. For example, workloads that involve
creating, updating or deleting a large number of files will benefit from the
presence of a `special` device. ZFS datasets can also be configured to store
whole small files on the `special` device which can further improve the
performance. Use fast SSDs for the `special` device.
.. IMPORTANT:: The redundancy of the `special` device should match the one of the
pool, since the `special` device is a point of failure for the whole pool.
.. WARNING:: Adding a `special` device to a pool cannot be undone!
Create a pool with `special` device and RAID-1:
.. code-block:: console
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>
Adding a `special` device to an existing pool with RAID-1:
.. code-block:: console
# zpool add <pool> special mirror <device1> <device2>
ZFS datasets expose the `special_small_blocks=<size>` property. `size` can be
`0` to disable storing small file blocks on the `special` device or a power of
two in the range between `512B` and `128K`. After setting the property, new
file blocks smaller than `size` will be allocated on the `special` device.
.. IMPORTANT:: If the value for `special_small_blocks` is greater than or equal to
the `recordsize` (default `128K`) of the dataset, *all* data will be written to
the `special` device, so be careful!
Setting the `special_small_blocks` property on a pool will change the default
value of that property for all child ZFS datasets (for example, all child
datasets in the pool will opt in for small file blocks).
Opt in for all file blocks smaller than 4K pool-wide:
.. code-block:: console
# zfs set special_small_blocks=4K <pool>
Opt in for small file blocks for a single dataset:
.. code-block:: console
# zfs set special_small_blocks=4K <pool>/<filesystem>
Opt out from small file blocks for a single dataset:
.. code-block:: console
# zfs set special_small_blocks=0 <pool>/<filesystem>
Troubleshooting
^^^^^^^^^^^^^^^
Corrupted cachefile
In case of a corrupted ZFS cachefile, some volumes may not be mounted during
boot until mounted manually later.
For each pool, run:
.. code-block:: console
# zpool set cachefile=/etc/zfs/zpool.cache POOLNAME
and afterwards update the `initramfs` by running:
.. code-block:: console
# update-initramfs -u -k all
and finally reboot your node.
Sometimes the ZFS cachefile can get corrupted, and `zfs-import-cache.service`
doesn't import the pools that aren't present in the cachefile.
Another workaround to this problem is enabling the `zfs-import-scan.service`,
which searches and imports pools via device scanning (usually slower).

View File

@ -3,102 +3,150 @@
Debian Package Repositories
---------------------------
All Debian based systems use APT_ as package
management tool. The list of repositories is defined in
``/etc/apt/sources.list`` and ``.list`` files found inside
``/etc/apt/sources.d/``. Updates can be installed directly using
the ``apt`` command line tool, or via the GUI.
All Debian based systems use APT_ as package management tool. The list of
repositories is defined in ``/etc/apt/sources.list`` and ``.list`` files found
in the ``/etc/apt/sources.d/`` directory. Updates can be installed directly
with the ``apt`` command line tool, or via the GUI.
APT_ ``sources.list`` files list one package repository per line, with
the most preferred source listed first. Empty lines are ignored, and a
``#`` character anywhere on a line marks the remainder of that line as a
comment. The information available from the configured sources is
acquired by ``apt update``.
APT_ ``sources.list`` files list one package repository per line, with the most
preferred source listed first. Empty lines are ignored and a ``#`` character
anywhere on a line marks the remainder of that line as a comment. The
information available from the configured sources is acquired by ``apt
update``.
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list``
deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib
# security updates
deb http://security.debian.org/debian-security buster/updates main contrib
.. FIXME for 7.0: change security update suite to bullseye-security
In addition, Proxmox provides three different package repositories for
the backup server binaries.
In addition, you need a package repository from Proxmox to get the backup
server updates.
`Proxmox Backup`_ Enterprise Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
During the Proxmox Backup beta phase, only one repository (pbstest) will be
available. Once released, an Enterprise repository for production use and a
no-subscription repository will be provided.
This is the default, stable and recommended repository, available for
all `Proxmox Backup`_ subscription users. It contains the most stable packages,
and is suitable for production use. The ``pbs-enterprise`` repository is
enabled by default:
SecureApt
~~~~~~~~~
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list.d/pbs-enterprise.list``
The `Release` files in the repositories are signed with GnuPG. APT uses
these signatures to verify that all packages are from a trusted source.
deb https://enterprise.proxmox.com/debian/pbs buster pbs-enterprise
If you install Proxmox Backup Server from an official ISO image, the key for
verification is already installed.
If you install Proxmox Backup Server on top of Debian, download and install the
key with the following commands:
.. code-block:: console
# wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
Verify the SHA512 checksum afterwards with:
.. code-block:: console
# sha512sum /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
The output should be:
.. code-block:: console
acca6f416917e8e11490a08a1e2842d500b3a5d9f322c6319db0927b2901c3eae23cfb5cd5df6facf2b57399d3cfa52ad7769ebdd75d9b204549ca147da52626 /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
and the md5sum:
.. code-block:: console
# md5sum /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
Here, the output should be:
.. code-block:: console
f3f6c5a3a67baf38ad178e5ff1ee270c /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
.. comment
`Proxmox Backup`_ Enterprise Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This will be the default, stable, and recommended repository. It is available for
all `Proxmox Backup`_ subscription users. It contains the most stable packages,
and is suitable for production use. The ``pbs-enterprise`` repository is
enabled by default:
.. note:: During the Proxmox Backup beta phase only one repository (pbstest)
will be available.
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list.d/pbs-enterprise.list``
deb https://enterprise.proxmox.com/debian/pbs buster pbs-enterprise
As soon as updates are available, the superuser (``root@pam`` user) is
notified via email about the available new packages. On the GUI, the
change-log of each package can be viewed (if available), showing all
details of the update. So you will never miss important security
fixes.
To never miss important security fixes, the superuser (``root@pam`` user) is
notified via email about new packages as soon as they are available. The
change-log and details of each package can be viewed in the GUI (if available).
Please note that you need a valid subscription key to access this
repository. We offer different support levels, and you can find further
details at https://www.proxmox.com/en/proxmox-backup/pricing.
Please note that you need a valid subscription key to access this
repository. More information regarding subscription levels and pricing can be
found at https://www.proxmox.com/en/proxmox-backup/pricing.
.. note:: You can disable this repository by commenting out the above
line using a `#` (at the start of the line). This prevents error
messages if you do not have a subscription key. Please configure the
``pbs-no-subscription`` repository in that case.
`Proxmox Backup`_ No-Subscription Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
`Proxmox Backup`_ No-Subscription Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As the name suggests, you do not need a subscription key to access
this repository. It can be used for testing and non-production
use. Its not recommended to run on production servers, as these
packages are not always heavily tested and validated.
As the name suggests, you do not need a subscription key to access
this repository. It can be used for testing and non-production
use. It is not recommended to use it on production servers, because these
packages are not always heavily tested and validated.
We recommend configuring this repository in ``/etc/apt/sources.list``.
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list``
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list``
deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib
deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib
# PBS pbs-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/bps buster pbs-no-subscription
# PBS pbs-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pbs buster pbs-no-subscription
# security updates
deb http://security.debian.org/debian-security buster/updates main contrib
# security updates
deb http://security.debian.org/debian-security buster/updates main contrib
`Proxmox Backup`_ Test Repository
`Proxmox Backup`_ Beta Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Finally, there is a repository called ``pbstest``. This one contains the
latest packages and is heavily used by developers to test new
During the public beta, there is a repository called ``pbstest``. This one
contains the latest packages and is heavily used by developers to test new
features.
.. warning:: the ``pbstest`` repository should (as the name implies)
only be used for testing new features or bug fixes.
.. .. warning:: the ``pbstest`` repository should (as the name implies)
only be used to test new features or bug fixes.
As usual, you can configure this using ``/etc/apt/sources.list`` by
adding the following line:
You can configure this using ``/etc/apt/sources.list`` by adding the following
line:
.. code-block:: sources.list
:caption: sources.list entry for ``pbstest``
deb http://download.proxmox.com/debian/bps buster pbstest
deb http://download.proxmox.com/debian/pbs buster pbstest
If you installed Proxmox Backup Server from the official beta ISO, you
should already have this repository configured in
``/etc/apt/sources.list.d/pbstest-beta.list``.

View File

@ -24,7 +24,7 @@ This daemon is normally started and managed as ``systemd`` service::
systemctl status proxmox-backup-proxy
For debugging, you can start the daemon in forground using::
For debugging, you can start the daemon in foreground using::
proxmox-backup-proxy

View File

@ -1,15 +1,15 @@
Description
^^^^^^^^^^^
``pxar`` is a command line utility used to create and manipulate archives in the
``pxar`` is a command line utility to create and manipulate archives in the
:ref:`pxar-format`.
It is inspired by `casync file archive format
<http://0pointer.net/blog/casync-a-tool-for-distributing-file-system-images.html>`_,
which has a similar use-case.
The ``.pxar`` format is adapted to fulfill the specific needs of the proxmox
backup server, for example efficient storage of hardlinks.
which caters to a similar use-case.
The ``.pxar`` format is adapted to fulfill the specific needs of the Proxmox
Backup Server, for example, efficient storage of hardlinks.
The format is designed to reduce storage space needed on the server by achieving
high de-duplication.
a high level of de-duplication.
Creating an Archive
^^^^^^^^^^^^^^^^^^^
@ -18,56 +18,55 @@ Run the following command to create an archive of a folder named ``source``:
.. code-block:: console
# pxar create archive.pxar source
# pxar create archive.pxar /path/to/source
This will create a new archive called ``archive.pxar`` from the contents of the
This will create a new archive called ``archive.pxar`` with the contents of the
``source`` folder.
.. NOTE:: ``pxar`` will not overwrite any existing archives. If an archive with
the same name is already present in the target folder, the creation will
fail.
By default, ``pxar`` will skip certain mountpoints and not follow device
By default, ``pxar`` will skip certain mountpoints and will not follow device
boundaries. This design decision is based on the primary use case of creating
archives for backups, where it makes no sense to store the content of certain
archives for backups. It is sensible to not back up the contents of certain
temporary or system specific files.
In order to alter this behavior and follow device boundaries, use the
To alter this behavior and follow device boundaries, use the
``--all-file-systems`` flag.
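For example (the archive name and source path are illustrative):

.. code-block:: console

# pxar create archive.pxar /path/to/source --all-file-systems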
It is possible to exclude certain files and/or folders from the archive by
passing glob match patterns as additional parameters. Whenever a file is matched
by one of the patterns, you will get a warning saying that this file is skipped
and therefore not included in the archive.
passing the ``--exclude`` parameter with ``gitignore``\-style match patterns.
For example, you can exclude all files ending in ``.txt`` from the archive
by running:
.. code-block:: console
# pxar create archive.pxar source '**/*.txt'
# pxar create archive.pxar /path/to/source --exclude '**/*.txt'
Be aware that the shell itself will try to expand all of the glob patterns before
invoking ``pxar``.
In order to avoid this, all globs have to be quoted correctly.
It is also possible to pass a list of match pattern to fulfill more complex
file exclusion/inclusion behavior, although it is recommended to use the
It is possible to pass the ``--exclude`` parameter multiple times, in order to
match more than one pattern. This allows you to use more complex
file exclusion/inclusion behavior. However, it is recommended to use
``.pxarexclude`` files instead for such cases.
For example, you might want to exclude all ``.txt`` files except for a specific
one from the archive. This is achieved via the negated match pattern, prefixed
by ``!``.
All the glob pattern are relative to the ``source`` directory.
All the glob patterns are relative to the ``source`` directory.
.. code-block:: console
# pxar create archive.pxar source '**/*.txt' '!/folder/file.txt'
# pxar create archive.pxar /path/to/source --exclude '**/*.txt' --exclude '!/folder/file.txt'
.. NOTE:: The order of the glob match patterns matters as later ones win over
.. NOTE:: The order of the glob match patterns matters as later ones override
previous ones. Permutations of the same patterns lead to different results.
``pxar`` will store the list of glob match patterns passed as parameters via the
command line in a file called ``.pxarexclude-cli`` and store it at the root of
command line in a file called ``.pxarexclude-cli`` and stores it at the root of
the archive.
If a file with this name is already present in the source folder during archive
creation, this file is not included in the archive and the file containing the
@ -79,9 +78,9 @@ It is possible to create and place these files in any directory of the filesyste
tree.
These files must contain one pattern per line; again, later patterns win
over previous ones.
The patterns control file exclusion of files present within the given directory
The patterns control file exclusions of files present within the given directory
or further below it in the tree.
The behaviour is the same as described in :ref:`creating-backups`.
The behavior is the same as described in :ref:`creating-backups`.
Extracting an Archive
^^^^^^^^^^^^^^^^^^^^^
@ -96,7 +95,7 @@ with the following command:
If no target is provided, the content of the archive is extracted to the current
working directory.
In order to restore only part of an archive or single files and/or folders,
In order to restore only parts of an archive, single files and/or folders,
it is possible to pass the corresponding glob match patterns as additional
parameters or use the patterns stored in a file:
@ -109,8 +108,8 @@ sub-folders in the archive ``etc.pxar`` to the target ``/restore/target/etc``.
A path to the file containing match patterns can be specified using the
``--files-from`` parameter.
List the Content of an Archive
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
List the Contents of an Archive
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To display the files and directories contained in an archive ``archive.pxar``,
run the following command:
@ -126,7 +125,7 @@ Mounting an Archive
^^^^^^^^^^^^^^^^^^^
``pxar`` allows you to mount and inspect the contents of an archive via _`FUSE`.
In order to mount an archive named ``archive.pxar`` to the mountpoint ``mnt``,
In order to mount an archive named ``archive.pxar`` to the mountpoint ``/mnt``,
run the command:
.. code-block:: console

View File

@ -1,5 +1,5 @@
Host System Administration
--------------------------
==========================
`Proxmox Backup`_ is based on the famous Debian_ Linux
distribution. That means that you have access to the whole world of
@ -23,8 +23,4 @@ either explain things which are different on `Proxmox Backup`_, or
tasks which are commonly used on `Proxmox Backup`_. For other topics,
please refer to the standard Debian documentation.
ZFS
~~~
.. todo:: Add local ZFS admin guide (local.zfs.adoc)
.. include:: local-zfs.rst

6
docs/todos.rst Normal file
View File

@ -0,0 +1,6 @@
Documentation Todo List
=======================
This is an auto-generated list of the todo references in the documentation.
.. todolist::

View File

@ -7,7 +7,7 @@ DYNAMIC_UNITS := \
proxmox-backup.service \
proxmox-backup-proxy.service
all: $(UNITS) $(DYNAMIC_UNITS)
all: $(UNITS) $(DYNAMIC_UNITS) pbstest-beta.list
clean:
rm -f $(DYNAMIC_UNITS)

1
etc/pbstest-beta.list Normal file
View File

@ -0,0 +1 @@
deb http://download.proxmox.com/debian/pbs buster pbstest

View File

@ -2,7 +2,7 @@
Description=Proxmox Backup API Proxy Server
Wants=network-online.target
After=network.target
Requires=proxmox-backup.service
Wants=proxmox-backup.service
After=proxmox-backup.service
[Service]

View File

@ -1,4 +1,4 @@
use failure::*;
use anyhow::{Error};
// chacha20-poly1305

View File

@ -1,4 +1,4 @@
use failure::*;
use anyhow::{Error};
use proxmox::api::{*, cli::*};
@ -49,7 +49,7 @@ fn hello_command(
}
#[api(input: { properties: {} })]
/// Quit command. Exit the programm.
/// Quit command. Exit the program.
///
/// Returns: nothing
fn quit_command() -> Result<(), Error> {
@ -83,7 +83,8 @@ fn main() -> Result<(), Error> {
let args = shellword_split(&line)?;
let _ = handle_command(helper.cmd_def(), "", args, None);
let rpcenv = CliEnvironment::new();
let _ = handle_command(helper.cmd_def(), "", args, rpcenv, None);
rl.add_history_entry(line);
}

View File

@ -1,9 +1,10 @@
use std::io::Write;
use failure::*;
use anyhow::{Error};
use chrono::{DateTime, Utc};
use proxmox_backup::api2::types::Userid;
use proxmox_backup::client::{HttpClient, HttpClientOptions, BackupReader};
pub struct DummyWriter {
@ -27,7 +28,7 @@ async fn run() -> Result<(), Error> {
let host = "localhost";
let username = "root@pam";
let username = Userid::root_userid();
let options = HttpClientOptions::new()
.interactive(true)
@ -44,8 +45,8 @@ async fn run() -> Result<(), Error> {
let mut bytes = 0;
for _ in 0..100 {
let writer = DummyWriter { bytes: 0 };
let writer = client.speedtest(writer).await?;
let mut writer = DummyWriter { bytes: 0 };
client.speedtest(&mut writer).await?;
println!("Received {} bytes", writer.bytes);
bytes += writer.bytes;
}
@ -59,8 +60,7 @@ async fn run() -> Result<(), Error> {
Ok(())
}
#[tokio::main]
async fn main() {
fn main() {
if let Err(err) = proxmox_backup::tools::runtime::main(run()) {
eprintln!("ERROR: {}", err);
}

View File

@ -1,4 +1,4 @@
use failure::*;
use anyhow::{bail, Error};
use std::thread;
use std::path::PathBuf;
@ -16,7 +16,7 @@ use std::io::Write;
// tar: dyntest1/testfile7.dat: File shrank by 2833252864 bytes; padding with zeros
// # pxar create test.pxar ./dyntest1/
// Error: detected shrinked file "./dyntest1/testfile0.dat" (22020096 < 12679380992)
// Error: detected shrunk file "./dyntest1/testfile0.dat" (22020096 < 12679380992)
fn create_large_file(path: PathBuf) {

View File

@ -2,7 +2,7 @@ use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
use failure::*;
use anyhow::{Error};
use futures::future::TryFutureExt;
use futures::stream::Stream;
use tokio::net::TcpStream;

View File

@ -2,7 +2,7 @@ use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
use failure::*;
use anyhow::{format_err, Error};
use futures::future::TryFutureExt;
use futures::stream::Stream;

View File

@ -1,6 +1,6 @@
use std::sync::Arc;
use failure::*;
use anyhow::{format_err, Error};
use futures::*;
use hyper::{Request, Response, Body};
use openssl::ssl::{SslMethod, SslAcceptor, SslFiletype};

View File

@ -1,4 +1,4 @@
use failure::*;
use anyhow::{Error};
use futures::*;
// Simple H2 server to test H2 speed with h2client.rs

View File

@ -2,7 +2,7 @@ extern crate proxmox_backup;
// also see https://www.johndcook.com/blog/standard_deviation/
use failure::*;
use anyhow::{Error};
use std::io::{Read, Write};
use proxmox_backup::backup::*;

View File

@ -1,4 +1,4 @@
use failure::*;
use anyhow::{Error};
use futures::*;
extern crate proxmox_backup;

View File

@ -1,13 +1,14 @@
use failure::*;
use anyhow::{Error};
use proxmox_backup::api2::types::Userid;
use proxmox_backup::client::*;
async fn upload_speed() -> Result<usize, Error> {
async fn upload_speed() -> Result<f64, Error> {
let host = "localhost";
let datastore = "store2";
let username = "root@pam";
let username = Userid::root_userid();
let options = HttpClientOptions::new()
.interactive(true)
@ -17,10 +18,10 @@ async fn upload_speed() -> Result<usize, Error> {
let backup_time = chrono::Utc::now();
let client = BackupWriter::start(client, datastore, "host", "speedtest", backup_time, false).await?;
let client = BackupWriter::start(client, None, datastore, "host", "speedtest", backup_time, false).await?;
println!("start upload speed test");
let res = client.upload_speedtest().await?;
let res = client.upload_speedtest(true).await?;
Ok(res)
}

View File

@ -1,13 +1,14 @@
mod access;
pub mod access;
pub mod admin;
pub mod backup;
pub mod config;
pub mod node;
pub mod reader;
mod subscription;
pub mod status;
pub mod types;
pub mod version;
pub mod pull;
mod helpers;
use proxmox::api::router::SubdirMap;
use proxmox::api::Router;
@ -23,7 +24,7 @@ pub const SUBDIRS: SubdirMap = &[
("nodes", &NODES_ROUTER),
("pull", &pull::ROUTER),
("reader", &reader::ROUTER),
("subscription", &subscription::ROUTER),
("status", &status::ROUTER),
("version", &version::ROUTER),
];


@ -1,51 +1,107 @@
use failure::*;
use anyhow::{bail, format_err, Error};
use serde_json::{json, Value};
use proxmox::api::api;
use proxmox::api::{api, RpcEnvironment, Permission};
use proxmox::api::router::{Router, SubdirMap};
use proxmox::sortable;
use proxmox::{sortable, identity};
use proxmox::{http_err, list_subdirs_api_method};
use crate::tools;
use crate::tools::ticket::*;
use crate::auth_helpers::*;
use crate::api2::types::*;
fn authenticate_user(username: &str, password: &str) -> Result<(), Error> {
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::acl::{PRIVILEGES, PRIV_PERMISSIONS_MODIFY};
pub mod user;
pub mod domain;
pub mod acl;
pub mod role;
/// returns Ok(true) if a ticket has to be created
/// and Ok(false) if not
fn authenticate_user(
userid: &Userid,
password: &str,
path: Option<String>,
privs: Option<String>,
port: Option<u16>,
) -> Result<bool, Error> {
let user_info = CachedUserInfo::new()?;
if !user_info.is_active_user(&userid) {
bail!("user account disabled or expired.");
}
let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
if password.starts_with("PBS:") {
if let Ok((_age, Some(ticket_username))) = tools::ticket::verify_rsa_ticket(public_auth_key(), "PBS", password, None, -300, ticket_lifetime) {
if ticket_username == username {
return Ok(());
if *userid == ticket_username {
return Ok(true);
} else {
bail!("ticket login failed - wrong username");
bail!("ticket login failed - wrong userid");
}
}
} else if password.starts_with("PBSTERM:") {
if path.is_none() || privs.is_none() || port.is_none() {
bail!("cannot check termnal ticket without path, priv and port");
}
let path = path.unwrap();
let privilege_name = privs.unwrap();
let port = port.unwrap();
if let Ok((_age, _data)) =
tools::ticket::verify_term_ticket(public_auth_key(), &userid, &path, port, password)
{
for (name, privilege) in PRIVILEGES {
if *name == privilege_name {
let mut path_vec = Vec::new();
for part in path.split('/') {
if part != "" {
path_vec.push(part);
}
}
user_info.check_privs(userid, &path_vec, *privilege, false)?;
return Ok(false);
}
}
bail!("No such privilege");
}
}
if username == "root@pam" {
let mut auth = pam::Authenticator::with_password("proxmox-backup-auth").unwrap();
auth.get_handler().set_credentials("root", password);
auth.authenticate()?;
return Ok(());
}
bail!("inavlid credentials");
let _ = crate::auth::authenticate_user(userid, password)?;
Ok(true)
}
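The component-splitting loop in the terminal-ticket branch above can be written more compactly with iterator adapters; a behavior-preserving sketch:
// Sketch: same result as the path_vec loop, skipping empty components.
let path_vec: Vec<&str> = path.split('/').filter(|part| !part.is_empty()).collect();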
#[api(
input: {
properties: {
username: {
type: String,
description: "User name.",
max_length: 64,
type: Userid,
},
password: {
schema: PASSWORD_SCHEMA,
},
path: {
type: String,
description: "The secret password. This can also be a valid ticket.",
description: "Path for verifying terminal tickets.",
optional: true,
},
privs: {
type: String,
description: "Privilege for verifying terminal tickets.",
optional: true,
},
port: {
type: Integer,
description: "Port for verifying terminal tickets.",
optional: true,
},
},
},
@ -66,15 +122,23 @@ fn authenticate_user(username: &str, password: &str) -> Result<(), Error> {
},
},
protected: true,
access: {
permission: &Permission::World,
},
)]
/// Create or verify authentication ticket.
///
/// Returns: An authentication ticket with additional infos.
fn create_ticket(username: String, password: String) -> Result<Value, Error> {
match authenticate_user(&username, &password) {
Ok(_) => {
let ticket = assemble_rsa_ticket( private_auth_key(), "PBS", Some(&username), None)?;
fn create_ticket(
username: Userid,
password: String,
path: Option<String>,
privs: Option<String>,
port: Option<u16>,
) -> Result<Value, Error> {
match authenticate_user(&username, &password, path, privs, port) {
Ok(true) => {
let ticket = assemble_rsa_ticket(private_auth_key(), "PBS", Some(&username), None)?;
let token = assemble_csrf_prevention_token(csrf_secret(), &username);
@ -86,21 +150,84 @@ fn create_ticket(username: String, password: String) -> Result<Value, Error> {
"CSRFPreventionToken": token,
}))
}
Ok(false) => Ok(json!({
"username": username,
})),
Err(err) => {
let client_ip = "unknown"; // $rpcenv->get_client_ip() || '';
log::error!("authentication failure; rhost={} user={} msg={}", client_ip, username, err.to_string());
Err(http_err!(UNAUTHORIZED, "permission check failed.".into()))
Err(http_err!(UNAUTHORIZED, "permission check failed."))
}
}
}
#[api(
input: {
properties: {
userid: {
type: Userid,
},
password: {
schema: PASSWORD_SCHEMA,
},
},
},
access: {
description: "Anybody is allowed to change there own password. In addition, users with 'Permissions:Modify' privilege may change any password.",
permission: &Permission::Anybody,
},
)]
/// Change user password
///
/// Each user is allowed to change their own password. A superuser
/// can change any password.
fn change_password(
userid: Userid,
password: String,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let current_user: Userid = rpcenv
.get_user()
.ok_or_else(|| format_err!("unknown user"))?
.parse()?;
let mut allowed = userid == current_user;
if userid == "root@pam" { allowed = true; }
if !allowed {
let user_info = CachedUserInfo::new()?;
let privs = user_info.lookup_privs(&current_user, &[]);
if (privs & PRIV_PERMISSIONS_MODIFY) != 0 { allowed = true; }
}
if !allowed {
bail!("you are not authorized to change the password.");
}
let authenticator = crate::auth::lookup_authenticator(userid.realm())?;
authenticator.store_password(userid.name(), &password)?;
Ok(Value::Null)
}
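The privilege test above is a plain bitmask check; a minimal sketch with assumed bit values (the real PRIV_* constants live in crate::config::acl and differ):
// Assumed illustrative values - the real constants are defined elsewhere.
const PRIV_SYS_AUDIT: u64 = 1 << 0;
const PRIV_PERMISSIONS_MODIFY: u64 = 1 << 4;
let privs = PRIV_SYS_AUDIT | PRIV_PERMISSIONS_MODIFY;
assert!((privs & PRIV_PERMISSIONS_MODIFY) != 0); // user may modify permissions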
#[sortable]
const SUBDIRS: SubdirMap = &[
const SUBDIRS: SubdirMap = &sorted!([
("acl", &acl::ROUTER),
(
"password", &Router::new()
.put(&API_METHOD_CHANGE_PASSWORD)
),
(
"ticket", &Router::new()
.post(&API_METHOD_CREATE_TICKET)
)
];
),
("domains", &domain::ROUTER),
("roles", &role::ROUTER),
("users", &user::ROUTER),
]);
pub const ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))

src/api2/access/acl.rs (new file, 229 lines)

@ -0,0 +1,229 @@
use anyhow::{bail, Error};
use ::serde::{Deserialize, Serialize};
use proxmox::api::{api, Router, RpcEnvironment, Permission};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::config::acl;
use crate::config::acl::{Role, PRIV_SYS_AUDIT, PRIV_PERMISSIONS_MODIFY};
#[api(
properties: {
propagate: {
schema: ACL_PROPAGATE_SCHEMA,
},
path: {
schema: ACL_PATH_SCHEMA,
},
ugid_type: {
schema: ACL_UGID_TYPE_SCHEMA,
},
ugid: {
type: String,
description: "User or Group ID.",
},
roleid: {
type: Role,
}
}
)]
#[derive(Serialize, Deserialize)]
/// ACL list entry.
pub struct AclListItem {
path: String,
ugid: String,
ugid_type: String,
propagate: bool,
roleid: String,
}
fn extract_acl_node_data(
node: &acl::AclTreeNode,
path: &str,
list: &mut Vec<AclListItem>,
exact: bool,
) {
for (user, roles) in &node.users {
for (role, propagate) in roles {
list.push(AclListItem {
path: if path.is_empty() { String::from("/") } else { path.to_string() },
propagate: *propagate,
ugid_type: String::from("user"),
ugid: user.to_string(),
roleid: role.to_string(),
});
}
}
for (group, roles) in &node.groups {
for (role, propagate) in roles {
list.push(AclListItem {
path: if path.is_empty() { String::from("/") } else { path.to_string() },
propagate: *propagate,
ugid_type: String::from("group"),
ugid: group.to_string(),
roleid: role.to_string(),
});
}
}
if exact {
return;
}
for (comp, child) in &node.children {
let new_path = format!("{}/{}", path, comp);
extract_acl_node_data(child, &new_path, list, exact);
}
}
#[api(
input: {
properties: {
path: {
schema: ACL_PATH_SCHEMA,
optional: true,
},
exact: {
description: "If set, returns only ACL for the exact path.",
type: bool,
optional: true,
default: false,
},
},
},
returns: {
description: "ACL entry list.",
type: Array,
items: {
type: AclListItem,
}
},
access: {
permission: &Permission::Privilege(&["access", "acl"], PRIV_SYS_AUDIT, false),
},
)]
/// Read Access Control List (ACLs).
pub fn read_acl(
path: Option<String>,
exact: bool,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<AclListItem>, Error> {
//let auth_user = rpcenv.get_user().unwrap();
let (mut tree, digest) = acl::config()?;
let mut list: Vec<AclListItem> = Vec::new();
if let Some(path) = &path {
if let Some(node) = &tree.find_node(path) {
extract_acl_node_data(&node, path, &mut list, exact);
}
} else {
extract_acl_node_data(&tree.root, "", &mut list, exact);
}
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
#[api(
protected: true,
input: {
properties: {
path: {
schema: ACL_PATH_SCHEMA,
},
role: {
type: Role,
},
propagate: {
optional: true,
schema: ACL_PROPAGATE_SCHEMA,
},
userid: {
optional: true,
type: Userid,
},
group: {
optional: true,
schema: PROXMOX_GROUP_ID_SCHEMA,
},
delete: {
optional: true,
description: "Remove permissions (instead of adding it).",
type: bool,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["access", "acl"], PRIV_PERMISSIONS_MODIFY, false),
},
)]
/// Update Access Control List (ACLs).
pub fn update_acl(
path: String,
role: String,
propagate: Option<bool>,
userid: Option<Userid>,
group: Option<String>,
delete: Option<bool>,
digest: Option<String>,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let _lock = open_file_locked(acl::ACL_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut tree, expected_digest) = acl::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let propagate = propagate.unwrap_or(true);
let delete = delete.unwrap_or(false);
if let Some(ref _group) = group {
bail!("parameter 'group' - groups are currently not supported.");
} else if let Some(ref userid) = userid {
if !delete { // Note: we allow deleting non-existent users
let user_cfg = crate::config::user::cached_config()?;
if user_cfg.sections.get(&userid.to_string()).is_none() {
bail!("no such user.");
}
}
} else {
bail!("missing 'userid' or 'group' parameter.");
}
if !delete { // Note: we allow deleting entries with an invalid path
acl::check_acl_path(&path)?;
}
if let Some(userid) = userid {
if delete {
tree.delete_user_role(&path, &userid, &role);
} else {
tree.insert_user_role(&path, &userid, &role, propagate);
}
} else if let Some(group) = group {
if delete {
tree.delete_group_role(&path, &group, &role);
} else {
tree.insert_group_role(&path, &group, &role, propagate);
}
}
acl::save_config(&tree)?;
Ok(())
}
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_READ_ACL)
.put(&API_METHOD_UPDATE_ACL);

src/api2/access/domain.rs (new file, 47 lines)
View File

@ -0,0 +1,47 @@
use anyhow::{Error};
use serde_json::{json, Value};
use proxmox::api::{api, Permission};
use proxmox::api::router::Router;
use crate::api2::types::*;
#[api(
returns: {
description: "List of realms.",
type: Array,
items: {
type: Object,
description: "User configuration (without password).",
properties: {
realm: {
description: "Realm ID.",
type: String,
},
comment: {
schema: SINGLE_LINE_COMMENT_SCHEMA,
optional: true,
},
default: {
description: "Default realm.",
type: bool,
}
},
}
},
access: {
description: "Anyone can access this, because we need that list for the login box (before the user is authenticated).",
permission: &Permission::World,
}
)]
/// Authentication domain/realm index.
fn list_domains() -> Result<Value, Error> {
let mut list = Vec::new();
list.push(json!({ "realm": "pam", "comment": "Linux PAM standard authentication", "default": true }));
list.push(json!({ "realm": "pbs", "comment": "Proxmox Backup authentication server" }));
Ok(list.into())
}
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_DOMAINS);

src/api2/access/role.rs (new file, 58 lines)

@ -0,0 +1,58 @@
use anyhow::Error;
use serde_json::{json, Value};
use proxmox::api::{api, Permission};
use proxmox::api::router::Router;
use crate::api2::types::*;
use crate::config::acl::{Role, ROLE_NAMES, PRIVILEGES};
#[api(
returns: {
description: "List of roles.",
type: Array,
items: {
type: Object,
description: "User name with description.",
properties: {
roleid: {
type: Role,
},
privs: {
type: Array,
description: "List of Privileges",
items: {
type: String,
description: "A Privilege",
},
},
comment: {
schema: SINGLE_LINE_COMMENT_SCHEMA,
optional: true,
},
},
}
},
access: {
permission: &Permission::Anybody,
}
)]
/// Role list
fn list_roles() -> Result<Value, Error> {
let mut list = Vec::new();
for (role, (privs, comment)) in ROLE_NAMES.iter() {
let mut priv_list = Vec::new();
for (name, privilege) in PRIVILEGES.iter() {
if privs & privilege > 0 {
priv_list.push(name.clone());
}
}
list.push(json!({ "roleid": role, "privs": priv_list, "comment": comment }));
}
Ok(list.into())
}
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_ROLES);

src/api2/access/user.rs (new file, 294 lines)

@ -0,0 +1,294 @@
use anyhow::{bail, Error};
use serde_json::Value;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
use proxmox::api::schema::{Schema, StringSchema};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::config::user;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_PERMISSIONS_MODIFY};
pub const PBS_PASSWORD_SCHEMA: Schema = StringSchema::new("User Password.")
.format(&PASSWORD_FORMAT)
.min_length(5)
.max_length(64)
.schema();
#[api(
input: {
properties: {},
},
returns: {
description: "List users (with config digest).",
type: Array,
items: { type: user::User },
},
access: {
permission: &Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false),
},
)]
/// List all users
pub fn list_users(
_param: Value,
_info: &ApiMethod,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<user::User>, Error> {
let (config, digest) = user::config()?;
let list = config.convert_to_typed_array("user")?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
#[api(
protected: true,
input: {
properties: {
userid: {
type: Userid,
},
comment: {
schema: SINGLE_LINE_COMMENT_SCHEMA,
optional: true,
},
password: {
schema: PBS_PASSWORD_SCHEMA,
optional: true,
},
enable: {
schema: user::ENABLE_USER_SCHEMA,
optional: true,
},
expire: {
schema: user::EXPIRE_USER_SCHEMA,
optional: true,
},
firstname: {
schema: user::FIRST_NAME_SCHEMA,
optional: true,
},
lastname: {
schema: user::LAST_NAME_SCHEMA,
optional: true,
},
email: {
schema: user::EMAIL_SCHEMA,
optional: true,
},
},
},
access: {
permission: &Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
},
)]
/// Create new user.
pub fn create_user(password: Option<String>, param: Value) -> Result<(), Error> {
let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let user: user::User = serde_json::from_value(param)?;
let (mut config, _digest) = user::config()?;
if let Some(_) = config.sections.get(user.userid.as_str()) {
bail!("user '{}' already exists.", user.userid);
}
let authenticator = crate::auth::lookup_authenticator(&user.userid.realm())?;
config.set_data(user.userid.as_str(), "user", &user)?;
user::save_config(&config)?;
if let Some(password) = password {
authenticator.store_password(user.userid.name(), &password)?;
}
Ok(())
}
#[api(
input: {
properties: {
userid: {
type: Userid,
},
},
},
returns: {
description: "The user configuration (with config digest).",
type: user::User,
},
access: {
permission: &Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false),
},
)]
/// Read user configuration data.
pub fn read_user(userid: Userid, mut rpcenv: &mut dyn RpcEnvironment) -> Result<user::User, Error> {
let (config, digest) = user::config()?;
let user = config.lookup("user", userid.as_str())?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(user)
}
#[api(
protected: true,
input: {
properties: {
userid: {
type: Userid,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
password: {
schema: PBS_PASSWORD_SCHEMA,
optional: true,
},
enable: {
schema: user::ENABLE_USER_SCHEMA,
optional: true,
},
expire: {
schema: user::EXPIRE_USER_SCHEMA,
optional: true,
},
firstname: {
schema: user::FIRST_NAME_SCHEMA,
optional: true,
},
lastname: {
schema: user::LAST_NAME_SCHEMA,
optional: true,
},
email: {
schema: user::EMAIL_SCHEMA,
optional: true,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
},
)]
/// Update user configuration.
pub fn update_user(
userid: Userid,
comment: Option<String>,
enable: Option<bool>,
expire: Option<i64>,
password: Option<String>,
firstname: Option<String>,
lastname: Option<String>,
email: Option<String>,
digest: Option<String>,
) -> Result<(), Error> {
let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, expected_digest) = user::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let mut data: user::User = config.lookup("user", userid.as_str())?;
if let Some(comment) = comment {
let comment = comment.trim().to_string();
if comment.is_empty() {
data.comment = None;
} else {
data.comment = Some(comment);
}
}
if let Some(enable) = enable {
data.enable = if enable { None } else { Some(false) };
}
if let Some(expire) = expire {
data.expire = if expire > 0 { Some(expire) } else { None };
}
if let Some(password) = password {
let authenticator = crate::auth::lookup_authenticator(userid.realm())?;
authenticator.store_password(userid.name(), &password)?;
}
if let Some(firstname) = firstname {
data.firstname = if firstname.is_empty() { None } else { Some(firstname) };
}
if let Some(lastname) = lastname {
data.lastname = if lastname.is_empty() { None } else { Some(lastname) };
}
if let Some(email) = email {
data.email = if email.is_empty() { None } else { Some(email) };
}
config.set_data(userid.as_str(), "user", &data)?;
user::save_config(&config)?;
Ok(())
}
#[api(
protected: true,
input: {
properties: {
userid: {
type: Userid,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
},
)]
/// Remove a user from the configuration file.
pub fn delete_user(userid: Userid, digest: Option<String>) -> Result<(), Error> {
let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, expected_digest) = user::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
match config.sections.get(userid.as_str()) {
Some(_) => { config.sections.remove(userid.as_str()); },
None => bail!("user '{}' does not exist.", userid),
}
user::save_config(&config)?;
Ok(())
}
const ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_READ_USER)
.put(&API_METHOD_UPDATE_USER)
.delete(&API_METHOD_DELETE_USER);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_USERS)
.post(&API_METHOD_CREATE_USER)
.match_all("userid", &ITEM_ROUTER);


@ -2,9 +2,11 @@ use proxmox::api::router::{Router, SubdirMap};
use proxmox::list_subdirs_api_method;
pub mod datastore;
pub mod sync;
const SUBDIRS: SubdirMap = &[
("datastore", &datastore::ROUTER)
("datastore", &datastore::ROUTER),
("sync", &sync::ROUTER)
];
pub const ROUTER: Router = Router::new()

File diff suppressed because it is too large

src/api2/admin/sync.rs (new file, 138 lines)

@ -0,0 +1,138 @@
use std::collections::HashMap;
use anyhow::{Error};
use serde_json::Value;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment};
use proxmox::api::router::SubdirMap;
use proxmox::{list_subdirs_api_method, sortable};
use crate::api2::types::*;
use crate::api2::pull::{get_pull_parameters};
use crate::config::sync::{self, SyncJobStatus, SyncJobConfig};
use crate::server::{self, TaskListInfo, WorkerTask};
use crate::tools::systemd::time::{
parse_calendar_event, compute_next_event};
#[api(
input: {
properties: {},
},
returns: {
description: "List configured jobs and their status.",
type: Array,
items: { type: sync::SyncJobStatus },
},
)]
/// List all sync jobs
pub fn list_sync_jobs(
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<SyncJobStatus>, Error> {
let (config, digest) = sync::config()?;
let mut list: Vec<SyncJobStatus> = config.convert_to_typed_array("sync")?;
let mut last_tasks: HashMap<String, &TaskListInfo> = HashMap::new();
let tasks = server::read_task_list()?;
for info in tasks.iter() {
let worker_id = match &info.upid.worker_id {
Some(id) => id,
_ => { continue; },
};
if let Some(last) = last_tasks.get(worker_id) {
if last.upid.starttime < info.upid.starttime {
last_tasks.insert(worker_id.to_string(), &info);
}
} else {
last_tasks.insert(worker_id.to_string(), &info);
}
}
for job in &mut list {
let mut last = 0;
if let Some(task) = last_tasks.get(&job.id) {
job.last_run_upid = Some(task.upid_str.clone());
if let Some((endtime, status)) = &task.state {
job.last_run_state = Some(String::from(status));
job.last_run_endtime = Some(*endtime);
last = *endtime;
}
}
job.next_run = (|| -> Option<i64> {
let schedule = job.schedule.as_ref()?;
let event = parse_calendar_event(&schedule).ok()?;
compute_next_event(&event, last, false).ok()
})();
}
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
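The immediately-invoked closure above is a handy idiom for using ? on Options inside a function whose own return type is a Result; the same logic as a named helper (sketch, reusing the systemd time helpers imported at the top of this file):
// Sketch: compute the next scheduled run, or None if unschedulable.
fn next_run(schedule: Option<&str>, last: i64) -> Option<i64> {
    let schedule = schedule?;                         // no schedule -> None
    let event = parse_calendar_event(schedule).ok()?; // parse error -> None
    compute_next_event(&event, last, false).ok()
}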
#[api(
input: {
properties: {
id: {
schema: JOB_ID_SCHEMA,
}
}
}
)]
/// Runs the sync job manually.
async fn run_sync_job(
id: String,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let (config, _digest) = sync::config()?;
let sync_job: SyncJobConfig = config.lookup("sync", &id)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let delete = sync_job.remove_vanished.unwrap_or(true);
let (client, src_repo, tgt_store) = get_pull_parameters(&sync_job.store, &sync_job.remote, &sync_job.remote_store).await?;
let upid_str = WorkerTask::spawn("syncjob", Some(id.clone()), userid, false, move |worker| async move {
worker.log(format!("sync job '{}' start", &id));
crate::client::pull::pull_store(
&worker,
&client,
&src_repo,
tgt_store.clone(),
delete,
Userid::backup_userid().clone(),
).await?;
worker.log(format!("sync job '{}' end", &id));
Ok(())
})?;
Ok(upid_str)
}
#[sortable]
const SYNC_INFO_SUBDIRS: SubdirMap = &[
(
"run",
&Router::new()
.post(&API_METHOD_RUN_SYNC_JOB)
),
];
const SYNC_INFO_ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SYNC_INFO_SUBDIRS))
.subdirs(SYNC_INFO_SUBDIRS);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_SYNC_JOBS)
.match_all("id", &SYNC_INFO_ROUTER);


@ -1,4 +1,4 @@
use failure::*;
use anyhow::{bail, format_err, Error};
use futures::*;
use hyper::header::{HeaderValue, UPGRADE};
use hyper::http::request::Parts;
@ -6,14 +6,17 @@ use hyper::{Body, Response, StatusCode};
use serde_json::{json, Value};
use proxmox::{sortable, identity, list_subdirs_api_method};
use proxmox::api::{ApiResponseFuture, ApiHandler, ApiMethod, Router, RpcEnvironment};
use proxmox::api::{ApiResponseFuture, ApiHandler, ApiMethod, Router, RpcEnvironment, Permission};
use proxmox::api::router::SubdirMap;
use proxmox::api::schema::*;
use crate::tools::{self, WrappedReaderStream};
use crate::tools;
use crate::server::{WorkerTask, H2Service};
use crate::backup::*;
use crate::api2::types::*;
use crate::config::acl::PRIV_DATASTORE_BACKUP;
use crate::config::cached_user_info::CachedUserInfo;
use crate::tools::fs::lock_dir_noblock;
mod environment;
use environment::*;
@ -37,6 +40,10 @@ pub const API_METHOD_UPGRADE_BACKUP: ApiMethod = ApiMethod::new(
("debug", true, &BooleanSchema::new("Enable verbose debug logging.").schema()),
]),
)
).access(
// Note: parameter 'store' is not a URI parameter, so we need to check it inside the function body
Some("The user needs Datastore.Backup privilege on /datastore/{store} and needs to own the backup group."),
&Permission::Anybody
);
fn upgrade_to_backup_protocol(
@ -47,10 +54,16 @@ fn upgrade_to_backup_protocol(
rpcenv: Box<dyn RpcEnvironment>,
) -> ApiResponseFuture {
async move {
async move {
let debug = param["debug"].as_bool().unwrap_or(false);
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let store = tools::required_string_param(&param, "store")?.to_owned();
let user_info = CachedUserInfo::new()?;
user_info.check_privs(&userid, &["datastore", &store], PRIV_DATASTORE_BACKUP, false)?;
let datastore = DataStore::lookup_datastore(&store)?;
let backup_type = tools::required_string_param(&param, "backup-type")?;
@ -73,28 +86,39 @@ fn upgrade_to_backup_protocol(
let worker_id = format!("{}_{}_{}", store, backup_type, backup_id);
let username = rpcenv.get_user().unwrap();
let env_type = rpcenv.env_type();
let backup_group = BackupGroup::new(backup_type, backup_id);
let last_backup = BackupInfo::last_backup(&datastore.base_path(), &backup_group).unwrap_or(None);
let backup_dir = BackupDir::new_with_group(backup_group, backup_time);
if let Some(last) = &last_backup {
// lock backup group to only allow one backup per group at a time
let (owner, _group_guard) = datastore.create_locked_backup_group(&backup_group, &userid)?;
// permission check
if owner != userid { // only the owner is allowed to create additional snapshots
bail!("backup owner check failed ({} != {})", userid, owner);
}
let last_backup = BackupInfo::last_backup(&datastore.base_path(), &backup_group, true).unwrap_or(None);
let backup_dir = BackupDir::new_with_group(backup_group.clone(), backup_time);
let _last_guard = if let Some(last) = &last_backup {
if backup_dir.backup_time() <= last.backup_dir.backup_time() {
bail!("backup timestamp is older than last backup.");
}
// fixme: abort if last backup is still running - howto test?
// Idea: write upid into a file inside snapshot dir. then test if
// it is still running here.
}
let (path, is_new) = datastore.create_backup_dir(&backup_dir)?;
if !is_new { bail!("backup directorty already exists."); }
// lock last snapshot to prevent forgetting/pruning it during backup
let full_path = datastore.snapshot_path(&last.backup_dir);
Some(lock_dir_noblock(&full_path, "snapshot", "base snapshot is already locked by another operation")?)
} else {
None
};
WorkerTask::spawn("backup", Some(worker_id), &username.clone(), true, move |worker| {
let (path, is_new, _snap_guard) = datastore.create_locked_backup_dir(&backup_dir)?;
if !is_new { bail!("backup directory already exists."); }
WorkerTask::spawn("backup", Some(worker_id), userid.clone(), true, move |worker| {
let mut env = BackupEnvironment::new(
env_type, username.clone(), worker.clone(), datastore, backup_dir);
env_type, userid, worker.clone(), datastore, backup_dir);
env.debug = debug;
env.last_backup = last_backup;
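The three guards (_group_guard, _snap_guard, _last_guard) are RAII file locks held until the worker task ends. A minimal sketch of a non-blocking directory flock helper in the spirit of lock_dir_noblock, assuming the nix crate (the real helper in src/tools/fs.rs may differ in signature and error text):
use std::fs::File;
use std::os::unix::io::AsRawFd;
use std::path::Path;
use anyhow::{format_err, Error};
use nix::fcntl::{flock, FlockArg};
/// The guard keeps the directory flock until it is dropped.
pub struct DirLockGuard(File);
pub fn lock_dir_noblock(path: &Path, what: &str, msg: &str) -> Result<DirLockGuard, Error> {
    // On Linux, opening a directory read-only yields an fd suitable for flock().
    let dir = File::open(path)
        .map_err(|err| format_err!("unable to open {} directory {:?} - {}", what, path, err))?;
    flock(dir.as_raw_fd(), FlockArg::LockExclusiveNonblock)
        .map_err(|err| format_err!("unable to lock {} directory {:?} - {} ({})", what, path, msg, err))?;
    Ok(DirLockGuard(dir))
}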
@ -106,13 +130,12 @@ fn upgrade_to_backup_protocol(
let abort_future = worker.abort_future();
let env2 = env.clone();
let env3 = env.clone();
let req_fut = req_body
let mut req_fut = req_body
.on_upgrade()
.map_err(Error::from)
.and_then(move |conn| {
env3.debug("protocol upgrade done");
env2.debug("protocol upgrade done");
let mut http = hyper::server::conn::Http::new();
http.http2_only(true);
@ -124,36 +147,44 @@ fn upgrade_to_backup_protocol(
http.serve_connection(conn, service)
.map_err(Error::from)
});
let abort_future = abort_future
let mut abort_future = abort_future
.map(|_| Err(format_err!("task aborted")));
use futures::future::Either;
future::select(req_fut, abort_future)
.map(|res| match res {
Either::Left((Ok(res), _)) => Ok(res),
Either::Left((Err(err), _)) => Err(err),
Either::Right((Ok(res), _)) => Ok(res),
Either::Right((Err(err), _)) => Err(err),
})
.and_then(move |_result| async move {
env.ensure_finished()?;
env.log("backup finished sucessfully");
Ok(())
})
.then(move |result| async move {
if let Err(err) = result {
match env2.ensure_finished() {
Ok(()) => {}, // ignore error after finish
_ => {
env2.log(format!("backup failed: {}", err));
env2.log("removing failed backup");
env2.remove_backup()?;
return Err(err);
}
}
}
Ok(())
})
async move {
// keep flock until task ends
let _group_guard = _group_guard;
let _snap_guard = _snap_guard;
let _last_guard = _last_guard;
let res = select!{
req = req_fut => req,
abrt = abort_future => abrt,
};
match (res, env.ensure_finished()) {
(Ok(_), Ok(())) => {
env.log("backup finished successfully");
Ok(())
},
(Err(err), Ok(())) => {
// ignore errors after finish
env.log(format!("backup had errors but finished: {}", err));
Ok(())
},
(Ok(_), Err(err)) => {
env.log(format!("backup ended and finish failed: {}", err));
env.log("removing unfinished backup");
env.remove_backup()?;
Err(err)
},
(Err(err), Err(_)) => {
env.log(format!("backup failed: {}", err));
env.log("removing failed backup");
env.remove_backup()?;
Err(err)
},
}
}
})?;
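The select! block races the HTTP/2 connection future against the worker's abort future, and the outcome is then combined with ensure_finished() into the four cases above. A standalone sketch of the race itself, assuming the futures 0.3 crate:
use futures::{select, FutureExt};
async fn race() -> Result<(), String> {
    // Both futures must be fused (and Unpin) for select!.
    let mut req = async { Ok::<(), String>(()) }.boxed().fuse();
    let mut abort = async { Err::<(), String>("task aborted".to_string()) }.boxed().fuse();
    select! {
        res = req => res,   // connection finished (or failed) first
        res = abort => res, // abort fired first
    }
}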
let response = Response::builder()
@ -180,7 +211,6 @@ pub const BACKUP_API_SUBDIRS: SubdirMap = &[
),
(
"dynamic_index", &Router::new()
.download(&API_METHOD_DYNAMIC_CHUNK_INDEX)
.post(&API_METHOD_CREATE_DYNAMIC_INDEX)
.put(&API_METHOD_DYNAMIC_APPEND)
),
@ -203,10 +233,13 @@ pub const BACKUP_API_SUBDIRS: SubdirMap = &[
),
(
"fixed_index", &Router::new()
.download(&API_METHOD_FIXED_CHUNK_INDEX)
.post(&API_METHOD_CREATE_FIXED_INDEX)
.put(&API_METHOD_FIXED_APPEND)
),
(
"previous", &Router::new()
.download(&API_METHOD_DOWNLOAD_PREVIOUS)
),
(
"speedtest", &Router::new()
.upload(&API_METHOD_UPLOAD_SPEEDTEST)
@ -265,6 +298,8 @@ pub const API_METHOD_CREATE_FIXED_INDEX: ApiMethod = ApiMethod::new(
.minimum(1)
.schema()
),
("reuse-csum", true, &StringSchema::new("If set, compare last backup's \
csum and reuse index for incremental backup if it matches.").schema()),
]),
)
);
@ -277,10 +312,9 @@ fn create_fixed_index(
let env: &BackupEnvironment = rpcenv.as_ref();
println!("PARAM: {:?}", param);
let name = tools::required_string_param(&param, "archive-name")?.to_owned();
let size = tools::required_integer_param(&param, "size")? as usize;
let reuse_csum = param["reuse-csum"].as_str();
let archive_name = name.clone();
if !archive_name.ends_with(".fidx") {
@ -288,12 +322,49 @@ fn create_fixed_index(
}
let mut path = env.backup_dir.relative_path();
path.push(archive_name);
path.push(&archive_name);
let chunk_size = 4096*1024; // todo: ??
let index = env.datastore.create_fixed_writer(&path, size, chunk_size)?;
let wid = env.register_fixed_writer(index, name, size, chunk_size as u32)?;
// do incremental backup if csum is set
let mut reader = None;
let mut incremental = false;
if let Some(csum) = reuse_csum {
incremental = true;
let last_backup = match &env.last_backup {
Some(info) => info,
None => {
bail!("cannot reuse index - no previous backup exists");
}
};
let mut last_path = last_backup.backup_dir.relative_path();
last_path.push(&archive_name);
let index = match env.datastore.open_fixed_reader(last_path) {
Ok(index) => index,
Err(_) => {
bail!("cannot reuse index - no previous backup exists for archive");
}
};
let (old_csum, _) = index.compute_csum();
let old_csum = proxmox::tools::digest_to_hex(&old_csum);
if old_csum != csum {
bail!("expected csum ({}) doesn't match last backup's ({}), cannot do incremental backup",
csum, old_csum);
}
reader = Some(index);
}
let mut writer = env.datastore.create_fixed_writer(&path, size, chunk_size)?;
if let Some(reader) = reader {
writer.clone_data_from(&reader)?;
}
let wid = env.register_fixed_writer(writer, name, size, chunk_size as u32, incremental)?;
env.log(format!("created new fixed index {} ({:?})", wid, path));
@ -359,7 +430,7 @@ fn dynamic_append (
env.dynamic_writer_append_chunk(wid, offset, size, &digest)?;
env.debug(format!("sucessfully added chunk {} to dynamic index {} (offset {}, size {})", digest_str, wid, offset, size));
env.debug(format!("successfully added chunk {} to dynamic index {} (offset {}, size {})", digest_str, wid, offset, size));
}
Ok(Value::Null)
@ -424,7 +495,7 @@ fn fixed_append (
env.fixed_writer_append_chunk(wid, offset, size, &digest)?;
env.debug(format!("sucessfully added chunk {} to fixed index {} (offset {}, size {})", digest_str, wid, offset, size));
env.debug(format!("successfully added chunk {} to fixed index {} (offset {}, size {})", digest_str, wid, offset, size));
}
Ok(Value::Null)
@ -479,7 +550,7 @@ fn close_dynamic_index (
env.dynamic_writer_close(wid, chunk_count, size, csum)?;
env.log(format!("sucessfully closed dynamic index {}", wid));
env.log(format!("successfully closed dynamic index {}", wid));
Ok(Value::Null)
}
@ -501,15 +572,15 @@ pub const API_METHOD_CLOSE_FIXED_INDEX: ApiMethod = ApiMethod::new(
(
"chunk-count",
false,
&IntegerSchema::new("Chunk count. This is used to verify that the server got all chunks.")
.minimum(1)
&IntegerSchema::new("Chunk count. This is used to verify that the server got all chunks. Ignored for incremental backups.")
.minimum(0)
.schema()
),
(
"size",
false,
&IntegerSchema::new("File size. This is used to verify that the server got all data.")
.minimum(1)
&IntegerSchema::new("File size. This is used to verify that the server got all data. Ignored for incremental backups.")
.minimum(0)
.schema()
),
("csum", false, &StringSchema::new("Digest list checksum.").schema()),
@ -533,7 +604,7 @@ fn close_fixed_index (
env.fixed_writer_close(wid, chunk_count, size, csum)?;
env.log(format!("sucessfully closed fixed index {}", wid));
env.log(format!("successfully closed fixed index {}", wid));
Ok(Value::Null)
}
@ -547,26 +618,23 @@ fn finish_backup (
let env: &BackupEnvironment = rpcenv.as_ref();
env.finish_backup()?;
env.log("sucessfully finished backup");
env.log("successfully finished backup");
Ok(Value::Null)
}
#[sortable]
pub const API_METHOD_DYNAMIC_CHUNK_INDEX: ApiMethod = ApiMethod::new(
&ApiHandler::AsyncHttp(&dynamic_chunk_index),
pub const API_METHOD_DOWNLOAD_PREVIOUS: ApiMethod = ApiMethod::new(
&ApiHandler::AsyncHttp(&download_previous),
&ObjectSchema::new(
r###"
Download the dynamic chunk index from the previous backup.
Simply returns an empty list if this is the first backup.
"### ,
"Download archive from previous backup.",
&sorted!([
("archive-name", false, &crate::api2::types::BACKUP_ARCHIVE_NAME_SCHEMA)
]),
)
);
fn dynamic_chunk_index(
fn download_previous(
_parts: Parts,
_req_body: Body,
param: Value,
@ -579,130 +647,38 @@ fn dynamic_chunk_index(
let archive_name = tools::required_string_param(&param, "archive-name")?.to_owned();
if !archive_name.ends_with(".didx") {
bail!("wrong archive extension: '{}'", archive_name);
}
let empty_response = {
Response::builder()
.status(StatusCode::OK)
.body(Body::empty())?
};
let last_backup = match &env.last_backup {
Some(info) => info,
None => return Ok(empty_response),
None => bail!("no previous backup"),
};
let mut path = last_backup.backup_dir.relative_path();
let mut path = env.datastore.snapshot_path(&last_backup.backup_dir);
path.push(&archive_name);
let index = match env.datastore.open_dynamic_reader(path) {
Ok(index) => index,
Err(_) => {
env.log(format!("there is no last backup for archive '{}'", archive_name));
return Ok(empty_response);
{
let index: Option<Box<dyn IndexFile>> = match archive_type(&archive_name)? {
ArchiveType::FixedIndex => {
let index = env.datastore.open_fixed_reader(&path)?;
Some(Box::new(index))
}
ArchiveType::DynamicIndex => {
let index = env.datastore.open_dynamic_reader(&path)?;
Some(Box::new(index))
}
_ => { None }
};
if let Some(index) = index {
env.log(format!("register chunks in '{}' from previous backup.", archive_name));
for pos in 0..index.index_count() {
let info = index.chunk_info(pos).unwrap();
let size = info.range.end - info.range.start;
env.register_chunk(info.digest, size as u32)?;
}
}
};
env.log(format!("download last backup index for archive '{}'", archive_name));
let count = index.index_count();
for pos in 0..count {
let (start, end, digest) = index.chunk_info(pos)?;
let size = (end - start) as u32;
env.register_chunk(digest, size)?;
}
let reader = DigestListEncoder::new(Box::new(index));
let stream = WrappedReaderStream::new(reader);
// fixme: set size, content type?
let response = http::Response::builder()
.status(200)
.body(Body::wrap_stream(stream))?;
Ok(response)
}.boxed()
}
#[sortable]
pub const API_METHOD_FIXED_CHUNK_INDEX: ApiMethod = ApiMethod::new(
&ApiHandler::AsyncHttp(&fixed_chunk_index),
&ObjectSchema::new(
r###"
Download the fixed chunk index from the previous backup.
Simply returns an empty list if this is the first backup.
"### ,
&sorted!([
("archive-name", false, &crate::api2::types::BACKUP_ARCHIVE_NAME_SCHEMA)
]),
)
);
fn fixed_chunk_index(
_parts: Parts,
_req_body: Body,
param: Value,
_info: &ApiMethod,
rpcenv: Box<dyn RpcEnvironment>,
) -> ApiResponseFuture {
async move {
let env: &BackupEnvironment = rpcenv.as_ref();
let archive_name = tools::required_string_param(&param, "archive-name")?.to_owned();
if !archive_name.ends_with(".fidx") {
bail!("wrong archive extension: '{}'", archive_name);
}
let empty_response = {
Response::builder()
.status(StatusCode::OK)
.body(Body::empty())?
};
let last_backup = match &env.last_backup {
Some(info) => info,
None => return Ok(empty_response),
};
let mut path = last_backup.backup_dir.relative_path();
path.push(&archive_name);
let index = match env.datastore.open_fixed_reader(path) {
Ok(index) => index,
Err(_) => {
env.log(format!("there is no last backup for archive '{}'", archive_name));
return Ok(empty_response);
}
};
env.log(format!("download last backup index for archive '{}'", archive_name));
let count = index.index_count();
let image_size = index.index_bytes();
for pos in 0..count {
let digest = index.index_digest(pos).unwrap();
// Note: last chunk can be smaller
let start = (pos*index.chunk_size) as u64;
let mut end = start + index.chunk_size as u64;
if end > image_size { end = image_size; }
let size = (end - start) as u32;
env.register_chunk(*digest, size)?;
}
let reader = DigestListEncoder::new(Box::new(index));
let stream = WrappedReaderStream::new(reader);
// fixme: set size, content type?
let response = http::Response::builder()
.status(200)
.body(Body::wrap_stream(stream))?;
Ok(response)
env.log(format!("download '{}' from previous backup.", archive_name));
crate::api2::helpers::create_download_response(path).await
}.boxed()
}


@ -1,18 +1,21 @@
use failure::*;
use anyhow::{bail, format_err, Error};
use std::sync::{Arc, Mutex};
use std::collections::HashMap;
use serde_json::Value;
use ::serde::{Serialize};
use serde_json::{json, Value};
use proxmox::tools::digest_to_hex;
use proxmox::tools::fs::{replace_file, CreateOptions};
use proxmox::api::{RpcEnvironment, RpcEnvironmentType};
use crate::server::WorkerTask;
use crate::api2::types::Userid;
use crate::backup::*;
use crate::server::WorkerTask;
use crate::server::formatter::*;
use hyper::{Body, Response};
#[derive(Copy, Clone, Serialize)]
struct UploadStatistic {
count: u64,
size: u64,
@ -31,6 +34,19 @@ impl UploadStatistic {
}
}
impl std::ops::Add for UploadStatistic {
type Output = Self;
fn add(self, other: Self) -> Self {
Self {
count: self.count + other.count,
size: self.size + other.size,
compressed_size: self.compressed_size + other.compressed_size,
duplicates: self.duplicates + other.duplicates,
}
}
}
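With Add implemented, per-writer statistics fold into the running backup total (as the close paths below do via state.backup_stat = state.backup_stat + data.upload_stat); a usage sketch:
// Sketch: aggregate per-file statistics into one total (UploadStatistic is Copy).
fn total(stats: &[UploadStatistic]) -> UploadStatistic {
    stats.iter().copied().fold(UploadStatistic::new(), |acc, s| acc + s)
}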
struct DynamicWriterState {
name: String,
index: DynamicIndexWriter,
@ -47,15 +63,18 @@ struct FixedWriterState {
chunk_count: u64,
small_chunk_count: usize, // allow 0..1 small chunks (last chunk may be smaller)
upload_stat: UploadStatistic,
incremental: bool,
}
struct SharedBackupState {
finished: bool,
uid_counter: usize,
file_counter: usize, // sucessfully uploaded files
file_counter: usize, // successfully uploaded files
dynamic_writers: HashMap<usize, DynamicWriterState>,
fixed_writers: HashMap<usize, FixedWriterState>,
known_chunks: HashMap<[u8;32], u32>,
backup_size: u64, // sums up size of all files
backup_stat: UploadStatistic,
}
impl SharedBackupState {
@ -80,8 +99,8 @@ impl SharedBackupState {
#[derive(Clone)]
pub struct BackupEnvironment {
env_type: RpcEnvironmentType,
result_attributes: HashMap<String, Value>,
user: String,
result_attributes: Value,
user: Userid,
pub debug: bool,
pub formatter: &'static OutputFormatter,
pub worker: Arc<WorkerTask>,
@ -94,7 +113,7 @@ pub struct BackupEnvironment {
impl BackupEnvironment {
pub fn new(
env_type: RpcEnvironmentType,
user: String,
user: Userid,
worker: Arc<WorkerTask>,
datastore: Arc<DataStore>,
backup_dir: BackupDir,
@ -107,10 +126,12 @@ impl BackupEnvironment {
dynamic_writers: HashMap::new(),
fixed_writers: HashMap::new(),
known_chunks: HashMap::new(),
backup_size: 0,
backup_stat: UploadStatistic::new(),
};
Self {
result_attributes: HashMap::new(),
result_attributes: json!({}),
env_type,
user,
worker,
@ -237,7 +258,7 @@ impl BackupEnvironment {
}
/// Store the writer with an unique ID
pub fn register_fixed_writer(&self, index: FixedIndexWriter, name: String, size: usize, chunk_size: u32) -> Result<usize, Error> {
pub fn register_fixed_writer(&self, index: FixedIndexWriter, name: String, size: usize, chunk_size: u32, incremental: bool) -> Result<usize, Error> {
let mut state = self.state.lock().unwrap();
state.ensure_unfinished()?;
@ -245,7 +266,7 @@ impl BackupEnvironment {
let uid = state.next_uid();
state.fixed_writers.insert(uid, FixedWriterState {
index, name, chunk_count: 0, size, chunk_size, small_chunk_count: 0, upload_stat: UploadStatistic::new(),
index, name, chunk_count: 0, size, chunk_size, small_chunk_count: 0, upload_stat: UploadStatistic::new(), incremental,
});
Ok(uid)
@ -310,7 +331,13 @@ impl BackupEnvironment {
self.log(format!("Upload size: {} ({}%)", upload_stat.size, (upload_stat.size*100)/size));
let client_side_duplicates = chunk_count - upload_stat.count;
// account for zero chunk, which might be uploaded but never used
let client_side_duplicates = if chunk_count < upload_stat.count {
0
} else {
chunk_count - upload_stat.count
};
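// (equivalent to chunk_count.saturating_sub(upload_stat.count))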
let server_side_duplicates = upload_stat.duplicates;
if (client_side_duplicates + server_side_duplicates) > 0 {
@ -346,7 +373,6 @@ impl BackupEnvironment {
let expected_csum = data.index.close()?;
println!("server checksum {:?} client: {:?}", expected_csum, csum);
if csum != expected_csum {
bail!("dynamic writer '{}' close failed - got unexpected checksum", data.name);
}
@ -354,6 +380,8 @@ impl BackupEnvironment {
self.log_upload_stat(&data.name, &csum, &uuid, size, chunk_count, &data.upload_stat);
state.file_counter += 1;
state.backup_size += size;
state.backup_stat = state.backup_stat + data.upload_stat;
Ok(())
}
@ -373,21 +401,21 @@ impl BackupEnvironment {
bail!("fixed writer '{}' close failed - received wrong number of chunk ({} != {})", data.name, data.chunk_count, chunk_count);
}
let expected_count = data.index.index_length();
if !data.incremental {
let expected_count = data.index.index_length();
if chunk_count != (expected_count as u64) {
bail!("fixed writer '{}' close failed - unexpected chunk count ({} != {})", data.name, expected_count, chunk_count);
}
if chunk_count != (expected_count as u64) {
bail!("fixed writer '{}' close failed - unexpected chunk count ({} != {})", data.name, expected_count, chunk_count);
}
if size != (data.size as u64) {
bail!("fixed writer '{}' close failed - unexpected file size ({} != {})", data.name, data.size, size);
if size != (data.size as u64) {
bail!("fixed writer '{}' close failed - unexpected file size ({} != {})", data.name, data.size, size);
}
}
let uuid = data.index.uuid;
let expected_csum = data.index.close()?;
println!("server checksum {:?} client: {:?}", expected_csum, csum);
if csum != expected_csum {
bail!("fixed writer '{}' close failed - got unexpected checksum", data.name);
}
@ -395,6 +423,8 @@ impl BackupEnvironment {
self.log_upload_stat(&data.name, &expected_csum, &uuid, size, chunk_count, &data.upload_stat);
state.file_counter += 1;
state.backup_size += size;
state.backup_stat = state.backup_stat + data.upload_stat;
Ok(())
}
@ -408,9 +438,8 @@ impl BackupEnvironment {
let blob_len = data.len();
let orig_len = data.len(); // fixme:
let blob = DataBlob::from_raw(data)?;
// always verify CRC at server side
blob.verify_crc()?;
// always verify blob/CRC at server side
let blob = DataBlob::load_from_reader(&mut &data[..])?;
let raw_data = blob.raw_data();
replace_file(&path, raw_data, CreateOptions::new())?;
@ -419,6 +448,8 @@ impl BackupEnvironment {
let mut state = self.state.lock().unwrap();
state.file_counter += 1;
state.backup_size += orig_len as u64;
state.backup_stat.size += blob_len as u64;
Ok(())
}
@ -430,8 +461,6 @@ impl BackupEnvironment {
state.ensure_unfinished()?;
state.finished = true;
if state.dynamic_writers.len() != 0 {
bail!("found open index writer - unable to finish backup");
}
@ -440,6 +469,30 @@ impl BackupEnvironment {
bail!("backup does not contain valid files (file count == 0)");
}
// check manifest
let mut manifest = self.datastore.load_manifest_json(&self.backup_dir)
.map_err(|err| format_err!("unable to load manifest blob - {}", err))?;
let stats = serde_json::to_value(state.backup_stat)?;
manifest["unprotected"]["chunk_upload_stats"] = stats;
self.datastore.store_manifest(&self.backup_dir, manifest)
.map_err(|err| format_err!("unable to store manifest blob - {}", err))?;
if let Some(base) = &self.last_backup {
let path = self.datastore.snapshot_path(&base.backup_dir);
if !path.exists() {
bail!(
"base snapshot {} was removed during backup, cannot finish as chunks might be missing",
base.backup_dir
);
}
}
// marks the backup as successful
state.finished = true;
Ok(())
}
@ -472,7 +525,7 @@ impl BackupEnvironment {
let mut state = self.state.lock().unwrap();
state.finished = true;
self.datastore.remove_backup_dir(&self.backup_dir)?;
self.datastore.remove_backup_dir(&self.backup_dir, true)?;
Ok(())
}
@ -480,12 +533,12 @@ impl BackupEnvironment {
impl RpcEnvironment for BackupEnvironment {
fn set_result_attrib(&mut self, name: &str, value: Value) {
self.result_attributes.insert(name.into(), value);
fn result_attrib_mut(&mut self) -> &mut Value {
&mut self.result_attributes
}
fn get_result_attrib(&self, name: &str) -> Option<&Value> {
self.result_attributes.get(name)
fn result_attrib(&self) -> &Value {
&self.result_attributes
}
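Returning a &mut Value lets handlers set result attributes through plain JSON indexing instead of a dedicated setter; a sketch of the call site (assuming serde_json's IndexMut on the returned Value):
// Sketch: with result_attrib_mut, this one-liner replaces set_result_attrib.
rpcenv.result_attrib_mut()["digest"] = proxmox::tools::digest_to_hex(&digest).into();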
fn env_type(&self) -> RpcEnvironmentType {
@ -497,7 +550,7 @@ impl RpcEnvironment for BackupEnvironment {
}
fn get_user(&self) -> Option<String> {
Some(self.user.clone())
Some(self.user.to_string())
}
}


@ -2,7 +2,7 @@ use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll};
use failure::*;
use anyhow::{bail, format_err, Error};
use futures::*;
use hyper::Body;
use hyper::http::request::Parts;
@ -243,7 +243,7 @@ pub const API_METHOD_UPLOAD_BLOB: ApiMethod = ApiMethod::new(
&sorted!([
("file-name", false, &crate::api2::types::BACKUP_ARCHIVE_NAME_SCHEMA),
("encoded-size", false, &IntegerSchema::new("Encoded blob size.")
.minimum((std::mem::size_of::<DataBlobHeader>() as isize) +1)
.minimum(std::mem::size_of::<DataBlobHeader>() as isize)
.maximum(1024*1024*16+(std::mem::size_of::<EncryptedDataBlobHeader>() as isize))
.schema()
)


@ -3,10 +3,12 @@ use proxmox::list_subdirs_api_method;
pub mod datastore;
pub mod remote;
pub mod sync;
const SUBDIRS: SubdirMap = &[
("datastore", &datastore::ROUTER),
("remote", &remote::ROUTER),
("sync", &sync::ROUTER),
];
pub const ROUTER: Router = Router::new()


@ -1,13 +1,16 @@
use std::path::PathBuf;
use failure::*;
use anyhow::{bail, Error};
use serde_json::Value;
use ::serde::{Deserialize, Serialize};
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment};
use proxmox::api::{api, Router, RpcEnvironment, Permission};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::backup::*;
use crate::config::datastore;
use crate::config::datastore::{self, DataStoreConfig, DIR_NAME_SCHEMA};
use crate::config::acl::{PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_MODIFY};
#[api(
input: {
@ -16,23 +19,32 @@ use crate::config::datastore;
returns: {
description: "List the configured datastores (with config digest).",
type: Array,
items: {
type: datastore::DataStoreConfig,
},
items: { type: datastore::DataStoreConfig },
},
access: {
permission: &Permission::Privilege(&["datastore"], PRIV_DATASTORE_AUDIT, false),
},
)]
/// List all datastores
pub fn list_datastores(
_param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<DataStoreConfig>, Error> {
let (config, digest) = datastore::config()?;
Ok(config.convert_to_array("name", Some(&digest), &[]))
let list = config.convert_to_typed_array("datastore")?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
// fixme: impl. const fn get_object_schema(datastore::DataStoreConfig::API_SCHEMA),
// but this need support for match inside const fn
// see: https://github.com/rust-lang/rust/issues/49146
#[api(
protected: true,
input: {
@ -40,35 +52,70 @@ pub fn list_datastores(
name: {
schema: DATASTORE_SCHEMA,
},
path: {
schema: DIR_NAME_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
path: {
schema: datastore::DIR_NAME_SCHEMA,
"gc-schedule": {
optional: true,
schema: GC_SCHEDULE_SCHEMA,
},
"prune-schedule": {
optional: true,
schema: PRUNE_SCHEDULE_SCHEMA,
},
"keep-last": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_LAST,
},
"keep-hourly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_HOURLY,
},
"keep-daily": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_DAILY,
},
"keep-weekly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_WEEKLY,
},
"keep-monthly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_MONTHLY,
},
"keep-yearly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_YEARLY,
},
},
},
access: {
permission: &Permission::Privilege(&["datastore"], PRIV_DATASTORE_MODIFY, false),
},
)]
/// Create new datastore config.
pub fn create_datastore(name: String, param: Value) -> Result<(), Error> {
pub fn create_datastore(param: Value) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let datastore: datastore::DataStoreConfig = serde_json::from_value(param.clone())?;
let (mut config, _digest) = datastore::config()?;
if let Some(_) = config.sections.get(&name) {
bail!("datastore '{}' already exists.", name);
if let Some(_) = config.sections.get(&datastore.name) {
bail!("datastore '{}' already exists.", datastore.name);
}
let path: PathBuf = datastore.path.clone().into();
let backup_user = crate::backup::backup_user()?;
let _store = ChunkStore::create(&name, path, backup_user.uid, backup_user.gid)?;
let _store = ChunkStore::create(&datastore.name, path, backup_user.uid, backup_user.gid)?;
config.set_data(&name, "datastore", &datastore)?;
config.set_data(&datastore.name, "datastore", &datastore)?;
datastore::save_config(&config)?;
@ -87,14 +134,47 @@ pub fn create_datastore(name: String, param: Value) -> Result<(), Error> {
description: "The datastore configuration (with config digest).",
type: datastore::DataStoreConfig,
},
access: {
permission: &Permission::Privilege(&["datastore", "{name}"], PRIV_DATASTORE_AUDIT, false),
},
)]
/// Read a datastore configuration.
pub fn read_datastore(name: String) -> Result<Value, Error> {
pub fn read_datastore(
name: String,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<DataStoreConfig, Error> {
let (config, digest) = datastore::config()?;
let mut data = config.lookup_json("datastore", &name)?;
data.as_object_mut().unwrap()
.insert("digest".into(), proxmox::tools::digest_to_hex(&digest).into());
Ok(data)
let store_config = config.lookup("datastore", &name)?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(store_config)
}
#[api()]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
#[allow(non_camel_case_types)]
/// Deletable property name
pub enum DeletableProperty {
/// Delete the comment property.
comment,
/// Delete the garbage collection schedule.
gc_schedule,
/// Delete the prune job schedule.
prune_schedule,
/// Delete the keep-last property
keep_last,
/// Delete the keep-hourly property
keep_hourly,
/// Delete the keep-daily property
keep_daily,
/// Delete the keep-weekly property
keep_weekly,
/// Delete the keep-monthly property
keep_monthly,
/// Delete the keep-yearly property
keep_yearly,
}
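Because of the kebab-case rename, the variants travel over the API as "gc-schedule", "keep-last", and so on; a hypothetical update call clearing two properties might look like:
// Sketch: parameters for an update_datastore call that deletes two properties.
let args = serde_json::json!({
    "name": "store2",
    "delete": ["gc-schedule", "keep-last"],
});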
#[api(
@ -108,21 +188,73 @@ pub fn read_datastore(name: String) -> Result<Value, Error> {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
"gc-schedule": {
optional: true,
schema: GC_SCHEDULE_SCHEMA,
},
"prune-schedule": {
optional: true,
schema: PRUNE_SCHEDULE_SCHEMA,
},
"keep-last": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_LAST,
},
"keep-hourly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_HOURLY,
},
"keep-daily": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_DAILY,
},
"keep-weekly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_WEEKLY,
},
"keep-monthly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_MONTHLY,
},
"keep-yearly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_YEARLY,
},
delete: {
description: "List of properties to delete.",
type: Array,
optional: true,
items: {
type: DeletableProperty,
}
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["datastore", "{name}"], PRIV_DATASTORE_MODIFY, false),
},
)]
/// Create new datastore config.
/// Update datastore config.
pub fn update_datastore(
name: String,
comment: Option<String>,
gc_schedule: Option<String>,
prune_schedule: Option<String>,
keep_last: Option<u64>,
keep_hourly: Option<u64>,
keep_daily: Option<u64>,
keep_weekly: Option<u64>,
keep_monthly: Option<u64>,
keep_yearly: Option<u64>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
// pass/compare digest
let (mut config, expected_digest) = datastore::config()?;
@ -134,6 +266,22 @@ pub fn update_datastore(
let mut data: datastore::DataStoreConfig = config.lookup("datastore", &name)?;
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::comment => { data.comment = None; },
DeletableProperty::gc_schedule => { data.gc_schedule = None; },
DeletableProperty::prune_schedule => { data.prune_schedule = None; },
DeletableProperty::keep_last => { data.keep_last = None; },
DeletableProperty::keep_hourly => { data.keep_hourly = None; },
DeletableProperty::keep_daily => { data.keep_daily = None; },
DeletableProperty::keep_weekly => { data.keep_weekly = None; },
DeletableProperty::keep_monthly => { data.keep_monthly = None; },
DeletableProperty::keep_yearly => { data.keep_yearly = None; },
}
}
}
if let Some(comment) = comment {
let comment = comment.trim().to_string();
if comment.is_empty() {
@ -143,6 +291,16 @@ pub fn update_datastore(
}
}
if gc_schedule.is_some() { data.gc_schedule = gc_schedule; }
if prune_schedule.is_some() { data.prune_schedule = prune_schedule; }
if keep_last.is_some() { data.keep_last = keep_last; }
if keep_hourly.is_some() { data.keep_hourly = keep_hourly; }
if keep_daily.is_some() { data.keep_daily = keep_daily; }
if keep_weekly.is_some() { data.keep_weekly = keep_weekly; }
if keep_monthly.is_some() { data.keep_monthly = keep_monthly; }
if keep_yearly.is_some() { data.keep_yearly = keep_yearly; }
config.set_data(&name, "datastore", &data)?;
datastore::save_config(&config)?;
@ -157,16 +315,27 @@ pub fn update_datastore(
name: {
schema: DATASTORE_SCHEMA,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["datastore", "{name}"], PRIV_DATASTORE_MODIFY, false),
},
)]
/// Remove a datastore configuration.
pub fn delete_datastore(name: String) -> Result<(), Error> {
pub fn delete_datastore(name: String, digest: Option<String>) -> Result<(), Error> {
// fixme: locking ?
// fixme: check digest ?
let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, _digest) = datastore::config()?;
let (mut config, expected_digest) = datastore::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
match config.sections.get(&name) {
Some(_) => { config.sections.remove(&name); },

src/api2/config/remote.rs

@@ -1,10 +1,14 @@
use failure::*;
use anyhow::{bail, Error};
use serde_json::Value;
use ::serde::{Deserialize, Serialize};
use base64;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment};
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::config::remote;
use crate::config::acl::{PRIV_REMOTE_AUDIT, PRIV_REMOTE_MODIFY};
#[api(
input: {
@@ -14,42 +18,32 @@ use crate::config::remote;
description: "The list of configured remotes (with config digest).",
type: Array,
items: {
type: Object,
type: remote::Remote,
description: "Remote configuration (without password).",
properties: {
name: {
schema: REMOTE_ID_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
host: {
schema: DNS_NAME_OR_IP_SCHEMA,
},
userid: {
schema: PROXMOX_USER_ID_SCHEMA,
},
fingerprint: {
optional: true,
schema: CERT_FINGERPRINT_SHA256_SCHEMA,
},
},
},
},
access: {
permission: &Permission::Privilege(&["remote"], PRIV_REMOTE_AUDIT, false),
},
)]
/// List all remotes
pub fn list_remotes(
_param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<remote::Remote>, Error> {
let (config, digest) = remote::config()?;
let value = config.convert_to_array("name", Some(&digest), &["password"]);
Ok(value.into())
let mut list: Vec<remote::Remote> = config.convert_to_typed_array("remote")?;
// don't return password in api
for remote in &mut list {
remote.password = "".to_string();
}
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
#[api(
@@ -67,7 +61,7 @@ pub fn list_remotes(
schema: DNS_NAME_OR_IP_SCHEMA,
},
userid: {
schema: PROXMOX_USER_ID_SCHEMA,
type: Userid,
},
password: {
schema: remote::REMOTE_PASSWORD_SCHEMA,
@@ -78,21 +72,26 @@ pub fn list_remotes(
},
},
},
access: {
permission: &Permission::Privilege(&["remote"], PRIV_REMOTE_MODIFY, false),
},
)]
/// Create new remote.
pub fn create_remote(name: String, param: Value) -> Result<(), Error> {
pub fn create_remote(password: String, param: Value) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let remote: remote::Remote = serde_json::from_value(param.clone())?;
let mut data = param.clone();
data["password"] = Value::from(base64::encode(password.as_bytes()));
let remote: remote::Remote = serde_json::from_value(data)?;
let (mut config, _digest) = remote::config()?;
if let Some(_) = config.sections.get(&name) {
bail!("remote '{}' already exists.", name);
if let Some(_) = config.sections.get(&remote.name) {
bail!("remote '{}' already exists.", remote.name);
}
config.set_data(&name, "remote", &remote)?;
config.set_data(&remote.name, "remote", &remote)?;
remote::save_config(&config)?;
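Note the password handling above: the dedicated password parameter is base64-encoded before being merged into the section config, so remote.cfg never stores the raw string. A standalone sketch of that step (using the base64 crate imported above):
// Encode the plain-text password parameter before it enters the config data.
fn encode_password(password: &str) -> String {
    base64::encode(password.as_bytes()) // e.g. "secret" -> "c2VjcmV0"
}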
@@ -111,16 +110,34 @@ pub fn create_remote(name: String, param: Value) -> Result<(), Error> {
description: "The remote configuration (with config digest).",
type: remote::Remote,
},
access: {
permission: &Permission::Privilege(&["remote", "{name}"], PRIV_REMOTE_AUDIT, false),
}
)]
/// Read remote configuration data.
pub fn read_remote(name: String) -> Result<Value, Error> {
pub fn read_remote(
name: String,
_info: &ApiMethod,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<remote::Remote, Error> {
let (config, digest) = remote::config()?;
let mut data = config.lookup_json("remote", &name)?;
data.as_object_mut().unwrap()
.insert("digest".into(), proxmox::tools::digest_to_hex(&digest).into());
let mut data: remote::Remote = config.lookup("remote", &name)?;
data.password = "".to_string(); // do not return password in api
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(data)
}
#[api()]
#[derive(Serialize, Deserialize)]
#[allow(non_camel_case_types)]
/// Deletable property name
pub enum DeletableProperty {
/// Delete the comment property.
comment,
/// Delete the fingerprint property.
fingerprint,
}
#[api(
protected: true,
input: {
@@ -138,7 +155,7 @@ pub fn read_remote(name: String) -> Result<Value, Error> {
},
userid: {
optional: true,
schema: PROXMOX_USER_ID_SCHEMA,
type: Userid,
},
password: {
optional: true,
@@ -148,25 +165,37 @@ pub fn read_remote(name: String) -> Result<Value, Error> {
optional: true,
schema: CERT_FINGERPRINT_SHA256_SCHEMA,
},
delete: {
description: "List of properties to delete.",
type: Array,
optional: true,
items: {
type: DeletableProperty,
}
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["remote", "{name}"], PRIV_REMOTE_MODIFY, false),
},
)]
/// Update remote configuration.
pub fn update_remote(
name: String,
comment: Option<String>,
host: Option<String>,
userid: Option<String>,
userid: Option<Userid>,
password: Option<String>,
fingerprint: Option<String>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, expected_digest) = remote::config()?;
@@ -177,6 +206,15 @@ pub fn update_remote(
let mut data: remote::Remote = config.lookup("remote", &name)?;
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::comment => { data.comment = None; },
DeletableProperty::fingerprint => { data.fingerprint = None; },
}
}
}
if let Some(comment) = comment {
let comment = comment.trim().to_string();
if comment.is_empty() {
@@ -189,7 +227,6 @@ pub fn update_remote(
if let Some(userid) = userid { data.userid = userid; }
if let Some(password) = password { data.password = password; }
// fixme: how to delete a fingerprint?
if let Some(fingerprint) = fingerprint { data.fingerprint = Some(fingerprint); }
config.set_data(&name, "remote", &data)?;
@@ -206,22 +243,35 @@ pub fn update_remote(
name: {
schema: REMOTE_ID_SCHEMA,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["remote", "{name}"], PRIV_REMOTE_MODIFY, false),
},
)]
/// Remove a remote from the configuration file.
pub fn delete_remote(name: String) -> Result<(), Error> {
pub fn delete_remote(name: String, digest: Option<String>) -> Result<(), Error> {
// fixme: locking ?
// fixme: check digest ?
let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, _digest) = remote::config()?;
let (mut config, expected_digest) = remote::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
match config.sections.get(&name) {
Some(_) => { config.sections.remove(&name); },
None => bail!("remote '{}' does not exist.", name),
}
remote::save_config(&config)?;
Ok(())
}

278 src/api2/config/sync.rs Normal file

@@ -0,0 +1,278 @@
use anyhow::{bail, Error};
use serde_json::Value;
use ::serde::{Deserialize, Serialize};
use proxmox::api::{api, Router, RpcEnvironment};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::config::sync::{self, SyncJobConfig};
// fixme: add access permissions
#[api(
input: {
properties: {},
},
returns: {
description: "List configured jobs.",
type: Array,
items: { type: sync::SyncJobConfig },
},
)]
/// List all sync jobs
pub fn list_sync_jobs(
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<SyncJobConfig>, Error> {
let (config, digest) = sync::config()?;
let list = config.convert_to_typed_array("sync")?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
#[api(
protected: true,
input: {
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
store: {
schema: DATASTORE_SCHEMA,
},
remote: {
schema: REMOTE_ID_SCHEMA,
},
"remote-store": {
schema: DATASTORE_SCHEMA,
},
"remove-vanished": {
schema: REMOVE_VANISHED_BACKUPS_SCHEMA,
optional: true,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
schedule: {
optional: true,
schema: SYNC_SCHEDULE_SCHEMA,
},
},
},
)]
/// Create a new sync job.
pub fn create_sync_job(param: Value) -> Result<(), Error> {
let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let sync_job: sync::SyncJobConfig = serde_json::from_value(param.clone())?;
let (mut config, _digest) = sync::config()?;
if let Some(_) = config.sections.get(&sync_job.id) {
bail!("job '{}' already exists.", sync_job.id);
}
config.set_data(&sync_job.id, "sync", &sync_job)?;
sync::save_config(&config)?;
Ok(())
}
#[api(
input: {
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
},
},
returns: {
description: "The sync job configuration.",
type: sync::SyncJobConfig,
},
)]
/// Read a sync job configuration.
pub fn read_sync_job(
id: String,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<SyncJobConfig, Error> {
let (config, digest) = sync::config()?;
let sync_job = config.lookup("sync", &id)?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(sync_job)
}
#[api()]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
#[allow(non_camel_case_types)]
/// Deletable property name
pub enum DeletableProperty {
/// Delete the comment property.
comment,
/// Delete the job schedule.
schedule,
/// Delete the remove-vanished flag.
remove_vanished,
}
#[api(
protected: true,
input: {
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
store: {
schema: DATASTORE_SCHEMA,
optional: true,
},
remote: {
schema: REMOTE_ID_SCHEMA,
optional: true,
},
"remote-store": {
schema: DATASTORE_SCHEMA,
optional: true,
},
"remove-vanished": {
schema: REMOVE_VANISHED_BACKUPS_SCHEMA,
optional: true,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
schedule: {
optional: true,
schema: SYNC_SCHEDULE_SCHEMA,
},
delete: {
description: "List of properties to delete.",
type: Array,
optional: true,
items: {
type: DeletableProperty,
}
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
)]
/// Update sync job config.
pub fn update_sync_job(
id: String,
store: Option<String>,
remote: Option<String>,
remote_store: Option<String>,
remove_vanished: Option<bool>,
comment: Option<String>,
schedule: Option<String>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
) -> Result<(), Error> {
let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
// pass/compare digest
let (mut config, expected_digest) = sync::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let mut data: sync::SyncJobConfig = config.lookup("sync", &id)?;
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::comment => { data.comment = None; },
DeletableProperty::schedule => { data.schedule = None; },
DeletableProperty::remove_vanished => { data.remove_vanished = None; },
}
}
}
if let Some(comment) = comment {
let comment = comment.trim().to_string();
if comment.is_empty() {
data.comment = None;
} else {
data.comment = Some(comment);
}
}
if let Some(store) = store { data.store = store; }
if let Some(remote) = remote { data.remote = remote; }
if let Some(remote_store) = remote_store { data.remote_store = remote_store; }
if schedule.is_some() { data.schedule = schedule; }
if remove_vanished.is_some() { data.remove_vanished = remove_vanished; }
config.set_data(&id, "sync", &data)?;
sync::save_config(&config)?;
Ok(())
}
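The delete list lets one PUT clear some properties while setting others, all under the config flock. A hypothetical request body for this endpoint (property names follow the kebab-case serde renaming above):
use serde_json::json;

// Hypothetical update: set a new schedule, drop the comment and the
// remove-vanished flag, guarded by the digest from a prior GET.
fn example_update_body() -> serde_json::Value {
    json!({
        "schedule": "daily",
        "delete": ["comment", "remove-vanished"],
        "digest": "<digest from GET>",
    })
}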
#[api(
protected: true,
input: {
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
)]
/// Remove a sync job configuration
pub fn delete_sync_job(id: String, digest: Option<String>) -> Result<(), Error> {
let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, expected_digest) = sync::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
match config.sections.get(&id) {
Some(_) => { config.sections.remove(&id); },
None => bail!("job '{}' does not exist.", id),
}
sync::save_config(&config)?;
Ok(())
}
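The digest parameter used by these update/delete handlers is an optimistic concurrency check: the client echoes the SHA-256 digest it received when reading the config, and the write is refused if the file changed in between. A minimal standalone sketch of the comparison (illustrative names, not the crate's API):
use anyhow::{bail, Error};

// Sketch of detect_modified_configuration_file: compare the digest the client
// saw against the current on-disk digest and bail so the client can re-read.
fn check_digest(client_digest: &[u8; 32], current_digest: &[u8; 32]) -> Result<(), Error> {
    if client_digest != current_digest {
        bail!("detected modified configuration - file changed by other user? Try again.");
    }
    Ok(())
}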
const ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_READ_SYNC_JOB)
.put(&API_METHOD_UPDATE_SYNC_JOB)
.delete(&API_METHOD_DELETE_SYNC_JOB);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_SYNC_JOBS)
.post(&API_METHOD_CREATE_SYNC_JOB)
.match_all("id", &ITEM_ROUTER);

29 src/api2/helpers.rs Normal file

@@ -0,0 +1,29 @@
use std::path::PathBuf;
use anyhow::Error;
use futures::stream::TryStreamExt;
use hyper::{Body, Response, StatusCode, header};
use proxmox::http_bail;
pub async fn create_download_response(path: PathBuf) -> Result<Response<Body>, Error> {
let file = match tokio::fs::File::open(path.clone()).await {
Ok(file) => file,
Err(ref err) if err.kind() == std::io::ErrorKind::NotFound => {
http_bail!(NOT_FOUND, "open file {:?} failed - not found", path);
}
Err(err) => http_bail!(BAD_REQUEST, "open file {:?} failed: {}", path, err),
};
let payload = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new())
.map_ok(|bytes| hyper::body::Bytes::from(bytes.freeze()));
let body = Body::wrap_stream(payload);
// fixme: set other headers ?
Ok(Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, "application/octet-stream")
.body(body)
.unwrap())
}
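A hypothetical caller of this helper, mapping a file on disk to a streamed download (path illustrative):
use std::path::PathBuf;
use anyhow::Error;
use hyper::{Body, Response};

// Stream an arbitrary file as an application/octet-stream response.
async fn download_example() -> Result<Response<Body>, Error> {
    let path = PathBuf::from("/tmp/example.blob"); // illustrative path
    create_download_response(path).await
}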

src/api2/node.rs

@@ -1,24 +1,325 @@
use proxmox::api::router::{Router, SubdirMap};
use proxmox::list_subdirs_api_method;
use std::net::TcpListener;
use std::os::unix::io::AsRawFd;
pub mod tasks;
mod time;
mod network;
use anyhow::{bail, format_err, Error};
use futures::future::{FutureExt, TryFutureExt};
use hyper::body::Body;
use hyper::http::request::Parts;
use hyper::upgrade::Upgraded;
use nix::fcntl::{fcntl, FcntlArg, FdFlag};
use serde_json::{json, Value};
use tokio::io::{AsyncBufReadExt, BufReader};
use proxmox::api::router::{Router, SubdirMap};
use proxmox::api::{
api, schema::*, ApiHandler, ApiMethod, ApiResponseFuture, Permission, RpcEnvironment,
};
use proxmox::list_subdirs_api_method;
use proxmox::tools::websocket::WebSocket;
use proxmox::{identity, sortable};
use crate::api2::types::*;
use crate::config::acl::PRIV_SYS_CONSOLE;
use crate::server::WorkerTask;
use crate::tools;
pub mod disks;
pub mod dns;
mod syslog;
pub mod network;
pub mod tasks;
pub(crate) mod rrd;
mod apt;
mod journal;
mod services;
mod status;
mod subscription;
mod syslog;
mod time;
pub const SHELL_CMD_SCHEMA: Schema = StringSchema::new("The command to run.")
.format(&ApiStringFormat::Enum(&[
EnumEntry::new("login", "Login"),
EnumEntry::new("upgrade", "Upgrade"),
]))
.schema();
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
cmd: {
schema: SHELL_CMD_SCHEMA,
optional: true,
},
},
},
returns: {
type: Object,
description: "Object with the user, ticket, port and upid",
properties: {
user: {
description: "",
type: String,
},
ticket: {
description: "",
type: String,
},
port: {
description: "",
type: String,
},
upid: {
description: "",
type: String,
},
}
},
access: {
description: "Restricted to users on realm 'pam'",
permission: &Permission::Privilege(&["system"], PRIV_SYS_CONSOLE, false),
}
)]
/// Call termproxy and return shell ticket
async fn termproxy(
cmd: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let userid: Userid = rpcenv
.get_user()
.ok_or_else(|| format_err!("unknown user"))?
.parse()?;
if userid.realm() != "pam" {
bail!("only pam users can use the console");
}
let path = "/system";
// use port 0 and let the kernel decide which port is free
let listener = TcpListener::bind("localhost:0")?;
let port = listener.local_addr()?.port();
let ticket = tools::ticket::assemble_term_ticket(
crate::auth_helpers::private_auth_key(),
&userid,
&path,
port,
)?;
let mut command = Vec::new();
match cmd.as_ref().map(|x| x.as_str()) {
Some("login") | None => {
command.push("login");
if userid == "root@pam" {
command.push("-f");
command.push("root");
}
}
Some("upgrade") => {
if userid != "root@pam" {
bail!("only root@pam can upgrade");
}
// TODO: add nicer/safer wrapper like in PVE instead
command.push("sh");
command.push("-c");
command.push("apt full-upgrade; bash -l");
}
_ => bail!("invalid command"),
};
let username = userid.name().to_owned();
let upid = WorkerTask::spawn(
"termproxy",
None,
userid,
false,
move |worker| async move {
// move inside the worker so that it survives and does not close the port
// remove CLOEXEC from listener so that we can reuse it in termproxy
let fd = listener.as_raw_fd();
let mut flags = match fcntl(fd, FcntlArg::F_GETFD) {
Ok(bits) => FdFlag::from_bits_truncate(bits),
Err(err) => bail!("could not get fd: {}", err),
};
flags.remove(FdFlag::FD_CLOEXEC);
if let Err(err) = fcntl(fd, FcntlArg::F_SETFD(flags)) {
bail!("could not set fd: {}", err);
}
let mut arguments: Vec<&str> = Vec::new();
let fd_string = fd.to_string();
arguments.push(&fd_string);
arguments.extend_from_slice(&[
"--path",
&path,
"--perm",
"Sys.Console",
"--authport",
"82",
"--port-as-fd",
"--",
]);
arguments.extend_from_slice(&command);
let mut cmd = tokio::process::Command::new("/usr/bin/termproxy");
cmd.args(&arguments)
.stdout(std::process::Stdio::piped())
.stderr(std::process::Stdio::piped());
let mut child = cmd.spawn().expect("error executing termproxy");
let stdout = child.stdout.take().expect("no child stdout handle");
let stderr = child.stderr.take().expect("no child stderr handle");
let worker_stdout = worker.clone();
let stdout_fut = async move {
let mut reader = BufReader::new(stdout).lines();
while let Some(line) = reader.next_line().await? {
worker_stdout.log(line);
}
Ok::<(), Error>(())
};
let worker_stderr = worker.clone();
let stderr_fut = async move {
let mut reader = BufReader::new(stderr).lines();
while let Some(line) = reader.next_line().await? {
worker_stderr.warn(line);
}
Ok::<(), Error>(())
};
let mut needs_kill = false;
let res = tokio::select!{
res = &mut child => {
let exit_code = res?;
if !exit_code.success() {
match exit_code.code() {
Some(code) => bail!("termproxy exited with {}", code),
None => bail!("termproxy exited by signal"),
}
}
Ok(())
},
res = stdout_fut => res,
res = stderr_fut => res,
res = worker.abort_future() => {
needs_kill = true;
res.map_err(Error::from)
}
};
if needs_kill {
if res.is_ok() {
child.kill()?;
child.await?;
return Ok(());
}
if let Err(err) = child.kill() {
worker.warn(format!("error killing termproxy: {}", err));
} else if let Err(err) = child.await {
worker.warn(format!("error awaiting termproxy: {}", err));
}
}
res
},
)?;
// FIXME: We're returning the user NAME only?
Ok(json!({
"user": username,
"ticket": ticket,
"port": port,
"upid": upid,
}))
}
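The listener is moved into the worker closure so the chosen port stays open until termproxy takes it over; the handler itself resolves to a JSON object of roughly this shape (values illustrative):
use serde_json::{json, Value};

// Illustrative response shape; ticket and upid are opaque strings.
fn termproxy_response_example() -> Value {
    json!({
        "user": "root",            // user *name* only, see the FIXME in the handler
        "ticket": "<term ticket>",
        "port": 42001,             // ephemeral port picked by the kernel
        "upid": "<worker task id>",
    })
}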
#[sortable]
pub const API_METHOD_WEBSOCKET: ApiMethod = ApiMethod::new(
&ApiHandler::AsyncHttp(&upgrade_to_websocket),
&ObjectSchema::new(
"Upgraded to websocket",
&sorted!([
("node", false, &NODE_SCHEMA),
(
"vncticket",
false,
&StringSchema::new("Terminal ticket").schema()
),
("port", false, &IntegerSchema::new("Terminal port").schema()),
]),
),
)
.access(
Some("The user needs Sys.Console on /system."),
&Permission::Privilege(&["system"], PRIV_SYS_CONSOLE, false),
);
fn upgrade_to_websocket(
parts: Parts,
req_body: Body,
param: Value,
_info: &ApiMethod,
rpcenv: Box<dyn RpcEnvironment>,
) -> ApiResponseFuture {
async move {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let ticket = tools::required_string_param(&param, "vncticket")?.to_owned();
let port: u16 = tools::required_integer_param(&param, "port")? as u16;
// will be checked again by termproxy
tools::ticket::verify_term_ticket(
crate::auth_helpers::public_auth_key(),
&userid,
&"/system",
port,
&ticket,
)?;
let (ws, response) = WebSocket::new(parts.headers)?;
crate::server::spawn_internal_task(async move {
let conn: Upgraded = match req_body.on_upgrade().map_err(Error::from).await {
Ok(upgraded) => upgraded,
_ => bail!("error"),
};
let local = tokio::net::TcpStream::connect(format!("localhost:{}", port)).await?;
ws.serve_connection(conn, local).await
});
Ok(response)
}
.boxed()
}
pub const SUBDIRS: SubdirMap = &[
("apt", &apt::ROUTER),
("disks", &disks::ROUTER),
("dns", &dns::ROUTER),
("journal", &journal::ROUTER),
("network", &network::ROUTER),
("rrd", &rrd::ROUTER),
("services", &services::ROUTER),
("status", &status::ROUTER),
("subscription", &subscription::ROUTER),
("syslog", &syslog::ROUTER),
("tasks", &tasks::ROUTER),
("termproxy", &Router::new().post(&API_METHOD_TERMPROXY)),
("time", &time::ROUTER),
(
"vncwebsocket",
&Router::new().upgrade(&API_METHOD_WEBSOCKET),
),
];
pub const ROUTER: Router = Router::new()

268 src/api2/node/apt.rs Normal file

@@ -0,0 +1,268 @@
use apt_pkg_native::Cache;
use anyhow::{Error, bail};
use serde_json::{json, Value};
use proxmox::{list_subdirs_api_method, const_regex};
use proxmox::api::{api, RpcEnvironment, RpcEnvironmentType, Permission};
use proxmox::api::router::{Router, SubdirMap};
use crate::server::WorkerTask;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::api2::types::{APTUpdateInfo, NODE_SCHEMA, Userid, UPID_SCHEMA};
const_regex! {
VERSION_EPOCH_REGEX = r"^\d+:";
FILENAME_EXTRACT_REGEX = r"^.*/.*?_(.*)_Packages$";
}
// FIXME: Replace with call to 'apt changelog <pkg> --print-uris'. Currently
// not possible as our packages do not have a URI set in their Release file
fn get_changelog_url(
package: &str,
filename: &str,
source_pkg: &str,
version: &str,
source_version: &str,
origin: &str,
component: &str,
) -> Result<String, Error> {
if origin == "" {
bail!("no origin available for package {}", package);
}
if origin == "Debian" {
let source_version = (VERSION_EPOCH_REGEX.regex_obj)().replace_all(source_version, "");
let prefix = if source_pkg.starts_with("lib") {
source_pkg.get(0..4)
} else {
source_pkg.get(0..1)
};
let prefix = match prefix {
Some(p) => p,
None => bail!("cannot get starting characters of package name '{}'", package)
};
// note: security updates seem to not always upload a changelog for
// their package version, so this only works *most* of the time
return Ok(format!("https://metadata.ftp-master.debian.org/changelogs/main/{}/{}/{}_{}_changelog",
prefix, source_pkg, source_pkg, source_version));
} else if origin == "Proxmox" {
let version = (VERSION_EPOCH_REGEX.regex_obj)().replace_all(version, "");
let base = match (FILENAME_EXTRACT_REGEX.regex_obj)().captures(filename) {
Some(captures) => {
let base_capture = captures.get(1);
match base_capture {
Some(base_underscore) => base_underscore.as_str().replace("_", "/"),
None => bail!("incompatible filename, cannot find regex group")
}
},
None => bail!("incompatible filename, doesn't match regex")
};
return Ok(format!("http://download.proxmox.com/{}/{}_{}.changelog",
base, package, version));
}
bail!("unknown origin ({}) or component ({})", origin, component)
}
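For the Debian branch, the pool prefix is the first character of the source package name, or the first four characters for lib* packages. A standalone sketch with assumed inputs:
// e.g. ("libzstd", "1.4.4+dfsg-3") ->
// https://metadata.ftp-master.debian.org/changelogs/main/libz/libzstd/libzstd_1.4.4+dfsg-3_changelog
fn debian_changelog_url(source_pkg: &str, source_version: &str) -> String {
    let prefix_len = if source_pkg.starts_with("lib") { 4 } else { 1 };
    let prefix = source_pkg.get(..prefix_len).unwrap_or(source_pkg);
    format!(
        "https://metadata.ftp-master.debian.org/changelogs/main/{}/{}/{}_{}_changelog",
        prefix, source_pkg, source_pkg, source_version
    )
}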
fn list_installed_apt_packages<F: Fn(&str, &str, &str) -> bool>(filter: F)
-> Vec<APTUpdateInfo> {
let mut ret = Vec::new();
// note: this is not an 'apt update', it just re-reads the cache from disk
let mut cache = Cache::get_singleton();
cache.reload();
let mut cache_iter = cache.iter();
loop {
let view = match cache_iter.next() {
Some(view) => view,
None => break
};
let current_version = match view.current_version() {
Some(vers) => vers,
None => continue
};
let candidate_version = match view.candidate_version() {
Some(vers) => vers,
// if there's no candidate (i.e. no update) get info of currently
// installed version instead
None => current_version.clone()
};
let package = view.name();
if filter(&package, &current_version, &candidate_version) {
let mut origin_res = "unknown".to_owned();
let mut section_res = "unknown".to_owned();
let mut priority_res = "unknown".to_owned();
let mut change_log_url = "".to_owned();
let mut short_desc = package.clone();
let mut long_desc = "".to_owned();
// get additional information via nested APT 'iterators'
let mut view_iter = view.versions();
while let Some(ver) = view_iter.next() {
if ver.version() == candidate_version {
if let Some(section) = ver.section() {
section_res = section;
}
if let Some(prio) = ver.priority_type() {
priority_res = prio;
}
// assume every package has only one origin file (not
// origin, but origin *file*, for some reason those seem to
// be different concepts in APT)
let mut origin_iter = ver.origin_iter();
let origin = origin_iter.next();
if let Some(origin) = origin {
if let Some(sd) = origin.short_desc() {
short_desc = sd;
}
if let Some(ld) = origin.long_desc() {
long_desc = ld;
}
// the package files appear in priority order, meaning
// the one for the candidate version is first
let mut pkg_iter = origin.file();
let pkg_file = pkg_iter.next();
if let Some(pkg_file) = pkg_file {
if let Some(origin_name) = pkg_file.origin() {
origin_res = origin_name;
}
let filename = pkg_file.file_name();
let source_pkg = ver.source_package();
let source_ver = ver.source_version();
let component = pkg_file.component();
// build changelog URL from gathered information
// ignore errors, use empty changelog instead
let url = get_changelog_url(&package, &filename, &source_pkg,
&candidate_version, &source_ver, &origin_res, &component);
if let Ok(url) = url {
change_log_url = url;
}
}
}
break;
}
}
let info = APTUpdateInfo {
package,
title: short_desc,
arch: view.arch(),
description: long_desc,
change_log_url,
origin: origin_res,
version: candidate_version,
old_version: current_version,
priority: priority_res,
section: section_res,
};
ret.push(info);
}
}
return ret;
}
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
},
},
returns: {
description: "A list of packages with available updates.",
type: Array,
items: { type: APTUpdateInfo },
},
access: {
permission: &Permission::Privilege(&[], PRIV_SYS_AUDIT, false),
},
)]
/// List available APT updates
fn apt_update_available(_param: Value) -> Result<Value, Error> {
let ret = list_installed_apt_packages(|_pkg, cur_ver, can_ver| cur_ver != can_ver);
Ok(json!(ret))
}
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
quiet: {
description: "Only produces output suitable for logging, omitting progress indicators.",
type: bool,
default: false,
optional: true,
},
},
},
returns: {
schema: UPID_SCHEMA,
},
access: {
permission: &Permission::Privilege(&[], PRIV_SYS_MODIFY, false),
},
)]
/// Update the APT database
pub fn apt_update_database(
quiet: Option<bool>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let quiet = quiet.unwrap_or(API_METHOD_APT_UPDATE_DATABASE_PARAM_DEFAULT_QUIET);
let upid_str = WorkerTask::new_thread("aptupdate", None, userid, to_stdout, move |worker| {
if !quiet { worker.log("starting apt-get update") }
// TODO: set proxy /etc/apt/apt.conf.d/76pbsproxy like PVE
let mut command = std::process::Command::new("apt-get");
command.arg("update");
let output = crate::tools::run_command(command, None)?;
if !quiet { worker.log(output) }
// TODO: add mail notify for new updates like PVE
Ok(())
})?;
Ok(upid_str)
}
const SUBDIRS: SubdirMap = &[
("update", &Router::new()
.get(&API_METHOD_APT_UPDATE_AVAILABLE)
.post(&API_METHOD_APT_UPDATE_DATABASE)
),
];
pub const ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))
.subdirs(SUBDIRS);

188 src/api2/node/disks.rs Normal file

@@ -0,0 +1,188 @@
use anyhow::{bail, Error};
use serde_json::{json, Value};
use proxmox::api::{api, Permission, RpcEnvironment, RpcEnvironmentType};
use proxmox::api::router::{Router, SubdirMap};
use proxmox::{sortable, identity};
use proxmox::{list_subdirs_api_method};
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::tools::disks::{
DiskUsageInfo, DiskUsageType, DiskManage, SmartData,
get_disks, get_smart_data, get_disk_usage_info, inititialize_gpt_disk,
};
use crate::server::WorkerTask;
use crate::api2::types::{Userid, UPID_SCHEMA, NODE_SCHEMA, BLOCKDEVICE_NAME_SCHEMA};
pub mod directory;
pub mod zfs;
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
skipsmart: {
description: "Skip smart checks.",
type: bool,
optional: true,
default: false,
},
"usage-type": {
type: DiskUsageType,
optional: true,
},
},
},
returns: {
description: "Local disk list.",
type: Array,
items: {
type: DiskUsageInfo,
},
},
access: {
permission: &Permission::Privilege(&["system", "disks"], PRIV_SYS_AUDIT, false),
},
)]
/// List local disks
pub fn list_disks(
skipsmart: bool,
usage_type: Option<DiskUsageType>,
) -> Result<Vec<DiskUsageInfo>, Error> {
let mut list = Vec::new();
for (_, info) in get_disks(None, skipsmart)? {
if let Some(ref usage_type) = usage_type {
if info.used == *usage_type {
list.push(info);
}
} else {
list.push(info);
}
}
Ok(list)
}
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
disk: {
schema: BLOCKDEVICE_NAME_SCHEMA,
},
healthonly: {
description: "If true returns only the health status.",
type: bool,
optional: true,
},
},
},
returns: {
type: SmartData,
},
access: {
permission: &Permission::Privilege(&["system", "disks"], PRIV_SYS_AUDIT, false),
},
)]
/// Get SMART attributes and health of a disk.
pub fn smart_status(
disk: String,
healthonly: Option<bool>,
) -> Result<SmartData, Error> {
let healthonly = healthonly.unwrap_or(false);
let manager = DiskManage::new();
let disk = manager.disk_by_name(&disk)?;
get_smart_data(&disk, healthonly)
}
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
disk: {
schema: BLOCKDEVICE_NAME_SCHEMA,
},
uuid: {
description: "UUID for the GPT table.",
type: String,
optional: true,
max_length: 36,
},
},
},
returns: {
schema: UPID_SCHEMA,
},
access: {
permission: &Permission::Privilege(&["system", "disks"], PRIV_SYS_MODIFY, false),
},
)]
/// Initialize empty Disk with GPT
pub fn initialize_disk(
disk: String,
uuid: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let info = get_disk_usage_info(&disk, true)?;
if info.used != DiskUsageType::Unused {
bail!("disk '{}' is already in use.", disk);
}
let upid_str = WorkerTask::new_thread(
"diskinit", Some(disk.clone()), userid, to_stdout, move |worker|
{
worker.log(format!("initialize disk {}", disk));
let disk_manager = DiskManage::new();
let disk_info = disk_manager.disk_by_name(&disk)?;
inititialize_gpt_disk(&disk_info, uuid.as_deref())?;
Ok(())
})?;
Ok(json!(upid_str))
}
#[sortable]
const SUBDIRS: SubdirMap = &sorted!([
// ("lvm", &lvm::ROUTER),
("directory", &directory::ROUTER),
("zfs", &zfs::ROUTER),
(
"initgpt", &Router::new()
.post(&API_METHOD_INITIALIZE_DISK)
),
(
"list", &Router::new()
.get(&API_METHOD_LIST_DISKS)
),
(
"smart", &Router::new()
.get(&API_METHOD_SMART_STATUS)
),
]);
pub const ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))
.subdirs(SUBDIRS);

221 src/api2/node/disks/directory.rs Normal file

@@ -0,0 +1,221 @@
use anyhow::{bail, Error};
use serde_json::json;
use ::serde::{Deserialize, Serialize};
use proxmox::api::{api, Permission, RpcEnvironment, RpcEnvironmentType};
use proxmox::api::section_config::SectionConfigData;
use proxmox::api::router::Router;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::tools::disks::{
DiskManage, FileSystemType, DiskUsageType,
create_file_system, create_single_linux_partition, get_fs_uuid, get_disk_usage_info,
};
use crate::tools::systemd::{self, types::*};
use crate::server::WorkerTask;
use crate::api2::types::*;
#[api(
properties: {
"filesystem": {
type: FileSystemType,
optional: true,
},
},
)]
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// Datastore mount info.
pub struct DatastoreMountInfo {
/// The path of the mount unit.
pub unitfile: String,
/// The mount path.
pub path: String,
/// The mounted device.
pub device: String,
/// File system type
pub filesystem: Option<String>,
/// Mount options
pub options: Option<String>,
}
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
}
},
returns: {
description: "List of systemd datastore mount units.",
type: Array,
items: {
type: DatastoreMountInfo,
},
},
access: {
permission: &Permission::Privilege(&["system", "disks"], PRIV_SYS_AUDIT, false),
},
)]
/// List systemd datastore mount units.
pub fn list_datastore_mounts() -> Result<Vec<DatastoreMountInfo>, Error> {
lazy_static::lazy_static! {
static ref MOUNT_NAME_REGEX: regex::Regex = regex::Regex::new(r"^mnt-datastore-(.+)\.mount$").unwrap();
}
let mut list = Vec::new();
let basedir = "/etc/systemd/system";
for item in crate::tools::fs::scan_subdir(libc::AT_FDCWD, basedir, &MOUNT_NAME_REGEX)? {
let item = item?;
let name = item.file_name().to_string_lossy().to_string();
let unitfile = format!("{}/{}", basedir, name);
let config = systemd::config::parse_systemd_mount(&unitfile)?;
let data: SystemdMountSection = config.lookup("Mount", "Mount")?;
list.push(DatastoreMountInfo {
unitfile,
device: data.What,
path: data.Where,
filesystem: data.Type,
options: data.Options,
});
}
Ok(list)
}
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
name: {
schema: DATASTORE_SCHEMA,
},
disk: {
schema: BLOCKDEVICE_NAME_SCHEMA,
},
"add-datastore": {
description: "Configure a datastore using the directory.",
type: bool,
optional: true,
},
filesystem: {
type: FileSystemType,
optional: true,
},
}
},
returns: {
schema: UPID_SCHEMA,
},
access: {
permission: &Permission::Privilege(&["system", "disks"], PRIV_SYS_MODIFY, false),
},
)]
/// Create a Filesystem on an unused disk. Will be mounted under '/mnt/datastore/<name>'.
pub fn create_datastore_disk(
name: String,
disk: String,
add_datastore: Option<bool>,
filesystem: Option<FileSystemType>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let info = get_disk_usage_info(&disk, true)?;
if info.used != DiskUsageType::Unused {
bail!("disk '{}' is already in use.", disk);
}
let upid_str = WorkerTask::new_thread(
"dircreate", Some(name.clone()), userid, to_stdout, move |worker|
{
worker.log(format!("create datastore '{}' on disk {}", name, disk));
let add_datastore = add_datastore.unwrap_or(false);
let filesystem = filesystem.unwrap_or(FileSystemType::Ext4);
let manager = DiskManage::new();
let disk = manager.clone().disk_by_name(&disk)?;
let partition = create_single_linux_partition(&disk)?;
create_file_system(&partition, filesystem)?;
let uuid = get_fs_uuid(&partition)?;
let uuid_path = format!("/dev/disk/by-uuid/{}", uuid);
let (mount_unit_name, mount_point) = create_datastore_mount_unit(&name, filesystem, &uuid_path)?;
systemd::reload_daemon()?;
systemd::enable_unit(&mount_unit_name)?;
systemd::start_unit(&mount_unit_name)?;
if add_datastore {
crate::api2::config::datastore::create_datastore(json!({ "name": name, "path": mount_point }))?
}
Ok(())
})?;
Ok(upid_str)
}
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_DATASTORE_MOUNTS)
.post(&API_METHOD_CREATE_DATASTORE_DISK);
fn create_datastore_mount_unit(
datastore_name: &str,
fs_type: FileSystemType,
what: &str,
) -> Result<(String, String), Error> {
let mount_point = format!("/mnt/datastore/{}", datastore_name);
let mut mount_unit_name = systemd::escape_unit(&mount_point, true);
mount_unit_name.push_str(".mount");
let mount_unit_path = format!("/etc/systemd/system/{}", mount_unit_name);
let unit = SystemdUnitSection {
Description: format!("Mount datatstore '{}' under '{}'", datastore_name, mount_point),
..Default::default()
};
let install = SystemdInstallSection {
WantedBy: Some(vec!["multi-user.target".to_string()]),
..Default::default()
};
let mount = SystemdMountSection {
What: what.to_string(),
Where: mount_point.clone(),
Type: Some(fs_type.to_string()),
Options: Some(String::from("defaults")),
..Default::default()
};
let mut config = SectionConfigData::new();
config.set_data("Unit", "Unit", unit)?;
config.set_data("Install", "Install", install)?;
config.set_data("Mount", "Mount", mount)?;
systemd::config::save_systemd_mount(&mount_unit_path, &config)?;
Ok((mount_unit_name, mount_point))
}
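For a datastore named 'store1' on ext4 this helper would produce a mount unit roughly like the following (illustrative; exact section order and unit-name escaping come from save_systemd_mount and escape_unit):
const EXAMPLE_MOUNT_UNIT: &str = "\
[Unit]
Description=Mount datastore 'store1' under '/mnt/datastore/store1'

[Mount]
What=/dev/disk/by-uuid/<fs uuid>
Where=/mnt/datastore/store1
Type=ext4
Options=defaults

[Install]
WantedBy=multi-user.target
";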

383 src/api2/node/disks/zfs.rs Normal file

@@ -0,0 +1,383 @@
use anyhow::{bail, Error};
use serde_json::{json, Value};
use ::serde::{Deserialize, Serialize};
use proxmox::api::{
api, Permission, RpcEnvironment, RpcEnvironmentType,
schema::{
Schema,
StringSchema,
ArraySchema,
IntegerSchema,
ApiStringFormat,
parse_property_string,
},
};
use proxmox::api::router::Router;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::tools::disks::{
zpool_list, zpool_status, parse_zpool_status_config_tree, vdev_list_to_tree,
DiskUsageType,
};
use crate::server::WorkerTask;
use crate::api2::types::*;
pub const DISK_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Disk name list.", &BLOCKDEVICE_NAME_SCHEMA)
.schema();
pub const DISK_LIST_SCHEMA: Schema = StringSchema::new(
"A list of disk names, comma separated.")
.format(&ApiStringFormat::PropertyString(&DISK_ARRAY_SCHEMA))
.schema();
pub const ZFS_ASHIFT_SCHEMA: Schema = IntegerSchema::new(
"Pool sector size exponent.")
.minimum(9)
.maximum(16)
.default(12)
.schema();
pub const ZPOOL_NAME_SCHEMA: Schema = StringSchema::new("ZFS Pool Name")
.format(&ApiStringFormat::Pattern(&ZPOOL_NAME_REGEX))
.schema();
#[api(
default: "On",
)]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// The ZFS compression algorithm to use.
pub enum ZfsCompressionType {
/// Gnu Zip
Gzip,
/// LZ4
Lz4,
/// LZJB
Lzjb,
/// ZLE
Zle,
/// Enable compression using the default algorithm.
On,
/// Disable compression.
Off,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// The ZFS RAID level to use.
pub enum ZfsRaidLevel {
/// Single Disk
Single,
/// Mirror
Mirror,
/// Raid10
Raid10,
/// RaidZ
RaidZ,
/// RaidZ2
RaidZ2,
/// RaidZ3
RaidZ3,
}
#[api()]
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// zpool list item
pub struct ZpoolListItem {
/// zpool name
pub name: String,
/// Health
pub health: String,
/// Total size
pub size: u64,
/// Used size
pub alloc: u64,
/// Free space
pub free: u64,
/// ZFS fragmentation level
pub frag: u64,
/// ZFS deduplication ratio
pub dedup: f64,
}
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
},
},
returns: {
description: "List of zpools.",
type: Array,
items: {
type: ZpoolListItem,
},
},
access: {
permission: &Permission::Privilege(&["system", "disks"], PRIV_SYS_AUDIT, false),
},
)]
/// List zfs pools.
pub fn list_zpools() -> Result<Vec<ZpoolListItem>, Error> {
let data = zpool_list(None, false)?;
let mut list = Vec::new();
for item in data {
if let Some(usage) = item.usage {
list.push(ZpoolListItem {
name: item.name,
health: item.health,
size: usage.size,
alloc: usage.alloc,
free: usage.free,
frag: usage.frag,
dedup: usage.dedup,
});
}
}
Ok(list)
}
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
name: {
schema: ZPOOL_NAME_SCHEMA,
},
},
},
returns: {
description: "zpool vdev tree with status",
properties: {
},
},
access: {
permission: &Permission::Privilege(&["system", "disks"], PRIV_SYS_AUDIT, false),
},
)]
/// Get zpool status details.
pub fn zpool_details(
name: String,
) -> Result<Value, Error> {
let key_value_list = zpool_status(&name)?;
let config = match key_value_list.iter().find(|(k, _)| k == "config") {
Some((_, v)) => v,
None => bail!("got zpool status without config key"),
};
let vdev_list = parse_zpool_status_config_tree(config)?;
let mut tree = vdev_list_to_tree(&vdev_list)?;
for (k, v) in key_value_list {
if k != "config" {
tree[k] = v.into();
}
}
tree["name"] = tree.as_object_mut().unwrap()
.remove("pool")
.unwrap_or_else(|| name.into());
Ok(tree)
}
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
name: {
schema: DATASTORE_SCHEMA,
},
devices: {
schema: DISK_LIST_SCHEMA,
},
raidlevel: {
type: ZfsRaidLevel,
},
ashift: {
schema: ZFS_ASHIFT_SCHEMA,
optional: true,
},
compression: {
type: ZfsCompressionType,
optional: true,
},
"add-datastore": {
description: "Configure a datastore using the zpool.",
type: bool,
optional: true,
},
},
},
returns: {
schema: UPID_SCHEMA,
},
access: {
permission: &Permission::Privilege(&["system", "disks"], PRIV_SYS_MODIFY, false),
},
)]
/// Create a new ZFS pool.
pub fn create_zpool(
name: String,
devices: String,
raidlevel: ZfsRaidLevel,
compression: Option<String>,
ashift: Option<usize>,
add_datastore: Option<bool>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let add_datastore = add_datastore.unwrap_or(false);
let ashift = ashift.unwrap_or(12);
let devices_text = devices.clone();
let devices = parse_property_string(&devices, &DISK_ARRAY_SCHEMA)?;
let devices: Vec<String> = devices.as_array().unwrap().iter()
.map(|v| v.as_str().unwrap().to_string()).collect();
let disk_map = crate::tools::disks::get_disks(None, true)?;
for disk in devices.iter() {
match disk_map.get(disk) {
Some(info) => {
if info.used != DiskUsageType::Unused {
bail!("disk '{}' is already in use.", disk);
}
}
None => {
bail!("no such disk '{}'", disk);
}
}
}
let min_disks = match raidlevel {
ZfsRaidLevel::Single => 1,
ZfsRaidLevel::Mirror => 2,
ZfsRaidLevel::Raid10 => 4,
ZfsRaidLevel::RaidZ => 3,
ZfsRaidLevel::RaidZ2 => 4,
ZfsRaidLevel::RaidZ3 => 5,
};
// Sanity checks
if raidlevel == ZfsRaidLevel::Raid10 && devices.len() % 2 != 0 {
bail!("Raid10 needs an even number of disks.");
}
if raidlevel == ZfsRaidLevel::Single && devices.len() > 1 {
bail!("Please give only one disk for single disk mode.");
}
if devices.len() < min_disks {
bail!("{:?} needs at least {} disks.", raidlevel, min_disks);
}
// check if the default path does exist already and bail if it does
// otherwise we get an error on mounting
let mut default_path = std::path::PathBuf::from("/");
default_path.push(&name);
match std::fs::metadata(&default_path) {
Err(_) => {}, // path does not exist
Ok(_) => {
bail!("path {:?} already exists", default_path);
}
}
let upid_str = WorkerTask::new_thread(
"zfscreate", Some(name.clone()), userid, to_stdout, move |worker|
{
worker.log(format!("create {:?} zpool '{}' on devices '{}'", raidlevel, name, devices_text));
let mut command = std::process::Command::new("zpool");
command.args(&["create", "-o", &format!("ashift={}", ashift), &name]);
match raidlevel {
ZfsRaidLevel::Single => {
command.arg(&devices[0]);
}
ZfsRaidLevel::Mirror => {
command.arg("mirror");
command.args(devices);
}
ZfsRaidLevel::Raid10 => {
devices.chunks(2).for_each(|pair| {
command.arg("mirror");
command.args(pair);
});
}
ZfsRaidLevel::RaidZ => {
command.arg("raidz");
command.args(devices);
}
ZfsRaidLevel::RaidZ2 => {
command.arg("raidz2");
command.args(devices);
}
ZfsRaidLevel::RaidZ3 => {
command.arg("raidz3");
command.args(devices);
}
}
worker.log(format!("# {:?}", command));
let output = crate::tools::run_command(command, None)?;
worker.log(output);
if let Some(compression) = compression {
let mut command = std::process::Command::new("zfs");
command.args(&["set", &format!("compression={}", compression), &name]);
worker.log(format!("# {:?}", command));
let output = crate::tools::run_command(command, None)?;
worker.log(output);
}
if add_datastore {
let mount_point = format!("/{}", name);
crate::api2::config::datastore::create_datastore(json!({ "name": name, "path": mount_point }))?
}
Ok(())
})?;
Ok(upid_str)
}
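Each RAID level maps onto a zpool create invocation; the Raid10 branch pairs consecutive disks into mirrors. A standalone sketch of the argument assembly for that branch (names illustrative):
// e.g. ("tank", ["sda","sdb","sdc","sdd"], 12) ->
// ["create", "-o", "ashift=12", "tank", "mirror", "sda", "sdb", "mirror", "sdc", "sdd"]
fn zpool_raid10_args(name: &str, devices: &[&str], ashift: usize) -> Vec<String> {
    let mut args = vec!["create".to_string(), "-o".to_string(), format!("ashift={}", ashift), name.to_string()];
    for pair in devices.chunks(2) {
        args.push("mirror".to_string());
        args.extend(pair.iter().map(|d| d.to_string()));
    }
    args
}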
pub const POOL_ROUTER: Router = Router::new()
.get(&API_METHOD_ZPOOL_DETAILS);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_ZPOOLS)
.post(&API_METHOD_CREATE_ZPOOL)
.match_all("name", &POOL_ROUTER);

src/api2/node/dns.rs

@@ -1,21 +1,34 @@
use std::sync::{Arc, Mutex};
use failure::*;
use anyhow::{Error};
use lazy_static::lazy_static;
use openssl::sha;
use regex::Regex;
use serde_json::{json, Value};
use ::serde::{Deserialize, Serialize};
use proxmox::{sortable, identity};
use proxmox::api::{ApiHandler, ApiMethod, Router, RpcEnvironment};
use proxmox::api::schema::*;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
use proxmox::{IPRE, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};
use crate::api2::types::*;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
static RESOLV_CONF_FN: &str = "/etc/resolv.conf";
#[api()]
#[derive(Serialize, Deserialize)]
#[allow(non_camel_case_types)]
/// Deletable property name
pub enum DeletableProperty {
/// Delete first nameserver entry
dns1,
/// Delete second nameserver entry
dns2,
/// Delete third nameserver entry
dns3,
}
pub fn read_etc_resolv_conf() -> Result<Value, Error> {
let mut result = json!({});
@@ -34,6 +47,8 @@ pub fn read_etc_resolv_conf() -> Result<Value, Error> {
concat!(r"^\s*nameserver\s+(", IPRE!(), r")\s*")).unwrap();
}
let mut options = String::new();
for line in data.lines() {
if let Some(caps) = DOMAIN_REGEX.captures(&line) {
@@ -44,16 +59,69 @@ pub fn read_etc_resolv_conf() -> Result<Value, Error> {
let nameserver = &caps[1];
let id = format!("dns{}", nscount);
result[id] = Value::from(nameserver);
} else {
if !options.is_empty() { options.push('\n'); }
options.push_str(line);
}
}
if !options.is_empty() {
result["options"] = options.into();
}
Ok(result)
}
fn update_dns(
param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
#[api(
protected: true,
input: {
description: "Update DNS settings.",
properties: {
node: {
schema: NODE_SCHEMA,
},
search: {
schema: SEARCH_DOMAIN_SCHEMA,
optional: true,
},
dns1: {
optional: true,
schema: FIRST_DNS_SERVER_SCHEMA,
},
dns2: {
optional: true,
schema: SECOND_DNS_SERVER_SCHEMA,
},
dns3: {
optional: true,
schema: THIRD_DNS_SERVER_SCHEMA,
},
delete: {
description: "List of properties to delete.",
type: Array,
optional: true,
items: {
type: DeletableProperty,
}
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "network", "dns"], PRIV_SYS_MODIFY, false),
}
)]
/// Update DNS settings
pub fn update_dns(
search: Option<String>,
dns1: Option<String>,
dns2: Option<String>,
dns3: Option<String>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
) -> Result<Value, Error> {
lazy_static! {
@@ -62,33 +130,41 @@ fn update_dns(
let _guard = MUTEX.lock();
let search = crate::tools::required_string_param(&param, "search")?;
let mut config = read_etc_resolv_conf()?;
let old_digest = config["digest"].as_str().unwrap();
let raw = file_get_contents(RESOLV_CONF_FN)?;
let old_digest = proxmox::tools::digest_to_hex(&sha::sha256(&raw));
if let Some(digest) = param["digest"].as_str() {
crate::tools::assert_if_modified(&old_digest, &digest)?;
if let Some(digest) = digest {
crate::tools::assert_if_modified(old_digest, &digest)?;
}
let old_data = String::from_utf8(raw)?;
let mut data = format!("search {}\n", search);
for opt in &["dns1", "dns2", "dns3"] {
if let Some(server) = param[opt].as_str() {
data.push_str(&format!("nameserver {}\n", server));
if let Some(delete) = delete {
for delete_prop in delete {
let config = config.as_object_mut().unwrap();
match delete_prop {
DeletableProperty::dns1 => { config.remove("dns1"); },
DeletableProperty::dns2 => { config.remove("dns2"); },
DeletableProperty::dns3 => { config.remove("dns3"); },
}
}
}
// append other data
lazy_static! {
static ref SKIP_REGEX: Regex = Regex::new(r"^(search|domain|nameserver)\s+").unwrap();
if let Some(search) = search { config["search"] = search.into(); }
if let Some(dns1) = dns1 { config["dns1"] = dns1.into(); }
if let Some(dns2) = dns2 { config["dns2"] = dns2.into(); }
if let Some(dns3) = dns3 { config["dns3"] = dns3.into(); }
let mut data = String::new();
if let Some(search) = config["search"].as_str() {
data.push_str(&format!("search {}\n", search));
}
for line in old_data.lines() {
if SKIP_REGEX.is_match(line) { continue; }
data.push_str(line);
data.push('\n');
for opt in &["dns1", "dns2", "dns3"] {
if let Some(server) = config[opt].as_str() {
data.push_str(&format!("nameserver {}\n", server));
}
}
if let Some(options) = config["options"].as_str() {
data.push_str(options);
}
replace_file(RESOLV_CONF_FN, data.as_bytes(), CreateOptions::new())?;
@@ -96,7 +172,45 @@ fn update_dns(
Ok(Value::Null)
}
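After deletions and updates are merged, the handler rewrites /etc/resolv.conf from the resulting state, carrying over option lines it does not manage. A simplified standalone sketch of the assembly:
fn build_resolv_conf(search: Option<&str>, servers: &[&str], options: &str) -> String {
    let mut data = String::new();
    if let Some(search) = search {
        data.push_str(&format!("search {}\n", search));
    }
    for server in servers {
        data.push_str(&format!("nameserver {}\n", server)); // dns1..dns3 in order
    }
    data.push_str(options); // lines matching neither search/domain nor nameserver
    data
}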
fn get_dns(
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
},
},
returns: {
description: "Returns DNS server IPs and sreach domain.",
type: Object,
properties: {
digest: {
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
search: {
optional: true,
schema: SEARCH_DOMAIN_SCHEMA,
},
dns1: {
optional: true,
schema: FIRST_DNS_SERVER_SCHEMA,
},
dns2: {
optional: true,
schema: SECOND_DNS_SERVER_SCHEMA,
},
dns3: {
optional: true,
schema: THIRD_DNS_SERVER_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "network", "dns"], PRIV_SYS_AUDIT, false),
}
)]
/// Read DNS settings.
pub fn get_dns(
_param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
@@ -105,41 +219,6 @@ fn get_dns(
read_etc_resolv_conf()
}
#[sortable]
pub const ROUTER: Router = Router::new()
.get(
&ApiMethod::new(
&ApiHandler::Sync(&get_dns),
&ObjectSchema::new(
"Read DNS settings.",
&sorted!([ ("node", false, &NODE_SCHEMA) ]),
)
).returns(
&ObjectSchema::new(
"Returns DNS server IPs and sreach domain.",
&sorted!([
("digest", false, &PROXMOX_CONFIG_DIGEST_SCHEMA),
("search", true, &SEARCH_DOMAIN_SCHEMA),
("dns1", true, &FIRST_DNS_SERVER_SCHEMA),
("dns2", true, &SECOND_DNS_SERVER_SCHEMA),
("dns3", true, &THIRD_DNS_SERVER_SCHEMA),
]),
).schema()
)
)
.put(
&ApiMethod::new(
&ApiHandler::Sync(&update_dns),
&ObjectSchema::new(
"Returns DNS server IPs and sreach domain.",
&sorted!([
("node", false, &NODE_SCHEMA),
("search", false, &SEARCH_DOMAIN_SCHEMA),
("dns1", true, &FIRST_DNS_SERVER_SCHEMA),
("dns2", true, &SECOND_DNS_SERVER_SCHEMA),
("dns3", true, &THIRD_DNS_SERVER_SCHEMA),
("digest", true, &PROXMOX_CONFIG_DIGEST_SCHEMA),
]),
)
).protected(true)
);
.get(&API_METHOD_GET_DNS)
.put(&API_METHOD_UPDATE_DNS);

src/api2/node/journal.rs

@@ -1,12 +1,13 @@
use std::process::{Command, Stdio};
use failure::*;
use anyhow::{Error};
use serde_json::{json, Value};
use std::io::{BufRead,BufReader};
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment};
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
use crate::api2::types::*;
use crate::config::acl::PRIV_SYS_AUDIT;
#[api(
protected: true,
@@ -53,6 +54,9 @@ use crate::api2::types::*;
description: "Line text.",
},
},
access: {
permission: &Permission::Privilege(&["system", "log"], PRIV_SYS_AUDIT, false),
},
)]
/// Read syslog entries.
fn get_journal(
@@ -90,7 +94,7 @@ fn get_journal(
let mut lines: Vec<String> = vec![];
let mut child = Command::new("/usr/bin/mini-journalreader")
let mut child = Command::new("mini-journalreader")
.args(&args)
.stdout(Stdio::piped())
.spawn()?;

src/api2/node/network.rs

@@ -1,28 +1,672 @@
use failure::*;
use serde_json::{json, Value};
use anyhow::{Error, bail};
use serde_json::{Value, to_value};
use ::serde::{Deserialize, Serialize};
use proxmox::api::{ApiHandler, ApiMethod, Router, RpcEnvironment};
use proxmox::api::schema::ObjectSchema;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
use proxmox::api::schema::parse_property_string;
use proxmox::tools::fs::open_file_locked;
use crate::config::network::{self, NetworkConfig};
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::api2::types::*;
use crate::server::{WorkerTask};
fn get_network_config(
_param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
Ok(json!({}))
fn split_interface_list(list: &str) -> Result<Vec<String>, Error> {
let value = parse_property_string(&list, &NETWORK_INTERFACE_ARRAY_SCHEMA)?;
Ok(value.as_array().unwrap().iter().map(|v| v.as_str().unwrap().to_string()).collect())
}
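parse_property_string validates each entry against NETWORK_INTERFACE_ARRAY_SCHEMA before the helper flattens the result to plain strings. A dependency-free sketch of the flattening (assuming the usual property-string separators):
// e.g. "ens18,ens19" -> ["ens18", "ens19"]
fn split_interface_list_sketch(list: &str) -> Vec<String> {
    list.split(|c: char| c == ',' || c == ';' || c.is_whitespace())
        .filter(|s| !s.is_empty())
        .map(str::to_string)
        .collect()
}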
fn check_duplicate_gateway_v4(config: &NetworkConfig, iface: &str) -> Result<(), Error> {
let current_gateway_v4 = config.interfaces.iter()
.find(|(_, interface)| interface.gateway.is_some())
.map(|(name, _)| name.to_string());
if let Some(current_gateway_v4) = current_gateway_v4 {
if current_gateway_v4 != iface {
bail!("Default IPv4 gateway already exists on interface '{}'", current_gateway_v4);
}
}
Ok(())
}
fn check_duplicate_gateway_v6(config: &NetworkConfig, iface: &str) -> Result<(), Error> {
let current_gateway_v6 = config.interfaces.iter()
.find(|(_, interface)| interface.gateway6.is_some())
.map(|(name, _)| name.to_string());
if let Some(current_gateway_v6) = current_gateway_v6 {
if current_gateway_v6 != iface {
bail!("Default IPv6 gateway already exists on interface '{}'", current_gateway_v6);
}
}
Ok(())
}
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
},
},
returns: {
description: "List network devices (with config digest).",
type: Array,
items: {
type: Interface,
},
},
access: {
permission: &Permission::Privilege(&["system", "network", "interfaces"], PRIV_SYS_AUDIT, false),
},
)]
/// List network devices
pub fn list_network_devices(
_param: Value,
_info: &ApiMethod,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let (config, digest) = network::config()?;
let digest = proxmox::tools::digest_to_hex(&digest);
let mut list = Vec::new();
for (iface, interface) in config.interfaces.iter() {
if iface == "lo" { continue; } // do not list lo
let mut item: Value = to_value(interface)?;
item["digest"] = digest.clone().into();
item["iface"] = iface.to_string().into();
list.push(item);
}
let diff = network::changes()?;
if !diff.is_empty() {
rpcenv["changes"] = diff.into();
}
Ok(list.into())
}
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
iface: {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
},
},
},
returns: {
description: "The network interface configuration (with config digest).",
type: Interface,
},
access: {
permission: &Permission::Privilege(&["system", "network", "interfaces", "{name}"], PRIV_SYS_AUDIT, false),
},
)]
/// Read a network interface configuration.
pub fn read_interface(iface: String) -> Result<Value, Error> {
let (config, digest) = network::config()?;
let interface = config.lookup(&iface)?;
let mut data: Value = to_value(interface)?;
data["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(data)
}
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
iface: {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
},
"type": {
description: "Interface type.",
type: NetworkInterfaceType,
optional: true,
},
autostart: {
description: "Autostart interface.",
type: bool,
optional: true,
},
method: {
type: NetworkConfigMethod,
optional: true,
},
method6: {
type: NetworkConfigMethod,
optional: true,
},
comments: {
description: "Comments (inet, may span multiple lines)",
type: String,
optional: true,
},
comments6: {
description: "Comments (inet5, may span multiple lines)",
type: String,
optional: true,
},
cidr: {
schema: CIDR_V4_SCHEMA,
optional: true,
},
cidr6: {
schema: CIDR_V6_SCHEMA,
optional: true,
},
gateway: {
schema: IP_V4_SCHEMA,
optional: true,
},
gateway6: {
schema: IP_V6_SCHEMA,
optional: true,
},
mtu: {
description: "Maximum Transmission Unit.",
optional: true,
minimum: 46,
maximum: 65535,
default: 1500,
},
bridge_ports: {
schema: NETWORK_INTERFACE_LIST_SCHEMA,
optional: true,
},
bridge_vlan_aware: {
description: "Enable bridge vlan support.",
type: bool,
optional: true,
},
bond_mode: {
type: LinuxBondMode,
optional: true,
},
slaves: {
schema: NETWORK_INTERFACE_LIST_SCHEMA,
optional: true,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "network", "interfaces", "{iface}"], PRIV_SYS_MODIFY, false),
},
)]
/// Create network interface configuration.
pub fn create_interface(
iface: String,
autostart: Option<bool>,
method: Option<NetworkConfigMethod>,
method6: Option<NetworkConfigMethod>,
comments: Option<String>,
comments6: Option<String>,
cidr: Option<String>,
gateway: Option<String>,
cidr6: Option<String>,
gateway6: Option<String>,
mtu: Option<u64>,
bridge_ports: Option<String>,
bridge_vlan_aware: Option<bool>,
bond_mode: Option<LinuxBondMode>,
slaves: Option<String>,
param: Value,
) -> Result<(), Error> {
let interface_type = crate::tools::required_string_param(&param, "type")?;
let interface_type: NetworkInterfaceType = serde_json::from_value(interface_type.into())?;
let _lock = open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, _digest) = network::config()?;
if config.interfaces.contains_key(&iface) {
bail!("interface '{}' already exists", iface);
}
let mut interface = Interface::new(iface.clone());
interface.interface_type = interface_type;
if let Some(autostart) = autostart { interface.autostart = autostart; }
if method.is_some() { interface.method = method; }
if method6.is_some() { interface.method6 = method6; }
if mtu.is_some() { interface.mtu = mtu; }
if comments.is_some() { interface.comments = comments; }
if comments6.is_some() { interface.comments6 = comments6; }
if let Some(cidr) = cidr {
let (_, _, is_v6) = network::parse_cidr(&cidr)?;
if is_v6 { bail!("invalid address type (expected IPv4, got IPv6)"); }
interface.cidr = Some(cidr);
}
if let Some(cidr6) = cidr6 {
let (_, _, is_v6) = network::parse_cidr(&cidr6)?;
if !is_v6 { bail!("invalid address type (expected IPv6, got IPv4)"); }
interface.cidr6 = Some(cidr6);
}
if let Some(gateway) = gateway {
let is_v6 = gateway.contains(':');
if is_v6 { bail!("invalid address type (expected IPv4, got IPv6)"); }
check_duplicate_gateway_v4(&config, &iface)?;
interface.gateway = Some(gateway);
}
if let Some(gateway6) = gateway6 {
let is_v6 = gateway6.contains(':');
if !is_v6 { bail!("invalid address type (expected IPv6, got IPv4)"); }
check_duplicate_gateway_v6(&config, &iface)?;
interface.gateway6 = Some(gateway6);
}
match interface_type {
NetworkInterfaceType::Bridge => {
if let Some(ports) = bridge_ports {
let ports = split_interface_list(&ports)?;
interface.set_bridge_ports(ports)?;
}
if bridge_vlan_aware.is_some() { interface.bridge_vlan_aware = bridge_vlan_aware; }
}
NetworkInterfaceType::Bond => {
if bond_mode.is_some() { interface.bond_mode = bond_mode; }
if let Some(slaves) = slaves {
let slaves = split_interface_list(&slaves)?;
interface.set_bond_slaves(slaves)?;
}
}
_ => bail!("creating network interface type '{:?}' is not supported", interface_type),
}
if interface.cidr.is_some() || interface.gateway.is_some() {
interface.method = Some(NetworkConfigMethod::Static);
} else if interface.method.is_none() {
interface.method = Some(NetworkConfigMethod::Manual);
}
if interface.cidr6.is_some() || interface.gateway6.is_some() {
interface.method6 = Some(NetworkConfigMethod::Static);
} else if interface.method6.is_none() {
interface.method6 = Some(NetworkConfigMethod::Manual);
}
config.interfaces.insert(iface, interface);
network::save_config(&config)?;
Ok(())
}
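The method-defaulting rule at the end of create_interface is easy to miss; a minimal standalone sketch of the same logic (helper and enum are hypothetical, the IPv6 pair behaves identically):
#[derive(Clone, Copy, Debug, PartialEq)]
enum Method { Static, Manual, Dhcp }
// An address or gateway forces 'static'; otherwise a missing method falls
// back to 'manual', while an explicitly chosen one is kept.
fn effective_method(has_addr_or_gw: bool, method: Option<Method>) -> Method {
    if has_addr_or_gw {
        Method::Static
    } else {
        method.unwrap_or(Method::Manual)
    }
}
fn main() {
    assert_eq!(effective_method(true, Some(Method::Dhcp)), Method::Static);
    assert_eq!(effective_method(false, None), Method::Manual);
    assert_eq!(effective_method(false, Some(Method::Dhcp)), Method::Dhcp);
}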
#[api()]
#[derive(Serialize, Deserialize)]
#[allow(non_camel_case_types)]
/// Deletable property name
pub enum DeletableProperty {
/// Delete the IPv4 address property.
cidr,
/// Delete the IPv6 address property.
cidr6,
/// Delete the IPv4 gateway property.
gateway,
/// Delete the IPv6 gateway property.
gateway6,
/// Delete the whole IPv4 configuration entry.
method,
/// Delete the whole IPv6 configuration entry.
method6,
/// Delete IPv4 comments
comments,
/// Delete IPv6 comments
comments6,
/// Delete mtu.
mtu,
/// Delete autostart flag
autostart,
/// Delete bridge ports (set to 'none')
bridge_ports,
/// Delete bridge-vlan-aware flag
bridge_vlan_aware,
/// Delete bond-slaves (set to 'none')
slaves,
}
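A hypothetical request body for the update call that follows; the enum variant names double as the strings accepted in the delete array:
use serde_json::json;
fn main() {
    // Clear the IPv4 address and gateway of a (hypothetical) interface,
    // leaving all other properties untouched.
    let params = json!({
        "iface": "ens18",
        "delete": ["cidr", "gateway"],
    });
    println!("{}", params);
}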
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
iface: {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
},
"type": {
description: "Interface type. If specified, need to match the current type.",
type: NetworkInterfaceType,
optional: true,
},
autostart: {
description: "Autostart interface.",
type: bool,
optional: true,
},
method: {
type: NetworkConfigMethod,
optional: true,
},
method6: {
type: NetworkConfigMethod,
optional: true,
},
comments: {
description: "Comments (inet, may span multiple lines)",
type: String,
optional: true,
},
comments6: {
description: "Comments (inet5, may span multiple lines)",
type: String,
optional: true,
},
cidr: {
schema: CIDR_V4_SCHEMA,
optional: true,
},
cidr6: {
schema: CIDR_V6_SCHEMA,
optional: true,
},
gateway: {
schema: IP_V4_SCHEMA,
optional: true,
},
gateway6: {
schema: IP_V6_SCHEMA,
optional: true,
},
mtu: {
description: "Maximum Transmission Unit.",
optional: true,
minimum: 46,
maximum: 65535,
default: 1500,
},
bridge_ports: {
schema: NETWORK_INTERFACE_LIST_SCHEMA,
optional: true,
},
bridge_vlan_aware: {
description: "Enable bridge vlan support.",
type: bool,
optional: true,
},
bond_mode: {
type: LinuxBondMode,
optional: true,
},
slaves: {
schema: NETWORK_INTERFACE_LIST_SCHEMA,
optional: true,
},
delete: {
description: "List of properties to delete.",
type: Array,
optional: true,
items: {
type: DeletableProperty,
}
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "network", "interfaces", "{iface}"], PRIV_SYS_MODIFY, false),
},
)]
/// Update network interface config.
pub fn update_interface(
iface: String,
autostart: Option<bool>,
method: Option<NetworkConfigMethod>,
method6: Option<NetworkConfigMethod>,
comments: Option<String>,
comments6: Option<String>,
cidr: Option<String>,
gateway: Option<String>,
cidr6: Option<String>,
gateway6: Option<String>,
mtu: Option<u64>,
bridge_ports: Option<String>,
bridge_vlan_aware: Option<bool>,
bond_mode: Option<LinuxBondMode>,
slaves: Option<String>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
param: Value,
) -> Result<(), Error> {
let _lock = open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, expected_digest) = network::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
if gateway.is_some() { check_duplicate_gateway_v4(&config, &iface)?; }
if gateway6.is_some() { check_duplicate_gateway_v6(&config, &iface)?; }
let interface = config.lookup_mut(&iface)?;
if let Some(interface_type) = param.get("type") {
let interface_type: NetworkInterfaceType = serde_json::from_value(interface_type.clone())?;
if interface_type != interface.interface_type {
bail!("got unexpected interface type ({:?} != {:?})", interface_type, interface.interface_type);
}
}
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::cidr => { interface.cidr = None; },
DeletableProperty::cidr6 => { interface.cidr6 = None; },
DeletableProperty::gateway => { interface.gateway = None; },
DeletableProperty::gateway6 => { interface.gateway6 = None; },
DeletableProperty::method => { interface.method = None; },
DeletableProperty::method6 => { interface.method6 = None; },
DeletableProperty::comments => { interface.comments = None; },
DeletableProperty::comments6 => { interface.comments6 = None; },
DeletableProperty::mtu => { interface.mtu = None; },
DeletableProperty::autostart => { interface.autostart = false; },
DeletableProperty::bridge_ports => { interface.set_bridge_ports(Vec::new())?; }
DeletableProperty::bridge_vlan_aware => { interface.bridge_vlan_aware = None; }
DeletableProperty::slaves => { interface.set_bond_slaves(Vec::new())?; }
}
}
}
if let Some(autostart) = autostart { interface.autostart = autostart; }
if method.is_some() { interface.method = method; }
if method6.is_some() { interface.method6 = method6; }
if mtu.is_some() { interface.mtu = mtu; }
if let Some(ports) = bridge_ports {
let ports = split_interface_list(&ports)?;
interface.set_bridge_ports(ports)?;
}
if bridge_vlan_aware.is_some() { interface.bridge_vlan_aware = bridge_vlan_aware; }
if let Some(slaves) = slaves {
let slaves = split_interface_list(&slaves)?;
interface.set_bond_slaves(slaves)?;
}
if bond_mode.is_some() { interface.bond_mode = bond_mode; }
if let Some(cidr) = cidr {
let (_, _, is_v6) = network::parse_cidr(&cidr)?;
if is_v6 { bail!("invalid address type (expected IPv4, got IPv6)"); }
interface.cidr = Some(cidr);
}
if let Some(cidr6) = cidr6 {
let (_, _, is_v6) = network::parse_cidr(&cidr6)?;
if !is_v6 { bail!("invalid address type (expected IPv6, got IPv4)"); }
interface.cidr6 = Some(cidr6);
}
if let Some(gateway) = gateway {
let is_v6 = gateway.contains(':');
if is_v6 { bail!("invalid address type (expected IPv4, got IPv6)"); }
interface.gateway = Some(gateway);
}
if let Some(gateway6) = gateway6 {
let is_v6 = gateway6.contains(':');
if !is_v6 { bail!("invalid address type (expected IPv6, got IPv4)"); }
interface.gateway6 = Some(gateway6);
}
if comments.is_some() { interface.comments = comments; }
if comments6.is_some() { interface.comments6 = comments6; }
if interface.cidr.is_some() || interface.gateway.is_some() {
interface.method = Some(NetworkConfigMethod::Static);
} else {
interface.method = Some(NetworkConfigMethod::Manual);
}
if interface.cidr6.is_some() || interface.gateway6.is_some() {
interface.method6 = Some(NetworkConfigMethod::Static);
} else {
interface.method6 = Some(NetworkConfigMethod::Manual);
}
network::save_config(&config)?;
Ok(())
}
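The optional digest implements optimistic locking: the read handlers return the current config digest, the client echoes it back, and the update bails out if the file changed in between. A reduced sketch of the comparison, assuming the 32-byte SHA-256 digests used elsewhere in the code:
// Hypothetical standalone version of the digest check performed above.
fn detect_modified(expected: &[u8; 32], on_disk: &[u8; 32]) -> Result<(), String> {
    if expected != on_disk {
        return Err("detected modified configuration file".into());
    }
    Ok(())
}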
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
iface: {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "network", "interfaces", "{iface}"], PRIV_SYS_MODIFY, false),
},
)]
/// Remove network interface configuration.
pub fn delete_interface(iface: String, digest: Option<String>) -> Result<(), Error> {
let _lock = open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, expected_digest) = network::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let _interface = config.lookup(&iface)?; // check if interface exists
config.interfaces.remove(&iface);
network::save_config(&config)?;
Ok(())
}
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "network", "interfaces"], PRIV_SYS_MODIFY, false),
},
)]
/// Reload network configuration (requires ifupdown2).
pub async fn reload_network_config(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
network::assert_ifupdown2_installed()?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let upid_str = WorkerTask::spawn("srvreload", Some(String::from("networking")), userid, true, |_worker| async {
let _ = std::fs::rename(network::NETWORK_INTERFACES_NEW_FILENAME, network::NETWORK_INTERFACES_FILENAME);
network::network_reload()?;
Ok(())
})?;
Ok(upid_str)
}
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "network", "interfaces"], PRIV_SYS_MODIFY, false),
},
)]
/// Revert network configuration (rm /etc/network/interfaces.new).
pub fn revert_network_config() -> Result<(), Error> {
let _ = std::fs::remove_file(network::NETWORK_INTERFACES_NEW_FILENAME);
Ok(())
}
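Reload and revert together form a simple two-phase scheme around a staging file; a sketch of the idea, using the same paths:
use std::fs;
const STAGED: &str = "/etc/network/interfaces.new";
const ACTIVE: &str = "/etc/network/interfaces";
// "reload" promotes the staged file (rename is atomic within one filesystem)
// before re-applying the configuration; "revert" simply discards it.
fn promote_staged() -> std::io::Result<()> {
    fs::rename(STAGED, ACTIVE)
}
fn revert_staged() {
    let _ = fs::remove_file(STAGED); // a missing file is fine, as in the API above
}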
const ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_READ_INTERFACE)
.put(&API_METHOD_UPDATE_INTERFACE)
.delete(&API_METHOD_DELETE_INTERFACE);
pub const ROUTER: Router = Router::new()
.get(
&ApiMethod::new(
&ApiHandler::Sync(&get_network_config),
&ObjectSchema::new(
"Read network configuration.",
&[ ("node", false, &NODE_SCHEMA) ],
)
)
);
.get(&API_METHOD_LIST_NETWORK_DEVICES)
.put(&API_METHOD_RELOAD_NETWORK_CONFIG)
.post(&API_METHOD_CREATE_INTERFACE)
.delete(&API_METHOD_REVERT_NETWORK_CONFIG)
.match_all("iface", &ITEM_ROUTER);

87
src/api2/node/rrd.rs Normal file

@ -0,0 +1,87 @@
use anyhow::Error;
use serde_json::{Value, json};
use proxmox::api::{api, Router};
use crate::api2::types::*;
use crate::tools::epoch_now_f64;
use crate::rrd::{extract_cached_data, RRD_DATA_ENTRIES};
pub fn create_value_from_rrd(
basedir: &str,
list: &[&str],
timeframe: RRDTimeFrameResolution,
cf: RRDMode,
) -> Result<Value, Error> {
let mut result = Vec::new();
let now = epoch_now_f64()?;
for name in list {
let (start, reso, list) = match extract_cached_data(basedir, name, now, timeframe, cf) {
Some(result) => result,
None => continue,
};
let mut t = start;
for index in 0..RRD_DATA_ENTRIES {
if result.len() <= index {
if let Some(value) = list[index] {
result.push(json!({ "time": t, *name: value }));
} else {
result.push(json!({ "time": t }));
}
} else {
if let Some(value) = list[index] {
result[index][name] = value.into();
}
}
t += reso;
}
}
Ok(result.into())
}
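The merged result is one JSON object per RRD slot, keyed by time; a hypothetical two-slot sample of what get_node_stats below would hand to the client:
use serde_json::json;
fn main() {
    let sample = json!([
        { "time": 1597140000, "cpu": 0.04, "memused": 2147483648u64 },
        { "time": 1597140060, "cpu": 0.07 }  // slots without data omit the key
    ]);
    println!("{}", sample);
}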
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
timeframe: {
type: RRDTimeFrameResolution,
},
cf: {
type: RRDMode,
},
},
},
)]
/// Read node stats
fn get_node_stats(
timeframe: RRDTimeFrameResolution,
cf: RRDMode,
_param: Value,
) -> Result<Value, Error> {
create_value_from_rrd(
"host",
&[
"cpu", "iowait",
"memtotal", "memused",
"swaptotal", "swapused",
"netin", "netout",
"loadavg",
"total", "used",
"read_ios", "read_bytes",
"write_ios", "write_bytes",
"io_ticks",
],
timeframe,
cf,
)
}
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_GET_NODE_STATS);


@ -1,15 +1,15 @@
use std::process::{Command, Stdio};
use failure::*;
use anyhow::{bail, Error};
use serde_json::{json, Value};
use proxmox::{sortable, identity, list_subdirs_api_method};
use proxmox::api::{ApiHandler, ApiMethod, Router, RpcEnvironment};
use proxmox::api::{api, Router, Permission};
use proxmox::api::router::SubdirMap;
use proxmox::api::schema::*;
use crate::api2::types::*;
use crate::tools;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
static SERVICE_NAME_LIST: [&str; 7] = [
"proxmox-backup",
@ -38,7 +38,7 @@ fn get_full_service_state(service: &str) -> Result<Value, Error> {
let real_service_name = real_service_name(service);
let mut child = Command::new("/bin/systemctl")
let mut child = Command::new("systemctl")
.args(&["show", real_service_name])
.stdout(Stdio::piped())
.spawn()?;
@ -91,11 +91,45 @@ fn json_service_state(service: &str, status: Value) -> Value {
Value::Null
}
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
},
},
returns: {
description: "Returns a list of systemd services.",
type: Array,
items: {
description: "Service details.",
properties: {
service: {
schema: SERVICE_ID_SCHEMA,
},
name: {
type: String,
description: "systemd service name.",
},
desc: {
type: String,
description: "systemd service description.",
},
state: {
type: String,
description: "systemd service 'SubState'.",
},
},
},
},
access: {
permission: &Permission::Privilege(&["system", "services"], PRIV_SYS_AUDIT, false),
},
)]
/// Service list.
fn list_services(
_param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let mut list = vec![];
@ -115,39 +149,55 @@ fn list_services(
Ok(Value::from(list))
}
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
service: {
schema: SERVICE_ID_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "services", "{service}"], PRIV_SYS_AUDIT, false),
},
)]
/// Read service properties.
fn get_service_state(
param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
service: String,
_param: Value,
) -> Result<Value, Error> {
let service = tools::required_string_param(&param, "service")?;
let service = service.as_str();
if !SERVICE_NAME_LIST.contains(&service) {
bail!("unknown service name '{}'", service);
}
let status = get_full_service_state(service)?;
let status = get_full_service_state(&service)?;
Ok(json_service_state(service, status))
Ok(json_service_state(&service, status))
}
fn run_service_command(service: &str, cmd: &str) -> Result<Value, Error> {
// fixme: run background worker (fork_worker) ???
match cmd {
"start"|"stop"|"restart"|"reload" => {},
let cmd = match cmd {
"start"|"stop"|"restart"=> cmd,
"reload" => "try-reload-or-restart", // some services do not implement reload
_ => bail!("unknown service command '{}'", cmd),
}
};
if service == "proxmox-backup" && cmd != "restart" {
bail!("invalid service cmd '{} {}'", service, cmd);
if service == "proxmox-backup" && cmd == "stop" {
bail!("invalid service cmd '{} {}' cannot stop essential service!", service, cmd);
}
let real_service_name = real_service_name(service);
let status = Command::new("/bin/systemctl")
let status = Command::new("systemctl")
.args(&[cmd, real_service_name])
.status()?;
@ -158,61 +208,117 @@ fn run_service_command(service: &str, cmd: &str) -> Result<Value, Error> {
Ok(Value::Null)
}
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
service: {
schema: SERVICE_ID_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "services", "{service}"], PRIV_SYS_MODIFY, false),
},
)]
/// Start service.
fn start_service(
param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
service: String,
_param: Value,
) -> Result<Value, Error> {
let service = tools::required_string_param(&param, "service")?;
log::info!("starting service {}", service);
run_service_command(service, "start")
run_service_command(&service, "start")
}
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
service: {
schema: SERVICE_ID_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "services", "{service}"], PRIV_SYS_MODIFY, false),
},
)]
/// Stop service.
fn stop_service(
param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
service: String,
_param: Value,
) -> Result<Value, Error> {
let service = tools::required_string_param(&param, "service")?;
log::info!("stopping service {}", service);
log::info!("stoping service {}", service);
run_service_command(service, "stop")
run_service_command(&service, "stop")
}
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
service: {
schema: SERVICE_ID_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "services", "{service}"], PRIV_SYS_MODIFY, false),
},
)]
/// Restart service.
fn restart_service(
param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
service: String,
_param: Value,
) -> Result<Value, Error> {
let service = tools::required_string_param(&param, "service")?;
log::info!("re-starting service {}", service);
if service == "proxmox-backup-proxy" {
if &service == "proxmox-backup-proxy" {
// special case, avoid aborting running tasks
run_service_command(service, "reload")
run_service_command(&service, "reload")
} else {
run_service_command(service, "restart")
run_service_command(&service, "restart")
}
}
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
service: {
schema: SERVICE_ID_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "services", "{service}"], PRIV_SYS_MODIFY, false),
},
)]
/// Reload service.
fn reload_service(
param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
service: String,
_param: Value,
) -> Result<Value, Error> {
let service = tools::required_string_param(&param, "service")?;
log::info!("reloading service {}", service);
run_service_command(service, "reload")
run_service_command(&service, "reload")
}
@ -221,111 +327,33 @@ const SERVICE_ID_SCHEMA: Schema = StringSchema::new("Service ID.")
.schema();
#[sortable]
const SERVICE_SUBDIRS: SubdirMap = &[
const SERVICE_SUBDIRS: SubdirMap = &sorted!([
(
"reload", &Router::new()
.post(
&ApiMethod::new(
&ApiHandler::Sync(&reload_service),
&ObjectSchema::new(
"Reload service.",
&sorted!([
("node", false, &NODE_SCHEMA),
("service", false, &SERVICE_ID_SCHEMA),
]),
)
).protected(true)
)
.post(&API_METHOD_RELOAD_SERVICE)
),
(
"restart", &Router::new()
.post(
&ApiMethod::new(
&ApiHandler::Sync(&restart_service),
&ObjectSchema::new(
"Restart service.",
&sorted!([
("node", false, &NODE_SCHEMA),
("service", false, &SERVICE_ID_SCHEMA),
]),
)
).protected(true)
)
.post(&API_METHOD_RESTART_SERVICE)
),
(
"start", &Router::new()
.post(
&ApiMethod::new(
&ApiHandler::Sync(&start_service),
&ObjectSchema::new(
"Start service.",
&sorted!([
("node", false, &NODE_SCHEMA),
("service", false, &SERVICE_ID_SCHEMA),
]),
)
).protected(true)
)
.post(&API_METHOD_START_SERVICE)
),
(
"state", &Router::new()
.get(
&ApiMethod::new(
&ApiHandler::Sync(&get_service_state),
&ObjectSchema::new(
"Read service properties.",
&sorted!([
("node", false, &NODE_SCHEMA),
("service", false, &SERVICE_ID_SCHEMA),
]),
)
)
)
.get(&API_METHOD_GET_SERVICE_STATE)
),
(
"stop", &Router::new()
.post(
&ApiMethod::new(
&ApiHandler::Sync(&stop_service),
&ObjectSchema::new(
"Stop service.",
&sorted!([
("node", false, &NODE_SCHEMA),
("service", false, &SERVICE_ID_SCHEMA),
]),
)
).protected(true)
)
.post(&API_METHOD_STOP_SERVICE)
),
];
]);
const SERVICE_ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SERVICE_SUBDIRS))
.subdirs(SERVICE_SUBDIRS);
#[sortable]
pub const ROUTER: Router = Router::new()
.get(
&ApiMethod::new(
&ApiHandler::Sync(&list_services),
&ObjectSchema::new(
"Service list.",
&sorted!([ ("node", false, &NODE_SCHEMA) ]),
)
).returns(
&ArraySchema::new(
"Returns a list of systemd services.",
&ObjectSchema::new(
"Service details.",
&sorted!([
("service", false, &SERVICE_ID_SCHEMA),
("name", false, &StringSchema::new("systemd service name.").schema()),
("desc", false, &StringSchema::new("systemd service description.").schema()),
("state", false, &StringSchema::new("systemd service 'SubState'.").schema()),
]),
).schema()
).schema()
)
)
.get(&API_METHOD_LIST_SERVICES)
.match_all("service", &SERVICE_ROUTER);


@ -1,12 +1,16 @@
use failure::*;
use std::process::Command;
use std::path::Path;
use anyhow::{Error, format_err, bail};
use serde_json::{json, Value};
use proxmox::sys::linux::procfs;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, SubdirMap};
use proxmox::list_subdirs_api_method;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
use crate::api2::types::*;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_POWER_MANAGEMENT};
use crate::tools::cert::CertInfo;
#[api(
input: {
@ -43,11 +47,24 @@ use crate::api2::types::*;
description: "Total CPU usage since last query.",
optional: true,
},
}
}
info: {
type: Object,
description: "contains node information",
properties: {
fingerprint: {
description: "The SSL Fingerprint",
type: String,
},
},
},
},
},
access: {
permission: &Permission::Privilege(&["system", "status"], PRIV_SYS_AUDIT, false),
},
)]
/// Read node memory, CPU and (root) disk usage
fn get_usage(
fn get_status(
_param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
@ -55,6 +72,11 @@ fn get_usage(
let meminfo: procfs::ProcFsMemInfo = procfs::read_meminfo()?;
let kstat: procfs::ProcFsStat = procfs::read_proc_stat()?;
let disk_usage = crate::tools::disks::disk_usage(Path::new("/"))?;
// get fingerprint
let cert = CertInfo::new()?;
let fp = cert.fingerprint()?;
Ok(json!({
"memory": {
@ -63,15 +85,60 @@ fn get_usage(
"free": meminfo.memfree,
},
"cpu": kstat.cpu,
"root": {
"total": disk_usage.total,
"used": disk_usage.used,
"free": disk_usage.avail,
},
"info": {
"fingerprint": fp,
},
}))
}
pub const USAGE_ROUTER: Router = Router::new()
.get(&API_METHOD_GET_USAGE);
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
command: {
type: NodePowerCommand,
},
}
},
access: {
permission: &Permission::Privilege(&["system", "status"], PRIV_SYS_POWER_MANAGEMENT, false),
},
)]
/// Reboot or shutdown the node.
fn reboot_or_shutdown(command: NodePowerCommand) -> Result<(), Error> {
let systemctl_command = match command {
NodePowerCommand::Reboot => "reboot",
NodePowerCommand::Shutdown => "poweroff",
};
let output = Command::new("systemctl")
.arg(systemctl_command)
.output()
.map_err(|err| format_err!("failed to execute systemctl - {}", err))?;
if !output.status.success() {
match output.status.code() {
Some(code) => {
let msg = String::from_utf8(output.stderr)
.map(|m| if m.is_empty() { String::from("no error message") } else { m })
.unwrap_or_else(|_| String::from("non utf8 error message (suppressed)"));
bail!("diff failed with status code: {} - {}", code, msg);
}
None => bail!("systemctl terminated by signal"),
}
}
Ok(())
}
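The stderr handling above is worth isolating: empty output and invalid UTF-8 both degrade to a readable placeholder instead of failing the whole call. As a standalone helper (name hypothetical):
fn stderr_message(stderr: Vec<u8>) -> String {
    String::from_utf8(stderr)
        .map(|m| if m.is_empty() { String::from("no error message") } else { m })
        .unwrap_or_else(|_| String::from("non utf8 error message (suppressed)"))
}
fn main() {
    assert_eq!(stderr_message(Vec::new()), "no error message");
    assert_eq!(stderr_message(vec![0xff, 0xfe]), "non utf8 error message (suppressed)");
}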
pub const SUBDIRS: SubdirMap = &[
("usage", &USAGE_ROUTER),
];
pub const ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))
.subdirs(SUBDIRS);
.get(&API_METHOD_GET_STATUS)
.post(&API_METHOD_REBOOT_OR_SHUTDOWN);


@ -0,0 +1,56 @@
use anyhow::{Error};
use serde_json::{json, Value};
use proxmox::api::{api, Router, Permission};
use crate::tools;
use crate::config::acl::PRIV_SYS_AUDIT;
use crate::api2::types::NODE_SCHEMA;
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
},
},
returns: {
description: "Subscription status.",
properties: {
status: {
type: String,
description: "'NotFound', 'active' or 'inactive'."
},
message: {
type: String,
description: "Human readable problem description.",
},
serverid: {
type: String,
description: "The unique server ID.",
},
url: {
type: String,
description: "URL to Web Shop.",
},
},
},
access: {
permission: &Permission::Privilege(&[], PRIV_SYS_AUDIT, false),
},
)]
/// Read subscription info.
fn get_subscription(_param: Value) -> Result<Value, Error> {
let url = "https://www.proxmox.com/en/proxmox-backup-server/pricing";
Ok(json!({
"status": "NotFound",
"message": "There is no subscription key",
"serverid": tools::get_hardware_address()?,
"url": url,
}))
}
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_GET_SUBSCRIPTION);


@ -1,11 +1,12 @@
use std::process::{Command, Stdio};
use failure::*;
use anyhow::{Error};
use serde_json::{json, Value};
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment};
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
use crate::api2::types::*;
use crate::config::acl::PRIV_SYS_AUDIT;
fn dump_journal(
start: Option<u64>,
@ -26,7 +27,7 @@ fn dump_journal(
let start = start.unwrap_or(0);
let mut count: u64 = 0;
let mut child = Command::new("/bin/journalctl")
let mut child = Command::new("journalctl")
.args(&args)
.stdout(Stdio::piped())
.spawn()?;
@ -122,12 +123,15 @@ fn dump_journal(
}
},
},
access: {
permission: &Permission::Privilege(&["system", "log"], PRIV_SYS_AUDIT, false),
},
)]
/// Read syslog entries.
fn get_syslog(
param: Value,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let (count, lines) = dump_journal(
@ -137,7 +141,7 @@ fn get_syslog(
param["until"].as_str(),
param["service"].as_str())?;
rpcenv.set_result_attrib("total", Value::from(count));
rpcenv["total"] = Value::from(count);
Ok(json!(lines))
}


@ -1,26 +1,96 @@
use std::fs::File;
use std::io::{BufRead, BufReader};
use failure::*;
use anyhow::{Error};
use serde_json::{json, Value};
use proxmox::api::{api, ApiHandler, ApiMethod, Router, RpcEnvironment};
use proxmox::api::{api, Router, RpcEnvironment, Permission};
use proxmox::api::router::SubdirMap;
use proxmox::api::schema::*;
use proxmox::{identity, list_subdirs_api_method, sortable};
use crate::tools;
use crate::api2::types::*;
use crate::server::{self, UPID};
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo;
fn get_task_status(
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
upid: {
schema: UPID_SCHEMA,
},
},
},
returns: {
description: "Task status nformation.",
properties: {
node: {
schema: NODE_SCHEMA,
},
upid: {
schema: UPID_SCHEMA,
},
pid: {
type: i64,
description: "The Unix PID.",
},
pstart: {
type: u64,
description: "The Unix process start time from `/proc/pid/stat`",
},
starttime: {
type: i64,
description: "The task start time (Epoch)",
},
"type": {
type: String,
description: "Worker type (arbitrary ASCII string)",
},
id: {
type: String,
optional: true,
description: "Worker ID (arbitrary ASCII string)",
},
user: {
type: String,
description: "The user who started the task.",
},
status: {
type: String,
description: "'running' or 'stopped'",
},
exitstatus: {
type: String,
optional: true,
description: "'OK', 'Error: <msg>', or 'unkwown'.",
},
},
},
access: {
description: "Users can access there own tasks, or need Sys.Audit on /system/tasks.",
permission: &Permission::Anybody,
},
)]
/// Get task status.
async fn get_task_status(
param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let upid = extract_upid(&param)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
if userid != upid.userid {
let user_info = CachedUserInfo::new()?;
user_info.check_privs(&userid, &["system", "tasks"], PRIV_SYS_AUDIT, false)?;
}
let mut result = json!({
"upid": param["upid"],
"node": upid.node,
@ -29,10 +99,10 @@ fn get_task_status(
"starttime": upid.starttime,
"type": upid.worker_type,
"id": upid.worker_id,
"user": upid.username,
"user": upid.userid,
});
if crate::server::worker_is_active(&upid) {
if crate::server::worker_is_active(&upid).await? {
result["status"] = Value::from("running");
} else {
let exitstatus = crate::server::upid_read_status(&upid).unwrap_or(String::from("unknown"));
@ -50,14 +120,54 @@ fn extract_upid(param: &Value) -> Result<UPID, Error> {
upid_str.parse::<UPID>()
}
fn read_task_log(
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
upid: {
schema: UPID_SCHEMA,
},
"test-status": {
type: bool,
optional: true,
description: "Test task status, and set result attribute \"active\" accordingly.",
},
start: {
type: u64,
optional: true,
description: "Start at this line.",
default: 0,
},
limit: {
type: u64,
optional: true,
description: "Only list this amount of lines.",
default: 50,
},
},
},
access: {
description: "Users can access there own tasks, or need Sys.Audit on /system/tasks.",
permission: &Permission::Anybody,
},
)]
/// Read task log.
async fn read_task_log(
param: Value,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let upid = extract_upid(&param)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
if userid != upid.userid {
let user_info = CachedUserInfo::new()?;
user_info.check_privs(&userid, &["system", "tasks"], PRIV_SYS_AUDIT, false)?;
}
let test_status = param["test-status"].as_bool().unwrap_or(false);
let start = param["start"].as_u64().unwrap_or(0);
@ -89,28 +199,50 @@ fn read_task_log(
}
}
rpcenv.set_result_attrib("total", Value::from(count));
rpcenv["total"] = Value::from(count);
if test_status {
let active = crate::server::worker_is_active(&upid);
rpcenv.set_result_attrib("active", Value::from(active));
let active = crate::server::worker_is_active(&upid).await?;
rpcenv["active"] = Value::from(active);
}
Ok(json!(lines))
}
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
upid: {
schema: UPID_SCHEMA,
},
},
},
access: {
description: "Users can stop there own tasks, or need Sys.Modify on /system/tasks.",
permission: &Permission::Anybody,
},
)]
/// Try to stop a task.
fn stop_task(
param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let upid = extract_upid(&param)?;
if crate::server::worker_is_active(&upid) {
server::abort_worker_async(upid);
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
if userid != upid.userid {
let user_info = CachedUserInfo::new()?;
user_info.check_privs(&userid, &["system", "tasks"], PRIV_SYS_MODIFY, false)?;
}
server::abort_worker_async(upid);
Ok(Value::Null)
}
@ -140,14 +272,16 @@ fn stop_task(
type: bool,
description: "Only list running tasks.",
optional: true,
default: false,
},
errors: {
type: bool,
description: "Only list erroneous tasks.",
optional:true,
default: false,
},
userfilter: {
optional:true,
optional: true,
type: String,
description: "Only list tasks from this user.",
},
@ -158,18 +292,26 @@ fn stop_task(
type: Array,
items: { type: TaskListItem },
},
access: {
description: "Users can only see there own tasks, unless the have Sys.Audit on /system/tasks.",
permission: &Permission::Anybody,
},
)]
/// List tasks.
pub fn list_tasks(
start: u64,
limit: u64,
errors: bool,
running: bool,
param: Value,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<TaskListItem>, Error> {
let start = param["start"].as_u64().unwrap_or(0);
let limit = param["limit"].as_u64().unwrap_or(50);
let errors = param["errors"].as_bool().unwrap_or(false);
let running = param["running"].as_bool().unwrap_or(false);
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["system", "tasks"]);
let list_all = (user_privs & PRIV_SYS_AUDIT) != 0;
let store = param["store"].as_str();
@ -181,22 +323,12 @@ pub fn list_tasks(
let mut count = 0;
for info in list.iter() {
let mut entry = TaskListItem {
upid: info.upid_str.clone(),
node: "localhost".to_string(),
pid: info.upid.pid as i64,
pstart: info.upid.pstart,
starttime: info.upid.starttime,
worker_type: info.upid.worker_type.clone(),
worker_id: info.upid.worker_id.clone(),
user: info.upid.username.clone(),
endtime: None,
status: None,
};
for info in list {
if !list_all && info.upid.userid != userid { continue; }
if let Some(username) = userfilter {
if !info.upid.username.contains(username) { continue; }
if let Some(userid) = userfilter {
if !info.upid.userid.as_str().contains(userid) { continue; }
}
if let Some(store) = store {
@ -223,9 +355,6 @@ pub fn list_tasks(
if errors && state.1 == "OK" {
continue;
}
entry.endtime = Some(state.0);
entry.status = Some(state.1.clone());
}
if (count as u64) < start {
@ -235,82 +364,31 @@ pub fn list_tasks(
count += 1;
}
if (result.len() as u64) < limit { result.push(entry); };
if (result.len() as u64) < limit { result.push(info.into()); };
}
rpcenv.set_result_attrib("total", Value::from(count));
rpcenv["total"] = Value::from(count);
Ok(result)
}
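Paging in list_tasks follows the usual pattern: "total" counts every task that passes the filters, while at most limit entries starting at start are returned. Reduced to a sketch:
// Hypothetical helper mirroring the start/limit/total bookkeeping above.
fn paginate<T: Clone>(matches: &[T], start: usize, limit: usize) -> (Vec<T>, usize) {
    let page = matches.iter().skip(start).take(limit).cloned().collect();
    (page, matches.len()) // the count is exposed as the "total" attribute
}
fn main() {
    assert_eq!(paginate(&[1, 2, 3, 4, 5], 1, 2), (vec![2, 3], 5));
}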
#[sortable]
const UPID_API_SUBDIRS: SubdirMap = &[
const UPID_API_SUBDIRS: SubdirMap = &sorted!([
(
"log", &Router::new()
.get(
&ApiMethod::new(
&ApiHandler::Sync(&read_task_log),
&ObjectSchema::new(
"Read task log.",
&sorted!([
("node", false, &NODE_SCHEMA),
( "test-status",
true,
&BooleanSchema::new(
"Test task status, and set result attribute \"active\" accordingly."
).schema()
),
("upid", false, &UPID_SCHEMA),
("start", true, &IntegerSchema::new("Start at this line.")
.minimum(0)
.default(0)
.schema()
),
("limit", true, &IntegerSchema::new("Only list this amount of lines.")
.minimum(0)
.default(50)
.schema()
),
]),
)
)
)
.get(&API_METHOD_READ_TASK_LOG)
),
(
"status", &Router::new()
.get(
&ApiMethod::new(
&ApiHandler::Sync(&get_task_status),
&ObjectSchema::new(
"Get task status.",
&sorted!([
("node", false, &NODE_SCHEMA),
("upid", false, &UPID_SCHEMA),
]),
)
)
)
.get(&API_METHOD_GET_TASK_STATUS)
)
];
]);
#[sortable]
pub const UPID_API_ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(UPID_API_SUBDIRS))
.delete(
&ApiMethod::new(
&ApiHandler::Sync(&stop_task),
&ObjectSchema::new(
"Try to stop a task.",
&sorted!([
("node", false, &NODE_SCHEMA),
("upid", false, &UPID_SCHEMA),
]),
)
).protected(true)
)
.delete(&API_METHOD_STOP_TASK)
.subdirs(&UPID_API_SUBDIRS);
#[sortable]
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_TASKS)
.match_all("upid", &UPID_API_ROUTER);


@ -1,14 +1,11 @@
use std::mem::{self, MaybeUninit};
use chrono::prelude::*;
use failure::*;
use anyhow::{bail, format_err, Error};
use serde_json::{json, Value};
use proxmox::{sortable, identity};
use proxmox::api::{ApiHandler, ApiMethod, Router, RpcEnvironment};
use proxmox::api::schema::*;
use proxmox::api::{api, Router, Permission};
use proxmox::tools::fs::{file_read_firstline, replace_file, CreateOptions};
use crate::config::acl::PRIV_SYS_MODIFY;
use crate::api2::types::*;
fn read_etc_localtime() -> Result<String, Error> {
@ -18,34 +15,48 @@ fn read_etc_localtime() -> Result<String, Error> {
}
// otherwise guess from the /etc/localtime symlink
let mut buf = MaybeUninit::<[u8; 64]>::uninit();
let len = unsafe {
libc::readlink(
"/etc/localtime".as_ptr() as *const _,
buf.as_mut_ptr() as *mut _,
mem::size_of_val(&buf),
)
};
if len <= 0 {
bail!("failed to guess timezone");
}
let len = len as usize;
let buf = unsafe {
(*buf.as_mut_ptr())[len] = 0;
buf.assume_init()
};
let link = std::str::from_utf8(&buf[..len])?;
let link = std::fs::read_link("/etc/localtime").
map_err(|err| format_err!("failed to guess timezone - {}", err))?;
let link = link.to_string_lossy();
match link.rfind("/zoneinfo/") {
Some(pos) => Ok(link[(pos + 10)..].to_string()),
None => Ok(link.to_string()),
}
}
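The symlink parsing can be checked in isolation: everything after the last "/zoneinfo/" is taken as the timezone name, and targets outside a zoneinfo tree pass through unchanged. A sketch with the same string logic:
fn tz_from_link(link: &str) -> String {
    match link.rfind("/zoneinfo/") {
        Some(pos) => link[(pos + 10)..].to_string(),
        None => link.to_string(),
    }
}
fn main() {
    assert_eq!(tz_from_link("/usr/share/zoneinfo/Europe/Vienna"), "Europe/Vienna");
}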
fn get_time(
_param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
},
},
returns: {
description: "Returns server time and timezone.",
properties: {
timezone: {
schema: TIME_ZONE_SCHEMA,
},
time: {
type: i64,
description: "Seconds since 1970-01-01 00:00:00 UTC.",
minimum: 1_297_163_644,
},
localtime: {
type: i64,
description: "Seconds since 1970-01-01 00:00:00 UTC. (local time)",
minimum: 1_297_163_644,
},
}
},
access: {
permission: &Permission::Anybody,
},
)]
/// Read server time and time zone settings.
fn get_time(_param: Value) -> Result<Value, Error> {
let datetime = Local::now();
let offset = datetime.offset();
let time = datetime.timestamp();
@ -58,13 +69,28 @@ fn get_time(
}))
}
#[api(
protected: true,
reload_timezone: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
timezone: {
schema: TIME_ZONE_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "time"], PRIV_SYS_MODIFY, false),
},
)]
/// Set time zone
fn set_timezone(
param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
timezone: String,
_param: Value,
) -> Result<Value, Error> {
let timezone = crate::tools::required_string_param(&param, "timezone")?;
let path = std::path::PathBuf::from(format!("/usr/share/zoneinfo/{}", timezone));
if !path.exists() {
@ -81,45 +107,6 @@ fn set_timezone(
Ok(Value::Null)
}
#[sortable]
pub const ROUTER: Router = Router::new()
.get(
&ApiMethod::new(
&ApiHandler::Sync(&get_time),
&ObjectSchema::new(
"Read server time and time zone settings.",
&sorted!([ ("node", false, &NODE_SCHEMA) ]),
)
).returns(
&ObjectSchema::new(
"Returns server time and timezone.",
&sorted!([
("timezone", false, &StringSchema::new("Time zone").schema()),
("time", false, &IntegerSchema::new("Seconds since 1970-01-01 00:00:00 UTC.")
.minimum(1_297_163_644)
.schema()
),
("localtime", false, &IntegerSchema::new("Seconds since 1970-01-01 00:00:00 UTC. (local time)")
.minimum(1_297_163_644)
.schema()
),
]),
).schema()
)
)
.put(
&ApiMethod::new(
&ApiHandler::Sync(&set_timezone),
&ObjectSchema::new(
"Set time zone.",
&sorted!([
("node", false, &NODE_SCHEMA),
("timezone", false, &StringSchema::new(
"Time zone. The file '/usr/share/zoneinfo/zone.tab' contains the list of valid names.")
.schema()
),
]),
)
).protected(true).reload_timezone(true)
);
.get(&API_METHOD_GET_TIME)
.put(&API_METHOD_SET_TIMEZONE);


@ -1,370 +1,65 @@
//! Sync datastore from remote server
use std::sync::{Arc};
use failure::*;
use serde_json::json;
use std::convert::TryFrom;
use std::sync::Arc;
use std::collections::HashMap;
use std::io::{Seek, SeekFrom};
use chrono::{Utc, TimeZone};
use anyhow::{format_err, Error};
use proxmox::api::api;
use proxmox::api::{ApiMethod, Router, RpcEnvironment};
use proxmox::api::{ApiMethod, Router, RpcEnvironment, Permission};
use crate::server::{WorkerTask};
use crate::backup::*;
use crate::client::*;
use crate::config::remote;
use crate::backup::DataStore;
use crate::client::{HttpClient, HttpClientOptions, BackupRepository, pull::pull_store};
use crate::api2::types::*;
// fixme: implement filters
// fixme: delete vanished groups
// Todo: correctly lock backup groups
async fn pull_index_chunks<I: IndexFile>(
_worker: &WorkerTask,
chunk_reader: &mut RemoteChunkReader,
target: Arc<DataStore>,
index: I,
) -> Result<(), Error> {
use crate::config::{
remote,
acl::{PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_PRUNE, PRIV_REMOTE_READ},
cached_user_info::CachedUserInfo,
};
for pos in 0..index.index_count() {
let digest = index.index_digest(pos).unwrap();
let chunk_exists = target.cond_touch_chunk(digest, false)?;
if chunk_exists {
//worker.log(format!("chunk {} exists {}", pos, proxmox::tools::digest_to_hex(digest)));
continue;
}
//worker.log(format!("sync {} chunk {}", pos, proxmox::tools::digest_to_hex(digest)));
let chunk = chunk_reader.read_raw_chunk(&digest)?;
target.insert_chunk(&chunk, &digest)?;
}
Ok(())
}
async fn download_manifest(
reader: &BackupReader,
filename: &std::path::Path,
) -> Result<std::fs::File, Error> {
let tmp_manifest_file = std::fs::OpenOptions::new()
.write(true)
.create(true)
.read(true)
.open(&filename)?;
let mut tmp_manifest_file = reader.download(MANIFEST_BLOB_NAME, tmp_manifest_file).await?;
tmp_manifest_file.seek(SeekFrom::Start(0))?;
Ok(tmp_manifest_file)
}
async fn pull_single_archive(
worker: &WorkerTask,
reader: &BackupReader,
chunk_reader: &mut RemoteChunkReader,
tgt_store: Arc<DataStore>,
snapshot: &BackupDir,
archive_name: &str,
) -> Result<(), Error> {
let mut path = tgt_store.base_path();
path.push(snapshot.relative_path());
path.push(archive_name);
let mut tmp_path = path.clone();
tmp_path.set_extension("tmp");
worker.log(format!("sync archive {}", archive_name));
let tmpfile = std::fs::OpenOptions::new()
.write(true)
.create(true)
.read(true)
.open(&tmp_path)?;
let tmpfile = reader.download(archive_name, tmpfile).await?;
match archive_type(archive_name)? {
ArchiveType::DynamicIndex => {
let index = DynamicIndexReader::new(tmpfile)
.map_err(|err| format_err!("unable to read dynamic index {:?} - {}", tmp_path, err))?;
pull_index_chunks(worker, chunk_reader, tgt_store.clone(), index).await?;
}
ArchiveType::FixedIndex => {
let index = FixedIndexReader::new(tmpfile)
.map_err(|err| format_err!("unable to read fixed index '{:?}' - {}", tmp_path, err))?;
pull_index_chunks(worker, chunk_reader, tgt_store.clone(), index).await?;
}
ArchiveType::Blob => { /* nothing to do */ }
}
if let Err(err) = std::fs::rename(&tmp_path, &path) {
bail!("Atomic rename file {:?} failed - {}", path, err);
}
Ok(())
}
async fn pull_snapshot(
worker: &WorkerTask,
reader: Arc<BackupReader>,
tgt_store: Arc<DataStore>,
snapshot: &BackupDir,
) -> Result<(), Error> {
let mut manifest_name = tgt_store.base_path();
manifest_name.push(snapshot.relative_path());
manifest_name.push(MANIFEST_BLOB_NAME);
let mut tmp_manifest_name = manifest_name.clone();
tmp_manifest_name.set_extension("tmp");
let mut tmp_manifest_file = download_manifest(&reader, &tmp_manifest_name).await?;
let tmp_manifest_blob = DataBlob::load(&mut tmp_manifest_file)?;
tmp_manifest_blob.verify_crc()?;
if manifest_name.exists() {
let manifest_blob = proxmox::try_block!({
let mut manifest_file = std::fs::File::open(&manifest_name)
.map_err(|err| format_err!("unable to open local manifest {:?} - {}", manifest_name, err))?;
let manifest_blob = DataBlob::load(&mut manifest_file)?;
manifest_blob.verify_crc()?;
Ok(manifest_blob)
}).map_err(|err: Error| {
format_err!("unable to read local manifest {:?} - {}", manifest_name, err)
})?;
if manifest_blob.raw_data() == tmp_manifest_blob.raw_data() {
return Ok(()); // nothing changed
}
}
let manifest = BackupManifest::try_from(tmp_manifest_blob)?;
let mut chunk_reader = RemoteChunkReader::new(reader.clone(), None, HashMap::new());
for item in manifest.files() {
let mut path = tgt_store.base_path();
path.push(snapshot.relative_path());
path.push(&item.filename);
if path.exists() {
match archive_type(&item.filename)? {
ArchiveType::DynamicIndex => {
let index = DynamicIndexReader::open(&path)?;
let (csum, size) = index.compute_csum();
match manifest.verify_file(&item.filename, &csum, size) {
Ok(_) => continue,
Err(err) => {
worker.log(format!("detected changed file {:?} - {}", path, err));
}
}
}
ArchiveType::FixedIndex => {
let index = FixedIndexReader::open(&path)?;
let (csum, size) = index.compute_csum();
match manifest.verify_file(&item.filename, &csum, size) {
Ok(_) => continue,
Err(err) => {
worker.log(format!("detected changed file {:?} - {}", path, err));
}
}
}
ArchiveType::Blob => {
let mut tmpfile = std::fs::File::open(&path)?;
let (csum, size) = compute_file_csum(&mut tmpfile)?;
match manifest.verify_file(&item.filename, &csum, size) {
Ok(_) => continue,
Err(err) => {
worker.log(format!("detected changed file {:?} - {}", path, err));
}
}
}
}
}
pull_single_archive(
worker,
&reader,
&mut chunk_reader,
tgt_store.clone(),
snapshot,
&item.filename,
).await?;
}
if let Err(err) = std::fs::rename(&tmp_manifest_name, &manifest_name) {
bail!("Atomic rename file {:?} failed - {}", manifest_name, err);
}
// cleanup - remove stale files
tgt_store.cleanup_backup_dir(snapshot, &manifest)?;
Ok(())
}
pub async fn pull_snapshot_from(
worker: &WorkerTask,
reader: Arc<BackupReader>,
tgt_store: Arc<DataStore>,
snapshot: &BackupDir,
) -> Result<(), Error> {
let (_path, is_new) = tgt_store.create_backup_dir(&snapshot)?;
if is_new {
worker.log(format!("sync snapshot {:?}", snapshot.relative_path()));
if let Err(err) = pull_snapshot(worker, reader, tgt_store.clone(), &snapshot).await {
if let Err(cleanup_err) = tgt_store.remove_backup_dir(&snapshot) {
worker.log(format!("cleanup error - {}", cleanup_err));
}
return Err(err);
}
} else {
worker.log(format!("re-sync snapshot {:?}", snapshot.relative_path()));
pull_snapshot(worker, reader, tgt_store.clone(), &snapshot).await?
}
Ok(())
}
pub async fn pull_group(
worker: &WorkerTask,
client: &HttpClient,
src_repo: &BackupRepository,
tgt_store: Arc<DataStore>,
group: &BackupGroup,
pub fn check_pull_privs(
userid: &Userid,
store: &str,
remote: &str,
remote_store: &str,
delete: bool,
) -> Result<(), Error> {
let path = format!("api2/json/admin/datastore/{}/snapshots", src_repo.store());
let user_info = CachedUserInfo::new()?;
let args = json!({
"backup-type": group.backup_type(),
"backup-id": group.backup_id(),
});
let mut result = client.get(&path, Some(args)).await?;
let mut list: Vec<SnapshotListItem> = serde_json::from_value(result["data"].take())?;
list.sort_unstable_by(|a, b| a.backup_time.cmp(&b.backup_time));
let auth_info = client.login().await?;
let fingerprint = client.fingerprint();
let last_sync = tgt_store.last_successful_backup(group)?;
let mut remote_snapshots = std::collections::HashSet::new();
for item in list {
let backup_time = Utc.timestamp(item.backup_time, 0);
remote_snapshots.insert(backup_time);
if let Some(last_sync_time) = last_sync {
if last_sync_time > backup_time { continue; }
}
let options = HttpClientOptions::new()
.password(Some(auth_info.ticket.clone()))
.fingerprint(fingerprint.clone());
let new_client = HttpClient::new(src_repo.host(), src_repo.user(), options)?;
let reader = BackupReader::start(
new_client,
None,
src_repo.store(),
&item.backup_type,
&item.backup_id,
backup_time,
true,
).await?;
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time);
pull_snapshot_from(worker, reader, tgt_store.clone(), &snapshot).await?;
}
user_info.check_privs(userid, &["datastore", store], PRIV_DATASTORE_BACKUP, false)?;
user_info.check_privs(userid, &["remote", remote, remote_store], PRIV_REMOTE_READ, false)?;
if delete {
let local_list = group.list_backups(&tgt_store.base_path())?;
for info in local_list {
let backup_time = info.backup_dir.backup_time();
if remote_snapshots.contains(&backup_time) { continue; }
worker.log(format!("delete vanished snapshot {:?}", info.backup_dir.relative_path()));
tgt_store.remove_backup_dir(&info.backup_dir)?;
}
user_info.check_privs(userid, &["datastore", store], PRIV_DATASTORE_PRUNE, false)?;
}
Ok(())
}
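The privilege rule in check_pull_privs, reduced to a sketch with hypothetical bit values (the real PRIV_* constants come from config::acl):
const PRIV_DATASTORE_BACKUP: u64 = 1 << 0; // values are illustrative only
const PRIV_DATASTORE_PRUNE: u64 = 1 << 1;
const PRIV_REMOTE_READ: u64 = 1 << 2;
// Pulling always needs Datastore.Backup on the target and Remote.Read on the
// source; removing vanished snapshots additionally needs Datastore.Prune.
fn may_pull(store_privs: u64, remote_privs: u64, delete: bool) -> bool {
    store_privs & PRIV_DATASTORE_BACKUP != 0
        && remote_privs & PRIV_REMOTE_READ != 0
        && (!delete || store_privs & PRIV_DATASTORE_PRUNE != 0)
}
fn main() {
    assert!(may_pull(PRIV_DATASTORE_BACKUP | PRIV_DATASTORE_PRUNE, PRIV_REMOTE_READ, true));
    assert!(!may_pull(PRIV_DATASTORE_BACKUP, PRIV_REMOTE_READ, true));
}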
pub async fn pull_store(
worker: &WorkerTask,
client: &HttpClient,
src_repo: &BackupRepository,
tgt_store: Arc<DataStore>,
delete: bool,
) -> Result<(), Error> {
pub async fn get_pull_parameters(
store: &str,
remote: &str,
remote_store: &str,
) -> Result<(HttpClient, BackupRepository, Arc<DataStore>), Error> {
let path = format!("api2/json/admin/datastore/{}/groups", src_repo.store());
let tgt_store = DataStore::lookup_datastore(store)?;
let mut result = client.get(&path, None).await?;
let (remote_config, _digest) = remote::config()?;
let remote: remote::Remote = remote_config.lookup("remote", remote)?;
let mut list: Vec<GroupListItem> = serde_json::from_value(result["data"].take())?;
let options = HttpClientOptions::new()
.password(Some(remote.password.clone()))
.fingerprint(remote.fingerprint.clone());
list.sort_unstable_by(|a, b| {
let type_order = a.backup_type.cmp(&b.backup_type);
if type_order == std::cmp::Ordering::Equal {
a.backup_id.cmp(&b.backup_id)
} else {
type_order
}
});
let client = HttpClient::new(&remote.host, &remote.userid, options)?;
let _auth_info = client.login() // make sure we can auth
.await
.map_err(|err| format_err!("remote connection to '{}' failed - {}", remote.host, err))?;
let mut errors = false;
let src_repo = BackupRepository::new(Some(remote.userid), Some(remote.host), remote_store.to_string());
let mut new_groups = std::collections::HashSet::new();
for item in list {
let group = BackupGroup::new(&item.backup_type, &item.backup_id);
if let Err(err) = pull_group(worker, client, src_repo, tgt_store.clone(), &group, delete).await {
worker.log(format!("sync group {}/{} failed - {}", item.backup_type, item.backup_id, err));
errors = true;
// do not stop here, instead continue
}
new_groups.insert(group);
}
if delete {
let result: Result<(), Error> = proxmox::try_block!({
let local_groups = BackupGroup::list_groups(&tgt_store.base_path())?;
for local_group in local_groups {
if new_groups.contains(&local_group) { continue; }
worker.log(format!("delete vanished group '{}/{}'", local_group.backup_type(), local_group.backup_id()));
if let Err(err) = tgt_store.remove_backup_group(&local_group) {
worker.log(err.to_string());
errors = true;
}
}
Ok(())
});
if let Err(err) = result {
worker.log(format!("error during cleanup: {}", err));
errors = true;
};
}
if errors {
bail!("sync failed with some errors.");
}
Ok(())
Ok((client, src_repo, tgt_store))
}
#[api(
@ -379,54 +74,44 @@ pub async fn pull_store(
"remote-store": {
schema: DATASTORE_SCHEMA,
},
delete: {
description: "Delete vanished backups. This remove the local copy if the remote backup was deleted.",
type: Boolean,
"remove-vanished": {
schema: REMOVE_VANISHED_BACKUPS_SCHEMA,
optional: true,
default: true,
},
},
},
access: {
// Note: the used parameters are not uri parameters, so we need to test inside the function body
description: r###"The user needs Datastore.Backup privilege on '/datastore/{store}',
and needs to own the backup group. Remote.Read is required on '/remote/{remote}/{remote-store}'.
The delete flag additionally requires the Datastore.Prune privilege on '/datastore/{store}'.
"###,
permission: &Permission::Anybody,
},
)]
/// Sync store from other repository
async fn pull (
store: String,
remote: String,
remote_store: String,
delete: Option<bool>,
remove_vanished: Option<bool>,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let username = rpcenv.get_user().unwrap();
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let delete = remove_vanished.unwrap_or(true);
let delete = delete.unwrap_or(true);
check_pull_privs(&userid, &store, &remote, &remote_store, delete)?;
let tgt_store = DataStore::lookup_datastore(&store)?;
let (remote_config, _digest) = remote::config()?;
let remote: remote::Remote = remote_config.lookup("remote", &remote)?;
let options = HttpClientOptions::new()
.password(Some(remote.password.clone()))
.fingerprint(remote.fingerprint.clone());
let client = HttpClient::new(&remote.host, &remote.userid, options)?;
let _auth_info = client.login() // make sure we can auth
.await
.map_err(|err| format_err!("remote connection to '{}' failed - {}", remote.host, err))?;
let src_repo = BackupRepository::new(Some(remote.userid), Some(remote.host), remote_store);
let (client, src_repo, tgt_store) = get_pull_parameters(&store, &remote, &remote_store).await?;
// fixme: set to_stdout to false?
let upid_str = WorkerTask::spawn("sync", Some(store.clone()), &username.clone(), true, move |worker| async move {
let upid_str = WorkerTask::spawn("sync", Some(store.clone()), userid.clone(), true, move |worker| async move {
worker.log(format!("sync datastore '{}' start", store));
// explicit create shared lock to prevent GC on newly created chunks
let _shared_store_lock = tgt_store.try_shared_chunk_store_lock()?;
pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete).await?;
pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, userid).await?;
worker.log(format!("sync datastore '{}' end", store));


@ -1,5 +1,5 @@
//use chrono::{Local, TimeZone};
use failure::*;
use anyhow::{bail, format_err, Error};
use futures::*;
use hyper::header::{self, HeaderValue, UPGRADE};
use hyper::http::request::Parts;
@ -7,7 +7,7 @@ use hyper::{Body, Response, StatusCode};
use serde_json::Value;
use proxmox::{sortable, identity};
use proxmox::api::{ApiResponseFuture, ApiHandler, ApiMethod, Router, RpcEnvironment};
use proxmox::api::{ApiResponseFuture, ApiHandler, ApiMethod, Router, RpcEnvironment, Permission};
use proxmox::api::schema::*;
use proxmox::http_err;
@ -15,6 +15,9 @@ use crate::api2::types::*;
use crate::backup::*;
use crate::server::{WorkerTask, H2Service};
use crate::tools;
use crate::config::acl::PRIV_DATASTORE_READ;
use crate::config::cached_user_info::CachedUserInfo;
use crate::api2::helpers;
mod environment;
use environment::*;
@ -29,18 +32,16 @@ pub const API_METHOD_UPGRADE_BACKUP: ApiMethod = ApiMethod::new(
concat!("Upgraded to backup protocol ('", PROXMOX_BACKUP_READER_PROTOCOL_ID_V1!(), "')."),
&sorted!([
("store", false, &DATASTORE_SCHEMA),
("backup-type", false, &StringSchema::new("Backup type.")
.format(&ApiStringFormat::Enum(&["vm", "ct", "host"]))
.schema()
),
("backup-id", false, &StringSchema::new("Backup ID.").schema()),
("backup-time", false, &IntegerSchema::new("Backup time (Unix epoch.)")
.minimum(1_547_797_308)
.schema()
),
("backup-type", false, &BACKUP_TYPE_SCHEMA),
("backup-id", false, &BACKUP_ID_SCHEMA),
("backup-time", false, &BACKUP_TIME_SCHEMA),
("debug", true, &BooleanSchema::new("Enable verbose debug logging.").schema()),
]),
)
).access(
// Note: parameter 'store' is not a uri parameter, so we need to test inside the function body
Some("The user needs Datastore.Read privilege on /datastore/{store}."),
&Permission::Anybody
);
fn upgrade_to_backup_reader_protocol(
@ -54,7 +55,12 @@ fn upgrade_to_backup_reader_protocol(
async move {
let debug = param["debug"].as_bool().unwrap_or(false);
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let store = tools::required_string_param(&param, "store")?.to_owned();
let user_info = CachedUserInfo::new()?;
user_info.check_privs(&userid, &["datastore", &store], PRIV_DATASTORE_READ, false)?;
let datastore = DataStore::lookup_datastore(&store)?;
let backup_type = tools::required_string_param(&param, "backup-type")?;
@ -75,7 +81,6 @@ fn upgrade_to_backup_reader_protocol(
bail!("unexpected http version '{:?}' (expected version < 2)", parts.version);
}
let username = rpcenv.get_user().unwrap();
let env_type = rpcenv.env_type();
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time);
@ -85,9 +90,14 @@ fn upgrade_to_backup_reader_protocol(
let worker_id = format!("{}_{}_{}_{:08X}", store, backup_type, backup_id, backup_dir.backup_time().timestamp());
WorkerTask::spawn("reader", Some(worker_id), &username.clone(), true, move |worker| {
WorkerTask::spawn("reader", Some(worker_id), userid.clone(), true, move |worker| {
let mut env = ReaderEnvironment::new(
env_type, username.clone(), worker.clone(), datastore, backup_dir);
env_type,
userid,
worker.clone(),
datastore,
backup_dir,
);
env.debug = debug;
@ -127,7 +137,7 @@ fn upgrade_to_backup_reader_protocol(
Either::Right((Ok(res), _)) => Ok(res),
Either::Right((Err(err), _)) => Err(err),
})
.map_ok(move |_| env.log("reader finished sucessfully"))
.map_ok(move |_| env.log("reader finished successfully"))
})?;
let response = Response::builder()
@ -183,26 +193,9 @@ fn download_file(
path.push(env.backup_dir.relative_path());
path.push(&file_name);
let path2 = path.clone();
let path3 = path.clone();
env.log(format!("download {:?}", path.clone()));
let file = tokio::fs::File::open(path)
.map_err(move |err| http_err!(BAD_REQUEST, format!("open file {:?} failed: {}", path2, err)))
.await?;
env.log(format!("download {:?}", path3));
let payload = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new())
.map_ok(|bytes| hyper::body::Bytes::from(bytes.freeze()));
let body = Body::wrap_stream(payload);
// fixme: set other headers ?
Ok(Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, "application/octet-stream")
.body(body)
.unwrap())
helpers::create_download_response(path).await
}.boxed()
}
@ -237,8 +230,8 @@ fn download_chunk(
env.debug(format!("download chunk {:?}", path));
let data = tokio::fs::read(path)
.map_err(move |err| http_err!(BAD_REQUEST, format!("reading file {:?} failed: {}", path2, err)))
.await?;
.await
.map_err(move |err| http_err!(BAD_REQUEST, "reading file {:?} failed: {}", path2, err))?;
let body = Body::from(data);
@ -272,7 +265,7 @@ fn download_chunk_old(
let path3 = path.clone();
let response_future = tokio::fs::File::open(path)
.map_err(move |err| http_err!(BAD_REQUEST, format!("open file {:?} failed: {}", path2, err)))
.map_err(move |err| http_err!(BAD_REQUEST, "open file {:?} failed: {}", path2, err))
.and_then(move |file| {
env2.debug(format!("download chunk {:?}", path3));
let payload = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new())

View File

@ -1,14 +1,14 @@
//use failure::*;
//use anyhow::{bail, format_err, Error};
use std::sync::Arc;
use std::collections::HashMap;
use serde_json::Value;
use serde_json::{json, Value};
use proxmox::api::{RpcEnvironment, RpcEnvironmentType};
use crate::server::WorkerTask;
use crate::api2::types::Userid;
use crate::backup::*;
use crate::server::formatter::*;
use crate::server::WorkerTask;
//use proxmox::tools;
@ -16,8 +16,8 @@ use crate::server::formatter::*;
#[derive(Clone)]
pub struct ReaderEnvironment {
env_type: RpcEnvironmentType,
result_attributes: HashMap<String, Value>,
user: String,
result_attributes: Value,
user: Userid,
pub debug: bool,
pub formatter: &'static OutputFormatter,
pub worker: Arc<WorkerTask>,
@ -29,7 +29,7 @@ pub struct ReaderEnvironment {
impl ReaderEnvironment {
pub fn new(
env_type: RpcEnvironmentType,
user: String,
user: Userid,
worker: Arc<WorkerTask>,
datastore: Arc<DataStore>,
backup_dir: BackupDir,
@ -37,7 +37,7 @@ impl ReaderEnvironment {
Self {
result_attributes: HashMap::new(),
result_attributes: json!({}),
env_type,
user,
worker,
@ -61,12 +61,12 @@ impl ReaderEnvironment {
impl RpcEnvironment for ReaderEnvironment {
fn set_result_attrib(&mut self, name: &str, value: Value) {
self.result_attributes.insert(name.into(), value);
fn result_attrib_mut(&mut self) -> &mut Value {
&mut self.result_attributes
}
fn get_result_attrib(&self, name: &str) -> Option<&Value> {
self.result_attributes.get(name)
fn result_attrib(&self) -> &Value {
&self.result_attributes
}
fn env_type(&self) -> RpcEnvironmentType {
@ -78,7 +78,7 @@ impl RpcEnvironment for ReaderEnvironment {
}
fn get_user(&self) -> Option<String> {
Some(self.user.clone())
Some(self.user.to_string())
}
}

228
src/api2/status.rs Normal file
View File

@ -0,0 +1,228 @@
use proxmox::list_subdirs_api_method;
use anyhow::{Error};
use serde_json::{json, Value};
use proxmox::api::{
api,
ApiMethod,
Permission,
Router,
RpcEnvironment,
SubdirMap,
};
use crate::api2::types::{
DATASTORE_SCHEMA,
RRDMode,
RRDTimeFrameResolution,
TaskListItem,
Userid,
};
use crate::server;
use crate::backup::{DataStore};
use crate::config::datastore;
use crate::tools::epoch_now_f64;
use crate::tools::statistics::{linear_regression};
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::acl::{
PRIV_SYS_AUDIT,
PRIV_DATASTORE_AUDIT,
PRIV_DATASTORE_BACKUP,
};
#[api(
returns: {
description: "Lists the Status of the Datastores.",
type: Array,
items: {
description: "Status of a Datastore",
type: Object,
properties: {
store: {
schema: DATASTORE_SCHEMA,
},
total: {
type: Integer,
description: "The Size of the underlying storage in bytes",
},
used: {
type: Integer,
description: "The used bytes of the underlying storage",
},
avail: {
type: Integer,
description: "The available bytes of the underlying storage",
},
history: {
type: Array,
description: "A list of usages of the past (last Month).",
items: {
type: Number,
description: "The usage of a time in the past. Either null or between 0.0 and 1.0.",
}
},
"estimated-full-date": {
type: Integer,
optional: true,
description: "Estimation of the UNIX epoch when the storage will be full.\
This is calculated via a simple Linear Regression (Least Squares)\
of RRD data of the last Month. Missing if there are not enough data points yet.\
If the estimate lies in the past, the usage is decreasing.",
},
},
},
},
)]
/// List Datastore usages and estimates
fn datastore_status(
_param: Value,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let (config, _digest) = datastore::config()?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let mut list = Vec::new();
for (store, (_, _)) in &config.sections {
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let allowed = (user_privs & (PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_BACKUP)) != 0;
if !allowed {
continue;
}
let datastore = DataStore::lookup_datastore(&store)?;
let status = crate::tools::disks::disk_usage(&datastore.base_path())?;
let mut entry = json!({
"store": store,
"total": status.total,
"used": status.used,
"avail": status.avail,
});
let rrd_dir = format!("datastore/{}", store);
let now = epoch_now_f64()?;
let rrd_resolution = RRDTimeFrameResolution::Month;
let rrd_mode = RRDMode::Average;
let total_res = crate::rrd::extract_cached_data(
&rrd_dir,
"total",
now,
rrd_resolution,
rrd_mode,
);
let used_res = crate::rrd::extract_cached_data(
&rrd_dir,
"used",
now,
rrd_resolution,
rrd_mode,
);
match (total_res, used_res) {
(Some((start, reso, total_list)), Some((_, _, used_list))) => {
let mut usage_list: Vec<f64> = Vec::new();
let mut time_list: Vec<u64> = Vec::new();
let mut history = Vec::new();
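// pair the "total" and "used" RRD series below: record used/total where
// both samples exist, and push a JSON null where either one is missing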
for (idx, used) in used_list.iter().enumerate() {
let total = if idx < total_list.len() {
total_list[idx]
} else {
None
};
match (total, used) {
(Some(total), Some(used)) if total != 0.0 => {
time_list.push(start + (idx as u64)*reso);
let usage = used/total;
usage_list.push(usage);
history.push(json!(usage));
},
_ => {
history.push(json!(null))
}
}
}
entry["history"] = history.into();
// we skip the calculation for datastores with not enough data
if usage_list.len() >= 7 {
if let Some((a,b)) = linear_regression(&time_list, &usage_list) {
if b != 0.0 {
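// the fit models usage(t) = a + b*t; solving a + b*t = 1.0 for t gives
// the estimated epoch at which the datastore becomes full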
let estimate = (1.0 - a) / b;
entry["estimated-full-date"] = Value::from(estimate.floor() as u64);
} else {
entry["estimated-full-date"] = Value::from(0);
}
}
}
},
_ => {},
}
list.push(entry);
}
Ok(list.into())
}
#[api(
input: {
properties: {
since: {
type: u64,
description: "Only list tasks since this UNIX epoch.",
optional: true,
},
},
},
returns: {
description: "A list of tasks.",
type: Array,
items: { type: TaskListItem },
},
access: {
description: "Users can only see there own tasks, unless the have Sys.Audit on /system/tasks.",
permission: &Permission::Anybody,
},
)]
/// List tasks.
pub fn list_tasks(
_param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<TaskListItem>, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["system", "tasks"]);
let list_all = (user_privs & PRIV_SYS_AUDIT) != 0;
// TODO: replace with a call that gets all tasks since the 'since' epoch
let list: Vec<TaskListItem> = server::read_task_list()?
.into_iter()
.map(TaskListItem::from)
.filter(|entry| list_all || entry.user == userid)
.collect();
Ok(list.into())
}
const SUBDIRS: SubdirMap = &[
("datastore-usage", &Router::new().get(&API_METHOD_DATASTORE_STATUS)),
("tasks", &Router::new().get(&API_METHOD_LIST_TASKS)),
];
pub const ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))
.subdirs(SUBDIRS);

View File

@ -1,30 +0,0 @@
use failure::*;
use serde_json::{json, Value};
use proxmox::api::{ApiHandler, ApiMethod, Router, RpcEnvironment};
use proxmox::api::schema::ObjectSchema;
use crate::tools;
fn get_subscription(
_param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let url = "https://www.proxmox.com/en/proxmox-backup-server/pricing";
Ok(json!({
"status": "NotFound",
"message": "There is no subscription key",
"serverid": tools::get_hardware_address()?,
"url": url,
}))
}
pub const ROUTER: Router = Router::new()
.get(
&ApiMethod::new(
&ApiHandler::Sync(&get_subscription),
&ObjectSchema::new("Read subscription info.", &[])
)
);

View File

@ -1,464 +0,0 @@
use failure::*;
use ::serde::{Deserialize, Serialize};
use proxmox::api::{api, schema::*};
use proxmox::const_regex;
use proxmox::{IPRE, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};
// File names: may not contain slashes, may not start with "."
pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {
if name.starts_with('.') {
bail!("file names may not start with '.'");
}
if name.contains('/') {
bail!("file names may not contain slashes");
}
Ok(())
});
macro_rules! DNS_LABEL { () => (r"(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)") }
macro_rules! DNS_NAME { () => (concat!(r"(?:", DNS_LABEL!() , r"\.)*", DNS_LABEL!())) }
// we only allow a limited set of characters
// colon is not allowed, because we store usernames in
// colon separated lists)!
// slash is not allowed because it is used as pve API delimiter
// also see "man useradd"
macro_rules! USER_NAME_REGEX_STR { () => (r"(?:[^\s:/[:cntrl:]]+)") }
macro_rules! PROXMOX_SAFE_ID_REGEX_STR { () => (r"(?:[A-Za-z0-9_][A-Za-z0-9._\-]*)") }
const_regex!{
pub IP_FORMAT_REGEX = IPRE!();
pub SHA256_HEX_REGEX = r"^[a-f0-9]{64}$"; // fixme: define in common_regex ?
pub SYSTEMD_DATETIME_REGEX = r"^\d{4}-\d{2}-\d{2}( \d{2}:\d{2}(:\d{2})?)?$"; // fixme: define in common_regex ?
pub PASSWORD_REGEX = r"^[[:^cntrl:]]*$"; // everything but control characters
/// Regex for safe identifiers.
///
/// This
/// [article](https://dwheeler.com/essays/fixing-unix-linux-filenames.html)
/// contains further information why it is reasonable to restict
/// names this way. This is not only useful for filenames, but for
/// any identifier command line tools work with.
pub PROXMOX_SAFE_ID_REGEX = concat!(r"^", PROXMOX_SAFE_ID_REGEX_STR!(), r"$");
pub SINGLE_LINE_COMMENT_REGEX = r"^[[:^cntrl:]]*$";
pub HOSTNAME_REGEX = r"^(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)$";
pub DNS_NAME_REGEX = concat!(r"^", DNS_NAME!(), r"$");
pub DNS_NAME_OR_IP_REGEX = concat!(r"^", DNS_NAME!(), "|", IPRE!(), r"$");
pub PROXMOX_USER_ID_REGEX = concat!(r"^", USER_NAME_REGEX_STR!(), r"@", PROXMOX_SAFE_ID_REGEX_STR!(), r"$");
pub CERT_FINGERPRINT_SHA256_REGEX = r"^(?:[0-9a-fA-F][0-9a-fA-F])(?::[0-9a-fA-F][0-9a-fA-F]){31}$";
}
pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SYSTEMD_DATETIME_REGEX);
pub const IP_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&IP_FORMAT_REGEX);
pub const PVE_CONFIG_DIGEST_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SHA256_HEX_REGEX);
pub const CERT_FINGERPRINT_SHA256_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&CERT_FINGERPRINT_SHA256_REGEX);
pub const PROXMOX_SAFE_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_SAFE_ID_REGEX);
pub const SINGLE_LINE_COMMENT_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SINGLE_LINE_COMMENT_REGEX);
pub const HOSTNAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&HOSTNAME_REGEX);
pub const DNS_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DNS_NAME_REGEX);
pub const DNS_NAME_OR_IP_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DNS_NAME_OR_IP_REGEX);
pub const PROXMOX_USER_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_USER_ID_REGEX);
pub const PASSWORD_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PASSWORD_REGEX);
pub const CERT_FINGERPRINT_SHA256_SCHEMA: Schema = StringSchema::new(
"X509 certificate fingerprint (sha256)."
)
.format(&CERT_FINGERPRINT_SHA256_FORMAT)
.schema();
pub const PROXMOX_CONFIG_DIGEST_SCHEMA: Schema = StringSchema::new(r#"\
Prevent changes if current configuration file has different SHA256 digest.
This can be used to prevent concurrent modifications.
"#
)
.format(&PVE_CONFIG_DIGEST_FORMAT)
.schema();
pub const CHUNK_DIGEST_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SHA256_HEX_REGEX);
pub const CHUNK_DIGEST_SCHEMA: Schema = StringSchema::new("Chunk digest (SHA256).")
.format(&CHUNK_DIGEST_FORMAT)
.schema();
pub const NODE_SCHEMA: Schema = StringSchema::new("Node name (or 'localhost')")
.format(&ApiStringFormat::VerifyFn(|node| {
if node == "localhost" || node == proxmox::tools::nodename() {
Ok(())
} else {
bail!("no such node '{}'", node);
}
}))
.schema();
pub const SEARCH_DOMAIN_SCHEMA: Schema =
StringSchema::new("Search domain for host-name lookup.").schema();
pub const FIRST_DNS_SERVER_SCHEMA: Schema =
StringSchema::new("First name server IP address.")
.format(&IP_FORMAT)
.schema();
pub const SECOND_DNS_SERVER_SCHEMA: Schema =
StringSchema::new("Second name server IP address.")
.format(&IP_FORMAT)
.schema();
pub const THIRD_DNS_SERVER_SCHEMA: Schema =
StringSchema::new("Third name server IP address.")
.format(&IP_FORMAT)
.schema();
pub const BACKUP_ARCHIVE_NAME_SCHEMA: Schema =
StringSchema::new("Backup archive name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.schema();
pub const BACKUP_TYPE_SCHEMA: Schema =
StringSchema::new("Backup type.")
.format(&ApiStringFormat::Enum(&["vm", "ct", "host"]))
.schema();
pub const BACKUP_ID_SCHEMA: Schema =
StringSchema::new("Backup ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.schema();
pub const BACKUP_TIME_SCHEMA: Schema =
IntegerSchema::new("Backup time (Unix epoch.)")
.minimum(1_547_797_308)
.schema();
pub const UPID_SCHEMA: Schema = StringSchema::new("Unique Process/Task ID.")
.max_length(256)
.schema();
pub const DATASTORE_SCHEMA: Schema = StringSchema::new("Datastore name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const REMOTE_ID_SCHEMA: Schema = StringSchema::new("Remote ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const SINGLE_LINE_COMMENT_SCHEMA: Schema = StringSchema::new("Comment (single line).")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.schema();
pub const HOSTNAME_SCHEMA: Schema = StringSchema::new("Hostname (as defined in RFC1123).")
.format(&HOSTNAME_FORMAT)
.schema();
pub const DNS_NAME_OR_IP_SCHEMA: Schema = StringSchema::new("DNS name or IP address.")
.format(&DNS_NAME_OR_IP_FORMAT)
.schema();
pub const PROXMOX_AUTH_REALM_SCHEMA: Schema = StringSchema::new("Authentication domain ID")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const PROXMOX_USER_ID_SCHEMA: Schema = StringSchema::new("User ID")
.format(&PROXMOX_USER_ID_FORMAT)
.min_length(3)
.max_length(64)
.schema();
// Complex type definitions
#[api(
properties: {
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
},
"last-backup": {
schema: BACKUP_TIME_SCHEMA,
},
"backup-count": {
type: Integer,
},
files: {
items: {
schema: BACKUP_ARCHIVE_NAME_SCHEMA
},
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// Basic information about a backup group.
pub struct GroupListItem {
pub backup_type: String, // enum
pub backup_id: String,
pub last_backup: i64,
/// Number of contained snapshots
pub backup_count: u64,
/// List of contained archive files.
pub files: Vec<String>,
}
#[api(
properties: {
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
},
"backup-time": {
schema: BACKUP_TIME_SCHEMA,
},
files: {
items: {
schema: BACKUP_ARCHIVE_NAME_SCHEMA
},
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// Basic information about backup snapshot.
pub struct SnapshotListItem {
pub backup_type: String, // enum
pub backup_id: String,
pub backup_time: i64,
/// List of contained archive files.
pub files: Vec<String>,
/// Overall snapshot size (sum of all archive sizes).
#[serde(skip_serializing_if="Option::is_none")]
pub size: Option<u64>,
}
#[api(
properties: {
"filename": {
schema: BACKUP_ARCHIVE_NAME_SCHEMA,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// Basic information about archive files inside a backup snapshot.
pub struct BackupContent {
pub filename: String,
/// Archive size (from backup manifest).
#[serde(skip_serializing_if="Option::is_none")]
pub size: Option<u64>,
}
#[api(
properties: {
"upid": {
optional: true,
schema: UPID_SCHEMA,
},
},
)]
#[derive(Clone, Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// Garbage collection status.
pub struct GarbageCollectionStatus {
pub upid: Option<String>,
/// Number of processed index files.
pub index_file_count: usize,
/// Sum of bytes referred by index files.
pub index_data_bytes: u64,
/// Bytes used on disk.
pub disk_bytes: u64,
/// Chunks used on disk.
pub disk_chunks: usize,
/// Sum of removed bytes.
pub removed_bytes: u64,
/// Number of removed chunks.
pub removed_chunks: usize,
}
impl Default for GarbageCollectionStatus {
fn default() -> Self {
GarbageCollectionStatus {
upid: None,
index_file_count: 0,
index_data_bytes: 0,
disk_bytes: 0,
disk_chunks: 0,
removed_bytes: 0,
removed_chunks: 0,
}
}
}
#[api()]
#[derive(Serialize, Deserialize)]
/// Storage space usage information.
pub struct StorageStatus {
/// Total space (bytes).
pub total: u64,
/// Used space (bytes).
pub used: u64,
/// Available space (bytes).
pub avail: u64,
}
#[api(
properties: {
"upid": { schema: UPID_SCHEMA },
},
)]
#[derive(Serialize, Deserialize)]
/// Task properties.
pub struct TaskListItem {
pub upid: String,
/// The node name where the task is running on.
pub node: String,
/// The Unix PID
pub pid: i64,
/// The task start time (Epoch)
pub pstart: u64,
/// The task start time (Epoch)
pub starttime: i64,
/// Worker type (arbitrary ASCII string)
pub worker_type: String,
/// Worker ID (arbitrary ASCII string)
pub worker_id: Option<String>,
/// The user who started the task
pub user: String,
/// The task end time (Epoch)
#[serde(skip_serializing_if="Option::is_none")]
pub endtime: Option<i64>,
/// Task end status
#[serde(skip_serializing_if="Option::is_none")]
pub status: Option<String>,
}
// Regression tests
#[test]
fn test_cert_fingerprint_schema() -> Result<(), Error> {
let schema = CERT_FINGERPRINT_SHA256_SCHEMA;
let invalid_fingerprints = [
"86:88:7c:be:26:77:a5:62:67:d9:06:f5:e4::61:3e:20:dc:cd:43:92:07:7f:fb:65:54:6c:ff:d2:96:36:f8",
"88:7C:BE:26:77:a5:62:67:D9:06:f5:e4:14:61:3e:20:dc:cd:43:92:07:7f:fb:65:54:6c:ff:d2:96:36:f8",
"86:88:7c:be:26:77:a5:62:67:d9:06:f5:e4::14:61:3e:20:dc:cd:43:92:07:7f:fb:65:54:6c:ff:d2:96:36:f8:ff",
"XX:88:7c:be:26:77:a5:62:67:d9:06:f5:e4::14:61:3e:20:dc:cd:43:92:07:7f:fb:65:54:6c:ff:d2:96:36:f8",
"86:88:Y4:be:26:77:a5:62:67:d9:06:f5:e4:14:61:3e:20:dc:cd:43:92:07:7f:fb:65:54:6c:ff:d2:96:36:f8",
"86:88:0:be:26:77:a5:62:67:d9:06:f5:e4:14:61:3e:20:dc:cd:43:92:07:7f:fb:65:54:6c:ff:d2:96:36:f8",
];
for fingerprint in invalid_fingerprints.iter() {
if let Ok(_) = parse_simple_value(fingerprint, &schema) {
bail!("test fingerprint '{}' failed - got Ok() while expection an error.", fingerprint);
}
}
let valid_fingerprints = [
"86:88:7c:be:26:77:a5:62:67:d9:06:f5:e4:14:61:3e:20:dc:cd:43:92:07:7f:fb:65:54:6c:ff:d2:96:36:f8",
"86:88:7C:BE:26:77:a5:62:67:D9:06:f5:e4:14:61:3e:20:dc:cd:43:92:07:7f:fb:65:54:6c:ff:d2:96:36:f8",
];
for fingerprint in valid_fingerprints.iter() {
let v = match parse_simple_value(fingerprint, &schema) {
Ok(v) => v,
Err(err) => {
bail!("unable to parse fingerprint '{}' - {}", fingerprint, err);
}
};
if v != serde_json::json!(fingerprint) {
bail!("unable to parse fingerprint '{}' - got wrong value {:?}", fingerprint, v);
}
}
Ok(())
}
#[test]
fn test_proxmox_user_id_schema() -> Result<(), Error> {
let schema = PROXMOX_USER_ID_SCHEMA;
let invalid_user_ids = [
"x", // too short
"xx", // too short
"xxx", // no realm
"xxx@", // no realm
"xx x@test", // contains space
"xx\nx@test", // contains control character
"x:xx@test", // contains collon
"xx/x@test", // contains slash
"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx@test", // too long
];
for name in invalid_user_ids.iter() {
if let Ok(_) = parse_simple_value(name, &schema) {
bail!("test userid '{}' failed - got Ok() while expection an error.", name);
}
}
let valid_user_ids = [
"xxx@y",
"name@y",
"xxx@test-it.com",
"xxx@_T_E_S_T-it.com",
"x_x-x.x@test-it.com",
];
for name in valid_user_ids.iter() {
let v = match parse_simple_value(name, &schema) {
Ok(v) => v,
Err(err) => {
bail!("unable to parse userid '{}' - {}", name, err);
}
};
if v != serde_json::json!(name) {
bail!("unable to parse userid '{}' - got wrong value {:?}", name, v);
}
}
Ok(())
}

4
src/api2/types/macros.rs Normal file
View File

@ -0,0 +1,4 @@
//! Macros exported from api2::types.
#[macro_export]
macro_rules! PROXMOX_SAFE_ID_REGEX_STR { () => (r"(?:[A-Za-z0-9_][A-Za-z0-9._\-]*)") }

969
src/api2/types/mod.rs Normal file
View File

@ -0,0 +1,969 @@
use anyhow::bail;
use serde::{Deserialize, Serialize};
use proxmox::api::{api, schema::*};
use proxmox::const_regex;
use proxmox::{IPRE, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};
use crate::backup::CryptMode;
#[macro_use]
mod macros;
#[macro_use]
mod userid;
pub use userid::{Realm, RealmRef};
pub use userid::{Username, UsernameRef};
pub use userid::Userid;
pub use userid::PROXMOX_GROUP_ID_SCHEMA;
// File names: may not contain slashes, may not start with "."
pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {
if name.starts_with('.') {
bail!("file names may not start with '.'");
}
if name.contains('/') {
bail!("file names may not contain slashes");
}
Ok(())
});
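// e.g. "catalog.pcat1.didx" is accepted, while ".hidden" and "etc/passwd" are rejected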
macro_rules! DNS_LABEL { () => (r"(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)") }
macro_rules! DNS_NAME { () => (concat!(r"(?:", DNS_LABEL!() , r"\.)*", DNS_LABEL!())) }
macro_rules! CIDR_V4_REGEX_STR { () => (concat!(r"(?:", IPV4RE!(), r"/\d{1,2})$")) }
macro_rules! CIDR_V6_REGEX_STR { () => (concat!(r"(?:", IPV6RE!(), r"/\d{1,3})$")) }
const_regex!{
pub IP_V4_REGEX = concat!(r"^", IPV4RE!(), r"$");
pub IP_V6_REGEX = concat!(r"^", IPV6RE!(), r"$");
pub IP_REGEX = concat!(r"^", IPRE!(), r"$");
pub CIDR_V4_REGEX = concat!(r"^", CIDR_V4_REGEX_STR!(), r"$");
pub CIDR_V6_REGEX = concat!(r"^", CIDR_V6_REGEX_STR!(), r"$");
pub CIDR_REGEX = concat!(r"^(?:", CIDR_V4_REGEX_STR!(), "|", CIDR_V6_REGEX_STR!(), r")$");
pub SHA256_HEX_REGEX = r"^[a-f0-9]{64}$"; // fixme: define in common_regex ?
pub SYSTEMD_DATETIME_REGEX = r"^\d{4}-\d{2}-\d{2}( \d{2}:\d{2}(:\d{2})?)?$"; // fixme: define in common_regex ?
pub PASSWORD_REGEX = r"^[[:^cntrl:]]*$"; // everything but control characters
/// Regex for safe identifiers.
///
/// This
/// [article](https://dwheeler.com/essays/fixing-unix-linux-filenames.html)
/// contains further information on why it is reasonable to restrict
/// names this way. This is not only useful for filenames, but for
/// any identifier command line tools work with.
pub PROXMOX_SAFE_ID_REGEX = concat!(r"^", PROXMOX_SAFE_ID_REGEX_STR!(), r"$");
pub SINGLE_LINE_COMMENT_REGEX = r"^[[:^cntrl:]]*$";
pub HOSTNAME_REGEX = r"^(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)$";
pub DNS_NAME_REGEX = concat!(r"^", DNS_NAME!(), r"$");
pub DNS_NAME_OR_IP_REGEX = concat!(r"^", DNS_NAME!(), "|", IPRE!(), r"$");
pub BACKUP_REPO_URL_REGEX = concat!(r"^^(?:(?:(", USER_ID_REGEX_STR!(), ")@)?(", DNS_NAME!(), "|", IPRE!() ,"):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$");
pub CERT_FINGERPRINT_SHA256_REGEX = r"^(?:[0-9a-fA-F][0-9a-fA-F])(?::[0-9a-fA-F][0-9a-fA-F]){31}$";
pub ACL_PATH_REGEX = concat!(r"^(?:/|", r"(?:/", PROXMOX_SAFE_ID_REGEX_STR!(), ")+", r")$");
pub BLOCKDEVICE_NAME_REGEX = r"^(:?(:?h|s|x?v)d[a-z]+)|(:?nvme\d+n\d+)$";
pub ZPOOL_NAME_REGEX = r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$";
}
pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SYSTEMD_DATETIME_REGEX);
pub const IP_V4_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&IP_V4_REGEX);
pub const IP_V6_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&IP_V6_REGEX);
pub const IP_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&IP_REGEX);
pub const PVE_CONFIG_DIGEST_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SHA256_HEX_REGEX);
pub const CERT_FINGERPRINT_SHA256_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&CERT_FINGERPRINT_SHA256_REGEX);
pub const PROXMOX_SAFE_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_SAFE_ID_REGEX);
pub const SINGLE_LINE_COMMENT_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SINGLE_LINE_COMMENT_REGEX);
pub const HOSTNAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&HOSTNAME_REGEX);
pub const DNS_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DNS_NAME_REGEX);
pub const DNS_NAME_OR_IP_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DNS_NAME_OR_IP_REGEX);
pub const PASSWORD_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PASSWORD_REGEX);
pub const ACL_PATH_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&ACL_PATH_REGEX);
pub const NETWORK_INTERFACE_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_SAFE_ID_REGEX);
pub const CIDR_V4_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&CIDR_V4_REGEX);
pub const CIDR_V6_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&CIDR_V6_REGEX);
pub const CIDR_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&CIDR_REGEX);
pub const BLOCKDEVICE_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&BLOCKDEVICE_NAME_REGEX);
pub const PASSWORD_SCHEMA: Schema = StringSchema::new("Password.")
.format(&PASSWORD_FORMAT)
.min_length(1)
.max_length(1024)
.schema();
pub const PBS_PASSWORD_SCHEMA: Schema = StringSchema::new("User Password.")
.format(&PASSWORD_FORMAT)
.min_length(5)
.max_length(64)
.schema();
pub const CERT_FINGERPRINT_SHA256_SCHEMA: Schema = StringSchema::new(
"X509 certificate fingerprint (sha256)."
)
.format(&CERT_FINGERPRINT_SHA256_FORMAT)
.schema();
pub const PROXMOX_CONFIG_DIGEST_SCHEMA: Schema = StringSchema::new(r#"\
Prevent changes if current configuration file has different SHA256 digest.
This can be used to prevent concurrent modifications.
"#
)
.format(&PVE_CONFIG_DIGEST_FORMAT)
.schema();
pub const CHUNK_DIGEST_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SHA256_HEX_REGEX);
pub const CHUNK_DIGEST_SCHEMA: Schema = StringSchema::new("Chunk digest (SHA256).")
.format(&CHUNK_DIGEST_FORMAT)
.schema();
pub const NODE_SCHEMA: Schema = StringSchema::new("Node name (or 'localhost')")
.format(&ApiStringFormat::VerifyFn(|node| {
if node == "localhost" || node == proxmox::tools::nodename() {
Ok(())
} else {
bail!("no such node '{}'", node);
}
}))
.schema();
pub const SEARCH_DOMAIN_SCHEMA: Schema =
StringSchema::new("Search domain for host-name lookup.").schema();
pub const FIRST_DNS_SERVER_SCHEMA: Schema =
StringSchema::new("First name server IP address.")
.format(&IP_FORMAT)
.schema();
pub const SECOND_DNS_SERVER_SCHEMA: Schema =
StringSchema::new("Second name server IP address.")
.format(&IP_FORMAT)
.schema();
pub const THIRD_DNS_SERVER_SCHEMA: Schema =
StringSchema::new("Third name server IP address.")
.format(&IP_FORMAT)
.schema();
pub const IP_V4_SCHEMA: Schema =
StringSchema::new("IPv4 address.")
.format(&IP_V4_FORMAT)
.max_length(15)
.schema();
pub const IP_V6_SCHEMA: Schema =
StringSchema::new("IPv6 address.")
.format(&IP_V6_FORMAT)
.max_length(39)
.schema();
pub const IP_SCHEMA: Schema =
StringSchema::new("IP (IPv4 or IPv6) address.")
.format(&IP_FORMAT)
.max_length(39)
.schema();
pub const CIDR_V4_SCHEMA: Schema =
StringSchema::new("IPv4 address with netmask (CIDR notation).")
.format(&CIDR_V4_FORMAT)
.max_length(18)
.schema();
pub const CIDR_V6_SCHEMA: Schema =
StringSchema::new("IPv6 address with netmask (CIDR notation).")
.format(&CIDR_V6_FORMAT)
.max_length(43)
.schema();
pub const CIDR_SCHEMA: Schema =
StringSchema::new("IP address (IPv4 or IPv6) with netmask (CIDR notation).")
.format(&CIDR_FORMAT)
.max_length(43)
.schema();
pub const TIME_ZONE_SCHEMA: Schema = StringSchema::new(
"Time zone. The file '/usr/share/zoneinfo/zone.tab' contains the list of valid names.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
pub const ACL_PATH_SCHEMA: Schema = StringSchema::new(
"Access control path.")
.format(&ACL_PATH_FORMAT)
.min_length(1)
.max_length(128)
.schema();
pub const ACL_PROPAGATE_SCHEMA: Schema = BooleanSchema::new(
"Allow to propagate (inherit) permissions.")
.default(true)
.schema();
pub const ACL_UGID_TYPE_SCHEMA: Schema = StringSchema::new(
"Type of 'ugid' property.")
.format(&ApiStringFormat::Enum(&[
EnumEntry::new("user", "User"),
EnumEntry::new("group", "Group")]))
.schema();
pub const BACKUP_ARCHIVE_NAME_SCHEMA: Schema =
StringSchema::new("Backup archive name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.schema();
pub const BACKUP_TYPE_SCHEMA: Schema =
StringSchema::new("Backup type.")
.format(&ApiStringFormat::Enum(&[
EnumEntry::new("vm", "Virtual Machine Backup"),
EnumEntry::new("ct", "Container Backup"),
EnumEntry::new("host", "Host Backup")]))
.schema();
pub const BACKUP_ID_SCHEMA: Schema =
StringSchema::new("Backup ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.schema();
pub const BACKUP_TIME_SCHEMA: Schema =
IntegerSchema::new("Backup time (Unix epoch.)")
.minimum(1_547_797_308)
.schema();
pub const UPID_SCHEMA: Schema = StringSchema::new("Unique Process/Task ID.")
.max_length(256)
.schema();
pub const DATASTORE_SCHEMA: Schema = StringSchema::new("Datastore name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const SYNC_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run sync job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(crate::tools::systemd::time::verify_calendar_event))
.schema();
pub const GC_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run garbage collection job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(crate::tools::systemd::time::verify_calendar_event))
.schema();
pub const PRUNE_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run prune job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(crate::tools::systemd::time::verify_calendar_event))
.schema();
pub const REMOTE_ID_SCHEMA: Schema = StringSchema::new("Remote ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const JOB_ID_SCHEMA: Schema = StringSchema::new("Job ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const REMOVE_VANISHED_BACKUPS_SCHEMA: Schema = BooleanSchema::new(
"Delete vanished backups. This remove the local copy if the remote backup was deleted.")
.default(true)
.schema();
pub const SINGLE_LINE_COMMENT_SCHEMA: Schema = StringSchema::new("Comment (single line).")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.schema();
pub const HOSTNAME_SCHEMA: Schema = StringSchema::new("Hostname (as defined in RFC1123).")
.format(&HOSTNAME_FORMAT)
.schema();
pub const DNS_NAME_OR_IP_SCHEMA: Schema = StringSchema::new("DNS name or IP address.")
.format(&DNS_NAME_OR_IP_FORMAT)
.schema();
pub const BLOCKDEVICE_NAME_SCHEMA: Schema = StringSchema::new("Block device name (/sys/block/<name>).")
.format(&BLOCKDEVICE_NAME_FORMAT)
.min_length(3)
.max_length(64)
.schema();
// Complex type definitions
#[api(
properties: {
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
},
"last-backup": {
schema: BACKUP_TIME_SCHEMA,
},
"backup-count": {
type: Integer,
},
files: {
items: {
schema: BACKUP_ARCHIVE_NAME_SCHEMA
},
},
owner: {
type: Userid,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// Basic information about a backup group.
pub struct GroupListItem {
pub backup_type: String, // enum
pub backup_id: String,
pub last_backup: i64,
/// Number of contained snapshots
pub backup_count: u64,
/// List of contained archive files.
pub files: Vec<String>,
/// The owner of the group
#[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<Userid>,
}
#[api(
properties: {
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
},
"backup-time": {
schema: BACKUP_TIME_SCHEMA,
},
files: {
items: {
schema: BACKUP_ARCHIVE_NAME_SCHEMA
},
},
owner: {
type: Userid,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// Basic information about backup snapshot.
pub struct SnapshotListItem {
pub backup_type: String, // enum
pub backup_id: String,
pub backup_time: i64,
/// The first line from manifest "notes"
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
/// List of contained archive files.
pub files: Vec<BackupContent>,
/// Overall snapshot size (sum of all archive sizes).
#[serde(skip_serializing_if="Option::is_none")]
pub size: Option<u64>,
/// The owner of the snapshots group
#[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<Userid>,
}
#[api(
properties: {
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
},
"backup-time": {
schema: BACKUP_TIME_SCHEMA,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// Prune result.
pub struct PruneListItem {
pub backup_type: String, // enum
pub backup_id: String,
pub backup_time: i64,
/// Keep snapshot
pub keep: bool,
}
pub const PRUNE_SCHEMA_KEEP_DAILY: Schema = IntegerSchema::new(
"Number of daily backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_HOURLY: Schema = IntegerSchema::new(
"Number of hourly backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_LAST: Schema = IntegerSchema::new(
"Number of backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_MONTHLY: Schema = IntegerSchema::new(
"Number of monthly backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_WEEKLY: Schema = IntegerSchema::new(
"Number of weekly backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_YEARLY: Schema = IntegerSchema::new(
"Number of yearly backups to keep.")
.minimum(1)
.schema();
#[api(
properties: {
"filename": {
schema: BACKUP_ARCHIVE_NAME_SCHEMA,
},
"crypt-mode": {
type: CryptMode,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// Basic information about archive files inside a backup snapshot.
pub struct BackupContent {
pub filename: String,
/// Info if file is encrypted, signed, or neither.
#[serde(skip_serializing_if="Option::is_none")]
pub crypt_mode: Option<CryptMode>,
/// Archive size (from backup manifest).
#[serde(skip_serializing_if="Option::is_none")]
pub size: Option<u64>,
}
#[api(
properties: {
"upid": {
optional: true,
schema: UPID_SCHEMA,
},
},
)]
#[derive(Clone, Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// Garbage collection status.
pub struct GarbageCollectionStatus {
pub upid: Option<String>,
/// Number of processed index files.
pub index_file_count: usize,
/// Sum of bytes referred by index files.
pub index_data_bytes: u64,
/// Bytes used on disk.
pub disk_bytes: u64,
/// Chunks used on disk.
pub disk_chunks: usize,
/// Sum of removed bytes.
pub removed_bytes: u64,
/// Number of removed chunks.
pub removed_chunks: usize,
/// Sum of pending bytes (pending removal - kept for safety).
pub pending_bytes: u64,
/// Number of pending chunks (pending removal - kept for safety).
pub pending_chunks: usize,
}
impl Default for GarbageCollectionStatus {
fn default() -> Self {
GarbageCollectionStatus {
upid: None,
index_file_count: 0,
index_data_bytes: 0,
disk_bytes: 0,
disk_chunks: 0,
removed_bytes: 0,
removed_chunks: 0,
pending_bytes: 0,
pending_chunks: 0,
}
}
}
#[api()]
#[derive(Serialize, Deserialize)]
/// Storage space usage information.
pub struct StorageStatus {
/// Total space (bytes).
pub total: u64,
/// Used space (bytes).
pub used: u64,
/// Available space (bytes).
pub avail: u64,
}
#[api(
properties: {
upid: { schema: UPID_SCHEMA },
user: { type: Userid },
},
)]
#[derive(Serialize, Deserialize)]
/// Task properties.
pub struct TaskListItem {
pub upid: String,
/// The node name the task is running on.
pub node: String,
/// The Unix PID
pub pid: i64,
/// The task start time (Epoch)
pub pstart: u64,
/// The task start time (Epoch)
pub starttime: i64,
/// Worker type (arbitrary ASCII string)
pub worker_type: String,
/// Worker ID (arbitrary ASCII string)
pub worker_id: Option<String>,
/// The user who started the task
pub user: Userid,
/// The task end time (Epoch)
#[serde(skip_serializing_if="Option::is_none")]
pub endtime: Option<i64>,
/// Task end status
#[serde(skip_serializing_if="Option::is_none")]
pub status: Option<String>,
}
impl From<crate::server::TaskListInfo> for TaskListItem {
fn from(info: crate::server::TaskListInfo) -> Self {
let (endtime, status) = info
.state
.map_or_else(|| (None, None), |(a,b)| (Some(a), Some(b)));
TaskListItem {
upid: info.upid_str,
node: "localhost".to_string(),
pid: info.upid.pid as i64,
pstart: info.upid.pstart,
starttime: info.upid.starttime,
worker_type: info.upid.worker_type,
worker_id: info.upid.worker_id,
user: info.upid.userid,
endtime,
status,
}
}
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Node Power command type.
pub enum NodePowerCommand {
/// Restart the server
Reboot,
/// Shutdown the server
Shutdown,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Interface configuration method
pub enum NetworkConfigMethod {
/// Configuration is done manually using other tools
Manual,
/// Define interfaces with statically allocated addresses.
Static,
/// Obtain an address via DHCP
DHCP,
/// Define the loopback interface.
Loopback,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
#[allow(non_camel_case_types)]
#[repr(u8)]
/// Linux Bond Mode
pub enum LinuxBondMode {
/// Round-robin policy
balance_rr = 0,
/// Active-backup policy
active_backup = 1,
/// XOR policy
balance_xor = 2,
/// Broadcast policy
broadcast = 3,
/// IEEE 802.3ad Dynamic link aggregation
//#[serde(rename = "802.3ad")]
ieee802_3ad = 4,
/// Adaptive transmit load balancing
balance_tlb = 5,
/// Adaptive load balancing
balance_alb = 6,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Network interface type
pub enum NetworkInterfaceType {
/// Loopback
Loopback,
/// Physical Ethernet device
Eth,
/// Linux Bridge
Bridge,
/// Linux Bond
Bond,
/// Linux VLAN (eth.10)
Vlan,
/// Interface Alias (eth:1)
Alias,
/// Unknown interface type
Unknown,
}
pub const NETWORK_INTERFACE_NAME_SCHEMA: Schema = StringSchema::new("Network interface name.")
.format(&NETWORK_INTERFACE_FORMAT)
.min_length(1)
.max_length(libc::IFNAMSIZ-1)
.schema();
pub const NETWORK_INTERFACE_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Network interface list.", &NETWORK_INTERFACE_NAME_SCHEMA)
.schema();
pub const NETWORK_INTERFACE_LIST_SCHEMA: Schema = StringSchema::new(
"A list of network devices, comma separated.")
.format(&ApiStringFormat::PropertyString(&NETWORK_INTERFACE_ARRAY_SCHEMA))
.schema();
#[api(
properties: {
name: {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
},
"type": {
type: NetworkInterfaceType,
},
method: {
type: NetworkConfigMethod,
optional: true,
},
method6: {
type: NetworkConfigMethod,
optional: true,
},
cidr: {
schema: CIDR_V4_SCHEMA,
optional: true,
},
cidr6: {
schema: CIDR_V6_SCHEMA,
optional: true,
},
gateway: {
schema: IP_V4_SCHEMA,
optional: true,
},
gateway6: {
schema: IP_V6_SCHEMA,
optional: true,
},
options: {
description: "Option list (inet)",
type: Array,
items: {
description: "Optional attribute line.",
type: String,
},
},
options6: {
description: "Option list (inet6)",
type: Array,
items: {
description: "Optional attribute line.",
type: String,
},
},
comments: {
description: "Comments (inet, may span multiple lines)",
type: String,
optional: true,
},
comments6: {
description: "Comments (inet6, may span multiple lines)",
type: String,
optional: true,
},
bridge_ports: {
schema: NETWORK_INTERFACE_ARRAY_SCHEMA,
optional: true,
},
slaves: {
schema: NETWORK_INTERFACE_ARRAY_SCHEMA,
optional: true,
},
bond_mode: {
type: LinuxBondMode,
optional: true,
}
}
)]
#[derive(Debug, Serialize, Deserialize)]
/// Network Interface configuration
pub struct Interface {
/// Autostart interface
#[serde(rename = "autostart")]
pub autostart: bool,
/// Interface is active (UP)
pub active: bool,
/// Interface name
pub name: String,
/// Interface type
#[serde(rename = "type")]
pub interface_type: NetworkInterfaceType,
#[serde(skip_serializing_if="Option::is_none")]
pub method: Option<NetworkConfigMethod>,
#[serde(skip_serializing_if="Option::is_none")]
pub method6: Option<NetworkConfigMethod>,
#[serde(skip_serializing_if="Option::is_none")]
/// IPv4 address with netmask
pub cidr: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// IPv4 gateway
pub gateway: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// IPv6 address with netmask
pub cidr6: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// IPv6 gateway
pub gateway6: Option<String>,
#[serde(skip_serializing_if="Vec::is_empty")]
pub options: Vec<String>,
#[serde(skip_serializing_if="Vec::is_empty")]
pub options6: Vec<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub comments: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub comments6: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// Maximum Transmission Unit
pub mtu: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub bridge_ports: Option<Vec<String>>,
/// Enable bridge vlan support.
#[serde(skip_serializing_if="Option::is_none")]
pub bridge_vlan_aware: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub slaves: Option<Vec<String>>,
#[serde(skip_serializing_if="Option::is_none")]
pub bond_mode: Option<LinuxBondMode>,
}
// Regression tests
#[test]
fn test_cert_fingerprint_schema() -> Result<(), anyhow::Error> {
let schema = CERT_FINGERPRINT_SHA256_SCHEMA;
let invalid_fingerprints = [
"86:88:7c:be:26:77:a5:62:67:d9:06:f5:e4::61:3e:20:dc:cd:43:92:07:7f:fb:65:54:6c:ff:d2:96:36:f8",
"88:7C:BE:26:77:a5:62:67:D9:06:f5:e4:14:61:3e:20:dc:cd:43:92:07:7f:fb:65:54:6c:ff:d2:96:36:f8",
"86:88:7c:be:26:77:a5:62:67:d9:06:f5:e4::14:61:3e:20:dc:cd:43:92:07:7f:fb:65:54:6c:ff:d2:96:36:f8:ff",
"XX:88:7c:be:26:77:a5:62:67:d9:06:f5:e4::14:61:3e:20:dc:cd:43:92:07:7f:fb:65:54:6c:ff:d2:96:36:f8",
"86:88:Y4:be:26:77:a5:62:67:d9:06:f5:e4:14:61:3e:20:dc:cd:43:92:07:7f:fb:65:54:6c:ff:d2:96:36:f8",
"86:88:0:be:26:77:a5:62:67:d9:06:f5:e4:14:61:3e:20:dc:cd:43:92:07:7f:fb:65:54:6c:ff:d2:96:36:f8",
];
for fingerprint in invalid_fingerprints.iter() {
if let Ok(_) = parse_simple_value(fingerprint, &schema) {
bail!("test fingerprint '{}' failed - got Ok() while exception an error.", fingerprint);
}
}
let valid_fingerprints = [
"86:88:7c:be:26:77:a5:62:67:d9:06:f5:e4:14:61:3e:20:dc:cd:43:92:07:7f:fb:65:54:6c:ff:d2:96:36:f8",
"86:88:7C:BE:26:77:a5:62:67:D9:06:f5:e4:14:61:3e:20:dc:cd:43:92:07:7f:fb:65:54:6c:ff:d2:96:36:f8",
];
for fingerprint in valid_fingerprints.iter() {
let v = match parse_simple_value(fingerprint, &schema) {
Ok(v) => v,
Err(err) => {
bail!("unable to parse fingerprint '{}' - {}", fingerprint, err);
}
};
if v != serde_json::json!(fingerprint) {
bail!("unable to parse fingerprint '{}' - got wrong value {:?}", fingerprint, v);
}
}
Ok(())
}
#[test]
fn test_proxmox_user_id_schema() -> Result<(), anyhow::Error> {
let invalid_user_ids = [
"x", // too short
"xx", // too short
"xxx", // no realm
"xxx@", // no realm
"xx x@test", // contains space
"xx\nx@test", // contains control character
"x:xx@test", // contains collon
"xx/x@test", // contains slash
"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx@test", // too long
];
for name in invalid_user_ids.iter() {
if let Ok(_) = parse_simple_value(name, &Userid::API_SCHEMA) {
bail!("test userid '{}' failed - got Ok() while exception an error.", name);
}
}
let valid_user_ids = [
"xxx@y",
"name@y",
"xxx@test-it.com",
"xxx@_T_E_S_T-it.com",
"x_x-x.x@test-it.com",
];
for name in valid_user_ids.iter() {
let v = match parse_simple_value(name, &Userid::API_SCHEMA) {
Ok(v) => v,
Err(err) => {
bail!("unable to parse userid '{}' - {}", name, err);
}
};
if v != serde_json::json!(name) {
bail!("unable to parse userid '{}' - got wrong value {:?}", name, v);
}
}
Ok(())
}
#[api()]
#[derive(Copy, Clone, Serialize, Deserialize)]
#[serde(rename_all = "UPPERCASE")]
pub enum RRDMode {
/// Maximum
Max,
/// Average
Average,
}
#[api()]
#[repr(u64)]
#[derive(Copy, Clone, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum RRDTimeFrameResolution {
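// note: the discriminant values are the RRD resolution in seconds per data point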
/// 1 min => last 70 minutes
Hour = 60,
/// 30 min => last 35 hours
Day = 60*30,
/// 3 hours => about 8 days
Week = 60*180,
/// 12 hours => last 35 days
Month = 60*720,
/// 1 week => last 490 days
Year = 60*10080,
}
#[api()]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "PascalCase")]
/// Describes a package for which an update is available.
pub struct APTUpdateInfo {
/// Package name
pub package: String,
/// Package title
pub title: String,
/// Package architecture
pub arch: String,
/// Human readable package description
pub description: String,
/// New version to be updated to
pub version: String,
/// Old version currently installed
pub old_version: String,
/// Package origin
pub origin: String,
/// Package priority in human-readable form
pub priority: String,
/// Package section
pub section: String,
/// URL under which the package's changelog can be retrieved
pub change_log_url: String,
}

420
src/api2/types/userid.rs Normal file
View File

@ -0,0 +1,420 @@
//! Types for user handling.
//!
//! We have [`Username`]s and [`Realm`]s. To uniquely identify a user, they must be combined into a [`Userid`].
//!
//! Since they're all string types, they're organized as follows:
//!
//! * [`Username`]: an owned user name. Internally a `String`.
//! * [`UsernameRef`]: a borrowed user name. Pairs with a `Username` the same way a `str` pairs
//! with `String`, meaning you can only make references to it.
//! * [`Realm`]: an owned realm (`String` equivalent).
//! * [`RealmRef`]: a borrowed realm (`str` equivalent).
//! * [`Userid`]: an owned user id (`"user@realm"`). Note that this does not have a separate
//! borrowed type.
//!
//! Note that `Username`s are not unique, therefore they do not implement `Eq` and cannot be
//! compared directly. If a direct comparison is really required, they can be compared as strings
//! via the `as_str()` method. [`Realm`]s and [`Userid`]s on the other hand can be compared with
//! each other, as in those two cases the comparison has meaning.
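//!
//! A minimal usage sketch (added for illustration; the `"user@pbs"` id is an
//! assumption, not an example taken from this crate):
//!
//! ```ignore
//! let id: Userid = "user@pbs".parse()?;
//! assert_eq!(id.name().as_str(), "user");
//! assert_eq!(id.realm().as_str(), "pbs");
//! let owned: Username = id.name().to_owned();
//! ```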
use std::borrow::Borrow;
use std::convert::TryFrom;
use std::fmt;
use anyhow::{bail, format_err, Error};
use lazy_static::lazy_static;
use serde::{Deserialize, Serialize};
use proxmox::api::api;
use proxmox::api::schema::{ApiStringFormat, Schema, StringSchema};
use proxmox::const_regex;
// we only allow a limited set of characters
// colon is not allowed, because we store usernames in
// colon-separated lists!
// slash is not allowed because it is used as pve API delimiter
// also see "man useradd"
macro_rules! USER_NAME_REGEX_STR { () => (r"(?:[^\s:/[:cntrl:]]+)") }
macro_rules! GROUP_NAME_REGEX_STR { () => (USER_NAME_REGEX_STR!()) }
macro_rules! USER_ID_REGEX_STR { () => (concat!(USER_NAME_REGEX_STR!(), r"@", PROXMOX_SAFE_ID_REGEX_STR!())) }
const_regex! {
pub PROXMOX_USER_NAME_REGEX = concat!(r"^", USER_NAME_REGEX_STR!(), r"$");
pub PROXMOX_USER_ID_REGEX = concat!(r"^", USER_ID_REGEX_STR!(), r"$");
pub PROXMOX_GROUP_ID_REGEX = concat!(r"^", GROUP_NAME_REGEX_STR!(), r"$");
}
pub const PROXMOX_USER_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_USER_NAME_REGEX);
pub const PROXMOX_USER_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_USER_ID_REGEX);
pub const PROXMOX_GROUP_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_GROUP_ID_REGEX);
pub const PROXMOX_GROUP_ID_SCHEMA: Schema = StringSchema::new("Group ID")
.format(&PROXMOX_GROUP_ID_FORMAT)
.min_length(3)
.max_length(64)
.schema();
pub const PROXMOX_AUTH_REALM_STRING_SCHEMA: StringSchema =
StringSchema::new("Authentication domain ID")
.format(&super::PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32);
pub const PROXMOX_AUTH_REALM_SCHEMA: Schema = PROXMOX_AUTH_REALM_STRING_SCHEMA.schema();
#[api(
type: String,
format: &PROXMOX_USER_NAME_FORMAT,
)]
/// The user name part of a user id.
///
/// This alone does NOT uniquely identify the user and therefore does not implement `Eq`. In order
/// to compare user names directly, they need to be explicitly compared as strings by calling
/// `.as_str()`.
///
/// ```compile_fail
/// fn test(a: Username, b: Username) -> bool {
/// a == b // illegal and does not compile
/// }
/// ```
#[derive(Clone, Debug, Hash, Deserialize, Serialize)]
pub struct Username(String);
/// A reference to a user name part of a user id. This alone does NOT uniquely identify the user.
///
/// This is like a `str` to the `String` of a [`Username`].
#[derive(Debug, Hash)]
pub struct UsernameRef(str);
#[doc(hidden)]
/// ```compile_fail
/// let a: Username = unsafe { std::mem::zeroed() };
/// let b: Username = unsafe { std::mem::zeroed() };
/// let _ = <Username as PartialEq>::eq(&a, &b);
/// ```
///
/// ```compile_fail
/// let a: &UsernameRef = unsafe { std::mem::zeroed() };
/// let b: &UsernameRef = unsafe { std::mem::zeroed() };
/// let _ = <&UsernameRef as PartialEq>::eq(a, b);
/// ```
///
/// ```compile_fail
/// let a: &UsernameRef = unsafe { std::mem::zeroed() };
/// let b: &UsernameRef = unsafe { std::mem::zeroed() };
/// let _ = <&UsernameRef as PartialEq>::eq(&a, &b);
/// ```
struct _AssertNoEqImpl;
impl UsernameRef {
fn new(s: &str) -> &Self {
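// relies on `UsernameRef` being a thin newtype over `str` with the same
// layout (the same borrowed-wrapper pattern std uses for `Path`/`OsStr`)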
unsafe { &*(s as *const str as *const UsernameRef) }
}
pub fn as_str(&self) -> &str {
&self.0
}
}
impl std::ops::Deref for Username {
type Target = UsernameRef;
fn deref(&self) -> &UsernameRef {
self.borrow()
}
}
impl Borrow<UsernameRef> for Username {
fn borrow(&self) -> &UsernameRef {
UsernameRef::new(self.as_str())
}
}
impl AsRef<UsernameRef> for Username {
fn as_ref(&self) -> &UsernameRef {
UsernameRef::new(self.as_str())
}
}
impl ToOwned for UsernameRef {
type Owned = Username;
fn to_owned(&self) -> Self::Owned {
Username(self.0.to_owned())
}
}
impl TryFrom<String> for Username {
type Error = Error;
fn try_from(s: String) -> Result<Self, Error> {
if !PROXMOX_USER_NAME_REGEX.is_match(&s) {
bail!("invalid user name");
}
Ok(Self(s))
}
}
impl<'a> TryFrom<&'a str> for &'a UsernameRef {
type Error = Error;
fn try_from(s: &'a str) -> Result<&'a UsernameRef, Error> {
if !PROXMOX_USER_NAME_REGEX.is_match(s) {
bail!("invalid name in user id");
}
Ok(UsernameRef::new(s))
}
}
#[api(schema: PROXMOX_AUTH_REALM_SCHEMA)]
/// An authentication realm.
#[derive(Clone, Debug, Eq, PartialEq, Hash, Deserialize, Serialize)]
pub struct Realm(String);
/// A reference to an authentication realm.
///
/// This is like a `str` to the `String` of a `Realm`.
#[derive(Debug, Hash, Eq, PartialEq)]
pub struct RealmRef(str);
impl RealmRef {
fn new(s: &str) -> &Self {
unsafe { &*(s as *const str as *const RealmRef) }
}
pub fn as_str(&self) -> &str {
&self.0
}
}
impl std::ops::Deref for Realm {
type Target = RealmRef;
fn deref(&self) -> &RealmRef {
self.borrow()
}
}
impl Borrow<RealmRef> for Realm {
fn borrow(&self) -> &RealmRef {
RealmRef::new(self.as_str())
}
}
impl AsRef<RealmRef> for Realm {
fn as_ref(&self) -> &RealmRef {
RealmRef::new(self.as_str())
}
}
impl ToOwned for RealmRef {
type Owned = Realm;
fn to_owned(&self) -> Self::Owned {
Realm(self.0.to_owned())
}
}
impl TryFrom<String> for Realm {
type Error = Error;
fn try_from(s: String) -> Result<Self, Error> {
PROXMOX_AUTH_REALM_STRING_SCHEMA.check_constraints(&s)
.map_err(|_| format_err!("invalid realm"))?;
Ok(Self(s))
}
}
impl<'a> TryFrom<&'a str> for &'a RealmRef {
type Error = Error;
fn try_from(s: &'a str) -> Result<&'a RealmRef, Error> {
PROXMOX_AUTH_REALM_STRING_SCHEMA.check_constraints(s)
.map_err(|_| format_err!("invalid realm"))?;
Ok(RealmRef::new(s))
}
}
impl PartialEq<str> for Realm {
fn eq(&self, rhs: &str) -> bool {
self.0 == rhs
}
}
impl PartialEq<&str> for Realm {
fn eq(&self, rhs: &&str) -> bool {
self.0 == *rhs
}
}
impl PartialEq<str> for RealmRef {
fn eq(&self, rhs: &str) -> bool {
self.0 == *rhs
}
}
impl PartialEq<&str> for RealmRef {
fn eq(&self, rhs: &&str) -> bool {
self.0 == **rhs
}
}
impl PartialEq<RealmRef> for Realm {
fn eq(&self, rhs: &RealmRef) -> bool {
self.0 == &rhs.0
}
}
impl PartialEq<Realm> for RealmRef {
fn eq(&self, rhs: &Realm) -> bool {
self.0 == rhs.0
}
}
impl PartialEq<Realm> for &RealmRef {
fn eq(&self, rhs: &Realm) -> bool {
(*self).0 == rhs.0
}
}
/// A complete user id consisting of a user name and a realm.
#[derive(Clone, Debug, Hash)]
pub struct Userid {
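// holds the full "name@realm" string; `name_len` is the byte length of the
// name part, so `name()` and `realm()` can hand out borrowed slices of `data`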
data: String,
name_len: usize,
//name: Username,
//realm: Realm,
}
impl Userid {
pub const API_SCHEMA: Schema = StringSchema::new("User ID")
.format(&PROXMOX_USER_ID_FORMAT)
.min_length(3)
.max_length(64)
.schema();
const fn new(data: String, name_len: usize) -> Self {
Self { data, name_len }
}
pub fn name(&self) -> &UsernameRef {
UsernameRef::new(&self.data[..self.name_len])
}
pub fn realm(&self) -> &RealmRef {
RealmRef::new(&self.data[(self.name_len + 1)..])
}
pub fn as_str(&self) -> &str {
&self.data
}
/// Get the "backup@pam" user id.
pub fn backup_userid() -> &'static Self {
&*BACKUP_USERID
}
/// Get the "root@pam" user id.
pub fn root_userid() -> &'static Self {
&*ROOT_USERID
}
}
lazy_static! {
pub static ref BACKUP_USERID: Userid = Userid::new("backup@pam".to_string(), 6);
pub static ref ROOT_USERID: Userid = Userid::new("root@pam".to_string(), 4);
}
impl Eq for Userid {}
impl PartialEq for Userid {
fn eq(&self, rhs: &Self) -> bool {
self.data == rhs.data && self.name_len == rhs.name_len
}
}
impl From<(Username, Realm)> for Userid {
fn from(parts: (Username, Realm)) -> Self {
Self::from((parts.0.as_ref(), parts.1.as_ref()))
}
}
impl From<(&UsernameRef, &RealmRef)> for Userid {
fn from(parts: (&UsernameRef, &RealmRef)) -> Self {
let data = format!("{}@{}", parts.0.as_str(), parts.1.as_str());
let name_len = parts.0.as_str().len();
Self { data, name_len }
}
}
impl fmt::Display for Userid {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
self.data.fmt(f)
}
}
impl std::str::FromStr for Userid {
type Err = Error;
fn from_str(id: &str) -> Result<Self, Error> {
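// split on the *last* '@': realms (PROXMOX_SAFE_ID) cannot contain one,
// while the user-name part may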
let (name, realm) = match id.as_bytes().iter().rposition(|&b| b == b'@') {
Some(pos) => (&id[..pos], &id[(pos + 1)..]),
None => bail!("not a valid user id"),
};
PROXMOX_AUTH_REALM_STRING_SCHEMA.check_constraints(realm)
.map_err(|_| format_err!("invalid realm in user id"))?;
Ok(Self::from((UsernameRef::new(name), RealmRef::new(realm))))
}
}
impl TryFrom<String> for Userid {
type Error = Error;
fn try_from(data: String) -> Result<Self, Error> {
let name_len = data
.as_bytes()
.iter()
.rposition(|&b| b == b'@')
.ok_or_else(|| format_err!("not a valid user id"))?;
PROXMOX_AUTH_REALM_STRING_SCHEMA.check_constraints(&data[(name_len + 1)..])
.map_err(|_| format_err!("invalid realm in user id"))?;
Ok(Self { data, name_len })
}
}
impl PartialEq<str> for Userid {
fn eq(&self, rhs: &str) -> bool {
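// compare piecewise (length guard, name prefix, '@' separator, realm
// suffix) to avoid allocating a combined "name@realm" string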
rhs.len() > self.name_len + 2 // make sure range access below is allowed
&& rhs.starts_with(self.name().as_str())
&& rhs.as_bytes()[self.name_len] == b'@'
&& &rhs[(self.name_len + 1)..] == self.realm().as_str()
}
}
impl PartialEq<&str> for Userid {
fn eq(&self, rhs: &&str) -> bool {
*self == **rhs
}
}
impl PartialEq<String> for Userid {
fn eq(&self, rhs: &String) -> bool {
self == rhs.as_str()
}
}
proxmox::forward_deserialize_to_from_str!(Userid);
proxmox::forward_serialize_to_display!(Userid);
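// i.e. a Userid (de)serializes as its plain "user@realm" string form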

View File

@ -1,7 +1,7 @@
use failure::*;
use anyhow::{Error};
use serde_json::{json, Value};
use proxmox::api::{ApiHandler, ApiMethod, Router, RpcEnvironment};
use proxmox::api::{ApiHandler, ApiMethod, Router, RpcEnvironment, Permission};
use proxmox::api::schema::ObjectSchema;
pub const PROXMOX_PKG_VERSION: &str =
@ -31,6 +31,6 @@ pub const ROUTER: Router = Router::new()
&ApiMethod::new(
&ApiHandler::Sync(&get_version),
&ObjectSchema::new("Proxmox Backup Server API version.", &[])
)
).access(None, &Permission::Anybody)
);

153
src/auth.rs Normal file
View File

@ -0,0 +1,153 @@
//! Proxmox Backup Server Authentication
//!
//! This library contains helpers to authenticate users.
use std::process::{Command, Stdio};
use std::io::Write;
use std::ffi::{CString, CStr};
use base64;
use anyhow::{bail, format_err, Error};
use serde_json::json;
use crate::api2::types::{Userid, UsernameRef, RealmRef};
pub trait ProxmoxAuthenticator {
fn authenticate_user(&self, username: &UsernameRef, password: &str) -> Result<(), Error>;
fn store_password(&self, username: &UsernameRef, password: &str) -> Result<(), Error>;
}
pub struct PAM();
impl ProxmoxAuthenticator for PAM {
fn authenticate_user(&self, username: &UsernameRef, password: &str) -> Result<(), Error> {
let mut auth = pam::Authenticator::with_password("proxmox-backup-auth").unwrap();
auth.get_handler().set_credentials(username.as_str(), password);
auth.authenticate()?;
return Ok(());
}
fn store_password(&self, username: &UsernameRef, password: &str) -> Result<(), Error> {
let mut child = Command::new("passwd")
.arg(username.as_str())
.stdin(Stdio::piped())
.stderr(Stdio::piped())
.spawn()
.map_err(|err| format_err!(
"unable to set password for '{}' - execute passwd failed: {}",
username.as_str(),
err,
))?;
// Note: passwd reads password twice from stdin (for verify)
writeln!(child.stdin.as_mut().unwrap(), "{}\n{}", password, password)?;
let output = child
.wait_with_output()
.map_err(|err| format_err!(
"unable to set password for '{}' - wait failed: {}",
username.as_str(),
err,
))?;
if !output.status.success() {
bail!(
"unable to set password for '{}' - {}",
username.as_str(),
String::from_utf8_lossy(&output.stderr),
);
}
Ok(())
}
}
pub struct PBS();
pub fn crypt(password: &[u8], salt: &str) -> Result<String, Error> {
#[link(name="crypt")]
extern "C" {
#[link_name = "crypt"]
fn __crypt(key: *const libc::c_char, salt: *const libc::c_char) -> * mut libc::c_char;
}
let salt = CString::new(salt)?;
let password = CString::new(password)?;
let res = unsafe {
CStr::from_ptr(
__crypt(
password.as_c_str().as_ptr(),
salt.as_c_str().as_ptr()
)
)
};
Ok(String::from(res.to_str()?))
}
pub fn encrypt_pw(password: &str) -> Result<String, Error> {
let salt = proxmox::sys::linux::random_data(8)?;
let salt = format!("$5${}$", base64::encode_config(&salt, base64::CRYPT));
crypt(password.as_bytes(), &salt)
}
pub fn verify_crypt_pw(password: &str, enc_password: &str) -> Result<(), Error> {
let verify = crypt(password.as_bytes(), enc_password)?;
if &verify != enc_password {
bail!("invalid credentials");
}
Ok(())
}
const SHADOW_CONFIG_FILENAME: &str = configdir!("/shadow.json");
impl ProxmoxAuthenticator for PBS {
fn authenticate_user(&self, username: &UsernameRef, password: &str) -> Result<(), Error> {
let data = proxmox::tools::fs::file_get_json(SHADOW_CONFIG_FILENAME, Some(json!({})))?;
match data[username.as_str()].as_str() {
None => bail!("no password set"),
Some(enc_password) => verify_crypt_pw(password, enc_password)?,
}
Ok(())
}
fn store_password(&self, username: &UsernameRef, password: &str) -> Result<(), Error> {
let enc_password = encrypt_pw(password)?;
let mut data = proxmox::tools::fs::file_get_json(SHADOW_CONFIG_FILENAME, Some(json!({})))?;
data[username.as_str()] = enc_password.into();
let mode = nix::sys::stat::Mode::from_bits_truncate(0o0600);
let options = proxmox::tools::fs::CreateOptions::new()
.perm(mode)
.owner(nix::unistd::ROOT)
.group(nix::unistd::Gid::from_raw(0));
let data = serde_json::to_vec_pretty(&data)?;
proxmox::tools::fs::replace_file(SHADOW_CONFIG_FILENAME, &data, options)?;
Ok(())
}
}
/// Look up the authenticator for the specified realm
pub fn lookup_authenticator(realm: &RealmRef) -> Result<Box<dyn ProxmoxAuthenticator>, Error> {
match realm.as_str() {
"pam" => Ok(Box::new(PAM())),
"pbs" => Ok(Box::new(PBS())),
_ => bail!("unknown realm '{}'", realm.as_str()),
}
}
/// Authenticate users
pub fn authenticate_user(userid: &Userid, password: &str) -> Result<(), Error> {
lookup_authenticator(userid.realm())?
.authenticate_user(userid.name(), password)
}
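The verify_crypt_pw trick above works because a crypt(3)-encoded string embeds its own salt: passing the stored string back in as the salt argument re-derives the identical encoding exactly when the password matches. A self-contained toy illustration with a stand-in digest (not real cryptography, and not libc's crypt):

// FNV-1a as a stand-in digest; assumes a "$5$<salt>$<hash>" layout.
fn toy_crypt(password: &str, setting: &str) -> String {
    // like crypt(3), only the "$<id>$<salt>$" prefix of `setting` matters
    let mut parts = setting.splitn(4, '$');
    parts.next(); // empty segment before the leading '$'
    let id = parts.next().unwrap_or("");
    let salt = parts.next().unwrap_or("");
    let digest = format!("{}{}", salt, password).bytes().fold(
        0xcbf29ce484222325u64,
        |h, b| (h ^ b as u64).wrapping_mul(0x100000001b3),
    );
    format!("${}${}${:016x}", id, salt, digest)
}

fn verify(password: &str, stored: &str) -> bool {
    // the stored hash doubles as the salt input
    toy_crypt(password, stored) == stored
}

fn main() {
    let stored = toy_crypt("hunter2", "$5$abcdefgh$");
    assert!(verify("hunter2", &stored));
    assert!(!verify("wrong password", &stored));
}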

View File

@ -1,4 +1,4 @@
use failure::*;
use anyhow::{bail, format_err, Error};
use lazy_static::lazy_static;
use openssl::rsa::{Rsa};
@ -10,14 +10,17 @@ use std::path::PathBuf;
use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
use proxmox::try_block;
use crate::api2::types::Userid;
use crate::tools::epoch_now_u64;
fn compute_csrf_secret_digest(
timestamp: i64,
secret: &[u8],
username: &str,
userid: &Userid,
) -> String {
let mut hasher = sha::Sha256::new();
let data = format!("{:08X}:{}:", timestamp, username);
let data = format!("{:08X}:{}:", timestamp, userid);
hasher.update(data.as_bytes());
hasher.update(secret);
@ -26,20 +29,19 @@ fn compute_csrf_secret_digest(
pub fn assemble_csrf_prevention_token(
secret: &[u8],
username: &str,
userid: &Userid,
) -> String {
let epoch = std::time::SystemTime::now().duration_since(
std::time::SystemTime::UNIX_EPOCH).unwrap().as_secs() as i64;
let epoch = epoch_now_u64().unwrap() as i64;
let digest = compute_csrf_secret_digest(epoch, secret, username);
let digest = compute_csrf_secret_digest(epoch, secret, userid);
format!("{:08X}:{}", epoch, digest)
}
pub fn verify_csrf_prevention_token(
secret: &[u8],
username: &str,
userid: &Userid,
token: &str,
min_age: i64,
max_age: i64,
@ -61,14 +63,13 @@ pub fn verify_csrf_prevention_token(
let ttime = i64::from_str_radix(timestamp, 16).
map_err(|err| format_err!("timestamp format error - {}", err))?;
let digest = compute_csrf_secret_digest(ttime, secret, username);
let digest = compute_csrf_secret_digest(ttime, secret, userid);
if digest != sig {
bail!("invalid signature.");
}
let now = std::time::SystemTime::now().duration_since(
std::time::SystemTime::UNIX_EPOCH)?.as_secs() as i64;
let now = epoch_now_u64()? as i64;
let age = now - ttime;
if age < min_age {
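The token format is "{:08X}:{digest}", so verification splits on the colon, re-derives the digest from the embedded timestamp and the server secret, and then age-checks it. A runnable sketch with a stand-in digest in place of SHA-256 (helper names here are illustrative, not the module's actual private API):

fn toy_digest(timestamp: i64, secret: &[u8], userid: &str) -> String {
    // mirrors the "{:08X}:{userid}:" framing hashed together with the secret
    let data = format!("{:08X}:{}:", timestamp, userid);
    let h = data.bytes().chain(secret.iter().copied()).fold(
        0xcbf29ce484222325u64,
        |h, b| (h ^ b as u64).wrapping_mul(0x100000001b3),
    );
    format!("{:016x}", h)
}

fn main() {
    let secret = b"server secret";
    let epoch = 1_596_000_000i64;
    let token = format!("{:08X}:{}", epoch, toy_digest(epoch, secret, "root@pam"));
    // verification: split, re-derive, compare, then age-check the timestamp
    let (ts_hex, sig) = token.split_once(':').expect("malformed token");
    let ts = i64::from_str_radix(ts_hex, 16).unwrap();
    assert_eq!(sig, toy_digest(ts, secret, "root@pam"));
}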

View File

@ -40,21 +40,21 @@
//!
//! Acquire shared lock for ChunkStore (process wide).
//!
//! Note: When creating .idx files, we create temporary (.tmp) file,
//! Note: When creating .idx files, we create a temporary (.tmp) file,
//! then do an atomic rename ...
//!
//!
//! * Garbage Collect:
//!
//! Acquire exclusive lock for ChunkStore (process wide). If we have
//! already an shared lock for ChunkStore, try to updraged that
//! already a shared lock for the ChunkStore, try to upgrade that
//! lock.
//!
//!
//! * Server Restart
//!
//! Try to abort running garbage collection to release exclusive
//! ChunkStore lock asap. Start new service with existing listening
//! Try to abort the running garbage collection to release exclusive
//! ChunkStore locks ASAP. Start the new service with the existing listening
//! socket.
//!
//!
@ -62,10 +62,10 @@
//!
//! Deleting backups is as easy as deleting the corresponding .idx
//! files. Unfortunately, this does not free up any storage, because
//! those files just contains references to chunks.
//! those files just contain references to chunks.
//!
//! To free up some storage, we run a garbage collection process at
//! regular intervals. The collector uses an mark and sweep
//! regular intervals. The collector uses a mark and sweep
//! approach. In the first phase, it scans all .idx files to mark used
//! chunks. The second phase then removes all unmarked chunks from the
//! store.
@ -90,12 +90,12 @@
//! amount of time ago (by default 24h). So we may only delete chunks
//! with `atime` older than 24 hours.
//!
//! Another problem arise from running backups. The mark phase does
//! Another problem arises from running backups. The mark phase does
//! not find any chunks from those backups, because there is no .idx
//! file for them (created after the backup). Chunks created or
//! touched by those backups may have an `atime` as old as the start
//! time of those backup. Please not that the backup start time may
//! predate the GC start time. Se we may only delete chunk older than
//! time of those backups. Please note that the backup start time may
//! predate the GC start time. So we may only delete chunks older than
//! the start time of those running backup jobs.
//!
//!
@ -103,7 +103,7 @@
//!
//! Not sure if this is better. TODO
use failure::*;
use anyhow::{bail, Error};
// Note: .pcat1 => Proxmox Catalog Format version 1
pub const CATALOG_NAME: &str = "catalog.pcat1.didx";
@ -198,5 +198,11 @@ pub use prune::*;
mod datastore;
pub use datastore::*;
mod verify;
pub use verify::*;
mod catalog_shell;
pub use catalog_shell::*;
mod async_index_reader;
pub use async_index_reader::*;
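The module docs above describe garbage collection as mark and sweep over chunk digests. A minimal sketch of the two-phase idea with toy u64 digests (no atime handling, unlike the real sweep):

use std::collections::HashSet;

fn garbage_collect(indices: &[Vec<u64>], store: &mut Vec<u64>) {
    // phase 1: mark every digest referenced by any index file
    let marked: HashSet<u64> = indices.iter().flatten().copied().collect();
    // phase 2: sweep everything unmarked from the chunk store
    store.retain(|digest| marked.contains(digest));
}

fn main() {
    let indices = vec![vec![1, 2], vec![2, 3]];
    let mut store = vec![1, 2, 3, 4];
    garbage_collect(&indices, &mut store);
    assert_eq!(store, vec![1, 2, 3]); // chunk 4 was unreferenced
}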

View File

@ -0,0 +1,204 @@
use std::future::Future;
use std::task::{Poll, Context};
use std::pin::Pin;
use std::io::SeekFrom;
use anyhow::Error;
use futures::future::FutureExt;
use futures::ready;
use tokio::io::{AsyncRead, AsyncSeek};
use proxmox::sys::error::io_err_other;
use proxmox::io_format_err;
use super::IndexFile;
use super::read_chunk::AsyncReadChunk;
use super::index::ChunkReadInfo;
enum AsyncIndexReaderState<S> {
NoData,
WaitForData(Pin<Box<dyn Future<Output = Result<(S, Vec<u8>), Error>> + Send + 'static>>),
HaveData,
}
pub struct AsyncIndexReader<S, I: IndexFile> {
store: Option<S>,
index: I,
read_buffer: Vec<u8>,
current_chunk_offset: u64,
current_chunk_idx: usize,
current_chunk_info: Option<ChunkReadInfo>,
position: u64,
seek_to_pos: i64,
state: AsyncIndexReaderState<S>,
}
// ok because the only public interface operates on &mut Self
unsafe impl<S: Sync, I: IndexFile + Sync> Sync for AsyncIndexReader<S, I> {}
impl<S: AsyncReadChunk, I: IndexFile> AsyncIndexReader<S, I> {
pub fn new(index: I, store: S) -> Self {
Self {
store: Some(store),
index,
read_buffer: Vec::with_capacity(1024 * 1024),
current_chunk_offset: 0,
current_chunk_idx: 0,
current_chunk_info: None,
position: 0,
seek_to_pos: 0,
state: AsyncIndexReaderState::NoData,
}
}
}
impl<S, I> AsyncRead for AsyncIndexReader<S, I>
where
S: AsyncReadChunk + Unpin + Sync + 'static,
I: IndexFile + Unpin,
{
fn poll_read(
self: Pin<&mut Self>,
cx: &mut Context,
buf: &mut [u8],
) -> Poll<tokio::io::Result<usize>> {
let this = Pin::get_mut(self);
loop {
match &mut this.state {
AsyncIndexReaderState::NoData => {
let (idx, offset) = if this.current_chunk_info.is_some() &&
this.position == this.current_chunk_info.as_ref().unwrap().range.end
{
// optimization for sequential chunk read
let next_idx = this.current_chunk_idx + 1;
(next_idx, 0)
} else {
match this.index.chunk_from_offset(this.position) {
Some(res) => res,
None => return Poll::Ready(Ok(0))
}
};
if idx >= this.index.index_count() {
return Poll::Ready(Ok(0));
}
let info = this
.index
.chunk_info(idx)
.ok_or(io_format_err!("could not get digest"))?;
this.current_chunk_offset = offset;
this.current_chunk_idx = idx;
let old_info = this.current_chunk_info.replace(info.clone());
if let Some(old_info) = old_info {
if old_info.digest == info.digest {
// hit, chunk is currently in cache
this.state = AsyncIndexReaderState::HaveData;
continue;
}
}
// miss, need to download new chunk
let store = match this.store.take() {
Some(store) => store,
None => {
return Poll::Ready(Err(io_format_err!("could not find store")));
}
};
let future = async move {
store.read_chunk(&info.digest)
.await
.map(move |x| (store, x))
};
this.state = AsyncIndexReaderState::WaitForData(future.boxed());
}
AsyncIndexReaderState::WaitForData(ref mut future) => {
match ready!(future.as_mut().poll(cx)) {
Ok((store, mut chunk_data)) => {
this.read_buffer.clear();
this.read_buffer.append(&mut chunk_data);
this.state = AsyncIndexReaderState::HaveData;
this.store = Some(store);
}
Err(err) => {
return Poll::Ready(Err(io_err_other(err)));
}
};
}
AsyncIndexReaderState::HaveData => {
let offset = this.current_chunk_offset as usize;
let len = this.read_buffer.len();
let n = if len - offset < buf.len() {
len - offset
} else {
buf.len()
};
buf[0..n].copy_from_slice(&this.read_buffer[offset..(offset + n)]);
this.position += n as u64;
if offset + n == len {
this.state = AsyncIndexReaderState::NoData;
} else {
this.current_chunk_offset += n as u64;
this.state = AsyncIndexReaderState::HaveData;
}
return Poll::Ready(Ok(n));
}
}
}
}
}
impl<S, I> AsyncSeek for AsyncIndexReader<S, I>
where
S: AsyncReadChunk + Unpin + Sync + 'static,
I: IndexFile + Unpin,
{
fn start_seek(
self: Pin<&mut Self>,
_cx: &mut Context<'_>,
pos: SeekFrom,
) -> Poll<tokio::io::Result<()>> {
let this = Pin::get_mut(self);
this.seek_to_pos = match pos {
SeekFrom::Start(offset) => {
offset as i64
},
SeekFrom::End(offset) => {
this.index.index_bytes() as i64 + offset
},
SeekFrom::Current(offset) => {
this.position as i64 + offset
}
};
Poll::Ready(Ok(()))
}
fn poll_complete(
self: Pin<&mut Self>,
_cx: &mut Context<'_>,
) -> Poll<tokio::io::Result<u64>> {
let this = Pin::get_mut(self);
let index_bytes = this.index.index_bytes();
if this.seek_to_pos < 0 {
return Poll::Ready(Err(io_format_err!("cannot seek to negative values")));
} else if this.seek_to_pos > index_bytes as i64 {
this.position = index_bytes;
} else {
this.position = this.seek_to_pos as u64;
}
// even if seeking within one chunk, we need to go to NoData to
// recalculate the current_chunk_offset (data is cached anyway)
this.state = AsyncIndexReaderState::NoData;
Poll::Ready(Ok(this.position))
}
}
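The reader above caches one decoded chunk and only schedules a new fetch when the read position leaves it; a seek merely resets the state to NoData so the chunk offset is recomputed. A synchronous miniature of that caching logic (toy fixed-size chunks instead of the real AsyncReadChunk and IndexFile traits):

struct MiniReader {
    chunks: Vec<Vec<u8>>,              // stand-in for chunk store + index
    current: Option<(usize, Vec<u8>)>, // (chunk index, cached chunk data)
    position: u64,
}

impl MiniReader {
    fn read(&mut self, buf: &mut [u8]) -> usize {
        let chunk_size = self.chunks[0].len() as u64; // toy: fixed-size chunks
        let idx = (self.position / chunk_size) as usize;
        if idx >= self.chunks.len() {
            return 0; // EOF, like Poll::Ready(Ok(0)) above
        }
        // fetch only on a cache miss, like AsyncIndexReaderState::WaitForData
        if self.current.as_ref().map(|(i, _)| *i) != Some(idx) {
            self.current = Some((idx, self.chunks[idx].clone()));
        }
        let (_, data) = self.current.as_ref().unwrap();
        let offset = (self.position % chunk_size) as usize;
        let n = buf.len().min(data.len() - offset);
        buf[..n].copy_from_slice(&data[offset..offset + n]);
        self.position += n as u64;
        n
    }
}

fn main() {
    let mut r = MiniReader {
        chunks: vec![b"aaaa".to_vec(), b"bbbb".to_vec()],
        current: None,
        position: 2, // as if we had seeked into the first chunk
    };
    let mut buf = [0u8; 4];
    let n = r.read(&mut buf);
    assert_eq!(&buf[..n], b"aa"); // the read stops at the chunk boundary
}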

View File

@ -1,6 +1,6 @@
use crate::tools;
use failure::*;
use anyhow::{bail, format_err, Error};
use regex::Regex;
use std::os::unix::io::RawFd;
@ -59,17 +59,6 @@ impl BackupGroup {
&self.backup_id
}
pub fn parse(path: &str) -> Result<Self, Error> {
let cap = GROUP_PATH_REGEX.captures(path)
.ok_or_else(|| format_err!("unable to parse backup group path '{}'", path))?;
Ok(Self {
backup_type: cap.get(1).unwrap().as_str().to_owned(),
backup_id: cap.get(2).unwrap().as_str().to_owned(),
})
}
pub fn group_path(&self) -> PathBuf {
let mut relative_path = PathBuf::new();
@ -117,7 +106,11 @@ impl BackupGroup {
use nix::fcntl::{openat, OFlag};
match openat(l2_fd, &manifest_path, OFlag::O_RDONLY, nix::sys::stat::Mode::empty()) {
Ok(_) => { /* manifest exists --> assume backup was successful */ },
Ok(rawfd) => {
/* manifest exists --> assume backup was successful */
/* close else this leaks! */
nix::unistd::close(rawfd)?;
},
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => { return Ok(()); }
Err(err) => {
bail!("last_successful_backup: unexpected error - {}", err);
@ -152,10 +145,35 @@ impl BackupGroup {
}
}
impl std::fmt::Display for BackupGroup {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let backup_type = self.backup_type();
let id = self.backup_id();
write!(f, "{}/{}", backup_type, id)
}
}
impl std::str::FromStr for BackupGroup {
type Err = Error;
/// Parse a backup group path
///
/// This parses strings like `vm/100`.
fn from_str(path: &str) -> Result<Self, Self::Err> {
let cap = GROUP_PATH_REGEX.captures(path)
.ok_or_else(|| format_err!("unable to parse backup group path '{}'", path))?;
Ok(Self {
backup_type: cap.get(1).unwrap().as_str().to_owned(),
backup_id: cap.get(2).unwrap().as_str().to_owned(),
})
}
}
/// Uniquely identify a Backup (relative to data store)
///
/// We also call this a backup snapshot.
#[derive(Debug, Clone)]
#[derive(Debug, Eq, PartialEq, Clone)]
pub struct BackupDir {
/// Backup group
group: BackupGroup,
@ -188,16 +206,6 @@ impl BackupDir {
self.backup_time
}
pub fn parse(path: &str) -> Result<Self, Error> {
let cap = SNAPSHOT_PATH_REGEX.captures(path)
.ok_or_else(|| format_err!("unable to parse backup snapshot path '{}'", path))?;
let group = BackupGroup::new(cap.get(1).unwrap().as_str(), cap.get(2).unwrap().as_str());
let backup_time = cap.get(3).unwrap().as_str().parse::<DateTime<Utc>>()?;
Ok(BackupDir::from((group, backup_time.timestamp())))
}
pub fn relative_path(&self) -> PathBuf {
let mut relative_path = self.group.group_path();
@ -212,6 +220,31 @@ impl BackupDir {
}
}
impl std::str::FromStr for BackupDir {
type Err = Error;
/// Parse a snapshot path
///
/// This parses strings like `host/elsa/2020-06-15T05:18:33Z`.
fn from_str(path: &str) -> Result<Self, Self::Err> {
let cap = SNAPSHOT_PATH_REGEX.captures(path)
.ok_or_else(|| format_err!("unable to parse backup snapshot path '{}'", path))?;
let group = BackupGroup::new(cap.get(1).unwrap().as_str(), cap.get(2).unwrap().as_str());
let backup_time = cap.get(3).unwrap().as_str().parse::<DateTime<Utc>>()?;
Ok(BackupDir::from((group, backup_time.timestamp())))
}
}
impl std::fmt::Display for BackupDir {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let backup_type = self.group.backup_type();
let id = self.group.backup_id();
let time = Self::backup_time_to_string(self.backup_time);
write!(f, "{}/{}/{}", backup_type, id, time)
}
}
impl From<(BackupGroup, i64)> for BackupDir {
fn from((group, timestamp): (BackupGroup, i64)) -> Self {
Self { group, backup_time: Utc.timestamp(timestamp, 0) }
@ -239,9 +272,13 @@ impl BackupInfo {
}
/// Finds the latest backup inside a backup group
pub fn last_backup(base_path: &Path, group: &BackupGroup) -> Result<Option<BackupInfo>, Error> {
pub fn last_backup(base_path: &Path, group: &BackupGroup, only_finished: bool)
-> Result<Option<BackupInfo>, Error>
{
let backups = group.list_backups(base_path)?;
Ok(backups.into_iter().max_by_key(|item| item.backup_dir.backup_time()))
Ok(backups.into_iter()
.filter(|item| !only_finished || item.is_finished())
.max_by_key(|item| item.backup_dir.backup_time()))
}
pub fn sort_list(list: &mut Vec<BackupInfo>, ascendending: bool) {
@ -284,6 +321,11 @@ impl BackupInfo {
})?;
Ok(list)
}
pub fn is_finished(&self) -> bool {
// backup is considered unfinished if there is no manifest
self.files.iter().any(|name| name == super::MANIFEST_BLOB_NAME)
}
}
fn list_backup_files<P: ?Sized + nix::NixPath>(dirfd: RawFd, path: &P) -> Result<Vec<String>, Error> {
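With the new only_finished flag, last_backup can skip snapshots that have no manifest yet. A toy sketch of that filter (simplified struct; the manifest blob name used below is an assumption standing in for MANIFEST_BLOB_NAME):

#[derive(Debug)]
struct Info { time: i64, files: Vec<&'static str> }

impl Info {
    fn is_finished(&self) -> bool {
        // a snapshot without a manifest is considered unfinished
        self.files.iter().any(|f| *f == "index.json.blob")
    }
}

fn last_backup(backups: Vec<Info>, only_finished: bool) -> Option<Info> {
    backups
        .into_iter()
        .filter(|b| !only_finished || b.is_finished())
        .max_by_key(|b| b.time)
}

fn main() {
    let list = vec![
        Info { time: 1, files: vec!["index.json.blob"] },
        Info { time: 2, files: vec![] }, // no manifest: still running or aborted
    ];
    // the newer snapshot is skipped because it is not finished
    assert_eq!(last_backup(list, true).unwrap().time, 1);
}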

View File

@ -1,23 +1,21 @@
use failure::*;
use std::fmt;
use std::ffi::{CStr, CString, OsStr};
use std::os::unix::ffi::OsStrExt;
use std::io::{Read, Write, Seek, SeekFrom};
use std::convert::TryFrom;
use std::ffi::{CStr, CString, OsStr};
use std::fmt;
use std::io::{Read, Write, Seek, SeekFrom};
use std::os::unix::ffi::OsStrExt;
use anyhow::{bail, format_err, Error};
use chrono::offset::{TimeZone, Local};
use pathpatterns::{MatchList, MatchType};
use proxmox::tools::io::ReadExt;
use proxmox::sys::error::io_err_other;
use crate::pxar::catalog::BackupCatalogWriter;
use crate::pxar::{MatchPattern, MatchPatternSlice, MatchType};
use crate::backup::file_formats::PROXMOX_CATALOG_FILE_MAGIC_1_0;
use crate::tools::runtime::block_on;
use crate::pxar::catalog::BackupCatalogWriter;
#[repr(u8)]
#[derive(Copy,Clone,PartialEq)]
enum CatalogEntryType {
pub(crate) enum CatalogEntryType {
Directory = b'd',
File = b'f',
Symlink = b'l',
@ -46,6 +44,21 @@ impl TryFrom<u8> for CatalogEntryType {
}
}
impl From<&DirEntryAttribute> for CatalogEntryType {
fn from(value: &DirEntryAttribute) -> Self {
match value {
DirEntryAttribute::Directory { .. } => CatalogEntryType::Directory,
DirEntryAttribute::File { .. } => CatalogEntryType::File,
DirEntryAttribute::Symlink => CatalogEntryType::Symlink,
DirEntryAttribute::Hardlink => CatalogEntryType::Hardlink,
DirEntryAttribute::BlockDevice => CatalogEntryType::BlockDevice,
DirEntryAttribute::CharDevice => CatalogEntryType::CharDevice,
DirEntryAttribute::Fifo => CatalogEntryType::Fifo,
DirEntryAttribute::Socket => CatalogEntryType::Socket,
}
}
}
impl fmt::Display for CatalogEntryType {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", char::from(*self as u8))
@ -63,7 +76,7 @@ pub struct DirEntry {
}
/// Used to specify additional attributes inside DirEntry
#[derive(Clone, PartialEq)]
#[derive(Clone, Debug, PartialEq)]
pub enum DirEntryAttribute {
Directory { start: u64 },
File { size: u64, mtime: u64 },
@ -106,6 +119,23 @@ impl DirEntry {
}
}
/// Get file mode bits for this entry to be used with the `MatchList` api.
pub fn get_file_mode(&self) -> Option<u32> {
Some(
match self.attr {
DirEntryAttribute::Directory { .. } => pxar::mode::IFDIR,
DirEntryAttribute::File { .. } => pxar::mode::IFREG,
DirEntryAttribute::Symlink => pxar::mode::IFLNK,
DirEntryAttribute::Hardlink => return None,
DirEntryAttribute::BlockDevice => pxar::mode::IFBLK,
DirEntryAttribute::CharDevice => pxar::mode::IFCHR,
DirEntryAttribute::Fifo => pxar::mode::IFIFO,
DirEntryAttribute::Socket => pxar::mode::IFSOCK,
}
as u32
)
}
/// Check if DirEntry is a directory
pub fn is_directory(&self) -> bool {
match self.attr {
@ -383,32 +413,6 @@ impl <W: Write> BackupCatalogWriter for CatalogWriter<W> {
}
}
// fixme: move to somewhere else?
/// Implement Write to tokio mpsc channel Sender
pub struct SenderWriter(tokio::sync::mpsc::Sender<Result<Vec<u8>, Error>>);
impl SenderWriter {
pub fn new(sender: tokio::sync::mpsc::Sender<Result<Vec<u8>, Error>>) -> Self {
Self(sender)
}
}
impl Write for SenderWriter {
fn write(&mut self, buf: &[u8]) -> Result<usize, std::io::Error> {
block_on(async move {
self.0
.send(Ok(buf.to_vec()))
.await
.map_err(io_err_other)
.and(Ok(buf.len()))
})
}
fn flush(&mut self) -> Result<(), std::io::Error> {
Ok(())
}
}
/// Read Catalog files
pub struct CatalogReader<R> {
reader: R,
@ -476,7 +480,7 @@ impl <R: Read + Seek> CatalogReader<R> {
&mut self,
parent: &DirEntry,
filename: &[u8],
) -> Result<DirEntry, Error> {
) -> Result<Option<DirEntry>, Error> {
let start = match parent.attr {
DirEntryAttribute::Directory { start } => start,
@ -496,10 +500,7 @@ impl <R: Read + Seek> CatalogReader<R> {
Ok(false) // stop parsing
})?;
match item {
None => bail!("no such file"),
Some(entry) => Ok(entry),
}
Ok(item)
}
/// Read the raw directory info block from current reader position.
@ -532,7 +533,10 @@ impl <R: Read + Seek> CatalogReader<R> {
self.dump_dir(&path, pos)?;
}
CatalogEntryType::File => {
let dt = Local.timestamp(mtime as i64, 0);
let dt = Local
.timestamp_opt(mtime as i64, 0)
.single() // chrono docs say timestamp_opt can only be None or Single!
.unwrap_or_else(|| Local.timestamp(0, 0));
println!(
"{} {:?} {} {}",
@ -555,38 +559,30 @@ impl <R: Read + Seek> CatalogReader<R> {
/// provided callback on them.
pub fn find(
&mut self,
mut entry: &mut Vec<DirEntry>,
pattern: &[MatchPatternSlice],
callback: &Box<fn(&[DirEntry])>,
parent: &DirEntry,
file_path: &mut Vec<u8>,
match_list: &impl MatchList, //&[MatchEntry],
callback: &mut dyn FnMut(&[u8]) -> Result<(), Error>,
) -> Result<(), Error> {
let parent = entry.last().unwrap();
if !parent.is_directory() {
return Ok(())
}
let file_len = file_path.len();
for e in self.read_dir(parent)? {
match MatchPatternSlice::match_filename_include(
&CString::new(e.name.clone())?,
e.is_directory(),
pattern,
)? {
(MatchType::Positive, _) => {
entry.push(e);
callback(&entry);
let pattern = MatchPattern::from_line(b"**/*").unwrap().unwrap();
let child_pattern = vec![pattern.as_slice()];
self.find(&mut entry, &child_pattern, callback)?;
entry.pop();
}
(MatchType::PartialPositive, child_pattern)
| (MatchType::PartialNegative, child_pattern) => {
entry.push(e);
self.find(&mut entry, &child_pattern, callback)?;
entry.pop();
}
_ => {}
let is_dir = e.is_directory();
file_path.truncate(file_len);
if !e.name.starts_with(b"/") {
file_path.reserve(e.name.len() + 1);
file_path.push(b'/');
}
file_path.extend(&e.name);
match match_list.matches(&file_path, e.get_file_mode()) {
Some(MatchType::Exclude) => continue,
Some(MatchType::Include) => callback(&file_path)?,
None => (),
}
if is_dir {
self.find(&e, file_path, match_list, callback)?;
}
}
file_path.truncate(file_len);
Ok(())
}
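The reworked find() builds the byte path incrementally, truncating back to the parent's length before appending each entry, instead of carrying a Vec<DirEntry> stack. A toy version of that traversal pattern, with a plain closure standing in for pathpatterns' MatchList:

struct Entry { name: &'static str, children: Vec<Entry> }

fn find(
    entry: &Entry,
    path: &mut String,
    matches: &dyn Fn(&str) -> bool,
    callback: &mut dyn FnMut(&str),
) {
    let len = path.len();
    for child in &entry.children {
        path.truncate(len); // reset to the parent's path
        path.push('/');
        path.push_str(child.name);
        if matches(path) {
            callback(path);
        }
        find(child, path, matches, callback);
    }
    path.truncate(len);
}

fn main() {
    let root = Entry { name: "", children: vec![
        Entry { name: "etc", children: vec![
            Entry { name: "hosts", children: vec![] },
        ]},
    ]};
    let mut hits = Vec::new();
    find(&root, &mut String::new(), &|p: &str| p.ends_with("hosts"),
         &mut |p: &str| hits.push(p.to_string()));
    assert_eq!(hits, vec!["/etc/hosts"]);
}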

File diff suppressed because it is too large.

View File

@ -1,4 +1,4 @@
use failure::*;
use anyhow::{Error};
use std::sync::Arc;
use std::io::Read;

View File

@ -1,7 +1,7 @@
use std::sync::Arc;
use std::io::Write;
use failure::*;
use anyhow::{Error};
use super::CryptConfig;
use crate::tools::borrow::Tied;

View File

@ -1,4 +1,4 @@
use failure::*;
use anyhow::{bail, format_err, Error};
use std::path::{Path, PathBuf};
use std::io::Write;
@ -80,8 +80,9 @@ impl ChunkStore {
let default_options = CreateOptions::new();
if let Err(err) = create_path(&base, Some(default_options.clone()), Some(options.clone())) {
bail!("unable to create chunk store '{}' at {:?} - {}", name, base, err);
match create_path(&base, Some(default_options.clone()), Some(options.clone())) {
Err(err) => bail!("unable to create chunk store '{}' at {:?} - {}", name, base, err),
Ok(res) => if ! res { nix::unistd::chown(&base, Some(uid), Some(gid))? },
}
if let Err(err) = create_dir(&chunk_dir, options.clone()) {
@ -157,8 +158,8 @@ impl ChunkStore {
let (chunk_path, _digest_str) = self.chunk_path(digest);
const UTIME_NOW: i64 = ((1 << 30) - 1);
const UTIME_OMIT: i64 = ((1 << 30) - 2);
const UTIME_NOW: i64 = (1 << 30) - 1;
const UTIME_OMIT: i64 = (1 << 30) - 2;
let times: [libc::timespec; 2] = [
libc::timespec { tv_sec: 0, tv_nsec: UTIME_NOW },
@ -177,28 +178,12 @@ impl ChunkStore {
return Ok(false);
}
bail!("updata atime failed for chunk {:?} - {}", chunk_path, err);
bail!("update atime failed for chunk {:?} - {}", chunk_path, err);
}
Ok(true)
}
pub fn read_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
let (chunk_path, digest_str) = self.chunk_path(digest);
let mut file = std::fs::File::open(&chunk_path)
.map_err(|err| {
format_err!(
"store '{}', unable to read chunk '{}' - {}",
self.name,
digest_str,
err,
)
})?;
DataBlob::load(&mut file)
}
pub fn get_chunk_iterator(
&self,
) -> Result<
@ -289,20 +274,17 @@ impl ChunkStore {
pub fn sweep_unused_chunks(
&self,
oldest_writer: Option<i64>,
oldest_writer: i64,
phase1_start_time: i64,
status: &mut GarbageCollectionStatus,
worker: Arc<WorkerTask>,
worker: &WorkerTask,
) -> Result<(), Error> {
use nix::sys::stat::fstatat;
let now = unsafe { libc::time(std::ptr::null_mut()) };
let mut min_atime = phase1_start_time - 3600*24; // at least 24h (see mount option relatime)
let mut min_atime = now - 3600*24; // at least 24h (see mount option relatime)
if let Some(stamp) = oldest_writer {
if stamp < min_atime {
min_atime = stamp;
}
if oldest_writer < min_atime {
min_atime = oldest_writer;
}
min_atime -= 300; // add 5 mins gap for safety
@ -316,6 +298,7 @@ impl ChunkStore {
worker.log(format!("percentage done: {}, chunk count: {}", percentage, chunk_count));
}
worker.fail_on_abort()?;
tools::fail_on_shutdown()?;
let (dirfd, entry) = match entry {
@ -338,10 +321,9 @@ impl ChunkStore {
let lock = self.mutex.lock();
if let Ok(stat) = fstatat(dirfd, filename, nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW) {
let age = now - stat.st_atime;
//println!("FOUND {} {:?}", age/(3600*24), filename);
if stat.st_atime < min_atime {
println!("UNLINK {} {:?}", age/(3600*24), filename);
//let age = now - stat.st_atime;
//println!("UNLINK {} {:?}", age/(3600*24), filename);
let res = unsafe { libc::unlinkat(dirfd, filename.as_ptr(), 0) };
if res != 0 {
let err = nix::Error::last();
@ -354,9 +336,14 @@ impl ChunkStore {
}
status.removed_chunks += 1;
status.removed_bytes += stat.st_size as u64;
} else {
status.disk_chunks += 1;
status.disk_bytes += stat.st_size as u64;
} else {
if stat.st_atime < oldest_writer {
status.pending_chunks += 1;
status.pending_bytes += stat.st_size as u64;
} else {
status.disk_chunks += 1;
status.disk_bytes += stat.st_size as u64;
}
}
}
drop(lock);
@ -426,6 +413,10 @@ impl ChunkStore {
full_path
}
pub fn name(&self) -> &str {
&self.name
}
pub fn base_path(&self) -> PathBuf {
self.base.clone()
}
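sweep_unused_chunks now receives the oldest writer timestamp directly and reduces everything to one cutoff: at least 24 hours back (relatime granularity), further back if an older backup writer is still running, minus a 5 minute safety gap. A small sketch of that cutoff rule, using the same constants as the code above:

fn gc_min_atime(now: i64, oldest_writer: i64) -> i64 {
    let mut min_atime = now - 3600 * 24; // relatime: atime may be up to 24h stale
    if oldest_writer < min_atime {
        min_atime = oldest_writer; // never remove chunks a running backup may touch
    }
    min_atime - 300 // 5 minute safety gap
}

fn main() {
    let now = 1_600_000_000;
    // no writer older than the 24h window: the window wins
    assert_eq!(gc_min_atime(now, now), now - 3600 * 24 - 300);
    // a long-running backup pushes the cutoff further back
    assert_eq!(gc_min_atime(now, now - 100_000), now - 100_000 - 300);
}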

View File

@ -2,7 +2,7 @@ use std::pin::Pin;
use std::task::{Context, Poll};
use bytes::BytesMut;
use failure::*;
use anyhow::{Error};
use futures::ready;
use futures::stream::{Stream, TryStream};

View File

@ -5,15 +5,15 @@
/// use hash value 0 to detect a boundary.
const CA_CHUNKER_WINDOW_SIZE: usize = 64;
/// Slinding window chunker (Buzhash)
/// Sliding window chunker (Buzhash)
///
/// This is a rewrite of *casync* chunker (cachunker.h) in rust.
///
/// Hashing by cyclic polynomial (also called Buzhash) has the benefit
/// of avoiding multiplications, using barrel shifts instead. For more
/// information please take a look at the [Rolling
/// Hash](https://en.wikipedia.org/wiki/Rolling_hash) artikel from
/// wikipedia.
/// Hash](https://en.wikipedia.org/wiki/Rolling_hash) article from
/// Wikipedia.
pub struct Chunker {
h: u32,
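The point of Buzhash, as the doc comment says, is that the window slides using only rotations and XORs. A tiny self-contained demonstration with a 4-byte window and a stand-in table (the real chunker uses a 64-byte window and a tuned permutation table):

const WINDOW: u32 = 4; // the real chunker uses a 64-byte window

fn table(b: u8) -> u32 {
    (b as u32).wrapping_mul(0x9E37_79B9) // stand-in permutation table
}

fn main() {
    let data = b"abcde";
    // hash the first full window directly
    let mut h: u32 = 0;
    for &b in &data[..WINDOW as usize] {
        h = h.rotate_left(1) ^ table(b);
    }
    // slide one byte with shifts/XORs only: rotate, drop data[0], add data[4]
    h = h.rotate_left(1)
        ^ table(data[0]).rotate_left(WINDOW)
        ^ table(data[WINDOW as usize]);
    // equals the hash of data[1..=4] computed from scratch
    let mut check: u32 = 0;
    for &b in &data[1..=WINDOW as usize] {
        check = check.rotate_left(1) ^ table(b);
    }
    assert_eq!(h, check);
}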

View File

@ -6,12 +6,30 @@
//! See the Wikipedia article for [Authenticated
//! encryption](https://en.wikipedia.org/wiki/Authenticated_encryption)
//! for a short introduction.
use failure::*;
use openssl::pkcs5::pbkdf2_hmac;
use openssl::hash::MessageDigest;
use openssl::symm::{decrypt_aead, Cipher, Crypter, Mode};
use std::io::Write;
use anyhow::{bail, Error};
use chrono::{Local, TimeZone, DateTime};
use openssl::hash::MessageDigest;
use openssl::pkcs5::pbkdf2_hmac;
use openssl::symm::{decrypt_aead, Cipher, Crypter, Mode};
use serde::{Deserialize, Serialize};
use proxmox::api::api;
#[api(default: "encrypt")]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
/// Defines whether data is encrypted (using an AEAD cipher), only signed, or neither.
pub enum CryptMode {
/// Don't encrypt.
None,
/// Encrypt.
Encrypt,
/// Only sign.
SignOnly,
}
/// Encryption Configuration with secret key
///
@ -26,7 +44,6 @@ pub struct CryptConfig {
id_pkey: openssl::pkey::PKey<openssl::pkey::Private>,
// The private key used by the cipher.
enc_key: [u8; 32],
}
impl CryptConfig {
@ -63,10 +80,9 @@ impl CryptConfig {
/// chunk digest values do not clash with values computed for
/// other secret keys.
pub fn compute_digest(&self, data: &[u8]) -> [u8; 32] {
// FIXME: use HMAC-SHA256 instead??
let mut hasher = openssl::sha::Sha256::new();
hasher.update(&self.id_key);
hasher.update(data);
hasher.update(&self.id_key); // at the end, to avoid length extensions attacks
hasher.finish()
}
@ -203,7 +219,7 @@ impl CryptConfig {
created: DateTime<Local>,
) -> Result<Vec<u8>, Error> {
let modified = Local.timestamp(Local::now().timestamp(), 0);
let key_config = super::KeyConfig { kdf: None, created, modified, data: self.enc_key.to_vec() };
let data = serde_json::to_string(&key_config)?.as_bytes().to_vec();
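compute_digest absorbs the secret id_key after the data: for a Merkle-Damgård hash such as SHA-256, publishing H(key || data) would let an attacker extend the data without knowing the key, while H(data || key) blocks that. A toy sketch of the ordering only (stand-in 64-bit hash, so the security property is illustrative rather than real):

fn toy_hash(parts: &[&[u8]]) -> u64 {
    parts.iter().flat_map(|p| p.iter()).fold(
        0xcbf29ce484222325u64,
        |h, &b| (h ^ b as u64).wrapping_mul(0x100000001b3),
    )
}

fn compute_digest(id_key: &[u8], data: &[u8]) -> u64 {
    toy_hash(&[data, id_key]) // key last, mirroring the ordering above
}

fn main() {
    let key = b"secret-id-key";
    assert_ne!(
        compute_digest(key, b"chunk data"),
        compute_digest(key, b"other data"),
    );
}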

View File

@ -1,4 +1,4 @@
use failure::*;
use anyhow::{bail, Error};
use std::sync::Arc;
use std::io::{Read, BufRead};

View File

@ -1,4 +1,4 @@
use failure::*;
use anyhow::{Error};
use std::sync::Arc;
use std::io::Write;

Some files were not shown because too many files have changed in this diff.