Compare commits

...

574 Commits

Author SHA1 Message Date
a67874b6ae bump version to 1.1.14-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-06-02 18:08:02 +02:00
9402e9f357 cargo: update proxmox-acme-rs to 0.3
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-06-02 18:05:19 +02:00
b75bb5434e d/control.in: update
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-06-02 18:05:06 +02:00
ec44c3113b backport "datastore: lookup: reuse ChunkStore on stale datastore re-open"
Backport of commit 0bd9c87010

When re-opening a datastore because the cached entry was stale (only on
a verify-new config change), the chunk store was also re-opened, which
in turn created a new ProcessLocker, losing any existing shared lock.
This can cause conflicts between long-running (24h+) backups and GC.

To fix this, reuse the existing ChunkStore, and thus its
ProcessLocker, when creating an up-to-date datastore instance on
lookup, since only the datastore config should be reloaded. This is
fine as the ChunkStore path is not updatable over our API.

Note that this is a precautionary backport; the underlying issue it
fixes is relatively unlikely to cause any trouble in the 1.x branch,
as the datastore is not re-opened that often there.

Originally-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-06-02 18:00:01 +02:00
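
The reuse described in the backport above, reduced to a minimal sketch; the type and function names below are placeholders, not the actual proxmox-backup code. Only the config-derived state is rebuilt on a stale lookup, while the already-open ChunkStore, and with it the ProcessLocker, is carried over.

    use std::collections::HashMap;
    use std::sync::{Arc, Mutex, OnceLock};

    // Placeholder types standing in for the real ChunkStore / DataStore / config.
    struct ChunkStore; // owns the ProcessLocker in the real code
    struct DataStoreImpl {
        chunk_store: Arc<ChunkStore>,
        generation: u64, // stand-in for the parsed datastore config
    }

    fn cache() -> &'static Mutex<HashMap<String, Arc<DataStoreImpl>>> {
        static CACHE: OnceLock<Mutex<HashMap<String, Arc<DataStoreImpl>>>> = OnceLock::new();
        CACHE.get_or_init(|| Mutex::new(HashMap::new()))
    }

    fn lookup_datastore(name: &str, config_generation: u64) -> Arc<DataStoreImpl> {
        let mut map = cache().lock().unwrap();

        if let Some(cached) = map.get(name) {
            if cached.generation == config_generation {
                return Arc::clone(cached); // cached entry is still up to date
            }
        }

        // Not cached yet, or stale: rebuild from the (new) config, but reuse an
        // already-open ChunkStore so the ProcessLocker and any shared locks held
        // by long-running backups survive the re-open.
        let chunk_store = match map.get(name) {
            Some(stale) => Arc::clone(&stale.chunk_store),
            None => Arc::new(ChunkStore), // first open
        };
        let fresh = Arc::new(DataStoreImpl { chunk_store, generation: config_generation });
        map.insert(name.to_string(), Arc::clone(&fresh));
        fresh
    }

    fn main() {
        let a = lookup_datastore("store1", 1);
        let b = lookup_datastore("store1", 2); // config changed -> instance rebuilt
        assert!(Arc::ptr_eq(&a.chunk_store, &b.chunk_store)); // chunk store reused
    }
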
cb21bf7454 ui: add notice for nearing PBS 1.1 End-of-Life
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-06-02 17:35:03 +02:00
a1cffef503 pbs-tools: LruCache: implement Drop
this fixes the memory leaked by the cache: only the pointers in the
map/list were freed, not the underlying chunks

moves the 'clear' implementation out of the trait bounds so that
Drop can reuse it

this is used e.g. for file download from a pxar

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
(cherry picked from commit 98983a9dab)
2022-01-20 15:46:35 +01:00
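
A minimal sketch of the Drop-reuses-clear pattern from the commit above, against a hypothetical pointer-based cache (not the actual pbs-tools implementation): `clear` lives in an inherent impl without extra trait bounds, so `Drop` can call it and the heap-allocated nodes are actually freed.

    use std::collections::HashMap;

    // Hypothetical node type; in the real cache the map/list hold raw pointers
    // to heap-allocated entries that own the chunk data.
    struct Node<V> { value: V }

    struct LruCache<K, V> {
        map: HashMap<K, *mut Node<V>>,
    }

    impl<K, V> LruCache<K, V> {
        fn new() -> Self {
            Self { map: HashMap::new() }
        }

        fn insert(&mut self, key: K, value: V) where K: std::hash::Hash + Eq {
            let node = Box::into_raw(Box::new(Node { value }));
            self.map.insert(key, node);
        }

        /// `clear` sits in this inherent impl (no `K: Hash + Eq` bound needed),
        /// so `Drop` below can reuse it. Freeing only the map would leak the
        /// boxed nodes -- exactly the leak the commit fixes.
        fn clear(&mut self) {
            for (_key, node) in self.map.drain() {
                // SAFETY: every pointer in the map came from Box::into_raw above
                // and is removed from the map before being freed.
                unsafe { drop(Box::from_raw(node)) };
            }
        }
    }

    impl<K, V> Drop for LruCache<K, V> {
        fn drop(&mut self) {
            self.clear();
        }
    }

    fn main() {
        let mut cache = LruCache::new();
        cache.insert("chunk", vec![0u8; 16]);
        // dropping `cache` now frees the boxed nodes, not just the map of pointers
    }
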
9b00099ead drop RawWaker usage
this was also leaking a refcount before; that is fixed now

See-also: proxmox/proxmox-async:
  * d0a3e38006fe ("drop RawWaker usage")
  * ff132e93c6fd ("rustfmt")

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-01-20 15:41:00 +01:00
d2351f1a81 bump version to 1.1.13-3
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-19 10:21:23 +02:00
869e4601b4 api daemons: fix sending log-reopen command
send_command serializes everything, so it cannot be used to send a
raw, optimized command. Normally that means we get an error like
> 'unable to parse parameters (expected json object)'
when used that way.

Switch over to send_raw_command, which does not re-serialize the
command.

Fixes: 45b8a032 ("refactor send_command")
Originally-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-11 14:56:30 +02:00
238e5b573e buildsys: prune-sim is not generated, do not cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-26 16:41:34 +02:00
996680a336 bump version to 1.1.13-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-26 16:40:37 +02:00
94f6127711 Revert "auth: 'crypt' is not thread safe"
With this I'm getting coredumps on every log in:

> Process 20957 (proxmox-backup-) of user 34 dumped core.
>
> Stack trace of thread 20987:
> #0  0x0000563dec9ac37f _ZN3std3sys4unix14stack_overflow3imp14signal_handler17ha95ed06a038ca319E.llvm.11547235952357801165 (proxmox-backup-proxy)
> #1  0x00007f2638de9840 __restore_rt (libc.so.6)
> #2  0x00007f2638e51dac __stpncpy_sse2_unaligned (libc.so.6)
> #3  0x00007f26393b1340 __sha256_crypt_r (libcrypt.so.1)
> #4  0x00007f26393b0553 __crypt_r (libcrypt.so.1)
> #5  0x0000563dec6e44df _ZN14proxmox_backup4auth5crypt17hd5165f960093dfe7E (proxmox-backup-proxy)

This reverts commit acefa2bb6e.
2021-07-26 16:38:16 +02:00
3841301ee9 d/control: update generated build-deps
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 12:36:36 +02:00
f406202825 bump version to 1.1.13-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 12:35:06 +02:00
ba50f57e93 file-restore: increase lock timeout on QEMU map
This lock is held during VM startup, so that multiple calls will not
start VMs twice. But this means that the timeout needs to incorporate
the time it might take a VM to boot, so increase it quite a bit.

This could previously lead to "interrupted system call" errors when
accessing backups with many disks.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
(cherry picked from commit 66501529a2)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 12:30:09 +02:00
61a758f67d build.rs: tell cargo to only rerun build.rs step if .git/HEAD changes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 12:43:19 +02:00
847c27fbee build.rs: factor out getting git command output into helper fn
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 12:43:19 +02:00
7d79f3d5f7 file restore daemon: log about basic steps
to make the log more useful..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 9a06eb1618)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:08:15 +02:00
fa3fdea590 file restore daemon: reword warning about manual execution
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 309e14ebb7)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:08:10 +02:00
aa2cd76c58 restore daemon: use millisecond log resolution
During startup most of the stuff is happening in milliseconds (or
less), so the timestamp granularity of seconds made it hard to tell
if the previous command required 990ms or 1ms, which is quite the
difference in the restore daemon context.

Using micros would not bring much additional information; a
millisecond is already an OK lower time resolution for logging, so
switch only to millis for now.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit ecd66ecaf6)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:08:00 +02:00
e2d82c7d4d restore daemon: create /run/proxmox-backup on startup
fixes file restore again.

The new Memcom tracking file lives in `/run/proxmox-backup` and is
always created on REST interaction, as CachedUserInfo uses it to
efficiently track config changes, and such a cache is used in each
REST handle_request.

Further, the Memcom infra expects the base PBS run dir to exist
already, which is an OK assumption to have, but in the file-restore
daemon we have a significantly more minimal environment, where the run
dir was simply not required so far and even /run is not a tmpfs yet.

Fixes fda19dcc6f ("fix CachedUserInfo by using a shared memory version counter")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 33d7292f29)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:08:00 +02:00
e9c2a34def REST: set error message extension for bad-request response log
We already send it to the user via the response body, but
log_response does not have (nor wants to have, FWIW) access to the
async body stream, so pass it through the ErrorMessageExtension
mechanism like we do elsewhere.

Note that this is not only useful for PBS API proxy/daemon but also
the REST server of the file-restore daemon running inside the restore
VM, and it really is *very* helpful to debug things there..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit f4d371d2d2)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:07:41 +02:00
0fad95f032 REST: rust fmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 2d48533378)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:07:41 +02:00
683595940b fix #3496: acme: plugin: add sleep for dns propagation
the dns plugin config allows specifying an amount of time to wait for
the TXT record to be set and propagated through DNS.

This patch adds a sleep for this amount of time.
The log message was taken from the perl implementation in proxmox-acme
for consistency.

Tested with the powerdns plugin in my test setup.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
(cherry picked from commit 3f84541412)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:06:29 +02:00
40060c1fed config: acme: make validation_delay crate public
we need the setting in acme::plugin.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
(cherry picked from commit 4d8bd03668)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:06:29 +02:00
2abee30fdd acme: plugin: fix error message
extract_challenge is used by both dns-01 and http-01 challenges.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
(cherry picked from commit f9bd5e1691)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:06:29 +02:00
7cdc53bbf7 buildsys: docs: clean: also clean generated JS files
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 13a2445744)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:06:04 +02:00
dac877252b api: disk list: sort by name
So callers get more stable results. Most noticeably, the disk list in
the web UI doesn't jump around upon reloading, and while sorting could
be done directly there, this way other callers benefit too.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
(cherry picked from commit bbff317aa7)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:04:57 +02:00
dd749b0e47 disks: also check for file systems with lsblk
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
(cherry picked from commit 20429238e0)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:04:57 +02:00
f98c02cbc6 disks: refactor partition type handling
in preparation to also get the file system type from lsblk.

Co-developed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
(cherry picked from commit 364299740f)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:04:57 +02:00
218d7e3ec6 rest: log response: avoid unnecessary mut on variable
a match expresses the fallback slightly nicer and needs no mut,
which is always nice to avoid.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 6b5013edb3)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:02:47 +02:00
acefa2bb6e auth: 'crypt' is not thread safe
According to crypt(3):
"crypt places its result in a static storage area, which will be
overwritten by subsequent calls to crypt. It is not safe to call crypt
from multiple threads simultaneously."

This means that multiple login calls as a PBS-realm user can collide and
produce intermittent authentication failures. A visible case is for
file-restore, where VMs with many disks lead to just as many auth-calls
at the same time, as the GUI tries to expand each tree element on load.

Instead, use the thread-safe variant 'crypt_r', which places the result
into a pre-allocated buffer of type 'crypt_data'. The C struct is laid
out according to 'lib/crypt.h.in' and the man page mentioned above.

Use the opportunity and make both arguments to the rust 'crypt' function
take a &[u8].

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
(cherry picked from commit c4c4b5a3ef)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:02:01 +02:00
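
A rough FFI sketch of the crypt_r approach from the commit above, not the actual PBS implementation: crypt_r writes into a caller-provided struct crypt_data instead of static storage, so every call can use its own buffer. The struct is modelled here as an oversized, zero-initialized blob, which is an assumption; the real code lays it out according to lib/crypt.h.in.

    use std::ffi::{CStr, CString};
    use std::os::raw::c_char;

    // Oversized stand-in for libcrypt's `struct crypt_data`; zero-initializing it
    // also clears the `initialized` flag the API expects. (The size is an
    // assumption, chosen to be larger than both the libxcrypt and old glibc
    // layouts; the real code mirrors lib/crypt.h.in instead.)
    #[repr(C)]
    struct CryptData {
        _buf: [u8; 256 * 1024],
    }

    #[link(name = "crypt")]
    extern "C" {
        // char *crypt_r(const char *phrase, const char *setting, struct crypt_data *data);
        fn crypt_r(phrase: *const c_char, setting: *const c_char, data: *mut CryptData) -> *mut c_char;
    }

    fn crypt(password: &[u8], salt: &[u8]) -> Result<String, Box<dyn std::error::Error>> {
        let phrase = CString::new(password)?;
        let setting = CString::new(salt)?;
        // Each call gets its own buffer, so concurrent logins no longer race on
        // the static storage used by plain crypt().
        let mut data: Box<CryptData> = Box::new(CryptData { _buf: [0u8; 256 * 1024] });

        let out = unsafe { crypt_r(phrase.as_ptr(), setting.as_ptr(), &mut *data) };
        if out.is_null() {
            return Err("crypt_r failed".into());
        }
        Ok(unsafe { CStr::from_ptr(out) }.to_str()?.to_string())
    }

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // "$5$..." selects sha256-crypt; the salt value is just an example.
        println!("{}", crypt(b"secret", b"$5$exampleSalt")?);
        Ok(())
    }
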
36551172f3 depend on proxmox 0.11.6 (changed make_tmp_file() return type)
(cherry picked from commit bfd357c5a1)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 09:45:28 +02:00
c26f4ef385 buildsys: Prepare new way for path dependencies
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
(cherry picked from commit 9f5b57a348)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 09:39:12 +02:00
60816a8a82 Cargo.toml: regroup imports
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
(cherry picked from commit aceae32baa)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 09:34:18 +02:00
d7d09712ef bump version to 1.1.12-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 12:58:14 +02:00
825f019226 buildsys: call dpkg-buildpackage directly in deb-all
else we may double-build

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit a2c73c78dd)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 12:58:14 +02:00
ca5e5bb67f ui: datastore/OptionView: only navigate up when we removed the datastore
and not on window close

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
(cherry picked from commit 82cae19d19)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 12:54:50 +02:00
8191ff150e ui: dashboard/DataStoreStatistics: fix closing <i> tag
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
(cherry picked from commit 4a489ae3de)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 12:49:42 +02:00
f2aeb13c68 subscription: set higher-level error to message instead of bailing
While the PVE one "bails" too, it has an eval around those and moves
the error to the message property, so lets do so too to ensure a user
can force an update on a too old subscription

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit b81818b6ad)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 12:48:03 +02:00
ce76b4b3c2 bump version to 1.1-11-1 2021-06-30 11:25:11 +02:00
44b9d6f162 tape/drive: fix logging when requesting media
we try to load the correct media in a loop until we find the correct
tape. When encountering an error or the wrong tape, we want to log that
(and send an email, if one is configured) to request the correct tape.

While trying to avoid printing the same errors more than once in a row,
we had at least one case (starting with an empty tape in the drive)
which would not print/send any tape request.

Rework that code to use a custom 'TapeRequest' enum, which contains
the state + error message, and a helper that prints and sends an email
when the state changes.

This reduces the change check/log to a single variable, instead of 4
(tried, last_media_uuid, last_error, failure_reason)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-06-30 11:22:04 +02:00
53e80e8aa2 tape: fix LTO locate_file for HP drives
Add test code to the first locate_file command, compute locate_offset.
Subsequent locate_file commands use that offset.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-06-30 11:22:04 +02:00
f94aa5ceb1 fix #3393 (again): pxar/create: try to read xattrs/fcaps/acls by default
we have a static list of filesystems and their capabilities regarding
file attributes and fs features (e.g. sockets/fifos/etc) which also
includes xattrs,acls and fcaps

if we did not know a filesystem by its magic number (for example
cephfs), we did not even attempt to read xattrs, etc.

this patch adds those flags by default for unknown filesystems, and
removes them when we encounter EOPNOTSUPP (to reduce the number
of syscalls)

with this, we should be able to catch xattrs/acls/fcaps on all
(unknown) fs types that support them

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-30 11:22:04 +02:00
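
A small sketch of the flag handling described above, using made-up flag constants and the libc crate for the errno value (the real feature flags live in the pxar create code): unknown filesystems start with the xattr/acl/fcaps flags set, and a flag is dropped the first time the corresponding call reports EOPNOTSUPP, so it is not retried for every entry.

    // Hypothetical feature-flag bits; the real code uses the pxar flag set.
    const WITH_XATTRS: u64 = 1 << 0;
    const WITH_ACL: u64 = 1 << 1;
    const WITH_FCAPS: u64 = 1 << 2;

    fn flags_for_magic(magic: i64, known: &[(i64, u64)]) -> u64 {
        match known.iter().find(|&&(m, _)| m == magic) {
            Some(&(_, flags)) => flags,
            // Unknown filesystem (e.g. cephfs): optimistically try everything.
            None => WITH_XATTRS | WITH_ACL | WITH_FCAPS,
        }
    }

    fn read_xattrs(_path: &str) -> std::io::Result<Vec<u8>> {
        // stand-in for the real llistxattr/lgetxattr calls
        Err(std::io::Error::from_raw_os_error(libc::EOPNOTSUPP))
    }

    fn archive_entry(path: &str, fs_flags: &mut u64) {
        if *fs_flags & WITH_XATTRS != 0 {
            match read_xattrs(path) {
                Ok(_xattrs) => { /* store them in the archive */ }
                Err(err) if err.raw_os_error() == Some(libc::EOPNOTSUPP) => {
                    // Not supported here: drop the flag so the syscall is not
                    // issued again for every following entry.
                    *fs_flags &= !WITH_XATTRS;
                }
                Err(_err) => { /* real error handling */ }
            }
        }
    }

    fn main() {
        let mut flags = flags_for_magic(0x1234_5678, &[]); // magic not in the known list
        archive_entry("/mnt/example/file", &mut flags);
        assert_eq!(flags & WITH_XATTRS, 0); // dropped after the first EOPNOTSUPP
    }
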
3e4b9868a0 proxmox-backup-manager: show task log on datastore create
since the output:
Result: "<UPID>"
is not really interesting, show the task log instead while the
datastore is being created, since that now runs in a worker

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-30 11:22:04 +02:00
4d86df04a0 bump version to 1.1.10-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-16 09:55:47 +02:00
2165f0d450 api: define and use REALM_ID_SCHEMA 2021-06-10 11:10:00 +02:00
1e7639bfc4 fixup minimum lru capacity
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-08 10:13:46 +02:00
4121628d99 tools/lru_cache: make minimum capacity 1
Setting this to 0 is not just useless, but breaks the logic horribly
enough to cause random segfaults - better forbid this, to avoid someone
else having to debug it again ;)

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:55 +02:00
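
The guard described by the two lru_cache commits above amounts to a one-line clamp, sketched here against a hypothetical constructor (not the actual pbs-tools type):

    struct LruCache { capacity: usize }

    impl LruCache {
        fn new(capacity: usize) -> Self {
            // A capacity of 0 breaks the node-list invariants badly enough to
            // cause segfaults in the real (pointer-based) implementation, so
            // silently enforce a minimum of 1 instead.
            Self { capacity: capacity.max(1) }
        }
    }

    fn main() {
        assert_eq!(LruCache::new(0).capacity, 1);
        assert_eq!(LruCache::new(8).capacity, 8);
    }
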
da78b90f9c backup: remove AsyncIndexReader
superseded by CachedChunkReader, with less code and more speed

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:46 +02:00
1ef6e8b6a7 replace AsyncIndexReader with SeekableCachedChunkReader
admin/datastore only reads linearly, so no need for a cache (a capacity
of 1 basically means no cache except for the currently active chunk).
mount can do random access too, so cache the last 8 chunks for a
possible mild performance improvement.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:44 +02:00
10351f7075 backup: add AsyncRead/Seek to CachedChunkReader
Implemented as a separate struct SeekableCachedChunkReader that contains
the original as an Arc, since the read_at future captures the
CachedChunkReader, which would otherwise not work with the lifetimes
required by AsyncRead. This is also the reason we cannot use a shared
read buffer and have to allocate a new one for every read. It also means
that the struct items required for AsyncRead/Seek do not need to be
included in a regular CachedChunkReader.

This is intended as a replacement for AsyncIndexReader, so we have less
code duplication and can utilize the LRU cache there too (even though
actual request concurrency is not supported in these traits).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:40 +02:00
70a152deb7 backup: add CachedChunkReader utilizing AsyncLruCache
Provides a fast arbitrary read implementation with full async and
concurrency support.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:37 +02:00
5446bfbba8 tools: add AsyncLruCache as a wrapper around sync LruCache
Supports concurrent 'access' calls to the same key via a
BroadcastFuture. These are stored in a separate HashMap; the LruCache
underneath is only modified once a valid value has been retrieved.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:34 +02:00
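
A condensed sketch of the coalescing idea behind AsyncLruCache, using futures::future::Shared as a stand-in for the BroadcastFuture mentioned above (assumes the futures and tokio crates; not the actual proxmox-backup code): concurrent `access` calls for the same key await one shared in-flight future, and the value map standing in for the sync LruCache is only touched once a value has been produced.

    use std::collections::HashMap;
    use std::future::Future;
    use std::sync::{Arc, Mutex};

    use futures::future::{BoxFuture, FutureExt, Shared};

    struct AsyncCache<V: Clone + Send + 'static> {
        values: Arc<Mutex<HashMap<u64, V>>>,                              // stand-in for the sync LruCache
        pending: Arc<Mutex<HashMap<u64, Shared<BoxFuture<'static, V>>>>>, // in-flight fetches
    }

    impl<V: Clone + Send + 'static> AsyncCache<V> {
        fn new() -> Self {
            Self {
                values: Arc::new(Mutex::new(HashMap::new())),
                pending: Arc::new(Mutex::new(HashMap::new())),
            }
        }

        async fn access<F>(&self, key: u64, fetch: F) -> V
        where
            F: Future<Output = V> + Send + 'static,
        {
            if let Some(value) = self.values.lock().unwrap().get(&key).cloned() {
                return value; // already cached
            }
            // Join an existing in-flight fetch for this key, or start a new one.
            let shared = self
                .pending
                .lock()
                .unwrap()
                .entry(key)
                .or_insert_with(|| fetch.boxed().shared())
                .clone();
            let value = shared.await;
            // Only now is the value cache modified, mirroring the commit message.
            self.values.lock().unwrap().insert(key, value.clone());
            self.pending.lock().unwrap().remove(&key);
            value
        }
    }

    async fn expensive_fetch(key: u64) -> Vec<u8> {
        vec![key as u8; 4] // stand-in for reading/decoding a chunk
    }

    #[tokio::main]
    async fn main() {
        let cache = AsyncCache::new();
        // The second call for key 1 awaits the same shared future as the first.
        let (a, b) = tokio::join!(cache.access(1, expensive_fetch(1)), cache.access(1, expensive_fetch(1)));
        assert_eq!(a, b);
    }
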
400885e620 tools/BroadcastFuture: add testcase for better understanding
Explicitly test that data will stay available and can be retrieved
immediately via listen(), even if the future producing the data and
notifying the consumers was already run in the past.

Wasn't broken or anything, but helps with understanding IMO.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:29 +02:00
f960fc3b6f fix #3433: use PVE's wearout logic in PBS
in PVE, the logic for how the wearout gets read from the smartctl
output was changed from a vendor -> id map to a sorted list of
specific attribute field names.

copy that list to PBS (in the same order), and use it to get the
wearout.

in the future we might want to split the disk logic into its own crate
and reuse it in PVE

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-08 08:31:37 +02:00
ddfa4d679a ui: tape: DriveSelector: make wider and fine-tune column flex
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-07 16:57:45 +02:00
10e8026786 ui: tape: DriveSelector: code cleanup, group config together
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-07 16:30:57 +02:00
2527c039df ui: tape: TapeBackupJob: use correct default value for pbsUserSelector
if we want the empty value as a valid default value in a combogrid,
we have to explicitly select 'null', otherwise the field will be
marked as dirty

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-07 16:30:29 +02:00
93d8a2044e ui: tape: DriveSelector: do not autoselect the drive
in case an invalid drive was configured, the field is now marked
invalid instead of autoselecting the first valid one

the old behaviour could have led to users configuring the wrong drive
in a tape-backup-job when they edited one

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-07 16:30:13 +02:00
d2354a16cd client/pull: log snapshots that are skipped because of time
we skip snapshots that are older than the newest snapshot of the group
in the target datastore; log that, so the user knows why they are not
synced

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-07 10:51:25 +02:00
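
The skip-and-log behaviour described above, as a tiny illustration with simplified types (the real code works on the backup group/snapshot types during pull):

    struct SnapshotInfo {
        group: String,
        backup_time: i64, // unix epoch
    }

    /// Skip snapshots older than the newest snapshot the target datastore
    /// already has for this group, and tell the user why.
    fn sync_group(remote_snapshots: &[SnapshotInfo], newest_local_time: Option<i64>) {
        for snap in remote_snapshots {
            if let Some(newest) = newest_local_time {
                if snap.backup_time < newest {
                    println!(
                        "skipping snapshot {}/{}: older than the newest local snapshot of this group",
                        snap.group, snap.backup_time
                    );
                    continue;
                }
            }
            // the actual per-snapshot pull would run here
        }
    }

    fn main() {
        let remote = vec![
            SnapshotInfo { group: "vm/100".into(), backup_time: 1_620_000_000 },
            SnapshotInfo { group: "vm/100".into(), backup_time: 1_620_100_000 },
        ];
        sync_group(&remote, Some(1_620_050_000)); // the first one gets skipped and logged
    }
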
34ee1f1c76 ui: DataStoreList: add remove button
so that a user can remove a datastore from the GUI. Note that no data
is deleted; that has to be done elsewhere (for now)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-04 09:41:52 +02:00
2de4dc3a81 backup/chunk_store: optionally log progress on creation
and enable it for the worker variants

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-04 09:32:09 +02:00
b90036dadd cleanup: factor out config::datastore::lock_config() 2021-06-04 09:04:14 +02:00
4708f4fc21 api2/config/datastore: change create datastore api call to a worker
so that longer-running creates (e.g. on slow storage) do not run into
a timeout, and we can follow the creation

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-06-04 09:02:05 +02:00
062cf75cdf proxmox-backup-proxy: fix leftover references on datastore removal
when we remove a datastore via API/CLI, the proxy sometimes has
leftover references to that datastore in its DATASTORE_MAP, which
includes an open filehandle on the '.lock' file

this prevents unmounting/exporting the datastore even after removal;
previously only a reload/restart of the proxy helped

add a command to our command socket which removes all non-configured
datastores from the map, dropping the open filehandle

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-06-04 08:22:53 +02:00
e5950360ca tape/drive: improve tape device locking behaviour
by implementing a custom error type that is either 'TimeOut' or
'Other'.

In the API, check in the worker loop for exactly 'TimeOut' errors and
continue only then. All other errors lead to an aborted task.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-02 17:08:00 +02:00
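
A sketch of the error split described above; the enum and function names are illustrative, not the actual tape/drive API. The worker loop keeps retrying only on the timeout variant and aborts on anything else.

    use std::fmt;

    #[derive(Debug)]
    enum TryLockError {
        TimeOut,
        Other(String),
    }

    impl fmt::Display for TryLockError {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            match self {
                TryLockError::TimeOut => write!(f, "timeout while locking tape device"),
                TryLockError::Other(err) => write!(f, "{}", err),
            }
        }
    }

    fn try_lock_tape_device(attempt: u32) -> Result<(), TryLockError> {
        // stand-in: pretend the lock becomes free on the third attempt
        if attempt < 3 { Err(TryLockError::TimeOut) } else { Ok(()) }
    }

    fn main() -> Result<(), TryLockError> {
        let mut attempt = 0;
        loop {
            match try_lock_tape_device(attempt) {
                Ok(()) => break,                    // got the lock, start working
                Err(TryLockError::TimeOut) => {     // only timeouts keep the task alive
                    attempt += 1;
                    continue;
                }
                Err(other) => return Err(other),    // everything else aborts the task
            }
        }
        println!("lock acquired after {} attempts", attempt + 1);
        Ok(())
    }
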
5b358ff0b1 server/prune_job: fix locking during prune jobs
removing the backup dir must acquire the snapshot lock, else it can
happen that we remove a snapshot while it is being restored
or backed up to tape

the original commit that adds the force flag
(c9756b40d1)
mentions that the prune operation itself checks whether the snapshot
is in use, but I could not find such code, so simply set force to false

to avoid failing and aborting the prune job, warn if a snapshot could
not be removed and continue

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-02 17:04:49 +02:00
4c00391d78 ui: dashboard/TaskSummary: add type 'close' to the close tool
otherwise the button is not visible

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-01 16:38:07 +02:00
9594362e35 ui: datastore/DataStoreListSummary: catch and show errors per datastore
so that the update does not get canceled because of a bad datastore;
hide the irrelevant fields in that case

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-01 16:38:06 +02:00
3420029b5e Revert "file-restore-daemon: work around tokio DuplexStream bug"
This reverts commit 75f9f40922, which is
no longer needed now that we use tokio >= 1.6 which contains the proper
fix.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-06-01 10:31:19 +02:00
f432a1c927 bump tokio dependency to 1.6
it contains a bug fix that allows dropping the workaround in

75f9f40922 file-restore-daemon: work around tokio DuplexStream bug

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-06-01 10:30:57 +02:00
e8b32f2d87 bump version to 1.1.9-1 2021-06-01 08:27:18 +02:00
3e3b505cc8 reorder serde usage/derive
this is deprecated with rustc 1.52+, and will become a hard error at
some point:

https://github.com/rust-lang/rust/issues/79202

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-31 14:53:08 +02:00
0bca966ec5 fix typo: s/dies/does/ 2021-05-31 11:01:15 +02:00
84737fb33f lto/sg_tape/encryption: remove non lto-4 supported byte
from the SspDataEncryptionCapabilityPage

it seems we do not need it, since the EXTDECC flag is only used for
determining whether the drive can be configured via
ADI (Automation/Drive Interface), which we do not use at all.

this makes the call work with LTO-4 again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-31 10:58:38 +02:00
e21a15ab17 ui: tape: s/Restore Wizard/Restore/
Mostly to avoid an extra translation text for basically no gain.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-27 11:26:46 +02:00
90066d22a0 ui: MainView: use new beforeChangePath signature
subpath can be optional in extjs 7.0, so handle that

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 19:18:40 +02:00
dbf5dad1c4 ui: css: fix text-align pmx-button-badge
this was previously set on the button class, but has since been removed;
add it here to have the badge number centered again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 19:18:40 +02:00
c793da1edc ui: MainView: navigation: use different ui class
by default the treelist gets the 'nav' ui, which in newer extjs
versions has a custom styling (unlike before)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 19:18:40 +02:00
f8735e5988 ui: datastore/Summary: change destroy listener
by using beforedestroy instead of destroy (like we do everywhere else),
to avoid a race condition where the controller has already removed
some handlers on destruction

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 19:18:40 +02:00
e9805b2486 ui: panel/UsageChart: change downloadServerUrl
to not have the sencha url by default

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 19:18:40 +02:00
eb90405a78 ui: form/CalendarEvent: do not set displayField
we use displayTpl here; setting displayField would override the template

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 19:18:40 +02:00
ecf5f468c3 ui: MainView: do not use unnecessary panels
using a container here is fine; we do not need the panel behaviour,
which is more bloated. Removes two ARIA warnings.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 19:18:40 +02:00
51aee8cac8 ui: tape restore wizard: set emptyText to media set selector
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-26 19:10:16 +02:00
7d5049c350 ui: tape restore wizard: always show snapshot grid
looks (almost confusingly) empty otherwise, and there is no real
disadvantage in showing the disabled grid until a media-set is
selected and loaded

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-26 19:09:36 +02:00
01a99f5651 ui: tape overview: rename to "Restore Wizard" and use icons
To create a correlation with the restore action column

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-26 19:05:31 +02:00
2914e99ff3 ui: tape overview: use correct icon for Media-Pools
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-26 19:04:07 +02:00
f9b824ac30 ui: tape overview: include more context in restore tooltips
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-26 19:03:35 +02:00
9a535ec77b ui: tape restore: small code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-26 19:00:12 +02:00
ffba023c91 ui: tape/TapeRestore: fix some properties
remove leftover from when it was an Proxmox.window.Edit, and
add the missing 'modal'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 18:10:57 +02:00
e01689978e ui: tape/TapeRestore: allow preselecting a datastore
for that we need to split the prefilter additions, otherwise we always
filter the snapshots too, and passing 'undefined' as filter would
filter all snapshots...

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 18:10:57 +02:00
68ac8976eb ui: tape/TapeRestore: don't send snapshotlist when restoring whole datastores
for the case that the user selects only whole datastores, we do not
want to send an (exhaustive) list of snapshots to be restored, but
only honor the mapping the user gives

this avoids using the backup restore codepath that iterates twice
over the tapes and would generally be slower for a lot of snapshots

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 18:10:57 +02:00
afb790db73 ui: tape/BackupOverview: add generic 'Restore' button
this will open the restore window without anything preselected

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 18:10:57 +02:00
0732de361a ui: tape/TapeRestore: add MediaSetSelector
when no uuid/media-set is given.
We change how we use the uuid a bit by moving it into the viewmodel
(instead of a simple property on the view) so that we can always
use the selected one

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 18:10:57 +02:00
d455270fa1 ui: tape: add MediaSetSelector
so that we can let the user select a media-set

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 18:10:57 +02:00
1336be16c9 ui: tape/BackupOverview: rename action column to restore
to make it clear that this button is for restore and for
now we do not have any plans to add buttons here

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 18:10:57 +02:00
03380db560 api2/tape: add api call to list media sets
we want a 'media-set' selector in the GUI; this makes that very easy
to do and is not as costly as reusing the media list, since we do not
need to iterate over all media (e.g. unassigned ones)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 18:10:57 +02:00
927ebc702c ui: tape/BackupOverview: expand pools by default
normally, users will not have many tape media pools,
and are more interested in the actual media-sets, so
expand those nodes by default

if the list gets very long, the user can collapse some pools anyway

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 18:10:57 +02:00
c24cb13382 api: node/journal: fix parameter extraction of /nodes/node/journal
by extracting them via the api macro into the function signature

this fixes an issue where the given 'since' and 'until' were not
used, since we tried to extract them as 'str' while they were numbers.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-25 13:26:51 +02:00
3a804a8a20 file-restore-daemon: limit concurrent download calls
While the issue with vsock packets starving kernel memory is mostly
worked around by the '64k -> 4k buffer' patch in
'proxmox-backup-restore-image', let's be safe and also limit the number
of concurrent transfers. 8 downloads per VM seems like a fair value.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-25 11:56:43 +02:00
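
One way to express such a per-VM limit, sketched with tokio's Semaphore; the commit does not say which primitive the daemon actually uses, so treat this purely as an illustration.

    use std::sync::Arc;
    use tokio::sync::Semaphore;

    const MAX_CONCURRENT_DOWNLOADS: usize = 8;

    async fn handle_download(limit: Arc<Semaphore>, path: String) {
        // Wait here if 8 downloads are already running; the permit is released
        // automatically when `_permit` goes out of scope.
        let _permit = limit.acquire().await.expect("semaphore closed");
        println!("extracting {path}");
        // ... stream the file to the client ...
    }

    #[tokio::main]
    async fn main() {
        let limit = Arc::new(Semaphore::new(MAX_CONCURRENT_DOWNLOADS));
        let mut tasks = Vec::new();
        for i in 0..20 {
            let limit = Arc::clone(&limit);
            tasks.push(tokio::spawn(handle_download(limit, format!("/disk{i}/file"))));
        }
        for task in tasks {
            task.await.unwrap();
        }
    }
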
1fde4167ea file-restore-daemon: watchdog: add inhibit for long downloads
The extract API call may be active for more than the watchdog timeout,
so a simple ping is not enough.

This adds an "inhibit" API, which will stop the watchdog from completing
as long as at least one WatchdogInhibitor instance is alive. Keep one in
the download task, so it will be dropped once it completes (or errors).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-25 11:56:43 +02:00
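
The inhibitor described above boils down to an RAII guard; a minimal sketch with an illustrative global counter (not the daemon's actual watchdog code):

    use std::sync::atomic::{AtomicUsize, Ordering};

    static INHIBITORS: AtomicUsize = AtomicUsize::new(0);

    /// While at least one of these is alive, the watchdog must not fire,
    /// even if the periodic ping is overdue.
    struct WatchdogInhibitor;

    impl WatchdogInhibitor {
        fn new() -> Self {
            INHIBITORS.fetch_add(1, Ordering::SeqCst);
            WatchdogInhibitor
        }
    }

    impl Drop for WatchdogInhibitor {
        fn drop(&mut self) {
            INHIBITORS.fetch_sub(1, Ordering::SeqCst);
        }
    }

    fn watchdog_expired(ping_overdue: bool) -> bool {
        ping_overdue && INHIBITORS.load(Ordering::SeqCst) == 0
    }

    fn main() {
        {
            // held for the duration of a long extract/download call
            let _inhibit = WatchdogInhibitor::new();
            assert!(!watchdog_expired(true)); // watchdog held off
        } // guard dropped here (download finished or errored)
        assert!(watchdog_expired(true));
    }
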
75f9f40922 file-restore-daemon: work around tokio DuplexStream bug
See this PR for more info: https://github.com/tokio-rs/tokio/pull/3756

As a workaround use a pair of connected unix sockets - this obviously
incurs some overhead, albeit not measureable on my machine. Once tokio
includes the fix we can go back to a DuplexStream for performance and
simplicity.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-25 11:56:43 +02:00
e9c2638f90 apt: fix removal of non-existent http-proxy config
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-25 11:54:46 +02:00
338c545f85 tasks: fix typos in API description
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2021-05-25 07:54:57 +02:00
e379b4a31c file-restore-daemon: disk: add RawFs bucket type
Used to specify a filesystem placed directly on a disk, without a
partition table in between. Detected by simply attempting to mount the
disk itself.

A helper "make_dev_node" is extracted to avoid code duplication.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-25 07:53:22 +02:00
3d7ca2bdb9 file-restore-daemon: disk: allow arbitrary component count per bucket
A bucket might contain multiple (or 0) layers of components in its path
specification, so allow a mapping between bucket type strings and
expected component depth. For partitions, this is 1, as there is only
the partition number layer below the "part" node.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-25 07:53:22 +02:00
d34019e246 file-restore-daemon: disk: ignore "invalid fs" error
Mainly just causes log spam, we print a more useful error in the end if
all mounts fail anyway.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-25 07:53:22 +02:00
7cb2ebba79 bump version to 1.1.8-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 18:25:31 +02:00
4e8581950e cargo: bump proxmox-http version to 0.2.1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 18:25:31 +02:00
2a9a3d632e ui: config: move node ops (http-proxy) into existing "Authentication"
Mainly because Config -> Options is a weird name: Authentication has
only one object grid, the node options are only the http-proxy for now,
and that is a sort of authentication, so this is good enough for now,
but it should be rethought for 2.0 and/or once more node options are added

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 18:23:35 +02:00
b6d07fa038 d/control: bump versioned dependency for proxmox-widget-toolkit
for the new gridRows feature the ObjectGrid gained.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 18:23:35 +02:00
4599e7959c ui: rework node-config to static
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 18:23:35 +02:00
82ed13c7d7 ui: add node options under 'Configuration -> Options'
for now only http-proxy lives there, but we will add more options later,
such as
* email from
* default gui language

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-21 18:23:35 +02:00
5aaa81ab89 docs: add short initial http-proxy docs
better than nothing and something to point to in the UI

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 18:23:35 +02:00
8a06d1935e ui: webauthn view: code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 18:23:35 +02:00
f44254b4bd ui: hyphenate "Media-Set" to make it clearer that its one noun
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 15:53:50 +02:00
07875ce13e ui: tape content: set icon-class for reload button
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 15:51:57 +02:00
98dc770efa ui: tape restore: drop (now) unused references
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 15:45:15 +02:00
8848f1d487 ui: tape restore: avoid component/value lookup, use parameters
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 15:44:16 +02:00
5128ae48a0 tape: restore: cope with not fully instantiated components
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 15:24:52 +02:00
104ae6093a ui: tape: small code/style cleanups
It was not actually bad, so these are quite opinionated changes to be
honest, but at least xtype props must go first, and variable
declarations should be as near as possible to the actual use, as long
as the code stays sensibly readable/short.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 15:19:51 +02:00
e830d63f6a ui: tape restore: update datastore map emptyText depending on default
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 14:31:05 +02:00
ce32cd487a ui: webauthn: drop bogus destroy stopStore call
will only result in an exception (in debug mode at least)...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 14:29:11 +02:00
f36c659365 ui: tape/BackupOverview: do not reload on restore
a restore does not change the tape content, so a reload has no benefit here.
since we're touching those lines anyway, also change to the 'autoShow' property

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-21 13:25:45 +02:00
47e5cbdb03 ui: tape/BackupOverview: also allow to filter by group for restore
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-21 13:25:45 +02:00
4923a76f22 ui: tape/window/TapeRestore: enable selecting multiple snapshots
by including the new snapshotselector. If a whole media-set is to be
restored, select all snapshots

to achieve this, we drop the 'restoreid' and 'datastores' properties
for the restore window, and replace them by a 'prefilter' object
(with 'store' and 'snapshot' properties)

to be able to show the snapshots, we now have to always load the
content of that media-set, so drop the short-circuit if we have
the datastores already.

change the layout of the restore window into a two-step window,
so that the first tab is the selection of what to restore, and on the
second tab the user chooses where to restore (drive, datastore, etc.)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-21 13:25:45 +02:00
e01ca6a2dd ui: tape/TapeRestore: improve SnapshotGrid
* handle getErrors being called while not rendered
* return 'all' as value if all snapshots were selected
  (for better distinction)
* remove the default height
* add checkChange on the store's filterChange
  (now change also fires on gridfilter plugin changes)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-21 13:25:45 +02:00
5e989333cd ui: tape/TapeRestore: fix small DataStoreMappingGrid bugs
enable scrolling by default, and handle the case that getErrors gets
called when the component is not yet rendered

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-21 13:25:45 +02:00
af39c399bc ui: dashboard statistics: visualize datastores where querying the usage failed
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 13:22:07 +02:00
64591e731e api: status: graceful-degrade when a datastore lookup fails
This can happen if the underlying storage failed, in which case we do
not want to fail the whole API call, as it should report the status
of all datastores. So rather add the error inline to the related
store entry and continue.

Allows to nicely visualize those stores in the gui.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 13:20:55 +02:00
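
A sketch of the degrade-instead-of-fail pattern from the commit above; the field and function names are placeholders for the actual status API types. The lookup error is stored inline in the per-store entry and the loop keeps going.

    struct DataStoreStatus {
        store: String,
        total: Option<u64>,
        used: Option<u64>,
        error: Option<String>,
    }

    fn lookup_usage(store: &str) -> Result<(u64, u64), String> {
        // stand-in for the datastore lookup + statfs on the underlying storage
        if store == "broken" { Err("unable to open chunk store".into()) } else { Ok((100, 42)) }
    }

    fn datastore_status(stores: &[&str]) -> Vec<DataStoreStatus> {
        stores
            .iter()
            .map(|&store| match lookup_usage(store) {
                Ok((total, used)) => DataStoreStatus {
                    store: store.to_string(),
                    total: Some(total),
                    used: Some(used),
                    error: None,
                },
                // Do not fail the whole API call: report the error for this
                // store and continue with the others, so the UI can mark it.
                Err(err) => DataStoreStatus {
                    store: store.to_string(),
                    total: None,
                    used: None,
                    error: Some(err),
                },
            })
            .collect()
    }

    fn main() {
        for status in datastore_status(&["store1", "broken"]) {
            match status.error {
                Some(err) => println!("{}: unavailable ({})", status.store, err),
                None => println!("{}: {:?}/{:?}", status.store, status.used, status.total),
            }
        }
    }
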
5658504b90 dashboard statistics: prepare a more graceful error handling in datastore-usage
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 13:20:55 +02:00
64e0786aa9 api: datastore status: refactor reused rrd get-data code into closure
Nicer and shorter than just using a variable for the common parameters

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 13:20:55 +02:00
90761f0f62 api: datastore status: code cleanup, reduce indentation level
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 13:20:55 +02:00
74f74d1e64 ui: tape/window/TapeRestore: add SnapshotGrid Component
this will be used for letting the user select multiple, individual
snapshots on restore (instead of a single one or the whole media-set)

if a 'prefilter' object is given, we filter the grid by those
values using the gridfilter plugins (like in pve's bulk action windows)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-18 07:51:23 +02:00
4db4b9706c ui: tape/BackupOverview: move restore buttons inline
instead of having them in the toolbar. This makes the UI more consistent
with the datastore content view.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-18 07:51:23 +02:00
00a5072ad3 ui: tape/BackupOverview: fix wrong media-set text for singlerestore
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-18 07:51:23 +02:00
3d3d698bb3 buildsys: split long debcargo invocation into multiple lines
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-18 07:49:01 +02:00
1b9521bb87 bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-17 11:46:27 +02:00
1d781c5b20 update proxmox-http dependency
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-17 11:29:24 +02:00
8e8836d1ea d/control: update after http refactoring
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-17 11:02:57 +02:00
a904e3755d ui: datastore/Content: change group remove to SafeDestroy Window
so that a user does not accidentally remove a whole group instead
of a snapshot

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-17 10:42:03 +02:00
7ba99fef86 ui: datastore/Content: fix wrong tooltip for forgetting
sometimes it's a group, not a snapshot

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-17 10:41:44 +02:00
7d2be91bc9 move SimpleHttp to proxmox_http
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-17 10:32:33 +02:00
578895336a SimpleHttp: factor out product-specific bits
in preparation of moving the abstraction to proxmox_http

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-17 10:32:22 +02:00
8c090937f5 move tools::http to proxmox_http
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-17 10:31:54 +02:00
4229633d98 move ProxyConfig to proxmox_http
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-17 10:31:27 +02:00
3ed7e87538 HttpsConnector: make keepalive configurable
it's the only PBS-specific part in there, so let's make it
product-agnostic before moving it off to proxmox-http.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-17 10:31:15 +02:00
5b43cc4487 move MaybeTlsStream wrapper to proxmox_http
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-17 10:30:05 +02:00
3241392117 refactor: move socket helper to proxmox crate
and constant to tools module.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-17 10:29:42 +02:00
c474a66b41 move websocket to new 'proxmox_http' crate
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-17 10:26:41 +02:00
b32cf6a1e0 ui: datastore/Content: add forget button for groups
since we can remove whole groups via api now

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-17 08:45:10 +02:00
f32791b4b2 api2/admin/datastore: add delete for groups
so that a user can delete a whole group at once; until now, the
fastest way for this was to prune down to one snapshot and then delete
that one

the code is basically a copy/paste from the snapshot delete, sans
the 'backup-time' parameter

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-17 08:45:10 +02:00
8f33fe8e59 d/control: update proxmox tools dependency
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-14 13:19:20 +02:00
d19010481d tape/test: repair tests after changing 'start_write_session'
I added a parameter and forgot to adapt the tests

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-14 10:01:54 +02:00
6b11524a8b ui: tape: add 'Force new Media Set' checkbox to manual backup
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-14 08:58:46 +02:00
e953029e8f api2/tape/backup: add 'force-media-set' parameter to manual backup
so that a user can force a new media-set, e.g. if they use the
allocation policy 'continue' but want to manually start a new
media-set.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-14 08:58:33 +02:00
10f788b7eb ui: tape: ChangerStatus fixup for empty barcode
an empty barcode means that the label-text is '', not undefined;
we forgot this line

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-14 08:48:10 +02:00
9348544e46 ui: tape: TapeRestoreWindow: fix button text
s/Create/Restore/

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-12 21:20:20 +02:00
126ccbcfa6 acme: improve errors when account loading fails
if the account does not exist, error with its name
if file loading fails, the error includes the full path
if the content fails to parse, show file & parse error
and in each case mention that it's about loading the acme account file

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-12 12:22:21 +02:00
440472cb32 correctly set apt proxy configuration 2021-05-12 12:19:24 +02:00
4ce7da516d reload cert inside command socket handler 2021-05-12 12:03:27 +02:00
a7f8efcf35 ui: add task descriptions for ACME related tasks
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-11 18:08:10 +02:00
9fe4c79005 api: acme accounts: use name as worker ID
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-11 18:07:03 +02:00
f09f4d5fd5 config: acme: rust format
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-11 17:35:54 +02:00
38b4f9b534 config: acme: fall-back to the "default" account
syncs the behavior with both the displayed state in the PBS
web-interface and the behavior of PVE/PMG.

Without this, a standard setup would result in an error like:
> TASK ERROR: no acme client configured

which was pretty confusing, as the actual error was something else
(no account configured), and the web-interface showed "default" as
selected account, so a user had no idea what actually was wrong and
how to fix it.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-11 17:33:07 +02:00
fca1cef29f hot-reload proxy certificate when updating via the API
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-11 16:28:08 +02:00
45b8a0327f refactor send_command
- refactor the combinators,
- make it take a `&T: Serialize` instead of a Value, and
  allow sending the raw string via `send_raw_command`.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-11 16:28:08 +02:00
a723c08715 proxy: implement 'reload-certificate' command
to be used via the command socket

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-11 16:28:08 +02:00
c381a162fb proxy: factor out tls acceptor creation
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-11 16:28:08 +02:00
b4931192c3 proxy: Arc usage cleanup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-11 16:28:08 +02:00
cc269b9ff9 proxy: "continue on error" for the accept call, too
as this gets rid of 2 levels of indentation

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-11 16:28:08 +02:00
a5e3be4992 proxy: factor out accept_connection
no functional changes, moved code and named the channel's
type for more readability

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-11 16:28:08 +02:00
137309cc4e bump version to 1.1.7-1 2021-05-11 13:23:29 +02:00
85f4e834d8 client: use stderr for all fingerprint confirm msgs
an interactive client might still want machine-readable output on
stdout.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-11 13:12:54 +02:00
065013ccec client: refactor verification callback
return a result with optional fingerprint instead of tuple, allowing
easy extraction of a meaningful error message.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-11 13:12:54 +02:00
56d98ba966 client: improve fingerprint variable names
and pass as reference instead of cloning.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-11 13:12:54 +02:00
dda1b4fa44 fix #3391: improve mismatched fingerprint handling
if the expected fingerprint and the one returned by the server don't
match, print a warning and allow confirmation and proceeding if running
interactive.

previous:

$ proxmox-backup-client ...
Error: error trying to connect: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:../ssl/statem/statem_clnt.c:1915:

new:

$ proxmox-backup-client ...
WARNING: certificate fingerprint does not match expected fingerprint!
expected:    ac:cb:6a:bc:d6:b7:b4:77:3e:17:05:d6:b6:29:dd:1f:05:9c:2b:3a:df:84:3b:4d:f9:06:2c:be:da:06:52:12
fingerprint: ab:cb:6a:bc:d6:b7:b4:77:3e:17:05:d6:b6:29:dd:1f:05:9c:2b:3a:df:84:3b:4d:f9:06:2c:be:da:06:52:12
Are you sure you want to continue connecting? (y/n): n
Error: error trying to connect: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:../ssl/statem/statem_clnt.c:1915:

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-11 13:12:54 +02:00
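
A simplified, standalone sketch of the interactive confirmation flow shown above (not the client's actual TLS verification callback); warnings and the prompt go to stderr, matching the "use stderr for all fingerprint confirm msgs" commit above, so stdout stays machine-readable.

    use std::io::{self, BufRead, IsTerminal, Write};

    fn confirm_fingerprint(expected: &str, actual: &str) -> bool {
        if expected.eq_ignore_ascii_case(actual) {
            return true;
        }
        eprintln!("WARNING: certificate fingerprint does not match expected fingerprint!");
        eprintln!("expected:    {expected}");
        eprintln!("fingerprint: {actual}");

        if !io::stdin().is_terminal() {
            return false; // non-interactive: never accept a mismatch
        }
        eprint!("Are you sure you want to continue connecting? (y/n): ");
        io::stderr().flush().ok();
        let mut answer = String::new();
        if io::stdin().lock().read_line(&mut answer).is_err() {
            return false;
        }
        matches!(answer.trim(), "y" | "Y" | "yes")
    }

    fn main() {
        let ok = confirm_fingerprint(
            "ac:cb:6a:bc", // expected (truncated for the example)
            "ab:cb:6a:bc", // what the server presented
        );
        println!("accepted: {ok}");
    }
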
68b102269f ui: tape: add single snapshot restore
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-11 12:57:33 +02:00
0ecdaa0dc0 bin/proxmox-tape: add optional snapshots to restore command
and add the appropriate completion helper

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-11 12:57:14 +02:00
13f435caab tape/inventory: add completion helper for tape snapshots
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-11 12:56:55 +02:00
ff99780303 api2/tape/restore: add optional snapshots to 'restore'
this makes it possible to restore only some snapshots from a tape
media-set instead of the whole set. If the user selects only a small
part, this will probably be faster (and definitely uses less space on
the target datastores).

the user has to provide a list of snapshots to restore in the form of
'store:type/group/id'
e.g. 'mystore:ct/100/2021-01-01T00:00:00Z'

we achieve this by first restoring the index to a temp dir and
retrieving a list of chunks; then, using the catalog, we generate a
list of media/files that we need to (partially) restore.

finally, we copy the snapshots to the correct dir in the datastore
and clean up the temp dir

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-11 12:53:38 +02:00
fa9507020a api2/tape/restore: refactor restore code into its own function
and create the 'email' and 'restore_owner' variables at the beginning,
so that we can reuse them and do not have to pass their sources
through too many functions

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-11 12:53:25 +02:00
1bff50afea tape locate_file: fix off by one error 2021-05-11 12:37:04 +02:00
37ff72720b docs/api-viewer: improve rendering of array format
by showing
'[format, ...]'
where 'format' is the simple format from the type of the items

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-11 09:12:55 +02:00
2d5d264f99 tape/pool_writer: do not unwrap on channel send
if the reader thread is already gone at this point, we would panic,
resulting in a nondescript error message; so simply ignore/warn in
that case and return gracefully

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-11 09:07:45 +02:00
c9c07445b7 ui: window/SyncJobEdit: disable autoSelect for remote datastore
when changing the remote, there is a high chance that it has different
datastores, and if a user does not pay attention, the first store of
the new remote gets selected instead of the one with the same name

disable autoSelect and let the user manually select a remote datastore

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-10 16:56:42 +02:00
a4388ffc36 ui: tape: rename 'Datastore' to 'Target Datastore'
we have 2 modes in that window:
* backup has multiple datastores
* backup has single datastore

In the first case we show a 'mapping' grid so that
the user can only restore a part. Here a user sees all source
Datastores and can select a target for each one.

In the second case we only have a single 'Datastore' selector, but
we do not show the source. Because of this, the naming is slightly ambiguous
(is it the 'Source' or the 'Target' ?), so rename it to 'Target Datastore'.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-10 16:56:37 +02:00
ea1458923e manager: acme plugin: auto-complete available DNS challenge types
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 15:55:49 +02:00
e857f1fae8 completion: ACME plugin type: comment out http type for now, not useful
It may make sense in the future, e.g., if the built-in standalone
type is not enough because HTTPS, HTTP/2 or even QUIC (HTTP/3)
is wanted in some setups, but for now there's no scenario where one
would profit from adding a new HTTP plugin, especially as it requires
the `data` property to be set, which makes no sense.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 15:50:08 +02:00
3ec42e81b1 manager: acme plugin: remove ID completion helper from add command
we cannot add a plugin with an existing ID so this completion helper
is rather counterproductive...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 15:47:37 +02:00
be1163acfe config: acme: drop now unused foreach_dns_plugin
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 14:41:44 +02:00
d308dc8af7 acme: use proxmox-acme-plugins and load schema from there
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 14:41:12 +02:00
60643023ad api: move AcmeChallengeSchema to acme types module
It will be reused in a later patch in another module which should not
depend on the actual API implementation (ugly and cyclic)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 14:39:07 +02:00
875d53ef6c api: acme: rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 11:56:38 +02:00
b41f9e9fec acme: fix bad nonce retry counter
Actually return the error on the 3rd try.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-10 11:52:04 +02:00
a1b71c3c7d fix #3296: use proxy client to retrieve changelog
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-05-10 08:48:52 +02:00
013fa2d886 fix #3296: use proxy for subscriptions
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-05-10 08:48:05 +02:00
72e311c6b2 fix 3296: add http_proxy to node config, and provide a cli
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-05-10 08:37:46 +02:00
2732c47466 cleanup src/api2/node/config.rs
- add return type
- fix permissions
- fix descriptions
2021-05-10 08:25:43 +02:00
0466089316 move api related type/regex definition from backup_info.rs to src/api2/types/mod.rs 2021-05-07 12:45:44 +02:00
5e42d38598 api2/types: add TAPE_RESTORE_SNAPSHOT_SCHEMA
which is 'store:type/id/time'

needed to refactor SNAPSHOT_PATH_REGEX_STR from backup_info

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-07 12:27:50 +02:00
82a4bb5e80 api2/tape/restore: return backup manifest in try_restore_snapshot_archive
we'll use that for partial snapshot restore

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-07 12:25:30 +02:00
94bc7957c1 progress: shorter format
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-07 12:14:37 +02:00
c9e6b07145 progress: add current group to output
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-07 12:14:23 +02:00
3c06eba17a docs: online help info: suppress warnings during scan
We get lots of warnings due to sphinx complaining about missing
includes for generated synopses. We do not reference any of those
for now, so we can ignore them and suppress all standard and
warning output.

Note: Errors are still reported.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-07 11:51:30 +02:00
8081e4aa7b fix #3331: improve progress for last snapshot in group
especially for the last group, without this the progress would report:

"percentage done: 100.00% (1 of 2 groups, 1 of 1 group snapshots)"

instead of the more logical

"percentage done: 100.00% (2 of 2 groups)"

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-07 11:20:17 +02:00
d8769d659e use build.rs to pass REPOID to rustc-env 2021-05-07 10:11:39 +02:00
572cd0381b file-restore: add debug mode with serial access
Set PBS_QEMU_DEBUG=1 on a command that starts a VM and then connect to
the debug root shell via:
  minicom -D \unix#/run/proxmox-backup/file-restore-serial-10.sock
or similar.

Note that this requires 'proxmox-backup-restore-image-debug' to work;
the postinst script is updated to also generate the corresponding image.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-07 10:00:12 +02:00
5e91b40087 d/control: update for cargo manifest update
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-07 09:26:41 +02:00
936eceda61 file-restore: support more drives
A PCI bus can only support up to 32 devices, so, excluding built-in
devices, that left us with a maximum of about 25 drives. By adding a new
PCI bridge every 32 devices (starting at bridge ID 2 to avoid conflicts
with automatic bridges), we can theoretically support up to 8096 drives.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-07 09:03:17 +02:00
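
A minimal sketch of the bridge/slot arithmetic described in the commit
above; the function name is illustrative, not the actual QEMU option
builder, and it assumes 32 slots per bridge with extra bridges starting
at ID 2:

  /// Map a zero-based drive index to a (bridge id, slot) pair.
  fn drive_pci_placement(drive_index: usize) -> (usize, usize) {
      let bridge_id = 2 + drive_index / 32; // a new bridge for every 32 drives
      let slot = drive_index % 32;
      (bridge_id, slot)
  }

  fn main() {
      for i in [0, 31, 32, 100] {
          let (bridge, slot) = drive_pci_placement(i);
          println!("drive {} -> pci bridge {}, slot {}", i, bridge, slot);
      }
  }
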
61c4087041 file-restore: add more RAM for VMs with many drives or debug
The guest kernel requires more memory depending on how many disks are
attached. 256 MiB seems to be enough for basically any reasonable and
unreasonable number of disks, though.

For debug instance, make it 1G, as these are never started automatically
anyway, and need at least 512MB since the initramfs (especially when
including a debug build of the daemon) is substantially bigger.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-07 09:03:17 +02:00
7d39e47182 file-restore: try to kill VM when stale
Helps to clean up a VM that has crashed, is not responding to vsock API
calls, but still has a running QEMU instance.

We always check the process commandline to ensure we don't kill a random
process that took over the PID.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-07 09:03:17 +02:00
c4e1af3069 make sure URI paths start with a slash
Otherwise we get an empty error message.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-07 08:46:47 +02:00
3e234af16e tape: improve inline docs for READ POSITION LONG 2021-05-06 11:45:40 +02:00
bbbf662d20 tape: use LOCATE(16) SCSI command
Turns out this works on LTO4 and newer.
2021-05-06 10:51:59 +02:00
25d78b1068 client: use build_authority in build_uri
so we don't need to also duplicate the IPv6 bracket logic

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-06 10:27:40 +02:00
78bf292343 call create_run_dir() at daemon startup 2021-05-06 10:23:54 +02:00
e5ef69ecf7 cleanup: split SimpleHttp client into extra file 2021-05-06 10:22:24 +02:00
b7b9a57425 api2/tape/restore: remove unnecessary params from (try_)restore_snapshot_archive
we do not need them

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-06 08:02:14 +02:00
c4a04b7c62 api2/tape/restore: factor out check_datastore_privs
so that we can reuse it

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-06 08:01:31 +02:00
2e41dbe828 tape/media_catalog: add helpers to look for snapshot/chunk files
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-06 07:58:03 +02:00
56d36ca439 tape/drive: add 'move_to_file' to TapeDriver trait
so that we can directly move to a specified file on the tape

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-06 07:55:08 +02:00
e0ba5553be http proxy: add necessary brackets for IPv6 proxy 2021-05-05 11:57:04 +02:00
8d6fb677c1 proxmox_restore_daemon: mount ntfs with 'utf8' option
otherwise, the kernel driver exposes file names as ISO 8859-1,
but we want to have them as UTF-8.

This mapping should always work, since UTF-16 can be cleanly converted
to UTF-8.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-05 11:07:31 +02:00
a2daecc25d client/http_client: add necessary brackets
if we are given a 'naked' IPv6 address without square brackets around it,
we need to add them ourselves, since the address is ambiguous otherwise
when we add the port.

e.g. giving 'fe80::1' as address we arrive at the url (with the default port)
'https://fe80::1:8007/'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-05-05 10:30:39 +02:00
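
A hedged sketch of the bracket handling described in the commit above;
`bracket_host` is an illustrative helper, not the actual client code:

  /// Wrap a "naked" IPv6 address in square brackets so that appending
  /// ":<port>" stays unambiguous; hostnames and IPv4 addresses pass through.
  fn bracket_host(host: &str) -> String {
      if host.contains(':') && !host.starts_with('[') {
          format!("[{}]", host)
      } else {
          host.to_string()
      }
  }

  fn main() {
      assert_eq!(bracket_host("fe80::1"), "[fe80::1]");
      assert_eq!(bracket_host("192.168.1.1"), "192.168.1.1");
      println!("https://{}:8007/", bracket_host("fe80::1"));
  }
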
ee0c5c8e01 use api_string_type macro
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-05-05 08:24:37 +02:00
ae5b1e188f docs: tape: clarify LTO-4/5 support
some features we need (e.g. READ POSITION long form) are only officially
available with LTO-5, but work on many LTO-4 drives, so move LTO-4 to
'best-effort' support.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-04 13:21:44 +02:00
49f9aca627 tape/restore: optimize chunk restore behaviour
by checking the 'checked_chunks' before trying to write to disk
and by doing the existence check in the parallel handler. This way,
we do not have to check the existence of a chunk multiple times
(if multiple source datastores get restored to the same target
datastore) and also we do not have to wait on the stat before reading
the next chunk.

We have to change the &WorkerTask to an Arc though, otherwise we
cannot log to the worker from the parallel handler

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-04 13:06:31 +02:00
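
An illustrative sketch of the 'checked_chunks' idea from the commit
above, using plain threads and a shared set; the real code uses
WorkerTask, the datastore chunk store and its own parallel handler:

  use std::collections::HashSet;
  use std::sync::{Arc, Mutex};
  use std::thread;

  type Digest = [u8; 32];

  fn restore_chunks(digests: Vec<Digest>) {
      // chunks already checked/written; shared between all workers
      let checked: Arc<Mutex<HashSet<Digest>>> = Arc::new(Mutex::new(HashSet::new()));
      let mut handles = Vec::new();
      for digest in digests {
          let checked = Arc::clone(&checked);
          handles.push(thread::spawn(move || {
              // only the first worker seeing a digest does the expensive stat/write
              if checked.lock().unwrap().insert(digest) {
                  // ... stat the chunk on disk and write it if missing ...
              }
          }));
      }
      for handle in handles {
          handle.join().unwrap();
      }
  }

  fn main() {
      restore_chunks(vec![[0u8; 32], [0u8; 32], [1u8; 32]]);
  }
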
4cba875379 bump version to 1.1.6-2 2021-05-04 12:25:16 +02:00
7ab4382476 update debian/control 2021-05-04 12:24:16 +02:00
eaef6c8d00 Revert "temporarily disable broken test"
This reverts commit 888d89e2dd.

The code this depends on should now be available.
2021-05-04 12:11:35 +02:00
95f3692545 fix permissions set in create_run_dir
This directory needs to be owned by the backup user instead
of root.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 12:11:35 +02:00
686173dc2a bump version to 1.1.6-1 2021-05-04 12:09:56 +02:00
39c5db7f0f move basic ACME types into src/api2/types/acme.rs
And rename AccountName into AcmeAccountName.
2021-05-04 11:32:18 +02:00
603aa09d54 tape restore: do not verify restored files
Because this is too slow and causes the tape motor to stop. Instead,
remove the verify_state from the manifest.
2021-05-04 11:05:32 +02:00
88aa3076f0 tape restore: add restore speed to logs 2021-05-04 11:05:32 +02:00
5400fe171c tape restore: write datastore in separate thread 2021-05-04 11:05:32 +02:00
87bf9f569f tape restore: split restore_chunk_archive
Split out a separate function scan_chunk_archive() for catalog restores.

Note: Required, because we need to optimize restore_chunk_archive() to
write to the datastore in separate threads (else the tape drive will stop during restore)
2021-05-04 11:05:32 +02:00
8fb24a2c0a daily-update: check acme certificates
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 09:43:50 +02:00
4b5d9b6e64 ui: add certificate & acme view
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 09:42:50 +02:00
72bd8293e3 add acme commands to proxmox-backup-manager
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 09:39:16 +02:00
09989d9963 add node/{node}/config api path
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 09:32:42 +02:00
4088d5bc62 add node/{node}/certificates api call
API like in PVE:

GET    .../info             => current cert information
POST   .../custom           => upload custom certificate
DELETE .../custom           => delete custom certificate
POST   .../acme/certificate => order acme certificate
PUT    .../acme/certificate => renew expiring acme cert

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 09:31:30 +02:00
d4b84c1dec add config/acme api path
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 09:30:49 +02:00
426847e1ce node config cleanups 2021-05-04 09:29:31 +02:00
79b902d512 add node config
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 09:29:31 +02:00
73c607497e cleanup acme client 2021-05-04 09:28:53 +02:00
f2f526b61d add acme client
This is the high-level part using proxmox-acme-rs to create
requests and our hyper code to issue them to the ACME
server.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 07:56:52 +02:00
cb67ecaddb add acme config
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 07:43:43 +02:00
5bf9b0b0bb docs: user-management: add note about untrusted certificates for webauthn
Currently it works fine with untrusted certs, but that may change
at any time.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-03 12:01:23 +02:00
7a61f89e5a tape backup job: fix typo in error message
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-03 12:01:23 +02:00
671c6a96e7 bin: use extract_output_format where necessary
otherwise we sometimes forget to remove it from the 'params' variable
and keep using it, running into 'invalid parameter' errors

found by passing the 'output-format' parameter to proxmox-tape status

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-03 08:58:35 +02:00
f0d23e5370 add ctime and size function to IndexFile trait
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2021-04-30 11:40:45 +02:00
d1bee4344d ui: tape: handle tapes in changers without barcode
by checking whether the label is defined (tapes without a barcode
have an empty string as label-text) and falling back to the
source slot for the load action

Note: Changed the load-slot API from PUT to POST

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-04-30 10:23:53 +02:00
d724116c0c add dns alias schema
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-30 08:10:57 +02:00
888d89e2dd temporarily disable broken test
this test was added before the NodeConfig schema it uses was committed,
so it cannot work...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-29 16:18:20 +02:00
a6471bc346 bump version to 1.1.5-3
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-29 15:26:24 +02:00
6b1da1c166 file restore: log which filesystems we support
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-29 15:24:45 +02:00
18210d8958 file-restore: use 'norecovery' for xfs filesystem
This allows mounting XFS partitions with 'dirty' states, like from a
running VM. Otherwise XFS tries to write recovery information, which
fails on a read-only mount.

Tested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-29 15:09:09 +02:00
bc5c1a9aa6 add 'config file format' to tools::config
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-29 12:11:54 +02:00
3df77ef5da config::acl: make /system/certificates a valid path
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-29 12:08:00 +02:00
e8d9d9adfa bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-29 12:05:22 +02:00
01d152720f Cargo.toml: depend on proxmox-acme-rs
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-29 12:04:28 +02:00
5e58381ea9 catalog shell: replace LoopState with ControlFlow
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-29 11:17:21 +02:00
0b6d9442bd tools: add ControlFlow type
modeled after std::ops::ControlFlow

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-29 11:15:20 +02:00
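
A sketch of what such a ControlFlow helper might look like, modeled
after std::ops::ControlFlow; the iteration helper below is only an
illustration, not the actual tools code:

  enum ControlFlow<B, C = ()> {
      Continue(C),
      Break(B),
  }

  /// Walk items until the callback says Break, returning the break value.
  fn for_each_until<T, B>(items: &[T], mut f: impl FnMut(&T) -> ControlFlow<B>) -> Option<B> {
      for item in items {
          match f(item) {
              ControlFlow::Break(b) => return Some(b),
              ControlFlow::Continue(()) => {}
          }
      }
      None
  }

  fn main() {
      let first_even = for_each_until(&[1, 3, 4, 5], |&i| {
          if i % 2 == 0 { ControlFlow::Break(i) } else { ControlFlow::Continue(()) }
      });
      assert_eq!(first_even, Some(4));
  }
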
134ed9e14f CertInfo: add is_expired_after_epoch
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-29 11:07:04 +02:00
0796b642de CertInfo: add not_{after, before}_unix
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-29 10:59:02 +02:00
f912ba6a3e config: factor out certificate writing
for reuse in the certificate api

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-28 12:58:41 +02:00
a576e6685b tools::fs::scan_subdir: use nix::Error instead of anyhow
allows using SysError trait on it

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-28 12:32:57 +02:00
b1c793cfa5 systemd: add reload_unit
via try-reload-or-restart

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-28 12:15:26 +02:00
c0147e49c4 tools/http: make user agent configurable 2021-04-28 12:15:26 +02:00
d52b120905 tools/http: set USER_AGENT inside request 2021-04-28 12:15:26 +02:00
84c8a580b5 bump version to 1.1.5-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-28 11:35:12 +02:00
467bd01cdf api: add schema for http proxy configuration - HTTP_PROXY_SCHEMA 2021-04-28 11:23:06 +02:00
7a7fcb4715 http: add helper to parse proxy configuration 2021-04-28 11:23:06 +02:00
cf8e44bc30 HttpsConnector: add proxy authorization support 2021-04-28 11:23:06 +02:00
279e7eb497 buildsys: add pbs-client repo in upload target
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-28 09:41:45 +02:00
606828cc65 file-restore: strip .img.fidx suffix from drive serials
Drive serials have a character limit of 20, longer names like
"drive-virtio0.img.fidx" or "drive-efidisk0.img.fidx" would get cut off.

Fix this by removing the suffix, it is not necessary to uniquely
identify an image.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-27 16:41:29 +02:00
aac424674c bump version to 1.1.5-1 2021-04-27 12:21:08 +02:00
8fd1e10830 tools/sgutils2: add size workaround for mode_sense
Some drives will always return the number of bytes given in the
allocation_length field, but correctly report the data len in the mode
sense header. Simply ignore the excess data.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-27 11:37:03 +02:00
12509a6d9e tape: improve inline docs 2021-04-27 11:37:03 +02:00
5e169f387c tape: add read_medium_configuration_page() to detect WORM media
And use it inside format_media().
2021-04-27 11:37:03 +02:00
8369ade880 file-restore: fix package name for kernel/initramfs image
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-27 11:17:38 +02:00
73cef112eb tape: remove MediumType struct, which is only valid on IBM drives
HP drives do not return this information.

Note: This breaks format on WORM media, because we have no way
to detect WORM media (how?).
2021-04-27 09:58:27 +02:00
4a0132382a bump version to 1.1.4-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-27 08:41:05 +02:00
6ee69fccd3 tools/sgutils2: improve error messages
include the expected and unexpected sizes in the error message,
so that it's easier to debug in case of an error

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-27 08:24:50 +02:00
a862835be2 file-restore: use less memory for VM and reboot on panic
With the vsock-pkt-buffer fix in proxmox-backup-restore-image, we can
use way less memory for the VM without risking any crashes. 128 MiB
seems to be the lowest it will go and still be fully reliable.

While at it, add the "panic=1" argument to the kernel command line, so
in case the kernel *does* run out of memory, it will at least restart
automatically.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-26 15:46:37 +02:00
ddbd63ed5f file-restore: exit with code 1 in case streaming fails
This way the task gets marked as "failed" in PVE.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-26 15:46:37 +02:00
6a59fa0e18 file-restore: add size to image files and components
Read image sizes (.pxar.fidx/.img.didx) from manifest and partition
sizes from /sys/...

Requires a change to ArchiveEntry, as DirEntryAttribute::Directory
does not have a size associated with it (and that's probably good).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-26 15:46:37 +02:00
1ed9069ad3 http proxy: improve response parser
Avoid strange error message in case of connect error (only parse status + headers).
We are not interested in the response body, so simply ignore it.
2021-04-26 11:21:11 +02:00
a588b67906 api2/config/datastore: use update_job_last_run_time for schedules
this way, the API call does not error out when the file is currently
locked (which means the job is running and we do not need
to update the time)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-26 10:51:06 +02:00
37a634f550 server/jobstate: improve name of 'try_update_state_file'
and improve comment

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-26 10:50:36 +02:00
951fe0cb7d server/jobstate: add 'updated' to Finish variant
when a user updates a job schedule, we want to save that point in time
to calculate future runs; otherwise, when a user updates a schedule to
a time that would have been between the last run and 'now', the
schedule is triggered instantly

for example:
schedule 08:00
last run today 08:00
now it is 12:00

before this patch:
update schedule to 11:00
 -> triggered instantly since we calculate from 08:00

after this patch:
update schedule to 11:00
 -> triggered tomorrow 11:00 since we calculate from today 12:00

the change in the enum type is ok, since by default serde does not
error on unknown fields and the new field is optional

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-26 09:48:34 +02:00
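
A hedged sketch of why the optional field is backward compatible; the
type and field names below are illustrative, and it assumes the serde
(with derive) and serde_json crates:

  use serde::{Deserialize, Serialize};

  #[derive(Serialize, Deserialize, Debug)]
  #[serde(rename_all = "lowercase")]
  enum JobState {
      Created,
      Finish {
          endtime: i64,
          // new optional field: missing in old state files, defaults to None
          #[serde(skip_serializing_if = "Option::is_none")]
          updated: Option<i64>,
      },
  }

  fn main() -> Result<(), serde_json::Error> {
      // an old on-disk state without the new field still deserializes fine
      let old = r#"{"finish":{"endtime":1619424000}}"#;
      let state: JobState = serde_json::from_str(old)?;
      println!("{:?}", state);
      Ok(())
  }
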
4ca3f0c6ae api2/tape/backup: list backed up snapshots on failed backup notification
if a backup task failed (e.g. it was aborted), show the snapshots
which were successfully backed up in the notification

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-23 16:25:17 +02:00
69e5ba29c4 ui: tape: reload drive status on user actions
when the user starts an action that we know locks the drive,
reload the tape store so that the state is refreshed

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-23 16:25:17 +02:00
e045d154e9 file-restore: avoid unnecessary clone
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-23 13:22:30 +02:00
6526709d48 file-restore: add context to b64-decode error
to make the following cryptic error:

 proxmox-file-restore failed: Error: Invalid byte 46, offset 5.

more understandable:

 proxmox-file-restore failed: Error: Failed base64-decoding path '/root.pxar.didx' - Invalid byte 46, offset 5.

when a user passes in a non-base64 path but sets `--base64`.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-04-23 13:19:40 +02:00
603f80d813 bump version to 1.1.3-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-23 10:52:17 +02:00
398636b61c api2/node/status: extend node status
to be more on par with pve

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-23 10:30:30 +02:00
eb70464839 api2/nodes/status: use NodeStatus struct
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-23 10:30:30 +02:00
75054859ff api2/types: add necessary types for node status
we want to use concrete types instead of a generic Value

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-23 10:30:30 +02:00
8e898895cc tape: do not query density_code in SgTape::new()
Because this can fail with NoSense/MediumChanged and other informational
Sense codes.
2021-04-23 09:56:44 +02:00
4be6beab6f tape: format_media - implement special case for WORM media 2021-04-23 08:33:13 +02:00
a3b4b5b50e tape: define and use MediumType enum 2021-04-23 07:54:42 +02:00
33b8d7e5e8 tape: use loaded media_type in format_media (instead of drive_density)
Required to format LTO4 media loaded in an LTO5 drive.

Also contains some SCSI code cleanups.
2021-04-23 07:27:30 +02:00
f2f43e1904 server/rest: fix new type ambiguity
basically the same as commit eeff085d9d
Will be required once we use a newer rustc; at least the
client build for Arch Linux was broken because of this.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-22 21:24:44 +02:00
c002d48b0c bump version to 1.1.3-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-22 20:15:18 +02:00
15998ed12a file-restore: support encrypted VM backups
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-22 17:55:30 +02:00
9d8ab62769 client-tools: add crypto_parameters_keep_fd
same functionality as crypto_parameters, except it keeps the file
descriptor passed as "keyfd" open (and seeks to the beginning after
reading), if one is given.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-22 17:55:30 +02:00
3526a76ef3 file-restore: don't force PBS_FINGERPRINT env var
It is valid to not set it, in case the server has a valid certificate.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-22 17:55:30 +02:00
b9e0fcbdcd tape: implement report_desnity 2021-04-22 13:54:31 +02:00
a7188b3a75 tape: fix FORMAT for LTO-4 drives
FORMAT requires LTO-5 or newer, so we do a rewind/erase if FORMAT fails.
2021-04-22 11:44:49 +02:00
b6c06dce9d http proxy: implement read_connect_response()
Limit memory usage in case we get strange data from proxy.
2021-04-22 10:06:14 +02:00
4adf47b606 file-restore: allow extracting a full pxar archive
If the path within the archive is empty, assume "/" to extract all
of it.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-21 17:20:54 +02:00
4d0dc29951 file-restore: Add 'v' (Virtual) ArchiveEntry type
For the actual partitions and block devices in a backup, which the
user sees as folders in the file-restore UI.

Encoded as "None" to avoid cluttering DirEntryAttribute, where it
wouldn't make any sense to have it.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-21 17:19:40 +02:00
1011fb552b file-restore: print warnings on stderr
as we print JSON on stdout to be parsed

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-21 17:18:12 +02:00
2fd2d29281 file-restore: don't list non-pxar/-img *idx archives
These can't be entered or restored anyway, and cause issues with catalog
files for example.

Also a clippy fix.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-21 17:18:06 +02:00
9104152a83 HttpsConnector: add proxy support 2021-04-21 15:29:17 +02:00
02a58862dd HttpsConnector: code cleanup 2021-04-21 15:29:17 +02:00
26153589ba new http client implementation SimpleHttp (avoid static HTTP_CLIENT)
This one will have proxy support.
2021-04-21 15:29:17 +02:00
17b3e4451f MaybeTlsStream: implement poll_write_vectored()
This is just a performance optimization.
2021-04-21 15:29:17 +02:00
a2072cc346 http: rename EitherStream to MaybeTlsStream
And rename the enum values. Added an additional enum value called Proxied.

The enum is now more specialized, but we only use it for the HTTP client anyway.
2021-04-21 15:29:17 +02:00
fea23d0323 fix #3393: tools/xattr: allow xattr 'security.NTACL'
in some configurations, Samba stores NTFS-ACLs in this xattr[0], so
we should back it up (if we can)

although the 'security' namespace is special (e.g. in use by
SELinux, etc.), this value is normally only used by Samba and we
should be able to back it up.

to restore it, the user needs at least 'CAP_SYS_ADMIN' rights, otherwise
it cannot be set

0: https://www.samba.org/samba/docs/current/man-html/vfs_acl_xattr.8.html

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-21 14:49:46 +02:00
71e83e1b1f tape/changer/sg_pt_changer: read whole descriptor size for each entry
Some changers seem to append more data than we expect, but correctly
annotate that size in the subheader.

For each descriptor entry, read as much as the size given in the
subheader (or until the end of the reader), else our position in
the reader is wrong for the next entry, and we will parse
incorrect data.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-21 14:07:41 +02:00
28570d19a6 tape restore: avoid multiple stat calls for same chunk 2021-04-16 13:17:17 +02:00
1369bcdbba tape restore: verify if all chunks exist 2021-04-16 12:20:44 +02:00
5e4d81e957 tape restore: simplify log (list datastores on single line) 2021-04-16 11:35:05 +02:00
0f4721f305 tape restore: fix datastore locking 2021-04-16 09:09:05 +02:00
5547f90ba7 bump version to 1.1.2-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-15 13:26:59 +02:00
2e1b63fb25 backup verify: do not check every loop iteration for abort/shutdown
only check every 1024th iteration, which is cheaper than a modulo, as we
can just mask the 10 least-significant bits and check if the result
is zero.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-15 13:21:36 +02:00
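
A minimal sketch of the masking check described above; `check_abort`
is a placeholder for the real abort/shutdown test:

  fn check_abort() {
      // placeholder for the real abort/shutdown check
  }

  fn verify_loop(chunk_count: u64) {
      for pos in 0..chunk_count {
          // 1024 is a power of two, so masking the low 10 bits selects
          // every 1024th iteration without a division/modulo
          if (pos & 1023) == 0 {
              check_abort();
          }
          // ... verify the chunk at `pos` ...
      }
  }

  fn main() {
      verify_loop(4096);
  }
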
7b2d3a5fe9 backup verify: unify check if chunk can be skipped
This also re-checks the corrupt chunk list before actually loading a
chunk.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-15 13:21:07 +02:00
0216f56241 config: tfa: drop now unused schema::Updatable
was used in a macro expansion, now handled otherwise

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-15 12:35:11 +02:00
80acdd71fa tape: do not try to backup unfinished backups 2021-04-15 10:24:14 +02:00
26af61debc backup verify: re-check if we can skip a chunk in the actual verify loop
Fixes a non-negligible performance regression from commit
7f394c807b

While we skip known-verified chunks in the stat-and-inode-sort loop,
those are only the ones from previous indexes. If there's a repeated
chunk in one index, it would get re-verified more often than required.

So, add the check again explicitly to the read+verify loop.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-15 10:00:06 +02:00
e7f94010d3 cargo toml: update proxmox version
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-15 09:56:45 +02:00
a4e871f52c api2/access/user: remove password for @pbs users on removal
so that their password entry is not left in the shadow.json

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-15 08:33:20 +02:00
bc3072ef7a bump version to 1.1.1-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-14 14:50:41 +02:00
f4bb2510b9 docs: tape: replace changer overview screenshot
Replace previous screenshot with one that shows a more realistic amount
of drives.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-04-14 14:41:00 +02:00
2ab12cd0cb verify: add comment for inode sorting
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-14 14:39:24 +02:00
c894909e17 verify: partially rust fmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-14 14:39:24 +02:00
7f394c807b backup/verify: improve speed by sorting chunks by inode
before reading the chunks from disk in the order of the index file,
stat them first and sort them by inode number.

this can have a very positive impact on read speed on spinning disks,
even with the additional stat'ing of the chunks.

memory footprint should be tolerable, for 1_000_000 chunks
we need about ~16MiB of memory (Vec of 64bit position + 64bit inode)
(assuming 4MiB Chunks, such an index would reference 4TiB of data)

two small benchmarks (single spinner, ext4) here showed an improvement from
~430 seconds to ~330 seconds for a 32GiB fixed index
and from
~160 seconds to ~120 seconds for a 10GiB dynamic index

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-14 14:39:24 +02:00
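
An illustrative sketch of the stat-and-sort step on Unix; `chunk_paths`
stands in for resolving chunk digests to paths in the chunk store,
which the real code does differently:

  use std::os::unix::fs::MetadataExt;
  use std::path::Path;

  /// Stat each chunk and return the read order sorted by inode number,
  /// so a spinning disk reads them roughly in on-disk order.
  fn inode_sorted_order(chunk_paths: &[&Path]) -> std::io::Result<Vec<(usize, u64)>> {
      let mut order = Vec::with_capacity(chunk_paths.len());
      for (pos, path) in chunk_paths.iter().enumerate() {
          let inode = std::fs::metadata(path)?.ino();
          order.push((pos, inode));
      }
      order.sort_unstable_by_key(|&(_, inode)| inode);
      Ok(order)
  }

  fn main() -> std::io::Result<()> {
      let paths = [Path::new("/etc/hostname"), Path::new("/etc/hosts")];
      for (pos, inode) in inode_sorted_order(&paths)? {
          println!("index position {} -> inode {}", pos, inode);
      }
      Ok(())
  }
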
7afb98a912 docs: fix tape.cfg format description 2021-04-14 14:30:20 +02:00
3847008e1b docs: pmt - remove old linux driver options 2021-04-14 14:26:39 +02:00
f6ed2eff47 docs: move tape config/syntax to appendix 2021-04-14 14:25:10 +02:00
23eed6755a ui: tape/ChangerStatus: hide drive-slots without assigned drives
if a user has not configured a drive for a given drive slot of the
changer, simply hide that slot

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-14 14:24:07 +02:00
384a2b4d4f docs: faq: fix encryption link
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-14 14:18:05 +02:00
910177a388 docs: pve-integration: mention gui integration
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-14 14:18:05 +02:00
54311a38c6 docs: update proxmox-tape status output 2021-04-14 14:03:45 +02:00
983edbc54a ui: tape/ChangerStatus: add Format button to drivegrid
so that the user can also format an already inserted tape directly

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-14 13:53:16 +02:00
10439718e2 docs: use definition list for tape Terminology
To avoid those strange line breaks in Field list labels.
2021-04-14 13:25:29 +02:00
ebddccef5f docs: gui: add short tape backup section
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-14 12:47:05 +02:00
9cfe0ff350 docs: include tape backup in TOC
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-14 12:47:05 +02:00
295bae14b7 docs: tape: add a bit of text to changers for better image flow
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-14 12:47:05 +02:00
53939bb438 docs: tape screenshots
Add tape screenshots throughout the tape docs with some references to the GUI

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-04-14 12:47:05 +02:00
329c2cbe66 docs: reorder maintenance & network after client usage/tools
The idea is that people first need to make actual backups before they
need to do maintenance tasks.
Network is already set up when installing with the ISO or on top of
Debian, so that is not a priority either.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-14 12:47:05 +02:00
55334cf45a docs: use "Backup Storage" heading
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-14 12:47:05 +02:00
a2e30cd51d ui: tape: rename erase to format
erase is a different action, so correctly call it 'format'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-14 12:25:53 +02:00
4bf2ab1109 cleanup: remove debug println 2021-04-14 10:39:29 +02:00
1dd1c9eb5c api2/tape/restore: restore_chunk_archive: only ignore tape related errors
when we get an error from the tape, we possibly want to ignore it,
e.g. when the file was incomplete, but we still want to error
out if the error came from, e.g., the datastore, so we have to move
the error checking code to the 'next_chunk' call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-14 10:38:26 +02:00
6dde015f8c bump version to 1.1.0-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-13 14:42:50 +02:00
5f3b2330c8 docs: typo fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-13 14:42:50 +02:00
4ba5d3b3dd docs: mention client repository and rework client installation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-13 14:42:50 +02:00
e7e3d7360a docs: get help: actually link customer portal
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-13 14:42:50 +02:00
fd8b00aed7 docs: client: add "Backup" to "Repository Locations" heading to avoid confusion
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-13 14:42:50 +02:00
2631e57d20 fix regression tests 2021-04-13 14:02:37 +02:00
90461b76fb TapeRead: add skip_data() 2021-04-13 13:32:45 +02:00
629103d60c d/rules: update binary path for proxmox-backup-file-restore
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-13 12:18:29 +02:00
dc232b8946 d/control: update to track thiserror build dependency
was in Cargo.toml since commit
164ad7b706148b289d4ca67b87630cdc60f464cc

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-13 12:16:42 +02:00
6fed819dc2 proxmox-backup-file-restore: fix various postinst bugs/bad-behavior
1. The exit was never called, as `test ... || echo "foo" || exit 1`
   can never reach the exit, since echo will not fail

2. The echo was meant to be redirected to stderr (FD #2) but it was
   actually redirected to a file named '2'

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-13 12:06:10 +02:00
646fc7f086 ui: tape/DriveStatus: show that no tape is loaded in grid title
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-13 11:48:45 +02:00
ecc5602c88 ui: tape/DriveStatus: remove buffer-mode
since this will almost always be set to '1', it has no real
value to show to the user

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-13 11:47:37 +02:00
6a15cce540 tape: SgTapeReader::read_block - disable reading beyond EOF 2021-04-13 11:46:30 +02:00
f281b8d3a9 tape: cleanup MediaCatalog on tape reuse 2021-04-13 11:46:30 +02:00
4465b76812 d/control: bump versioned dependency for proxmox-widget-toolkit
to ensure we have the moved out file browser widget available

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-13 09:14:11 +02:00
61df02cda1 ui: file browser: adapt to abi changes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-13 09:06:30 +02:00
3b0321365b use FileBrowser from proxmox-widget-toolkit
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-13 08:44:48 +02:00
0dfce17a43 api/datastore: allow pxar file download of entire archive
Treat filepaths like "/root.pxar.didx" without a trailing slash as
wanting to download the entire archive content instead of erroring. The
zip-creation code already works fine for this scenario.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-13 08:26:41 +02:00
a38dccf0e8 ui: index: drop enableTapeUI
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-12 15:55:58 +02:00
f05085ab22 ui: nav tree: code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-12 15:55:58 +02:00
bc42bb3c6e ui: nav tree: make datastore-add button less special cased
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-12 15:55:58 +02:00
94b7f56e65 ui: nav tree: code cleanup and unification between datastore and tapes
datastore and tape entries are very similar, but differ in enough points
that a nice unification is not really that helpful; still, making the
similar key parts the same is nice when reading the code

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-12 15:55:58 +02:00
0417e9af1b tools/async_io: do not error on Accept for StaticIncoming
in proxmox-backup-proxy, we log and discard any errors on 'accept',
so that we can continue to serve requests

in proxmox-backup-api, we just have the StaticIncoming that accepts,
which will forward any errors from the underlying TcpListener

this patch also logs and discards the errors, like in the proxy.
Otherwise it could happen that if the api-daemon has more files open
than the proxy, it will shut itself down because of a
'too many open files' error if there are many open connections

(the service should also restart on exit, I think, but this is
a separate issue)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-12 15:43:13 +02:00
ce5327badc tape: fix regression tests 2021-04-12 14:08:05 +02:00
368f4c5416 fix gathering io stats for zpools
if a datastore or the root is not located directly on the pool dir
(e.g. the installer creates 2 sub-datasets ROOT/pbs-1), the info in
/proc/self/mountinfo returns not the pool, but the path to the
dataset, which has no iostats of its own in /proc/spl/kstat/zfs/;
only the pool itself has them

so instead of not gathering data at all, gather the info from the
underlying pool instead. if one has multiple datastores on the same
pool, those RRD stats will now be the same for all those datastores
(instead of empty), similar to 'normal' directories

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-12 13:35:38 +02:00
318b310638 tape: improve EOT error handling 2021-04-12 13:27:34 +02:00
164ad7b706 sgutils2: use thiserror to derive Error 2021-04-12 13:27:34 +02:00
a5322f3c50 buildsys: fix restore package names
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-12 12:52:19 +02:00
fa29d7eb49 ui: improve code-readability s/tapestore/tapeStore/
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-12 12:34:26 +02:00
a21f9852fd enable tape backup by default
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-12 12:31:56 +02:00
79e2473c63 d/control: file restore: only recommend proxmox-backup-restore-image
should not be a hard dependency, as one can use the file-restore tool
for pxar archives without it too

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-12 07:56:31 +02:00
375b1f6150 d/control: rename proxmox-file-restore to proxmox-backup-file-restore
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-12 07:56:31 +02:00
109ccd300f cleanup: move tape SCSI code to src/tape/drive/lto/sg_tape/ 2021-04-09 11:34:45 +02:00
c287b28725 buildsys: dinstall: only install server/client debs
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-09 10:09:15 +02:00
c560cfddca tape: read_drive_status - ignore media changed sense info 2021-04-09 09:46:19 +02:00
44f6bb019c sgutils2: implement scsi_request_sense() 2021-04-09 09:46:19 +02:00
d6d42702d1 bump version to 1.0.14-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-08 18:59:07 +02:00
3fafd0e2a1 d/postinst: check for old tape.cfg
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-08 18:59:07 +02:00
59648eac3d avoid extra separate upload to pbs repo
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-08 18:45:30 +02:00
5b6b5bba68 upload file restore package only to PVE for now
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-08 18:45:30 +02:00
b13089cdf5 file-restore: add 'extract' command for VM file restore
The data on the restore daemon is either encoded into a pxar archive, to
provide the most accurate data for local restore, or encoded directly
into a zip file (or written out unprocessed for files), depending on the
'pxar' argument to the 'extract' API call.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-08 14:43:41 +02:00
1f03196c0b tools/zip: add zip_directory helper
Encodes an entire local directory into an AsyncWrite recursively.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-08 14:32:03 +02:00
edf0940649 pxar/extract: add sequential variant of extract_sub_dir
extract_sub_dir_seq, together with seq_files_extractor, allow extracting
files from a pxar Decoder, along with the existing option for an
Accessor. To facilitate code re-use, some helper functions are extracted
in the process.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-08 14:24:23 +02:00
801ec1dbf9 file-restore(-daemon): implement list API
Allows listing files and directories on a block device snapshot.
Hierarchy displayed is:

/archive.img.fidx/bucket/component/<path>
e.g.
/drive-scsi0.img.fidx/part/2/etc/passwd
(corresponding to /etc/passwd on the second partition of drive-scsi0)

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-08 14:24:14 +02:00
34ac5cd889 debian/client: add postinst hook to rebuild file-restore initramfs
This will be triggered on updating proxmox-file-restore (via configure,
necessary since the daemon binary might change) and
proxmox-backup-restore-image (via 'activate-noawait', necessary since
the base image might change).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-08 14:20:05 +02:00
58421ec112 file-restore: add basic VM/block device support
Includes methods to start, stop and list QEMU file-restore VMs, as well
as CLI commands to do the latter two (start is implicit).

The implementation is abstracted behind the concept of a
"BlockRestoreDriver", so other methods can be implemented later (e.g.
mapping directly to loop devices on the host, using other hypervisors
than QEMU, etc...).

Starting VMs is currently unused but will be needed for further changes.

The design for the QEMU driver uses a locked 'map' file
(/run/proxmox-backup/$UID/restore-vm-map.json) containing a JSON
encoding of currently running VMs. VMs are addressed by a 'name', which
is a systemd-unit encoded combination of repository and snapshot string,
thus uniquely identifying it.

Note that currently you need to run proxmox-file-restore as root to use
this method of restoring.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-08 14:11:02 +02:00
a5bdc987dc add tools/cpio encoding module
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-08 14:10:45 +02:00
d32a8652bd file-restore-daemon: add disk module
Includes functionality for scanning and referring to partitions on
attached disks (i.e. snapshot images).

Fairly modular structure, so adding ZFS/LVM/etc... support in the future
should be easy.

The path is encoded as "/disk/bucket/component/path/to/file", e.g.
"/drive-scsi0/part/0/etc/passwd". See the comments for further
explanations on the design.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-08 14:03:54 +02:00
a26ebad5f9 file-restore-daemon: add watchdog module
Add a watchdog that will automatically shut down the VM after 10
minutes, if no API call is received.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-08 13:58:29 +02:00
dd9cef56fc file-restore-daemon: add binary with virtio-vsock API server
Implements the base of a small daemon to run within a file-restore VM.

The binary spawns an API server on a virtio-vsock socket, listening for
connections from the host. This happens mostly manually via the standard
Unix socket API, since tokio/hyper do not have support for vsock built
in. Once we have the accept'ed file descriptor, we can create a
UnixStream and use our tower service implementation for that.

The binary is deliberately not installed in the usual $PATH location,
since it shouldn't be executed on the host by a user anyway.

For now, only the API calls 'status' and 'stop' are implemented, to
demonstrate and test proxmox::api functionality.

Authorization is provided via a custom ApiAuth only checking a header
value against a static /ticket file.

Since the REST server implementation uses the log!() macro, we can
redirect its output to stdout by registering env_logger as the logging
target. env_logger is already in our dependency tree via zstd/bindgen.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-08 13:57:57 +02:00
26858dba84 server/rest: add ApiAuth trait to make user auth generic
This allows switching the base user identification/authentication method
in the rest server. Will initially be used for single file restore VMs,
where authentication is based on a ticket file, not the PBS user
backend (PAM/local).

To avoid putting generic types into the RestServer type for this, we
merge the two calls "extract_auth_data" and "check_auth" into a single
one, which can use whatever type it wants internally.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-08 13:57:57 +02:00
9fe3358ce6 file-restore: allow specifying output-format
Makes CLI use more comfortable by not just printing JSON to the
terminal.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-08 13:57:57 +02:00
76425d84b3 file-restore: add binary and basic commands
For now it only supports 'list' and 'extract' commands for 'pxar.didx'
files. This should be the foundation for a general file-restore
interface that is shared with block-level snapshots.

This is packaged as a separate .deb file, since for block level restore
it will need to depend on pve-qemu-kvm, which we want to separate from
proxmox-backup-client.

[original code for proxmox-file-restore.rs]
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>

[code cleanups/clippy, use helpers::list_dir_content/ArchiveEntry, no
/block subdir for .fidx files, separate binary and package]
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-08 13:57:57 +02:00
42355b11a4 update d/control
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-08 13:57:57 +02:00
511e4f6987 ui: tape/DriveStatus: improve status grid a bit
by using format_boolean for compression/write protect,
combining file/block position into one (saves a line)

and adding the missing alert-flags

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-08 13:56:46 +02:00
3f0e344bc1 ui: tape/ChangerStatus: hide selector for single drives in barcode-label
it is rather pointless to let the user select something where there
is no choice. We have to keep the window though, since the user may
want to choose a pool

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-08 13:56:46 +02:00
a316178768 ui: tape/ChangerStatus: shortcut Inventory for single drives
like 'load-media'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-08 13:56:46 +02:00
dff8ea92aa ui: tape/ChangerStatus: shortcut 'load-media' for single drive
if a changer only has a single drive, there is no point in showing
a window with a DriveSelector, just do what the user wanted.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-08 13:56:46 +02:00
88e1f7997c ui: tape/ChangerStatus: rework EraseWindow
to make it more like a 'dangerous' remove window
also works in the singleDrive logic to hide/show the driveselector

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-08 13:56:46 +02:00
4c3eabeaf3 ui: tape/ChangerStatus: save assigned drives
so that we can shortcut later if we only have one

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-08 13:56:46 +02:00
4c7be5f59d ui: tape/ChangerStatus: add missing property
it will actually not fail, but we declare it nonetheless to indicate
that it exists

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-08 13:56:46 +02:00
6d4fbbc3ea ui: dashobard/DataStoreStatistics: add 'Available' column
for some storages, it is valuable information, e.g. if one has datastores
on separate datasets of the same zpool

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-08 13:27:22 +02:00
1a23132262 tape: add TapeDensity::Unknown 2021-04-08 12:23:54 +02:00
48c4193f7c ui: update tape DriveStatus for new driver 2021-04-08 12:04:14 +02:00
8204d9b095 tape: avoid unnecessary SCSI request in Drop 2021-04-08 11:26:08 +02:00
fad95a334a tape: clear encryption key after backup (for security reasons) 2021-04-08 10:37:49 +02:00
973e985d73 cleanup: remove unused linux tape driver code 2021-04-08 10:15:52 +02:00
e5a13382b2 ui: tape/TapeRestore: use correct value check for store & mapping
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-08 10:02:31 +02:00
81c0b90447 ui: tape/TapeRestore: fix restoring without mapping
we have to delete the 'mapping' variable in any case since it's not
a valid api parameter

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-08 10:02:17 +02:00
ee9fa953de docs: Mention our new user space tape driver, adopt device path names 2021-04-08 09:50:09 +02:00
09acf0a70d do not depend on mt-st and mtx
We now use our own driver, so those tools are no longer required.
2021-04-08 09:48:47 +02:00
15d1435789 tape: add vendor, product and revision to LtoDriveAndMediaStatus 2021-04-08 08:34:46 +02:00
80ea23e1b9 tape: pmt - implement options command 2021-04-08 08:34:45 +02:00
5d6379f8db tape: implement locate_file without LOCATE(10) 2021-04-08 08:34:45 +02:00
566b946f9b tape: pmt - re-implement lock/unlock command 2021-04-08 07:28:30 +02:00
7f7459677d tape: pmt - re-implement fsr/bsr 2021-04-08 07:28:30 +02:00
0892a512bc tape: correctly set/display drive option 2021-04-08 07:28:30 +02:00
b717871d2a sgutils2: add scsi_mode_sense helper 2021-04-08 07:28:30 +02:00
7b11a8098d tape: make sure there is a filemark at the end of the tape 2021-04-08 07:28:30 +02:00
8b2c6f5dbc tape: make fsf/bsf driver specific
Because the virtual tape driver behaves different than LTO drives.
2021-04-08 07:28:30 +02:00
d26985a600 tape: fix LEOM handling 2021-04-08 07:28:30 +02:00
e29f456efc tape: implement format/erase 2021-04-08 07:28:30 +02:00
a79082a0dd tape: implement LTO userspace driver 2021-04-08 07:28:30 +02:00
1336ae8249 tape: introduce trait BlockWrite 2021-04-08 07:28:30 +02:00
0db5712493 tape: introduce trait BlockRead 2021-04-08 07:28:30 +02:00
c47609fedb server: rest: collapse nested if for less indentation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-07 17:57:46 +02:00
b84e8aaee9 server: rest: switch from fastest to default deflate compression level
I made some comparision with bombardier[0], the one listed here are
30s looped requests with two concurrent clients:

[ static download of ext-all.js ]:
  lvl                              avg /   stdev  / max
 none        1.98 MiB  100 %    5.17ms /  1.30ms / 32.38ms
 fastest   813.14 KiB   42 %   20.53ms /  2.85ms / 58.71ms
 default   626.35 KiB   30 %   39.70ms /  3.98ms / 85.47ms

[ deterministic (pre-defined data), but real API call ]:
  lvl                              avg /   stdev  / max
 none      129.09 KiB  100 %    2.70ms / 471.58us / 26.93ms
 fastest    42.12 KiB   33 %    3.47ms / 606.46us / 32.42ms
 default    34.82 KiB   27 %    4.28ms / 737.99us / 33.75ms

The reduction is quite better with default, but it's also slower, but
only when testing over unconstrained network. For real world
scenarios where compression actually matters, e.g., when using a
spotty train connection, we will be faster again with better
compression.

A GPRS limited connection (Firefox developer console) requires the
following load (until the DOMContentLoaded event triggered) times:
  lvl        t      x faster
 none      9m 18.6s   x 1.0
 fastest   3m 20.0s   x 2.8
 default   2m 30.0s   x 3.7

So for the worst case, using slightly more CPU time on the server has a
tremendous effect on the client load time.

Using a more realistic example and limiting to "Good 2G" gives:

 none      1m  1.8s   x 1.0
 fastest      22.6s   x 2.7
 default      16.6s   x 3.7

16s is somewhat OK, >1m just isn't...

So, use the default level to ensure we get bearable load times on
clients, and if we want to improve transmission size AND speed then
we could always use an in-memory cache; only a few MiB would be
required for the compressible static files we serve.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-07 17:57:42 +02:00
d84e4073af tools/zip: compress zips with deflate
by using our DeflateEncoder

for this to work, we have to create a wrapper reader that generates the crc32
checksum while reading.

also we need to put the target writer in an Option, so that we can take
it out of self and move it into the DeflateEncoder while writing
compressed

we can drop the internal buffer then, since that is managed by the
deflate encoder now

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-07 12:34:31 +02:00
e8656da70d tools/zip: run rustfmt
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-07 12:34:31 +02:00
59477ad252 server/rest: compress static files
compress them on the fly
and refactor the size limit for chunking files

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-07 12:34:31 +02:00
2f29f1c765 server/rest: compress api calls
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-07 12:34:31 +02:00
4d84e869bf server/rest: add helper to extract compression headers
for now we only extract 'deflate'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-07 12:34:31 +02:00
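
A minimal sketch of picking 'deflate' out of an Accept-Encoding header
value; the real helper works on hyper request headers and may differ:

  /// Returns true if the client's Accept-Encoding value lists 'deflate'.
  fn accepts_deflate(accept_encoding: &str) -> bool {
      accept_encoding
          .split(',')
          .map(|entry| entry.split(';').next().unwrap_or("").trim())
          .any(|encoding| encoding.eq_ignore_ascii_case("deflate"))
  }

  fn main() {
      assert!(accepts_deflate("gzip, deflate, br"));
      assert!(!accepts_deflate("gzip, br"));
  }
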
79d841014e tools/compression: add DeflateEncoder and helpers
implements a deflate encoder that can compress anything implementing
AsyncRead + Unpin into a file with the helper 'compress'

if the inner type is a Stream, it implements Stream itself; this way
some streaming data can be streamed out compressed

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-07 12:34:31 +02:00
ea62611d8e tools: add compression module
only contains a basic enum for the different compression methods

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-07 12:34:31 +02:00
f3c867a034 docs: fix MathJax inclusion in build
Else it does not get picked up on release builds...

Also, the mathjax path option affects HTML, not EPUB, so move it to the
correct section in conf.py

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-05 11:59:45 +02:00
aae5db916e docs: fix heading
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-05 11:59:18 +02:00
a417c8a93e bump version to 1.0.13-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-02 15:32:27 +02:00
79e58a903e pxar: handle missing GROUP_OBJ ACL entries
Previously we did not store GROUP_OBJ ACL entries for
directories; this means that these were lost, which may
potentially elevate group permissions if they were masked
before via ACLs, so we also show a warning.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-02 11:10:20 +02:00
9f40e09d0a pxar: fix directory ACL entry creation
Don't override `group_obj` with `None` when handling
`ACL_TYPE_DEFAULT` entries for directories.

Reproducer: /var/log/journal ends up without a `MASK` type
entry making it invalid as it has `USER` and `GROUP`
entries.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-02 10:22:04 +02:00
553e57f914 server/rest: drop now unused imports
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-01 11:53:13 +02:00
2200a38671 code cleanup: drop extra newlines at EOF
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-01 11:27:07 +02:00
ba39ab20fb server/rest: extract auth to separate module
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:26:28 +02:00
ff8945fd2f proxmox_client_tools: move common key related functions to key_source.rs
Add a new module containing key-related functions and schemata from all
over; the moved code is changed as little as possible.

Requires adapting some 'use' statements across proxmox-backup-client and
putting the XDG helpers quite cozily into proxmox_client_tools/mod.rs

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:09:28 +02:00
4876393562 vsock_client: support authorization header
Pass in an optional auth tag, which will be passed as an Authorization
header on every subsequent call.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:09:28 +02:00
971bc6f94b vsock_client: remove some &mut restrictions and rustfmt
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:09:28 +02:00
cab92acb3c vsock_client: remove wrong comment
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:09:28 +02:00
a1d90719e4 bump pxar dep to 0.10.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-03-31 14:00:20 +02:00
eeff085d9d server/rest: fix type ambiguity
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-29 12:02:30 +02:00
d43c407a00 server/rest: rust format
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-29 08:17:26 +02:00
6bc87d3952 ui: verification job: fix subject of edit window
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 16:57:00 +02:00
04c1c68f31 ui: verify job: fix subject of edit window
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 16:45:45 +02:00
94b17c804a ui: task descriptions: sort alphabetically
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 16:45:23 +02:00
94352256b7 ui: task descriptions: fix casing
Enforce title-case. Affects mostly the new tape related task
description.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:50:51 +02:00
b3bed7e41f docs: tape/pool: add backend/ui setting name for allocation policy
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:40:23 +02:00
a4672dd0b1 ui: tape/pool: set onlineHelp for edit/add window
To let users find the good explanation about allocation and retention
policies from the docs easier.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:29:02 +02:00
17bbcb57d7 ui: tape: retention/allocation are Policies, note so
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:28:36 +02:00
843146479a ui: gettext; s/blocksize/block size/
Blocksize is not a word in the English language

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-28 13:04:19 +02:00
cf1e117fc7 sgutils2: use enum for ScsiError
This avoids string allocation when we return SenseInfo.
2021-03-27 15:57:48 +01:00
03eac20b87 SgRaw: add do_in_command() 2021-03-27 15:38:08 +01:00
11f5d59396 tape: page-align BlockHeader so that we can use it with SG_IO 2021-03-27 15:36:35 +01:00
6f63c29306 Cargo.toml: fix: set version to 1.0.12 2021-03-26 14:14:12 +01:00
c0e365fd49 bump version to 1.0.12-1 2021-03-26 14:09:30 +01:00
93fb2e0d21 api2/types: add type_text to DATASTORE_MAP_FORMAT
This way we get a better rendering in the api-viewer.
before:
 [<string>, ... ]

after:
 [(<source>=)?<target>, ... ]

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 13:18:10 +01:00
c553407e98 tape: add --scan option for catalog restore 2021-03-25 13:08:34 +01:00
4830de408b tape: avoid writing catalogs for empty backup tasks 2021-03-25 12:50:40 +01:00
7f78528308 OnlineHelpInfo.js: new link for client-repository 2021-03-25 12:26:57 +01:00
2843ba9017 avoid compiler warning 2021-03-25 12:25:23 +01:00
e244b9d03d api2/types: expand DATASTORE_MAP_LIST_SCHEMA description
and give an example

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 12:18:14 +01:00
657c47db35 tape: ui: TapeRestore: make datastore mapping selectable
by adding a custom field (grid) where the user can select
a target datastore for each source datastore on tape

if we have not loaded the content of the media set yet,
we have to load it on window open to get the list of datastores
on the tape

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 12:17:46 +01:00
a32bb86df9 api subscription: drop old hack for api-macro issue
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-25 12:03:33 +01:00
654c56e05d docs: client benchmark: note that tls is only done if repo is set
and remove misleading note about no network involved in tls
speedtest, as normally there is!

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-25 10:33:45 +01:00
589c4dad9e tape: add fsf/bsf to TapeDriver trait 2021-03-25 10:10:16 +01:00
0320deb0a9 proxmox-tape: fix clean api call
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 08:14:13 +01:00
4c4e5c2b1e api2/tape/restore: enable restore mapping of datastores
by changing the 'store' parameter of the restore api call to a
list of mappings (or a single default datastore)

for example giving:
a=b,c=d,e

would restore
datastore 'a' from tape to local datastore 'b'
datastore 'c' from tape to local datastore 'd'
all other datastores to 'e'

this way, a single datastore can also be restored on its own, by
giving only a single mapping, e.g. 'a=b'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-25 07:46:12 +01:00
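
For illustration, a minimal sketch of how a mapping argument like 'a=b,c=d,e' could be split into explicit source->target pairs plus an optional default target; the helper name and error handling below are made up and do not reflect the actual implementation:

use std::collections::HashMap;

/// Parse "a=b,c=d,e" into explicit source->target pairs plus an optional
/// default target for all datastores without an explicit mapping.
/// (Hypothetical helper, for illustration only.)
fn parse_store_mapping(arg: &str) -> Result<(HashMap<String, String>, Option<String>), String> {
    let mut mapping = HashMap::new();
    let mut default_target = None;

    for part in arg.split(',').map(str::trim).filter(|p| !p.is_empty()) {
        match part.split_once('=') {
            Some((source, target)) => {
                mapping.insert(source.to_string(), target.to_string());
            }
            None => {
                // a bare name acts as the default target; only one is allowed
                if default_target.replace(part.to_string()).is_some() {
                    return Err(format!("more than one default target given: '{}'", part));
                }
            }
        }
    }

    Ok((mapping, default_target))
}

Calling it with "a=b,c=d,e" would yield the pairs a->b and c->d, with 'e' as the fallback target for everything else.
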
924373d2df client/backup_writer: clarify backup and upload size
The text 'had to upload [KMG]iB' implies that this is the size we
actually had to send to the server, while in reality it is the
raw data size before compression.

Count the size of the compressed chunks and print it separately.
Split the average speed onto its own line so the lines do not get too long.

Rename 'uploaded' to 'size_dirty' and 'vsize_h' to 'size'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-24 18:24:56 +01:00
3b60b5098f client/backup_writer: introduce UploadStats struct
instead of using a big anonymous tuple. This way the returned values
are properly named.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-24 18:24:56 +01:00
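
As a hedged sketch, the named struct replacing the tuple might look like this (field names are illustrative, not necessarily the real ones):

/// Named upload statistics instead of a large anonymous tuple.
/// (Sketch only; the actual struct in the code base may differ.)
struct UploadStats {
    chunk_count: usize,
    chunk_reused: usize,
    size: u64,            // raw, uncompressed data size
    size_reused: u64,
    size_compressed: u64, // what was actually sent over the wire
    duration: std::time::Duration,
}
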
4abb3edd9f docs: fix horizontal scrolling issues on desktop and mobile
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-24 18:24:39 +01:00
932e69a837 docs: improve navigation coloring on mobile
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-24 18:24:20 +01:00
ef6d49670b client: backup writer: run rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-24 17:12:05 +01:00
52ea00e9df docs: only apply toctree color override to sidebar one
else the TOC on the index page has some white text on a white
background

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-24 17:09:30 +01:00
870681013a tape: fix catalog restore
We need to rewind the tape if fast_catalog_restore() fails ...
2021-03-24 10:09:23 +01:00
c046739461 tape: fix MediaPool regression tests 2021-03-24 09:44:30 +01:00
8b1289f3e4 tape: skip catalog archives in restore 2021-03-24 09:33:39 +01:00
f1d76ecf6c fix #3359: fix blocking writes in async code during pxar create
in commit `asyncify pxar create_archive`, we changed from a
separate thread for creating a pxar to using async code, but the
StdChannelWriter used for both pxar and catalog can block, which
may block the tokio runtime for single (and probably dual) core
environments

this patch adds a wrapper struct for any writer that implements
'std::io::Write' and wraps the write calls with 'block_in_place'
so that if called in a tokio runtime, it knows that this code
potentially blocks

Fixes: 6afb60abf5 ("asyncify pxar create_archive")

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-24 09:00:07 +01:00
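
A simplified sketch of such a wrapper, assuming a multi-threaded tokio runtime; the type name here is made up:

use std::io::{Result, Write};

/// Wraps a blocking `std::io::Write` so each call is marked as potentially
/// blocking; tokio can then move other tasks off the current worker thread.
struct BlockingWriterWrapper<W: Write>(W);

impl<W: Write> Write for BlockingWriterWrapper<W> {
    fn write(&mut self, buf: &[u8]) -> Result<usize> {
        tokio::task::block_in_place(|| self.0.write(buf))
    }

    fn flush(&mut self) -> Result<()> {
        tokio::task::block_in_place(|| self.0.flush())
    }
}
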
074503f288 tape: implement fast catalog restore 2021-03-24 08:40:34 +01:00
c6f55139f8 tape: impl. MediaCatalog::parse_catalog_header
This is just an optimization, avoiding reading the catalog into memory.

We also expose create_temporary_database_file() now (will be
used for catalog restore).
2021-03-24 06:32:59 +01:00
20cc25d749 tape: add TapeDriver::move_to_last_file 2021-03-24 06:32:59 +01:00
30316192b3 tape: improve locking (lock media-sets)
- new helper: lock_media_set()

- MediaPool: lock media set

- Expose Inventory::new() to avoid double loading

- do not lock pool on restore (only lock media-set)

- change pool lock name to ".pool-{name}"
2021-03-24 06:32:59 +01:00
e93263be1e tape: implement MediaCatalog::destroy_unrelated_catalog() helper 2021-03-22 12:03:11 +01:00
2ab2ca9c24 tape: add MediaPool::lock_unassigned_media_pool() helper 2021-03-19 10:13:38 +01:00
54fcb7f5d8 api2/tape/backup: wait indefinitely for lock in scheduled backup jobs
so that a user can schedule multiple backup jobs onto a single
media pool without having to consider timing them apart

this makes sense since we can back up multiple datastores onto
the same media-set but can only specify one datastore per backup job

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 09:04:32 +01:00
4abd4dbe38 api2/tape/backup: include a summary on notification e-mails
for now only contains the list of included snapshots (if any),
as well as the backup duration

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 09:03:52 +01:00
eac1beef3c tape: cleanup PoolWriter - factor out common code 2021-03-19 08:56:14 +01:00
166a48f903 tape: cleanup - split PoolWriter into several files 2021-03-19 08:19:13 +01:00
82775c4764 tape: make sure we only commit/write valid catalogs 2021-03-19 07:50:32 +01:00
88bc9635aa tape: store media_uuid in PoolWriterState
This is mainly a cleanup, avoiding accessing the catalog_set to get the uuid.
2021-03-19 07:33:59 +01:00
1037f2bc2d tape: cleanup - rename CatalogBuilder to CatalogSet 2021-03-19 07:22:54 +01:00
f24cbee77d server/email_notifications: do not double html escape
the default escape handler is handlebars::html_escape, but these are
plain text emails and we manually escape them for the html part, so
set the default escape handler to 'no_escape'

this avoids double html escaping of the characters '&"<>' in emails

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 07:02:49 +01:00
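
In handlebars-rust terms, the change boils down to roughly the following sketch (the function name is illustrative):

use handlebars::{no_escape, Handlebars};

fn new_mail_template_registry() -> Handlebars<'static> {
    let mut hb = Handlebars::new();
    // These templates render plain-text mails; the HTML part is escaped
    // manually elsewhere, so drop the default html_escape handler to avoid
    // double-escaping of '&', '"', '<' and '>'.
    hb.register_escape_fn(no_escape);
    hb
}
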
25b4d52dce server/email_notifications: do not panic on template registration
instead, print an error and continue; the rendering functions will error
out if one of the templates could not be registered

if we `.unwrap()` here, it can lead to problems if the templates are
not correct, i.e. we could panic while holding a lock if something holds
a mutex while this is called for the first time

add a test to catch registration issues during package build

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 07:02:17 +01:00
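
A hedged sketch of the resulting pattern; the template name and content below are placeholders:

use handlebars::Handlebars;

fn register_templates(hb: &mut Handlebars<'_>) {
    // Log registration errors instead of calling .unwrap(): a broken template
    // must not panic here (possibly while a lock is held); rendering will
    // simply fail later with a proper error message.
    if let Err(err) = hb.register_template_string(
        "gc_ok_template",
        "Garbage collection on {{datastore}} finished",
    ) {
        eprintln!("failed to register template 'gc_ok_template': {}", err);
    }
}
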
2729d134bd tools/systemd/time: implement some Traits for TimeSpan
namely
* From<Duration> (to convert easily from duration to timespan)
* Display (for better formatting)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-19 07:00:55 +01:00
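
Roughly what those trait implementations look like, using a simplified stand-in for the real TimeSpan type:

use std::fmt;
use std::time::Duration;

/// Simplified stand-in for the real TimeSpan type.
struct TimeSpan {
    hours: u64,
    minutes: u64,
    seconds: u64,
}

impl From<Duration> for TimeSpan {
    fn from(d: Duration) -> Self {
        let total = d.as_secs();
        Self {
            hours: total / 3600,
            minutes: (total % 3600) / 60,
            seconds: total % 60,
        }
    }
}

impl fmt::Display for TimeSpan {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}h {}m {}s", self.hours, self.minutes, self.seconds)
    }
}

With that, a duration can be printed as, e.g., format!("{}", TimeSpan::from(duration)).
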
32b75d36a8 tape: backup media catalogs 2021-03-19 06:58:46 +01:00
c4430a937d bump version to 1.0.11-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-18 12:36:28 +01:00
237314ad0d tape: improve catalog consistency checks
Try to check if we read the correct catalog by verifying uuid, media_set_uuid
and seq_nr.

Note: this changes the catalog format again.
2021-03-18 08:43:55 +01:00
caf76ec592 tools/subscription: ignore ENOENT for apt auth config removal
deleting a nonexistent file is hardly an error worth mentioning

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-17 20:12:58 +01:00
0af8c26b74 ui: tape/BackupOverview: insert a datastore level
since we can now back up multiple datastores in the same media-set,
we show the datastores as the first level below it

the final tree structure looks like this:

tapepool A
- media set 1
 - datastore I
  - tape x
   - ct/100
    - ct/100/2020-01-01T00:00:00Z

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-17 13:37:49 +01:00
825dfe7e0d ui: tape/DriveStatus: fix updating pointer+click handler on info widget
we can only do this after it is rendered; the element does not exist
before that

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-17 13:37:39 +01:00
30a0809553 ui: tape/DriveStatus: add erase button
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-17 13:37:17 +01:00
6ee3035523 tape: define magic number for catalog archives 2021-03-17 13:35:23 +01:00
b627ebbf40 tape: improve catalog parser 2021-03-17 11:29:23 +01:00
ef4bdf6b8b tape: proxmox-tape media content - add 'store' attribute 2021-03-17 11:17:54 +01:00
54722acada tape: store datastore name in tape archives and media catalog
So that we can store multiple datastores on a single media set.
Deduplication is now per datastore (not per media set).
2021-03-17 11:08:51 +01:00
0e2bf3aa1d SnapshotReader: add self.datastore_name() helper 2021-03-17 10:16:34 +01:00
365126efa9 tape: PoolWriter - remove unnecessary move_to_eom 2021-03-17 10:16:34 +01:00
03d4c9217d update OnlineHelpInfo.js 2021-03-17 10:16:34 +01:00
8498290848 docs: technically not everything is in rust/js
I mean the whole distro uses quite some C and the like as base, so
avoid being overly strict here.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-16 20:37:16 +01:00
654db565cb docs: features: mention that there are no client/data limits
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-16 20:37:16 +01:00
51f83548ed docs: drop uncommon spelled out GCM
It does not help users if that is spelled out, and it's not a common
use of GCM; especially in the AES-256 context it's clear what is
meant. The link to Wikipedia stays, so interested people can still
read up on it and others get a better overview due to the text being
more concise.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-16 20:37:16 +01:00
5847a6bdb5 docs: fix linking, avoid over long text in main feature list
The main feature list should provide a short overview of the, well,
main features. While enterprise support *is* a main and important
feature, it's not the place here to describe things like personal
volume/ngo/... offers and the like.

Move parts of it to getting help, which lacked mentioning the
enterprise support too and is a good place to describe the customer
portal.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-16 20:37:16 +01:00
313e5e2047 docs: mention support subscription plans
and change enterprise repository section to present tense.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-03-16 19:24:23 +01:00
7914e62b10 tools/zip: only add zip64 field when necessary
if neither offset nor size exceeds 32bit, do not add the
zip64 extension field

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 09:13:39 +01:00
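
Conceptually, the check is the following; constant and field names are illustrative, not taken from the actual zip writer:

/// Only emit a zip64 extra field when the 32 bit fields of a regular ZIP
/// entry cannot hold the real values; 0xFFFFFFFF is the spec's marker that
/// the true value lives in the zip64 field.
fn needs_zip64(uncompressed_size: u64, compressed_size: u64, local_header_offset: u64) -> bool {
    let limit = u32::MAX as u64;
    uncompressed_size >= limit || compressed_size >= limit || local_header_offset >= limit
}
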
84d3284609 ui: tape/DriveStatus: open task window on click on state
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 09:00:07 +01:00
70fab5b46e ui: tape: convert slot selection on transfer to combogrid
this is much handier than a number field, and the user can instantly
see which one is an import/export slot

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 08:57:48 +01:00
e36135031d ui: tape/Restore: let the user choose an owner
so that the tape backup can be restored as any user, given that
the currently logged-in user has the correct permission.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 08:55:42 +01:00
5a5ee0326e proxmox-tape: add missing notify-user to 'proxmox-tape restore'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 08:54:38 +01:00
776dabfb2e tape: use MB/s for backup speed (to match drive speed specification) 2021-03-16 08:51:49 +01:00
5c4755ad08 tape: speedup backup by doing read/write in parallel 2021-03-16 08:51:49 +01:00
7c1666289d tools/zip: add missing start_disk field for zip64 extension
it is not optional, even though we give the size explicitly

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-15 12:36:40 +01:00
cded320e92 backup info: run rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-14 19:18:35 +01:00
b31cdec225 update to pxar 0.10
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-03-12 10:48:09 +01:00
591b120d35 fix feature flag logic in pxar create
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-03-12 10:17:51 +01:00
e8913fea12 tape: write_chunk_archive - do not consume partially written chunk at EOT
So that it is re-written to the next tape.
2021-03-12 07:14:50 +01:00
355a41a763 bump version to 1.0.10-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-11 13:40:22 +01:00
5bd4825432 d/postinst: fixup tape permissions if existing with wrong permissions
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-11 13:40:22 +01:00
8f7e5b028a d/postinst: only check for broken task index on upgrade
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-11 13:40:22 +01:00
2a29d9a1ee d/postinst: tell user that we restart when updating from older version 2021-03-11 13:40:00 +01:00
e056966bc7 d/postinst: restart when updating from older version
Else one has quite a terrible UX when installing from the 1.0 ISO and
then upgrading to the latest release.

See commit 0ec79339f7 for the fix and some other details

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-11 09:56:12 +01:00
ef0ea4ba05 server/worker_task: improve endtime for unknown tasks
instead of always using the start time, use the last timestamp from the log;
this way, one can see when the task was aborted without having to read
the whole log

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-11 09:56:12 +01:00
2892624783 tape/send_load_media_email: move to server/email_notifications
and reuse 'send_job_status_mail' there so that we get consistently
formatted mails from pbs (e.g. html part and author)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-11 09:56:12 +01:00
2c10410b0d tape: improve backup task log 2021-03-11 08:43:13 +01:00
d1d74c4367 typo fixes all over the place
found and semi-manually replaced by using:
 codespell -L mut -L crate -i 3 -w

Mostly in comments, but also in email notifications and two occurrences
of a misspelled 'reserved' struct member, which were not used, so
cargo build did not complain about the change.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-10 16:39:57 +01:00
8b7f3b8f1d ui: fix typo in comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-10 15:24:39 +01:00
3f6c2efb8d ui: fix typo in options
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2021-03-10 15:23:17 +01:00
227f36497a d/postinst: fix typo in comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-10 15:22:52 +01:00
5ef4c7bcd3 tape: fix scsi volume_statistics and cartridge_memory for quantum drives 2021-03-10 14:13:48 +01:00
70d00e0149 tape: documentation language fixup
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-03-10 11:05:30 +01:00
dcf155dac9 ui: tape: increase tapestore interval
from 2 to 60 seconds. To retain the responsiveness of the GUI
when adding/editing/removing, trigger a manual reload on these actions

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-10 11:00:10 +01:00
3c5b523631 ui: NavigationTree: do not modify list while iterating
iterating over a node interface's children while removing them
will lead to 'child' being undefined

instead, collect the children to remove in a separate list
and iterate over that

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-10 10:59:59 +01:00
6396bace3d tape: improve backup task log (show percentage) 2021-03-10 10:59:13 +01:00
713a128adf tape: improve backup task log format 2021-03-10 09:54:51 +01:00
affc224aca tape: read_tape_mam - pass correct allocation len 2021-03-10 09:24:38 +01:00
6f82d32977 tape: cleanup - remove wrong inline comment 2021-03-10 08:11:51 +01:00
2a06e08618 api2/tape/backup: continue on vanishing snapshots
when we do a prune during a tape backup, do not cancel the tape backup,
but continue with a warning

the task still fails and prompts the user to check the log

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-09 10:20:54 +01:00
1057b1f5a5 tape: lock artificial "__UNASSIGNED__" pool to avoid races 2021-03-09 10:00:26 +01:00
af76234112 tape: improve MediaPool allocation by sorting tapes by ctime and label_text 2021-03-09 08:33:21 +01:00
277 changed files with 19395 additions and 6082 deletions

View File

@ -1,6 +1,6 @@
[package]
name = "proxmox-backup"
version = "1.0.9"
version = "1.1.14"
authors = [
"Dietmar Maurer <dietmar@proxmox.com>",
"Dominik Csapak <d.csapak@proxmox.com>",
@ -15,6 +15,7 @@ edition = "2018"
license = "AGPL-3"
description = "Proxmox Backup"
homepage = "https://www.proxmox.com"
build = "build.rs"
exclude = [ "build", "debian", "tests/catar_data/test_symlink/symlink1"]
@ -29,7 +30,11 @@ bitflags = "1.2.1"
bytes = "1.0"
crc32fast = "1"
endian_trait = { version = "0.6", features = ["arrays"] }
env_logger = "0.7"
flate2 = "1.0"
anyhow = "1.0"
foreign-types = "0.3"
thiserror = "1.0"
futures = "0.3"
h2 = { version = "0.3", features = [ "stream" ] }
handlebars = "3.0"
@ -47,23 +52,16 @@ pam-sys = "0.5"
percent-encoding = "2.1"
pin-utils = "0.1.0"
pin-project = "1.0"
pathpatterns = "0.1.2"
proxmox = { version = "0.11.0", features = [ "sortable-macro", "api-macro", "websocket" ] }
#proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox-fuse = "0.1.1"
pxar = { version = "0.9.0", features = [ "tokio-io" ] }
#pxar = { path = "../pxar", features = [ "tokio-io" ] }
regex = "1.2"
rustyline = "7"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
siphasher = "0.3"
syslog = "4.0"
tokio = { version = "1.0", features = [ "fs", "io-util", "macros", "net", "parking_lot", "process", "rt", "rt-multi-thread", "signal", "time" ] }
tokio = { version = "1.6", features = [ "fs", "io-util", "io-std", "macros", "net", "parking_lot", "process", "rt", "rt-multi-thread", "signal", "time" ] }
tokio-openssl = "0.6.1"
tokio-stream = "0.1.0"
tokio-util = { version = "0.6", features = [ "codec" ] }
tokio-util = { version = "0.6", features = [ "codec", "io" ] }
tower-service = "0.3.0"
udev = ">= 0.3, <0.5"
url = "2.1"
@ -75,6 +73,21 @@ zstd = { version = "0.4", features = [ "bindgen" ] }
nom = "5.1"
crossbeam-channel = "0.5"
pathpatterns = "0.1.2"
pxar = { version = "0.10.1", features = [ "tokio-io" ] }
proxmox = { version = "0.11.6", features = [ "sortable-macro", "api-macro", "cli", "router", "tfa" ] }
proxmox-acme-rs = "0.3"
proxmox-fuse = "0.1.1"
proxmox-http = { version = "0.2.1", features = [ "client", "http-helpers", "websocket" ] }
# Local path overrides
# NOTE: You must run `cargo update` after changing this for it to take effect!
[patch.crates-io]
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "cli", "router", "tfa" ] }
#proxmox-http = { path = "../proxmox/proxmox-http", features = [ "client", "http-helpers", "websocket" ] }
#pxar = { path = "../pxar", features = [ "tokio-io" ] }
[features]
default = []
#valgrind = ["valgrind_request"]

View File

@ -9,6 +9,7 @@ SUBDIRS := etc www docs
# Binaries usable by users
USR_BIN := \
proxmox-backup-client \
proxmox-file-restore \
pxar \
proxmox-tape \
pmtx \
@ -25,6 +26,10 @@ SERVICE_BIN := \
proxmox-backup-proxy \
proxmox-daily-update
# Single file restore daemon
RESTORE_BIN := \
proxmox-restore-daemon
ifeq ($(BUILD_MODE), release)
CARGO_BUILD_ARGS += --release
COMPILEDIR := target/release
@ -39,7 +44,7 @@ endif
CARGO ?= cargo
COMPILED_BINS := \
$(addprefix $(COMPILEDIR)/,$(USR_BIN) $(USR_SBIN) $(SERVICE_BIN))
$(addprefix $(COMPILEDIR)/,$(USR_BIN) $(USR_SBIN) $(SERVICE_BIN) $(RESTORE_BIN))
export DEB_VERSION DEB_VERSION_UPSTREAM
@ -47,9 +52,12 @@ SERVER_DEB=${PACKAGE}-server_${DEB_VERSION}_${ARCH}.deb
SERVER_DBG_DEB=${PACKAGE}-server-dbgsym_${DEB_VERSION}_${ARCH}.deb
CLIENT_DEB=${PACKAGE}-client_${DEB_VERSION}_${ARCH}.deb
CLIENT_DBG_DEB=${PACKAGE}-client-dbgsym_${DEB_VERSION}_${ARCH}.deb
RESTORE_DEB=proxmox-backup-file-restore_${DEB_VERSION}_${ARCH}.deb
RESTORE_DBG_DEB=proxmox-backup-file-restore-dbgsym_${DEB_VERSION}_${ARCH}.deb
DOC_DEB=${PACKAGE}-docs_${DEB_VERSION}_all.deb
DEBS=${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB}
DEBS=${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB} \
${RESTORE_DEB} ${RESTORE_DBG_DEB}
DSC = rust-${PACKAGE}_${DEB_VERSION}.dsc
@ -74,7 +82,13 @@ doc:
build:
rm -rf build
rm -f debian/control
debcargo package --config debian/debcargo.toml --changelog-ready --no-overlay-write-back --directory build proxmox-backup $(shell dpkg-parsechangelog -l debian/changelog -SVersion | sed -e 's/-.*//')
debcargo package \
--config debian/debcargo.toml \
--changelog-ready \
--no-overlay-write-back \
--directory build \
proxmox-backup \
$(shell dpkg-parsechangelog -l debian/changelog -SVersion | sed -e 's/-.*//')
sed -e '1,/^$$/ ! d' build/debian/control > build/debian/control.src
cat build/debian/control.src build/debian/control.in > build/debian/control
rm build/debian/control.in build/debian/control.src
@ -99,7 +113,9 @@ deb: build
lintian $(DEBS)
.PHONY: deb-all
deb-all: $(DOC_DEB) $(DEBS)
deb-all: build
cd build; dpkg-buildpackage -b -us -uc --no-pre-clean
lintian $(DEBS) $(DOC_DEB)
.PHONY: dsc
dsc: $(DSC)
@ -117,8 +133,8 @@ clean:
find . -name '*~' -exec rm {} ';'
.PHONY: dinstall
dinstall: ${DEBS}
dpkg -i ${DEBS}
dinstall: ${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB}
dpkg -i $^
# make sure we build binaries before docs
docs: cargo-build
@ -144,6 +160,9 @@ install: $(COMPILED_BINS)
install -m755 $(COMPILEDIR)/$(i) $(DESTDIR)$(SBINDIR)/ ; \
install -m644 zsh-completions/_$(i) $(DESTDIR)$(ZSH_COMPL_DEST)/ ;)
install -dm755 $(DESTDIR)$(LIBEXECDIR)/proxmox-backup
install -dm755 $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/file-restore
$(foreach i,$(RESTORE_BIN), \
install -m755 $(COMPILEDIR)/$(i) $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/file-restore/ ;)
# install sg-tape-cmd as setuid binary
install -m4755 -o root -g root $(COMPILEDIR)/sg-tape-cmd $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/sg-tape-cmd
$(foreach i,$(SERVICE_BIN), \
@ -152,8 +171,10 @@ install: $(COMPILED_BINS)
$(MAKE) -C docs install
.PHONY: upload
upload: ${SERVER_DEB} ${CLIENT_DEB} ${DOC_DEB}
upload: ${SERVER_DEB} ${CLIENT_DEB} ${RESTORE_DEB} ${DOC_DEB}
# check if working directory is clean
git diff --exit-code --stat && git diff --exit-code --stat --staged
tar cf - ${SERVER_DEB} ${SERVER_DBG_DEB} ${DOC_DEB} | ssh -X repoman@repo.proxmox.com upload --product pbs --dist buster
tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pbs,pve,pmg" --dist buster
tar cf - ${SERVER_DEB} ${SERVER_DBG_DEB} ${DOC_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB} | \
ssh -X repoman@repo.proxmox.com upload --product pbs --dist buster
tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pve,pmg,pbs-client" --dist buster
tar cf - ${RESTORE_DEB} ${RESTORE_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pve" --dist buster

23
build.rs Normal file
View File

@ -0,0 +1,23 @@
// build.rs
use std::env;
use std::process::Command;
fn git_command(args: &[&str]) -> String {
match Command::new("git").args(args).output() {
Ok(output) => String::from_utf8(output.stdout).unwrap().trim_end().to_string(),
Err(err) => {
panic!("git {:?} failed: {}", args, err);
}
}
}
fn main() {
let repo_path = git_command(&["rev-parse", "--show-toplevel"]);
let repoid = match env::var("REPOID") {
Ok(repoid) => repoid,
Err(_) => git_command(&["rev-parse", "HEAD"]),
};
println!("cargo:rustc-env=REPOID={}", repoid);
println!("cargo:rerun-if-changed={}/.git/HEAD", repo_path);
}

428
debian/changelog vendored
View File

@ -1,3 +1,431 @@
rust-proxmox-backup (1.1.14-1) buster; urgency=medium
* drop RawWaker usage to avoid leaking a refcount
* pbs-tools: LruCache: implement Drop to fix a memory leak for the cache
* ui: add notice for nearing PBS 1.1 End-of-Life
* backport "datastore: lookup: reuse ChunkStore on stale datastore re-open"
-- Proxmox Support Team <support@proxmox.com> Thu, 02 Jun 2022 18:07:54 +0200
rust-proxmox-backup (1.1.13-3) buster; urgency=medium
* fix sending log-rotation command to API daemons
-- Proxmox Support Team <support@proxmox.com> Tue, 19 Oct 2021 10:21:18 +0200
rust-proxmox-backup (1.1.13-2) buster; urgency=medium
* revert "auth: improve thread safety of 'crypt' C-library", not safe for
Debian buster based releases.
-- Proxmox Support Team <support@proxmox.com> Mon, 26 Jul 2021 16:40:07 +0200
rust-proxmox-backup (1.1.13-1) buster; urgency=medium
* auth: improve thread safety of 'crypt' C-library
* file-restore: increase lock timeout on QEMU map
* file restore daemon: log basic startup steps
* REST-API: set error message extension for bad-request response log to
ensure the actual error is logged in any (access) log, making debugging
such issues easier.
* restore daemon: use millisecond log resolution
* fix #3496: acme: plugin: actually sleep after setting the TXT record,
ensuring DNS propagation of that record. This makes it catch up with the
docs/web-interface, where the option was already available.
-- Proxmox Support Team <support@proxmox.com> Fri, 23 Jul 2021 12:34:29 +0200
rust-proxmox-backup (1.1.12-1) buster; urgency=medium
* subscription: set higher-level error to message instead of bailing out, to
ensure a force-check gets through
* ui: dashboard: datastore stats: fix closing <i> tag
* ui: datastore: option view: only navigate up when we actually removed the
datastore
-- Proxmox Support Team <support@proxmox.com> Fri, 09 Jul 2021 12:56:35 +0200
rust-proxmox-backup (1.1.11-1) buster; urgency=medium
* tape/drive: fix logging when requesting media
* tape: fix LTO locate_file for HP drives
* fix #3393 (again): pxar/create: try to read xattrs/fcaps/acls by default
* proxmox-backup-manager: show task log on datastore create
-- Proxmox Support Team <support@proxmox.com> Wed, 30 Jun 2021 11:24:20 +0200
rust-proxmox-backup (1.1.10-1) buster; urgency=medium
* ui: datastore list summary: catch and show errors per datastore
* ui: dashboard: task summary: add a 'close' tool to the header
* ensure that backups which are currently being restored or backed up to a
tape won't get pruned
* improve error handling when locking a tape drive for a backup job
* client/pull: log snapshots that are skipped because of creation time being
older than last sync time
* ui: datastore options: add remove button to drop a datastore from the
configuration, without removing any actual data
* ui: tape: drive selector: do not autoselect the drive
* ui: tape: backup job: use correct default value for pbsUserSelector
* fix #3433: disks: port over Proxmox VE's S.M.A.R.T wearout logic
* backup: add helpers for async least recently used (LRU) caches for chunk
and index reading of backup snapshots
-- Proxmox Support Team <support@proxmox.com> Wed, 16 Jun 2021 09:46:15 +0200
rust-proxmox-backup (1.1.9-1) stable; urgency=medium
* lto/sg_tape/encryption: remove non lto-4 supported byte
* ui: improve tape restore
* ui: panel/UsageChart: change downloadServerUrl
* ui: css fixes and cleanups
* api2/tape: add api call to list media sets
* ui: tape/BackupOverview: expand pools by default
* api: node/journal: fix parameter extraction of /nodes/node/journal
* file-restore-daemon: limit concurrent download calls
* file-restore-daemon: watchdog: add inhibit for long downloads
* file-restore-daemon: work around tokio DuplexStream bug
* apt: fix removal of non-existent http-proxy config
* file-restore-daemon: disk: add RawFs bucket type
* file-restore-daemon: disk: ignore "invalid fs" error
-- Proxmox Support Team <support@proxmox.com> Tue, 01 Jun 2021 08:24:01 +0200
rust-proxmox-backup (1.1.8-1) stable; urgency=medium
* api-proxy: implement 'reload-certificate' command and hot-reload proxy
certificate when updating via the API
* ui: add task descriptions for ACME/Let's Encrypt related tasks
* correctly set apt proxy configuration
* ui: configuration: support setting an HTTP proxy for APT and subscription
checks.
* ui: tape: add 'Force new Media-Set' checkbox to manual backup
* ui: datastore/Content: add forget (delete) button for whole backup groups
* ui: tape: backup overview: move restore buttons inline to action-buttons,
making the UX more similar to the datastore content tree-view
* ui: tape restore: enable selecting multiple snapshots
* ui: dashboards statistics: visualize datastores where querying the usage
failed
-- Proxmox Support Team <support@proxmox.com> Fri, 21 May 2021 18:21:28 +0200
rust-proxmox-backup (1.1.7-1) unstable; urgency=medium
* client: use stderr for all fingerprint confirm msgs
* fix #3391: improve mismatched fingerprint handling
* tape: add single snapshot restore
* docs/api-viewer: improve rendering of array format
* tape/pool_writer: do not unwrap on channel send
* ui: window/SyncJobEdit: disable autoSelect for remote datastore
* ui: tape: rename 'Datastore' to 'Target Datastore'
* manager: acme plugin: auto-complete available DNS challenge types
* manager: acme plugin: remove ID completion helper from add command
* completion: ACME plugin type: comment out http type for now, not useful
* acme: use proxmox-acme-plugins and load schema from there
* fix #3296: add http_proxy to node config, and provide a CLI
* fix #3331: improve progress for last snapshot in group
* file-restore: add debug mode with serial access
* file-restore: support more drives
* file-restore: add more RAM for VMs with many drives or debug
* file-restore: try to kill VM when stale
* make sure URI paths start with a slash
* tape: use LOCATE(16) SCSI command
* call create_run_dir() at daemon startup
* tape/drive: add 'move_to_file' to TapeDriver trait
* proxmox_restore_daemon: mount ntfs with 'utf8' option
* client/http_client: add necessary brackets for ipv6
* docs: tape: clarify LTO-4/5 support
* tape/restore: optimize chunk restore behaviour
-- Proxmox Support Team <support@proxmox.com> Tue, 11 May 2021 13:22:49 +0200
rust-proxmox-backup (1.1.6-2) unstable; urgency=medium
* fix permissions set in create_run_dir
-- Proxmox Support Team <support@proxmox.com> Tue, 04 May 2021 12:25:00 +0200
rust-proxmox-backup (1.1.6-1) unstable; urgency=medium
* tape restore: do not verify restored files
* tape restore: add restore speed to logs
* tape restore: write datastore in separate thread
* add ACME support
* add node config
* docs: user-management: add note about untrusted certificates for
webauthn
* bin: use extract_output_format where necessary
* add ctime and size function to IndexFile trait
* ui: tape: handle tapes in changers without barcode
-- Proxmox Support Team <support@proxmox.com> Tue, 04 May 2021 12:09:25 +0200
rust-proxmox-backup (1.1.5-3) stable; urgency=medium
* file-restore: use 'norecovery' for XFS filesystems to allow mounting
those which were not unmounted during backup
-- Proxmox Support Team <support@proxmox.com> Thu, 29 Apr 2021 15:26:13 +0200
rust-proxmox-backup (1.1.5-2) stable; urgency=medium
* file-restore: strip .img.fidx suffix from drive serials to avoid running
into the 20 character limit SCSI serial values have.
-- Proxmox Support Team <support@proxmox.com> Wed, 28 Apr 2021 11:15:08 +0200
rust-proxmox-backup (1.1.5-1) unstable; urgency=medium
* tools/sgutils2: add size workaround for mode_sense
* tape: add read_medium_configuration_page() to detect WORM media
* file-restore: fix package name for kernel/initramfs image
* tape: remove MediumType struct, which is only valid on IBM drives
-- Proxmox Support Team <support@proxmox.com> Tue, 27 Apr 2021 12:20:04 +0200
rust-proxmox-backup (1.1.4-1) unstable; urgency=medium
* file-restore: add size to image files and components
* file-restore: exit with code 1 in case streaming fails
* file-restore: use less memory for VM (now 128 MiB) and reboot on panic
* ui: tape: improve reload drive-status logic on user actions
* tape backup: list the snapshots we could back up on failed backup
notification
* Improve on a scheduling issue when updating the calendar event such that
it would have triggered between the last run and now. Use the next future
event as the actual next trigger instead.
* SCSI mode sense: include the expected and unexpected sizes in the error
message, to allow easier debugging
-- Proxmox Support Team <support@proxmox.com> Tue, 27 Apr 2021 08:27:10 +0200
rust-proxmox-backup (1.1.3-2) unstable; urgency=medium
* improve check for LTO4 tapes
* api: node status: return further information about SWAP, IO-wait, CPU info
and Kernel version
-- Proxmox Support Team <support@proxmox.com> Fri, 23 Apr 2021 10:52:08 +0200
rust-proxmox-backup (1.1.3-1) unstable; urgency=medium
* tape restore: improve datastore locking when GC runs at the same time
* tape restore: always do quick chunk verification
* tape: improve compatibility with some changers
* tape: work around missing format command on LTO-4 drives, fall back to
slower rewind erase
* fix #3393: pxar: allow and save the 'security.NTACL' extended attribute
* file-restore: support encrypted VM backups
-- Proxmox Support Team <support@proxmox.com> Thu, 22 Apr 2021 20:14:58 +0200
rust-proxmox-backup (1.1.2-1) unstable; urgency=medium
* backup verify: always re-check if we can skip a chunk in the actual verify
loop.
* tape: do not try to backup unfinished backups
-- Proxmox Support Team <support@proxmox.com> Thu, 15 Apr 2021 13:26:52 +0200
rust-proxmox-backup (1.1.1-1) unstable; urgency=medium
* docs: include tape in table of contents
* docs: tape: improve definition-list format and add screenshots
* docs: reorder maintenance and network chapters after client-usage/tools
chapters
* ui: tape changer status: add Format button to drive grid
* backup/verify: improve speed on disks with slow random-IO (spinners) by
iterating over chunks sorted by inode
-- Proxmox Support Team <support@proxmox.com> Wed, 14 Apr 2021 14:50:29 +0200
rust-proxmox-backup (1.1.0-1) unstable; urgency=medium
* enable tape backup as technology preview by default
* tape: read drive status: clear deferred error or media changed events.
* tape: improve end-of-tape (EOT) error handling
* tape: cleanup media catalog on tape reuse
* zfs: re-use underlying pool wide IO stats for datasets
* api daemon: only log errors from accepting new connections to avoid opening
too many file descriptors
* api/datastore: allow downloading the entire archive as ZIP archive, not
only sub-paths
-- Proxmox Support Team <support@proxmox.com> Tue, 13 Apr 2021 14:42:18 +0200
rust-proxmox-backup (1.0.14-1) unstable; urgency=medium
* server: compress API call response and static files if client accepts that
* compress generated ZIP archives with deflate
* tape: implement LTO userspace driver
* docs: mention new user space tape driver, adopt device path names
* tape: always clear encryption key after backup (for security reasons)
* ui: improve changer status view
* add proxmox-file-restore package, providing a central file-restore binary
with preparations for restoring files also from block level backups using
QEMU for a safe encapsulation.
-- Proxmox Support Team <support@proxmox.com> Thu, 08 Apr 2021 16:35:11 +0200
rust-proxmox-backup (1.0.13-1) unstable; urgency=medium
* pxar: improve handling ACL entries on create and restore
-- Proxmox Support Team <support@proxmox.com> Fri, 02 Apr 2021 15:32:01 +0200
rust-proxmox-backup (1.0.12-1) unstable; urgency=medium
* tape: write catalogs to tape (speedup catalog restore)
* tape: add --scan option for catalog restore
* tape: improve locking (lock media-sets)
* tape: ui: enable datastore mappings
* fix #3359: fix blocking writes in async code during pxar create
* api2/tape/backup: wait indefinitely for lock in scheduled backup jobs
* documentation improvements
-- Proxmox Support Team <support@proxmox.com> Fri, 26 Mar 2021 14:08:47 +0100
rust-proxmox-backup (1.0.11-1) unstable; urgency=medium
* fix feature flag logic in pxar create
* tools/zip: add missing start_disk field for zip64 extension to improve
compatibility with some strict archive tools
* tape: speedup backup by doing read/write in parallel
* tape: store datastore name in tape archives and media catalog
-- Proxmox Support Team <support@proxmox.com> Thu, 18 Mar 2021 12:36:01 +0100
rust-proxmox-backup (1.0.10-1) unstable; urgency=medium
* tape: improve MediaPool allocation by sorting tapes by creation time and
label text
* api: tape backup: continue on vanishing snapshots, as a prune during long
running tape backup jobs is OK
* tape: fix scsi volume_statistics and cartridge_memory for quantum drives
* typo fixes all over the place
* d/postinst: restart, not reload, when updating from a too old version
-- Proxmox Support Team <support@proxmox.com> Thu, 11 Mar 2021 08:24:31 +0100
rust-proxmox-backup (1.0.9-1) unstable; urgency=medium
* client: track key source, print when used

63
debian/control vendored
View File

@ -15,6 +15,9 @@ Build-Depends: debhelper (>= 11),
librust-crossbeam-channel-0.5+default-dev,
librust-endian-trait-0.6+arrays-dev,
librust-endian-trait-0.6+default-dev,
librust-env-logger-0.7+default-dev,
librust-flate2-1+default-dev,
librust-foreign-types-0.3+default-dev,
librust-futures-0.3+default-dev,
librust-h2-0.3+default-dev,
librust-h2-0.3+stream-dev,
@ -36,13 +39,20 @@ Build-Depends: debhelper (>= 11),
librust-percent-encoding-2+default-dev (>= 2.1-~~),
librust-pin-project-1+default-dev,
librust-pin-utils-0.1+default-dev,
librust-proxmox-0.11+api-macro-dev,
librust-proxmox-0.11+default-dev,
librust-proxmox-0.11+sortable-macro-dev,
librust-proxmox-0.11+websocket-dev,
librust-proxmox-0.11+api-macro-dev (>= 0.11.6-~~),
librust-proxmox-0.11+cli-dev (>= 0.11.6-~~),
librust-proxmox-0.11+default-dev (>= 0.11.6-~~),
librust-proxmox-0.11+router-dev (>= 0.11.6-~~),
librust-proxmox-0.11+sortable-macro-dev (>= 0.11.6-~~),
librust-proxmox-0.11+tfa-dev (>= 0.11.6-~~),
librust-proxmox-acme-rs-0.3+default-dev,
librust-proxmox-fuse-0.1+default-dev (>= 0.1.1-~~),
librust-pxar-0.9+default-dev,
librust-pxar-0.9+tokio-io-dev,
librust-proxmox-http-0.2+client-dev (>= 0.2.1-~~),
librust-proxmox-http-0.2+default-dev (>= 0.2.1-~~),
librust-proxmox-http-0.2+http-helpers-dev (>= 0.2.1-~~),
librust-proxmox-http-0.2+websocket-dev (>= 0.2.1-~~),
librust-pxar-0.10+default-dev (>= 0.10.1-~~),
librust-pxar-0.10+tokio-io-dev (>= 0.10.1-~~),
librust-regex-1+default-dev (>= 1.2-~~),
librust-rustyline-7+default-dev,
librust-serde-1+default-dev,
@ -50,21 +60,24 @@ Build-Depends: debhelper (>= 11),
librust-serde-json-1+default-dev,
librust-siphasher-0.3+default-dev,
librust-syslog-4+default-dev,
librust-tokio-1+default-dev,
librust-tokio-1+fs-dev,
librust-tokio-1+io-util-dev,
librust-tokio-1+macros-dev,
librust-tokio-1+net-dev,
librust-tokio-1+parking-lot-dev,
librust-tokio-1+process-dev,
librust-tokio-1+rt-dev,
librust-tokio-1+rt-multi-thread-dev,
librust-tokio-1+signal-dev,
librust-tokio-1+time-dev,
librust-thiserror-1+default-dev,
librust-tokio-1+default-dev (>= 1.6-~~),
librust-tokio-1+fs-dev (>= 1.6-~~),
librust-tokio-1+io-std-dev (>= 1.6-~~),
librust-tokio-1+io-util-dev (>= 1.6-~~),
librust-tokio-1+macros-dev (>= 1.6-~~),
librust-tokio-1+net-dev (>= 1.6-~~),
librust-tokio-1+parking-lot-dev (>= 1.6-~~),
librust-tokio-1+process-dev (>= 1.6-~~),
librust-tokio-1+rt-dev (>= 1.6-~~),
librust-tokio-1+rt-multi-thread-dev (>= 1.6-~~),
librust-tokio-1+signal-dev (>= 1.6-~~),
librust-tokio-1+time-dev (>= 1.6-~~),
librust-tokio-openssl-0.6+default-dev (>= 0.6.1-~~),
librust-tokio-stream-0.1+default-dev,
librust-tokio-util-0.6+codec-dev,
librust-tokio-util-0.6+default-dev,
librust-tokio-util-0.6+io-dev,
librust-tower-service-0.3+default-dev,
librust-udev-0.4+default-dev | librust-udev-0.3+default-dev,
librust-url-2+default-dev (>= 2.1-~~),
@ -106,17 +119,16 @@ Architecture: any
Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1),
libjs-qrcodejs (>= 1.20201119),
libproxmox-acme-plugins,
libsgutils2-2,
libzstd1 (>= 1.3.8),
lvm2,
mt-st,
mtx,
openssh-server,
pbs-i18n,
postfix | mail-transport-agent,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.3-6),
proxmox-widget-toolkit (>= 2.6-2),
pve-xtermjs (>= 4.7.0-1),
sg3-utils,
smartmontools,
@ -146,3 +158,14 @@ Depends: libjs-extjs,
Architecture: all
Description: Proxmox Backup Documentation
This package contains the Proxmox Backup Documentation files.
Package: proxmox-backup-file-restore
Architecture: any
Depends: ${misc:Depends},
${shlibs:Depends},
Recommends: pve-qemu-kvm (>= 5.0.0-9),
proxmox-backup-restore-image,
Description: Proxmox Backup single file restore tools for pxar and block device backups
This package contains the Proxmox Backup single file restore client for
restoring individual files and folders from both host/container and VM/block
device backups. It includes a block device restore driver using QEMU.

16
debian/control.in vendored
View File

@ -3,17 +3,16 @@ Architecture: any
Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1),
libjs-qrcodejs (>= 1.20201119),
libproxmox-acme-plugins,
libsgutils2-2,
libzstd1 (>= 1.3.8),
lvm2,
mt-st,
mtx,
openssh-server,
pbs-i18n,
postfix | mail-transport-agent,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.3-6),
proxmox-widget-toolkit (>= 2.6-2),
pve-xtermjs (>= 4.7.0-1),
sg3-utils,
smartmontools,
@ -43,3 +42,14 @@ Depends: libjs-extjs,
Architecture: all
Description: Proxmox Backup Documentation
This package contains the Proxmox Backup Documentation files.
Package: proxmox-backup-file-restore
Architecture: any
Depends: ${misc:Depends},
${shlibs:Depends},
Recommends: pve-qemu-kvm (>= 5.0.0-9),
proxmox-backup-restore-image,
Description: Proxmox Backup single file restore tools for pxar and block device backups
This package contains the Proxmox Backup single file restore client for
restoring individual files and folders from both host/container and VM/block
device backups. It includes a block device restore driver using QEMU.

25
debian/postinst vendored
View File

@ -6,13 +6,21 @@ set -e
case "$1" in
configure)
# need to have user backup in the tapoe group
# need to have user backup in the tape group
usermod -a -G tape backup
# modeled after dh_systemd_start output
systemctl --system daemon-reload >/dev/null || true
if [ -n "$2" ]; then
if dpkg --compare-versions "$2" 'lt' '1.0.7-1'; then
# there was an issue with reloading and systemd being confused in older daemon versions
# so restart instead of reload if upgrading from there, see commit 0ec79339f7aebf9
# FIXME: remove with PBS 2.1
echo "Upgrading from older proxmox-backup-server: restart (not reload) daemons"
_dh_action=try-restart
else
_dh_action=try-reload-or-restart
fi
else
_dh_action=start
fi
@ -40,12 +48,27 @@ case "$1" in
/etc/proxmox-backup/remote.cfg || true
fi
fi
if dpkg --compare-versions "$2" 'le' '1.0.14-1'; then
# FIXME: Remove with 2.0
if grep -s -q -P -e '^linux:' /etc/proxmox-backup/tape.cfg; then
echo "========="
echo "= NOTE: You have now unsupported 'linux' tape drives configured."
echo "= * Execute 'udevadm control --reload-rules && udevadm trigger' to update /dev"
echo "= * Edit '/etc/proxmox-backup/tape.cfg', remove 'linux' entries and re-add over CLI/GUI"
echo "========="
fi
fi
# FIXME: remove with 2.0
if [ -d "/var/lib/proxmox-backup/tape" ] &&
[ "$(stat --printf '%a' '/var/lib/proxmox-backup/tape')" != "750" ]; then
chmod 0750 /var/lib/proxmox-backup/tape || true
fi
# FIXME: Remove in future version once we're sure no broken entries remain in anyone's files
if grep -q -e ':termproxy::[^@]\+: ' /var/log/proxmox-backup/tasks/active; then
echo "Fixing up termproxy user id in task log..."
flock -w 30 /var/log/proxmox-backup/tasks/active.lock sed -i 's/:termproxy::\([^@]\+\): /:termproxy::\1@pam: /' /var/log/proxmox-backup/tasks/active || true
fi
fi
;;
abort-upgrade|abort-remove|abort-deconfigure)

View File

@ -0,0 +1 @@
debian/proxmox-file-restore.bc proxmox-file-restore

8
debian/proxmox-backup-file-restore.bc vendored Normal file
View File

@ -0,0 +1,8 @@
# proxmox-file-restore bash completion
# see http://tiswww.case.edu/php/chet/bash/FAQ
# and __ltrim_colon_completions() in /usr/share/bash-completion/bash_completion
# this modifies global var, but I found no better way
COMP_WORDBREAKS=${COMP_WORDBREAKS//:}
complete -C 'proxmox-file-restore bashcomplete' proxmox-file-restore

View File

@ -0,0 +1,4 @@
usr/bin/proxmox-file-restore
usr/share/man/man1/proxmox-file-restore.1
usr/share/zsh/vendor-completions/_proxmox-file-restore
usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore/proxmox-restore-daemon

74
debian/proxmox-backup-file-restore.postinst vendored Executable file
View File

@ -0,0 +1,74 @@
#!/bin/sh
set -e
update_initramfs() {
# regenerate initramfs for single file restore VM
INST_PATH="/usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore"
CACHE_PATH="/var/cache/proxmox-backup/file-restore-initramfs.img"
CACHE_PATH_DBG="/var/cache/proxmox-backup/file-restore-initramfs-debug.img"
# cleanup first, in case proxmox-file-restore was uninstalled since we do
# not want an unuseable image lying around
rm -f "$CACHE_PATH"
if [ ! -f "$INST_PATH/initramfs.img" ]; then
echo "proxmox-backup-restore-image is not installed correctly, skipping update" >&2
exit 0
fi
echo "Updating file-restore initramfs..."
# avoid leftover temp file
cleanup() {
rm -f "$CACHE_PATH.tmp" "$CACHE_PATH_DBG.tmp"
}
trap cleanup EXIT
mkdir -p "/var/cache/proxmox-backup"
cp "$INST_PATH/initramfs.img" "$CACHE_PATH.tmp"
# cpio uses passed in path as offset inside the archive as well, so we need
# to be in the same dir as the daemon binary to ensure it's placed in /
( cd "$INST_PATH"; \
printf "./proxmox-restore-daemon" \
| cpio -o --format=newc -A -F "$CACHE_PATH.tmp" )
mv -f "$CACHE_PATH.tmp" "$CACHE_PATH"
if [ -f "$INST_PATH/initramfs-debug.img" ]; then
echo "Updating file-restore debug initramfs..."
cp "$INST_PATH/initramfs-debug.img" "$CACHE_PATH_DBG.tmp"
( cd "$INST_PATH"; \
printf "./proxmox-restore-daemon" \
| cpio -o --format=newc -A -F "$CACHE_PATH_DBG.tmp" )
mv -f "$CACHE_PATH_DBG.tmp" "$CACHE_PATH_DBG"
fi
trap - EXIT
}
case "$1" in
configure)
# in case restore daemon was updated
update_initramfs
;;
triggered)
if [ "$2" = "proxmox-backup-restore-image-update" ]; then
# in case base-image was updated
update_initramfs
else
echo "postinst called with unknown trigger name: \`$2'" >&2
fi
;;
abort-upgrade|abort-remove|abort-deconfigure)
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
exit 0

View File

@ -0,0 +1 @@
interest-noawait proxmox-backup-restore-image-update

18
debian/proxmox-backup-server.udev vendored Normal file
View File

@ -0,0 +1,18 @@
# do not edit this file, it will be overwritten on update
# persistent storage links: /dev/tape/{by-id,by-path}
ACTION=="remove", GOTO="persistent_storage_tape_end"
ENV{UDEV_DISABLE_PERSISTENT_STORAGE_RULES_FLAG}=="1", GOTO="persistent_storage_tape_end"
# also see: /lib/udev/rules.d/60-persistent-storage-tape.rules
SUBSYSTEM=="scsi_generic", SUBSYSTEMS=="scsi", ATTRS{type}=="1", IMPORT{program}="scsi_id --sg-version=3 --export --whitelisted -d $devnode", \
SYMLINK+="tape/by-id/scsi-$env{ID_SERIAL}-sg"
# iSCSI devices from the same host have all the same ID_SERIAL,
# but additionally a property named ID_SCSI_SERIAL.
SUBSYSTEM=="scsi_generic", SUBSYSTEMS=="scsi", ATTRS{type}=="1", ENV{ID_SCSI_SERIAL}=="?*", \
SYMLINK+="tape/by-id/scsi-$env{ID_SCSI_SERIAL}-sg"
LABEL="persistent_storage_tape_end"

7
debian/rules vendored
View File

@ -52,8 +52,11 @@ override_dh_dwz:
override_dh_strip:
dh_strip
for exe in $$(find debian/proxmox-backup-client/usr \
debian/proxmox-backup-server/usr -executable -type f); do \
for exe in $$(find \
debian/proxmox-backup-client/usr \
debian/proxmox-backup-server/usr \
debian/proxmox-backup-file-restore \
-executable -type f); do \
debian/scripts/elf-strip-unused-dependencies.sh "$$exe" || true; \
done

View File

@ -5,6 +5,7 @@ GENERATED_SYNOPSIS := \
proxmox-backup-client/synopsis.rst \
proxmox-backup-client/catalog-shell-synopsis.rst \
proxmox-backup-manager/synopsis.rst \
proxmox-file-restore/synopsis.rst \
pxar/synopsis.rst \
pmtx/synopsis.rst \
pmt/synopsis.rst \
@ -25,7 +26,8 @@ MAN1_PAGES := \
proxmox-tape.1 \
proxmox-backup-proxy.1 \
proxmox-backup-client.1 \
proxmox-backup-manager.1
proxmox-backup-manager.1 \
proxmox-file-restore.1
MAN5_PAGES := \
media-pool.cfg.5 \
@ -179,10 +181,16 @@ proxmox-backup-manager.1: proxmox-backup-manager/man1.rst proxmox-backup-manage
proxmox-backup-proxy.1: proxmox-backup-proxy/man1.rst proxmox-backup-proxy/description.rst
rst2man $< >$@
proxmox-file-restore/synopsis.rst: ${COMPILEDIR}/proxmox-file-restore
${COMPILEDIR}/proxmox-file-restore printdoc > proxmox-file-restore/synopsis.rst
proxmox-file-restore.1: proxmox-file-restore/man1.rst proxmox-file-restore/description.rst proxmox-file-restore/synopsis.rst
rst2man $< >$@
.PHONY: onlinehelpinfo
onlinehelpinfo:
@echo "Generating OnlineHelpInfo.js..."
$(SPHINXBUILD) -b proxmox-scanrefs $(ALLSPHINXOPTS) $(BUILDDIR)/scanrefs
$(SPHINXBUILD) -b proxmox-scanrefs -Q $(ALLSPHINXOPTS) $(BUILDDIR)/scanrefs
@echo "Build finished. OnlineHelpInfo.js is in $(BUILDDIR)/scanrefs."
api-viewer/apidata.js: ${COMPILEDIR}/docgen
@ -220,6 +228,7 @@ epub3: ${GENERATED_SYNOPSIS}
clean:
rm -r -f *~ *.1 ${BUILDDIR} ${GENERATED_SYNOPSIS} api-viewer/apidata.js
rm -f api-viewer/apidoc.js lto-barcode/lto-barcode-generator.js
install_manual_pages: ${MAN1_PAGES} ${MAN5_PAGES}

View File

@ -86,13 +86,9 @@ Ext.onReady(function() {
return pdef['enum'] ? 'enum' : (pdef.type || 'string');
};
var render_format = function(value, metaData, record) {
var pdef = record.data;
metaData.style = 'white-space:normal;'
let render_simple_format = function(pdef, type_fallback) {
if (pdef.typetext)
return Ext.htmlEncode(pdef.typetext);
return pdef.typetext;
if (pdef['enum'])
return pdef['enum'].join(' | ');
@ -101,9 +97,28 @@ Ext.onReady(function() {
return pdef.format;
if (pdef.pattern)
return Ext.htmlEncode(pdef.pattern);
return pdef.pattern;
return '';
if (pdef.type === 'boolean')
return `<true|false>`;
if (type_fallback && pdef.type)
return `<${pdef.type}>`;
return;
};
let render_format = function(value, metaData, record) {
let pdef = record.data;
metaData.style = 'white-space:normal;'
if (pdef.type === 'array' && pdef.items) {
let format = render_simple_format(pdef.items, true);
return `[${Ext.htmlEncode(format)}, ...]`;
}
return Ext.htmlEncode(render_simple_format(pdef) || '');
};
var real_path = function(path) {
@ -143,7 +158,7 @@ Ext.onReady(function() {
permhtml += "</div></div>";
} else {
//console.log(permission);
permhtml += "Unknown systax!";
permhtml += "Unknown syntax!";
}
return permhtml;

View File

@ -3,9 +3,10 @@ Backup Client Usage
The command line client is called :command:`proxmox-backup-client`.
.. _client_repository:
Repository Locations
--------------------
Backup Repository Locations
---------------------------
The client uses the following notation to specify a datastore repository
on the backup server.
@ -471,7 +472,7 @@ located in ``/etc``, you could do the following:
pxar:/ > restore target/ --pattern etc/**/*.conf
...
The above will scan trough all the directories below ``/etc`` and restore all
The above will scan through all the directories below ``/etc`` and restore all
files ending in ``.conf``.
.. todo:: Explain interactive restore in more detail
@ -691,8 +692,15 @@ Benchmarking
------------
The backup client also comes with a benchmarking tool. This tool measures
various metrics relating to compression and encryption speeds. You can run a
benchmark using the ``benchmark`` subcommand of ``proxmox-backup-client``:
various metrics relating to compression and encryption speeds. If a Proxmox
Backup repository (remote or local) is specified, the TLS upload speed will get
measured too.
You can run a benchmark using the ``benchmark`` subcommand of
``proxmox-backup-client``:
.. note:: The TLS speed test is only included if a :ref:`backup server
repository is specified <client_repository>`.
.. code-block:: console
@ -723,8 +731,7 @@ benchmark using the ``benchmark`` subcommand of ``proxmox-backup-client``:
.. note:: The percentages given in the output table correspond to a
comparison against a Ryzen 7 2700X. The TLS test connects to the
local host, so there is no network involved.
comparison against a Ryzen 7 2700X.
You can also pass the ``--output-format`` parameter to output stats in ``json``,
rather than the default table format.

View File

@ -6,6 +6,11 @@ Command Line Tools
.. include:: proxmox-backup-client/description.rst
``proxmox-file-restore``
~~~~~~~~~~~~~~~~~~~~~~~~~
.. include:: proxmox-file-restore/description.rst
``proxmox-backup-manager``
~~~~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -26,6 +26,27 @@ Those command are available when you start an interactive restore shell:
.. include:: proxmox-backup-manager/synopsis.rst
``proxmox-tape``
----------------
.. include:: proxmox-tape/synopsis.rst
``pmt``
-------
.. include:: pmt/options.rst
....
.. include:: pmt/synopsis.rst
``pmtx``
--------
.. include:: pmtx/synopsis.rst
``pxar``
--------

View File

@ -49,7 +49,7 @@ PygmentsBridge.latex_formatter = CustomLatexFormatter
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ["sphinx.ext.graphviz", "sphinx.ext.todo", "proxmox-scanrefs"]
extensions = ["sphinx.ext.graphviz", 'sphinx.ext.mathjax', "sphinx.ext.todo", "proxmox-scanrefs"]
todo_link_only = True
@ -307,6 +307,9 @@ html_show_sourcelink = False
# Output file base name for HTML help builder.
htmlhelp_basename = 'ProxmoxBackupdoc'
# use local mathjax package, symlink comes from debian/proxmox-backup-docs.links
mathjax_path = "mathjax/MathJax.js?config=TeX-AMS-MML_HTMLorMML"
# -- Options for LaTeX output ---------------------------------------------
latex_engine = 'xelatex'
@ -464,6 +467,3 @@ epub_exclude_files = ['search.html']
# If false, no index is generated.
#
# epub_use_index = True
# use local mathjax package, symlink comes from debian/proxmox-backup-docs.links
mathjax_path = "mathjax/MathJax.js?config=TeX-AMS-MML_HTMLorMML"

View File

@ -1,4 +1,4 @@
Each drive configuration section starts with a header ``linux: <name>``,
Each LTO drive configuration section starts with a header ``lto: <name>``,
followed by the drive configuration options.
Tape changer configurations starts with ``changer: <name>``,
@ -6,7 +6,7 @@ followed by the changer configuration options.
::
linux: hh8
lto: hh8
changer sl3
path /dev/tape/by-id/scsi-10WT065325-nst

View File

@ -37,8 +37,53 @@ Options
.. include:: config/datastore/config.rst
``media-pool.cfg``
~~~~~~~~~~~~~~~~~~
File Format
^^^^^^^^^^^
.. include:: config/media-pool/format.rst
Options
^^^^^^^
.. include:: config/media-pool/config.rst
``tape.cfg``
~~~~~~~~~~~~
File Format
^^^^^^^^^^^
.. include:: config/tape/format.rst
Options
^^^^^^^
.. include:: config/tape/config.rst
``tape-job.cfg``
~~~~~~~~~~~~~~~~
File Format
^^^^^^^^^^^
.. include:: config/tape-job/format.rst
Options
^^^^^^^
.. include:: config/tape-job/config.rst
``user.cfg``
~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~
File Format
^^^^^^^^^^^

View File

@ -57,6 +57,11 @@ div.sphinxsidebar h3 {
div.sphinxsidebar h1.logo-name {
display: none;
}
div.document, div.footer {
width: min(100%, 1320px);
}
@media screen and (max-width: 875px) {
div.sphinxsidebar p.logo {
display: initial;
@ -65,9 +70,19 @@ div.sphinxsidebar h1.logo-name {
display: block;
}
div.sphinxsidebar span {
color: #AAA;
color: #EEE;
}
ul li.toctree-l1 > a {
.sphinxsidebar ul li.toctree-l1 > a, div.sphinxsidebar a {
color: #FFF;
}
div.sphinxsidebar {
background-color: #555;
}
div.body {
min-width: 300px;
}
div.footer {
display: block;
margin: 15px auto 0px auto;
}
}

View File

@ -61,9 +61,7 @@ attacker gains access to the server or any point of the network, they will not
be able to read the data.
.. note:: Encryption is not enabled by default. To set up encryption, see the
`Encryption
<https://pbs.proxmox.com/docs/administration-guide.html#encryption>`_ section
of the Proxmox Backup Server Administration Guide.
:ref:`backup client encryption section <client_encryption>`.
Is the backup incremental/deduplicated?

View File

@ -112,6 +112,18 @@ The administration menu item also contains a disk management subsection:
* **Directory**: Create and view information on *ext4* and *xfs* disks
* **ZFS**: Create and view information on *ZFS* disks
Tape Backup
^^^^^^^^^^^
.. image:: images/screenshots/pbs-gui-tape-changer-overview.png
:align: right
:alt: Tape Backup: Tape changer overview
The `Tape Backup`_ section contains a top panel, managing tape media sets,
inventories, drives, changers and the tape backup jobs itself.
It also contains a subsection per standalone drive and per changer, with a
status and management view for those devices.
Datastore
^^^^^^^^^

(Ten new binary image files added; contents not shown. Sizes range from 12 KiB to 117 KiB.)

View File

@ -25,14 +25,15 @@ in the section entitled "GNU Free Documentation License".
terminology.rst
gui.rst
storage.rst
network-management.rst
user-management.rst
managing-remotes.rst
maintenance.rst
backup-client.rst
pve-integration.rst
pxar-tool.rst
tape-backup.rst
managing-remotes.rst
maintenance.rst
sysadmin.rst
network-management.rst
technical-overview.rst
faq.rst

View File

@ -113,9 +113,9 @@ Client Installation
Install `Proxmox Backup`_ Client on Debian
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Proxmox ships as a set of Debian packages to be installed on
top of a standard Debian installation. After configuring the
:ref:`sysadmin_package_repositories`, you need to run:
Proxmox ships as a set of Debian packages to be installed on top of a standard
Debian installation. After configuring the :ref:`package_repositories_client_only_apt`,
you need to run:
.. code-block:: console
@ -123,12 +123,6 @@ top of a standard Debian installation. After configuring the
# apt-get install proxmox-backup-client
Installing from source
~~~~~~~~~~~~~~~~~~~~~~
.. note:: The client-only repository should be usable by most recent Debian and
Ubuntu derivatives.
.. todo:: Add section "Installing from source"
Installing statically linked binary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. todo:: Add section "Installing statically linked binary"

View File

@ -65,10 +65,10 @@ Main Features
:Compression: The ultra-fast Zstandard_ compression is able to compress
several gigabytes of data per second.
:Encryption: Backups can be encrypted on the client-side, using AES-256 in
Galois/Counter Mode (GCM_). This authenticated encryption (AE_) mode
provides very high performance on modern hardware. In addition to client-side
encryption, all data is transferred via a secure TLS connection.
:Encryption: Backups can be encrypted on the client-side, using AES-256 GCM_.
This authenticated encryption (AE_) mode provides very high performance on
modern hardware. In addition to client-side encryption, all data is
transferred via a secure TLS connection.
:Web interface: Manage the Proxmox Backup Server with the integrated, web-based
user interface.
@ -76,8 +76,16 @@ Main Features
:Open Source: No secrets. Proxmox Backup Server is free and open-source
software. The source code is licensed under AGPL, v3.
:Support: Enterprise support will be available from `Proxmox`_ once the beta
phase is over.
:No Limits: Proxmox Backup Server has no artificial limits for backup storage or
backup-clients.
:Enterprise Support: Proxmox Server Solutions GmbH offers enterprise support in
the form of `Proxmox Backup Server Subscription Plans
<https://www.proxmox.com/en/proxmox-backup-server/pricing>`_. Users at every
subscription level get access to the Proxmox Backup :ref:`Enterprise
Repository <sysadmin_package_repos_enterprise>`. In addition, with a Basic,
Standard or Premium subscription, users have access to the :ref:`Proxmox
Customer Portal <get_help_enterprise_support>`.
Reasons for Data Backup?
@ -117,8 +125,8 @@ Proxmox Backup Server consists of multiple components:
* A client CLI tool (`proxmox-backup-client`) to access the server easily from
any `Linux amd64` environment
Aside from the web interface, everything is written in the Rust programming
language.
Aside from the web interface, most parts of Proxmox Backup Server are written in
the Rust programming language.
"The Rust programming language helps you write faster, more reliable software.
High-level ergonomics and low-level control are often at odds in programming
@ -134,6 +142,17 @@ language.
Getting Help
------------
.. _get_help_enterprise_support:
Enterprise Support
~~~~~~~~~~~~~~~~~~
Users with a `Proxmox Backup Server Basic, Standard or Premium Subscription Plan
<https://www.proxmox.com/en/proxmox-backup-server/pricing>`_ have access to the
`Proxmox Customer Portal <https://my.proxmox.com>`_. The customer portal
provides support with guaranteed response times from the Proxmox developers.
For more information or for volume discounts, please contact office@proxmox.com.
Community Support Forum
~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -148,7 +148,7 @@ are checked again. The interface for creating verify jobs can be found under the
**Verify Jobs** tab of the datastore.
.. Note:: It is recommended that you reverify all backups at least monthly, even
if a previous verification was successful. This is becuase physical drives
if a previous verification was successful. This is because physical drives
are susceptible to damage over time, which can cause an old, working backup
to become corrupted in a process known as `bit rot/data degradation
<https://en.wikipedia.org/wiki/Data_degradation>`_. It is good practice to

View File

@ -29,6 +29,8 @@ update``.
In addition, you need a package repository from Proxmox to get Proxmox Backup
updates.
.. _package_repos_secure_apt:
SecureApt
~~~~~~~~~
@ -69,10 +71,12 @@ Here, the output should be:
f3f6c5a3a67baf38ad178e5ff1ee270c /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
.. _sysadmin_package_repos_enterprise:
`Proxmox Backup`_ Enterprise Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This will be the default, stable, and recommended repository. It is available for
This is the stable, recommended repository. It is available for
all `Proxmox Backup`_ subscription users. It contains the most stable packages,
and is suitable for production use. The ``pbs-enterprise`` repository is
enabled by default:
@ -137,3 +141,56 @@ You can access this repository by adding the following line to
:caption: sources.list entry for ``pbstest``
deb http://download.proxmox.com/debian/pbs buster pbstest
.. _package_repositories_client_only:
Proxmox Backup Client-only Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you want to :ref:`use the Proxmox Backup Client <client_creating_backups>`
on systems using a Linux distribution not based on Proxmox projects, you can
use the client-only repository.
Currently, there is only a client repository for APT-based systems.
.. _package_repositories_client_only_apt:
APT-based Proxmox Backup Client Repository
++++++++++++++++++++++++++++++++++++++++++
For modern Linux distributions that use `apt` as their package manager, as all
Debian and Ubuntu derivatives do, you may be able to use the APT-based repository.
This repository is tested with:
- Debian Buster
- Ubuntu 20.04 LTS
It may work with older versions, and should work with more recently released ones.
In order to configure this repository you need to first :ref:`setup the Proxmox
release key <package_repos_secure_apt>`. After that, add the repository URL to
the APT sources lists.
Edit the file ``/etc/apt/sources.list.d/pbs-client.list`` and add the following
snippet:
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list.d/pbs-client.list``
deb http://download.proxmox.com/debian/pbs-client buster main
.. _node_options_http_proxy:
Repository Access Behind HTTP Proxy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some setups have restricted access to the internet, sometimes only through a
central proxy. You can set up an HTTP proxy through the Proxmox Backup Server's
web-interface in the `Configuration -> Authentication` tab.
Once configured, this proxy will be used for APT network requests and for
checking a Proxmox Backup Server support subscription.
Standard HTTP proxy configurations are accepted: `[http://]<host>[:port]`, where
the `<host>` part may include authorization credentials, for example:
`http://user:pass@proxy.example.org:12345`

View File

@ -1,11 +1,11 @@
All command supports the following parameters to specify the tape device:
All commands support the following parameters to specify the tape device:
--device <path> Path to the Linux tape device
--drive <name> Use drive from Proxmox Backup Server configuration.
Commands generating output supports the ``--output-format``
Commands which generate output support the ``--output-format``
parameter. It accepts the following values:
:``text``: Text format (default). Human readable.
@ -13,39 +13,3 @@ parameter. It accepts the following values:
:``json``: JSON (single line).
:``json-pretty``: JSON (multiple lines, nicely formatted).
Device driver options can be specified as integer numbers (see
``/usr/include/linux/mtio.h``), or using symbolic names:
:``buffer-writes``: Enable buffered writes
:``async-writes``: Enable async writes
:``read-ahead``: Use read-ahead for fixed block size
:``debugging``: Enable debugging if compiled into the driver
:``two-fm``: Write two file marks when closing the file
:``fast-mteom``: Space directly to eod (and lose file number)
:``auto-lock``: Automatically lock/unlock drive door
:``def-writes``: Defaults are meant only for writes
:``can-bsr``: Indicates that the drive can space backwards
:``no-blklims``: Drive does not support read block limits
:``can-partitions``: Drive can handle partitioned tapes
:``scsi2locical``: Seek and tell use SCSI-2 logical block addresses
:``sysv``: Enable the System V semantics
:``nowait``: Do not wait for rewind, etc. to complete
:``sili``: Enables setting the SILI bit in SCSI commands when reading
in variable block mode to enhance performance when reading blocks
shorter than the byte count

View File

@ -0,0 +1,3 @@
Command line tool for restoring files and directories from PBS archives. In contrast to
proxmox-backup-client, this supports both container/host and VM backups.

View File

@ -0,0 +1,28 @@
==========================
proxmox-file-restore
==========================
.. include:: ../epilog.rst
-----------------------------------------------------------------------
Command line tool for restoring files and directories from PBS archives
-----------------------------------------------------------------------
:Author: |AUTHOR|
:Version: Version |VERSION|
:Manual section: 1
Synopsis
==========
.. include:: synopsis.rst
Description
============
.. include:: description.rst
.. include:: ../pbs-copyright.rst

View File

@ -3,6 +3,26 @@
`Proxmox VE`_ Integration
-------------------------
A Proxmox Backup Server can be integrated into a Proxmox VE setup by adding the
former as a storage in a Proxmox VE standalone or cluster setup.
See also the `Proxmox VE Storage - Proxmox Backup Server
<https://pve.proxmox.com/pve-docs/pve-admin-guide.html#storage_pbs>`_ section
of the Proxmox VE Administration Guide for Proxmox VE specific documentation.
Using the Proxmox VE Web-Interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Proxmox VE has native API and web-interface integration of Proxmox Backup
Server since the `Proxmox VE 6.3 release
<https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_6.3>`_.
A Proxmox Backup Server can be added under ``Datacenter -> Storage``.
Using the Proxmox VE Command-Line
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You need to define a new storage with type 'pbs' on your `Proxmox VE`_
node. The following example uses ``store2`` as storage name, and
assumes the server address is ``localhost``, and you want to connect
@ -41,9 +61,9 @@ After that you should be able to see storage status with:
Name Type Status Total Used Available %
store2 pbs active 3905109820 1336687816 2568422004 34.23%
Having added the PBS datastore to `Proxmox VE`_, you can backup VMs and
containers in the same way you would for any other storage device within the
environment (see `PVE Admin Guide: Backup and Restore
Having added the Proxmox Backup Server datastore to `Proxmox VE`_, you can
back up VMs and containers in the same way you would for any other storage
device within the environment (see `Proxmox VE Admin Guide: Backup and Restore
<https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_vzdump>`_).

View File

@ -1,5 +1,5 @@
Storage
=======
Backup Storage
==============
.. _storage_disk_management:

View File

@ -4,46 +4,45 @@ Tape Backup
===========
.. CAUTION:: Tape Backup is a technical preview feature, not meant for
production usage. To enable the GUI, you need to issue the
following command (as root user on the console):
production use.
.. code-block:: console
# touch /etc/proxmox-backup/tape.cfg
.. image:: images/screenshots/pbs-gui-tape-changer-overview.png
:align: right
:alt: Tape Backup: Tape changer overview
Proxmox tape backup provides an easy way to store datastore content
onto magnetic tapes. This increases data safety because you get:
- an additional copy of the data
- to a different media type (tape)
- an additional copy of the data,
- on a different media type (tape),
- to an additional location (you can move tapes off-site)
In most restore jobs, only data from the last backup job is restored.
Restore requests further decline the older the data
Restore requests further decline, the older the data
gets. Considering this, tape backup may also help to reduce disk
usage, because you can safely remove data from disk once archived on
tape. This is especially true if you need to keep data for several
usage, because you can safely remove data from disk, once it's archived on
tape. This is especially true if you need to retain data for several
years.
Tape backups do not provide random access to the stored data. Instead,
you need to restore the data to disk before you can access it
you need to restore the data to disk, before you can access it
again. Also, if you store your tapes off-site (using some kind of tape
vaulting service), you need to bring them on-site before you can do any
restore. So please consider that restores from tapes can take much
longer than restores from disk.
vaulting service), you need to bring them back on-site, before you can do any
restores. So please consider that restoring from tape can take much
longer than restoring from disk.
Tape Technology Primer
----------------------
.. _Linear Tape Open: https://en.wikipedia.org/wiki/Linear_Tape-Open
.. _Linear Tape-Open: https://en.wikipedia.org/wiki/Linear_Tape-Open
As of 2021, the only broadly available tape technology standard is
`Linear Tape Open`_, and different vendors offers LTO Ultrium tape
drives, auto-loaders and LTO tape cartridges.
As of 2021, the only widely available tape technology standard is
`Linear Tape-Open`_ (LTO). Different vendors offer LTO Ultrium tape
drives, auto-loaders, and LTO tape cartridges.
There are a few vendors offering proprietary drives with
slight advantages in performance and capacity, but they have
There are a few vendors that offer proprietary drives with
slight advantages in performance and capacity. Nevertheless, they have
significant disadvantages:
- proprietary (single vendor)
@ -53,13 +52,13 @@ So we currently do not test such drives.
In general, LTO tapes offer the following advantages:
- Durable (30 years)
- Durability (30 year lifespan)
- High Capacity (12 TB)
- Relatively low cost per TB
- Cold Media
- Movable (storable inside vault)
- Multiple vendors (for both media and drives)
- Build in AES-CGM Encryption engine
- Built in AES-GCM Encryption engine
Note that `Proxmox Backup Server` already stores compressed data, so using the
tape compression feature has no advantage.
@ -68,41 +67,47 @@ tape compression feature has no advantage.
Supported Hardware
------------------
Proxmox Backup Server supports `Linear Tape Open`_ generation 4 (LTO4)
or later. In general, all SCSI2 tape drives supported by the Linux
kernel should work, but feature like hardware encryptions needs LTO4
or later.
Proxmox Backup Server supports `Linear Tape-Open`_ generation 5 (LTO-5)
or later and has best-effort support for generation 4 (LTO-4). While
many LTO-4 systems are known to work, some might need firmware updates or
do not implement necessary features to work with Proxmox Backup Server.
Tape changer support is done using the Linux 'mtx' command line
tool. So any changer device supported by that tool should work.
Tape changing is carried out using the SCSI Medium Changer protocol,
so all modern tape libraries should work.
.. Note:: We use a custom user space tape driver written in Rust_. This
driver directly communicates with the tape drive using the SCSI
generic interface. This may have negative side effects when used with the old
Linux kernel tape driver, so you should not use that driver with
Proxmox tape backup.
Drive Performance
~~~~~~~~~~~~~~~~~
Current LTO-8 tapes provide read/write speeds up to 360 MB/s. This means,
Current LTO-8 tapes provide read/write speeds of up to 360 MB/s. This means,
that it still takes a minimum of 9 hours to completely write or
read a single tape (even at maximum speed).
The only way to speed up that data rate is to use more than one
drive. That way you can run several backup jobs in parallel, or run
The only way to speed that data rate up is to use more than one
drive. That way, you can run several backup jobs in parallel, or run
restore jobs while the other drives are used for backups.
Also consider that you need to read data first from your datastore
(disk). But a single spinning disk is unable to deliver data at this
Also consider that you first need to read data from your datastore
(disk). However, a single spinning disk is unable to deliver data at this
rate. We measured a maximum rate of about 60MB/s to 100MB/s in practice,
so it takes 33 hours to read 12TB to fill up an LTO-8 tape. If you want
to run your tape at full speed, please make sure that the source
datastore is able to deliver that performance (e.g, by using SSDs).
so it takes 33 hours to read the 12TB needed to fill up an LTO-8 tape. If you want
to write to your tape at full speed, please make sure that the source
datastore is able to deliver that performance (for example, by using SSDs).
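
To sanity-check these figures, the sketch below works them out explicitly (an
illustration only; the 12 TB capacity and the quoted data rates are the nominal
values mentioned above):

.. code-block:: rust

   // Rough time needed to completely write or read an LTO-8 tape at a given rate.
   fn hours(bytes: f64, bytes_per_second: f64) -> f64 {
       bytes / bytes_per_second / 3600.0
   }

   fn main() {
       let capacity = 12.0e12; // 12 TB native LTO-8 capacity
       println!("{:.1} h at 360 MB/s", hours(capacity, 360.0e6)); // ~9.3 hours
       println!("{:.1} h at 100 MB/s", hours(capacity, 100.0e6)); // ~33.3 hours
   }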
Terminology
-----------
:Tape Labels: are used to uniquely identify a tape. You normally use
some sticky paper labels and apply them on the front of the
cartridge. We additionally store the label text magnetically on the
tape (first file on tape).
**Tape Labels:**
are used to uniquely identify a tape. You would normally apply a
sticky paper label to the front of the cartridge. We additionally
store the label text magnetically on the tape (first file on tape).
.. _Code 39: https://en.wikipedia.org/wiki/Code_39
@ -110,57 +115,65 @@ Terminology
.. _LTO Barcode Generator: lto-barcode/index.html
:Barcodes: are a special form of tape labels, which are electronically
**Barcodes:**
are a special form of tape labels, which are electronically
readable. Most LTO tape robots use an 8 character string encoded as
`Code 39`_, as defined in the `LTO Ultrium Cartridge Label
Specification`_.
You can either buy such barcode labels from your cartridge vendor,
or print them yourself. You can use our `LTO Barcode Generator`_ App
for that.
or print them yourself. You can use our `LTO Barcode Generator`_
app, if you would like to print them yourself.
.. Note:: Physical labels and the associated adhesive shall have an
.. Note:: Physical labels and the associated adhesive should have an
environmental performance to match or exceed the environmental
specifications of the cartridge to which it is applied.
:Media Pools: A media pool is a logical container for tapes. A backup
job targets one media pool, so a job only uses tapes from that
pool. The pool additionally defines how long a backup job can
append data to tapes (allocation policy) and how long you want to
keep the data (retention policy).
**Media Pools:**
A media pool is a logical container for tapes. A backup job targets
one media pool, so a job only uses tapes from that pool. The pool
additionally defines how long a backup job can append data to tapes
(allocation policy) and how long you want to keep the data
(retention policy).
:Media Set: A group of continuously written tapes (all from the same
media pool).
**Media Set:**
A group of continuously written tapes (all from the same media pool).
:Tape drive: The device used to read and write data to the tape. There
are standalone drives, but drives often ship within tape libraries.
**Tape drive:**
The device used to read and write data to the tape. There are
standalone drives, but drives are usually shipped within tape
libraries.
:Tape changer: A device which can change the tapes inside a tape drive
(tape robot). They are usually part of a tape library.
**Tape changer:**
A device which can change the tapes inside a tape drive (tape
robot). They are usually part of a tape library.
.. _Tape Library: https://en.wikipedia.org/wiki/Tape_library
:`Tape library`_: A storage device that contains one or more tape drives,
a number of slots to hold tape cartridges, a barcode reader to
identify tape cartridges and an automated method for loading tapes
(a robot).
`Tape library`_:
A storage device that contains one or more tape drives, a number of
slots to hold tape cartridges, a barcode reader to identify tape
cartridges, and an automated method for loading tapes (a robot).
This is also commonly known as 'autoloader', 'tape robot' or 'tape jukebox'.
This is also commonly known as an 'autoloader', 'tape robot' or
'tape jukebox'.
:Inventory: The inventory stores the list of known tapes (with
additional status information).
**Inventory:**
The inventory stores the list of known tapes (with additional status
information).
:Catalog: A media catalog stores information about the media content.
**Catalog:**
A media catalog stores information about the media content.
Tape Quickstart
---------------
Tape Quick Start
----------------
1. Configure your tape hardware (drives and changers)
2. Configure one or more media pools
3. Label your tape cartridges.
3. Label your tape cartridges
4. Start your first tape backup job ...
@ -169,7 +182,7 @@ Configuration
-------------
Please note that you can configure anything using the graphical user
interface or the command line interface. Both methods results in the
interface or the command line interface. Both methods result in the
same configuration.
.. _tape_changer_config:
@ -177,10 +190,17 @@ same configuration.
Tape changers
~~~~~~~~~~~~~
Tape changers (robots) are part of a `Tape Library`_. You can skip
this step if you are using a standalone drive.
.. image:: images/screenshots/pbs-gui-tape-changers.png
:align: right
:alt: Tape Backup: Tape Changers
Linux is able to auto detect those devices, and you can get a list
Tape changers (robots) are part of a `Tape Library`_. They contain a number of
slots to hold tape cartridges, a barcode reader to identify tape cartridges and
an automated method for loading tapes.
You can skip this step if you are using a standalone drive.
Linux is able to auto detect these devices, and you can get a list
of available devices using:
.. code-block:: console
@ -192,7 +212,7 @@ of available devices using:
│ /dev/tape/by-id/scsi-CC2C52 │ Quantum │ Superloader3 │ CC2C52 │
└─────────────────────────────┴─────────┴──────────────┴────────┘
In order to use that device with Proxmox, you need to create a
In order to use a device with Proxmox Backup Server, you need to create a
configuration entry:
.. code-block:: console
@ -201,11 +221,18 @@ configuration entry:
Where ``sl3`` is an arbitrary name you can choose.
.. Note:: Please use stable device path names from inside
.. Note:: Please use the persistent device path names from inside
``/dev/tape/by-id/``. Names like ``/dev/sg0`` may point to a
different device after reboot, and that is not what you want.
You can show the final configuration with:
.. image:: images/screenshots/pbs-gui-tape-changers-add.png
:align: right
:alt: Tape Backup: Add a new tape changer
This operation can also be carried out from the GUI, by navigating to the
**Changers** tab of **Tape Backup** and clicking **Add**.
You can display the final configuration with:
.. code-block:: console
@ -218,7 +245,8 @@ You can show the final configuration with:
│ path │ /dev/tape/by-id/scsi-CC2C52 │
└──────┴─────────────────────────────┘
Or simply list all configured changer devices:
Or simply list all configured changer devices (as seen in the **Changers** tab
of the GUI):
.. code-block:: console
@ -229,7 +257,7 @@ Or simply list all configured changer devices:
│ sl3 │ /dev/tape/by-id/scsi-CC2C52 │ Quantum │ Superloader3 │ CC2C52 │
└──────┴─────────────────────────────┴─────────┴──────────────┴────────────┘
The Vendor, Model and Serial number are auto detected, but only shown
The Vendor, Model and Serial number are auto-detected, but only shown
if the device is online.
To test your setup, please query the status of the changer device with:
@ -255,19 +283,19 @@ Tape libraries usually provide some special import/export slots (also
called "mail slots"). Tapes inside those slots are accessible from
outside, making it easy to add/remove tapes to/from the library. Those
tapes are considered to be "offline", so backup jobs will not use
them. Those special slots are auto-detected and marked as
them. Those special slots are auto-detected and marked as an
``import-export`` slot in the status command.
It's worth noting that some of the smaller tape libraries don't have
such slots. While they have something called "Mail Slot", that slot
is just a way to grab the tape from the gripper. But they are unable
such slots. While they have something called a "Mail Slot", that slot
is just a way to grab the tape from the gripper. They are unable
to hold media while the robot does other things. They also do not
expose that "Mail Slot" over the SCSI interface, so you wont see them in
expose that "Mail Slot" over the SCSI interface, so you won't see them in
the status output.
As a workaround, you can mark some of the normal slots as export
slots. The software treats those slots like real ``import-export``
slots, and the media inside those slots is considered to be 'offline'
slots, and the media inside those slots are considered to be 'offline'
(not available for backup):
.. code-block:: console
@ -303,6 +331,10 @@ the status output:
Tape drives
~~~~~~~~~~~
.. image:: images/screenshots/pbs-gui-tape-drives.png
:align: right
:alt: Tape Backup: Drive list
Linux is able to auto detect tape drives, and you can get a list
of available tape drives using:
@ -312,18 +344,23 @@ of available tape drives using:
┌────────────────────────────────┬────────┬─────────────┬────────┐
│ path │ vendor │ model │ serial │
╞════════════════════════════════╪════════╪═════════════╪════════╡
│ /dev/tape/by-id/scsi-12345-nst │ IBM │ ULT3580-TD4 │ 12345 │
│ /dev/tape/by-id/scsi-12345-sg │ IBM │ ULT3580-TD4 │ 12345 │
└────────────────────────────────┴────────┴─────────────┴────────┘
.. image:: images/screenshots/pbs-gui-tape-drives-add.png
:align: right
:alt: Tape Backup: Add a tape drive
In order to use that drive with Proxmox, you need to create a
configuration entry:
configuration entry. This can be done through **Tape Backup -> Drives** in the
GUI or by using the command below:
.. code-block:: console
# proxmox-tape drive create mydrive --path /dev/tape/by-id/scsi-12345-nst
# proxmox-tape drive create mydrive --path /dev/tape/by-id/scsi-12345-sg
.. Note:: Please use stable device path names from inside
``/dev/tape/by-id/``. Names like ``/dev/nst0`` may point to a
.. Note:: Please use the persistent device path names from inside
``/dev/tape/by-id/``. Names like ``/dev/sg0`` may point to a
different device after reboot, and that is not what you want.
If you have a tape library, you also need to set the associated
@ -334,10 +371,10 @@ changer device:
# proxmox-tape drive update mydrive --changer sl3 --changer-drivenum 0
The ``--changer-drivenum`` is only necessary if the tape library
includes more than one drive (The changer status command lists all
includes more than one drive (the changer status command lists all
drive numbers).
You can show the final configuration with:
You can display the final configuration with:
.. code-block:: console
@ -347,13 +384,13 @@ You can show the final configuration with:
╞═════════╪════════════════════════════════╡
│ name │ mydrive │
├─────────┼────────────────────────────────┤
│ path │ /dev/tape/by-id/scsi-12345-nst
│ path │ /dev/tape/by-id/scsi-12345-sg
├─────────┼────────────────────────────────┤
│ changer │ sl3 │
└─────────┴────────────────────────────────┘
.. NOTE:: The ``changer-drivenum`` value 0 is not stored in the
configuration, because that is the default.
configuration, because it is the default.
To list all configured drives use:
@ -363,10 +400,10 @@ To list all configured drives use:
┌──────────┬────────────────────────────────┬─────────┬────────┬─────────────┬────────┐
│ name │ path │ changer │ vendor │ model │ serial │
╞══════════╪════════════════════════════════╪═════════╪════════╪═════════════╪════════╡
│ mydrive │ /dev/tape/by-id/scsi-12345-nst │ sl3 │ IBM │ ULT3580-TD4 │ 12345 │
│ mydrive │ /dev/tape/by-id/scsi-12345-sg │ sl3 │ IBM │ ULT3580-TD4 │ 12345 │
└──────────┴────────────────────────────────┴─────────┴────────┴─────────────┴────────┘
The Vendor, Model and Serial number are auto detected, but only shown
The Vendor, Model and Serial number are auto detected and only shown
if the device is online.
For testing, you can simply query the drive status with:
@ -374,16 +411,38 @@ For testing, you can simply query the drive status with:
.. code-block:: console
# proxmox-tape status --drive mydrive
┌───────────────────────────────────┐
┌────────────────┬──────────────────────────┐
│ Name │ Value │
╞═══════════════════════════════════╡
╞════════════════╪══════════════════════════╡
│ blocksize │ 0 │
├───────────────────────────────────┤
│ status │ DRIVE_OPEN | IM_REP_EN │
├───────────────────────────────────┤
├────────────────┼──────────────────────────┤
│ density │ LTO4 │
├────────────────┼──────────────────────────┤
│ compression │ 1 │
├────────────────┼──────────────────────────┤
│ buffer-mode │ 1 │
├────────────────┼──────────────────────────┤
│ alert-flags │ (empty) │
├────────────────┼──────────────────────────┤
│ file-number │ 0 │
├────────────────┼──────────────────────────┤
│ block-number │ 0 │
├────────────────┼──────────────────────────┤
│ manufactured │ Fri Dec 13 01:00:00 2019 │
├────────────────┼──────────────────────────┤
│ bytes-written │ 501.80 GiB │
├────────────────┼──────────────────────────┤
│ bytes-read │ 4.00 MiB │
├────────────────┼──────────────────────────┤
│ medium-passes │ 20 │
├────────────────┼──────────────────────────┤
│ medium-wearout │ 0.12% │
├────────────────┼──────────────────────────┤
│ volume-mounts │ 2 │
└────────────────┴──────────────────────────┘
.. NOTE:: Blocksize should always be 0 (variable block size
mode). This is the default anyways.
mode). This is the default anyway.
.. _tape_media_pool_config:
@ -391,19 +450,23 @@ For testing, you can simply query the drive status with:
Media Pools
~~~~~~~~~~~
.. image:: images/screenshots/pbs-gui-tape-pools.png
:align: right
:alt: Tape Backup: Media Pools
A media pool is a logical container for tapes. A backup job targets
one media pool, so a job only uses tapes from that pool.
a single media pool, so a job only uses tapes from that pool.
.. topic:: Media Set
A media set is a group of continuously written tapes, used to split
the larger pool into smaller, restorable units. One or more backup
jobs write to a media set, producing an ordered group of
tapes. Media sets are identified by an unique ID. That ID and the
sequence number is stored on each tape of that set (tape label).
tapes. Media sets are identified by a unique ID. That ID and the
sequence number are stored on each tape of that set (tape label).
Media sets are the basic unit for restore tasks, i.e. you need all
tapes in the set to restore the media set content. Data is fully
Media sets are the basic unit for restore tasks. This means that you need
every tape in the set to restore the media set contents. Data is fully
deduplicated inside a media set.
@ -412,37 +475,37 @@ one media pool, so a job only uses tapes from that pool.
The pool additionally defines how long backup jobs can append data
to a media set. The following settings are possible:
- Try to use the current media set.
- Try to use the current media set (``continue``).
This setting produce one large media set. While this is very
This setting produces one large media set. While this is very
space efficient (deduplication, no unused space), it can lead to
long restore times, because restore jobs needs to read all tapes in the
long restore times, because restore jobs need to read all tapes in the
set.
.. NOTE:: Data is fully deduplicated inside a media set. That
.. NOTE:: Data is fully deduplicated inside a media set. This
also means that data is randomly distributed over the tapes in
the set. So even if you restore a single VM, this may have to
read data from all tapes inside the media set.
the set. Thus, even if you restore a single VM, data may have to be
read from all tapes inside the media set.
Larger media sets are also more error prone, because a single
damaged media makes the restore fail.
Larger media sets are also more error-prone, because a single
damaged tape makes the restore fail.
Usage scenario: Mostly used with tape libraries, and you manually
Usage scenario: Mostly used with tape libraries. You manually
trigger new set creation by running a backup job with the
``--export`` option.
.. NOTE:: Retention period starts with the existence of a newer
media set.
- Always create a new media set.
- Always create a new media set (``always``).
With this setting each backup job creates a new media set. This
is less space efficient, because the last media from the last set
With this setting, each backup job creates a new media set. This
is less space efficient, because the media from the last set
may not be fully written, leaving the remaining space unused.
The advantage is that this produces media sets of minimal
size. Small set are easier to handle, you can move sets to an
off-site vault, and restore is much faster.
size. Small sets are easier to handle, can be moved more conveniently
to an off-site vault, and can be restored much faster.
.. NOTE:: Retention period starts with the creation time of the
media set.
@ -468,11 +531,11 @@ one media pool, so a job only uses tapes from that pool.
- Current set contains damaged or retired tapes.
- Media pool encryption changed
- Media pool encryption has changed
- Database consistency errors, e.g. if the inventory does not
contain required media info, or contain conflicting infos
(outdated data).
- Database consistency errors, for example, if the inventory does not
contain the required media information, or it contains conflicting
information (outdated data).
.. topic:: Retention Policy
@ -489,29 +552,34 @@ one media pool, so a job only uses tapes from that pool.
.. topic:: Hardware Encryption
LTO4 (or later) tape drives support hardware encryption. If you
LTO-4 (or later) tape drives support hardware encryption. If you
configure the media pool to use encryption, all data written to the
tapes is encrypted using the configured key.
That way, unauthorized users cannot read data from the media,
e.g. if you loose a media while shipping to an offsite location.
This way, unauthorized users cannot read data from the media,
for example, if you lose a tape while shipping to an offsite location.
.. Note:: If the backup client also encrypts data, data on tape
.. Note:: If the backup client also encrypts data, data on the tape
will be double encrypted.
The password protected key is stored on each media, so it is
possbible to `restore the key <tape_restore_encryption_key_>`_ using the password. Please make sure
you remember the password in case you need to restore the key.
The password protected key is stored on each medium, so that it is
possible to `restore the key <tape_restore_encryption_key_>`_ using
the password. Please make sure to remember the password, in case
you need to restore the key.
.. NOTE:: We use global content namespace, i.e. we do not store the
source datastore, so it is impossible to distinguish store1:/vm/100
from store2:/vm/100. Please use different media pools if the
sources are from different name spaces with conflicting names
(E.g. if the sources are from different Proxmox VE clusters).
.. NOTE:: We use global content namespace, meaning we do not store the
source datastore name. Because of this, it is impossible to distinguish
store1:/vm/100 from store2:/vm/100. Please use different media pools
if the sources are from different namespaces with conflicting names
(for example, if the sources are from different Proxmox VE clusters).
.. image:: images/screenshots/pbs-gui-tape-pools-add.png
:align: right
:alt: Tape Backup: Add a media pool
The following command creates a new media pool:
To create a new media pool, add one from **Tape Backup -> Media Pools** in the
GUI, or enter the following command:
.. code-block:: console
@ -520,7 +588,7 @@ The following command creates a new media pool:
# proxmox-tape pool create daily --drive mydrive
Additional option can be set later using the update command:
Additional options can be set later, using the update command:
.. code-block:: console
@ -543,9 +611,13 @@ To list all configured pools use:
Tape Backup Jobs
~~~~~~~~~~~~~~~~
.. image:: images/screenshots/pbs-gui-tape-backup-jobs.png
:align: right
:alt: Tape Backup: Tape Backup Jobs
To automate tape backup, you can configure tape backup jobs which
store datastore content to a media pool at a specific time
schedule. Required settings are:
write datastore content to a media pool, based on a specific time schedule.
The required settings are:
- ``store``: The datastore you want to backup
@ -564,14 +636,14 @@ use:
# proxmox-tape backup-job create job2 --store vmstore1 \
--pool yourpool --drive yourdrive --schedule daily
Backup includes all snapshot from a backup group by default. You can
The backup includes all snapshots from a backup group by default. You can
set the ``latest-only`` flag to include only the latest snapshots:
.. code-block:: console
# proxmox-tape backup-job update job2 --latest-only
Backup jobs can use email to send tape requests notifications or
Backup jobs can use email to send tape request notifications or
report errors. You can set the notification user with:
.. code-block:: console
@ -581,7 +653,7 @@ report errors. You can set the notification user with:
.. Note:: The email address is a property of the user (see :ref:`user_mgmt`).
It is sometimes useful to eject the tape from the drive after a
backup. For a standalone drive, the ``eject-media`` option eject the
backup. For a standalone drive, the ``eject-media`` option ejects the
tape, making sure that the following backup cannot use the tape
(unless someone manually loads the tape again). For tape libraries,
this option unloads the tape to a free slot, which provides better
@ -591,9 +663,9 @@ dust protection than inside a drive:
# proxmox-tape backup-job update job2 --eject-media
.. Note:: For failed jobs, the tape remain in the drive.
.. Note:: For failed jobs, the tape remains in the drive.
For tape libraries, the ``export-media`` options moves all tapes from
For tape libraries, the ``export-media`` option moves all tapes from
the media set to an export slot, making sure that the following backup
cannot use the tapes. An operator can pick up those tapes and move them
to a vault.
@ -618,13 +690,21 @@ To remove a job, please use:
# proxmox-tape backup-job remove job2
.. image:: images/screenshots/pbs-gui-tape-backup-jobs-add.png
:align: right
:alt: Tape Backup: Add a backup job
This same functionality also exists in the GUI, under the **Backup Jobs** tab of
**Tape Backup**, where *Local Datastore* relates to the datastore you want to
back up and *Media Pool* is the pool to back up to.
Administration
--------------
Many sub-command of the ``proxmox-tape`` command line tools take a
Many sub-commands of the ``proxmox-tape`` command line tool take a
parameter called ``--drive``, which specifies the tape drive you want
to work on. For convenience, you can set that in an environment
to work on. For convenience, you can set this in an environment
variable:
.. code-block:: console
@ -633,33 +713,33 @@ variable:
You can then omit the ``--drive`` parameter from the command. If the
drive has an associated changer device, you may also omit the changer
parameter from commands that needs a changer device, for example:
parameter from commands that need a changer device, for example:
.. code-block:: console
# proxmox-tape changer status
Should displays the changer status of the changer device associated with
should display the changer status of the changer device associated with
drive ``mydrive``.
Label Tapes
~~~~~~~~~~~
By default, tape cartidges all looks the same, so you need to put a
label on them for unique identification. So first, put a sticky paper
By default, tape cartridges all look the same, so you need to put a
label on them for unique identification. First, put a sticky paper
label with some human readable text on the cartridge.
If you use a `Tape Library`_, you should use an 8 character string
encoded as `Code 39`_, as definded in the `LTO Ultrium Cartridge Label
Specification`_. You can either bye such barcode labels from your
cartidge vendor, or print them yourself. You can use our `LTO Barcode
Generator`_ App for that.
encoded as `Code 39`_, as defined in the `LTO Ultrium Cartridge Label
Specification`_. You can either buy such barcode labels from your
cartridge vendor, or print them yourself. You can use our `LTO Barcode
Generator`_ app to print them.
Next, you need to write that same label text to the tape, so that the
software can uniquely identify the tape too.
For a standalone drive, manually insert the new tape cartidge into the
For a standalone drive, manually insert the new tape cartridge into the
drive and run:
.. code-block:: console
@ -668,7 +748,7 @@ drive and run:
You may omit the ``--pool`` argument to allow the tape to be used by any pool.
.. Note:: For safety reasons, this command fails if the tape contain
.. Note:: For safety reasons, this command fails if the tape contains
any data. If you want to overwrite it anyway, erase the tape first.
You can verify success by reading back the label:
@ -707,7 +787,7 @@ can then label all unlabeled tapes with a single command:
Run Tape Backups
~~~~~~~~~~~~~~~~
To manually run a backup job use:
To manually run a backup job click *Run Now* in the GUI or use the command:
.. code-block:: console
@ -718,7 +798,7 @@ The following options are available:
--eject-media Eject media upon job completion.
It is normally good practice to eject the tape after use. This unmounts the
tape from the drive and prevents the tape from getting dirty with dust.
tape from the drive and prevents the tape from getting dusty.
--export-media-set Export media set upon job completion.
@ -737,7 +817,7 @@ catalogs, you need to restore them first. Please note that you need
the catalog to find your data, but restoring a complete media-set does
not need media catalogs.
The following command shows the media content (from catalog):
The following command lists the media content (from catalog):
.. code-block:: console
@ -772,7 +852,14 @@ Restore Catalog
Encryption Key Management
~~~~~~~~~~~~~~~~~~~~~~~~~
Creating a new encryption key:
.. image:: images/screenshots/pbs-gui-tape-crypt-keys.png
:align: right
:alt: Tape Backup: Encryption Keys
Proxmox Backup Server also provides an interface for handling encryption keys on
the backup server. Encryption keys can be managed from the **Tape Backup ->
Encryption Keys** section of the GUI or through the ``proxmox-tape key`` command
line tool. To create a new encryption key from the command line:
.. code-block:: console
@ -841,7 +928,7 @@ database. Further restore jobs automatically use any available key.
Tape Cleaning
~~~~~~~~~~~~~
LTO tape drives requires regular cleaning. This is done by loading a
LTO tape drives require regular cleaning. This is done by loading a
cleaning cartridge into the drive, which is a manual task for
standalone drives.
@ -883,78 +970,3 @@ This command does the following:
- run drive cleaning operation
- unload the cleaning tape (to slot 3)
Configuration Files
-------------------
``media-pool.cfg``
~~~~~~~~~~~~~~~~~~
File Format
^^^^^^^^^^^
.. include:: config/media-pool/format.rst
Options
^^^^^^^
.. include:: config/media-pool/config.rst
``tape.cfg``
~~~~~~~~~~~~
File Format
^^^^^^^^^^^
.. include:: config/tape/format.rst
Options
^^^^^^^
.. include:: config/tape/config.rst
``tape-job.cfg``
~~~~~~~~~~~~~~~~
File Format
^^^^^^^^^^^
.. include:: config/tape-job/format.rst
Options
^^^^^^^
.. include:: config/tape-job/config.rst
Command Syntax
--------------
``proxmox-tape``
----------------
.. include:: proxmox-tape/synopsis.rst
``pmt``
-------
.. include:: pmt/options.rst
....
.. include:: pmt/synopsis.rst
``pmtx``
--------
.. include:: pmtx/synopsis.rst

View File

@ -100,7 +100,7 @@ can be encrypted, and they are handled in a slightly different manner than
normal chunks.
The hashes of encrypted chunks are calculated not with the actual (encrypted)
chunk content, but with the plaintext content concatenated with the encryption
chunk content, but with the plain-text content concatenated with the encryption
key. This way, two chunks of the same data encrypted with different keys
generate two different checksums and no collisions occur for multiple
encryption keys.
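
As a rough sketch of that construction (an illustration only, with a made-up
helper name, not necessarily the exact code used by Proxmox Backup Server):

.. code-block:: rust

   use openssl::sha::Sha256;

   // Illustration: hash the plain-text chunk content together with the encryption
   // key, so identical data encrypted with different keys yields different digests.
   fn encrypted_chunk_digest(plaintext: &[u8], key: &[u8]) -> [u8; 32] {
       let mut hasher = Sha256::new();
       hasher.update(plaintext);
       hasher.update(key);
       hasher.finish()
   }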
@ -138,7 +138,7 @@ will see that the probability of a collision in that scenario is:
For context, in a lottery game of guessing 6 out of 45, the chance to correctly
guess all 6 numbers is only :math:`1.2277 * 10^{-7}`, that means the chance of
collission is about the same as winning 13 such lotto games *in a row*.
a collision is about the same as winning 13 such lotto games *in a row*.
In conclusion, it is extremely unlikely that such a collision would occur by
accident in a normal datastore.

View File

@ -360,7 +360,9 @@ WebAuthn
For WebAuthn to work, you need to have two things:
* a trusted HTTPS certificate (for example, by using `Let's Encrypt
<https://pbs.proxmox.com/wiki/index.php/HTTPS_Certificate_Configuration>`_)
<https://pbs.proxmox.com/wiki/index.php/HTTPS_Certificate_Configuration>`_).
While it probably works with an untrusted certificate, some browsers may warn
or refuse WebAuthn operations if it is not trusted.
* set up the WebAuthn configuration (see *Configuration -> Authentication* in the
Proxmox Backup Server web-interface). This can be auto-filled in most setups.

684
src/acme/client.rs Normal file
View File

@ -0,0 +1,684 @@
//! HTTP Client for the ACME protocol.
use std::fs::OpenOptions;
use std::io;
use std::os::unix::fs::OpenOptionsExt;
use anyhow::{bail, format_err};
use bytes::Bytes;
use hyper::{Body, Request};
use nix::sys::stat::Mode;
use serde::{Deserialize, Serialize};
use proxmox::tools::fs::{replace_file, CreateOptions};
use proxmox_acme_rs::account::AccountCreator;
use proxmox_acme_rs::account::AccountData as AcmeAccountData;
use proxmox_acme_rs::order::{Order, OrderData};
use proxmox_acme_rs::Request as AcmeRequest;
use proxmox_acme_rs::{Account, Authorization, Challenge, Directory, Error, ErrorResponse};
use proxmox_http::client::SimpleHttp;
use crate::api2::types::AcmeAccountName;
use crate::config::acme::account_path;
use crate::tools::pbs_simple_http;
/// Our on-disk format inherited from PVE's proxmox-acme code.
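///
/// Illustrative example of the resulting JSON (values are made up; `debug` is
/// omitted when false, `tos` when unset):
///
/// ```text
/// {
///   "location": "https://acme.example.com/acme/acct/12345",
///   "account": { "status": "valid", "contact": ["mailto:admin@example.com"] },
///   "key": "-----BEGIN PRIVATE KEY-----\n...",
///   "tos": "https://acme.example.com/terms.pdf",
///   "directoryUrl": "https://acme.example.com/directory"
/// }
/// ```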
#[derive(Deserialize, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct AccountData {
/// The account's location URL.
location: String,
/// The account data.
account: AcmeAccountData,
/// The private key as PEM formatted string.
key: String,
/// ToS URL the user agreed to.
#[serde(skip_serializing_if = "Option::is_none")]
tos: Option<String>,
#[serde(skip_serializing_if = "is_false", default)]
debug: bool,
/// The directory's URL.
directory_url: String,
}
#[inline]
fn is_false(b: &bool) -> bool {
!*b
}
pub struct AcmeClient {
directory_url: String,
debug: bool,
account_path: Option<String>,
tos: Option<String>,
account: Option<Account>,
directory: Option<Directory>,
nonce: Option<String>,
http_client: SimpleHttp,
}
impl AcmeClient {
/// Create a new ACME client for a given ACME directory URL.
pub fn new(directory_url: String) -> Self {
Self {
directory_url,
debug: false,
account_path: None,
tos: None,
account: None,
directory: None,
nonce: None,
http_client: pbs_simple_http(None),
}
}
/// Load an existing ACME account by name.
pub async fn load(account_name: &AcmeAccountName) -> Result<Self, anyhow::Error> {
let account_path = account_path(account_name.as_ref());
let data = match tokio::fs::read(&account_path).await {
Ok(data) => data,
Err(err) if err.kind() == io::ErrorKind::NotFound => {
bail!("acme account '{}' does not exist", account_name)
}
Err(err) => bail!(
"failed to load acme account from '{}' - {}",
account_path,
err
),
};
let data: AccountData = serde_json::from_slice(&data).map_err(|err| {
format_err!(
"failed to parse acme account from '{}' - {}",
account_path,
err
)
})?;
let account = Account::from_parts(data.location, data.key, data.account);
let mut me = Self::new(data.directory_url);
me.debug = data.debug;
me.account_path = Some(account_path);
me.tos = data.tos;
me.account = Some(account);
Ok(me)
}
pub async fn new_account<'a>(
&'a mut self,
account_name: &AcmeAccountName,
tos_agreed: bool,
contact: Vec<String>,
rsa_bits: Option<u32>,
) -> Result<&'a Account, anyhow::Error> {
self.tos = if tos_agreed {
self.terms_of_service_url().await?.map(str::to_owned)
} else {
None
};
let account = Account::creator()
.set_contacts(contact)
.agree_to_tos(tos_agreed);
let account = if let Some(bits) = rsa_bits {
account.generate_rsa_key(bits)?
} else {
account.generate_ec_key()?
};
let _ = self.register_account(account).await?;
crate::config::acme::make_acme_account_dir()?;
let account_path = account_path(account_name.as_ref());
let file = OpenOptions::new()
.write(true)
.create(true)
.mode(0o600)
.open(&account_path)
.map_err(|err| format_err!("failed to open {:?} for writing: {}", account_path, err))?;
self.write_to(file).map_err(|err| {
format_err!(
"failed to write acme account to {:?}: {}",
account_path,
err
)
})?;
self.account_path = Some(account_path);
// unwrap: Setting `self.account` is literally this function's job, we just can't keep
// the borrow from `self.register_account()` active due to clashes.
Ok(self.account.as_ref().unwrap())
}
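/// Save the current account data to the stored account path (written with mode 0600, owned by root).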
fn save(&self) -> Result<(), anyhow::Error> {
let mut data = Vec::<u8>::new();
self.write_to(&mut data)?;
let account_path = self.account_path.as_ref().ok_or_else(|| {
format_err!("no account path set, cannot save updated account information")
})?;
crate::config::acme::make_acme_account_dir()?;
replace_file(
account_path,
&data,
CreateOptions::new()
.perm(Mode::from_bits_truncate(0o600))
.owner(nix::unistd::ROOT)
.group(nix::unistd::Gid::from_raw(0)),
)
}
/// Shortcut to `account().ok_or_else(...).key_authorization()`.
pub fn key_authorization(&self, token: &str) -> Result<String, anyhow::Error> {
Ok(Self::need_account(&self.account)?.key_authorization(token)?)
}
/// Shortcut to `account().ok_or_else(...).dns_01_txt_value()`, deriving
/// the DNS-01 TXT record value from the key authorization value.
pub fn dns_01_txt_value(&self, token: &str) -> Result<String, anyhow::Error> {
Ok(Self::need_account(&self.account)?.dns_01_txt_value(token)?)
}
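/// Register the account with the ACME directory, retrying on bad nonces, and remember it in `self.account`.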
async fn register_account(
&mut self,
account: AccountCreator,
) -> Result<&Account, anyhow::Error> {
let mut retry = retry();
let mut response = loop {
retry.tick()?;
let (directory, nonce) = Self::get_dir_nonce(
&mut self.http_client,
&self.directory_url,
&mut self.directory,
&mut self.nonce,
)
.await?;
let request = account.request(directory, nonce)?;
match self.run_request(request).await {
Ok(response) => break response,
Err(err) if err.is_bad_nonce() => continue,
Err(err) => return Err(err.into()),
}
};
let account = account.response(response.location_required()?, &response.body)?;
self.account = Some(account);
Ok(self.account.as_ref().unwrap())
}
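/// POST updated account data to the account's location URL, refresh `self.account` from the response and save it.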
pub async fn update_account<T: Serialize>(
&mut self,
data: &T,
) -> Result<&Account, anyhow::Error> {
let account = Self::need_account(&self.account)?;
let mut retry = retry();
let response = loop {
retry.tick()?;
let (_directory, nonce) = Self::get_dir_nonce(
&mut self.http_client,
&self.directory_url,
&mut self.directory,
&mut self.nonce,
)
.await?;
let request = account.post_request(&account.location, &nonce, data)?;
match Self::execute(&mut self.http_client, request, &mut self.nonce).await {
Ok(response) => break response,
Err(err) if err.is_bad_nonce() => continue,
Err(err) => return Err(err.into()),
}
};
// unwrap: we've been keeping an immutable reference to it from the top of the method
let _ = account;
self.account.as_mut().unwrap().data = response.json()?;
self.save()?;
Ok(self.account.as_ref().unwrap())
}
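/// Create a new ACME order for the given domain names.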
pub async fn new_order<I>(&mut self, domains: I) -> Result<Order, anyhow::Error>
where
I: IntoIterator<Item = String>,
{
let account = Self::need_account(&self.account)?;
let order = domains
.into_iter()
.fold(OrderData::new(), |order, domain| order.domain(domain));
let mut retry = retry();
loop {
retry.tick()?;
let (directory, nonce) = Self::get_dir_nonce(
&mut self.http_client,
&self.directory_url,
&mut self.directory,
&mut self.nonce,
)
.await?;
let mut new_order = account.new_order(&order, directory, nonce)?;
let mut response = match Self::execute(
&mut self.http_client,
new_order.request.take().unwrap(),
&mut self.nonce,
)
.await
{
Ok(response) => response,
Err(err) if err.is_bad_nonce() => continue,
Err(err) => return Err(err.into()),
};
return Ok(
new_order.response(response.location_required()?, response.bytes().as_ref())?
);
}
}
/// Low level "POST-as-GET" request.
async fn post_as_get(&mut self, url: &str) -> Result<AcmeResponse, anyhow::Error> {
let account = Self::need_account(&self.account)?;
let mut retry = retry();
loop {
retry.tick()?;
let (_directory, nonce) = Self::get_dir_nonce(
&mut self.http_client,
&self.directory_url,
&mut self.directory,
&mut self.nonce,
)
.await?;
let request = account.get_request(url, nonce)?;
match Self::execute(&mut self.http_client, request, &mut self.nonce).await {
Ok(response) => return Ok(response),
Err(err) if err.is_bad_nonce() => continue,
Err(err) => return Err(err.into()),
}
}
}
/// Low level POST request.
async fn post<T: Serialize>(
&mut self,
url: &str,
data: &T,
) -> Result<AcmeResponse, anyhow::Error> {
let account = Self::need_account(&self.account)?;
let mut retry = retry();
loop {
retry.tick()?;
let (_directory, nonce) = Self::get_dir_nonce(
&mut self.http_client,
&self.directory_url,
&mut self.directory,
&mut self.nonce,
)
.await?;
let request = account.post_request(url, nonce, data)?;
match Self::execute(&mut self.http_client, request, &mut self.nonce).await {
Ok(response) => return Ok(response),
Err(err) if err.is_bad_nonce() => continue,
Err(err) => return Err(err.into()),
}
}
}
/// Request challenge validation. Afterwards, the challenge should be polled.
pub async fn request_challenge_validation(
&mut self,
url: &str,
) -> Result<Challenge, anyhow::Error> {
Ok(self
.post(url, &serde_json::Value::Object(Default::default()))
.await?
.json()?)
}
/// Assuming the provided URL is an 'Authorization' URL, get and deserialize it.
pub async fn get_authorization(&mut self, url: &str) -> Result<Authorization, anyhow::Error> {
Ok(self.post_as_get(url).await?.json()?)
}
/// Assuming the provided URL is an 'Order' URL, get and deserialize it.
pub async fn get_order(&mut self, url: &str) -> Result<OrderData, anyhow::Error> {
Ok(self.post_as_get(url).await?.json()?)
}
/// Finalize an Order via its `finalize` URL property and the DER encoded CSR.
pub async fn finalize(&mut self, url: &str, csr: &[u8]) -> Result<(), anyhow::Error> {
let csr = base64::encode_config(csr, base64::URL_SAFE_NO_PAD);
let data = serde_json::json!({ "csr": csr });
self.post(url, &data).await?;
Ok(())
}
/// Download a certificate via its 'certificate' URL property.
///
/// The certificate will be a PEM certificate chain.
pub async fn get_certificate(&mut self, url: &str) -> Result<Bytes, anyhow::Error> {
Ok(self.post_as_get(url).await?.body)
}
/// Revoke an existing certificate (PEM or DER formatted).
pub async fn revoke_certificate(
&mut self,
certificate: &[u8],
reason: Option<u32>,
) -> Result<(), anyhow::Error> {
// TODO: This can also work without an account.
let account = Self::need_account(&self.account)?;
let revocation = account.revoke_certificate(certificate, reason)?;
let mut retry = retry();
loop {
retry.tick()?;
let (directory, nonce) = Self::get_dir_nonce(
&mut self.http_client,
&self.directory_url,
&mut self.directory,
&mut self.nonce,
)
.await?;
let request = revocation.request(&directory, nonce)?;
match Self::execute(&mut self.http_client, request, &mut self.nonce).await {
Ok(_response) => return Ok(()),
Err(err) if err.is_bad_nonce() => continue,
Err(err) => return Err(err.into()),
}
}
}
fn need_account(account: &Option<Account>) -> Result<&Account, anyhow::Error> {
account
.as_ref()
.ok_or_else(|| format_err!("cannot use client without an account"))
}
pub(crate) fn account(&self) -> Result<&Account, anyhow::Error> {
Self::need_account(&self.account)
}
pub fn tos(&self) -> Option<&str> {
self.tos.as_deref()
}
pub fn directory_url(&self) -> &str {
&self.directory_url
}
fn to_account_data(&self) -> Result<AccountData, anyhow::Error> {
let account = self.account()?;
Ok(AccountData {
location: account.location.clone(),
key: account.private_key.clone(),
account: AcmeAccountData {
only_return_existing: false, // don't actually write this out in case it's set
..account.data.clone()
},
tos: self.tos.clone(),
debug: self.debug,
directory_url: self.directory_url.clone(),
})
}
fn write_to<T: io::Write>(&self, out: T) -> Result<(), anyhow::Error> {
let data = self.to_account_data()?;
Ok(serde_json::to_writer_pretty(out, &data)?)
}
}
struct AcmeResponse {
body: Bytes,
location: Option<String>,
got_nonce: bool,
}
impl AcmeResponse {
/// Convenience helper to assert that a location header was part of the response.
fn location_required(&mut self) -> Result<String, anyhow::Error> {
self.location
.take()
.ok_or_else(|| format_err!("missing Location header"))
}
/// Convenience shortcut to perform json deserialization of the returned body.
fn json<T: for<'a> Deserialize<'a>>(&self) -> Result<T, Error> {
Ok(serde_json::from_slice(&self.body)?)
}
/// Convenience shortcut to get the body as bytes.
fn bytes(&self) -> &[u8] {
&self.body
}
}
impl AcmeClient {
/// Non-self-borrowing run_request version for borrow workarounds.
async fn execute(
http_client: &mut SimpleHttp,
request: AcmeRequest,
nonce: &mut Option<String>,
) -> Result<AcmeResponse, Error> {
let req_builder = Request::builder().method(request.method).uri(&request.url);
let http_request = if !request.content_type.is_empty() {
req_builder
.header("Content-Type", request.content_type)
.header("Content-Length", request.body.len())
.body(request.body.into())
} else {
req_builder.body(Body::empty())
}
.map_err(|err| Error::Custom(format!("failed to create http request: {}", err)))?;
let response = http_client
.request(http_request)
.await
.map_err(|err| Error::Custom(err.to_string()))?;
let (parts, body) = response.into_parts();
let status = parts.status.as_u16();
let body = hyper::body::to_bytes(body)
.await
.map_err(|err| Error::Custom(format!("failed to retrieve response body: {}", err)))?;
let got_nonce = if let Some(new_nonce) = parts.headers.get(proxmox_acme_rs::REPLAY_NONCE) {
let new_nonce = new_nonce.to_str().map_err(|err| {
Error::Client(format!(
"received invalid replay-nonce header from ACME server: {}",
err
))
})?;
*nonce = Some(new_nonce.to_owned());
true
} else {
false
};
if parts.status.is_success() {
if status != request.expected {
return Err(Error::InvalidApi(format!(
"ACME server responded with unexpected status code: {:?}",
parts.status
)));
}
let location = parts
.headers
.get("Location")
.map(|header| {
header.to_str().map(str::to_owned).map_err(|err| {
Error::Client(format!(
"received invalid location header from ACME server: {}",
err
))
})
})
.transpose()?;
return Ok(AcmeResponse {
body,
location,
got_nonce,
});
}
let error: ErrorResponse = serde_json::from_slice(&body).map_err(|err| {
Error::Client(format!(
"error status with improper error ACME response: {}",
err
))
})?;
if error.ty == proxmox_acme_rs::error::BAD_NONCE {
if !got_nonce {
return Err(Error::InvalidApi(
"badNonce without a new Replay-Nonce header".to_string(),
));
}
return Err(Error::BadNonce);
}
Err(Error::Api(error))
}
/// Low-level API to run an API request. This automatically updates the current nonce!
async fn run_request(&mut self, request: AcmeRequest) -> Result<AcmeResponse, Error> {
Self::execute(&mut self.http_client, request, &mut self.nonce).await
}
async fn directory(&mut self) -> Result<&Directory, Error> {
Ok(Self::get_directory(
&mut self.http_client,
&self.directory_url,
&mut self.directory,
&mut self.nonce,
)
.await?
.0)
}
async fn get_directory<'a, 'b>(
http_client: &mut SimpleHttp,
directory_url: &str,
directory: &'a mut Option<Directory>,
nonce: &'b mut Option<String>,
) -> Result<(&'a Directory, Option<&'b str>), Error> {
if let Some(d) = directory {
return Ok((d, nonce.as_deref()));
}
let response = Self::execute(
http_client,
AcmeRequest {
url: directory_url.to_string(),
method: "GET",
content_type: "",
body: String::new(),
expected: 200,
},
nonce,
)
.await?;
*directory = Some(Directory::from_parts(
directory_url.to_string(),
response.json()?,
));
Ok((directory.as_ref().unwrap(), nonce.as_deref()))
}
/// Like `get_directory`, but if the directory provides no nonce, also performs a `HEAD`
/// request on the new nonce URL.
async fn get_dir_nonce<'a, 'b>(
http_client: &mut SimpleHttp,
directory_url: &str,
directory: &'a mut Option<Directory>,
nonce: &'b mut Option<String>,
) -> Result<(&'a Directory, &'b str), Error> {
// this let construct is a lifetime workaround:
let _ = Self::get_directory(http_client, directory_url, directory, nonce).await?;
let dir = directory.as_ref().unwrap(); // the above fails if it couldn't fill this option
if nonce.is_none() {
// this is also a lifetime issue...
let _ = Self::get_nonce(http_client, nonce, dir.new_nonce_url()).await?;
};
Ok((dir, nonce.as_deref().unwrap()))
}
pub async fn terms_of_service_url(&mut self) -> Result<Option<&str>, Error> {
Ok(self.directory().await?.terms_of_service_url())
}
async fn get_nonce<'a>(
http_client: &mut SimpleHttp,
nonce: &'a mut Option<String>,
new_nonce_url: &str,
) -> Result<&'a str, Error> {
let response = Self::execute(
http_client,
AcmeRequest {
url: new_nonce_url.to_owned(),
method: "HEAD",
content_type: "",
body: String::new(),
expected: 200,
},
nonce,
)
.await?;
if !response.got_nonce {
return Err(Error::InvalidApi(
"no new nonce received from new nonce URL".to_string(),
));
}
nonce
.as_deref()
.ok_or_else(|| Error::Client("failed to update nonce".to_string()))
}
}
/// bad nonce retry count helper
struct Retry(usize);
const fn retry() -> Retry {
Retry(0)
}
impl Retry {
fn tick(&mut self) -> Result<(), Error> {
if self.0 >= 3 {
Err(Error::Client(format!("kept getting a badNonce error!")))
} else {
self.0 += 1;
Ok(())
}
}
}
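The bounded-retry pattern above is used by every request helper in this client: call retry.tick() at the top of the loop, continue on a badNonce error (after the fresh Replay-Nonce has been stored), and give up after three attempts. A minimal, self-contained sketch of that control flow, with a stubbed request standing in for the real ACME call:

// Illustrative sketch, not part of the diff above.
#[derive(Debug)]
enum Error {
    BadNonce,
    Client(String),
}

struct Retry(usize);

impl Retry {
    fn tick(&mut self) -> Result<(), Error> {
        if self.0 >= 3 {
            Err(Error::Client("kept getting a badNonce error!".to_string()))
        } else {
            self.0 += 1;
            Ok(())
        }
    }
}

// Stubbed request: pretend the first attempt hits a stale nonce.
fn do_request(attempt: usize) -> Result<&'static str, Error> {
    if attempt == 1 { Err(Error::BadNonce) } else { Ok("ok") }
}

fn main() -> Result<(), Error> {
    let mut retry = Retry(0);
    let mut attempt = 0;
    loop {
        retry.tick()?;
        attempt += 1;
        match do_request(attempt) {
            Ok(body) => {
                println!("attempt {} succeeded: {}", attempt, body);
                return Ok(());
            }
            // In the real client a fresh nonce was already stored; just retry here.
            Err(Error::BadNonce) => continue,
            Err(err) => return Err(err),
        }
    }
}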

src/acme/mod.rs (new file, 5 lines)
View File

@ -0,0 +1,5 @@
mod client;
pub use client::AcmeClient;
pub(crate) mod plugin;
pub(crate) use plugin::get_acme_plugin;

src/acme/plugin.rs (new file, 314 lines)
View File

@ -0,0 +1,314 @@
use std::future::Future;
use std::pin::Pin;
use std::process::Stdio;
use std::sync::Arc;
use std::time::Duration;
use anyhow::{bail, format_err, Error};
use hyper::{Body, Request, Response};
use tokio::io::{AsyncBufReadExt, AsyncRead, AsyncWriteExt, BufReader};
use tokio::process::Command;
use proxmox_acme_rs::{Authorization, Challenge};
use crate::acme::AcmeClient;
use crate::api2::types::AcmeDomain;
use crate::server::WorkerTask;
use crate::config::acme::plugin::{DnsPlugin, PluginData};
const PROXMOX_ACME_SH_PATH: &str = "/usr/share/proxmox-acme/proxmox-acme";
pub(crate) fn get_acme_plugin(
plugin_data: &PluginData,
name: &str,
) -> Result<Option<Box<dyn AcmePlugin + Send + Sync + 'static>>, Error> {
let (ty, data) = match plugin_data.get(name) {
Some(plugin) => plugin,
None => return Ok(None),
};
Ok(Some(match ty.as_str() {
"dns" => {
let plugin: DnsPlugin = serde_json::from_value(data.clone())?;
Box::new(plugin)
}
"standalone" => {
// this one has no config
Box::new(StandaloneServer::default())
}
other => bail!("missing implementation for plugin type '{}'", other),
}))
}
pub(crate) trait AcmePlugin {
/// Set up everything required to trigger the validation and return the corresponding validation
/// URL.
fn setup<'fut, 'a: 'fut, 'b: 'fut, 'c: 'fut, 'd: 'fut>(
&'a mut self,
client: &'b mut AcmeClient,
authorization: &'c Authorization,
domain: &'d AcmeDomain,
task: Arc<WorkerTask>,
) -> Pin<Box<dyn Future<Output = Result<&'c str, Error>> + Send + 'fut>>;
fn teardown<'fut, 'a: 'fut, 'b: 'fut, 'c: 'fut, 'd: 'fut>(
&'a mut self,
client: &'b mut AcmeClient,
authorization: &'c Authorization,
domain: &'d AcmeDomain,
task: Arc<WorkerTask>,
) -> Pin<Box<dyn Future<Output = Result<(), Error>> + Send + 'fut>>;
}
fn extract_challenge<'a>(
authorization: &'a Authorization,
ty: &str,
) -> Result<&'a Challenge, Error> {
authorization
.challenges
.iter()
.find(|ch| ch.ty == ty)
.ok_or_else(|| format_err!("no supported challenge type ({}) found", ty))
}
async fn pipe_to_tasklog<T: AsyncRead + Unpin>(
pipe: T,
task: Arc<WorkerTask>,
) -> Result<(), std::io::Error> {
let mut pipe = BufReader::new(pipe);
let mut line = String::new();
loop {
line.clear();
match pipe.read_line(&mut line).await {
Ok(0) => return Ok(()),
Ok(_) => task.log(line.as_str()),
Err(err) => return Err(err),
}
}
}
impl DnsPlugin {
async fn action<'a>(
&self,
client: &mut AcmeClient,
authorization: &'a Authorization,
domain: &AcmeDomain,
task: Arc<WorkerTask>,
action: &str,
) -> Result<&'a str, Error> {
let challenge = extract_challenge(authorization, "dns-01")?;
let mut stdin_data = client
.dns_01_txt_value(
challenge
.token()
.ok_or_else(|| format_err!("missing token in challenge"))?,
)?
.into_bytes();
stdin_data.push(b'\n');
stdin_data.extend(self.data.as_bytes());
if stdin_data.last() != Some(&b'\n') {
stdin_data.push(b'\n');
}
let mut command = Command::new("/usr/bin/setpriv");
#[rustfmt::skip]
command.args(&[
"--reuid", "nobody",
"--regid", "nogroup",
"--clear-groups",
"--reset-env",
"--",
"/bin/bash",
PROXMOX_ACME_SH_PATH,
action,
&self.core.api,
domain.alias.as_deref().unwrap_or(&domain.domain),
]);
// We could use 1 socketpair, but tokio wraps them all in `File` internally causing `close`
// to be called separately on all of them without exception, so we need 3 pipes :-(
let mut child = command
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.spawn()?;
let mut stdin = child.stdin.take().expect("Stdio::piped()");
let stdout = child.stdout.take().expect("Stdio::piped() failed?");
let stdout = pipe_to_tasklog(stdout, Arc::clone(&task));
let stderr = child.stderr.take().expect("Stdio::piped() failed?");
let stderr = pipe_to_tasklog(stderr, Arc::clone(&task));
let stdin = async move {
stdin.write_all(&stdin_data).await?;
stdin.flush().await?;
Ok::<_, std::io::Error>(())
};
match futures::try_join!(stdin, stdout, stderr) {
Ok(((), (), ())) => (),
Err(err) => {
if let Err(err) = child.kill().await {
task.log(format!(
"failed to kill '{} {}' command: {}",
PROXMOX_ACME_SH_PATH, action, err
));
}
bail!("'{}' failed: {}", PROXMOX_ACME_SH_PATH, err);
}
}
let status = child.wait().await?;
if !status.success() {
bail!(
"'{} {}' exited with error ({})",
PROXMOX_ACME_SH_PATH,
action,
status.code().unwrap_or(-1)
);
}
Ok(&challenge.url)
}
}
impl AcmePlugin for DnsPlugin {
fn setup<'fut, 'a: 'fut, 'b: 'fut, 'c: 'fut, 'd: 'fut>(
&'a mut self,
client: &'b mut AcmeClient,
authorization: &'c Authorization,
domain: &'d AcmeDomain,
task: Arc<WorkerTask>,
) -> Pin<Box<dyn Future<Output = Result<&'c str, Error>> + Send + 'fut>> {
Box::pin(async move {
let result = self
.action(client, authorization, domain, task.clone(), "setup")
.await;
let validation_delay = self.core.validation_delay.unwrap_or(30) as u64;
if validation_delay > 0 {
task.log(format!(
"Sleeping {} seconds to wait for TXT record propagation",
validation_delay
));
tokio::time::sleep(Duration::from_secs(validation_delay)).await;
}
result
})
}
fn teardown<'fut, 'a: 'fut, 'b: 'fut, 'c: 'fut, 'd: 'fut>(
&'a mut self,
client: &'b mut AcmeClient,
authorization: &'c Authorization,
domain: &'d AcmeDomain,
task: Arc<WorkerTask>,
) -> Pin<Box<dyn Future<Output = Result<(), Error>> + Send + 'fut>> {
Box::pin(async move {
self.action(client, authorization, domain, task, "teardown")
.await
.map(drop)
})
}
}
#[derive(Default)]
struct StandaloneServer {
abort_handle: Option<futures::future::AbortHandle>,
}
// In case the "order_certificates" future gets dropped between setup & teardown, let's also cancel
// the HTTP listener on Drop:
impl Drop for StandaloneServer {
fn drop(&mut self) {
self.stop();
}
}
impl StandaloneServer {
fn stop(&mut self) {
if let Some(abort) = self.abort_handle.take() {
abort.abort();
}
}
}
async fn standalone_respond(
req: Request<Body>,
path: Arc<String>,
key_auth: Arc<String>,
) -> Result<Response<Body>, hyper::Error> {
if req.method() == hyper::Method::GET && req.uri().path() == path.as_str() {
Ok(Response::builder()
.status(http::StatusCode::OK)
.body(key_auth.as_bytes().to_vec().into())
.unwrap())
} else {
Ok(Response::builder()
.status(http::StatusCode::NOT_FOUND)
.body("Not found.".into())
.unwrap())
}
}
impl AcmePlugin for StandaloneServer {
fn setup<'fut, 'a: 'fut, 'b: 'fut, 'c: 'fut, 'd: 'fut>(
&'a mut self,
client: &'b mut AcmeClient,
authorization: &'c Authorization,
_domain: &'d AcmeDomain,
_task: Arc<WorkerTask>,
) -> Pin<Box<dyn Future<Output = Result<&'c str, Error>> + Send + 'fut>> {
use hyper::server::conn::AddrIncoming;
use hyper::service::{make_service_fn, service_fn};
Box::pin(async move {
self.stop();
let challenge = extract_challenge(authorization, "http-01")?;
let token = challenge
.token()
.ok_or_else(|| format_err!("missing token in challenge"))?;
let key_auth = Arc::new(client.key_authorization(&token)?);
let path = Arc::new(format!("/.well-known/acme-challenge/{}", token));
let service = make_service_fn(move |_| {
let path = Arc::clone(&path);
let key_auth = Arc::clone(&key_auth);
async move {
Ok::<_, hyper::Error>(service_fn(move |request| {
standalone_respond(request, Arc::clone(&path), Arc::clone(&key_auth))
}))
}
});
// `[::]:80` first, then `*:80`
let incoming = AddrIncoming::bind(&(([0u16; 8], 80).into()))
.or_else(|_| AddrIncoming::bind(&(([0u8; 4], 80).into())))?;
let server = hyper::Server::builder(incoming).serve(service);
let (future, abort) = futures::future::abortable(server);
self.abort_handle = Some(abort);
tokio::spawn(future);
Ok(challenge.url.as_str())
})
}
fn teardown<'fut, 'a: 'fut, 'b: 'fut, 'c: 'fut, 'd: 'fut>(
&'a mut self,
_client: &'b mut AcmeClient,
_authorization: &'c Authorization,
_domain: &'d AcmeDomain,
_task: Arc<WorkerTask>,
) -> Pin<Box<dyn Future<Output = Result<(), Error>> + Send + 'fut>> {
Box::pin(async move {
if let Some(abort) = self.abort_handle.take() {
abort.abort();
}
Ok(())
})
}
}
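The AcmePlugin trait above returns Pin<Box<dyn Future + Send>> with an explicit 'fut lifetime because async trait methods were not yet available; the bounds 'a: 'fut, 'b: 'fut, ... ensure the boxed future cannot outlive any of the borrowed arguments it captures. A self-contained sketch of that pattern (illustrative only, not part of this diff; assumes a tokio runtime with the macros feature, as used elsewhere in this file):

// Illustrative sketch, not part of the diff above.
use std::future::Future;
use std::pin::Pin;

trait Greeter {
    // Each borrowed argument's lifetime must outlive 'fut, the future's lifetime.
    fn greet<'fut, 'a: 'fut, 'b: 'fut>(
        &'a self,
        name: &'b str,
    ) -> Pin<Box<dyn Future<Output = String> + Send + 'fut>>;
}

struct Hello;

impl Greeter for Hello {
    fn greet<'fut, 'a: 'fut, 'b: 'fut>(
        &'a self,
        name: &'b str,
    ) -> Pin<Box<dyn Future<Output = String> + Send + 'fut>> {
        // `async move` captures the borrow; boxing erases the concrete future type.
        Box::pin(async move { format!("hello, {}", name) })
    }
}

#[tokio::main]
async fn main() {
    let greeter = Hello;
    println!("{}", greeter.greet("acme").await);
}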

View File

@ -12,7 +12,7 @@ pub mod version;
pub mod ping;
pub mod pull;
pub mod tape;
mod helpers;
pub mod helpers;
use proxmox::api::router::SubdirMap;
use proxmox::api::Router;

View File

@ -18,8 +18,7 @@ use crate::api2::types::*;
description: "User configuration (without password).",
properties: {
realm: {
description: "Realm ID.",
type: String,
schema: REALM_ID_SCHEMA,
},
comment: {
schema: SINGLE_LINE_COMMENT_SCHEMA,

View File

@ -181,7 +181,7 @@ fn get_tfa_entry(userid: Userid, id: String) -> Result<TypedTfaInfo, Error> {
if let Some(user_data) = crate::config::tfa::read()?.users.remove(&userid) {
match {
// scope to prevent the temprary iter from borrowing across the whole match
// scope to prevent the temporary iter from borrowing across the whole match
let entry = tfa_id_iter(&user_data).find(|(_ty, _index, entry_id)| id == *entry_id);
entry.map(|(ty, index, _)| (ty, index))
} {
@ -259,7 +259,7 @@ fn delete_tfa(
.ok_or_else(|| http_err!(NOT_FOUND, "no such entry: {}/{}", userid, id))?;
match {
// scope to prevent the temprary iter from borrowing across the whole match
// scope to prevent the temporary iter from borrowing across the whole match
let entry = tfa_id_iter(&user_data).find(|(_, _, entry_id)| id == *entry_id);
entry.map(|(ty, index, _)| (ty, index))
} {

View File

@ -477,6 +477,17 @@ pub fn delete_user(userid: Userid, digest: Option<String>) -> Result<(), Error>
user::save_config(&config)?;
let authenticator = crate::auth::lookup_authenticator(userid.realm())?;
match authenticator.remove_password(userid.name()) {
Ok(()) => {},
Err(err) => {
eprintln!(
"error removing password after deleting user {:?}: {}",
userid, err
);
}
}
match crate::config::tfa::read().and_then(|mut cfg| {
let _: bool = cfg.remove_user(&userid);
crate::config::tfa::write(&cfg)

View File

@ -219,6 +219,48 @@ pub fn list_groups(
Ok(group_info)
}
#[api(
input: {
properties: {
store: {
schema: DATASTORE_SCHEMA,
},
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(
&["datastore", "{store}"],
PRIV_DATASTORE_MODIFY | PRIV_DATASTORE_PRUNE,
true),
},
)]
/// Delete backup group including all snapshots.
pub fn delete_group(
store: String,
backup_type: String,
backup_id: String,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let group = BackupGroup::new(backup_type, backup_id);
let datastore = DataStore::lookup_datastore(&store)?;
check_priv_or_backup_owner(&datastore, &group, &auth_id, PRIV_DATASTORE_MODIFY)?;
datastore.remove_backup_group(&group)?;
Ok(Value::Null)
}
#[api(
input: {
properties: {
@ -1140,7 +1182,7 @@ pub fn download_file_decoded(
manifest.verify_file(&file_name, &csum, size)?;
let chunk_reader = LocalChunkReader::new(datastore, None, CryptMode::None);
let reader = AsyncIndexReader::new(index, chunk_reader);
let reader = CachedChunkReader::new(chunk_reader, index, 1).seekable();
Body::wrap_stream(AsyncReaderStream::new(reader)
.map_err(move |err| {
eprintln!("error during streaming of '{:?}' - {}", path, err);
@ -1155,7 +1197,7 @@ pub fn download_file_decoded(
manifest.verify_file(&file_name, &csum, size)?;
let chunk_reader = LocalChunkReader::new(datastore, None, CryptMode::None);
let reader = AsyncIndexReader::new(index, chunk_reader);
let reader = CachedChunkReader::new(chunk_reader, index, 1).seekable();
Body::wrap_stream(AsyncReaderStream::with_buffer_size(reader, 4*1024*1024)
.map_err(move |err| {
eprintln!("error during streaming of '{:?}' - {}", path, err);
@ -1385,7 +1427,7 @@ pub fn pxar_file_download(
let mut split = components.splitn(2, |c| *c == b'/');
let pxar_name = std::str::from_utf8(split.next().unwrap())?;
let file_path = split.next().ok_or_else(|| format_err!("filepath looks strange '{}'", filepath))?;
let file_path = split.next().unwrap_or(b"/");
let (manifest, files) = read_backup_index(&datastore, &backup_dir)?;
for file in files {
if file.filename == pxar_name && file.crypt_mode == Some(CryptMode::Encrypt) {
@ -1722,6 +1764,7 @@ const DATASTORE_INFO_SUBDIRS: SubdirMap = &[
"groups",
&Router::new()
.get(&API_METHOD_LIST_GROUPS)
.delete(&API_METHOD_DELETE_GROUP)
),
(
"notes",

View File

@ -1,4 +1,4 @@
//! Datastore Syncronization Job Management
//! Datastore Synchronization Job Management
use anyhow::{bail, format_err, Error};
use serde_json::Value;

View File

@ -4,6 +4,7 @@ use proxmox::api::router::{Router, SubdirMap};
use proxmox::list_subdirs_api_method;
pub mod access;
pub mod acme;
pub mod datastore;
pub mod remote;
pub mod sync;
@ -16,6 +17,7 @@ pub mod tape_backup_job;
const SUBDIRS: SubdirMap = &[
("access", &access::ROUTER),
("acme", &acme::ROUTER),
("changer", &changer::ROUTER),
("datastore", &datastore::ROUTER),
("drive", &drive::ROUTER),

src/api2/config/acme.rs (new file, 727 lines)
View File

@ -0,0 +1,727 @@
use std::fs;
use std::path::Path;
use std::sync::{Arc, Mutex};
use std::time::SystemTime;
use anyhow::{bail, format_err, Error};
use lazy_static::lazy_static;
use serde::{Deserialize, Serialize};
use serde_json::{json, Value};
use proxmox::api::router::SubdirMap;
use proxmox::api::schema::Updatable;
use proxmox::api::{api, Permission, Router, RpcEnvironment};
use proxmox::http_bail;
use proxmox::list_subdirs_api_method;
use proxmox_acme_rs::account::AccountData as AcmeAccountData;
use proxmox_acme_rs::Account;
use crate::acme::AcmeClient;
use crate::api2::types::{AcmeAccountName, AcmeChallengeSchema, Authid, KnownAcmeDirectory};
use crate::config::acl::PRIV_SYS_MODIFY;
use crate::config::acme::plugin::{
DnsPlugin, DnsPluginCore, DnsPluginCoreUpdater, PLUGIN_ID_SCHEMA,
};
use crate::server::WorkerTask;
use crate::tools::ControlFlow;
pub(crate) const ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))
.subdirs(SUBDIRS);
const SUBDIRS: SubdirMap = &[
(
"account",
&Router::new()
.get(&API_METHOD_LIST_ACCOUNTS)
.post(&API_METHOD_REGISTER_ACCOUNT)
.match_all("name", &ACCOUNT_ITEM_ROUTER),
),
(
"challenge-schema",
&Router::new().get(&API_METHOD_GET_CHALLENGE_SCHEMA),
),
(
"directories",
&Router::new().get(&API_METHOD_GET_DIRECTORIES),
),
(
"plugins",
&Router::new()
.get(&API_METHOD_LIST_PLUGINS)
.post(&API_METHOD_ADD_PLUGIN)
.match_all("id", &PLUGIN_ITEM_ROUTER),
),
("tos", &Router::new().get(&API_METHOD_GET_TOS)),
];
const ACCOUNT_ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_GET_ACCOUNT)
.put(&API_METHOD_UPDATE_ACCOUNT)
.delete(&API_METHOD_DEACTIVATE_ACCOUNT);
const PLUGIN_ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_GET_PLUGIN)
.put(&API_METHOD_UPDATE_PLUGIN)
.delete(&API_METHOD_DELETE_PLUGIN);
#[api(
properties: {
name: { type: AcmeAccountName },
},
)]
/// An ACME Account entry.
///
/// Currently only contains a 'name' property.
#[derive(Serialize)]
pub struct AccountEntry {
name: AcmeAccountName,
}
#[api(
access: {
permission: &Permission::Privilege(&["system", "certificates"], PRIV_SYS_MODIFY, false),
},
returns: {
type: Array,
items: { type: AccountEntry },
description: "List of ACME accounts.",
},
protected: true,
)]
/// List ACME accounts.
pub fn list_accounts() -> Result<Vec<AccountEntry>, Error> {
let mut entries = Vec::new();
crate::config::acme::foreach_acme_account(|name| {
entries.push(AccountEntry { name });
ControlFlow::Continue(())
})?;
Ok(entries)
}
#[api(
properties: {
account: { type: Object, properties: {}, additional_properties: true },
tos: {
type: String,
optional: true,
},
},
)]
/// ACME Account information.
///
/// This is what we return via the API.
#[derive(Serialize)]
pub struct AccountInfo {
/// Raw account data.
account: AcmeAccountData,
/// The ACME directory URL the account was created at.
directory: String,
/// The account's own URL within the ACME directory.
location: String,
/// The ToS URL, if the user agreed to one.
#[serde(skip_serializing_if = "Option::is_none")]
tos: Option<String>,
}
#[api(
input: {
properties: {
name: { type: AcmeAccountName },
},
},
access: {
permission: &Permission::Privilege(&["system", "certificates"], PRIV_SYS_MODIFY, false),
},
returns: { type: AccountInfo },
protected: true,
)]
/// Return existing ACME account information.
pub async fn get_account(name: AcmeAccountName) -> Result<AccountInfo, Error> {
let client = AcmeClient::load(&name).await?;
let account = client.account()?;
Ok(AccountInfo {
location: account.location.clone(),
tos: client.tos().map(str::to_owned),
directory: client.directory_url().to_owned(),
account: AcmeAccountData {
only_return_existing: false, // don't actually write this out in case it's set
..account.data.clone()
},
})
}
fn account_contact_from_string(s: &str) -> Vec<String> {
s.split(&[' ', ';', ',', '\0'][..])
.map(|s| format!("mailto:{}", s))
.collect()
}
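For illustration, the helper above accepts contact lists separated by spaces, semicolons, commas or NUL bytes and prefixes every entry with mailto:. A small self-contained check of that behavior (a sketch, not part of this diff):

// Illustrative sketch, not part of the diff above.
fn account_contact_from_string(s: &str) -> Vec<String> {
    s.split(&[' ', ';', ',', '\0'][..])
        .map(|s| format!("mailto:{}", s))
        .collect()
}

fn main() {
    assert_eq!(
        account_contact_from_string("admin@example.com,ops@example.com"),
        vec![
            "mailto:admin@example.com".to_string(),
            "mailto:ops@example.com".to_string(),
        ]
    );
    println!("contact parsing ok");
}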
#[api(
input: {
properties: {
name: {
type: AcmeAccountName,
optional: true,
},
contact: {
description: "List of email addresses.",
},
tos_url: {
description: "URL of CA TermsOfService - setting this indicates agreement.",
optional: true,
},
directory: {
type: String,
description: "The ACME Directory.",
optional: true,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "certificates"], PRIV_SYS_MODIFY, false),
},
protected: true,
)]
/// Register an ACME account.
fn register_account(
name: Option<AcmeAccountName>,
// Todo: email & email-list schema
contact: String,
tos_url: Option<String>,
directory: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let name = name.unwrap_or_else(|| unsafe {
AcmeAccountName::from_string_unchecked("default".to_string())
});
if Path::new(&crate::config::acme::account_path(&name)).exists() {
http_bail!(BAD_REQUEST, "account {} already exists", name);
}
let directory = directory.unwrap_or_else(|| {
crate::config::acme::DEFAULT_ACME_DIRECTORY_ENTRY
.url
.to_owned()
});
WorkerTask::spawn(
"acme-register",
Some(name.to_string()),
auth_id,
true,
move |worker| async move {
let mut client = AcmeClient::new(directory);
worker.log(format!("Registering ACME account '{}'...", &name));
let account =
do_register_account(&mut client, &name, tos_url.is_some(), contact, None).await?;
worker.log(format!(
"Registration successful, account URL: {}",
account.location
));
Ok(())
},
)
}
pub async fn do_register_account<'a>(
client: &'a mut AcmeClient,
name: &AcmeAccountName,
agree_to_tos: bool,
contact: String,
rsa_bits: Option<u32>,
) -> Result<&'a Account, Error> {
let contact = account_contact_from_string(&contact);
Ok(client
.new_account(name, agree_to_tos, contact, rsa_bits)
.await?)
}
#[api(
input: {
properties: {
name: { type: AcmeAccountName },
contact: {
description: "List of email addresses.",
optional: true,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "certificates"], PRIV_SYS_MODIFY, false),
},
protected: true,
)]
/// Update an ACME account.
pub fn update_account(
name: AcmeAccountName,
// Todo: email & email-list schema
contact: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
WorkerTask::spawn(
"acme-update",
Some(name.to_string()),
auth_id,
true,
move |_worker| async move {
let data = match contact {
Some(data) => json!({
"contact": account_contact_from_string(&data),
}),
None => json!({}),
};
AcmeClient::load(&name).await?.update_account(&data).await?;
Ok(())
},
)
}
#[api(
input: {
properties: {
name: { type: AcmeAccountName },
force: {
description:
"Delete account data even if the server refuses to deactivate the account.",
optional: true,
default: false,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "certificates"], PRIV_SYS_MODIFY, false),
},
protected: true,
)]
/// Deactivate an ACME account.
pub fn deactivate_account(
name: AcmeAccountName,
force: bool,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
WorkerTask::spawn(
"acme-deactivate",
Some(name.to_string()),
auth_id,
true,
move |worker| async move {
match AcmeClient::load(&name)
.await?
.update_account(&json!({"status": "deactivated"}))
.await
{
Ok(_account) => (),
Err(err) if !force => return Err(err),
Err(err) => {
worker.warn(format!(
"error deactivating account {}, proceedeing anyway - {}",
name, err,
));
}
}
crate::config::acme::mark_account_deactivated(&name)?;
Ok(())
},
)
}
#[api(
input: {
properties: {
directory: {
type: String,
description: "The ACME Directory.",
optional: true,
},
},
},
access: {
permission: &Permission::Anybody,
},
returns: {
type: String,
optional: true,
description: "The ACME Directory's ToS URL, if any.",
},
)]
/// Get the Terms of Service URL for an ACME directory.
async fn get_tos(directory: Option<String>) -> Result<Option<String>, Error> {
let directory = directory.unwrap_or_else(|| {
crate::config::acme::DEFAULT_ACME_DIRECTORY_ENTRY
.url
.to_owned()
});
Ok(AcmeClient::new(directory)
.terms_of_service_url()
.await?
.map(str::to_owned))
}
#[api(
access: {
permission: &Permission::Anybody,
},
returns: {
description: "List of known ACME directories.",
type: Array,
items: { type: KnownAcmeDirectory },
},
)]
/// Get named known ACME directory endpoints.
fn get_directories() -> Result<&'static [KnownAcmeDirectory], Error> {
Ok(crate::config::acme::KNOWN_ACME_DIRECTORIES)
}
/// Wrapper for efficient Arc use when returning the ACME challenge-plugin schema for serializing
struct ChallengeSchemaWrapper {
inner: Arc<Vec<AcmeChallengeSchema>>,
}
impl Serialize for ChallengeSchemaWrapper {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
self.inner.serialize(serializer)
}
}
fn get_cached_challenge_schemas() -> Result<ChallengeSchemaWrapper, Error> {
lazy_static! {
static ref CACHE: Mutex<Option<(Arc<Vec<AcmeChallengeSchema>>, SystemTime)>> =
Mutex::new(None);
}
// the actual loading code
let mut last = CACHE.lock().unwrap();
let actual_mtime = fs::metadata(crate::config::acme::ACME_DNS_SCHEMA_FN)?.modified()?;
let schema = match &*last {
Some((schema, cached_mtime)) if *cached_mtime >= actual_mtime => schema.clone(),
_ => {
let new_schema = Arc::new(crate::config::acme::load_dns_challenge_schema()?);
*last = Some((Arc::clone(&new_schema), actual_mtime));
new_schema
}
};
Ok(ChallengeSchemaWrapper { inner: schema })
}
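The caching above keys on the schema file's modification time: the parsed schema is reused as long as the file on disk is unchanged, otherwise it is reloaded and the cache entry replaced. A self-contained sketch of the same pattern using only the standard library (OnceLock instead of lazy_static, and raw file contents instead of the parsed schema; both are simplifications for brevity):

// Illustrative sketch, not part of the diff above. Any readable file path works.
use std::fs;
use std::io;
use std::sync::{Arc, Mutex, OnceLock};
use std::time::SystemTime;

fn cached_contents(path: &str) -> io::Result<Arc<String>> {
    static CACHE: OnceLock<Mutex<Option<(Arc<String>, SystemTime)>>> = OnceLock::new();
    let cache = CACHE.get_or_init(|| Mutex::new(None));

    let actual_mtime = fs::metadata(path)?.modified()?;
    let mut last = cache.lock().unwrap();
    let value = match &*last {
        // Reuse the cached value while the file is unchanged on disk.
        Some((value, cached_mtime)) if *cached_mtime >= actual_mtime => Arc::clone(value),
        _ => {
            let fresh = Arc::new(fs::read_to_string(path)?);
            *last = Some((Arc::clone(&fresh), actual_mtime));
            fresh
        }
    };
    Ok(value)
}

fn main() -> io::Result<()> {
    println!("{} bytes (cached on repeat calls)", cached_contents("/etc/hostname")?.len());
    Ok(())
}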
#[api(
access: {
permission: &Permission::Anybody,
},
returns: {
description: "ACME Challenge Plugin Shema.",
type: Array,
items: { type: AcmeChallengeSchema },
},
)]
/// Get the ACME challenge plugin schemas.
fn get_challenge_schema() -> Result<ChallengeSchemaWrapper, Error> {
get_cached_challenge_schemas()
}
#[api]
#[derive(Default, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
/// The API's format is inherited from PVE/PMG:
pub struct PluginConfig {
/// Plugin ID.
plugin: String,
/// Plugin type.
#[serde(rename = "type")]
ty: String,
/// DNS Api name.
api: Option<String>,
/// Plugin configuration data.
data: Option<String>,
/// Extra delay in seconds to wait before requesting validation.
///
/// Allows coping with long TTLs of DNS records.
#[serde(skip_serializing_if = "Option::is_none", default)]
validation_delay: Option<u32>,
/// Flag to disable the config.
#[serde(skip_serializing_if = "Option::is_none", default)]
disable: Option<bool>,
}
// See PMG/PVE's $modify_cfg_for_api sub
fn modify_cfg_for_api(id: &str, ty: &str, data: &Value) -> PluginConfig {
let mut entry = data.clone();
let obj = entry.as_object_mut().unwrap();
obj.remove("id");
obj.insert("plugin".to_string(), Value::String(id.to_owned()));
obj.insert("type".to_string(), Value::String(ty.to_owned()));
// FIXME: This needs to go once the `Updater` is fixed.
// None of these should be able to fail unless the user changed the files by hand, in which
// case we leave the unmodified string in the Value for now. This will be handled with an error
// later.
if let Some(Value::String(ref mut data)) = obj.get_mut("data") {
if let Ok(new) = base64::decode_config(&data, base64::URL_SAFE_NO_PAD) {
if let Ok(utf8) = String::from_utf8(new) {
*data = utf8;
}
}
}
// PVE/PMG do this explicitly for ACME plugins...
// obj.insert("digest".to_string(), Value::String(digest.clone()));
serde_json::from_value(entry).unwrap_or_else(|_| PluginConfig {
plugin: "*Error*".to_string(),
ty: "*Error*".to_string(),
..Default::default()
})
}
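The data field above is decoded as URL-safe base64 without padding before being returned over the API. A short round-trip sketch with the base64 0.13-style encode_config/decode_config API used elsewhere in this diff (the plugin data value is a made-up placeholder):

// Illustrative sketch, not part of the diff above; needs the `base64` crate (0.13).
fn main() {
    let plugin_data = "API_KEY=example\n";
    let stored = base64::encode_config(plugin_data, base64::URL_SAFE_NO_PAD);
    let decoded = base64::decode_config(&stored, base64::URL_SAFE_NO_PAD).unwrap();
    assert_eq!(String::from_utf8(decoded).unwrap(), plugin_data);
    println!("stored form: {}", stored);
}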
#[api(
access: {
permission: &Permission::Privilege(&["system", "certificates"], PRIV_SYS_MODIFY, false),
},
protected: true,
returns: {
type: Array,
description: "List of ACME plugin configurations.",
items: { type: PluginConfig },
},
)]
/// List ACME challenge plugins.
pub fn list_plugins(mut rpcenv: &mut dyn RpcEnvironment) -> Result<Vec<PluginConfig>, Error> {
use crate::config::acme::plugin;
let (plugins, digest) = plugin::config()?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(plugins
.iter()
.map(|(id, (ty, data))| modify_cfg_for_api(&id, &ty, data))
.collect())
}
#[api(
input: {
properties: {
id: { schema: PLUGIN_ID_SCHEMA },
},
},
access: {
permission: &Permission::Privilege(&["system", "certificates"], PRIV_SYS_MODIFY, false),
},
protected: true,
returns: { type: PluginConfig },
)]
/// Get a single ACME challenge plugin configuration.
pub fn get_plugin(id: String, mut rpcenv: &mut dyn RpcEnvironment) -> Result<PluginConfig, Error> {
use crate::config::acme::plugin;
let (plugins, digest) = plugin::config()?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
match plugins.get(&id) {
Some((ty, data)) => Ok(modify_cfg_for_api(&id, &ty, &data)),
None => http_bail!(NOT_FOUND, "no such plugin"),
}
}
// Currently we only have "the" standalone plugin and DNS plugins so we can just flatten a
// DnsPluginUpdater:
//
// FIXME: The 'id' parameter should not be "optional" in the schema.
#[api(
input: {
properties: {
type: {
type: String,
description: "The ACME challenge plugin type.",
},
core: {
type: DnsPluginCoreUpdater,
flatten: true,
},
data: {
type: String,
// This is different in the API!
description: "DNS plugin data (base64 encoded with padding).",
},
},
},
access: {
permission: &Permission::Privilege(&["system", "certificates"], PRIV_SYS_MODIFY, false),
},
protected: true,
)]
/// Add ACME plugin configuration.
pub fn add_plugin(r#type: String, core: DnsPluginCoreUpdater, data: String) -> Result<(), Error> {
use crate::config::acme::plugin;
// Currently we only support DNS plugins and the standalone plugin is "fixed":
if r#type != "dns" {
bail!("invalid ACME plugin type: {:?}", r#type);
}
let data = String::from_utf8(base64::decode(&data)?)
.map_err(|_| format_err!("data must be valid UTF-8"))?;
//core.api_fixup()?;
// FIXME: Solve the Updater with non-optional fields thing...
let id = core
.id
.clone()
.ok_or_else(|| format_err!("missing required 'id' parameter"))?;
let _lock = plugin::lock()?;
let (mut plugins, _digest) = plugin::config()?;
if plugins.contains_key(&id) {
bail!("ACME plugin ID {:?} already exists", id);
}
let plugin = serde_json::to_value(DnsPlugin {
core: DnsPluginCore::try_build_from(core)?,
data,
})?;
plugins.insert(id, r#type, plugin);
plugin::save_config(&plugins)?;
Ok(())
}
#[api(
input: {
properties: {
id: { schema: PLUGIN_ID_SCHEMA },
},
},
access: {
permission: &Permission::Privilege(&["system", "certificates"], PRIV_SYS_MODIFY, false),
},
protected: true,
)]
/// Delete an ACME plugin configuration.
pub fn delete_plugin(id: String) -> Result<(), Error> {
use crate::config::acme::plugin;
let _lock = plugin::lock()?;
let (mut plugins, _digest) = plugin::config()?;
if plugins.remove(&id).is_none() {
http_bail!(NOT_FOUND, "no such plugin");
}
plugin::save_config(&plugins)?;
Ok(())
}
#[api(
input: {
properties: {
core_update: {
type: DnsPluginCoreUpdater,
flatten: true,
},
data: {
type: String,
optional: true,
// This is different in the API!
description: "DNS plugin data (base64 encoded with padding).",
},
digest: {
description: "Digest to protect against concurrent updates",
optional: true,
},
delete: {
description: "Options to remove from the configuration",
optional: true,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "certificates"], PRIV_SYS_MODIFY, false),
},
protected: true,
)]
/// Update an ACME plugin configuration.
pub fn update_plugin(
core_update: DnsPluginCoreUpdater,
data: Option<String>,
delete: Option<String>,
digest: Option<String>,
) -> Result<(), Error> {
use crate::config::acme::plugin;
let data = data
.as_deref()
.map(base64::decode)
.transpose()?
.map(String::from_utf8)
.transpose()
.map_err(|_| format_err!("data must be valid UTF-8"))?;
//core_update.api_fixup()?;
// unwrap: the id is matched by this method's API path
let id = core_update.id.clone().unwrap();
let delete: Vec<&str> = delete
.as_deref()
.unwrap_or("")
.split(&[' ', ',', ';', '\0'][..])
.collect();
let _lock = plugin::lock()?;
let (mut plugins, expected_digest) = plugin::config()?;
if let Some(digest) = digest {
let digest = proxmox::tools::hex_to_digest(&digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
match plugins.get_mut(&id) {
Some((ty, ref mut entry)) => {
if ty != "dns" {
bail!("cannot update plugin of type {:?}", ty);
}
let mut plugin: DnsPlugin = serde_json::from_value(entry.clone())?;
plugin.core.update_from(core_update, &delete)?;
if let Some(data) = data {
plugin.data = data;
}
*entry = serde_json::to_value(plugin)?;
}
None => http_bail!(NOT_FOUND, "no such plugin"),
}
plugin::save_config(&plugins)?;
Ok(())
}

View File

@ -27,7 +27,7 @@ use crate::{
SLOT_ARRAY_SCHEMA,
EXPORT_SLOT_LIST_SCHEMA,
ScsiTapeChanger,
LinuxTapeDrive,
LtoTapeDrive,
},
tape::{
linux_tape_changer_list,
@ -303,7 +303,7 @@ pub fn delete_changer(name: String, _param: Value) -> Result<(), Error> {
None => bail!("Delete changer '{}' failed - no such entry", name),
}
let drive_list: Vec<LinuxTapeDrive> = config.convert_to_typed_array("linux")?;
let drive_list: Vec<LtoTapeDrive> = config.convert_to_typed_array("lto")?;
for drive in drive_list {
if let Some(changer) = drive.changer {
if changer == name {

View File

@ -5,15 +5,15 @@ use serde_json::Value;
use ::serde::{Deserialize, Serialize};
use proxmox::api::{api, Router, RpcEnvironment, Permission};
use proxmox::api::section_config::SectionConfigData;
use proxmox::api::schema::parse_property_string;
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::backup::*;
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::datastore::{self, DataStoreConfig, DIR_NAME_SCHEMA};
use crate::config::acl::{PRIV_DATASTORE_ALLOCATE, PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_MODIFY};
use crate::server::jobstate;
use crate::server::{jobstate, WorkerTask};
#[api(
input: {
@ -50,6 +50,26 @@ pub fn list_datastores(
Ok(list.into_iter().filter(filter_by_privs).collect())
}
pub(crate) fn do_create_datastore(
_lock: std::fs::File,
mut config: SectionConfigData,
datastore: DataStoreConfig,
worker: Option<&dyn crate::task::TaskState>,
) -> Result<(), Error> {
let path: PathBuf = datastore.path.clone().into();
let backup_user = crate::backup::backup_user()?;
let _store = ChunkStore::create(&datastore.name, path, backup_user.uid, backup_user.gid, worker)?;
config.set_data(&datastore.name, "datastore", &datastore)?;
datastore::save_config(&config)?;
jobstate::create_state_file("prune", &datastore.name)?;
jobstate::create_state_file("garbage_collection", &datastore.name)?;
Ok(())
}
// fixme: impl. const fn get_object_schema(datastore::DataStoreConfig::API_SCHEMA),
// but this need support for match inside const fn
@ -116,31 +136,30 @@ pub fn list_datastores(
},
)]
/// Create new datastore config.
pub fn create_datastore(param: Value) -> Result<(), Error> {
pub fn create_datastore(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let lock = datastore::lock_config()?;
let datastore: datastore::DataStoreConfig = serde_json::from_value(param)?;
let (mut config, _digest) = datastore::config()?;
let (config, _digest) = datastore::config()?;
if config.sections.get(&datastore.name).is_some() {
bail!("datastore '{}' already exists.", datastore.name);
}
let path: PathBuf = datastore.path.clone().into();
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let backup_user = crate::backup::backup_user()?;
let _store = ChunkStore::create(&datastore.name, path, backup_user.uid, backup_user.gid)?;
config.set_data(&datastore.name, "datastore", &datastore)?;
datastore::save_config(&config)?;
jobstate::create_state_file("prune", &datastore.name)?;
jobstate::create_state_file("garbage_collection", &datastore.name)?;
Ok(())
WorkerTask::new_thread(
"create-datastore",
Some(datastore.name.to_string()),
auth_id,
false,
move |worker| do_create_datastore(lock, config, datastore, Some(&worker)),
)
}
#[api(
@ -296,7 +315,7 @@ pub fn update_datastore(
digest: Option<String>,
) -> Result<(), Error> {
let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let _lock = datastore::lock_config()?;
// pass/compare digest
let (mut config, expected_digest) = datastore::config()?;
@ -375,11 +394,11 @@ pub fn update_datastore(
// we want to reset the statefiles, to avoid an immediate action in some cases
// (e.g. going from monthly to weekly in the second week of the month)
if gc_schedule_changed {
jobstate::create_state_file("garbage_collection", &name)?;
jobstate::update_job_last_run_time("garbage_collection", &name)?;
}
if prune_schedule_changed {
jobstate::create_state_file("prune", &name)?;
jobstate::update_job_last_run_time("prune", &name)?;
}
Ok(())
@ -403,9 +422,9 @@ pub fn update_datastore(
},
)]
/// Remove a datastore configuration.
pub fn delete_datastore(name: String, digest: Option<String>) -> Result<(), Error> {
pub async fn delete_datastore(name: String, digest: Option<String>) -> Result<(), Error> {
let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let _lock = datastore::lock_config()?;
let (mut config, expected_digest) = datastore::config()?;
@ -425,6 +444,8 @@ pub fn delete_datastore(name: String, digest: Option<String>) -> Result<(), Erro
let _ = jobstate::remove_state_file("prune", &name);
let _ = jobstate::remove_state_file("garbage_collection", &name);
crate::server::notify_datastore_removed().await?;
Ok(())
}

View File

@ -19,12 +19,12 @@ use crate::{
DRIVE_NAME_SCHEMA,
CHANGER_NAME_SCHEMA,
CHANGER_DRIVENUM_SCHEMA,
LINUX_DRIVE_PATH_SCHEMA,
LinuxTapeDrive,
LTO_DRIVE_PATH_SCHEMA,
LtoTapeDrive,
ScsiTapeChanger,
},
tape::{
linux_tape_device_list,
lto_tape_device_list,
check_drive_path,
},
};
@ -37,7 +37,7 @@ use crate::{
schema: DRIVE_NAME_SCHEMA,
},
path: {
schema: LINUX_DRIVE_PATH_SCHEMA,
schema: LTO_DRIVE_PATH_SCHEMA,
},
changer: {
schema: CHANGER_NAME_SCHEMA,
@ -60,13 +60,13 @@ pub fn create_drive(param: Value) -> Result<(), Error> {
let (mut config, _digest) = config::drive::config()?;
let item: LinuxTapeDrive = serde_json::from_value(param)?;
let item: LtoTapeDrive = serde_json::from_value(param)?;
let linux_drives = linux_tape_device_list();
let lto_drives = lto_tape_device_list();
check_drive_path(&linux_drives, &item.path)?;
check_drive_path(&lto_drives, &item.path)?;
let existing: Vec<LinuxTapeDrive> = config.convert_to_typed_array("linux")?;
let existing: Vec<LtoTapeDrive> = config.convert_to_typed_array("lto")?;
for drive in existing {
if drive.name == item.name {
@ -77,7 +77,7 @@ pub fn create_drive(param: Value) -> Result<(), Error> {
}
}
config.set_data(&item.name, "linux", &item)?;
config.set_data(&item.name, "lto", &item)?;
config::drive::save_config(&config)?;
@ -93,7 +93,7 @@ pub fn create_drive(param: Value) -> Result<(), Error> {
},
},
returns: {
type: LinuxTapeDrive,
type: LtoTapeDrive,
},
access: {
permission: &Permission::Privilege(&["tape", "device", "{name}"], PRIV_TAPE_AUDIT, false),
@ -104,11 +104,11 @@ pub fn get_config(
name: String,
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<LinuxTapeDrive, Error> {
) -> Result<LtoTapeDrive, Error> {
let (config, digest) = config::drive::config()?;
let data: LinuxTapeDrive = config.lookup("linux", &name)?;
let data: LtoTapeDrive = config.lookup("lto", &name)?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
@ -123,7 +123,7 @@ pub fn get_config(
description: "The list of configured drives (with config digest).",
type: Array,
items: {
type: LinuxTapeDrive,
type: LtoTapeDrive,
},
},
access: {
@ -135,13 +135,13 @@ pub fn get_config(
pub fn list_drives(
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<LinuxTapeDrive>, Error> {
) -> Result<Vec<LtoTapeDrive>, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let (config, digest) = config::drive::config()?;
let drive_list: Vec<LinuxTapeDrive> = config.convert_to_typed_array("linux")?;
let drive_list: Vec<LtoTapeDrive> = config.convert_to_typed_array("lto")?;
let drive_list = drive_list
.into_iter()
@ -176,7 +176,7 @@ pub enum DeletableProperty {
schema: DRIVE_NAME_SCHEMA,
},
path: {
schema: LINUX_DRIVE_PATH_SCHEMA,
schema: LTO_DRIVE_PATH_SCHEMA,
optional: true,
},
changer: {
@ -225,7 +225,7 @@ pub fn update_drive(
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let mut data: LinuxTapeDrive = config.lookup("linux", &name)?;
let mut data: LtoTapeDrive = config.lookup("lto", &name)?;
if let Some(delete) = delete {
for delete_prop in delete {
@ -240,8 +240,8 @@ pub fn update_drive(
}
if let Some(path) = path {
let linux_drives = linux_tape_device_list();
check_drive_path(&linux_drives, &path)?;
let lto_drives = lto_tape_device_list();
check_drive_path(&lto_drives, &path)?;
data.path = path;
}
@ -261,7 +261,7 @@ pub fn update_drive(
}
}
config.set_data(&name, "linux", &data)?;
config.set_data(&name, "lto", &data)?;
config::drive::save_config(&config)?;
@ -290,8 +290,8 @@ pub fn delete_drive(name: String, _param: Value) -> Result<(), Error> {
match config.sections.get(&name) {
Some((section_type, _)) => {
if section_type != "linux" {
bail!("Entry '{}' exists, but is not a linux tape drive", name);
if section_type != "lto" {
bail!("Entry '{}' exists, but is not a lto tape drive", name);
}
config.sections.remove(&name);
},

View File

@ -333,6 +333,7 @@ pub fn update_sync_job(
if let Some(remote_store) = remote_store { data.remote_store = remote_store; }
if let Some(owner) = owner { data.owner = Some(owner); }
let schedule_changed = data.schedule != schedule;
if schedule.is_some() { data.schedule = schedule; }
if remove_vanished.is_some() { data.remove_vanished = remove_vanished; }
@ -344,6 +345,10 @@ pub fn update_sync_job(
sync::save_config(&config)?;
if schedule_changed {
crate::server::jobstate::update_job_last_run_time("syncjob", &id)?;
}
Ok(())
}

View File

@ -266,6 +266,7 @@ pub fn update_tape_backup_job(
if latest_only.is_some() { data.setup.latest_only = latest_only; }
if notify_user.is_some() { data.setup.notify_user = notify_user; }
let schedule_changed = data.schedule != schedule;
if schedule.is_some() { data.schedule = schedule; }
if let Some(comment) = comment {
@ -281,6 +282,10 @@ pub fn update_tape_backup_job(
config::tape_job::save_config(&config)?;
if schedule_changed {
crate::server::jobstate::update_job_last_run_time("tape-backup-job", &id)?;
}
Ok(())
}

View File

@ -119,7 +119,7 @@ pub fn change_passphrase(
let kdf = kdf.unwrap_or_default();
if let Kdf::None = kdf {
bail!("Please specify a key derivation funktion (none is not allowed here).");
bail!("Please specify a key derivation function (none is not allowed here).");
}
let _lock = open_file_locked(
@ -187,7 +187,7 @@ pub fn create_key(
let kdf = kdf.unwrap_or_default();
if let Kdf::None = kdf {
bail!("Please specify a key derivation funktion (none is not allowed here).");
bail!("Please specify a key derivation function (none is not allowed here).");
}
let (key, mut key_config) = KeyConfig::new(password.as_bytes(), kdf)?;

View File

@ -274,12 +274,17 @@ pub fn update_verification_job(
if ignore_verified.is_some() { data.ignore_verified = ignore_verified; }
if outdated_after.is_some() { data.outdated_after = outdated_after; }
let schedule_changed = data.schedule != schedule;
if schedule.is_some() { data.schedule = schedule; }
config.set_data(&id, "verification", &data)?;
verify::save_config(&config)?;
if schedule_changed {
crate::server::jobstate::update_job_last_run_time("verificationjob", &id)?;
}
Ok(())
}

View File

@ -48,7 +48,7 @@ pub fn list_dir_content<R: Read + Seek>(
let mut components = path.clone();
components.push(b'/');
components.extend(&direntry.name);
let mut entry = ArchiveEntry::new(&components, &direntry.attr);
let mut entry = ArchiveEntry::new(&components, Some(&direntry.attr));
if let DirEntryAttribute::File { size, mtime } = direntry.attr {
entry.size = size.into();
entry.mtime = mtime.into();

View File

@ -17,7 +17,7 @@ use proxmox::api::{
api, schema::*, ApiHandler, ApiMethod, ApiResponseFuture, Permission, RpcEnvironment,
};
use proxmox::list_subdirs_api_method;
use proxmox::tools::websocket::WebSocket;
use proxmox_http::websocket::WebSocket;
use proxmox::{identity, sortable};
use crate::api2::types::*;
@ -27,6 +27,8 @@ use crate::tools;
use crate::tools::ticket::{self, Empty, Ticket};
pub mod apt;
pub mod certificates;
pub mod config;
pub mod disks;
pub mod dns;
pub mod network;
@ -314,6 +316,8 @@ fn upgrade_to_websocket(
pub const SUBDIRS: SubdirMap = &[
("apt", &apt::ROUTER),
("certificates", &certificates::ROUTER),
("config", &config::ROUTER),
("disks", &disks::ROUTER),
("dns", &dns::ROUTER),
("journal", &journal::ROUTER),

View File

@ -5,10 +5,17 @@ use std::collections::HashMap;
use proxmox::list_subdirs_api_method;
use proxmox::api::{api, RpcEnvironment, RpcEnvironmentType, Permission};
use proxmox::api::router::{Router, SubdirMap};
use proxmox::tools::fs::{replace_file, CreateOptions};
use proxmox_http::ProxyConfig;
use crate::config::node;
use crate::server::WorkerTask;
use crate::tools::{apt, http, subscription};
use crate::tools::{
apt,
pbs_simple_http,
subscription,
};
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::api2::types::{Authid, APTUpdateInfo, NODE_SCHEMA, UPID_SCHEMA};
@ -46,10 +53,38 @@ fn apt_update_available(_param: Value) -> Result<Value, Error> {
Ok(json!(cache.package_status))
}
pub fn update_apt_proxy_config(proxy_config: Option<&ProxyConfig>) -> Result<(), Error> {
const PROXY_CFG_FN: &str = "/etc/apt/apt.conf.d/76pveproxy"; // use same file as PVE
if let Some(proxy_config) = proxy_config {
let proxy = proxy_config.to_proxy_string()?;
let data = format!("Acquire::http::Proxy \"{}\";\n", proxy);
replace_file(PROXY_CFG_FN, data.as_bytes(), CreateOptions::new())
} else {
match std::fs::remove_file(PROXY_CFG_FN) {
Ok(()) => Ok(()),
Err(err) if err.kind() == std::io::ErrorKind::NotFound => Ok(()),
Err(err) => bail!("failed to remove proxy config '{}' - {}", PROXY_CFG_FN, err),
}
}
}
fn read_and_update_proxy_config() -> Result<Option<ProxyConfig>, Error> {
let proxy_config = if let Ok((node_config, _digest)) = node::config() {
node_config.http_proxy()
} else {
None
};
update_apt_proxy_config(proxy_config.as_ref())?;
Ok(proxy_config)
}
fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
if !quiet { worker.log("starting apt-get update") }
// TODO: set proxy /etc/apt/apt.conf.d/76pbsproxy like PVE
read_and_update_proxy_config()?;
let mut command = std::process::Command::new("apt-get");
command.arg("update");
@ -85,7 +120,7 @@ fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
},
notify: {
type: bool,
description: r#"Send notification mail about new package updates availanle to the
description: r#"Send notification mail about new package updates available to the
email address configured for 'root@pam')."#,
default: false,
optional: true,
@ -152,6 +187,7 @@ pub fn apt_update_database(
}
#[api(
protected: true,
input: {
properties: {
node: {
@ -194,10 +230,13 @@ fn apt_get_changelog(
bail!("Package '{}' not found", name);
}
let proxy_config = read_and_update_proxy_config()?;
let mut client = pbs_simple_http(proxy_config);
let changelog_url = &pkg_info[0].change_log_url;
// FIXME: use 'apt-get changelog' for proxmox packages as well, once repo supports it
if changelog_url.starts_with("http://download.proxmox.com/") {
let changelog = crate::tools::runtime::block_on(http::get_string(changelog_url, None))
let changelog = crate::tools::runtime::block_on(client.get_string(changelog_url, None))
.map_err(|err| format_err!("Error downloading changelog from '{}': {}", changelog_url, err))?;
Ok(json!(changelog))
@ -221,7 +260,7 @@ fn apt_get_changelog(
auth_header.insert("Authorization".to_owned(),
format!("Basic {}", base64::encode(format!("{}:{}", key, id))));
let changelog = crate::tools::runtime::block_on(http::get_string(changelog_url, Some(&auth_header)))
let changelog = crate::tools::runtime::block_on(client.get_string(changelog_url, Some(&auth_header)))
.map_err(|err| format_err!("Error downloading changelog from '{}': {}", changelog_url, err))?;
Ok(json!(changelog))

View File

@ -0,0 +1,579 @@
use std::convert::TryFrom;
use std::sync::Arc;
use std::time::Duration;
use anyhow::{bail, format_err, Error};
use openssl::pkey::PKey;
use openssl::x509::X509;
use serde::{Deserialize, Serialize};
use proxmox::api::router::SubdirMap;
use proxmox::api::{api, Permission, Router, RpcEnvironment};
use proxmox::list_subdirs_api_method;
use crate::acme::AcmeClient;
use crate::api2::types::Authid;
use crate::api2::types::NODE_SCHEMA;
use crate::api2::types::AcmeDomain;
use crate::config::acl::PRIV_SYS_MODIFY;
use crate::config::node::NodeConfig;
use crate::server::WorkerTask;
use crate::tools::cert;
pub const ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))
.subdirs(SUBDIRS);
const SUBDIRS: SubdirMap = &[
("acme", &ACME_ROUTER),
(
"custom",
&Router::new()
.post(&API_METHOD_UPLOAD_CUSTOM_CERTIFICATE)
.delete(&API_METHOD_DELETE_CUSTOM_CERTIFICATE),
),
("info", &Router::new().get(&API_METHOD_GET_INFO)),
];
const ACME_ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(ACME_SUBDIRS))
.subdirs(ACME_SUBDIRS);
const ACME_SUBDIRS: SubdirMap = &[(
"certificate",
&Router::new()
.post(&API_METHOD_NEW_ACME_CERT)
.put(&API_METHOD_RENEW_ACME_CERT),
)];
#[api(
properties: {
san: {
type: Array,
items: {
description: "A SubjectAlternateName entry.",
type: String,
},
},
},
)]
/// Certificate information.
#[derive(Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
pub struct CertificateInfo {
/// Certificate file name.
#[serde(skip_serializing_if = "Option::is_none")]
filename: Option<String>,
/// Certificate subject name.
subject: String,
/// List of certificate's SubjectAlternativeName entries.
san: Vec<String>,
/// Certificate issuer name.
issuer: String,
/// Certificate's notBefore timestamp (UNIX epoch).
#[serde(skip_serializing_if = "Option::is_none")]
notbefore: Option<i64>,
/// Certificate's notAfter timestamp (UNIX epoch).
#[serde(skip_serializing_if = "Option::is_none")]
notafter: Option<i64>,
/// Certificate in PEM format.
#[serde(skip_serializing_if = "Option::is_none")]
pem: Option<String>,
/// Certificate's public key algorithm.
public_key_type: String,
/// Certificate's public key size if available.
#[serde(skip_serializing_if = "Option::is_none")]
public_key_bits: Option<u32>,
/// The SSL Fingerprint.
fingerprint: Option<String>,
}
impl TryFrom<&cert::CertInfo> for CertificateInfo {
type Error = Error;
fn try_from(info: &cert::CertInfo) -> Result<Self, Self::Error> {
let pubkey = info.public_key()?;
Ok(Self {
filename: None,
subject: info.subject_name()?,
san: info
.subject_alt_names()
.map(|san| {
san.into_iter()
// FIXME: Support `.ipaddress()`?
.filter_map(|name| name.dnsname().map(str::to_owned))
.collect()
})
.unwrap_or_default(),
issuer: info.issuer_name()?,
notbefore: info.not_before_unix().ok(),
notafter: info.not_after_unix().ok(),
pem: None,
public_key_type: openssl::nid::Nid::from_raw(pubkey.id().as_raw())
.long_name()
.unwrap_or("<unsupported key type>")
.to_owned(),
public_key_bits: Some(pubkey.bits()),
fingerprint: Some(info.fingerprint()?),
})
}
}
fn get_certificate_pem() -> Result<String, Error> {
let cert_path = configdir!("/proxy.pem");
let cert_pem = proxmox::tools::fs::file_get_contents(&cert_path)?;
String::from_utf8(cert_pem)
.map_err(|_| format_err!("certificate in {:?} is not a valid PEM file", cert_path))
}
// to deduplicate error messages
fn pem_to_cert_info(pem: &[u8]) -> Result<cert::CertInfo, Error> {
cert::CertInfo::from_pem(pem)
.map_err(|err| format_err!("error loading proxy certificate: {}", err))
}
#[api(
input: {
properties: {
node: { schema: NODE_SCHEMA },
},
},
access: {
permission: &Permission::Privilege(&["system", "certificates"], PRIV_SYS_MODIFY, false),
},
returns: {
type: Array,
items: { type: CertificateInfo },
description: "List of certificate infos.",
},
)]
/// Get certificate info.
pub fn get_info() -> Result<Vec<CertificateInfo>, Error> {
let cert_pem = get_certificate_pem()?;
let cert = pem_to_cert_info(cert_pem.as_bytes())?;
Ok(vec![CertificateInfo {
filename: Some("proxy.pem".to_string()), // we only have the one
pem: Some(cert_pem),
..CertificateInfo::try_from(&cert)?
}])
}
#[api(
input: {
properties: {
node: { schema: NODE_SCHEMA },
certificates: { description: "PEM encoded certificate (chain)." },
key: { description: "PEM encoded private key." },
// FIXME: widget-toolkit should have an option to disable using these 2 parameters...
restart: {
description: "UI compatibility parameter, ignored",
type: Boolean,
optional: true,
default: false,
},
force: {
description: "Force replacement of existing files.",
type: Boolean,
optional: true,
default: false,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "certificates"], PRIV_SYS_MODIFY, false),
},
returns: {
type: Array,
items: { type: CertificateInfo },
description: "List of certificate infos.",
},
protected: true,
)]
/// Upload a custom certificate.
pub async fn upload_custom_certificate(
certificates: String,
key: String,
) -> Result<Vec<CertificateInfo>, Error> {
let certificates = X509::stack_from_pem(certificates.as_bytes())
.map_err(|err| format_err!("failed to decode certificate chain: {}", err))?;
let key = PKey::private_key_from_pem(key.as_bytes())
.map_err(|err| format_err!("failed to parse private key: {}", err))?;
let certificates = certificates
.into_iter()
.try_fold(Vec::<u8>::new(), |mut stack, cert| -> Result<_, Error> {
if !stack.is_empty() {
stack.push(b'\n');
}
stack.extend(cert.to_pem()?);
Ok(stack)
})
.map_err(|err| format_err!("error formatting certificate chain as PEM: {}", err))?;
let key = key.private_key_to_pem_pkcs8()?;
crate::config::set_proxy_certificate(&certificates, &key)?;
crate::server::reload_proxy_certificate().await?;
get_info()
}
#[api(
input: {
properties: {
node: { schema: NODE_SCHEMA },
restart: {
description: "UI compatibility parameter, ignored",
type: Boolean,
optional: true,
default: false,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "certificates"], PRIV_SYS_MODIFY, false),
},
protected: true,
)]
/// Delete the current certificate and regenerate a self signed one.
pub async fn delete_custom_certificate() -> Result<(), Error> {
let cert_path = configdir!("/proxy.pem");
// Fail here: if removing the certificate fails, nothing else would work anyway
std::fs::remove_file(&cert_path)
.map_err(|err| format_err!("failed to unlink {:?} - {}", cert_path, err))?;
let key_path = configdir!("/proxy.key");
if let Err(err) = std::fs::remove_file(&key_path) {
// Here we just log since the certificate is already gone and we'd rather try to generate
// the self-signed certificate even if this fails:
log::error!(
"failed to remove certificate private key {:?} - {}",
key_path,
err
);
}
crate::config::update_self_signed_cert(true)?;
crate::server::reload_proxy_certificate().await?;
Ok(())
}
struct OrderedCertificate {
certificate: hyper::body::Bytes,
private_key_pem: Vec<u8>,
}
async fn order_certificate(
worker: Arc<WorkerTask>,
node_config: &NodeConfig,
) -> Result<Option<OrderedCertificate>, Error> {
use proxmox_acme_rs::authorization::Status;
use proxmox_acme_rs::order::Identifier;
let domains = node_config.acme_domains().try_fold(
Vec::<AcmeDomain>::new(),
|mut acc, domain| -> Result<_, Error> {
let mut domain = domain?;
domain.domain.make_ascii_lowercase();
if let Some(alias) = &mut domain.alias {
alias.make_ascii_lowercase();
}
acc.push(domain);
Ok(acc)
},
)?;
let get_domain_config = |domain: &str| {
domains
.iter()
.find(|d| d.domain == domain)
.ok_or_else(|| format_err!("no config for domain '{}'", domain))
};
if domains.is_empty() {
worker.log("No domains configured to be ordered from an ACME server.");
return Ok(None);
}
let (plugins, _) = crate::config::acme::plugin::config()?;
let mut acme = node_config.acme_client().await?;
worker.log("Placing ACME order");
let order = acme
.new_order(domains.iter().map(|d| d.domain.to_ascii_lowercase()))
.await?;
worker.log(format!("Order URL: {}", order.location));
let identifiers: Vec<String> = order
.data
.identifiers
.iter()
.map(|identifier| match identifier {
Identifier::Dns(domain) => domain.clone(),
})
.collect();
for auth_url in &order.data.authorizations {
worker.log(format!("Getting authorization details from '{}'", auth_url));
let mut auth = acme.get_authorization(&auth_url).await?;
let domain = match &mut auth.identifier {
Identifier::Dns(domain) => domain.to_ascii_lowercase(),
};
if auth.status == Status::Valid {
worker.log(format!("{} is already validated!", domain));
continue;
}
worker.log(format!("The validation for {} is pending", domain));
let domain_config: &AcmeDomain = get_domain_config(&domain)?;
let plugin_id = domain_config.plugin.as_deref().unwrap_or("standalone");
let mut plugin_cfg =
crate::acme::get_acme_plugin(&plugins, plugin_id)?.ok_or_else(|| {
format_err!("plugin '{}' for domain '{}' not found!", plugin_id, domain)
})?;
worker.log("Setting up validation plugin");
let validation_url = plugin_cfg
.setup(&mut acme, &auth, domain_config, Arc::clone(&worker))
.await?;
let result = request_validation(&worker, &mut acme, auth_url, validation_url).await;
if let Err(err) = plugin_cfg
.teardown(&mut acme, &auth, domain_config, Arc::clone(&worker))
.await
{
worker.warn(format!(
"Failed to teardown plugin '{}' for domain '{}' - {}",
plugin_id, domain, err
));
}
let _: () = result?;
}
worker.log("All domains validated");
worker.log("Creating CSR");
let csr = proxmox_acme_rs::util::Csr::generate(&identifiers, &Default::default())?;
let mut finalize_error_cnt = 0u8;
let order_url = &order.location;
let mut order;
loop {
use proxmox_acme_rs::order::Status;
order = acme.get_order(order_url).await?;
match order.status {
Status::Pending => {
worker.log("still pending, trying to finalize anyway");
let finalize = order
.finalize
.as_deref()
.ok_or_else(|| format_err!("missing 'finalize' URL in order"))?;
if let Err(err) = acme.finalize(finalize, &csr.data).await {
if finalize_error_cnt >= 5 {
return Err(err.into());
}
finalize_error_cnt += 1;
}
tokio::time::sleep(Duration::from_secs(5)).await;
}
Status::Ready => {
worker.log("order is ready, finalizing");
let finalize = order
.finalize
.as_deref()
.ok_or_else(|| format_err!("missing 'finalize' URL in order"))?;
acme.finalize(finalize, &csr.data).await?;
tokio::time::sleep(Duration::from_secs(5)).await;
}
Status::Processing => {
worker.log("still processing, trying again in 30 seconds");
tokio::time::sleep(Duration::from_secs(30)).await;
}
Status::Valid => {
worker.log("valid");
break;
}
other => bail!("order status: {:?}", other),
}
}
worker.log("Downloading certificate");
let certificate = acme
.get_certificate(
order
.certificate
.as_deref()
.ok_or_else(|| format_err!("missing certificate url in finalized order"))?,
)
.await?;
Ok(Some(OrderedCertificate {
certificate,
private_key_pem: csr.private_key_pem,
}))
}
async fn request_validation(
worker: &WorkerTask,
acme: &mut AcmeClient,
auth_url: &str,
validation_url: &str,
) -> Result<(), Error> {
worker.log("Triggering validation");
acme.request_challenge_validation(&validation_url).await?;
worker.log("Sleeping for 5 seconds");
tokio::time::sleep(Duration::from_secs(5)).await;
loop {
use proxmox_acme_rs::authorization::Status;
let auth = acme.get_authorization(&auth_url).await?;
match auth.status {
Status::Pending => {
worker.log("Status is still 'pending', trying again in 10 seconds");
tokio::time::sleep(Duration::from_secs(10)).await;
}
Status::Valid => return Ok(()),
other => bail!(
"validating challenge '{}' failed - status: {:?}",
validation_url,
other
),
}
}
}
#[api(
input: {
properties: {
node: { schema: NODE_SCHEMA },
force: {
description: "Force replacement of existing files.",
type: Boolean,
optional: true,
default: false,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "certificates"], PRIV_SYS_MODIFY, false),
},
protected: true,
)]
/// Order a new ACME certificate.
pub fn new_acme_cert(force: bool, rpcenv: &mut dyn RpcEnvironment) -> Result<String, Error> {
spawn_certificate_worker("acme-new-cert", force, rpcenv)
}
#[api(
input: {
properties: {
node: { schema: NODE_SCHEMA },
force: {
description: "Force replacement of existing files.",
type: Boolean,
optional: true,
default: false,
},
},
},
access: {
permission: &Permission::Privilege(&["system", "certificates"], PRIV_SYS_MODIFY, false),
},
protected: true,
)]
/// Renew the current ACME certificate if it expires within 30 days (or always if the `force`
/// parameter is set).
pub fn renew_acme_cert(force: bool, rpcenv: &mut dyn RpcEnvironment) -> Result<String, Error> {
if !cert_expires_soon()? && !force {
bail!("Certificate does not expire within the next 30 days and 'force' is not set.")
}
spawn_certificate_worker("acme-renew-cert", force, rpcenv)
}
/// Check whether the current certificate expires within the next 30 days.
pub fn cert_expires_soon() -> Result<bool, Error> {
let cert = pem_to_cert_info(get_certificate_pem()?.as_bytes())?;
cert.is_expired_after_epoch(proxmox::tools::time::epoch_i64() + 30 * 24 * 60 * 60)
.map_err(|err| format_err!("Failed to check certificate expiration date: {}", err))
}
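A minimal sketch of the 30-day lookahead used here, written against std only; `not_after` is a hypothetical notAfter timestamp, while the real check goes through `CertInfo::is_expired_after_epoch` as shown above:
use std::time::{SystemTime, UNIX_EPOCH};

// Sketch only: does a certificate with the given notAfter (UNIX epoch seconds)
// expire within the next 30 days?
fn expires_within_30_days(not_after: i64) -> bool {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map(|d| d.as_secs() as i64)
        .unwrap_or(0);
    not_after <= now + 30 * 24 * 60 * 60 // 2_592_000 seconds
}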
fn spawn_certificate_worker(
name: &'static str,
force: bool,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
// We only have 1 certificate path in PBS which makes figuring out whether or not it is a
// custom one too hard... We keep the parameter because the widget-toolkit may be using it...
let _ = force;
let (node_config, _digest) = crate::config::node::config()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
WorkerTask::spawn(name, None, auth_id, true, move |worker| async move {
if let Some(cert) = order_certificate(worker, &node_config).await? {
crate::config::set_proxy_certificate(&cert.certificate, &cert.private_key_pem)?;
crate::server::reload_proxy_certificate().await?;
}
Ok(())
})
}
#[api(
input: {
properties: {
node: { schema: NODE_SCHEMA },
},
},
access: {
permission: &Permission::Privilege(&["system", "certificates"], PRIV_SYS_MODIFY, false),
},
protected: true,
)]
/// Revoke the current ACME certificate.
pub fn revoke_acme_cert(rpcenv: &mut dyn RpcEnvironment) -> Result<String, Error> {
let (node_config, _digest) = crate::config::node::config()?;
let cert_pem = get_certificate_pem()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
WorkerTask::spawn(
"acme-revoke-cert",
None,
auth_id,
true,
move |worker| async move {
worker.log("Loading ACME account");
let mut acme = node_config.acme_client().await?;
worker.log("Revoking old certificate");
acme.revoke_certificate(cert_pem.as_bytes(), None).await?;
worker.log("Deleting certificate and regenerating a self-signed one");
delete_custom_certificate().await?;
Ok(())
},
)
}

src/api2/node/config.rs (new file, 87 lines)

@ -0,0 +1,87 @@
use anyhow::Error;
use proxmox::api::schema::Updatable;
use proxmox::api::{api, Permission, Router, RpcEnvironment};
use crate::api2::types::NODE_SCHEMA;
use crate::api2::node::apt::update_apt_proxy_config;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::config::node::{NodeConfig, NodeConfigUpdater};
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_GET_NODE_CONFIG)
.put(&API_METHOD_UPDATE_NODE_CONFIG);
#[api(
input: {
properties: {
node: { schema: NODE_SCHEMA },
},
},
access: {
permission: &Permission::Privilege(&["system"], PRIV_SYS_AUDIT, false),
},
returns: {
type: NodeConfig,
},
)]
/// Get the node configuration
pub fn get_node_config(mut rpcenv: &mut dyn RpcEnvironment) -> Result<NodeConfig, Error> {
let (config, digest) = crate::config::node::config()?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(config)
}
#[api(
input: {
properties: {
node: { schema: NODE_SCHEMA },
digest: {
description: "Digest to protect against concurrent updates",
optional: true,
},
updater: {
type: NodeConfigUpdater,
flatten: true,
},
delete: {
description: "Options to remove from the configuration",
optional: true,
},
},
},
access: {
permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
},
protected: true,
)]
/// Update the node configuration
pub fn update_node_config(
updater: NodeConfigUpdater,
delete: Option<String>,
digest: Option<String>,
) -> Result<(), Error> {
let _lock = crate::config::node::lock()?;
let (mut config, expected_digest) = crate::config::node::config()?;
if let Some(digest) = digest {
// FIXME: GUI doesn't handle our non-inlined digest part here properly...
if !digest.is_empty() {
let digest = proxmox::tools::hex_to_digest(&digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
}
let delete: Vec<&str> = delete
.as_deref()
.unwrap_or("")
.split(&[' ', ',', ';', '\0'][..])
.collect();
config.update_from(updater, &delete)?;
crate::config::node::save_config(&config)?;
update_apt_proxy_config(config.http_proxy().as_ref())?;
Ok(())
}
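The `delete` parameter is split on space, comma, semicolon or NUL into the property names handed to `update_from`; a small standalone sketch of that splitting (the property names in the example comment are hypothetical):
// Sketch only: mirrors the separator set used above.
fn split_delete_list(delete: &str) -> Vec<&str> {
    delete.split(&[' ', ',', ';', '\0'][..]).collect()
}
// e.g. split_delete_list("http-proxy;acme") yields ["http-proxy", "acme"]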


@ -66,6 +66,8 @@ pub fn list_disks(
}
}
list.sort_by(|a, b| a.name.cmp(&b.name));
Ok(list)
}


@ -5,6 +5,7 @@ use ::serde::{Deserialize, Serialize};
use proxmox::api::{api, Permission, RpcEnvironment, RpcEnvironmentType};
use proxmox::api::section_config::SectionConfigData;
use proxmox::api::router::Router;
use proxmox::tools::fs::open_file_locked;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::tools::disks::{
@ -16,7 +17,7 @@ use crate::tools::systemd::{self, types::*};
use crate::server::WorkerTask;
use crate::api2::types::*;
use crate::config::datastore::DataStoreConfig;
use crate::config::datastore::{self, DataStoreConfig};
#[api(
properties: {
@ -179,7 +180,17 @@ pub fn create_datastore_disk(
systemd::start_unit(&mount_unit_name)?;
if add_datastore {
crate::api2::config::datastore::create_datastore(json!({ "name": name, "path": mount_point }))?
let lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let datastore: DataStoreConfig =
serde_json::from_value(json!({ "name": name, "path": mount_point }))?;
let (config, _digest) = datastore::config()?;
if config.sections.get(&datastore.name).is_some() {
bail!("datastore '{}' already exists.", datastore.name);
}
crate::api2::config::datastore::do_create_datastore(lock, config, datastore, Some(&worker))?;
}
Ok(())


@ -20,6 +20,7 @@ use crate::tools::disks::{
zpool_list, zpool_status, parse_zpool_status_config_tree, vdev_list_to_tree,
DiskUsageType,
};
use crate::config::datastore::{self, DataStoreConfig};
use crate::server::WorkerTask;
@ -372,7 +373,17 @@ pub fn create_zpool(
}
if add_datastore {
crate::api2::config::datastore::create_datastore(json!({ "name": name, "path": mount_point }))?
let lock = datastore::lock_config()?;
let datastore: DataStoreConfig =
serde_json::from_value(json!({ "name": name, "path": mount_point }))?;
let (config, _digest) = datastore::config()?;
if config.sections.get(&datastore.name).is_some() {
bail!("datastore '{}' already exists.", datastore.name);
}
crate::api2::config::datastore::do_create_datastore(lock, config, datastore, Some(&worker))?;
}
Ok(())


@ -60,36 +60,41 @@ use crate::config::acl::PRIV_SYS_AUDIT;
)]
/// Read syslog entries.
fn get_journal(
param: Value,
since: Option<i64>,
until: Option<i64>,
lastentries: Option<u64>,
startcursor: Option<String>,
endcursor: Option<String>,
_param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let mut args = vec![];
if let Some(lastentries) = param["lastentries"].as_u64() {
if let Some(lastentries) = lastentries {
args.push(String::from("-n"));
args.push(format!("{}", lastentries));
}
if let Some(since) = param["since"].as_str() {
if let Some(since) = since {
args.push(String::from("-b"));
args.push(since.to_owned());
args.push(since.to_string());
}
if let Some(until) = param["until"].as_str() {
if let Some(until) = until {
args.push(String::from("-e"));
args.push(until.to_owned());
args.push(until.to_string());
}
if let Some(startcursor) = param["startcursor"].as_str() {
if let Some(startcursor) = startcursor {
args.push(String::from("-f"));
args.push(startcursor.to_owned());
args.push(startcursor);
}
if let Some(endcursor) = param["endcursor"].as_str() {
if let Some(endcursor) = endcursor {
args.push(String::from("-t"));
args.push(endcursor.to_owned());
args.push(endcursor);
}
let mut lines: Vec<String> = vec![];


@ -2,7 +2,7 @@ use std::process::Command;
use std::path::Path;
use anyhow::{Error, format_err, bail};
use serde_json::{json, Value};
use serde_json::Value;
use proxmox::sys::linux::procfs;
@ -12,6 +12,16 @@ use crate::api2::types::*;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_POWER_MANAGEMENT};
use crate::tools::cert::CertInfo;
impl std::convert::From<procfs::ProcFsCPUInfo> for NodeCpuInformation {
fn from(info: procfs::ProcFsCPUInfo) -> Self {
Self {
model: info.model,
sockets: info.sockets,
cpus: info.cpus,
}
}
}
#[api(
input: {
properties: {
@ -21,43 +31,7 @@ use crate::tools::cert::CertInfo;
},
},
returns: {
type: Object,
description: "Returns node memory, CPU and (root) disk usage",
properties: {
memory: {
type: Object,
description: "node memory usage counters",
properties: {
total: {
description: "total memory",
type: Integer,
},
used: {
description: "total memory",
type: Integer,
},
free: {
description: "free memory",
type: Integer,
},
},
},
cpu: {
type: Number,
description: "Total CPU usage since last query.",
optional: true,
},
info: {
type: Object,
description: "contains node information",
properties: {
fingerprint: {
description: "The SSL Fingerprint",
type: String,
},
},
},
},
type: NodeStatus,
},
access: {
permission: &Permission::Privilege(&["system", "status"], PRIV_SYS_AUDIT, false),
@ -68,32 +42,52 @@ fn get_status(
_param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
) -> Result<NodeStatus, Error> {
let meminfo: procfs::ProcFsMemInfo = procfs::read_meminfo()?;
let memory = NodeMemoryCounters {
total: meminfo.memtotal,
used: meminfo.memused,
free: meminfo.memfree,
};
let swap = NodeSwapCounters {
total: meminfo.swaptotal,
used: meminfo.swapused,
free: meminfo.swapfree,
};
let kstat: procfs::ProcFsStat = procfs::read_proc_stat()?;
let disk_usage = crate::tools::disks::disk_usage(Path::new("/"))?;
let cpu = kstat.cpu;
let wait = kstat.iowait_percent;
// get fingerprint
let cert = CertInfo::new()?;
let fp = cert.fingerprint()?;
let loadavg = procfs::Loadavg::read()?;
let loadavg = [loadavg.one(), loadavg.five(), loadavg.fifteen()];
Ok(json!({
"memory": {
"total": meminfo.memtotal,
"used": meminfo.memused,
"free": meminfo.memfree,
let cpuinfo = procfs::read_cpuinfo()?;
let cpuinfo = cpuinfo.into();
let uname = nix::sys::utsname::uname();
let kversion = format!(
"{} {} {}",
uname.sysname(),
uname.release(),
uname.version()
);
Ok(NodeStatus {
memory,
swap,
root: crate::tools::disks::disk_usage(Path::new("/"))?,
uptime: procfs::read_proc_uptime()?.0 as u64,
loadavg,
kversion,
cpuinfo,
cpu,
wait,
info: NodeInformation {
fingerprint: CertInfo::new()?.fingerprint()?,
},
"cpu": kstat.cpu,
"root": {
"total": disk_usage.total,
"used": disk_usage.used,
"free": disk_usage.avail,
},
"info": {
"fingerprint": fp,
},
}))
})
}
#[api(


@ -32,9 +32,6 @@ use crate::api2::types::{NODE_SCHEMA, SUBSCRIPTION_KEY_SCHEMA, Authid};
pub fn check_subscription(
force: bool,
) -> Result<(), Error> {
// FIXME: drop once proxmox-api-macro is bumped to >> 5.0.0-1
let _remove_me = API_METHOD_CHECK_SUBSCRIPTION_PARAM_DEFAULT_FORCE;
let info = match subscription::read_subscription() {
Err(err) => bail!("could not read subscription status: {}", err),
Ok(Some(info)) => info,


@ -256,7 +256,7 @@ fn extract_upid(param: &Value) -> Result<UPID, Error> {
},
},
access: {
description: "Users can access there own tasks, or need Sys.Audit on /system/tasks.",
description: "Users can access their own tasks, or need Sys.Audit on /system/tasks.",
permission: &Permission::Anybody,
},
)]
@ -326,7 +326,7 @@ async fn read_task_log(
},
},
access: {
description: "Users can stop there own tasks, or need Sys.Modify on /system/tasks.",
description: "Users can stop their own tasks, or need Sys.Modify on /system/tasks.",
permission: &Permission::Anybody,
},
)]
@ -420,7 +420,7 @@ fn stop_task(
items: { type: TaskListItem },
},
access: {
description: "Users can only see there own tasks, unless the have Sys.Audit on /system/tasks.",
description: "Users can only see their own tasks, unless they have Sys.Audit on /system/tasks.",
permission: &Permission::Anybody,
},
)]


@ -21,7 +21,7 @@ use crate::api2::types::{
Authid,
};
use crate::backup::{DataStore};
use crate::backup::DataStore;
use crate::config::datastore;
use crate::tools::statistics::{linear_regression};
use crate::config::cached_user_info::CachedUserInfo;
@ -55,6 +55,7 @@ use crate::config::acl::{
},
history: {
type: Array,
optional: true,
description: "A list of usages of the past (last Month).",
items: {
type: Number,
@ -69,6 +70,11 @@ use crate::config::acl::{
of RRD data of the last Month. Missing if there are not enough data points yet.\
If the estimate lies in the past, the usage is decreasing.",
},
"error": {
type: String,
optional: true,
description: "An error description, for example, when the datastore could not be looked up.",
},
},
},
},
@ -97,7 +103,19 @@ pub fn datastore_status(
continue;
}
let datastore = DataStore::lookup_datastore(&store)?;
let datastore = match DataStore::lookup_datastore(&store) {
Ok(datastore) => datastore,
Err(err) => {
list.push(json!({
"store": store,
"total": -1,
"used": -1,
"avail": -1,
"error": err.to_string()
}));
continue;
}
};
let status = crate::tools::disks::disk_usage(&datastore.base_path())?;
let mut entry = json!({
@ -110,24 +128,17 @@ pub fn datastore_status(
let rrd_dir = format!("datastore/{}", store);
let now = proxmox::tools::time::epoch_f64();
let rrd_resolution = RRDTimeFrameResolution::Month;
let rrd_mode = RRDMode::Average;
let total_res = crate::rrd::extract_cached_data(
let get_rrd = |what: &str| crate::rrd::extract_cached_data(
&rrd_dir,
"total",
what,
now,
rrd_resolution,
rrd_mode,
RRDTimeFrameResolution::Month,
RRDMode::Average,
);
let used_res = crate::rrd::extract_cached_data(
&rrd_dir,
"used",
now,
rrd_resolution,
rrd_mode,
);
let total_res = get_rrd("total");
let used_res = get_rrd("used");
if let (Some((start, reso, total_list)), Some((_, _, used_list))) = (total_res, used_res) {
let mut usage_list: Vec<f64> = Vec::new();
@ -160,13 +171,10 @@ pub fn datastore_status(
// we skip the calculation for datastores with not enough data
if usage_list.len() >= 7 {
entry["estimated-full-date"] = Value::from(0);
if let Some((a,b)) = linear_regression(&time_list, &usage_list) {
if b != 0.0 {
let estimate = (1.0 - a) / b;
entry["estimated-full-date"] = Value::from(estimate.floor() as u64);
}
}
entry["estimated-full-date"] = match linear_regression(&time_list, &usage_list) {
Some((a, b)) if b != 0.0 => Value::from(((1.0 - a) / b).floor() as u64),
_ => Value::from(0),
};
}
}

View File

@ -1,10 +1,11 @@
use std::path::Path;
use std::sync::Arc;
use std::sync::{Mutex, Arc};
use anyhow::{bail, format_err, Error};
use serde_json::Value;
use proxmox::{
try_block,
api::{
api,
RpcEnvironment,
@ -16,6 +17,7 @@ use proxmox::{
use crate::{
task_log,
task_warn,
config::{
self,
cached_user_info::CachedUserInfo,
@ -32,6 +34,7 @@ use crate::{
},
server::{
lookup_user_email,
TapeBackupJobSummary,
jobstate::{
Job,
JobState,
@ -42,6 +45,7 @@ use crate::{
DataStore,
BackupDir,
BackupInfo,
StoreProgress,
},
api2::types::{
Authid,
@ -61,6 +65,7 @@ use crate::{
drive::{
media_changer,
lock_tape_device,
TapeLockError,
set_tape_device_state,
},
changer::update_changer_online_status,
@ -174,8 +179,15 @@ pub fn do_tape_backup_job(
let (drive_config, _digest) = config::drive::config()?;
// early check/lock before starting worker
let drive_lock = lock_tape_device(&drive_config, &setup.drive)?;
// for scheduled jobs we acquire the lock later in the worker
let drive_lock = if schedule.is_some() {
None
} else {
Some(lock_tape_device(&drive_config, &setup.drive)?)
};
let notify_user = setup.notify_user.as_ref().unwrap_or_else(|| &Userid::root_userid());
let email = lookup_user_email(notify_user);
let upid_str = WorkerTask::new_thread(
&worker_type,
@ -183,26 +195,44 @@ pub fn do_tape_backup_job(
auth_id.clone(),
false,
move |worker| {
let _drive_lock = drive_lock; // keep lock guard
set_tape_device_state(&setup.drive, &worker.upid().to_string())?;
job.start(&worker.upid().to_string())?;
let mut drive_lock = drive_lock;
let mut summary = Default::default();
let job_result = try_block!({
if schedule.is_some() {
// for scheduled tape backup jobs, we wait indefinitely for the lock
task_log!(worker, "waiting for drive lock...");
loop {
worker.check_abort()?;
match lock_tape_device(&drive_config, &setup.drive) {
Ok(lock) => {
drive_lock = Some(lock);
break;
}
Err(TapeLockError::TimeOut) => continue,
Err(TapeLockError::Other(err)) => return Err(err),
}
}
}
set_tape_device_state(&setup.drive, &worker.upid().to_string())?;
task_log!(worker,"Starting tape backup job '{}'", job_id);
if let Some(event_str) = schedule {
task_log!(worker,"task triggered by schedule '{}'", event_str);
}
let notify_user = setup.notify_user.as_ref().unwrap_or_else(|| &Userid::root_userid());
let email = lookup_user_email(notify_user);
let job_result = backup_worker(
backup_worker(
&worker,
datastore,
&pool_config,
&setup,
email.clone(),
);
&mut summary,
false,
)
});
let status = worker.create_state(&job_result);
@ -212,6 +242,7 @@ pub fn do_tape_backup_job(
Some(job.jobname()),
&setup,
&job_result,
summary,
) {
eprintln!("send tape backup notification failed: {}", err);
}
@ -286,6 +317,12 @@ pub fn run_tape_backup_job(
type: TapeBackupJobSetup,
flatten: true,
},
"force-media-set": {
description: "Ignore the allocation policy and start a new media-set.",
optional: true,
type: bool,
default: false,
},
},
},
returns: {
@ -301,6 +338,7 @@ pub fn run_tape_backup_job(
/// Backup datastore to tape media pool
pub fn backup(
setup: TapeBackupJobSetup,
force_media_set: bool,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
@ -338,12 +376,16 @@ pub fn backup(
move |worker| {
let _drive_lock = drive_lock; // keep lock guard
set_tape_device_state(&setup.drive, &worker.upid().to_string())?;
let mut summary = Default::default();
let job_result = backup_worker(
&worker,
datastore,
&pool_config,
&setup,
email.clone(),
&mut summary,
force_media_set,
);
if let Some(email) = email {
@ -352,6 +394,7 @@ pub fn backup(
None,
&setup,
&job_result,
summary,
) {
eprintln!("send tape backup notification failed: {}", err);
}
@ -372,61 +415,141 @@ fn backup_worker(
pool_config: &MediaPoolConfig,
setup: &TapeBackupJobSetup,
email: Option<String>,
summary: &mut TapeBackupJobSummary,
force_media_set: bool,
) -> Result<(), Error> {
let status_path = Path::new(TAPE_STATUS_DIR);
let _lock = MediaPool::lock(status_path, &pool_config.name)?;
let start = std::time::Instant::now();
task_log!(worker, "update media online status");
let changer_name = update_media_online_status(&setup.drive)?;
let pool = MediaPool::with_config(status_path, &pool_config, changer_name)?;
let pool = MediaPool::with_config(status_path, &pool_config, changer_name, false)?;
let mut pool_writer = PoolWriter::new(pool, &setup.drive, worker, email)?;
let mut pool_writer = PoolWriter::new(
pool,
&setup.drive,
worker,
email,
force_media_set
)?;
let mut group_list = BackupInfo::list_backup_groups(&datastore.base_path())?;
group_list.sort_unstable();
let group_count = group_list.len();
task_log!(worker, "found {} groups", group_count);
let mut progress = StoreProgress::new(group_count as u64);
let latest_only = setup.latest_only.unwrap_or(false);
if latest_only {
task_log!(worker, "latest-only: true (only considering latest snapshots)");
}
for group in group_list {
let mut snapshot_list = group.list_backups(&datastore.base_path())?;
let datastore_name = datastore.name();
let mut errors = false;
let mut need_catalog = false; // avoid writing catalog for empty jobs
for (group_number, group) in group_list.into_iter().enumerate() {
progress.done_groups = group_number as u64;
progress.done_snapshots = 0;
progress.group_snapshots = 0;
let snapshot_list = group.list_backups(&datastore.base_path())?;
// filter out unfinished backups
let mut snapshot_list = snapshot_list
.into_iter()
.filter(|item| item.is_finished())
.collect();
BackupInfo::sort_list(&mut snapshot_list, true); // oldest first
if latest_only {
progress.group_snapshots = 1;
if let Some(info) = snapshot_list.pop() {
if pool_writer.contains_snapshot(&info.backup_dir.to_string()) {
if pool_writer.contains_snapshot(datastore_name, &info.backup_dir.to_string()) {
task_log!(worker, "skip snapshot {}", info.backup_dir);
continue;
}
task_log!(worker, "backup snapshot {}", info.backup_dir);
backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)?;
need_catalog = true;
let snapshot_name = info.backup_dir.to_string();
if !backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)? {
errors = true;
} else {
summary.snapshot_list.push(snapshot_name);
}
progress.done_snapshots = 1;
task_log!(
worker,
"percentage done: {}",
progress
);
}
} else {
for info in snapshot_list {
if pool_writer.contains_snapshot(&info.backup_dir.to_string()) {
progress.group_snapshots = snapshot_list.len() as u64;
for (snapshot_number, info) in snapshot_list.into_iter().enumerate() {
if pool_writer.contains_snapshot(datastore_name, &info.backup_dir.to_string()) {
task_log!(worker, "skip snapshot {}", info.backup_dir);
continue;
}
task_log!(worker, "backup snapshot {}", info.backup_dir);
backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)?;
need_catalog = true;
let snapshot_name = info.backup_dir.to_string();
if !backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)? {
errors = true;
} else {
summary.snapshot_list.push(snapshot_name);
}
progress.done_snapshots = snapshot_number as u64 + 1;
task_log!(
worker,
"percentage done: {}",
progress
);
}
}
}
pool_writer.commit()?;
if need_catalog {
task_log!(worker, "append media catalog");
let uuid = pool_writer.load_writable_media(worker)?;
let done = pool_writer.append_catalog_archive(worker)?;
if !done {
task_log!(worker, "catalog does not fit on tape, writing to next volume");
pool_writer.set_media_status_full(&uuid)?;
pool_writer.load_writable_media(worker)?;
let done = pool_writer.append_catalog_archive(worker)?;
if !done {
bail!("write_catalog_archive failed on second media");
}
}
}
if setup.export_media_set.unwrap_or(false) {
pool_writer.export_media_set(worker)?;
} else if setup.eject_media.unwrap_or(false) {
pool_writer.eject_media(worker)?;
}
if errors {
bail!("Tape backup finished with some errors. Please check the task log.");
}
summary.duration = start.elapsed();
Ok(())
}
@ -460,39 +583,61 @@ pub fn backup_snapshot(
pool_writer: &mut PoolWriter,
datastore: Arc<DataStore>,
snapshot: BackupDir,
) -> Result<(), Error> {
) -> Result<bool, Error> {
task_log!(worker, "start backup {}:{}", datastore.name(), snapshot);
task_log!(worker, "backup snapshot {}", snapshot);
let snapshot_reader = SnapshotReader::new(datastore.clone(), snapshot.clone())?;
let snapshot_reader = match SnapshotReader::new(datastore.clone(), snapshot.clone()) {
Ok(reader) => reader,
Err(err) => {
// ignore missing snapshots and continue
task_warn!(worker, "failed opening snapshot '{}': {}", snapshot, err);
return Ok(false);
}
};
let mut chunk_iter = snapshot_reader.chunk_iterator()?.peekable();
let snapshot_reader = Arc::new(Mutex::new(snapshot_reader));
let (reader_thread, chunk_iter) = pool_writer.spawn_chunk_reader_thread(
datastore.clone(),
snapshot_reader.clone(),
)?;
let mut chunk_iter = chunk_iter.peekable();
loop {
worker.check_abort()?;
// test if we have remaining chunks
if chunk_iter.peek().is_none() {
break;
match chunk_iter.peek() {
None => break,
Some(Ok(_)) => { /* Ok */ },
Some(Err(err)) => bail!("{}", err),
}
let uuid = pool_writer.load_writable_media(worker)?;
worker.check_abort()?;
let (leom, _bytes) = pool_writer.append_chunk_archive(worker, &datastore, &mut chunk_iter)?;
let (leom, _bytes) = pool_writer.append_chunk_archive(worker, &mut chunk_iter, datastore.name())?;
if leom {
pool_writer.set_media_status_full(&uuid)?;
}
}
if let Err(_) = reader_thread.join() {
bail!("chunk reader thread failed");
}
worker.check_abort()?;
let uuid = pool_writer.load_writable_media(worker)?;
worker.check_abort()?;
let snapshot_reader = snapshot_reader.lock().unwrap();
let (done, _bytes) = pool_writer.append_snapshot_archive(worker, &snapshot_reader)?;
if !done {
@ -511,5 +656,5 @@ pub fn backup_snapshot(
task_log!(worker, "end backup {}:{}", datastore.name(), snapshot);
Ok(())
Ok(true)
}


@ -20,7 +20,7 @@ use crate::{
Authid,
CHANGER_NAME_SCHEMA,
ChangerListEntry,
LinuxTapeDrive,
LtoTapeDrive,
MtxEntryKind,
MtxStatusEntry,
ScsiTapeChanger,
@ -88,7 +88,7 @@ pub async fn get_status(
inventory.update_online_status(&map)?;
let drive_list: Vec<LinuxTapeDrive> = config.convert_to_typed_array("linux")?;
let drive_list: Vec<LtoTapeDrive> = config.convert_to_typed_array("lto")?;
let mut drive_map: HashMap<u64, String> = HashMap::new();
for drive in drive_list {


@ -1,6 +1,7 @@
use std::panic::UnwindSafe;
use std::path::Path;
use std::sync::Arc;
use std::collections::HashMap;
use anyhow::{bail, format_err, Error};
use serde_json::Value;
@ -10,7 +11,6 @@ use proxmox::{
identity,
list_subdirs_api_method,
tools::Uuid,
sys::error::SysError,
api::{
api,
section_config::SectionConfigData,
@ -42,22 +42,29 @@ use crate::{
MEDIA_POOL_NAME_SCHEMA,
Authid,
DriveListEntry,
LinuxTapeDrive,
LtoTapeDrive,
MediaIdFlat,
LabelUuidMap,
MamAttribute,
LinuxDriveAndMediaStatus,
LtoDriveAndMediaStatus,
Lp17VolumeStatistics,
},
tape::restore::{
fast_catalog_restore,
restore_media,
},
tape::restore::restore_media,
},
server::WorkerTask,
tape::{
TAPE_STATUS_DIR,
MediaPool,
Inventory,
MediaCatalog,
MediaId,
linux_tape_device_list,
BlockReadError,
lock_media_set,
lock_media_pool,
lock_unassigned_media_pool,
lto_tape_device_list,
lookup_device_identification,
file_formats::{
MediaLabel,
@ -65,9 +72,8 @@ use crate::{
},
drive::{
TapeDriver,
LinuxTapeHandle,
Lp17VolumeStatistics,
open_linux_tape_device,
LtoTapeHandle,
open_lto_tape_device,
media_changer,
required_media_changer,
open_drive,
@ -220,7 +226,7 @@ pub async fn load_slot(drive: String, source_slot: u64) -> Result<(), Error> {
},
},
returns: {
description: "The import-export slot number the media was transfered to.",
description: "The import-export slot number the media was transferred to.",
type: u64,
minimum: 1,
},
@ -316,8 +322,8 @@ pub fn unload(
permission: &Permission::Privilege(&["tape", "device", "{drive}"], PRIV_TAPE_WRITE, false),
},
)]
/// Erase media. Check for label-text if given (cancels if wrong media).
pub fn erase_media(
/// Format media. Check for label-text if given (cancels if wrong media).
pub fn format_media(
drive: String,
fast: Option<bool>,
label_text: Option<String>,
@ -326,7 +332,7 @@ pub fn erase_media(
let upid_str = run_drive_worker(
rpcenv,
drive.clone(),
"erase-media",
"format-media",
Some(drive.clone()),
move |worker, config| {
if let Some(ref label) = label_text {
@ -345,15 +351,15 @@ pub fn erase_media(
}
/* assume drive contains no or unrelated data */
task_log!(worker, "unable to read media label: {}", err);
task_log!(worker, "erase anyways");
handle.erase_media(fast.unwrap_or(true))?;
task_log!(worker, "format anyways");
handle.format_media(fast.unwrap_or(true))?;
}
Ok((None, _)) => {
if let Some(label) = label_text {
bail!("expected label '{}', found empty tape", label);
}
task_log!(worker, "found empty media - erase anyways");
handle.erase_media(fast.unwrap_or(true))?;
task_log!(worker, "found empty media - format anyways");
handle.format_media(fast.unwrap_or(true))?;
}
Ok((Some(media_id), _key_config)) => {
if let Some(label_text) = label_text {
@ -373,11 +379,20 @@ pub fn erase_media(
);
let status_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(status_path)?;
let mut inventory = Inventory::new(status_path);
if let Some(MediaSetLabel { ref pool, ref uuid, ..}) = media_id.media_set_label {
let _pool_lock = lock_media_pool(status_path, pool)?;
let _media_set_lock = lock_media_set(status_path, uuid, None)?;
MediaCatalog::destroy(status_path, &media_id.label.uuid)?;
inventory.remove_media(&media_id.label.uuid)?;
handle.erase_media(fast.unwrap_or(true))?;
} else {
let _lock = lock_unassigned_media_pool(status_path)?;
MediaCatalog::destroy(status_path, &media_id.label.uuid)?;
inventory.remove_media(&media_id.label.uuid)?;
};
handle.format_media(fast.unwrap_or(true))?;
}
}
@ -489,7 +504,7 @@ pub fn eject_media(
/// Write a new media label to the media in 'drive'. The media is
/// assigned to the specified 'pool', or else to the free media pool.
///
/// Note: The media need to be empty (you may want to erase it first).
/// Note: The media need to be empty (you may want to format it first).
pub fn label_media(
drive: String,
pool: Option<String>,
@ -514,16 +529,13 @@ pub fn label_media(
drive.rewind()?;
match drive.read_next_file() {
Ok(Some(_file)) => bail!("media is not empty (erase first)"),
Ok(None) => { /* EOF mark at BOT, assume tape is empty */ },
Ok(_reader) => bail!("media is not empty (format it first)"),
Err(BlockReadError::EndOfFile) => { /* EOF mark at BOT, assume tape is empty */ },
Err(BlockReadError::EndOfStream) => { /* tape is empty */ },
Err(err) => {
if err.is_errno(nix::errno::Errno::ENOSPC) || err.is_errno(nix::errno::Errno::EIO) {
/* assume tape is empty */
} else {
bail!("media read error - {}", err);
}
}
}
let ctime = proxmox::tools::time::epoch_i64();
let label = MediaLabel {
@ -548,29 +560,38 @@ fn write_media_label(
drive.label_tape(&label)?;
let mut media_set_label = None;
let status_path = Path::new(TAPE_STATUS_DIR);
if let Some(ref pool) = pool {
let media_id = if let Some(ref pool) = pool {
// assign media to pool by writing special media set label
worker.log(format!("Label media '{}' for pool '{}'", label.label_text, pool));
let set = MediaSetLabel::with_data(&pool, [0u8; 16].into(), 0, label.ctime, None);
drive.write_media_set_label(&set, None)?;
media_set_label = Some(set);
} else {
worker.log(format!("Label media '{}' (no pool assignment)", label.label_text));
}
let media_id = MediaId { label, media_set_label };
let status_path = Path::new(TAPE_STATUS_DIR);
let media_id = MediaId { label, media_set_label: Some(set) };
// Create the media catalog
MediaCatalog::overwrite(status_path, &media_id, false)?;
let mut inventory = Inventory::load(status_path)?;
let mut inventory = Inventory::new(status_path);
inventory.store(media_id.clone(), false)?;
media_id
} else {
worker.log(format!("Label media '{}' (no pool assignment)", label.label_text));
let media_id = MediaId { label, media_set_label: None };
// Create the media catalog
MediaCatalog::overwrite(status_path, &media_id, false)?;
let mut inventory = Inventory::new(status_path);
inventory.store(media_id.clone(), false)?;
media_id
};
drive.rewind()?;
match drive.read_label() {
@ -705,14 +726,24 @@ pub async fn read_label(
if let Err(err) = drive.set_encryption(encrypt_fingerprint) {
// try, but ignore errors. just log to stderr
eprintln!("uable to load encryption key: {}", err);
eprintln!("unable to load encryption key: {}", err);
}
}
if let Some(true) = inventorize {
let state_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(state_path)?;
let mut inventory = Inventory::new(state_path);
if let Some(MediaSetLabel { ref pool, ref uuid, ..}) = media_id.media_set_label {
let _pool_lock = lock_media_pool(state_path, pool)?;
let _lock = lock_media_set(state_path, uuid, None)?;
MediaCatalog::destroy_unrelated_catalog(state_path, &media_id)?;
inventory.store(media_id, false)?;
} else {
let _lock = lock_unassigned_media_pool(state_path)?;
MediaCatalog::destroy(state_path, &media_id.label.uuid)?;
inventory.store(media_id, false)?;
};
}
flat
@ -760,9 +791,9 @@ pub fn clean_drive(
changer.clean_drive()?;
if let Ok(drive_config) = config.lookup::<LinuxTapeDrive>("linux", &drive) {
if let Ok(drive_config) = config.lookup::<LtoTapeDrive>("lto", &drive) {
// Note: clean_drive unloads the cleaning media, so we cannot use drive_config.open
let mut handle = LinuxTapeHandle::new(open_linux_tape_device(&drive_config.path)?);
let mut handle = LtoTapeHandle::new(open_lto_tape_device(&drive_config.path)?)?;
// test for critical tape alert flags
if let Ok(alert_flags) = handle.tape_alert_flags() {
@ -782,7 +813,7 @@ pub fn clean_drive(
}
}
worker.log("Drive cleaned sucessfully");
worker.log("Drive cleaned successfully");
Ok(())
},
@ -943,11 +974,21 @@ pub fn update_inventory(
}
Ok((Some(media_id), _key_config)) => {
if label_text != media_id.label.label_text {
worker.warn(format!("label text missmatch ({} != {})", label_text, media_id.label.label_text));
worker.warn(format!("label text mismatch ({} != {})", label_text, media_id.label.label_text));
continue;
}
worker.log(format!("inventorize media '{}' with uuid '{}'", label_text, media_id.label.uuid));
if let Some(MediaSetLabel { ref pool, ref uuid, ..}) = media_id.media_set_label {
let _pool_lock = lock_media_pool(state_path, pool)?;
let _lock = lock_media_set(state_path, uuid, None)?;
MediaCatalog::destroy_unrelated_catalog(state_path, &media_id)?;
inventory.store(media_id, false)?;
} else {
let _lock = lock_unassigned_media_pool(state_path)?;
MediaCatalog::destroy(state_path, &media_id.label.uuid)?;
inventory.store(media_id, false)?;
};
}
}
changer.unload_media(None)?;
@ -1012,7 +1053,10 @@ fn barcode_label_media_worker(
) -> Result<(), Error> {
let (mut changer, changer_name) = required_media_changer(drive_config, &drive)?;
let label_text_list = changer.online_media_label_texts()?;
let mut label_text_list = changer.online_media_label_texts()?;
// make sure we label them in the right order
label_text_list.sort();
let state_path = Path::new(TAPE_STATUS_DIR);
@ -1044,20 +1088,17 @@ fn barcode_label_media_worker(
drive.rewind()?;
match drive.read_next_file() {
Ok(Some(_file)) => {
worker.log(format!("media '{}' is not empty (erase first)", label_text));
Ok(_reader) => {
worker.log(format!("media '{}' is not empty (format it first)", label_text));
continue;
}
Ok(None) => { /* EOF mark at BOT, assume tape is empty */ },
Err(err) => {
if err.is_errno(nix::errno::Errno::ENOSPC) || err.is_errno(nix::errno::Errno::EIO) {
/* assume tape is empty */
} else {
worker.warn(format!("media '{}' read error (maybe not empty - erase first)", label_text));
Err(BlockReadError::EndOfFile) => { /* EOF mark at BOT, assume tape is empty */ },
Err(BlockReadError::EndOfStream) => { /* tape is empty */ },
Err(_err) => {
worker.warn(format!("media '{}' read error (maybe not empty - format it first)", label_text));
continue;
}
}
}
let ctime = proxmox::tools::time::epoch_i64();
let label = MediaLabel {
@ -1097,7 +1138,7 @@ pub async fn cartridge_memory(drive: String) -> Result<Vec<MamAttribute>, Error>
drive.clone(),
"reading cartridge memory".to_string(),
move |config| {
let drive_config: LinuxTapeDrive = config.lookup("linux", &drive)?;
let drive_config: LtoTapeDrive = config.lookup("lto", &drive)?;
let mut handle = drive_config.open()?;
handle.cartridge_memory()
@ -1127,7 +1168,7 @@ pub async fn volume_statistics(drive: String) -> Result<Lp17VolumeStatistics, Er
drive.clone(),
"reading volume statistics".to_string(),
move |config| {
let drive_config: LinuxTapeDrive = config.lookup("linux", &drive)?;
let drive_config: LtoTapeDrive = config.lookup("lto", &drive)?;
let mut handle = drive_config.open()?;
handle.volume_statistics()
@ -1145,24 +1186,24 @@ pub async fn volume_statistics(drive: String) -> Result<Lp17VolumeStatistics, Er
},
},
returns: {
type: LinuxDriveAndMediaStatus,
type: LtoDriveAndMediaStatus,
},
access: {
permission: &Permission::Privilege(&["tape", "device", "{drive}"], PRIV_TAPE_AUDIT, false),
},
)]
/// Get drive/media status
pub async fn status(drive: String) -> Result<LinuxDriveAndMediaStatus, Error> {
pub async fn status(drive: String) -> Result<LtoDriveAndMediaStatus, Error> {
run_drive_blocking_task(
drive.clone(),
"reading drive status".to_string(),
move |config| {
let drive_config: LinuxTapeDrive = config.lookup("linux", &drive)?;
let drive_config: LtoTapeDrive = config.lookup("lto", &drive)?;
// Note: use open_linux_tape_device, because this also works if no medium loaded
let file = open_linux_tape_device(&drive_config.path)?;
// Note: use open_lto_tape_device, because this also works if no medium loaded
let file = open_lto_tape_device(&drive_config.path)?;
let mut handle = LinuxTapeHandle::new(file);
let mut handle = LtoTapeHandle::new(file)?;
handle.get_drive_and_media_status()
}
@ -1181,6 +1222,11 @@ pub async fn status(drive: String) -> Result<LinuxDriveAndMediaStatus, Error> {
type: bool,
optional: true,
},
scan: {
description: "Re-read the whole tape to reconstruct the catalog instead of restoring saved versions.",
type: bool,
optional: true,
},
verbose: {
description: "Verbose mode - log all found chunks.",
type: bool,
@ -1199,11 +1245,13 @@ pub async fn status(drive: String) -> Result<LinuxDriveAndMediaStatus, Error> {
pub fn catalog_media(
drive: String,
force: Option<bool>,
scan: Option<bool>,
verbose: Option<bool>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let verbose = verbose.unwrap_or(false);
let force = force.unwrap_or(false);
let scan = scan.unwrap_or(false);
let upid_str = run_drive_worker(
rpcenv,
@ -1234,19 +1282,22 @@ pub fn catalog_media(
let status_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(status_path)?;
inventory.store(media_id.clone(), false)?;
let mut inventory = Inventory::new(status_path);
let pool = match media_id.media_set_label {
let (_media_set_lock, media_set_uuid) = match media_id.media_set_label {
None => {
worker.log("media is empty");
let _lock = lock_unassigned_media_pool(status_path)?;
MediaCatalog::destroy(status_path, &media_id.label.uuid)?;
inventory.store(media_id.clone(), false)?;
return Ok(());
}
Some(ref set) => {
if set.uuid.as_ref() == [0u8;16] { // media is empty
worker.log("media is empty");
let _lock = lock_unassigned_media_pool(status_path)?;
MediaCatalog::destroy(status_path, &media_id.label.uuid)?;
inventory.store(media_id.clone(), false)?;
return Ok(());
}
let encrypt_fingerprint = set.encryption_key_fingerprint.clone()
@ -1254,17 +1305,38 @@ pub fn catalog_media(
drive.set_encryption(encrypt_fingerprint)?;
set.pool.clone()
let _pool_lock = lock_media_pool(status_path, &set.pool)?;
let media_set_lock = lock_media_set(status_path, &set.uuid, None)?;
MediaCatalog::destroy_unrelated_catalog(status_path, &media_id)?;
inventory.store(media_id.clone(), false)?;
(media_set_lock, &set.uuid)
}
};
let _lock = MediaPool::lock(status_path, &pool)?;
if MediaCatalog::exists(status_path, &media_id.label.uuid) && !force {
bail!("media catalog exists (please use --force to overwrite)");
}
restore_media(&worker, &mut drive, &media_id, None, verbose)?;
if !scan {
let media_set = inventory.compute_media_set_members(media_set_uuid)?;
if fast_catalog_restore(&worker, &mut drive, &media_set, &media_id.label.uuid)? {
return Ok(())
}
task_log!(worker, "no catalog found");
}
task_log!(worker, "scanning entire media to reconstruct catalog");
drive.rewind()?;
drive.read_label()?; // skip over labels - we already read them above
let mut checked_chunks = HashMap::new();
restore_media(worker, &mut drive, &media_id, None, &mut checked_chunks, verbose)?;
Ok(())
},
@ -1305,9 +1377,9 @@ pub fn list_drives(
let (config, _) = config::drive::config()?;
let linux_drives = linux_tape_device_list();
let lto_drives = lto_tape_device_list();
let drive_list: Vec<LinuxTapeDrive> = config.convert_to_typed_array("linux")?;
let drive_list: Vec<LtoTapeDrive> = config.convert_to_typed_array("lto")?;
let mut list = Vec::new();
@ -1321,7 +1393,7 @@ pub fn list_drives(
continue;
}
let info = lookup_device_identification(&linux_drives, &drive.path);
let info = lookup_device_identification(&lto_drives, &drive.path);
let state = get_tape_device_state(&config, &drive.name)?;
let entry = DriveListEntry { config: drive, info, state };
list.push(entry);
@ -1353,9 +1425,9 @@ pub const SUBDIRS: SubdirMap = &sorted!([
.post(&API_METHOD_EJECT_MEDIA)
),
(
"erase-media",
"format-media",
&Router::new()
.post(&API_METHOD_ERASE_MEDIA)
.post(&API_METHOD_FORMAT_MEDIA)
),
(
"export-media",
@ -1381,7 +1453,7 @@ pub const SUBDIRS: SubdirMap = &sorted!([
(
"load-slot",
&Router::new()
.put(&API_METHOD_LOAD_SLOT)
.post(&API_METHOD_LOAD_SLOT)
),
(
"cartridge-memory",


@ -1,4 +1,5 @@
use std::path::Path;
use std::collections::HashSet;
use anyhow::{bail, format_err, Error};
use serde::{Serialize, Deserialize};
@ -28,6 +29,7 @@ use crate::{
CHANGER_NAME_SCHEMA,
MediaPoolConfig,
MediaListEntry,
MediaSetListEntry,
MediaStatus,
MediaContentEntry,
VAULT_NAME_SCHEMA,
@ -44,6 +46,74 @@ use crate::{
},
};
#[api(
returns: {
description: "List of media sets.",
type: Array,
items: {
type: MediaSetListEntry,
},
},
access: {
description: "List of media sets filtered by Tape.Audit privileges on pool",
permission: &Permission::Anybody,
},
)]
/// List Media sets
pub async fn list_media_sets(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<MediaSetListEntry>, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let (config, _digest) = config::media_pool::config()?;
let status_path = Path::new(TAPE_STATUS_DIR);
let mut media_sets: HashSet<Uuid> = HashSet::new();
let mut list = Vec::new();
for (_section_type, data) in config.sections.values() {
let pool_name = match data["name"].as_str() {
None => continue,
Some(name) => name,
};
let privs = user_info.lookup_privs(&auth_id, &["tape", "pool", pool_name]);
if (privs & PRIV_TAPE_AUDIT) == 0 {
continue;
}
let config: MediaPoolConfig = config.lookup("pool", pool_name)?;
let changer_name = None; // assume standalone drive
let pool = MediaPool::with_config(status_path, &config, changer_name, true)?;
for media in pool.list_media() {
if let Some(label) = media.media_set_label() {
if media_sets.contains(&label.uuid) {
continue;
}
let media_set_uuid = label.uuid.clone();
let media_set_ctime = label.ctime;
let media_set_name = pool
.generate_media_set_name(&media_set_uuid, config.template.clone())
.unwrap_or_else(|_| media_set_uuid.to_string());
media_sets.insert(media_set_uuid.clone());
list.push(MediaSetListEntry {
media_set_name,
media_set_uuid,
media_set_ctime,
pool: pool_name.to_string(),
});
}
}
}
Ok(list)
}
#[api(
input: {
properties: {
@ -122,14 +192,14 @@ pub async fn list_media(
let config: MediaPoolConfig = config.lookup("pool", pool_name)?;
let changer_name = None; // assume standalone drive
let mut pool = MediaPool::with_config(status_path, &config, changer_name)?;
let mut pool = MediaPool::with_config(status_path, &config, changer_name, true)?;
let current_time = proxmox::tools::time::epoch_i64();
// Call start_write_session, so that we show the same status a
// backup job would see.
pool.force_media_availability();
pool.start_write_session(current_time)?;
pool.start_write_session(current_time, false)?;
for media in pool.list_media() {
let expired = pool.media_is_expired(&media, current_time);
@ -432,9 +502,10 @@ pub fn list_content(
.generate_media_set_name(&set.uuid, template)
.unwrap_or_else(|_| set.uuid.to_string());
let catalog = MediaCatalog::open(status_path, &media_id.label.uuid, false, false)?;
let catalog = MediaCatalog::open(status_path, &media_id, false, false)?;
for snapshot in catalog.snapshot_index().keys() {
for (store, content) in catalog.content() {
for snapshot in content.snapshot_index.keys() {
let backup_dir: BackupDir = snapshot.parse()?;
if let Some(ref backup_type) = filter.backup_type {
@ -453,10 +524,12 @@ pub fn list_content(
media_set_ctime: set.ctime,
seq_nr: set.seq_nr,
snapshot: snapshot.to_owned(),
store: store.to_owned(),
backup_time: backup_dir.backup_time(),
});
}
}
}
Ok(list)
}
@ -497,7 +570,7 @@ pub fn get_media_status(uuid: Uuid) -> Result<MediaStatus, Error> {
/// Update media status (None, 'full', 'damaged' or 'retired')
///
/// It is not allowed to set status to 'writable' or 'unknown' (those
/// are internaly managed states).
/// are internally managed states).
pub fn update_media_status(uuid: Uuid, status: Option<MediaStatus>) -> Result<(), Error> {
let status_path = Path::new(TAPE_STATUS_DIR);
@ -543,6 +616,11 @@ const SUBDIRS: SubdirMap = &[
.get(&API_METHOD_DESTROY_MEDIA)
),
( "list", &MEDIA_LIST_ROUTER ),
(
"media-sets",
&Router::new()
.get(&API_METHOD_LIST_MEDIA_SETS)
),
(
"move",
&Router::new()


@ -15,7 +15,7 @@ use proxmox::{
use crate::{
api2::types::TapeDeviceInfo,
tape::{
linux_tape_device_list,
lto_tape_device_list,
linux_tape_changer_list,
},
};
@ -41,7 +41,7 @@ pub mod restore;
/// Scan tape drives
pub fn scan_drives(_param: Value) -> Result<Vec<TapeDeviceInfo>, Error> {
let list = linux_tape_device_list();
let list = lto_tape_device_list();
Ok(list)
}

File diff suppressed because it is too large.

src/api2/types/acme.rs (new file, 100 lines)

@ -0,0 +1,100 @@
use serde::{Deserialize, Serialize};
use serde_json::Value;
use proxmox::api::{api, schema::{Schema, StringSchema, ApiStringFormat}};
use crate::api2::types::{
DNS_ALIAS_FORMAT, DNS_NAME_FORMAT, PROXMOX_SAFE_ID_FORMAT,
};
#[api(
properties: {
"domain": { format: &DNS_NAME_FORMAT },
"alias": {
optional: true,
format: &DNS_ALIAS_FORMAT,
},
"plugin": {
optional: true,
format: &PROXMOX_SAFE_ID_FORMAT,
},
},
default_key: "domain",
)]
#[derive(Deserialize, Serialize)]
/// A domain entry for an ACME certificate.
pub struct AcmeDomain {
/// The domain to certify for.
pub domain: String,
/// The domain to use for challenges instead of the default acme challenge domain.
///
/// This is useful if you use CNAME entries to redirect `_acme-challenge.*` domains to a
/// different DNS server.
#[serde(skip_serializing_if = "Option::is_none")]
pub alias: Option<String>,
/// The plugin to use to validate this domain.
///
/// Empty means standalone HTTP validation is used.
#[serde(skip_serializing_if = "Option::is_none")]
pub plugin: Option<String>,
}
pub const ACME_DOMAIN_PROPERTY_SCHEMA: Schema = StringSchema::new(
"ACME domain configuration string")
.format(&ApiStringFormat::PropertyString(&AcmeDomain::API_SCHEMA))
.schema();
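Since this schema wraps `AcmeDomain` as a property string with `domain` as the default key, configuration values are single strings; two hypothetical examples (domain, alias and plugin names are made up):
// Illustrative values only, not taken from any real configuration.
const EXAMPLE_STANDALONE_DOMAIN: &str = "example.com";
const EXAMPLE_DNS_PLUGIN_DOMAIN: &str = "example.com,alias=_acme-challenge.example.net,plugin=mydns";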
#[api(
properties: {
name: { type: String },
url: { type: String },
},
)]
/// An ACME directory endpoint with a name and URL.
#[derive(Serialize)]
pub struct KnownAcmeDirectory {
/// The ACME directory's name.
pub name: &'static str,
/// The ACME directory's endpoint URL.
pub url: &'static str,
}
proxmox::api_string_type! {
#[api(format: &PROXMOX_SAFE_ID_FORMAT)]
/// ACME account name.
#[derive(Clone, Eq, PartialEq, Hash, Deserialize, Serialize)]
#[serde(transparent)]
pub struct AcmeAccountName(String);
}
#[api(
properties: {
schema: {
type: Object,
additional_properties: true,
properties: {},
},
type: {
type: String,
},
},
)]
#[derive(Serialize)]
/// Schema for an ACME challenge plugin.
pub struct AcmeChallengeSchema {
/// Plugin ID.
pub id: String,
/// Human readable name, falls back to id.
pub name: String,
/// Plugin Type.
#[serde(rename = "type")]
pub ty: &'static str,
/// The plugin's parameter schema.
pub schema: Value,
}


@ -0,0 +1,15 @@
use serde::{Deserialize, Serialize};
use proxmox::api::api;
#[api()]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// General status information about a running VM file-restore daemon
pub struct RestoreDaemonStatus {
/// VM uptime in seconds
pub uptime: i64,
/// Time left until auto-shutdown. Note that this is meaningless when 'keep-timeout' is
/// not set, since the status call itself will have reset the timer before returning the value.
pub timeout: i64,
}


@ -11,7 +11,6 @@ use crate::{
backup::{
CryptMode,
Fingerprint,
BACKUP_ID_REGEX,
DirEntryAttribute,
CatalogEntryType,
},
@ -34,6 +33,12 @@ pub use userid::{PROXMOX_TOKEN_ID_SCHEMA, PROXMOX_TOKEN_NAME_SCHEMA, PROXMOX_GRO
mod tape;
pub use tape::*;
mod file_restore;
pub use file_restore::*;
mod acme;
pub use acme::*;
// File names: may not contain slashes, may not start with "."
pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {
if name.starts_with('.') {
@ -45,9 +50,25 @@ pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {
Ok(())
});
macro_rules! BACKUP_ID_RE { () => (r"[A-Za-z0-9_][A-Za-z0-9._\-]*") }
macro_rules! BACKUP_TYPE_RE { () => (r"(?:host|vm|ct)") }
macro_rules! BACKUP_TIME_RE {
() => (r"[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z")
}
macro_rules! SNAPSHOT_PATH_REGEX_STR {
() => (
concat!(r"(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), ")/(", BACKUP_TIME_RE!(), r")")
);
}
macro_rules! DNS_LABEL { () => (r"(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)") }
macro_rules! DNS_NAME { () => (concat!(r"(?:(?:", DNS_LABEL!() , r"\.)*", DNS_LABEL!(), ")")) }
macro_rules! DNS_ALIAS_LABEL { () => (r"(?:[a-zA-Z0-9_](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)") }
macro_rules! DNS_ALIAS_NAME {
() => (concat!(r"(?:(?:", DNS_ALIAS_LABEL!() , r"\.)*", DNS_ALIAS_LABEL!(), ")"))
}
macro_rules! CIDR_V4_REGEX_STR { () => (concat!(r"(?:", IPV4RE!(), r"/\d{1,2})$")) }
macro_rules! CIDR_V6_REGEX_STR { () => (concat!(r"(?:", IPV6RE!(), r"/\d{1,3})$")) }
@ -84,6 +105,8 @@ const_regex!{
pub DNS_NAME_REGEX = concat!(r"^", DNS_NAME!(), r"$");
pub DNS_ALIAS_REGEX = concat!(r"^", DNS_ALIAS_NAME!(), r"$");
pub DNS_NAME_OR_IP_REGEX = concat!(r"^(?:", DNS_NAME!(), "|", IPRE!(), r")$");
pub BACKUP_REPO_URL_REGEX = concat!(r"^^(?:(?:(", USER_ID_REGEX_STR!(), "|", APITOKEN_ID_REGEX_STR!(), ")@)?(", DNS_NAME!(), "|", IPRE_BRACKET!() ,"):)?(?:([0-9]{1,5}):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$");
@ -99,6 +122,22 @@ const_regex!{
pub ZPOOL_NAME_REGEX = r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$";
pub UUID_REGEX = r"^[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}$";
pub BACKUP_TYPE_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), r")$");
pub BACKUP_ID_REGEX = concat!(r"^", BACKUP_ID_RE!(), r"$");
pub BACKUP_DATE_REGEX = concat!(r"^", BACKUP_TIME_RE!() ,r"$");
pub GROUP_PATH_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), r")$");
pub SNAPSHOT_PATH_REGEX = concat!(r"^", SNAPSHOT_PATH_REGEX_STR!(), r"$");
pub BACKUP_FILE_REGEX = r"^.*\.([fd]idx|blob)$";
pub DATASTORE_MAP_REGEX = concat!(r"(:?", PROXMOX_SAFE_ID_REGEX_STR!(), r"=)?", PROXMOX_SAFE_ID_REGEX_STR!());
pub TAPE_RESTORE_SNAPSHOT_REGEX = concat!(r"^", PROXMOX_SAFE_ID_REGEX_STR!(), r":", SNAPSHOT_PATH_REGEX_STR!(), r"$");
}
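Expanded, the new SNAPSHOT_PATH_REGEX amounts to `(host|vm|ct)/<backup-id>/<RFC3339 time>` with each component captured. A standalone sketch using the `regex` crate, with the pattern written out by hand from the macros above (an approximation for illustration, not the generated const itself):

```rust
use regex::Regex;

fn main() {
    // Hand-expanded from BACKUP_TYPE_RE!/BACKUP_ID_RE!/BACKUP_TIME_RE!.
    let snapshot_path = Regex::new(concat!(
        r"^(host|vm|ct)/",                                            // backup type
        r"([A-Za-z0-9_][A-Za-z0-9._\-]*)/",                           // backup id
        r"([0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z)$"  // backup time
    ))
    .unwrap();

    let caps = snapshot_path
        .captures("vm/100/2020-06-15T05:18:33Z")
        .expect("path should match");
    assert_eq!(&caps[1], "vm");
    assert_eq!(&caps[2], "100");
    assert_eq!(&caps[3], "2020-06-15T05:18:33Z");
}
```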
pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat =
@ -137,6 +176,9 @@ pub const HOSTNAME_FORMAT: ApiStringFormat =
pub const DNS_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DNS_NAME_REGEX);
pub const DNS_ALIAS_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DNS_ALIAS_REGEX);
pub const DNS_NAME_OR_IP_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DNS_NAME_OR_IP_REGEX);
@ -164,6 +206,12 @@ pub const SUBSCRIPTION_KEY_FORMAT: ApiStringFormat =
pub const BLOCKDEVICE_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&BLOCKDEVICE_NAME_REGEX);
pub const DATASTORE_MAP_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DATASTORE_MAP_REGEX);
pub const TAPE_RESTORE_SNAPSHOT_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&TAPE_RESTORE_SNAPSHOT_REGEX);
pub const PASSWORD_SCHEMA: Schema = StringSchema::new("Password.")
.format(&PASSWORD_FORMAT)
.min_length(1)
@ -356,6 +404,31 @@ pub const DATASTORE_SCHEMA: Schema = StringSchema::new("Datastore name.")
.max_length(32)
.schema();
pub const DATASTORE_MAP_SCHEMA: Schema = StringSchema::new("Datastore mapping.")
.format(&DATASTORE_MAP_FORMAT)
.min_length(3)
.max_length(65)
.type_text("(<source>=)?<target>")
.schema();
pub const DATASTORE_MAP_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Datastore mapping list.", &DATASTORE_MAP_SCHEMA)
.schema();
pub const DATASTORE_MAP_LIST_SCHEMA: Schema = StringSchema::new(
"A list of Datastore mappings (or single datastore), comma separated. \
For example 'a=b,e' maps the source datastore 'a' to target 'b' and \
all other sources to the default 'e'. If no default is given, only the \
specified sources are mapped.")
.format(&ApiStringFormat::PropertyString(&DATASTORE_MAP_ARRAY_SCHEMA))
.schema();
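To make the mapping semantics concrete: entries containing '=' map one source to one target, a bare entry acts as the default target, and without a default any unmapped source is skipped. A hypothetical std-only resolver sketching that behaviour (illustration only, not the parser PBS actually uses):

```rust
use std::collections::HashMap;

// Hypothetical helper: resolve a source datastore against a mapping
// list such as "a=b,e".
fn resolve_target(mapping: &str, source: &str) -> Option<String> {
    let mut explicit = HashMap::new();
    let mut default = None;
    for entry in mapping.split(',') {
        match entry.split_once('=') {
            Some((src, target)) => { explicit.insert(src, target); }
            None => default = Some(entry),
        }
    }
    explicit.get(source).copied().or(default).map(|s| s.to_string())
}

fn main() {
    // "a=b,e": source 'a' maps to 'b', everything else to the default 'e'.
    assert_eq!(resolve_target("a=b,e", "a"), Some("b".to_string()));
    assert_eq!(resolve_target("a=b,e", "c"), Some("e".to_string()));
    // Without a default, unmapped sources are simply not restored.
    assert_eq!(resolve_target("a=b", "c"), None);
}
```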
pub const TAPE_RESTORE_SNAPSHOT_SCHEMA: Schema = StringSchema::new(
"A snapshot in the format: 'store:type/id/time")
.format(&TAPE_RESTORE_SNAPSHOT_FORMAT)
.type_text("store:type/id/time")
.schema();
pub const MEDIA_SET_UUID_SCHEMA: Schema =
StringSchema::new("MediaSet Uuid (We use the all-zero Uuid to reseve an empty media for a specific pool).")
.format(&UUID_FORMAT)
@ -441,6 +514,12 @@ pub const BLOCKDEVICE_NAME_SCHEMA: Schema = StringSchema::new("Block device name
.max_length(64)
.schema();
pub const REALM_ID_SCHEMA: Schema = StringSchema::new("Realm name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(2)
.max_length(32)
.schema();
// Complex type definitions
#[api(
@ -724,9 +803,8 @@ impl Default for GarbageCollectionStatus {
}
}
#[api()]
#[derive(Serialize, Deserialize)]
#[derive(Default, Serialize, Deserialize)]
/// Storage space usage information.
pub struct StorageStatus {
/// Total space (bytes).
@ -1272,7 +1350,7 @@ pub struct APTUpdateInfo {
pub enum Notify {
/// Never send notification
Never,
/// Send notifications for failed and sucessful jobs
/// Send notifications for failed and successful jobs
Always,
/// Send notifications for failed jobs only
Error,
@ -1327,20 +1405,32 @@ pub struct ArchiveEntry {
}
impl ArchiveEntry {
pub fn new(filepath: &[u8], entry_type: &DirEntryAttribute) -> Self {
pub fn new(filepath: &[u8], entry_type: Option<&DirEntryAttribute>) -> Self {
let size = match entry_type {
Some(DirEntryAttribute::File { size, .. }) => Some(*size),
_ => None,
};
Self::new_with_size(filepath, entry_type, size)
}
pub fn new_with_size(
filepath: &[u8],
entry_type: Option<&DirEntryAttribute>,
size: Option<u64>,
) -> Self {
Self {
filepath: base64::encode(filepath),
text: String::from_utf8_lossy(filepath.split(|x| *x == b'/').last().unwrap())
.to_string(),
entry_type: CatalogEntryType::from(entry_type).to_string(),
leaf: !matches!(entry_type, DirEntryAttribute::Directory { .. }),
size: match entry_type {
DirEntryAttribute::File { size, .. } => Some(*size),
_ => None
entry_type: match entry_type {
Some(entry_type) => CatalogEntryType::from(entry_type).to_string(),
None => "v".to_owned(),
},
leaf: !matches!(entry_type, None | Some(DirEntryAttribute::Directory { .. })),
size,
mtime: match entry_type {
DirEntryAttribute::File { mtime, .. } => Some(*mtime),
_ => None
Some(DirEntryAttribute::File { mtime, .. }) => Some(*mtime),
_ => None,
},
}
}
@ -1464,8 +1554,8 @@ impl std::convert::TryFrom<openssl::rsa::Rsa<openssl::pkey::Public>> for RsaPubK
},
}
)]
#[serde(rename_all="kebab-case")]
#[derive(Serialize,Deserialize,Default)]
#[serde(rename_all="kebab-case")]
/// Job Scheduling Status
pub struct JobScheduleStatus {
#[serde(skip_serializing_if="Option::is_none")]
@ -1477,3 +1567,109 @@ pub struct JobScheduleStatus {
#[serde(skip_serializing_if="Option::is_none")]
pub last_run_endtime: Option<i64>,
}
#[api]
#[derive(Serialize, Deserialize, Default)]
#[serde(rename_all = "kebab-case")]
/// Node memory usage counters
pub struct NodeMemoryCounters {
/// Total memory
pub total: u64,
/// Used memory
pub used: u64,
/// Free memory
pub free: u64,
}
#[api]
#[derive(Serialize, Deserialize, Default)]
#[serde(rename_all = "kebab-case")]
/// Node swap usage counters
pub struct NodeSwapCounters {
/// Total swap
pub total: u64,
/// Used swap
pub used: u64,
/// Free swap
pub free: u64,
}
#[api]
#[derive(Serialize,Deserialize,Default)]
#[serde(rename_all = "kebab-case")]
/// Contains general node information such as the fingerprint
pub struct NodeInformation {
/// The SSL Fingerprint
pub fingerprint: String,
}
#[api]
#[derive(Serialize, Deserialize, Default)]
#[serde(rename_all = "kebab-case")]
/// Information about the CPU
pub struct NodeCpuInformation {
/// The CPU model
pub model: String,
/// The number of CPU sockets
pub sockets: usize,
/// The number of CPU cores (incl. threads)
pub cpus: usize,
}
#[api(
properties: {
memory: {
type: NodeMemoryCounters,
},
root: {
type: StorageStatus,
},
swap: {
type: NodeSwapCounters,
},
loadavg: {
type: Array,
items: {
type: Number,
description: "the load",
}
},
cpuinfo: {
type: NodeCpuInformation,
},
info: {
type: NodeInformation,
}
},
)]
#[derive(Serialize, Deserialize, Default)]
#[serde(rename_all = "kebab-case")]
/// The Node status
pub struct NodeStatus {
pub memory: NodeMemoryCounters,
pub root: StorageStatus,
pub swap: NodeSwapCounters,
/// The current uptime of the server.
pub uptime: u64,
/// Load for 1, 5 and 15 minutes.
pub loadavg: [f64; 3],
/// The current kernel version.
pub kversion: String,
/// Total CPU usage since last query.
pub cpu: f64,
/// Total IO wait since last query.
pub wait: f64,
pub cpuinfo: NodeCpuInformation,
pub info: NodeInformation,
}
pub const HTTP_PROXY_SCHEMA: Schema = StringSchema::new(
"HTTP proxy configuration [http://]<host>[:port]")
.format(&ApiStringFormat::VerifyFn(|s| {
proxmox_http::ProxyConfig::parse_proxy_url(s)?;
Ok(())
}))
.min_length(1)
.max_length(128)
.type_text("[http://]<host>[:port]")
.schema();
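The accepted shape is an optional `http://` prefix, a host, and an optional port. A rough std-only sketch of that split (hypothetical helper; the real validation is whatever proxmox_http::ProxyConfig::parse_proxy_url accepts, and this naive version ignores IPv6 bracket syntax):

```rust
// Hypothetical sketch of the "[http://]<host>[:port]" shape.
fn split_proxy(s: &str) -> Option<(&str, Option<u16>)> {
    let rest = s.strip_prefix("http://").unwrap_or(s);
    match rest.rsplit_once(':') {
        Some((host, port)) => Some((host, Some(port.parse().ok()?))),
        None => Some((rest, None)),
    }
}

fn main() {
    assert_eq!(
        split_proxy("http://proxy.example:3128"),
        Some(("proxy.example", Some(3128)))
    );
    assert_eq!(split_proxy("proxy.example"), Some(("proxy.example", None)));
}
```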


@ -21,7 +21,7 @@ pub struct OptionalDeviceIdentification {
#[api()]
#[derive(Debug,Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Kind of devive
/// Kind of device
pub enum DeviceKind {
/// Tape changer (Autoloader, Robot)
Changer,


@ -21,8 +21,8 @@ pub const DRIVE_NAME_SCHEMA: Schema = StringSchema::new("Drive Identifier.")
.max_length(32)
.schema();
pub const LINUX_DRIVE_PATH_SCHEMA: Schema = StringSchema::new(
"The path to a LINUX non-rewinding SCSI tape device (i.e. '/dev/nst0')")
pub const LTO_DRIVE_PATH_SCHEMA: Schema = StringSchema::new(
"The path to a LTO SCSI-generic tape device (i.e. '/dev/sg0')")
.schema();
pub const CHANGER_DRIVENUM_SCHEMA: Schema = IntegerSchema::new(
@ -57,7 +57,7 @@ pub struct VirtualTapeDrive {
schema: DRIVE_NAME_SCHEMA,
},
path: {
schema: LINUX_DRIVE_PATH_SCHEMA,
schema: LTO_DRIVE_PATH_SCHEMA,
},
changer: {
schema: CHANGER_NAME_SCHEMA,
@ -71,8 +71,8 @@ pub struct VirtualTapeDrive {
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Linux SCSI tape driver
pub struct LinuxTapeDrive {
/// Lto SCSI tape driver
pub struct LtoTapeDrive {
pub name: String,
pub path: String,
#[serde(skip_serializing_if="Option::is_none")]
@ -84,7 +84,7 @@ pub struct LinuxTapeDrive {
#[api(
properties: {
config: {
type: LinuxTapeDrive,
type: LtoTapeDrive,
},
info: {
type: OptionalDeviceIdentification,
@ -96,7 +96,7 @@ pub struct LinuxTapeDrive {
/// Drive list entry
pub struct DriveListEntry {
#[serde(flatten)]
pub config: LinuxTapeDrive,
pub config: LtoTapeDrive,
#[serde(flatten)]
pub info: OptionalDeviceIdentification,
/// the state of the drive if locked
@ -119,6 +119,8 @@ pub struct MamAttribute {
#[api()]
#[derive(Serialize,Deserialize,Copy,Clone,Debug)]
pub enum TapeDensity {
/// Unknown (no media loaded)
Unknown,
/// LTO1
LTO1,
/// LTO2
@ -144,6 +146,7 @@ impl TryFrom<u8> for TapeDensity {
fn try_from(value: u8) -> Result<Self, Self::Error> {
let density = match value {
0x00 => TapeDensity::Unknown,
0x40 => TapeDensity::LTO1,
0x42 => TapeDensity::LTO2,
0x44 => TapeDensity::LTO3,
@ -169,29 +172,37 @@ impl TryFrom<u8> for TapeDensity {
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Drive/Media status for Linux SCSI drives.
/// Drive/Media status for Lto SCSI drives.
///
/// Media related data is optional - only set if there is a medium
/// loaded.
pub struct LinuxDriveAndMediaStatus {
pub struct LtoDriveAndMediaStatus {
/// Vendor
pub vendor: String,
/// Product
pub product: String,
/// Revision
pub revision: String,
/// Block size (0 is variable size)
pub blocksize: u32,
/// Compression enabled
pub compression: bool,
/// Drive buffer mode
pub buffer_mode: u8,
/// Tape density
pub density: TapeDensity,
/// Media is write protected
#[serde(skip_serializing_if="Option::is_none")]
pub density: Option<TapeDensity>,
/// Status flags
pub status: String,
/// Linux Driver Options
pub options: String,
pub write_protect: Option<bool>,
/// Tape Alert Flags
#[serde(skip_serializing_if="Option::is_none")]
pub alert_flags: Option<String>,
/// Current file number
#[serde(skip_serializing_if="Option::is_none")]
pub file_number: Option<u32>,
pub file_number: Option<u64>,
/// Current block number
#[serde(skip_serializing_if="Option::is_none")]
pub block_number: Option<u32>,
pub block_number: Option<u64>,
/// Medium Manufacture Date (epoch)
#[serde(skip_serializing_if="Option::is_none")]
pub manufactured: Option<i64>,
@ -212,3 +223,62 @@ pub struct LinuxDriveAndMediaStatus {
#[serde(skip_serializing_if="Option::is_none")]
pub medium_wearout: Option<f64>,
}
#[api()]
/// Volume statistics from SCSI log page 17h
#[derive(Default, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct Lp17VolumeStatistics {
/// Volume mounts (thread count)
pub volume_mounts: u64,
/// Total data sets written
pub volume_datasets_written: u64,
/// Write retries
pub volume_recovered_write_data_errors: u64,
/// Total unrecovered write errors
pub volume_unrecovered_write_data_errors: u64,
/// Total suspended writes
pub volume_write_servo_errors: u64,
/// Total fatal suspended writes
pub volume_unrecovered_write_servo_errors: u64,
/// Total datasets read
pub volume_datasets_read: u64,
/// Total read retries
pub volume_recovered_read_errors: u64,
/// Total unrecovered read errors
pub volume_unrecovered_read_errors: u64,
/// Last mount unrecovered write errors
pub last_mount_unrecovered_write_errors: u64,
/// Last mount unrecovered read errors
pub last_mount_unrecovered_read_errors: u64,
/// Last mount bytes written
pub last_mount_bytes_written: u64,
/// Last mount bytes read
pub last_mount_bytes_read: u64,
/// Lifetime bytes written
pub lifetime_bytes_written: u64,
/// Lifetime bytes read
pub lifetime_bytes_read: u64,
/// Last load write compression ratio
pub last_load_write_compression_ratio: u64,
/// Last load read compression ratio
pub last_load_read_compression_ratio: u64,
/// Medium mount time
pub medium_mount_time: u64,
/// Medium ready time
pub medium_ready_time: u64,
/// Total native capacity
pub total_native_capacity: u64,
/// Total used native capacity
pub total_used_native_capacity: u64,
/// Write protect
pub write_protect: bool,
/// Volume is WORM
pub worm: bool,
/// Beginning of medium passes
pub beginning_of_medium_passes: u64,
/// Middle of medium passes
pub middle_of_tape_passes: u64,
/// Volume serial number
pub serial: String,
}


@ -12,6 +12,26 @@ use crate::api2::types::{
MediaLocation,
};
#[api(
properties: {
"media-set-uuid": {
schema: MEDIA_SET_UUID_SCHEMA,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Media Set list entry
pub struct MediaSetListEntry {
/// Media set name
pub media_set_name: String,
pub media_set_uuid: Uuid,
/// MediaSet creation time stamp
pub media_set_ctime: i64,
/// Media Pool
pub pool: String,
}
#[api(
properties: {
location: {
@ -144,6 +164,8 @@ pub struct MediaContentEntry {
pub seq_nr: u64,
/// Media Pool
pub pool: String,
/// Datastore Name
pub store: String,
/// Backup snapshot
pub snapshot: String,
/// Snapshot creation time (epoch)


@ -13,7 +13,7 @@ pub const PROXMOX_PKG_VERSION: &str =
env!("CARGO_PKG_VERSION_MINOR"),
);
pub const PROXMOX_PKG_RELEASE: &str = env!("CARGO_PKG_VERSION_PATCH");
pub const PROXMOX_PKG_REPOID: &str = env!("CARGO_PKG_REPOSITORY");
pub const PROXMOX_PKG_REPOID: &str = env!("REPOID");
fn get_version(
_param: Value,


@ -14,6 +14,7 @@ use crate::api2::types::{Userid, UsernameRef, RealmRef};
pub trait ProxmoxAuthenticator {
fn authenticate_user(&self, username: &UsernameRef, password: &str) -> Result<(), Error>;
fn store_password(&self, username: &UsernameRef, password: &str) -> Result<(), Error>;
fn remove_password(&self, username: &UsernameRef) -> Result<(), Error>;
}
pub struct PAM();
@ -60,6 +61,11 @@ impl ProxmoxAuthenticator for PAM {
Ok(())
}
// do not remove password for pam users
fn remove_password(&self, _username: &UsernameRef) -> Result<(), Error> {
Ok(())
}
}
pub struct PBS();
@ -132,6 +138,24 @@ impl ProxmoxAuthenticator for PBS {
Ok(())
}
fn remove_password(&self, username: &UsernameRef) -> Result<(), Error> {
let mut data = proxmox::tools::fs::file_get_json(SHADOW_CONFIG_FILENAME, Some(json!({})))?;
if let Some(map) = data.as_object_mut() {
map.remove(username.as_str());
}
let mode = nix::sys::stat::Mode::from_bits_truncate(0o0600);
let options = proxmox::tools::fs::CreateOptions::new()
.perm(mode)
.owner(nix::unistd::ROOT)
.group(nix::unistd::Gid::from_raw(0));
let data = serde_json::to_vec_pretty(&data)?;
proxmox::tools::fs::replace_file(SHADOW_CONFIG_FILENAME, &data, options)?;
Ok(())
}
}
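In isolation, the shadow-file update above is: load the JSON object, remove the user's key, re-serialize. A minimal sketch with serde_json only (made-up data; file handling, ownership and permissions left out):

```rust
use serde_json::json;

fn main() {
    // Made-up shadow data; real entries hold password hashes.
    let mut data = json!({ "alice": "<hash>", "bob": "<hash>" });

    // Drop the entry for the user whose password is being removed.
    if let Some(map) = data.as_object_mut() {
        map.remove("alice");
    }

    let bytes = serde_json::to_vec_pretty(&data).unwrap();
    let text = String::from_utf8(bytes).unwrap();
    assert!(text.contains("bob") && !text.contains("alice"));
}
```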
/// Lookup the authenticator for the specified realm


@ -75,7 +75,7 @@
//!
//! Since PBS allows multiple potentially interfering operations at the
//! same time (e.g. garbage collect, prune, multiple backup creations
//! (only in seperate groups), forget, ...), these need to lock against
//! (only in separate groups), forget, ...), these need to lock against
//! each other in certain scenarios. There is no overarching global lock
//! though, instead always the finest grained lock possible is used,
//! because running these operations concurrently is treated as a feature
@ -238,6 +238,7 @@ pub use fixed_index::*;
mod dynamic_index;
pub use dynamic_index::*;
#[macro_use]
mod backup_info;
pub use backup_info::*;
@ -256,5 +257,5 @@ pub use verify::*;
mod catalog_shell;
pub use catalog_shell::*;
mod async_index_reader;
pub use async_index_reader::*;
mod cached_chunk_reader;
pub use cached_chunk_reader::*;


@ -1,215 +0,0 @@
use std::future::Future;
use std::task::{Poll, Context};
use std::pin::Pin;
use std::io::SeekFrom;
use anyhow::Error;
use futures::future::FutureExt;
use futures::ready;
use tokio::io::{AsyncRead, AsyncSeek, ReadBuf};
use proxmox::sys::error::io_err_other;
use proxmox::io_format_err;
use super::IndexFile;
use super::read_chunk::AsyncReadChunk;
use super::index::ChunkReadInfo;
type ReadFuture<S> = dyn Future<Output = Result<(S, Vec<u8>), Error>> + Send + 'static;
// FIXME: This enum may not be required?
// - Put the `WaitForData` case directly into a `read_future: Option<>`
// - make the read loop as follows:
// * if read_buffer is not empty:
// use it
// * else if read_future is there:
// poll it
// if read: move data to read_buffer
// * else
// create read future
#[allow(clippy::enum_variant_names)]
enum AsyncIndexReaderState<S> {
NoData,
WaitForData(Pin<Box<ReadFuture<S>>>),
HaveData,
}
pub struct AsyncIndexReader<S, I: IndexFile> {
store: Option<S>,
index: I,
read_buffer: Vec<u8>,
current_chunk_offset: u64,
current_chunk_idx: usize,
current_chunk_info: Option<ChunkReadInfo>,
position: u64,
seek_to_pos: i64,
state: AsyncIndexReaderState<S>,
}
// ok because the only public interfaces operates on &mut Self
unsafe impl<S: Sync, I: IndexFile + Sync> Sync for AsyncIndexReader<S, I> {}
impl<S: AsyncReadChunk, I: IndexFile> AsyncIndexReader<S, I> {
pub fn new(index: I, store: S) -> Self {
Self {
store: Some(store),
index,
read_buffer: Vec::with_capacity(1024 * 1024),
current_chunk_offset: 0,
current_chunk_idx: 0,
current_chunk_info: None,
position: 0,
seek_to_pos: 0,
state: AsyncIndexReaderState::NoData,
}
}
}
impl<S, I> AsyncRead for AsyncIndexReader<S, I>
where
S: AsyncReadChunk + Unpin + Sync + 'static,
I: IndexFile + Unpin,
{
fn poll_read(
self: Pin<&mut Self>,
cx: &mut Context,
buf: &mut ReadBuf,
) -> Poll<tokio::io::Result<()>> {
let this = Pin::get_mut(self);
loop {
match &mut this.state {
AsyncIndexReaderState::NoData => {
let (idx, offset) = if this.current_chunk_info.is_some() &&
this.position == this.current_chunk_info.as_ref().unwrap().range.end
{
// optimization for sequential chunk read
let next_idx = this.current_chunk_idx + 1;
(next_idx, 0)
} else {
match this.index.chunk_from_offset(this.position) {
Some(res) => res,
None => return Poll::Ready(Ok(()))
}
};
if idx >= this.index.index_count() {
return Poll::Ready(Ok(()));
}
let info = this
.index
.chunk_info(idx)
.ok_or_else(|| io_format_err!("could not get digest"))?;
this.current_chunk_offset = offset;
this.current_chunk_idx = idx;
let old_info = this.current_chunk_info.replace(info.clone());
if let Some(old_info) = old_info {
if old_info.digest == info.digest {
// hit, chunk is currently in cache
this.state = AsyncIndexReaderState::HaveData;
continue;
}
}
// miss, need to download new chunk
let store = match this.store.take() {
Some(store) => store,
None => {
return Poll::Ready(Err(io_format_err!("could not find store")));
}
};
let future = async move {
store.read_chunk(&info.digest)
.await
.map(move |x| (store, x))
};
this.state = AsyncIndexReaderState::WaitForData(future.boxed());
}
AsyncIndexReaderState::WaitForData(ref mut future) => {
match ready!(future.as_mut().poll(cx)) {
Ok((store, chunk_data)) => {
this.read_buffer = chunk_data;
this.state = AsyncIndexReaderState::HaveData;
this.store = Some(store);
}
Err(err) => {
return Poll::Ready(Err(io_err_other(err)));
}
};
}
AsyncIndexReaderState::HaveData => {
let offset = this.current_chunk_offset as usize;
let len = this.read_buffer.len();
let n = if len - offset < buf.remaining() {
len - offset
} else {
buf.remaining()
};
buf.put_slice(&this.read_buffer[offset..(offset + n)]);
this.position += n as u64;
if offset + n == len {
this.state = AsyncIndexReaderState::NoData;
} else {
this.current_chunk_offset += n as u64;
this.state = AsyncIndexReaderState::HaveData;
}
return Poll::Ready(Ok(()));
}
}
}
}
}
impl<S, I> AsyncSeek for AsyncIndexReader<S, I>
where
S: AsyncReadChunk + Unpin + Sync + 'static,
I: IndexFile + Unpin,
{
fn start_seek(
self: Pin<&mut Self>,
pos: SeekFrom,
) -> tokio::io::Result<()> {
let this = Pin::get_mut(self);
this.seek_to_pos = match pos {
SeekFrom::Start(offset) => {
offset as i64
},
SeekFrom::End(offset) => {
this.index.index_bytes() as i64 + offset
},
SeekFrom::Current(offset) => {
this.position as i64 + offset
}
};
Ok(())
}
fn poll_complete(
self: Pin<&mut Self>,
_cx: &mut Context<'_>,
) -> Poll<tokio::io::Result<u64>> {
let this = Pin::get_mut(self);
let index_bytes = this.index.index_bytes();
if this.seek_to_pos < 0 {
return Poll::Ready(Err(io_format_err!("cannot seek to negative values")));
} else if this.seek_to_pos > index_bytes as i64 {
this.position = index_bytes;
} else {
this.position = this.seek_to_pos as u64;
}
// even if seeking within one chunk, we need to go to NoData to
// recalculate the current_chunk_offset (data is cached anyway)
this.state = AsyncIndexReaderState::NoData;
Poll::Ready(Ok(this.position))
}
}


@ -3,31 +3,19 @@ use crate::tools;
use anyhow::{bail, format_err, Error};
use std::os::unix::io::RawFd;
use std::path::{PathBuf, Path};
use std::path::{Path, PathBuf};
use proxmox::const_regex;
use crate::api2::types::{
BACKUP_ID_REGEX,
BACKUP_TYPE_REGEX,
BACKUP_DATE_REGEX,
GROUP_PATH_REGEX,
SNAPSHOT_PATH_REGEX,
BACKUP_FILE_REGEX,
};
use super::manifest::MANIFEST_BLOB_NAME;
macro_rules! BACKUP_ID_RE { () => (r"[A-Za-z0-9_][A-Za-z0-9._\-]*") }
macro_rules! BACKUP_TYPE_RE { () => (r"(?:host|vm|ct)") }
macro_rules! BACKUP_TIME_RE { () => (r"[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z") }
const_regex!{
BACKUP_FILE_REGEX = r"^.*\.([fd]idx|blob)$";
BACKUP_TYPE_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), r")$");
pub BACKUP_ID_REGEX = concat!(r"^", BACKUP_ID_RE!(), r"$");
BACKUP_DATE_REGEX = concat!(r"^", BACKUP_TIME_RE!() ,r"$");
GROUP_PATH_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), r")$");
SNAPSHOT_PATH_REGEX = concat!(
r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), ")/(", BACKUP_TIME_RE!(), r")$");
}
/// BackupGroup is a directory containing a list of BackupDir
#[derive(Debug, Eq, PartialEq, Hash, Clone)]
pub struct BackupGroup {
@ -38,7 +26,6 @@ pub struct BackupGroup {
}
impl std::cmp::Ord for BackupGroup {
fn cmp(&self, other: &Self) -> std::cmp::Ordering {
let type_order = self.backup_type.cmp(&other.backup_type);
if type_order != std::cmp::Ordering::Equal {
@ -63,9 +50,11 @@ impl std::cmp::PartialOrd for BackupGroup {
}
impl BackupGroup {
pub fn new<T: Into<String>, U: Into<String>>(backup_type: T, backup_id: U) -> Self {
Self { backup_type: backup_type.into(), backup_id: backup_id.into() }
Self {
backup_type: backup_type.into(),
backup_id: backup_id.into(),
}
}
pub fn backup_type(&self) -> &str {
@ -77,7 +66,6 @@ impl BackupGroup {
}
pub fn group_path(&self) -> PathBuf {
let mut relative_path = PathBuf::new();
relative_path.push(&self.backup_type);
@ -88,46 +76,65 @@ impl BackupGroup {
}
pub fn list_backups(&self, base_path: &Path) -> Result<Vec<BackupInfo>, Error> {
let mut list = vec![];
let mut path = base_path.to_owned();
path.push(self.group_path());
tools::scandir(libc::AT_FDCWD, &path, &BACKUP_DATE_REGEX, |l2_fd, backup_time, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
tools::scandir(
libc::AT_FDCWD,
&path,
&BACKUP_DATE_REGEX,
|l2_fd, backup_time, file_type| {
if file_type != nix::dir::Type::Directory {
return Ok(());
}
let backup_dir = BackupDir::with_rfc3339(&self.backup_type, &self.backup_id, backup_time)?;
let backup_dir =
BackupDir::with_rfc3339(&self.backup_type, &self.backup_id, backup_time)?;
let files = list_backup_files(l2_fd, backup_time)?;
list.push(BackupInfo { backup_dir, files });
Ok(())
})?;
},
)?;
Ok(list)
}
pub fn last_successful_backup(&self, base_path: &Path) -> Result<Option<i64>, Error> {
let mut last = None;
let mut path = base_path.to_owned();
path.push(self.group_path());
tools::scandir(libc::AT_FDCWD, &path, &BACKUP_DATE_REGEX, |l2_fd, backup_time, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
tools::scandir(
libc::AT_FDCWD,
&path,
&BACKUP_DATE_REGEX,
|l2_fd, backup_time, file_type| {
if file_type != nix::dir::Type::Directory {
return Ok(());
}
let mut manifest_path = PathBuf::from(backup_time);
manifest_path.push(MANIFEST_BLOB_NAME);
use nix::fcntl::{openat, OFlag};
match openat(l2_fd, &manifest_path, OFlag::O_RDONLY, nix::sys::stat::Mode::empty()) {
match openat(
l2_fd,
&manifest_path,
OFlag::O_RDONLY,
nix::sys::stat::Mode::empty(),
) {
Ok(rawfd) => {
/* manifest exists --> assume backup was successful */
/* close else this leaks! */
nix::unistd::close(rawfd)?;
},
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => { return Ok(()); }
}
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => {
return Ok(());
}
Err(err) => {
bail!("last_successful_backup: unexpected error - {}", err);
}
@ -135,13 +142,16 @@ impl BackupGroup {
let timestamp = proxmox::tools::time::parse_rfc3339(backup_time)?;
if let Some(last_timestamp) = last {
if timestamp > last_timestamp { last = Some(timestamp); }
if timestamp > last_timestamp {
last = Some(timestamp);
}
} else {
last = Some(timestamp);
}
Ok(())
})?;
},
)?;
Ok(last)
}
@ -162,7 +172,8 @@ impl std::str::FromStr for BackupGroup {
///
/// This parses strings like `vm/100`.
fn from_str(path: &str) -> Result<Self, Self::Err> {
let cap = GROUP_PATH_REGEX.captures(path)
let cap = GROUP_PATH_REGEX
.captures(path)
.ok_or_else(|| format_err!("unable to parse backup group path '{}'", path))?;
Ok(Self {
@ -182,11 +193,10 @@ pub struct BackupDir {
/// Backup timestamp
backup_time: i64,
// backup_time as rfc3339
backup_time_string: String
backup_time_string: String,
}
impl BackupDir {
pub fn new<T, U>(backup_type: T, backup_id: U, backup_time: i64) -> Result<Self, Error>
where
T: Into<String>,
@ -196,7 +206,11 @@ impl BackupDir {
BackupDir::with_group(group, backup_time)
}
pub fn with_rfc3339<T,U,V>(backup_type: T, backup_id: U, backup_time_string: V) -> Result<Self, Error>
pub fn with_rfc3339<T, U, V>(
backup_type: T,
backup_id: U,
backup_time_string: V,
) -> Result<Self, Error>
where
T: Into<String>,
U: Into<String>,
@ -205,12 +219,20 @@ impl BackupDir {
let backup_time_string = backup_time_string.into();
let backup_time = proxmox::tools::time::parse_rfc3339(&backup_time_string)?;
let group = BackupGroup::new(backup_type.into(), backup_id.into());
Ok(Self { group, backup_time, backup_time_string })
Ok(Self {
group,
backup_time,
backup_time_string,
})
}
pub fn with_group(group: BackupGroup, backup_time: i64) -> Result<Self, Error> {
let backup_time_string = Self::backup_time_to_string(backup_time)?;
Ok(Self { group, backup_time, backup_time_string })
Ok(Self {
group,
backup_time,
backup_time_string,
})
}
pub fn group(&self) -> &BackupGroup {
@ -226,7 +248,6 @@ impl BackupDir {
}
pub fn relative_path(&self) -> PathBuf {
let mut relative_path = self.group.group_path();
relative_path.push(self.backup_time_string.clone());
@ -247,7 +268,8 @@ impl std::str::FromStr for BackupDir {
///
/// This parses strings like `host/elsa/2020-06-15T05:18:33Z`.
fn from_str(path: &str) -> Result<Self, Self::Err> {
let cap = SNAPSHOT_PATH_REGEX.captures(path)
let cap = SNAPSHOT_PATH_REGEX
.captures(path)
.ok_or_else(|| format_err!("unable to parse backup snapshot path '{}'", path))?;
BackupDir::with_rfc3339(
@ -276,7 +298,6 @@ pub struct BackupInfo {
}
impl BackupInfo {
pub fn new(base_path: &Path, backup_dir: BackupDir) -> Result<BackupInfo, Error> {
let mut path = base_path.to_owned();
path.push(backup_dir.relative_path());
@ -287,19 +308,24 @@ impl BackupInfo {
}
/// Finds the latest backup inside a backup group
pub fn last_backup(base_path: &Path, group: &BackupGroup, only_finished: bool)
-> Result<Option<BackupInfo>, Error>
{
pub fn last_backup(
base_path: &Path,
group: &BackupGroup,
only_finished: bool,
) -> Result<Option<BackupInfo>, Error> {
let backups = group.list_backups(base_path)?;
Ok(backups.into_iter()
Ok(backups
.into_iter()
.filter(|item| !only_finished || item.is_finished())
.max_by_key(|item| item.backup_dir.backup_time()))
}
pub fn sort_list(list: &mut Vec<BackupInfo>, ascendending: bool) {
if ascendending { // oldest first
if ascendending {
// oldest first
list.sort_unstable_by(|a, b| a.backup_dir.backup_time.cmp(&b.backup_dir.backup_time));
} else { // newest first
} else {
// newest first
list.sort_unstable_by(|a, b| b.backup_dir.backup_time.cmp(&a.backup_dir.backup_time));
}
}
@ -316,31 +342,52 @@ impl BackupInfo {
pub fn list_backup_groups(base_path: &Path) -> Result<Vec<BackupGroup>, Error> {
let mut list = Vec::new();
tools::scandir(libc::AT_FDCWD, base_path, &BACKUP_TYPE_REGEX, |l0_fd, backup_type, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
tools::scandir(l0_fd, backup_type, &BACKUP_ID_REGEX, |_, backup_id, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
tools::scandir(
libc::AT_FDCWD,
base_path,
&BACKUP_TYPE_REGEX,
|l0_fd, backup_type, file_type| {
if file_type != nix::dir::Type::Directory {
return Ok(());
}
tools::scandir(
l0_fd,
backup_type,
&BACKUP_ID_REGEX,
|_, backup_id, file_type| {
if file_type != nix::dir::Type::Directory {
return Ok(());
}
list.push(BackupGroup::new(backup_type, backup_id));
Ok(())
})
})?;
},
)
},
)?;
Ok(list)
}
pub fn is_finished(&self) -> bool {
// backup is considered unfinished if there is no manifest
self.files.iter().any(|name| name == super::MANIFEST_BLOB_NAME)
self.files
.iter()
.any(|name| name == super::MANIFEST_BLOB_NAME)
}
}
fn list_backup_files<P: ?Sized + nix::NixPath>(dirfd: RawFd, path: &P) -> Result<Vec<String>, Error> {
fn list_backup_files<P: ?Sized + nix::NixPath>(
dirfd: RawFd,
path: &P,
) -> Result<Vec<String>, Error> {
let mut files = vec![];
tools::scandir(dirfd, path, &BACKUP_FILE_REGEX, |_, filename, file_type| {
if file_type != nix::dir::Type::File { return Ok(()); }
if file_type != nix::dir::Type::File {
return Ok(());
}
files.push(filename.to_owned());
Ok(())
})?;


@ -0,0 +1,189 @@
//! An async and concurrency safe data reader backed by a local LRU cache.
use anyhow::Error;
use futures::future::Future;
use futures::ready;
use tokio::io::{AsyncRead, AsyncSeek, ReadBuf};
use std::io::SeekFrom;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll};
use super::{AsyncReadChunk, IndexFile};
use crate::tools::async_lru_cache::{AsyncCacher, AsyncLruCache};
use proxmox::io_format_err;
use proxmox::sys::error::io_err_other;
struct AsyncChunkCacher<T> {
reader: Arc<T>,
}
impl<T: AsyncReadChunk + Send + Sync + 'static> AsyncCacher<[u8; 32], Arc<Vec<u8>>>
for AsyncChunkCacher<T>
{
fn fetch(
&self,
key: [u8; 32],
) -> Box<dyn Future<Output = Result<Option<Arc<Vec<u8>>>, Error>> + Send> {
let reader = Arc::clone(&self.reader);
Box::new(async move {
AsyncReadChunk::read_chunk(reader.as_ref(), &key)
.await
.map(|x| Some(Arc::new(x)))
})
}
}
/// Allows arbitrary data reads from an Index via an AsyncReadChunk implementation, using an LRU
/// cache internally to cache chunks and provide support for multiple concurrent reads (potentially
/// to the same chunk).
pub struct CachedChunkReader<I: IndexFile, R: AsyncReadChunk + Send + Sync + 'static> {
cache: Arc<AsyncLruCache<[u8; 32], Arc<Vec<u8>>>>,
cacher: AsyncChunkCacher<R>,
index: I,
}
impl<I: IndexFile, R: AsyncReadChunk + Send + Sync + 'static> CachedChunkReader<I, R> {
/// Create a new reader with a local LRU cache containing 'capacity' chunks.
pub fn new(reader: R, index: I, capacity: usize) -> Self {
let cache = Arc::new(AsyncLruCache::new(capacity));
Self::new_with_cache(reader, index, cache)
}
/// Create a new reader with a custom LRU cache. Use this to share a cache between multiple
/// readers.
pub fn new_with_cache(
reader: R,
index: I,
cache: Arc<AsyncLruCache<[u8; 32], Arc<Vec<u8>>>>,
) -> Self {
Self {
cache,
cacher: AsyncChunkCacher {
reader: Arc::new(reader),
},
index,
}
}
/// Read data at a given byte offset into a variable size buffer. Returns the amount of bytes
/// read, which will always be the size of the buffer except when reaching EOF.
pub async fn read_at(&self, buf: &mut [u8], offset: u64) -> Result<usize, Error> {
let size = buf.len();
let mut read: usize = 0;
while read < size {
let cur_offset = offset + read as u64;
if let Some(chunk) = self.index.chunk_from_offset(cur_offset) {
// chunk indices retrieved from chunk_from_offset always resolve to Some(_)
let info = self.index.chunk_info(chunk.0).unwrap();
// will never be None, see AsyncChunkCacher
let data = self.cache.access(info.digest, &self.cacher).await?.unwrap();
let want_bytes = ((info.range.end - cur_offset) as usize).min(size - read);
let slice = &mut buf[read..(read + want_bytes)];
let intra_chunk = chunk.1 as usize;
slice.copy_from_slice(&data[intra_chunk..(intra_chunk + want_bytes)]);
read += want_bytes;
} else {
// EOF
break;
}
}
Ok(read)
}
}
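The per-chunk step inside `read_at` clamps the copy length to whatever remains of both the current chunk and the caller's buffer, then copies from the intra-chunk offset. A standalone sketch of just that arithmetic (hypothetical names, no cache involved):

```rust
// `chunk_range` is the chunk's byte range within the whole index,
// `intra_chunk` the offset of `cur_offset` inside the chunk's data.
fn copy_from_chunk(
    chunk_data: &[u8],
    chunk_range: std::ops::Range<u64>,
    cur_offset: u64,
    out: &mut [u8],
) -> usize {
    let intra_chunk = (cur_offset - chunk_range.start) as usize;
    let want_bytes = ((chunk_range.end - cur_offset) as usize).min(out.len());
    out[..want_bytes].copy_from_slice(&chunk_data[intra_chunk..intra_chunk + want_bytes]);
    want_bytes
}

fn main() {
    // A chunk covering bytes 100..200 of the index; read 32 bytes at offset 180.
    let chunk = vec![7u8; 100];
    let mut buf = [0u8; 32];
    let copied = copy_from_chunk(&chunk, 100..200, 180, &mut buf);
    assert_eq!(copied, 20); // only 20 bytes are left in this chunk
}
```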
impl<I: IndexFile + Send + Sync + 'static, R: AsyncReadChunk + Send + Sync + 'static>
CachedChunkReader<I, R>
{
/// Returns a SeekableCachedChunkReader based on this instance, which implements AsyncSeek and
/// AsyncRead for use in interfaces which require that. Direct use of read_at is preferred
/// otherwise.
pub fn seekable(self) -> SeekableCachedChunkReader<I, R> {
SeekableCachedChunkReader {
index_bytes: self.index.index_bytes(),
reader: Arc::new(self),
position: 0,
read_future: None,
}
}
}
pub struct SeekableCachedChunkReader<
I: IndexFile + Send + Sync + 'static,
R: AsyncReadChunk + Send + Sync + 'static,
> {
reader: Arc<CachedChunkReader<I, R>>,
index_bytes: u64,
position: u64,
read_future: Option<Pin<Box<dyn Future<Output = Result<(Vec<u8>, usize), Error>> + Send>>>,
}
impl<I, R> AsyncSeek for SeekableCachedChunkReader<I, R>
where
I: IndexFile + Send + Sync + 'static,
R: AsyncReadChunk + Send + Sync + 'static,
{
fn start_seek(self: Pin<&mut Self>, pos: SeekFrom) -> tokio::io::Result<()> {
let this = Pin::get_mut(self);
let seek_to_pos = match pos {
SeekFrom::Start(offset) => offset as i64,
SeekFrom::End(offset) => this.index_bytes as i64 + offset,
SeekFrom::Current(offset) => this.position as i64 + offset,
};
if seek_to_pos < 0 {
return Err(io_format_err!("cannot seek to negative values"));
} else if seek_to_pos > this.index_bytes as i64 {
this.position = this.index_bytes;
} else {
this.position = seek_to_pos as u64;
}
Ok(())
}
fn poll_complete(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<tokio::io::Result<u64>> {
Poll::Ready(Ok(self.position))
}
}
impl<I, R> AsyncRead for SeekableCachedChunkReader<I, R>
where
I: IndexFile + Send + Sync + 'static,
R: AsyncReadChunk + Send + Sync + 'static,
{
fn poll_read(
self: Pin<&mut Self>,
cx: &mut Context,
buf: &mut ReadBuf,
) -> Poll<tokio::io::Result<()>> {
let this = Pin::get_mut(self);
let offset = this.position;
let wanted = buf.capacity();
let reader = Arc::clone(&this.reader);
let fut = this.read_future.get_or_insert_with(|| {
Box::pin(async move {
let mut read_buf = vec![0u8; wanted];
let read = reader.read_at(&mut read_buf[..wanted], offset).await?;
Ok((read_buf, read))
})
});
let ret = match ready!(fut.as_mut().poll(cx)) {
Ok((read_buf, read)) => {
buf.put_slice(&read_buf[..read]);
this.position += read as u64;
Ok(())
}
Err(err) => Err(io_err_other(err)),
};
// future completed, drop
this.read_future = None;
Poll::Ready(ret)
}
}


@ -19,9 +19,10 @@ use proxmox::tools::fs::{create_path, CreateOptions};
use pxar::{EntryKind, Metadata};
use crate::backup::catalog::{self, DirEntryAttribute};
use crate::pxar::Flags;
use crate::pxar::fuse::{Accessor, FileEntry};
use crate::pxar::Flags;
use crate::tools::runtime::block_in_place;
use crate::tools::ControlFlow;
type CatalogReader = crate::backup::CatalogReader<std::fs::File>;
@ -998,11 +999,6 @@ impl Shell {
}
}
enum LoopState {
Break,
Continue,
}
struct ExtractorState<'a> {
path: Vec<u8>,
path_len: usize,
@ -1060,8 +1056,8 @@ impl<'a> ExtractorState<'a> {
let entry = match self.read_dir.next() {
Some(entry) => entry,
None => match self.handle_end_of_directory()? {
LoopState::Break => break, // done with root directory
LoopState::Continue => continue,
ControlFlow::Break(()) => break, // done with root directory
ControlFlow::Continue(()) => continue,
},
};
@ -1079,11 +1075,11 @@ impl<'a> ExtractorState<'a> {
Ok(())
}
fn handle_end_of_directory(&mut self) -> Result<LoopState, Error> {
fn handle_end_of_directory(&mut self) -> Result<ControlFlow<()>, Error> {
// go up a directory:
self.read_dir = match self.read_dir_stack.pop() {
Some(r) => r,
None => return Ok(LoopState::Break), // out of root directory
None => return Ok(ControlFlow::Break(())), // out of root directory
};
self.matches = self
@ -1102,7 +1098,7 @@ impl<'a> ExtractorState<'a> {
self.extractor.leave_directory()?;
Ok(LoopState::Continue)
Ok(ControlFlow::CONTINUE)
}
async fn handle_new_directory(


@ -7,6 +7,7 @@ use std::os::unix::io::AsRawFd;
use proxmox::tools::fs::{CreateOptions, create_path, create_dir};
use crate::task_log;
use crate::tools;
use crate::api2::types::GarbageCollectionStatus;
@ -61,7 +62,7 @@ impl ChunkStore {
chunk_dir
}
pub fn create<P>(name: &str, path: P, uid: nix::unistd::Uid, gid: nix::unistd::Gid) -> Result<Self, Error>
pub fn create<P>(name: &str, path: P, uid: nix::unistd::Uid, gid: nix::unistd::Gid, worker: Option<&dyn TaskState>) -> Result<Self, Error>
where
P: Into<PathBuf>,
{
@ -104,7 +105,9 @@ impl ChunkStore {
}
let percentage = (i*100)/(64*1024);
if percentage != last_percentage {
// eprintln!("ChunkStore::create {}%", percentage);
if let Some(worker) = worker {
task_log!(worker, "Chunkstore create: {}%", percentage)
}
last_percentage = percentage;
}
}
@ -452,7 +455,7 @@ impl ChunkStore {
#[test]
fn test_chunk_store1() {
let mut path = std::fs::canonicalize(".").unwrap(); // we need absulute path
let mut path = std::fs::canonicalize(".").unwrap(); // we need absolute path
path.push(".testdir");
if let Err(_e) = std::fs::remove_dir_all(".testdir") { /* ignore */ }
@ -461,7 +464,7 @@ fn test_chunk_store1() {
assert!(chunk_store.is_err());
let user = nix::unistd::User::from_uid(nix::unistd::Uid::current()).unwrap().unwrap();
let chunk_store = ChunkStore::create("test", &path, user.uid, user.gid).unwrap();
let chunk_store = ChunkStore::create("test", &path, user.uid, user.gid, None).unwrap();
let (chunk, digest) = super::DataChunkBuilder::new(&[0u8, 1u8]).build().unwrap();
@ -472,7 +475,7 @@ fn test_chunk_store1() {
assert!(exists);
let chunk_store = ChunkStore::create("test", &path, user.uid, user.gid);
let chunk_store = ChunkStore::create("test", &path, user.uid, user.gid, None);
assert!(chunk_store.is_err());
if let Err(_e) = std::fs::remove_dir_all(".testdir") { /* ignore */ }


@ -52,16 +52,20 @@ impl DataStore {
let mut map = DATASTORE_MAP.lock().unwrap();
if let Some(datastore) = map.get(name) {
// reuse chunk store so that we keep using the same process locker instance!
let chunk_store = if let Some(datastore) = map.get(name) {
// Compare Config - if changed, create new Datastore object!
if datastore.chunk_store.base == path &&
datastore.verify_new == config.verify_new.unwrap_or(false)
{
return Ok(datastore.clone());
}
}
Arc::clone(&datastore.chunk_store)
} else {
Arc::new(ChunkStore::open(name, &config.path)?)
};
let datastore = DataStore::open_with_path(name, &path, config)?;
let datastore = DataStore::open_with_path(chunk_store, config)?;
let datastore = Arc::new(datastore);
map.insert(name.to_string(), datastore.clone());
@ -69,9 +73,19 @@ impl DataStore {
Ok(datastore)
}
fn open_with_path(store_name: &str, path: &Path, config: DataStoreConfig) -> Result<Self, Error> {
let chunk_store = ChunkStore::open(store_name, path)?;
/// removes all datastores that are not configured anymore
pub fn remove_unused_datastores() -> Result<(), Error>{
let (config, _digest) = datastore::config()?;
let mut map = DATASTORE_MAP.lock().unwrap();
// removes all elements that are not in the config
map.retain(|key, _| {
config.sections.contains_key(key)
});
Ok(())
}
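The `retain` above keeps only cache entries whose key is still a configured section. The same pattern in isolation (std-only sketch with made-up store names):

```rust
use std::collections::{HashMap, HashSet};

fn main() {
    let configured: HashSet<&str> = ["store1", "store3"].iter().copied().collect();
    let mut cache: HashMap<String, u32> = [("store1", 1), ("store2", 2), ("store3", 3)]
        .iter()
        .map(|(k, v)| (k.to_string(), *v))
        .collect();

    // Removes "store2", which is no longer configured.
    cache.retain(|key, _| configured.contains(key.as_str()));
    assert_eq!(cache.len(), 2);
}
```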
fn open_with_path(chunk_store: Arc<ChunkStore>, config: DataStoreConfig) -> Result<Self, Error> {
let mut gc_status_path = chunk_store.base_path();
gc_status_path.push(".gc-status");
@ -88,7 +102,7 @@ impl DataStore {
};
Ok(Self {
chunk_store: Arc::new(chunk_store),
chunk_store,
gc_mutex: Mutex::new(()),
last_gc_status: Mutex::new(gc_status),
verify_new: config.verify_new.unwrap_or(false),
@ -153,6 +167,34 @@ impl DataStore {
Ok(out)
}
/// Fast index verification - only check if chunks exists
pub fn fast_index_verification(
&self,
index: &dyn IndexFile,
checked: &mut HashSet<[u8;32]>,
) -> Result<(), Error> {
for pos in 0..index.index_count() {
let info = index.chunk_info(pos).unwrap();
if checked.contains(&info.digest) {
continue;
}
self.stat_chunk(&info.digest).
map_err(|err| {
format_err!(
"fast_index_verification error, stat_chunk {} failed - {}",
proxmox::tools::digest_to_hex(&info.digest),
err,
)
})?;
checked.insert(info.digest);
}
Ok(())
}
pub fn name(&self) -> &str {
self.chunk_store.name()
}
@ -448,7 +490,7 @@ impl DataStore {
if !self.chunk_store.cond_touch_chunk(digest, false)? {
crate::task_warn!(
worker,
"warning: unable to access non-existant chunk {}, required by {:?}",
"warning: unable to access non-existent chunk {}, required by {:?}",
proxmox::tools::digest_to_hex(digest),
file_name,
);
@ -686,6 +728,11 @@ impl DataStore {
}
pub fn stat_chunk(&self, digest: &[u8; 32]) -> Result<std::fs::Metadata, Error> {
let (chunk_path, _digest_str) = self.chunk_store.chunk_path(digest);
std::fs::metadata(chunk_path).map_err(Error::from)
}
pub fn load_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
let (chunk_path, digest_str) = self.chunk_store.chunk_path(digest);
@ -781,4 +828,3 @@ impl DataStore {
self.verify_new
}
}

Some files were not shown because too many files have changed in this diff.