locking during the tests as a regular user failed because we try to
chown to the backup user (which is not always possible).
Instead, do not lock at all, by implementing 'open_backup_lockfile' with
'create_mocked_lock'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This also moves a couple of required utilities such as
logrotate and some file descriptor methods to pbs-tools.
Note that the logrotate usage and run-dir handling should be
improved to work as a regular user as this *should* (IMHO)
be a regular unprivileged command (including running
qemu given the kvm privileges...)
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Adds possibility to recover data from an index file. Options:
- chunks: path to the directory where the chunks are saved
- file: the index file that should be recovered (must be either .fidx or .didx)
- [opt] keyfile: path to a keyfile, if the data was encrypted, a keyfile is
needed
- [opt] skip-crc: boolean, if true, read chunks won't be verified with their
crc-sum, which increases the restore speed by a lot
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Adds possibility to inspect .blob, .fidx and .didx files. For index
files a list of the chunks referenced will be printed in addition to
some other information. .blob files can be decoded into a file or directly
to stdout. Without decode, the tool just prints the size and encryption
mode of the blob file. Options:
- file: path to the file
- [opt] decode: path to a file or stdout (-), if specified, the file will be
decoded into the specified location [only for blob files, no effect
with index files]
- [opt] keyfile: path to a keyfile, needed if decode is specified and the
data was encrypted
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Adds possibility to inspect chunks and find indexes that reference the
chunk. Options:
- chunk: path to the chunk file
- [opt] decode: path to a file or to stdout (-), if specified, the
chunk will be decoded into the specified location
- [opt] digest: needed when searching for references, if set, it will
be used for verification when decoding
- [opt] keyfile: path to a keyfile, needed if decode is specified and
the data was encrypted
- [opt] reference-filter: path in which indexes that reference the
chunk should be searched, can be a group, snapshot or the whole
datastore, if not specified no references will be searched
- [default=true] use-filename-as-digest: use chunk-filename as digest,
if no digest is specified
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Defined a new struct RemoteConfig (without name and password). This makes it
possible to base64-encode the password in the config, but still allow plain
passwords with the API.
otherwise a user might get a task log like this:
-----
...
found 7 groups
TASK OK
-----
which could confuse users as to why no snapshots were backed up
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
it seems that for some actions or in some circumstances, two minutes is
simply too short and the command aborts. Increase the default timeout to
10 minutes.
While it should give most commands enough time to finish, in case of a real
failure the procedure now takes up to 5 times longer, but IMHO that's an
OK tradeoff.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this should make the api call much faster, since it is not reading
the whole catalog anymore
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
For some parts of the ui, we only need the snapshot list from the catalog,
and reading the whole catalog (which can be several hundred MiB) is not
really necessary.
Instead, we write the list of snapshots into a separate .index file. This file
is generated on demand and is much smaller and thus faster to read.
a test for a valid status_page, one with excess data
(in the descriptor as well as in the page as a whole)
and a test with too little data
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if the library sends more data than advertised, simply cut it off,
but if it sends less data, bail out (depending on how much data is
missing, trying to parse it could lead to a panic, so bail out early)
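A minimal sketch of that policy, with hypothetical names ('advertised'
taken from the header, 'data' as received):

    fn clamp_data(advertised: usize, data: &[u8]) -> Result<&[u8], anyhow::Error> {
        if data.len() < advertised {
            // too little data: trying to parse it could panic, so bail out early
            anyhow::bail!("expected {} bytes, got {}", advertised, data.len());
        }
        Ok(&data[..advertised]) // excess data is simply cut off
    }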
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
in 'restore_archive', we reach that 'catalog.commit()' for
* every skipped snapshot (we already call 'commit_if_large' before that)
* every skipped chunk archive (no change in catalog since we do not read
the chunk archive in that case)
* after reading a catalog (no change in catalog)
in all other cases, we call 'commit_if_large' and return early,
meaning that the 'commit' there was executed too often and
unnecessarily, so move it to after the loop over the files, before
finishing the temporary database.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of having a public start/end_chunk_archive and register_chunks,
simply expose a 'register_chunk_archive' method since we always have
a list of chunks wherever we want to add them
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We get the descriptor length from the library and use that in
'chunks_exact', which panics on length 0. Catch that case
and bail out, since that makes no sense here anyway.
This could prevent a panic, in case a library sends wrong data.
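Roughly, as a hedged sketch with hypothetical names:

    fn parse_descriptors(data: &[u8], descriptor_len: usize) -> Result<(), anyhow::Error> {
        // chunks_exact() panics on a chunk size of 0, so catch a
        // zero descriptor length from the device first
        if descriptor_len == 0 {
            anyhow::bail!("got descriptor length of zero");
        }
        for _descriptor in data.chunks_exact(descriptor_len) {
            // parse one fixed-size descriptor ...
        }
        Ok(())
    }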
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
debugging history showed that it's surely nice to have more logs about
when stuff happens (and thus fails)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
now required as we always enforce lock files to be owned by the
backup user, and the restore code uses such code indirectly as the
REST server module is reused from proxmox-backup-server. Once that is
refactored out we may do away with such things, but until then we need to
have a somewhat complete system env.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
instead of 'blindly' trusting the changer to deliver the fields written
in the specification, trust the length data it returns in the header.
we slice the descriptor data into equal-sized chunks of the correct
size, then we do not have to care about the length and empty checks anymore
this also makes the code to read the rest of the page obsolete,
since the next descriptor is on the correct offset anyway
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
to 255. 8 drives per changer was a rather arbitrary limitation and could
well be reached in practice with big libraries.
Although 255 is still an arbitrary limitation, it is much less likely
to be reached in practice.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
pbs-datastore now ended up depending on tokio after all, but
that's fine for now
for the fuse code I added pbs-fuse-loop (has the old
fuse_loop and its 'loopdev' module)
ultimately only binaries should depend on this to avoid the
library link
the only things remaining to move out of the client binary are
the api method return types, those will need to be moved to
pbs-api-types...
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Factor out open_backup_lockfile() method to acquire locks owned by
user backup with permission 0660.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to prune the whole datastore at once, with the given parameters.
We need a new api call since this can take a while and we need to start
a worker for this. The existing api call returns a list of removed/kept
snapshots and is synchronous.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
checks for PRIV_DATASTORE_MODIFY, or else if the auth_id is the backup
owner, and skips the group if not.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
it is the same as when pruning single groups.
for prune_jobs, we never start the worker if there is no prune option set.
but if we want to call 'prune_datastore' from somewhere else, we
have to check it here again
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by using the api macro and reusing the PruneOptions from pbs-datastore
this means we can now drop the 'add_common_prune_prameters' macro
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by using the api macro on the async method and reusing the PruneOptions
from pbs-datastore with 'flatten: true'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
some libraries cannot handle a request with volume tags and DVCID set at
the same time.
So we make 2 separate requests and merge them, since we want to keep
the vendor/model/serial data.
to not overcomplicate the code, add another special type to ElementType
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The previous assumption was that the Tasks returned by the Iterator are
sorted by the starttime, but that is not actually the case, and
could never have been, since we append the tasks into the log when
they are finished (not started) and running tasks are always iterated
first.
To correctly filter (and simplify the api call) we forgo the
combinators, and use a for loop instead. This way we do the
since/until checks only once per Task, but have to do the
start/limit counting ourselves.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
LVM replaces any dashes '-' in an LV or VG name with two '--' for the
created device node in /dev/mapper/, to distinguish them from the
separating character between the VG and LV name.
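A small sketch of the resulting naming scheme (hypothetical helper):

    fn dev_mapper_node(vg: &str, lv: &str) -> String {
        // a single '-' separates VG and LV; dashes inside the names are doubled
        format!("/dev/mapper/{}-{}", vg.replace('-', "--"), lv.replace('-', "--"))
    }

    // dev_mapper_node("pve", "vm-100-disk-0") == "/dev/mapper/pve-vm--100--disk--0"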
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
This lock is held during VM startup, so that multiple calls will not
start VMs twice. But this means that the timeout needs to incorporate
the time it might take a VM to boot, so increase it quite a bit.
This could previously lead to "interrupted system call" errors when
accessing backups with many disks.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
if an error occurs, the snapshot dirs will already be created, and we
do not clean them up (some might already be finished).
Warn the user that they are not cleaned up.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
According to crypt(3):
"crypt places its result in a static storage area, which will be
overwritten by subsequent calls to crypt. It is not safe to call crypt
from multiple threads simultaneously."
This means that multiple login calls as a PBS-realm user can collide and
produce intermittent authentication failures. A visible case is for
file-restore, where VMs with many disks lead to just as many auth-calls
at the same time, as the GUI tries to expand each tree element on load.
Instead, use the thread-safe variant 'crypt_r', which places the result
into a pre-allocated buffer of type 'crypt_data'. The C struct is laid
out according to 'lib/crypt.h.in' and the man page mentioned above.
Use the opportunity and make both arguments to the rust 'crypt' function
take a &[u8].
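A rough FFI sketch of the idea; the real layout follows lib/crypt.h.in,
here an opaque zeroed blob of the libxcrypt struct size (32768 bytes)
stands in, since zero-initialization is what marks the buffer as unused
(link with -lcrypt):

    use std::ffi::{CStr, CString};
    use std::os::raw::c_char;

    #[repr(C)]
    struct CryptData {
        _opaque: [u8; 32768], // sizeof(struct crypt_data) in libxcrypt
    }

    extern "C" {
        fn crypt_r(phrase: *const c_char, setting: *const c_char,
                   data: *mut CryptData) -> *mut c_char;
    }

    fn crypt(password: &[u8], salt: &[u8]) -> Result<String, anyhow::Error> {
        let phrase = CString::new(password)?;
        let setting = CString::new(salt)?;
        // per-call buffer instead of crypt()'s static storage -> thread-safe
        let mut data = Box::new(CryptData { _opaque: [0u8; 32768] });
        let out = unsafe { crypt_r(phrase.as_ptr(), setting.as_ptr(), &mut *data) };
        if out.is_null() {
            anyhow::bail!("crypt_r failed");
        }
        Ok(unsafe { CStr::from_ptr(out) }.to_str()?.to_owned())
    }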
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Some changers do not like the DVCID bit when querying non-drives,
this includes when querying 'all' elements.
To circumvent this, we query each type by itself (like mtx does it),
and only add the DVCID bit for drives (Data Transfer Elements).
Reported by a user in the forum:
https://forum.proxmox.com/threads/ibm-3584-ts3500-support.92291/
and limit to 1000 elements per request.
(Because some changers limit that request with the options we set)
instead of checking if the data len was equal to the allocation_len
for getting more data, we count the returned elements and compare
that with the number we requested
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
The new kernel has stricter checks on tmpfs with the sticky bit set on
directories, so some commands (e.g. proxmox-tape changer status) fail when
executed as root, because permission checks fail when locking the drive.
This patch moves the drive locks to /run/proxmox-backup/drive-lock.
Note: This is incompatible with the old locking mechanism, so users should
not run tape backups during the update (or a running backup can fail).
Stored in atomically-updated 'notes' file in backup group directory.
Available via dedicated GET/PUT API calls, as well as the first line
being included in list_groups (similar to list_snapshots).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
a match expresses the fallback slightly nicer and needs no mut,
which is always nice to avoid.
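For illustration, the shape of the pattern (values hypothetical):

    fn effective_port(configured: Option<u16>, default: u16) -> u16 {
        // instead of `let mut port = default; if let Some(p) = configured { port = p; }`
        match configured {
            Some(p) => p,
            None => default,
        }
    }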
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
modeled like our other section config api calls
two drawbacks of doing it this way:
* we have to copy some api properties again for the update call,
since not all of them are updateable (username-claim)
* we only handle openid for now, which we would have to change
when we add ldap/ad
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
these will be used as parameters/return types for the read/create/etc.
calls for realms
for now we copy the necessary attributes (only from openid) since
our api macros/tools are not good enough to generate the necessary
api definitions for section configs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
it's not used by the client and not part of the client, it
just makes use *of* the client, but is used on the
datastore/server...
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
So callers get more stable results. Most noticeably, the disk list in
the web UI doesn't jump around upon reloading, and while sorting could
be done directly there, this way other callers get the benefit too.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
in preparation to also get the file system type from lsblk.
Co-developed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
While the PVE one "bails" too, it has an eval around those and moves
the error to the message property, so let's do so too to ensure a user
can force an update on a too-old subscription
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the systemd config/unit parsing stays in pbs for now since
that's not usually required and uses our section config
parser
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
move key_derivation to pbs-datastore
pbs-api-types should only contain "basic" types which
* are usually required by clients
* don't depend on pbs-related code directly
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
These are mostly tokio specific "hacks" or "workarounds" we
only really need/want in our binaries without pulling it in
via our library crates.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
the dns plugin config allows for a specified amount of time to wait for
the TXT record to be set and propagated through DNS.
This patch adds a sleep for this amount of time.
The log message was taken from the perl implementation in proxmox-acme
for consistency.
Tested with the powerdns plugin in my test setup.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
During startup most of the stuff is happening in milliseconds (or
less), so the timestamp granularity of seconds made it hard to tell
if the previous command required 990ms or 1ms, which is quite the
difference in the restore daemon context.
Using micros seems not to bring too much additional information, a
millisecond is already an ok lower time resolution for logging, so
switch only to millis for now.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
fixes file restore again.
The new Memcom tracking file lives in `/run/proxmox-backup` and is
always created on REST interaction, as CachedUserInfo uses it to
efficiently track config changes, and such a cache is used in each
REST handle_request.
Further, the Memcom infra expects the base PBS run dir to exist
already, which is an OK assumption to have, but in the file-restore
daemon we have a significantly more minimal environment, and the run
dir was simply not required there, even /run isn't a tmpfs yet.
Fixes fda19dcc6f ("fix CachedUserInfo by using a shared memory version counter")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We already send it to the user via the response body, but log_response
does not have (nor wants to have, FWIW) access to the async body
stream, so pass it through the ErrorMessageExtension mechanism like we
do elsewhere.
Note that this is not only useful for PBS API proxy/daemon but also
the REST server of the file-restore daemon running inside the restore
VM, and it really is *very* helpful to debug things there..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Parses JSON output from 'pvs' and 'lvs' LVM utils and does two passes:
one to scan for thinpools and create a device node for their
metadata_lv, and a second to load all LVs, thin-provisioned or not.
Should support every LV-type that LVM supports, as we only parse LVM
tools and use 'vgscan --mknodes' to create device nodes for us.
Produces a two-layer BucketComponent hierarchy with VGs followed by LVs,
PVs are mapped to their respective disk node.
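A hedged sketch of one such pass, shelling out to the LVM tools with
JSON output (field names as emitted by lvs):

    use std::process::Command;

    fn list_logical_volumes() -> Result<Vec<(String, String)>, anyhow::Error> {
        let output = Command::new("lvs")
            .args(["--reportformat", "json", "--options", "vg_name,lv_name"])
            .output()?;
        let json: serde_json::Value = serde_json::from_slice(&output.stdout)?;
        // reported as {"report": [{"lv": [{"vg_name": ..., "lv_name": ...}]}]}
        let mut lvs = Vec::new();
        for entry in json["report"][0]["lv"].as_array().into_iter().flatten() {
            lvs.push((
                entry["vg_name"].as_str().unwrap_or_default().to_string(),
                entry["lv_name"].as_str().unwrap_or_default().to_string(),
            ));
        }
        Ok(lvs)
    }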
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Prefix zpool mount paths to avoid clashing with other mount namespaces
(like LVM).
Also ignore "already-mounted" error and return it as success instead -
as we always assume that a mount path is unique, this is a safe
assumption, as nothing else could have been mounted here.
This fixes an issue where a mountpoint=legacy subvol might be available
on different disks, and thus have different Bucket instances that don't
share the mountpoint cache, which could lead to an error if the user
tried opening it multiple times on different disks.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
otherwise the path ends in an array ["foo", "bar"] instead of "foo/bar"
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
To support nested BucketComponents, it is necessary to dedup them, as
otherwise two components like:
/foo/bar
/foo/baz
will result in /foo being shown twice at the first hierarchy.
Also make the size property based on index and optional, as for example
/foo in the example above might not have a size, and bar/baz might have
differing sizes.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
since it pulls in lots of additional linked libraries for all binaries
compiled as part of proxmox-backup. it can easily be re-enabled with
`--cfg openid` added to the RUSTFLAGS env variable.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
it's not really needed in the config module, and this makes it easier to
disable the proxmox-openid dependency linkage as a stop-gap measure.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
we try to load the correct media in a loop until we find the correct tape.
when encountering an error or the wrong tape, we want to log that (and send
an email if an address is set) to request the correct tape.
while trying to avoid printing the same errors more than once in a row,
we had at least one case (starting with an empty tape in the drive)
which would not print/send any tape request.
rework that code to use a custom 'TapeRequest' enum, which contains
the state + error message, and a helper that prints and sends an email
when the state changes
this reduces the change check/log to a single variable, instead of 4
(tried, last_media_uuid, last_error, failure_reason)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Add test code to the first locate_file command, compute locate_offset.
Subsequent locate_file commands use that offset.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
we have a static list of filesystems and their capabilities regarding
file attributes and fs features (e.g. sockets/fifos/etc) which also
includes xattrs, acls and fcaps
if we did not know a filesystem by its magic number (for example cephfs),
we did not even attempt to read xattrs, etc.
this patch adds those flags by default to unknown filesystems, and
removes them when we encounter EOPNOTSUPP (to reduce the number
of syscalls)
with this, we should be able to catch xattrs/acls/fcaps on all
(unknown) fs types that support them
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
These require mounting using the regular 'mount' syscall.
Auto-generates an appropriate mount path.
Note that subvols with mountpoint=none cannot be mounted this way, and
would require setting the mountpoint property, which is not possible as
the zpools have to be imported with readonly=on.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Uses the ZFS utils to detect, import and mount zpools. These are
available as a new Bucket type 'zpool'.
Requires some minor changes to the existing disk and partition detection
code, so the ZFS-specific part can use the information gathered in the
previous pass to associate drive names with their 'drive-xxxN.img.fidx'
node.
For detecting size, the zpool has to be imported. This is only done with
pools containing 5 or less disks, as anything else might take too long
(and should seldom be found within VMs).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Even with best efforts at keeping it small, including the ZFS tools
in the initramfs seems to have exhausted the small overhead we had left
- give it a bit more RAM to compensate.
Also disable the ZFS ARC, as it's no use in such a memory constrained
environment, and we cache on the QEMU/rust layer anyway.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
The future needs to be removed from the pending map in any case, even if
it returned an error, else all upcoming calls to access this key will
always return the same error.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
sort the chunks we want to back up to tape by inode, to gain some
speed on spinning disks. this is done per index, not globally.
costs a bit memory, but not too much, about 16 bytes per chunk which
would mean ~4MiB for a 1TiB index with 4MiB chunks.
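A minimal sketch of the per-index ordering (names hypothetical):

    use std::os::unix::fs::MetadataExt;
    use std::path::PathBuf;

    // stat every chunk of one index and return (position, inode) pairs
    // sorted by inode -- reading in that order helps spinning disks
    fn chunk_read_order(chunk_paths: &[PathBuf]) -> std::io::Result<Vec<(usize, u64)>> {
        let mut order = Vec::with_capacity(chunk_paths.len());
        for (pos, path) in chunk_paths.iter().enumerate() {
            order.push((pos, std::fs::metadata(path)?.ino())); // ~16 bytes per chunk
        }
        order.sort_unstable_by_key(|&(_, ino)| ino);
        Ok(order)
    }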
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
so that we can reuse that information
removing the code that adds to the corrupted list is ok, since
'get_chunks_in_order' returns them at the end of the list
and we do the same if the loading fails later in 'verify_index_chunks'
so we still mark them corrupt
(assuming that the load will fail if the stat does)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since the output:
Result: "<UPID>"
is not really interesting, show the task log instead while
the datastore is being created, since creation now runs in a worker
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Setting this to 0 is not just useless, but breaks the logic horribly
enough to cause random segfaults - better forbid this, to avoid someone
else having to debug it again ;)
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
admin/datastore reads linearly only, so no need for cache (capacity of 1
basically means no cache except for the currently active chunk).
mount can do random access too, so cache last 8 chunks for possibly a
mild performance improvement.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Implemented as a separate struct SeekableCachedChunkReader that contains
the original as an Arc, since the read_at future captures the
CachedChunkReader, which would otherwise not work with the lifetimes
required by AsyncRead. This is also the reason we cannot use a shared
read buffer and have to allocate a new one for every read. It also means
that the struct items required for AsyncRead/Seek do not need to be
included in a regular CachedChunkReader.
This is intended as a replacement for AsyncIndexReader, so we have less
code duplication and can utilize the LRU cache there too (even though
actual request concurrency is not supported in these traits).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Supports concurrent 'access' calls to the same key via a
BroadcastFuture. These are stored in a separate HashMap, the LruCache
underneath is only modified once a valid value has been retrieved.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Explicitly test that data will stay available and can be retrieved
immediately via listen(), even if the future producing the data and
notifying the consumers was already run in the past.
Wasn't broken or anything, but helps with understanding IMO.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
in PVE, the logic for how wearout gets read from the smartctl output was
changed from a vendor -> id map to a sorted list of specific
attribute field names.
copy that list to pbs (in the same order), and use that to get the
wearout
in the future we might want to split the disk logic into its own crate
and reuse it in pve
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we skip snapshots that are older than the newest snapshot of the group in
the target datastore; log it so the user knows why it is not synced
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
so that longer-running creates (e.g. on slow storage) do not
run into a timeout and we can follow the creation
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
when we remove a datastore via api/cli, the proxy
sometimes has leftover references to that datastore in its
DATASTORE_MAP which includes an open filehandle on the
'.lock' file
this prevents unmounting/exporting the datastore even after removal,
only a reload/restart of the proxy helped
add a command to our command socket, which removes all non
configured datastores from the map, dropping the open filehandle
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
by implementing a custom error type that is either 'TimeOut' or
'Other'.
In the api, check in the worker loop for exactly 'TimeOut' errors and continue only
then. All other errors lead to an aborted task.
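Sketched, with hypothetical names for everything around the enum:

    enum RecvError {
        TimeOut,
        Other(anyhow::Error),
    }

    fn worker_loop(
        mut next_event: impl FnMut() -> Result<Option<Vec<u8>>, RecvError>,
    ) -> Result<(), anyhow::Error> {
        loop {
            match next_event() {
                Ok(Some(_data)) => { /* handle the event */ }
                Ok(None) => return Ok(()),
                Err(RecvError::TimeOut) => continue, // only timeouts keep waiting
                Err(RecvError::Other(err)) => return Err(err), // aborts the task
            }
        }
    }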
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
removing the backup dir must acquire the snapshot lock, else it can
happen that we remove a snapshot while it is being restored
or backed up to tape
the original commit that adds the force flag
(c9756b40d1)
mentions that prune itself checks if the snapshot is in use,
but I could not find such code, so simply set force to false;
to avoid failing and aborting the prune job, warn if a snapshot
could not be removed and continue
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This reverts commit 75f9f40922, which is
no longer needed now that we use tokio >= 1.6 which contains the proper
fix.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
this is deprecated with rustc 1.52+, and will become a hard error at
some point:
https://github.com/rust-lang/rust/issues/79202
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
from the SspDataEncryptionCapabilityPage
it seems we do not need it, since the EXTDECC flag is only used for
determining if the drive is capable of being configured via
ADI (Automation/Drive Interface) which we do not use at all.
this makes the call work with LTO-4 again
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we want a 'media-set' selector in the gui, this makes it
very easy to do and is not as costly as reusing the media list,
since we do not need to iterate over all media (e.g. unassigned)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by extracting them via the api macro into the function signature
this fixes an issue where given 'since' and 'until' were not
used, since we tried to extract them as 'str' while they were numbers.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
While the issue with vsock packets starving kernel memory is mostly
worked around by the '64k -> 4k buffer' patch in
'proxmox-backup-restore-image', let's be safe and also limit the number
of concurrent transfers. 8 downloads per VM seems like a fair value.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
The extract API call may be active for more than the watchdog timeout,
so a simple ping is not enough.
This adds an "inhibit" API, which will stop the watchdog from completing
as long as at least one WatchdogInhibitor instance is alive. Keep one in
the download task, so it will be dropped once it completes (or errors).
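The RAII idea, as a minimal sketch (the counter and check are
assumptions, not the real implementation):

    use std::sync::atomic::{AtomicUsize, Ordering};

    static INHIBITORS: AtomicUsize = AtomicUsize::new(0);

    // guard: the watchdog may not complete while any instance is alive
    struct WatchdogInhibitor;

    impl WatchdogInhibitor {
        fn new() -> Self {
            INHIBITORS.fetch_add(1, Ordering::SeqCst);
            WatchdogInhibitor
        }
    }

    impl Drop for WatchdogInhibitor {
        fn drop(&mut self) {
            INHIBITORS.fetch_sub(1, Ordering::SeqCst);
        }
    }

    fn watchdog_may_trigger(timeout_reached: bool) -> bool {
        timeout_reached && INHIBITORS.load(Ordering::SeqCst) == 0
    }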
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
See this PR for more info: https://github.com/tokio-rs/tokio/pull/3756
As a workaround use a pair of connected unix sockets - this obviously
incurs some overhead, albeit not measurable on my machine. Once tokio
includes the fix we can go back to a DuplexStream for performance and
simplicity.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Used to specify a filesystem placed directly on a disk, without a
partition table in between. Detected by simply attempting to mount the
disk itself.
A helper "make_dev_node" is extracted to avoid code duplication.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
A bucket might contain multiple (or 0) layers of components in its path
specification, so allow a mapping between bucket type strings and
expected component depth. For partitions, this is 1, as there is only
the partition number layer below the "part" node.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
This can happen if the underlying storage failed, in which case we do
not want to fail the whole API call, as it should report the status
of all datastores. So rather add the error inline to the related
store entry and continue.
Allows to nicely visualize those stores in the gui.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
it's the only PBS-specific part in there, so let's make it
product-agnostic before moving it off to proxmox-http.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
so that a user can delete a whole group at once; until now, the fastest
way to do this was to prune to one snapshot and delete that
code is basically a copy/paste from the snapshot delete, sans
the 'backup-time' parameter
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
so that a user can force a new media set, e.g. if they use the
allocation policy 'continue', but want to manually start a new
media-set.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if the account does not exist, error with its name
if file loading fails, the error includes the full path
if the content fails to parse, show file & parse error
and in each case mention that it's about loading the acme account file
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
syncs behavior with both the displayed state in the PBS
web-interface and the behavior of PVE/PMG.
Without this, a standard setup would result in an error like:
> TASK ERROR: no acme client configured
which was pretty confusing, as the actual error was something else
(no account configured), and the web-interface showed "default" as
selected account, so a user had no idea what actually was wrong and
how to fix it.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
- refactor the combinators,
- make it take a `&T: Serialize` instead of a Value, and
allow sending the raw string via `send_raw_command`.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
return a result with optional fingerprint instead of tuple, allowing
easy extraction of a meaningful error message.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
if the expected fingerprint and the one returned by the server don't
match, print a warning and allow confirmation and proceeding if running
interactively.
previous:
$ proxmox-backup-client ...
Error: error trying to connect: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:../ssl/statem/statem_clnt.c:1915:
new:
$ proxmox-backup-client ...
WARNING: certificate fingerprint does not match expected fingerprint!
expected: ac:cb:6a:bc:d6:b7:b4:77:3e:17:05:d6:b6:29:dd:1f:05:9c:2b:3a:df:84:3b:4d:f9:06:2c:be:da:06:52:12
fingerprint: ab:cb:6a:bc:d6:b7:b4:77:3e:17:05:d6:b6:29:dd:1f:05:9c:2b:3a:df:84:3b:4d:f9:06:2c:be:da:06:52:12
Are you sure you want to continue connecting? (y/n): n
Error: error trying to connect: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:../ssl/statem/statem_clnt.c:1915:
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
this makes it possible to only restore some snapshots from a tape media-set
instead of the whole set. If the user selects only a small part, this will
probably be faster (and definitely uses less space on the target
datastores).
the user has to provide a list of snapshots to restore in the form of
'store:type/group/id'
e.g. 'mystore:ct/100/2021-01-01T00:00:00Z'
we achieve this by first restoring the index to a temp dir, retrieving
a list of chunks, and using the catalog, we generate a list of
media/files that we need to (partially) restore.
finally, we copy the snapshots to the correct dir in the datastore,
and clean up the temp dir
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
and create the 'email' and 'restore_owner' variable at the beginning,
so that we can reuse them and do not have to pass the sources of those
through too many functions
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if the reader thread is already gone, we panic here, resulting in
a nondescript error message, so simply ignore/warn in that case and
return gracefully
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
It may make sense in the future, e.g., if the built-in standalone
type is not enough, e.g., as HTTP**s**, HTTP 2 or even QUIC (HTTP 3)
is wanted in some setups, but for now there's no scenario where one
would profit from adding a new HTTP plugin, especially as it requires
the `data` property to be set, which makes no sense..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we cannot add a plugin with an existing ID so this completion helper
is rather counterproductive...
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It will be reused in a later patch in another module which should not
depend on the actual API implementation (ugly and cyclic)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
especially for the last group, without this the progress would report:
"percentage done: 100.00% (1 of 2 groups, 1 of 1 group snapshots)"
instead of the more logical
"percentage done: 100.00% (2 of 2 groups)"
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Set PBS_QEMU_DEBUG=1 on a command that starts a VM and then connect to
the debug root shell via:
minicom -D \unix#/run/proxmox-backup/file-restore-serial-10.sock
or similar.
Note that this requires 'proxmox-backup-restore-image-debug' to work,
the postinst script is updated to also generate the corresponding image.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
A PCI bus can only support up to 32 devices, so excluding built-in
devices that left us with a maximum of about 25 drives. By adding a new
PCI bridge every 32 devices (starting at bridge ID 2 to avoid conflicts
with automatic bridges), we can theoretically support up to 8096 drives.
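The bus/slot math, sketched:

    // 32 slots per bridge; bridge IDs start at 2 to avoid the automatic buses
    fn drive_pci_location(drive_index: u32) -> (u32, u32) {
        let bridge_id = 2 + drive_index / 32;
        let slot = drive_index % 32;
        (bridge_id, slot)
    }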
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
The guest kernel requires more memory depending on how many disks are
attached. 256 MiB seems to be enough for basically any reasonable and
unreasonable amount of disks though.
For the debug instance, make it 1G, as these are never started automatically
anyway, and need at least 512MB since the initramfs (especially when
including a debug build of the daemon) is substantially bigger.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Helps to clean up a VM that has crashed, is not responding to vsock API
calls, but still has a running QEMU instance.
We always check the process commandline to ensure we don't kill a random
process that took over the PID.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
otherwise, the kernel driver exposes file names as iso 8859-1,
but we want to have them as utf8.
This mapping should always work, since UTF16 can be cleanly converted
to UTF8.
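As background for why the mapping is lossless (an illustration, not the
actual change): ISO 8859-1 maps 1:1 onto the first 256 Unicode code
points, so every byte widens to a valid char:

    fn latin1_to_utf8(bytes: &[u8]) -> String {
        bytes.iter().map(|&b| b as char).collect()
    }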
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if we are given a 'naked' ipv6 without square brackets around it,
we need to add them ourselves, since the address is ambiguous otherwise
when we add the port.
e.g. giving 'fe80::1' as address we arrive at the url (with the default port)
'https://fe80::1:8007/'
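A minimal sketch of the quoting rule (hypothetical helper):

    // bracket a bare IPv6 address so appending ":<port>" stays unambiguous
    fn format_host(address: &str) -> String {
        if address.contains(':') && !address.starts_with('[') {
            format!("[{}]", address)
        } else {
            address.to_string()
        }
    }

    // format_host("fe80::1") == "[fe80::1]", giving "https://[fe80::1]:8007/"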
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
by checking the 'checked_chunks' before trying to write to disk
and by doing the existence check in the parallel handler. This way,
we do not have to check the existence of a chunk multiple times
(if multiple source datastores get restored to the same target
datastore) and also we do not have to wait on the stat before reading
the next chunk.
We have to change the &WorkerTask to an Arc though, otherwise we
cannot log to the worker from the parallel handler
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Split out a separate function scan_chunk_archive() for catalog restores.
Note: Required, because we need to optimize restore_chunk_archive() to
write to the datastore in separate threads (else the tape drive will
stop during restore)
API like in PVE:
GET .../info => current cert information
POST .../custom => upload custom certificate
DELETE .../custom => delete custom certificate
POST .../acme/certificate => order acme certificate
PUT .../acme/certificate => renew expiring acme cert
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This is the highlevel part using proxmox-acme-rs to create
requests and our hyper code to issue them to the acme
server.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
else we sometimes forget to remove it from the 'params' variable
and use that further, running into 'invalid parameter' errors
found by giving the 'output-format' parameter to proxmox-tape status
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by checking for definedness of the label (tapes without barcode
have the empty string as label-text) and falling back to the
source slot for the load action
Note: Changed the load-slot API from PUT to POST
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
This allows mounting XFS partitions with 'dirty' states, like from a
running VM. Otherwise XFS tries to write recovery information, which
fails on a read-only mount.
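A sketch of the kind of mount call involved, assuming the nix crate and
XFS's norecovery option (which skips log recovery on read-only mounts):

    use nix::mount::{mount, MsFlags};

    fn mount_xfs_readonly(source: &str, target: &str) -> nix::Result<()> {
        mount(
            Some(source),
            target,
            Some("xfs"),
            MsFlags::MS_RDONLY,
            Some("norecovery"), // skip log recovery, which would need write access
        )
    }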
Tested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Drive serials have a character limit of 20, longer names like
"drive-virtio0.img.fidx" or "drive-efidisk0.img.fidx" would get cut off.
Fix this by removing the suffix, it is not necessary to uniquely
identify an image.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Some drives will always return the number of bytes given in the
allocation_length field, but correctly report the data len in the mode
sense header. Simply ignore the excess data.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
include the expected and unexpected sizes in the error message,
so that it's easier to debug in case of an error
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
With the vsock-pkt-buffer fix in proxmox-backup-restore-image, we can
use way less memory for the VM without risking any crashes. 128 MiB
seems to be the lowest it will go and still be fully reliable.
While at it, add the "panic=1" argument to the kernel command line, so
in case the kernel *does* run out of memory, it will at least restart
automatically.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Read image sizes (.pxar.fidx/.img.didx) from manifest and partition
sizes from /sys/...
Requires a change to ArchiveEntry, as DirEntryAttribute::Directory
does not have a size associated with it (and that's probably good).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
this way, the api call does not error out when the file is currently
locked (which means the job is running and we do not need
to update the time)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when a user updates a job schedule, we want to save that point in time
to calculate future runs, otherwise when a user updates a schedule to
a time that would have been between the last run and 'now' the
schedule is triggered instantly
for example:
schedule 08:00
last run today 08:00
now it is 12:00
before this patch:
update schedule to 11:00
-> triggered instantly since we calculate from 08:00
after this patch:
update schedule to 11:00
-> triggered tomorrow 11:00 since we calculate from today 12:00
the change in the enum type is ok, since by default serde does not
error on unknown fields and the new field is optional
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if a backup task failed (e.g. it was aborted), show the snapshots
which were successfully backed up in the notification
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
to make the following cryptic error:
proxmox-file-restore failed: Error: Invalid byte 46, offset 5.
more understandable:
proxmox-file-restore failed: Error: Failed base64-decoding path '/root.pxar.didx' - Invalid byte 46, offset 5.
when a user passes in a non-base64 path but sets `--base64`.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
basically the same as commit eeff085d9d
Will be required once we get to use a newer rustc, at least the
client build for archlinux was broken due to this.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
same functionality as crypto_parameters, except it keeps the file
descriptor passed as "keyfd" open (and seeks to the beginning after
reading), if one is given.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
For the actual partitions and blockdevices in a backup, which the
user sees like folders in the file-restore ui
Encoded as "None", to avoid cluttering DirEntryAttribute, where it
wouldn't make any sense to have.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
These can't be entered or restored anyway, and cause issues with catalog
files for example.
Also a clippy fix.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
in some configurations, samba stores NTFS-ACLs in this xattr[0], so
we should back it up (if we can).
Although the 'security' namespace is special (e.g. in use by
selinux, etc.), this value is normally only used by samba and we
should be able to back it up.
to restore it, the user needs at least 'CAP_SYS_ADMIN' rights, otherwise
it cannot be set
0: https://www.samba.org/samba/docs/current/man-html/vfs_acl_xattr.8.html
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Some changers seem to append more data than we expect, but correctly
annotate that size in the subheader.
For each descriptor entry, read as much as the size given in the
subheader (or until the end of the reader), else our position in
the reader is wrong for the next entry, and we will parse
incorrect data.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
only check every 1024'th, which is cheaper to do than a modulo, as we
can just mask the 10 least-significant-bits and check if the result
is zero.
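Sketched:

    // 1024 is a power of two, so `n % 1024 == 0` reduces to masking
    // the 10 least significant bits
    fn should_check(n: u64) -> bool {
        n & 1023 == 0
    }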
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fixes a non-negligible performance regression from commit
7f394c807b
While we skip known-verified chunks in the stat-and-inode-sort loop,
those are only the ones from previous indexes. If there's a repeated
chunk in one index it would get re-verified more often than required.
So, add the check again explicitly to the read+verify loop.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
before reading the chunks from disk in the order of the index file,
stat them first and sort them by inode number.
this can have a very positive impact on read speed on spinning disks,
even with the additional stat'ing of the chunks.
memory footprint should be tolerable, for 1_000_000 chunks
we need about ~16MiB of memory (Vec of 64bit position + 64bit inode)
(assuming 4MiB Chunks, such an index would reference 4TiB of data)
two small benchmarks (single spinner, ext4) here showed an improvement from
~430 seconds to ~330 seconds for a 32GiB fixed index
and from
~160 seconds to ~120 seconds for a 10GiB dynamic index
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when we get an error from the tape, we possibly want to ignore it,
i.e. when the file was incomplete, but we still want to error
out if the error came from e.g. the datastore, so we have to move
the error checking code to the 'next_chunk' call
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Treat filepaths like "/root.pxar.didx" without a trailing slash as
wanting to download the entire archive content instead of erroring. The
zip-creation code already works fine for this scenario.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
in proxmox-backup-proxy, we log and discard any errors on 'accept',
so that we can continue to serve requests
in proxmox-backup-api, we just have the StaticIncoming that accepts,
which will forward any errors from the underlying TcpListener
this patch also logs and discards the errors, like in the proxy.
Otherwise it could happen that if the api-daemon has more files open
than the proxy, it will shut itself down because of a
'too many open files' error if there are many open connections
(the service should also restart on exit I think, but this is
a separate issue)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if a datastore or root is not used directly on the pool dir
(e.g. the installer creates 2 sub datasets ROOT/pbs-1), the info in
/proc/self/mountinfo returns not the pool, but the path to the
dataset, which itself has no iostats in /proc/spl/kstat/zfs/;
only the pool does
so instead of not gathering data at all, gather the info from the
underlying pool instead. if one has multiple datastores on the same
pool, those rrd stats will now be the same for all of them
(instead of empty), similar to 'normal' directories
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>