to prune the whole datastore at once, with the given parameters.
We need a new api call since this can take a while and we need to start
a worker for this. The existing api call returns a list of removed/kept
snapshots and is synchronous.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
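In proxmox-backup such calls start a worker task and hand back its UPID for
the client to poll. A minimal stand-alone sketch of the pattern, with invented
stand-ins (PruneOptions, prune_whole_datastore, the task id format) instead of
the real API:

    use std::thread;

    // hypothetical stand-in for the real prune options type
    struct PruneOptions;

    fn prune_whole_datastore(_store: &str, _opts: &PruneOptions) {
        // long-running prune over all groups of the datastore ...
    }

    // returns a task identifier right away instead of the removed/kept list
    fn prune_datastore_async(store: String, opts: PruneOptions) -> String {
        let task_id = format!("prune-{}", store);
        thread::spawn(move || prune_whole_datastore(&store, &opts));
        task_id
    }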
checks for PRIV_DATASTORE_MODIFY, or else whether the auth_id is the
backup owner, and skips the group if neither holds.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
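A rough illustration of that check, with a placeholder privilege bit and a
simplified owner comparison (the real helpers live in proxmox-backup's
user info and datastore code):

    // placeholder bit for illustration, not the real constant value
    const PRIV_DATASTORE_MODIFY: u64 = 1 << 1;

    fn may_prune_group(user_privs: u64, auth_id: &str, group_owner: &str) -> bool {
        (user_privs & PRIV_DATASTORE_MODIFY) != 0 || auth_id == group_owner
    }

Groups failing this check are skipped instead of aborting the whole prune.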
it is the same as when pruning single groups.
for prune_jobs, we never start the worker if there is no prune option set,
but if we want to call 'prune_datastore' from somewhere else, we
have to check it here again.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
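A minimal stand-in for that check; the field and helper names mirror the
keep-* options but are illustrative, not the pbs-datastore definitions:

    #[derive(Default)]
    struct PruneOptions {
        keep_last: Option<u64>,
        keep_hourly: Option<u64>,
        keep_daily: Option<u64>,
        keep_weekly: Option<u64>,
        keep_monthly: Option<u64>,
        keep_yearly: Option<u64>,
    }

    impl PruneOptions {
        /// True if at least one keep-* option is set, i.e. pruning would do anything.
        fn keeps_something(&self) -> bool {
            self.keep_last.or(self.keep_hourly)
                .or(self.keep_daily).or(self.keep_weekly)
                .or(self.keep_monthly).or(self.keep_yearly)
                .is_some()
        }
    }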
by using the api macro and reusing the PruneOptions from pbs-datastore
this means we can now drop the 'add_common_prune_prameters' macro
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by using the api macro on the async method and reusing the PruneOptions
from pbs-datastore with 'flatten: true'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
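Both changes lean on the api macro's 'flatten: true', which inlines the
PruneOptions fields as top-level parameters of the call, much like serde's
#[serde(flatten)]. A self-contained serde illustration of the effect (not
the api macro itself):

    use serde::Deserialize;

    #[derive(Deserialize)]
    #[serde(rename_all = "kebab-case")]
    struct PruneOptions {
        keep_last: Option<u64>,
        keep_daily: Option<u64>,
    }

    #[derive(Deserialize)]
    struct PruneParams {
        store: String,
        #[serde(flatten)]
        prune_options: PruneOptions,
    }

    fn main() {
        let params: PruneParams =
            serde_json::from_str(r#"{ "store": "tank", "keep-last": 3 }"#).unwrap();
        assert_eq!(params.store, "tank");
        assert_eq!(params.prune_options.keep_last, Some(3));
    }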
some libraries cannot handle a request with volume tags and DVCID set at
the same time.
So we make 2 separate requests and merge them, since we want to keep
the vendor/model/serial data.
to not overcomplicate the code, add another special type to ElementType
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
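A hypothetical shape of that merge, with invented field names: the first
request contributes the volume tags, the second the DVCID
(vendor/model/serial) data, and the per-element results are zipped back
together:

    #[derive(Default)]
    struct DriveElement {
        volume_tag: Option<String>,
        vendor: Option<String>,
        model: Option<String>,
        serial: Option<String>,
    }

    fn merge_element_status(
        with_voltags: Vec<DriveElement>,
        with_dvcid: Vec<DriveElement>,
    ) -> Vec<DriveElement> {
        with_voltags
            .into_iter()
            .zip(with_dvcid)
            .map(|(tags, ids)| DriveElement {
                volume_tag: tags.volume_tag,
                vendor: ids.vendor,
                model: ids.model,
                serial: ids.serial,
            })
            .collect()
    }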
The previous assumption was that the Tasks returned by the Iterator are
sorted by the starttime, but that is not actually the case, and
could never have been, since we append the tasks into the log when
they are finished (not started) and running tasks are always iterated
first.
To correctly filter (and simplify the api call) we forgo the
combinators and use a for loop instead. This way we do the
since/until checks only once per Task, but have to do the
start/limit counting ourselves.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
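A sketch of that loop with simplified types: each task is checked against
since/until exactly once, and the start offset and limit are counted by
hand because the input is not sorted by starttime:

    struct TaskInfo {
        starttime: i64,
        endtime: Option<i64>, // None while the task is still running
    }

    fn filter_tasks(
        tasks: impl Iterator<Item = TaskInfo>,
        since: Option<i64>,
        until: Option<i64>,
        start: usize,
        limit: usize,
    ) -> Vec<TaskInfo> {
        let mut skipped = 0;
        let mut result = Vec::new();
        for info in tasks {
            if let Some(until) = until {
                if info.starttime > until {
                    continue; // started after the requested window
                }
            }
            if let Some(since) = since {
                if info.endtime.map(|end| end < since).unwrap_or(false) {
                    continue; // finished before the requested window
                }
            }
            if skipped < start {
                skipped += 1; // manual 'start' offset
                continue;
            }
            if result.len() >= limit {
                break; // manual 'limit'
            }
            result.push(info);
        }
        result
    }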
LVM replaces any dashes '-' in an LV or VG name with two '--' for the
created device node in /dev/mapper/ to distinguish the separating
character between the VG and LV name.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
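Illustration of the resulting node name (a small helper, not the actual
patch code):

    /// Build the /dev/mapper node name for a VG/LV pair: LVM doubles every
    /// '-' inside the names, so the single '-' separator stays unambiguous.
    /// ("pve", "vm-100-disk-0") -> "pve-vm--100--disk--0"
    fn dev_mapper_name(vg: &str, lv: &str) -> String {
        format!("{}-{}", vg.replace('-', "--"), lv.replace('-', "--"))
    }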
This lock is held during VM startup, so that multiple calls will not
start VMs twice. But this means that the timeout needs to incorporate
the time it might take a VM to boot, so increase it quite a bit.
This could previously lead to "interrupted system call" errors when
accessing backups with many disks.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
if an error occurs, the snapshot dirs will already have been created,
and we do not clean them up (some might already be finished).
Warn the user that they are not cleaned up.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
According to crypt(3):
"crypt places its result in a static storage area, which will be
overwritten by subsequent calls to crypt. It is not safe to call crypt
from multiple threads simultaneously."
This means that multiple login calls as a PBS-realm user can collide and
produce intermittent authentication failures. A visible case is for
file-restore, where VMs with many disks lead to just as many auth-calls
at the same time, as the GUI tries to expand each tree element on load.
Instead, use the thread-safe variant 'crypt_r', which places the result
into a pre-allocated buffer of type 'crypt_data'. The C struct is laid
out according to 'lib/crypt.h.in' and the man page mentioned above.
Use the opportunity and make both arguments to the Rust 'crypt' function
take a &[u8].
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
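A condensed sketch of such a wrapper, assuming the libc and anyhow crates;
the struct sizes follow the layout described above, but treat them as an
assumption rather than a verified ABI:

    use std::ffi::{CStr, CString};
    use anyhow::{bail, Error};

    pub fn crypt(password: &[u8], setting: &[u8]) -> Result<String, Error> {
        #[repr(C)]
        struct CryptData {
            output: [libc::c_char; 384],
            setting: [libc::c_char; 384],
            input: [libc::c_char; 512],
            reserved: [libc::c_char; 767],
            initialized: libc::c_char,
            internal: [libc::c_char; 30720],
        }

        #[link(name = "crypt")]
        extern "C" {
            fn crypt_r(
                phrase: *const libc::c_char,
                setting: *const libc::c_char,
                data: *mut CryptData,
            ) -> *mut libc::c_char;
        }

        let phrase = CString::new(password)?;
        let setting = CString::new(setting)?;

        // per-call buffer instead of crypt()'s shared static storage;
        // boxed because the struct is roughly 32 KiB
        let mut data: Box<CryptData> = Box::new(unsafe { std::mem::zeroed() });

        let result = unsafe { crypt_r(phrase.as_ptr(), setting.as_ptr(), &mut *data) };
        if result.is_null() {
            bail!("crypt_r returned a NULL pointer");
        }
        Ok(unsafe { CStr::from_ptr(result) }.to_str()?.to_string())
    }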
Some changers do not like the DVCID bit when querying non-drives; this
includes querying 'all' elements.
To circumvent this, we query each type by itself (like mtx does it),
and only add the DVCID bit for drives (Data Transfer Elements).
Reported by a user in the forum:
https://forum.proxmox.com/threads/ibm-3584-ts3500-support.92291/
and limit to 1000 elements per request (because some changers limit
that request with the options we set).
Instead of checking whether the data len was equal to the allocation_len
to decide whether to fetch more data, we count the returned elements and
compare that with the number we requested.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
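Generic sketch of the counting logic: request at most 1000 elements per
READ ELEMENT STATUS call and continue until the expected total is reached,
comparing element counts instead of byte lengths (the request closure and
error type are simplified stand-ins):

    fn read_all_elements<T, F>(total: usize, mut request_chunk: F) -> Result<Vec<T>, String>
    where
        F: FnMut(usize, usize) -> Result<Vec<T>, String>, // (first element, count) -> chunk
    {
        const MAX_ELEMENTS: usize = 1000;
        let mut elements: Vec<T> = Vec::new();
        while elements.len() < total {
            let want = (total - elements.len()).min(MAX_ELEMENTS);
            let chunk = request_chunk(elements.len(), want)?;
            if chunk.is_empty() {
                return Err("changer returned no elements".to_string());
            }
            elements.extend(chunk);
        }
        Ok(elements)
    }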
The new kernel has stricter permission checks on tmpfs directories with
the sticky bit set, so some commands (e.g. proxmox-tape changer status)
fail when executed as root, because the permission check fails when
locking the drive.
This patch moves the drive locks to /run/proxmox-backup/drive-lock.
Note: This is incompatible with the old locking mechanism, so users may
not run tape backups during the update (or the running backup can fail).
Stored in an atomically-updated 'notes' file in the backup group directory.
Available via dedicated GET/PUT API calls, as well as the first line
being included in list_groups (similar to list_snapshots).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
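Generic sketch of the atomic update; proxmox-backup uses its own
replace-file helper, but the idea with plain std is the same: write a
temporary file in the same directory, then rename it over the target:

    use std::fs;
    use std::io::Write;
    use std::path::Path;

    fn set_group_notes(group_dir: &Path, notes: &str) -> std::io::Result<()> {
        let tmp = group_dir.join("notes.tmp");
        let mut file = fs::File::create(&tmp)?;
        file.write_all(notes.as_bytes())?;
        file.sync_all()?; // flush the data before the rename makes it visible
        fs::rename(&tmp, group_dir.join("notes")) // atomic on POSIX filesystems
    }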
a match expresses the fallback slightly more nicely and needs no mut,
which is always nice to avoid.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
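Generic before/after of the pattern (not the actual diff):

    fn pick(explicit: Option<u32>, fallback: u32) -> u32 {
        // before: needs a mutable binding
        // let mut value = fallback;
        // if let Some(v) = explicit { value = v; }

        // after: no mut, and the fallback reads as one expression
        match explicit {
            Some(value) => value,
            None => fallback,
        }
    }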
modeled like our other section config api calls
two drawbacks of doing it this way:
* we have to copy some api properties again for the update call,
since not all of them are updateable (username-claim)
* we only handle openid for now, which we would have to change
when we add ldap/ad
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
these will be used as parameters/return types for the read/create/etc.
calls for realms
for now we copy the necessary attributes (only from openid) since
our api macros/tools are not good enough to generate the necessary
api definitions for section configs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
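A hypothetical subset of such a type, written with plain serde for
illustration; the real definitions use the api macro and schemas:

    use serde::{Deserialize, Serialize};

    #[derive(Serialize, Deserialize)]
    #[serde(rename_all = "kebab-case")]
    struct OpenIdRealmConfig {
        realm: String,
        issuer_url: String,
        client_id: String,
        // fixed at creation time, hence excluded from the update parameters
        #[serde(skip_serializing_if = "Option::is_none")]
        username_claim: Option<String>,
    }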
it's not used by the client and is not part of the client; it
just makes use *of* the client, but is used on the
datastore/server side...
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
So callers get more stable results. Most noticeably, the disk list in
the web UI doesn't jump around upon reloading, and while sorting could
be done directly there, this way other callers get the benefit too.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
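Trivial illustration of the change (field name invented): sort by device
name so the returned order is stable across calls:

    struct DiskInfo {
        name: String,
    }

    fn sort_disks(disks: &mut [DiskInfo]) {
        disks.sort_by(|a, b| a.name.cmp(&b.name));
    }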