we are not actually pruning the whole datastore, but only a single
group, so use that as the title
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since the api call always starts a real worker, we cannot have a
preview. It would also be very hard to show that for all groups in a
non-confusing way. We reuse the pbsPruneInputPanel and add the dry-run
field there conditionally.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
to prune the whole datastore at once, with the given parameters.
We need a new api call since this can take a while and we need to start
a worker for it. The existing api call is synchronous and returns a list
of removed/kept snapshots.
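Roughly, the control flow looks like this (a standalone sketch;
'spawn_worker' and 'prune_datastore' are illustrative stand-ins, not the
real proxmox-backup worker API):

    use std::thread;
    use std::time::Duration;

    // hypothetical stand-in for the real worker machinery: run the task in
    // the background and hand back an identifier immediately
    fn spawn_worker<F>(worker_type: &str, task: F) -> String
    where
        F: FnOnce() + Send + 'static,
    {
        thread::spawn(task);
        format!("UPID:{}:...", worker_type)
    }

    fn prune_datastore(store: &str) {
        // would iterate over all groups of `store` and prune each one
        println!("pruning all groups of datastore {}", store);
    }

    // the new api call returns the worker's UPID right away, unlike the
    // synchronous per-group call that returns the removed/kept snapshots
    fn api_prune_datastore(store: String) -> String {
        spawn_worker("prune", move || prune_datastore(&store))
    }

    fn main() {
        println!("{}", api_prune_datastore("store1".to_string()));
        thread::sleep(Duration::from_millis(50)); // let the worker finish
    }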
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
check for PRIV_DATASTORE_MODIFY, or, failing that, whether the auth_id
is the backup owner, and skip the group if neither holds.
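A minimal sketch of that gate (the privilege bit and names are
illustrative, not the real ACL code):

    // illustrative bit value, not the real PRIV_DATASTORE_MODIFY constant
    const PRIV_DATASTORE_MODIFY: u64 = 1 << 1;

    fn may_prune_group(privs: u64, auth_id: &str, group_owner: &str) -> bool {
        // full modify privilege always suffices...
        if privs & PRIV_DATASTORE_MODIFY != 0 {
            return true;
        }
        // ...otherwise the caller must own the backup group
        auth_id == group_owner
    }

    fn main() {
        for (privs, auth, owner) in [
            (PRIV_DATASTORE_MODIFY, "admin@pbs", "user@pbs"),
            (0, "user@pbs", "user@pbs"),
            (0, "other@pbs", "user@pbs"),
        ] {
            // groups failing the check are skipped, not an error
            println!("{auth}: {}", may_prune_group(privs, auth, owner));
        }
    }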
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
it is the same as when pruning single groups.
For prune jobs, we never start the worker if no prune option is set, but
if 'prune_datastore' is called from somewhere else, we have to check
that here again.
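Sketched standalone (the real PruneOptions lives in pbs-datastore; this
mirror only shows the guard, with most keep-* fields elided):

    #[derive(Default)]
    struct PruneOptions {
        keep_last: Option<u64>,
        keep_daily: Option<u64>,
        // keep-hourly/weekly/monthly/yearly elided
    }

    impl PruneOptions {
        fn keeps_something(&self) -> bool {
            self.keep_last.unwrap_or(0) > 0 || self.keep_daily.unwrap_or(0) > 0
        }
    }

    fn prune_datastore(opts: &PruneOptions) {
        // prune jobs never get here without options, but direct callers
        // might, so re-check before starting any work
        if !opts.keeps_something() {
            println!("nothing to prune, no keep options set");
            return;
        }
        println!("pruning...");
    }

    fn main() {
        prune_datastore(&PruneOptions::default());
        prune_datastore(&PruneOptions { keep_last: Some(3), ..Default::default() });
    }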
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by using the api macro and reusing the PruneOptions from pbs-datastore.
This means we can now drop the 'add_common_prune_prameters' macro.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by using the api macro on the async method and reusing the PruneOptions
from pbs-datastore with 'flatten: true'
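Schematically it looks like the following (this leans on the proxmox
#[api] macro and is not a standalone example; DATASTORE_SCHEMA, the
handler body and return type are placeholders):

    #[api(
        input: {
            properties: {
                store: { schema: DATASTORE_SCHEMA },
                "prune-options": {
                    type: PruneOptions,
                    flatten: true,
                },
            },
        },
    )]
    // `flatten: true` splices the PruneOptions fields directly into the
    // call's parameter list instead of nesting them under one key
    pub async fn prune_datastore(
        store: String,
        prune_options: PruneOptions,
    ) -> Result<String, Error> {
        // ... start the prune worker and return its UPID
    }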
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
some libraries cannot handle a request with volume tags and DVCID set at
the same time, so we make two separate requests and merge the results,
since we want to keep the vendor/model/serial data.
To avoid overcomplicating the code, add another special type to
ElementType.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The previous assumption was that the tasks returned by the iterator are
sorted by starttime, but that is not actually the case, and could never
have been, since we append tasks to the log when they finish (not when
they start) and running tasks are always iterated first.
To filter correctly (and to simplify the api call) we forgo the
combinators and use a for loop instead. This way we do the since/until
checks only once per task, but have to do the start/limit counting
ourselves.
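A standalone sketch of that loop (TaskEntry and the filter bounds are
stand-ins for the real task archive types):

    struct TaskEntry { starttime: i64 }

    fn filter_tasks(
        tasks: impl Iterator<Item = TaskEntry>,
        since: Option<i64>,
        until: Option<i64>,
        start: usize,
        limit: usize,
    ) -> Vec<TaskEntry> {
        let mut result = Vec::new();
        let mut skipped = 0;
        for task in tasks {
            // the iterator is NOT sorted by starttime, so every entry must
            // be tested; no early exit via sorted-input combinators
            if since.map_or(false, |s| task.starttime < s) { continue; }
            if until.map_or(false, |u| task.starttime > u) { continue; }
            // manual start/limit counting over the *matching* entries
            if skipped < start { skipped += 1; continue; }
            if result.len() >= limit { break; }
            result.push(task);
        }
        result
    }

    fn main() {
        let tasks = [5i64, 1, 9, 3, 7].into_iter().map(|t| TaskEntry { starttime: t });
        let hits = filter_tasks(tasks, Some(2), Some(8), 1, 2);
        // prints [3, 7]: matches 5, 3, 7 in order, skip 1, take 2
        println!("{:?}", hits.iter().map(|t| t.starttime).collect::<Vec<_>>());
    }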
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
LVM replaces any dash '-' in a VG or LV name with two dashes '--' for
the device node created in /dev/mapper/, to keep the character
separating the VG and LV name unambiguous.
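For illustration, mapping a VG/LV pair to its mapper node:

    // double any dashes inside the names, so the single '-' separator
    // between VG and LV stays unambiguous
    fn mapper_node(vg: &str, lv: &str) -> String {
        format!(
            "/dev/mapper/{}-{}",
            vg.replace('-', "--"),
            lv.replace('-', "--")
        )
    }

    fn main() {
        // prints /dev/mapper/pve--data-vm--100--disk--0
        println!("{}", mapper_node("pve-data", "vm-100-disk-0"));
    }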
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
This lock is held during VM startup, so that concurrent calls do not
start the same VM twice. But this means that the timeout needs to
incorporate the time a VM might take to boot, so increase it quite a bit.
This could previously lead to "interrupted system call" errors when
accessing backups with many disks.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
if an error occurs, the snapshot dirs will already have been created,
and we do not clean them up (some might already be finished).
Warn the user that they are not cleaned up.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
otherwise this context is missing in some tasks (e.g. tape restore)
and it is unclear where it came from
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
a single catalog can be over 100MiB, and a media-set can have multiple
catalogs to read (no technical upper limit). On slow disks, this can
take much longer than 30 seconds (the default timeout).
The real solution would be to have some kind of index for only the
gui-relevant part, e.g. a table at the beginning of the catalog, or
alternatively a separate file with that info. Until we have such a
solution, increase the timeout as a stopgap.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
According to crypt(3):
"crypt places its result in a static storage area, which will be
overwritten by subsequent calls to crypt. It is not safe to call crypt
from multiple threads simultaneously."
This means that multiple login calls as a PBS-realm user can collide and
produce intermittent authentication failures. A visible case is for
file-restore, where VMs with many disks lead to just as many auth-calls
at the same time, as the GUI tries to expand each tree element on load.
Instead, use the thread-safe variant 'crypt_r', which places the result
into a pre-allocated buffer of type 'crypt_data'. The C struct is laid
out according to 'lib/crypt.h.in' and the man page mentioned above.
Use the opportunity to make both arguments of the rust 'crypt' function
take a &[u8].
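A sketch of the binding (the struct sizes are the constants from
libxcrypt's lib/crypt.h.in; treat them as an assumption of this sketch
and double-check against the installed header; links against libcrypt):

    use std::ffi::{CStr, CString};
    use std::os::raw::c_char;

    // layout per libxcrypt's lib/crypt.h.in
    #[repr(C)]
    struct CryptData {
        output: [c_char; 384],   // CRYPT_OUTPUT_SIZE
        setting: [c_char; 384],
        input: [c_char; 512],    // CRYPT_MAX_PASSPHRASE_SIZE
        reserved: [c_char; 767],
        initialized: c_char,
        internal: [c_char; 30720],
    }

    #[link(name = "crypt")]
    extern "C" {
        // char *crypt_r(const char *phrase, const char *setting,
        //               struct crypt_data *data);
        fn crypt_r(
            phrase: *const c_char,
            setting: *const c_char,
            data: *mut CryptData,
        ) -> *mut c_char;
    }

    fn crypt(password: &[u8], salt: &[u8]) -> Option<String> {
        let phrase = CString::new(password).ok()?;
        let setting = CString::new(salt).ok()?;
        // a zeroed, per-call buffer: no shared static storage, so
        // concurrent logins can no longer clobber each other's results
        let mut data: Box<CryptData> = Box::new(unsafe { std::mem::zeroed() });
        let res = unsafe { crypt_r(phrase.as_ptr(), setting.as_ptr(), &mut *data) };
        if res.is_null() {
            return None;
        }
        Some(unsafe { CStr::from_ptr(res) }.to_string_lossy().into_owned())
    }

    fn main() {
        // "$5$" selects sha256-crypt; a fixed salt gives a deterministic hash
        println!("{:?}", crypt(b"password", b"$5$salt$"));
    }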
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Some changers do not like the DVCID bit when querying non-drives; this
includes querying 'all' elements.
To circumvent this, we query each type by itself (like mtx does it),
and only add the DVCID bit for drives (Data Transfer Elements).
Reported by a user in the forum:
https://forum.proxmox.com/threads/ibm-3584-ts3500-support.92291/
Also limit each request to 1000 elements, because some changers restrict
the request size when these options are set.
Instead of checking whether the returned data length equals the
allocation_len to decide if more data is available, we count the
returned elements and compare that with the number we requested.
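Sketched standalone, with 'read_element_status' as a fake device
stand-in (names and the element model are illustrative, not the real
SCSI code):

    const CHUNK: u16 = 1000; // some changers clamp larger requests

    // fake device with 2500 elements; returns at most `count` descriptors
    fn read_element_status(start: u16, count: u16) -> Vec<u16> {
        const TOTAL: u16 = 2500;
        (start..TOTAL.min(start.saturating_add(count))).collect()
    }

    fn fetch_all_elements() -> Vec<u16> {
        let mut elements = Vec::new();
        let mut start = 0u16;
        loop {
            let batch = read_element_status(start, CHUNK);
            let got = batch.len() as u16;
            elements.extend(batch);
            // compare the returned element count with the requested count,
            // instead of the old data-len == allocation_len heuristic
            if got < CHUNK {
                break;
            }
            start += got;
        }
        elements
    }

    fn main() {
        println!("fetched {} elements", fetch_all_elements().len());
    }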
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
The new kernel has stricter permission checks on tmpfs directories with
the sticky bit set, so some commands (e.g., proxmox-tape changer status)
fail when executed as root, because the permission check fails when
locking the drive.
This patch moves the drive locks to /run/proxmox-backup/drive-lock.
Note: this is incompatible with the old locking mechanism, so users
should not run tape backups during the update (or a running backup can
fail).
Make the docs target depend directly on the docs-only required binaries
and add a new intermediate ".do-cargo-build" target that is explicitly
not a PHONY target.
That avoids one extra set of full builds.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
as those have a hover effect and use dark grey vs. the quite "harsh"
looking plain black. We need to override the margin though, as otherwise
the floated layout adds another line.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
eslint is configured to not allow using quoted object keys if they
could be just passed in dot notation, e.g.,
wrong: `group["comment"]`
good: `group.comment`
It's not a big problem, but eslint fails the build with the wrong form,
so this needs to be fixed anyway.
Also, rewrite to async, which is shorter and needs less indentation.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Currently done in a slightly hacky way, with a separate API call
following the initial list_snapshots, as we previously didn't call
list_groups at all and instead calculated the groups from the snapshots.
This calls it async and updates the view with group comments when data
arrives. The editor is simply reused with the 'group-notes' API call,
since the semantics are the same.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Stored in an atomically updated 'notes' file in the backup group
directory. Available via dedicated GET/PUT API calls, with the first
line also included in list_groups (similar to list_snapshots).
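The update itself is the classic write-temp-then-rename pattern; a
standalone sketch with illustrative paths and names:

    use std::fs;
    use std::io::Write;
    use std::path::Path;

    fn set_group_notes(group_dir: &Path, notes: &str) -> std::io::Result<()> {
        let tmp = group_dir.join("notes.tmp");
        let target = group_dir.join("notes");
        let mut f = fs::File::create(&tmp)?;
        f.write_all(notes.as_bytes())?;
        f.sync_all()?; // make sure the data hits disk before the rename
        fs::rename(tmp, target) // rename is atomic on POSIX filesystems
    }

    fn get_group_notes(group_dir: &Path) -> String {
        // only the first line is shown in list_groups
        fs::read_to_string(group_dir.join("notes")).unwrap_or_default()
    }

    fn main() -> std::io::Result<()> {
        let dir = std::env::temp_dir();
        set_group_notes(&dir, "first line shown in list_groups\nmore detail...")?;
        println!("{}", get_group_notes(&dir).lines().next().unwrap_or(""));
        Ok(())
    }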
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>