admin/datastore only reads linearly, so there is no need for a cache
(a capacity of 1 effectively means no cache beyond the currently
active chunk).
mount can do random access too, so cache the last 8 chunks for a
possibly mild performance improvement.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
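A sketch of the caching idea with a hand-rolled LRU (illustrative
only; the in-tree cache type differs):

    use std::collections::{HashMap, VecDeque};

    /// Illustrative LRU chunk cache: capacity 1 keeps only the
    /// currently active chunk, capacity 8 lets random access
    /// reuse recently read chunks.
    struct ChunkCache {
        capacity: usize,
        order: VecDeque<[u8; 32]>,          // most recently used in front
        chunks: HashMap<[u8; 32], Vec<u8>>, // digest -> chunk data
    }

    impl ChunkCache {
        fn new(capacity: usize) -> Self {
            Self { capacity, order: VecDeque::new(), chunks: HashMap::new() }
        }

        fn access(&mut self, digest: [u8; 32], load: impl FnOnce() -> Vec<u8>) -> &Vec<u8> {
            if self.chunks.contains_key(&digest) {
                self.order.retain(|d| d != &digest); // refresh position
            } else {
                if self.chunks.len() >= self.capacity {
                    if let Some(evicted) = self.order.pop_back() {
                        self.chunks.remove(&evicted); // drop least recently used
                    }
                }
                self.chunks.insert(digest, load());
            }
            self.order.push_front(digest);
            self.chunks.get(&digest).unwrap()
        }
    }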
so that longer-running creates (e.g. on slow storage) do not run
into a timeout, and we can follow their creation
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
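The general pattern, sketched without the real WorkerTask API (all
names here are hypothetical): spawn the creation in the background
and hand back a task id the client can follow:

    use std::{thread, time::Duration};

    /// Hypothetical sketch: run the creation in a background worker
    /// and return a task id immediately, instead of blocking the API
    /// call until the chunk store is initialized.
    fn create_datastore_async(name: String) -> String {
        let task_id = format!("UPID:datastore-create:{name}");
        thread::spawn(move || {
            // stand-in for chunk store initialization on slow storage
            thread::sleep(Duration::from_secs(1));
            println!("datastore '{name}' created");
        });
        task_id // the client polls the task status/log under this id
    }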
when we remove a datastore via the api/cli, the proxy sometimes has
leftover references to that datastore in its DATASTORE_MAP, which
include an open filehandle on the '.lock' file
this prevents unmounting/exporting the datastore even after removal;
only a reload/restart of the proxy helped
add a command to our command socket which removes all non-configured
datastores from the map, dropping the open filehandle
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
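A sketch of the pruning step, with simplified stand-ins for the
proxy's actual state:

    use std::collections::{HashMap, HashSet};
    use std::fs::File;
    use std::sync::{Mutex, OnceLock};

    /// Stand-in for a cached datastore: the map entry keeps the
    /// '.lock' filehandle open for as long as it exists.
    struct DataStoreHandle {
        _lock_file: File,
    }

    fn datastore_map() -> &'static Mutex<HashMap<String, DataStoreHandle>> {
        static MAP: OnceLock<Mutex<HashMap<String, DataStoreHandle>>> = OnceLock::new();
        MAP.get_or_init(|| Mutex::new(HashMap::new()))
    }

    /// Handler for the new command socket command: drop every cached
    /// entry that is no longer in the datastore config, closing the
    /// open '.lock' filehandle and unblocking unmount/export.
    fn prune_datastore_map(configured: &HashSet<String>) {
        let mut map = datastore_map().lock().unwrap();
        map.retain(|name, _| configured.contains(name));
    }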
by implementing a custom error type that is either 'TimeOut' or
'Other'.
In the api, check in the worker loop for exactly 'TimeOut' errors and
only continue then. All other errors lead to an aborted task.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
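Sketched with simplified types (the real error and loop carry more
context):

    /// Simplified version of the custom error type.
    enum TransferError {
        TimeOut,
        Other(String),
    }

    /// Worker loop sketch: only a 'TimeOut' keeps the loop going,
    /// every other error aborts the task.
    fn wait_for_result(
        mut poll: impl FnMut() -> Result<Option<u64>, TransferError>,
    ) -> Result<u64, String> {
        loop {
            match poll() {
                Ok(Some(result)) => return Ok(result),
                Ok(None) => {} // not finished yet, poll again
                Err(TransferError::TimeOut) => continue, // keep waiting
                Err(TransferError::Other(msg)) => return Err(msg), // abort
            }
        }
    }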
this is deprecated with rustc 1.52+, and will become a hard error at
some point:
https://github.com/rust-lang/rust/issues/79202
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
we want a 'media-set' selector in the gui; this call makes that very
easy to do and is not as costly as reusing the media list, since we
do not need to iterate over all media (e.g. unassigned ones)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by extracting them via the api macro into the function signature
this fixes an issue where given 'since' and 'until' parameters were
not used, since we tried to extract them as 'str' while they were
numbers.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
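The underlying bug, illustrated with serde_json and made-up values:
extracting a JSON number as 'str' silently yields nothing:

    use serde_json::json;

    fn main() {
        let params = json!({ "since": 1609459200, "until": 1612137600 });

        // broken: 'since' is a number, so as_str() returns None and
        // the parameter was silently dropped
        assert!(params["since"].as_str().is_none());

        // fixed: extract with the actual numeric type (the api macro
        // now does this automatically via the function signature)
        assert_eq!(params["since"].as_i64(), Some(1609459200));
        assert_eq!(params["until"].as_i64(), Some(1612137600));
    }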
This can happen if the underlying storage failed, in which case we do
not want to fail the whole API call, as it should report the status
of all datastores. So rather add the error inline to the related
store entry and continue.
This allows us to nicely visualize those stores in the gui.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
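The resulting shape, sketched with a reduced field set and a
hypothetical usage query:

    /// Simplified status entry: on storage failure only 'error' is
    /// set, the other fields stay None instead of failing the call.
    struct DataStoreStatus {
        store: String,
        total: Option<u64>,
        avail: Option<u64>,
        error: Option<String>,
    }

    fn datastore_status(stores: &[&str]) -> Vec<DataStoreStatus> {
        stores
            .iter()
            .map(|name| match query_usage(name) {
                Ok((total, avail)) => DataStoreStatus {
                    store: name.to_string(),
                    total: Some(total),
                    avail: Some(avail),
                    error: None,
                },
                // add the error inline and continue with the next store
                Err(err) => DataStoreStatus {
                    store: name.to_string(),
                    total: None,
                    avail: None,
                    error: Some(err.to_string()),
                },
            })
            .collect()
    }

    /// Stand-in for the statfs() call on the datastore path.
    fn query_usage(name: &str) -> Result<(u64, u64), std::io::Error> {
        std::fs::metadata(format!("/path/to/{name}")).map(|m| (m.len(), 0))
    }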
so that a user can delete a whole group at once; until now, the
fastest way to do this was to prune to one snapshot and delete that
the code is basically a copy/paste from the snapshot delete, sans
the 'backup-time' parameter
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
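A sketch of the difference, with simplified paths and hypothetical
helper names:

    use std::{fs, io, path::Path};

    /// Delete a single snapshot: needs the 'backup-time' parameter.
    fn delete_snapshot(base: &Path, ty: &str, id: &str, time: &str) -> io::Result<()> {
        fs::remove_dir_all(base.join(ty).join(id).join(time))
    }

    /// Delete the whole group: the same thing, sans 'backup-time'.
    fn delete_group(base: &Path, ty: &str, id: &str) -> io::Result<()> {
        fs::remove_dir_all(base.join(ty).join(id))
    }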
so that a user can force a new media set, e.g. if he uses the
allocation policy 'continue', but wants to manually start a new
media-set.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this makes it possible to restore only some snapshots from a tape
media-set instead of the whole set. If the user selects only a small
part, this will probably be faster (and definitely uses less space on
the target datastores).
the user has to provide a list of snapshots to restore in the form of
'store:type/group/id'
e.g. 'mystore:ct/100/2021-01-01T00:00:00Z'
we achieve this by first restoring the index to a temp dir,
retrieving the list of chunks, and then using the catalog to generate
a list of media/files that we need to (partially) restore.
finally, we copy the snapshots to the correct dir in the datastore
and clean up the temp dir
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
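A sketch of parsing that specification (error handling reduced to
Option):

    /// Split 'store:type/group/id' into its parts, e.g.
    /// 'mystore:ct/100/2021-01-01T00:00:00Z'.
    fn parse_restore_spec(spec: &str) -> Option<(&str, &str, &str, &str)> {
        let (store, snapshot) = spec.split_once(':')?;
        let mut parts = snapshot.splitn(3, '/');
        Some((store, parts.next()?, parts.next()?, parts.next()?))
    }

    fn main() {
        assert_eq!(
            parse_restore_spec("mystore:ct/100/2021-01-01T00:00:00Z"),
            Some(("mystore", "ct", "100", "2021-01-01T00:00:00Z")),
        );
    }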
and create the 'email' and 'restore_owner' variables at the
beginning, so that we can reuse them and do not have to pass their
sources through too many functions
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
It will be reused in a later patch in another module which should not
depend on the actual API implementation (ugly and cyclic)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
by checking 'checked_chunks' before trying to write to disk, and by
doing the existence check in the parallel handler. This way, we do
not have to check the existence of a chunk multiple times (if
multiple source datastores get restored to the same target
datastore), and we also do not have to wait on the stat before
reading the next chunk.
We have to change the &WorkerTask to an Arc though, otherwise we
cannot log to the worker from the parallel handler
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
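The gist, sketched with simplified types; the real code tracks
checked chunks per target datastore and logs through the
Arc<WorkerTask>:

    use std::collections::HashSet;
    use std::sync::{Arc, Mutex};

    type Digest = [u8; 32];

    /// Runs inside the parallel handler: skip chunks already checked
    /// for this target datastore, and do the on-disk existence check
    /// here instead of making the reader wait on a stat().
    fn restore_chunk(
        checked_chunks: &Arc<Mutex<HashSet<Digest>>>,
        digest: Digest,
        data: Vec<u8>,
    ) {
        if !checked_chunks.lock().unwrap().insert(digest) {
            return; // already handled for this datastore, skip
        }
        write_chunk_if_missing(&digest, &data);
    }

    /// Stand-in for the existence check + insert into the chunk store.
    fn write_chunk_if_missing(_digest: &Digest, _data: &[u8]) {}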
Split out a separate function scan_chunk_archive() for catalog
restores.
Note: Required, because we need to optimize restore_chunk_archive()
to write to the datastore in separate threads (else the tape drive
will stop during the restore)
API like in PVE:
GET .../info => current cert information
POST .../custom => upload custom certificate
DELETE .../custom => delete custom certificate
POST .../acme/certificate => order acme certificate
PUT .../acme/certificate => renew expiring acme cert
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
by checking whether the label is defined (tapes without a barcode
have the empty string as label-text) and falling back to the source
slot for the load action
Note: Changed the load-slot API from PUT to POST
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
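The fallback logic, sketched with simplified types:

    /// Tapes without a barcode report an empty label-text, so the
    /// changer cannot load them by label.
    enum LoadAction<'a> {
        ByLabel(&'a str),
        BySlot(u64),
    }

    fn pick_load_action(label_text: &str, source_slot: u64) -> LoadAction<'_> {
        if label_text.is_empty() {
            LoadAction::BySlot(source_slot) // fall back to the source slot
        } else {
            LoadAction::ByLabel(label_text)
        }
    }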