from proxmox-widget-toolkit-dev and not as a normal dependency,
else we would have to ship widget-toolkit on the wiki
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
in extjs 7.0, specifying displayField overrides the displayTpl,
which we want to use here, so remove the displayField
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we have a static list of filesystems and their capabilities regarding
file attributes and fs features (e.g. sockets/fifos/etc) which also
includes xattrs, acls and fcaps
if we did not know a filesystem by its magic number (for example cephfs),
we did not even attempt to read xattrs, etc.
this patch adds those flags by default to unknown filesystems, and
removes them again when we encounter EOPNOTSUPP (to reduce the number
of syscalls)
with this, we should be able to catch xattrs/acls/fcaps on all
(unknown) fs types that support them
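a rough sketch of the idea (the constants and helper names here are
illustrative only, not the actual pxar code):

    const FLAG_XATTRS: u64 = 1 << 0;
    const FLAG_ACLS: u64 = 1 << 1;
    const FLAG_FCAPS: u64 = 1 << 2;

    // unknown filesystems now start with the full feature set instead of none
    fn feature_flags_for_magic(magic: i64) -> u64 {
        match magic {
            0xEF53 => FLAG_XATTRS | FLAG_ACLS | FLAG_FCAPS, // ext2/3/4
            // ... other known filesystems keep their static entries ...
            _ => FLAG_XATTRS | FLAG_ACLS | FLAG_FCAPS, // unknown, e.g. cephfs
        }
    }

    // hypothetical helper: drop a flag again on EOPNOTSUPP so the failing
    // syscall is not repeated for every following file
    fn handle_xattr_error(err: &std::io::Error, flags: &mut u64) {
        if err.raw_os_error() == Some(libc::EOPNOTSUPP) {
            *flags &= !FLAG_XATTRS;
        }
    }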
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
These require mounting using the regular 'mount' syscall.
Auto-generates an appropriate mount path.
Note that subvols with mountpoint=none cannot be mounted this way, and
would require setting the mountpoint property, which is not possible as
the zpools have to be imported with readonly=on.
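Roughly what this looks like (a simplified sketch using the nix crate;
the mount path scheme and helper name are made up):

    use anyhow::Error;
    use nix::mount::{mount, MsFlags};
    use std::path::{Path, PathBuf};

    // mount a subvol read-only below an auto-generated path like
    // /mnt/<pool>/<subvol> via the plain mount(2) syscall
    fn mount_subvol(pool: &str, subvol: &str) -> Result<PathBuf, Error> {
        let source = format!("{}/{}", pool, subvol);
        let target = Path::new("/mnt").join(pool).join(subvol);
        std::fs::create_dir_all(&target)?;
        mount(
            Some(source.as_str()),
            &target,
            Some("zfs"),
            MsFlags::MS_RDONLY,
            None::<&str>,
        )?;
        Ok(target)
    }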
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Uses the ZFS utils to detect, import and mount zpools. These are
available as a new Bucket type 'zpool'.
Requires some minor changes to the existing disk and partition detection
code, so the ZFS-specific part can use the information gathered in the
previous pass to associate drive names with their 'drive-xxxN.img.fidx'
node.
For detecting size, the zpool has to be imported. This is only done for
pools containing 5 or fewer disks, as anything else might take too long
(and should seldom be found within VMs).
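A simplified sketch of that flow (the actual Bucket code differs, and the
command invocations are only an approximation):

    use anyhow::{bail, Error};
    use std::process::Command;

    // import a pool read-only and report its size; skipped for pools with
    // more than 5 member disks since the import might take too long
    fn import_and_size_zpool(name: &str, disk_count: usize) -> Result<Option<u64>, Error> {
        if disk_count > 5 {
            return Ok(None);
        }
        let status = Command::new("zpool")
            .args(&["import", "-N", "-o", "readonly=on", name])
            .status()?;
        if !status.success() {
            bail!("importing zpool '{}' failed", name);
        }
        let output = Command::new("zpool")
            .args(&["list", "-Hpo", "size", name])
            .output()?;
        Ok(String::from_utf8(output.stdout)?.trim().parse::<u64>().ok())
    }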
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Even with best efforts at keeping it small, including the ZFS tools
in the initramfs seems to have exhausted the small overhead we had left
- give it a bit more RAM to compensate.
Also disable the ZFS ARC, as it's no use in such a memory constrained
environment, and we cache on the QEMU/rust layer anyway.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
The future needs to be removed from the pending map in any case, even if
it returned an error, else all upcoming calls to access this key will
always return the same error.
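In a simplified form (not the exact cache code; the real pending map holds
broadcast futures, not unit values), the fix boils down to:

    use std::collections::HashMap;
    use std::sync::Mutex;

    // remove the pending entry in any case, success or error, and only
    // propagate the error afterwards, so later accesses retry the fetch
    fn finish_pending<K: std::hash::Hash + Eq, V>(
        pending: &Mutex<HashMap<K, ()>>,
        key: K,
        result: Result<V, anyhow::Error>,
    ) -> Result<V, anyhow::Error> {
        pending.lock().unwrap().remove(&key);
        result
    }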
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
sort the chunks we want to back up to tape by inode, to gain some
speed on spinning disks. this is done per index, not globally.
costs a bit of memory, but not too much: about 16 bytes per chunk, which
would mean ~4MiB for a 1TiB index with 4MiB chunks.
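a sketch of the per-index sorting (simplified; the real code works on chunk
digests and keeps only the inode plus the index position, hence the ~16
bytes per chunk):

    use std::os::unix::fs::MetadataExt;
    use std::path::PathBuf;

    // stat every chunk of the index and sort by inode so a spinning disk
    // reads the chunk files in mostly sequential order
    fn sort_chunks_by_inode(chunk_paths: Vec<PathBuf>) -> Vec<(u64, PathBuf)> {
        let mut chunks: Vec<(u64, PathBuf)> = chunk_paths
            .into_iter()
            .map(|path| {
                let inode = std::fs::metadata(&path).map(|m| m.ino()).unwrap_or(u64::MAX);
                (inode, path)
            })
            .collect();
        chunks.sort_unstable_by_key(|(inode, _)| *inode);
        chunks
    }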
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
so that we can reuse that information
removing the code that adds them to the corrupted list is ok, since
'get_chunks_in_order' returns them at the end of the list
and we do the same if loading fails later in 'verify_index_chunks',
so we still mark them as corrupt
(assuming that the load will fail if the stat does)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since the output:
Result: "<UPID>"
is not really interesting, instead show the task log while
the datastore is being created, since creation now runs in a worker
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Setting this to 0 is not just useless, but breaks the logic horribly
enough to cause random segfaults - better forbid this, to avoid someone
else having to debug it again ;)
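Essentially (an illustrative sketch, not the actual constructor):

    struct ChunkCache {
        capacity: usize,
    }

    impl ChunkCache {
        fn new(capacity: usize) -> Self {
            // a capacity of 0 is useless and breaks the eviction logic,
            // so refuse it up front instead of crashing much later
            assert!(capacity > 0, "cache capacity must be at least 1");
            Self { capacity }
        }
    }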
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
admin/datastore reads linearly only, so no need for cache (capacity of 1
basically means no cache except for the currently active chunk).
mount can do random access too, so cache the last 8 chunks for a
possible mild performance improvement.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Implemented as a separate struct SeekableCachedChunkReader that contains
the original as an Arc, since the read_at future captures the
CachedChunkReader, which would otherwise not work with the lifetimes
required by AsyncRead. This is also the reason we cannot use a shared
read buffer and have to allocate a new one for every read. It also means
that the struct items required for AsyncRead/Seek do not need to be
included in a regular CachedChunkReader.
This is intended as a replacement for AsyncIndexReader, so we have less
code duplication and can utilize the LRU cache there too (even though
actual request concurrency is not supported in these traits).
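Roughly, the struct layout looks like this (field names are illustrative,
not the actual code):

    use std::future::Future;
    use std::pin::Pin;
    use std::sync::Arc;

    // the pending read future owns its own buffer and a clone of `inner`,
    // which is why the underlying reader is held behind an Arc and why no
    // shared read buffer can be used
    type ReadFuture = Pin<Box<dyn Future<Output = std::io::Result<(Vec<u8>, usize)>> + Send>>;

    struct SeekableReader<R> {
        inner: Arc<R>,                 // the shared CachedChunkReader
        position: u64,                 // current offset for AsyncSeek
        in_flight: Option<ReadFuture>, // read_at() future currently being polled
    }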
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Supports concurrent 'access' calls to the same key via a
BroadcastFuture. These are stored in a separate HashMap; the LruCache
underneath is only modified once a valid value has been retrieved.
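The access flow, sketched with futures' Shared as a stand-in for
BroadcastFuture and a plain HashMap instead of the real LRU map:

    use futures::future::{BoxFuture, FutureExt, Shared};
    use std::collections::HashMap;
    use std::sync::{Arc, Mutex};

    type Fetch = Shared<BoxFuture<'static, Result<Arc<Vec<u8>>, String>>>;

    struct AsyncCacheSketch {
        cache: Mutex<HashMap<u64, Arc<Vec<u8>>>>,
        pending: Mutex<HashMap<u64, Fetch>>,
    }

    impl AsyncCacheSketch {
        async fn access<F>(&self, key: u64, fetch: F) -> Result<Arc<Vec<u8>>, String>
        where
            F: std::future::Future<Output = Result<Arc<Vec<u8>>, String>> + Send + 'static,
        {
            if let Some(hit) = self.cache.lock().unwrap().get(&key) {
                return Ok(Arc::clone(hit)); // already cached
            }
            // concurrent callers for the same key share one in-flight fetch
            let shared = self
                .pending
                .lock()
                .unwrap()
                .entry(key)
                .or_insert_with(|| fetch.boxed().shared())
                .clone();
            let result = shared.await;
            self.pending.lock().unwrap().remove(&key); // even on error, see the fix above
            if let Ok(value) = &result {
                // only a successfully retrieved value touches the cache map
                self.cache.lock().unwrap().insert(key, Arc::clone(value));
            }
            result
        }
    }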
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Explicitly test that data will stay available and can be retrieved
immediately via listen(), even if the future producing the data and
notifying the consumers was already run in the past.
Wasn't broken or anything, but helps with understanding IMO.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
in PVE, the logic for how the wearout gets read from the smartctl
output was changed from a vendor -> id map to a sorted list of specific
attribute field names.
copy that list to pbs (in the same order), and use it to get the
wearout
in the future we might want to split the disk logic into its own crate
and reuse it in pve
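a sketch of the lookup; the attribute names below are only a few examples,
the real list is copied from PVE verbatim and in the same order:

    // example entries only, not the complete PVE list
    const WEAROUT_FIELD_NAMES: &[&str] = &[
        "Media_Wearout_Indicator",
        "SSD_Life_Left",
        "Wear_Leveling_Count",
    ];

    // the first attribute name found in the smartctl output wins
    fn wearout(attributes: &[(String, f64)]) -> Option<f64> {
        WEAROUT_FIELD_NAMES.iter().find_map(|name| {
            attributes
                .iter()
                .find(|(n, _)| n.as_str() == *name)
                .map(|(_, value)| *value)
        })
    }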
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if we want the empty value as a valid default value in a combogrid,
we have to explicitly select 'null', else the field will be marked as
dirty
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
in case an invalid drive was configured, the field is now marked
invalid instead of auto-selecting the first valid one
this could have led to users configuring the wrong drive in a
tape-backup-job when they edited one
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we skip snapshots that are older than the newest snapshot of the group in
the target datastore; log this so the user knows why they are not synced
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
so that a user can remove a datastore from the gui.
note that no data is deleted; this has to be done elsewhere (for now)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
so that longer-running creates (e.g. on slow storage) do not
run into a timeout and we can follow the creation
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
when we remove a datastore via api/cli, the proxy
sometimes has leftover references to that datastore in its
DATASTORE_MAP, which include an open filehandle on the
'.lock' file
this prevents unmounting/exporting the datastore even after removal;
only a reload/restart of the proxy helped
add a command to our command socket which removes all non-configured
datastores from the map, dropping the open filehandle
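the cleanup handler essentially does the following (sketch with made-up
types; the real map holds Arc<DataStore> values):

    use std::collections::{HashMap, HashSet};
    use std::sync::{Arc, Mutex};

    // drop every cached datastore that is no longer configured; dropping the
    // entry also closes its '.lock' filehandle, so the datastore can be
    // unmounted/exported afterwards
    fn drop_stale_datastores(
        map: &Mutex<HashMap<String, Arc<()>>>, // stand-in for Arc<DataStore>
        configured: &HashSet<String>,
    ) {
        map.lock()
            .unwrap()
            .retain(|name, _| configured.contains(name));
    }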
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
by implementing a custom error type that is either 'TimeOut' or
'Other'.
In the api, check in the worker loop for exactly 'TimeOut' errors and
continue only in that case. All other errors lead to an aborted task.
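Sketched (the type and function names are illustrative, not the actual
error type):

    enum PollError {
        TimeOut,
        Other(anyhow::Error),
    }

    // only a timeout keeps the worker loop going; everything else aborts it
    fn handle_poll_result(result: Result<String, PollError>) -> Result<Option<String>, anyhow::Error> {
        match result {
            Ok(line) => Ok(Some(line)),
            Err(PollError::TimeOut) => Ok(None),    // nothing new yet, try again
            Err(PollError::Other(err)) => Err(err), // abort the task
        }
    }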
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
removing the backup dir must acquire the snapshot lock, else it can
happen that we remove a snapshot while it is being restored
or backed up to tape
the original commit that added the force flag
(c9756b40d1)
mentions that prune itself checks whether the snapshot is in use,
but I could not find such code, so simply set force to false.
to avoid failing and aborting the prune job, warn if a snapshot could
not be removed and continue
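the intended behaviour, sketched (remove_snapshot is a hypothetical
stand-in for the real removal with force set to false, which acquires the
snapshot lock):

    use anyhow::Error;

    fn prune_candidates(candidates: &[String]) {
        for snapshot in candidates {
            // force = false: removal acquires the snapshot lock and fails if
            // the snapshot is currently being restored or written to tape
            if let Err(err) = remove_snapshot(snapshot, false) {
                // warn and continue instead of aborting the whole prune job
                eprintln!("warning: could not remove snapshot {}: {}", snapshot, err);
            }
        }
    }

    // hypothetical stand-in so the sketch is self-contained
    fn remove_snapshot(_snapshot: &str, _force: bool) -> Result<(), Error> {
        Ok(())
    }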
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>