Commit Graph

5728 Commits

Author SHA1 Message Date
Dietmar Maurer 0decd11efb cli: add CLI to manage openid realms. 2021-06-30 08:54:30 +02:00
Dietmar Maurer b84d2592fb add API to manage openid realms 2021-06-30 08:54:30 +02:00
Dietmar Maurer 0219ba2cc5 check_acl_path: add /access/domains and /access/openid 2021-06-30 08:54:30 +02:00
Dietmar Maurer bbff6c4968 config: new domains.cfg to configure openid realms
Or other realm types...
2021-06-30 08:54:30 +02:00
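
For illustration, an openid entry in the new domains.cfg might look roughly like the following. This is a hypothetical example: the property names follow the API parameters of the commits above, but the exact on-disk keys and format may differ.

    openid: my-oidc-realm
        client-id proxmox-backup
        issuer-url https://idp.example.com/realms/main
        username-claim sub
        comment OpenID Connect realm for example.com
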
Dietmar Maurer bb88c6a29d depend on proxmox-openid-rs 2021-06-30 08:54:30 +02:00
Thomas Lamprecht a02466966d update enterprise repository to bullseye
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-28 19:57:50 +02:00
Thomas Lamprecht b0fc11804e d/changelog: add actual changelog for initial 2.0 build
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-28 19:40:44 +02:00
Thomas Lamprecht d9d81741e3 buildsys: switch to bullseye as upload dist target
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-28 19:10:26 +02:00
Thomas Lamprecht 9678366102 bump version to 2.0.0-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-28 19:07:46 +02:00
Thomas Lamprecht a2c73c78dd buildsys: call dpkg-buildpackage directly in deb-all
else we may double-build

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-28 19:07:15 +02:00
Thomas Lamprecht c6a0e7d98e d/control: bump versioned dependency for ExtJS 7.0
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-28 19:05:58 +02:00
Dominik Csapak 85417b2a88 docs: build api-viewer from widget-toolkit-dev
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 14:26:40 +02:00
Dominik Csapak d738669066 docs: add Toolkit.js to lto-barcode
and generate a single js file for it

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 14:26:40 +02:00
Dominik Csapak 442d6da8fb docs: add Toolkit.js to prune simulator
from proxmox-widget-toolkit-dev and not as a normal dependency,
else we would have to ship widget-toolkit on the wiki

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 14:26:40 +02:00
Dominik Csapak 62f10a01db docs/prune-simulator: remove displayField for Calendar Field
in extjs 7.0, specifying displayField overwrites the displayTpl,
which we want to use here, so remove it

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 14:26:40 +02:00
Dominik Csapak 5667b76381 fix #3393 (again): pxar/create: try to read xattrs/fcaps/acls by default
we have a static list of filesystems and their capabilities regarding
file attributes and fs features (e.g. sockets/fifos/etc.) which also
includes xattrs, acls and fcaps

if we did not know a filesystem by its magic number (for example cephfs),
we did not even attempt to read xattrs, etc.

this patch adds those flags by default to unknown filesystems, and
removes them when we encounter EOPNOTSUPP (to reduce the number
of syscalls)

with this, we should be able to catch xattrs/acls/fcaps on all
(unknown) fs types that support them

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-28 14:04:22 +02:00
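
A minimal Rust sketch of the fallback pattern described above (illustrative names, not the actual pxar internals; assumes the libc crate): unknown filesystems start with the feature enabled, and it is dropped the first time the kernel answers EOPNOTSUPP, so later entries skip the pointless syscalls.

    // Illustrative sketch, not the real pxar code.
    struct FsFeatures {
        xattrs: bool, // the real flag set also covers acls, fcaps, ...
    }

    impl Default for FsFeatures {
        fn default() -> Self {
            // unknown filesystems now start with everything enabled
            FsFeatures { xattrs: true }
        }
    }

    // stand-in for a real flistxattr(2) wrapper
    fn try_listxattr(_fd: i32) -> std::io::Result<Vec<u8>> {
        Ok(Vec::new())
    }

    fn read_xattrs(fd: i32, features: &mut FsFeatures) -> std::io::Result<Vec<u8>> {
        if !features.xattrs {
            return Ok(Vec::new()); // already known to be unsupported
        }
        match try_listxattr(fd) {
            Err(err) if err.raw_os_error() == Some(libc::EOPNOTSUPP) => {
                // fs does not support xattrs: disable to reduce syscalls
                features.xattrs = false;
                Ok(Vec::new())
            }
            other => other,
        }
    }
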
Stefan Reiter d9b318a444 file-restore/disk: support ZFS subvols with mountpoint=legacy
These require mounting using the regular 'mount' syscall.
Auto-generates an appropriate mount path.

Note that subvols with mountpoint=none cannot be mounted this way, and
would require setting the mountpoint property, which is not possible as
the zpools have to be imported with readonly=on.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-28 13:58:41 +02:00
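
A sketch of that mount call (assuming the nix crate; the dataset and target names are made up):

    use nix::mount::{mount, MsFlags};

    // mountpoint=legacy subvols cannot be mounted via `zfs mount`,
    // so issue a plain mount(2) with fstype "zfs" instead.
    fn mount_legacy_subvol(dataset: &str, target: &str) -> nix::Result<()> {
        mount(
            Some(dataset),      // e.g. "rpool/data/subvol-100-disk-0"
            target,             // auto-generated path under the restore root
            Some("zfs"),        // filesystem type
            MsFlags::MS_RDONLY, // file restore only needs read access
            None::<&str>,       // no extra mount options
        )
    }
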
Stefan Reiter 86ce56f193 file-restore/disk: support ZFS pools
Uses the ZFS utils to detect, import and mount zpools. These are
available as a new Bucket type 'zpool'.

Requires some minor changes to the existing disk and partition detection
code, so the ZFS-specific part can use the information gathered in the
previous pass to associate drive names with their 'drive-xxxN.img.fidx'
node.

For detecting size, the zpool has to be imported. This is only done with
pools containing 5 or fewer disks, as anything else might take too long
(and should seldom be found within VMs).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-28 13:58:41 +02:00
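
A hedged sketch of the import step (assuming the zpool utility is available inside the restore VM; the constant and function names are illustrative):

    use std::process::Command;

    const IMPORT_DISK_LIMIT: usize = 5; // illustrative name for the limit

    // Import a detected pool readonly so the backed-up disks are never
    // written to; skip the eager import for pools with many member
    // disks, since import time grows with pool size.
    fn import_zpool(pool: &str, num_disks: usize) -> std::io::Result<bool> {
        if num_disks > IMPORT_DISK_LIMIT {
            return Ok(false); // size detection skipped for large pools
        }
        let status = Command::new("zpool")
            .args(["import", "-o", "readonly=on", pool])
            .status()?;
        Ok(status.success())
    }
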
Stefan Reiter 8d72c2c32e file-restore: increase RAM for ZFS and disable ARC
Even with best efforts at keeping it small, including the ZFS tools
in the initramfs seems to have exhausted the small overhead we had left
- give it a bit more RAM to compensate.

Also disable the ZFS ARC, as it's no use in such a memory constrained
environment, and we cache on the QEMU/rust layer anyway.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-28 13:58:41 +02:00
Stefan Reiter c48c38ab8c async_lru_cache: fix handling of errors in fetch
The future needs to be removed from the pending map in any case, even if
it returned an error, else all upcoming calls to access this key will
always return the same error.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-28 13:48:26 +02:00
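
A simplified sketch of the fixed control flow (a synchronous stand-in, not the real proxmox types): the pending entry must be removed on both the Ok and the Err path, otherwise a failed fetch stays in the map and every later access to that key replays its error.

    use std::collections::HashMap;
    use std::sync::Mutex;

    struct Cache {
        // in-flight fetches by key; really a map of shared futures
        pending: Mutex<HashMap<u64, ()>>,
    }

    impl Cache {
        fn access(
            &self,
            key: u64,
            fetch: impl FnOnce() -> Result<Vec<u8>, String>,
        ) -> Result<Vec<u8>, String> {
            self.pending.lock().unwrap().insert(key, ());
            let result = fetch();
            // the fix: clean up unconditionally, for Ok *and* Err
            self.pending.lock().unwrap().remove(&key);
            result
        }
    }
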
Dominik Csapak 3d3769830b tape/helpers/snapshot_reader: sort chunks by inode (per index)
sort the chunks we want to back up to tape by inode, to gain some
speed on spinning disks. this is done per index, not globally.

costs a bit of memory, but not too much, about 16 bytes per chunk which
would mean ~4MiB for a 1TiB index with 4MiB chunks.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 12:16:14 +02:00
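
A sketch of the per-index sort (illustrative function, not the datastore API): stat each chunk, keep (inode, position) pairs, and read in inode order. The 16 bytes per chunk are one u64 inode plus one u64 position; a 1 TiB index with 4 MiB chunks has 262144 chunks, hence ~4 MiB.

    use std::fs;
    use std::os::unix::fs::MetadataExt;
    use std::path::PathBuf;

    fn chunks_in_inode_order(chunk_paths: &[PathBuf]) -> Vec<(u64, u64)> {
        let mut order: Vec<(u64, u64)> = chunk_paths
            .iter()
            .enumerate()
            .map(|(pos, path)| {
                // chunks whose stat fails sort to the end and will be
                // flagged as corrupt when the actual load fails later
                let inode = fs::metadata(path).map(|m| m.ino()).unwrap_or(u64::MAX);
                (inode, pos as u64)
            })
            .collect();
        order.sort_unstable_by_key(|&(inode, _)| inode);
        order
    }
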
Dominik Csapak 4921a411ad backup/datastore: refactor chunk inode sorting to the datastore
so that we can reuse that information

no longer adding chunks to the corrupted list here is ok, since
'get_chunks_in_order' returns them at the end of the list,
and we do the same if the loading fails later in 'verify_index_chunks',
so we still mark them as corrupt
(assuming that the load will fail if the stat does)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 12:14:52 +02:00
Dominik Csapak 81c767efce proxmox-backup-manager: show task log on datastore create
since the output:
Result: "<UPID>"
is not really interesting, show the task log instead while the
datastore is being created, since creation now runs in a worker

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 12:09:55 +02:00
Hannes Laimer 60abf03f05 close #3459: manager: add --ignore-verified and --outdated-after parameters
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Reviewed-By: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 11:03:51 +02:00
Hannes Laimer dcbf29e71b api: add ignore-verified and outdated-after to datastore verify endpoint
preparatory change for fixing #3459

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Reviewed-By: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 11:03:51 +02:00
Hannes Laimer 037e6c0ca8 verify-job: move snapshot filter into function
preparatory steps for fixing #3459

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Reviewed-By: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 11:03:44 +02:00
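
A sketch of the extracted filter used by the three commits above (hypothetical field names): with --ignore-verified set, a snapshot is skipped when its last verification succeeded and, if --outdated-after is given, that verification is not older than the given number of days.

    struct VerifyState {
        upid_start_time: i64, // unix epoch of the last verification
        ok: bool,             // whether it succeeded
    }

    fn should_verify(
        last: Option<&VerifyState>,
        ignore_verified: bool,
        outdated_after: Option<i64>, // days
        now: i64,                    // unix epoch seconds
    ) -> bool {
        if !ignore_verified {
            return true; // old behaviour: verify everything
        }
        match last {
            None => true,                     // never verified
            Some(state) if !state.ok => true, // re-verify failed snapshots
            Some(state) => match outdated_after {
                None => false, // verified once is enough
                Some(days) => now - state.upid_start_time > days * 24 * 60 * 60,
            },
        }
    }
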
Fabian Grünbichler c7024b282a d/control: set R-R-R to run binary d/rules targets as root
the build still requires root to make helper binaries setuid

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-06-14 13:02:20 +02:00
Fabian Grünbichler 90ff75f85c update to zstd 0.6
compatible with libzstd from bullseye.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-06-14 13:01:43 +02:00
Dietmar Maurer 2165f0d450 api: define and use REALM_ID_SCHEMA 2021-06-10 11:10:00 +02:00
Wolfgang Bumiller 1e7639bfc4 fixup minimum lru capacity
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-08 10:13:46 +02:00
Stefan Reiter 4121628d99 tools/lru_cache: make minimum capacity 1
Setting this to 0 is not just useless, but breaks the logic horribly
enough to cause random segfaults - better forbid this, to avoid someone
else having to debug it again ;)

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:55 +02:00
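
The guard from the two commits above is essentially a one-line clamp; a sketch:

    pub struct LruCache {
        capacity: usize,
        // ... map plus intrusive list head/tail in the real code
    }

    impl LruCache {
        pub fn new(capacity: usize) -> Self {
            // a capacity of 0 breaks the list invariants badly enough
            // to segfault, so clamp to at least one slot
            LruCache { capacity: capacity.max(1) }
        }
    }
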
Stefan Reiter da78b90f9c backup: remove AsyncIndexReader
superseded by CachedChunkReader, with less code and more speed

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:46 +02:00
Stefan Reiter 1ef6e8b6a7 replace AsyncIndexReader with SeekableCachedChunkReader
admin/datastore reads linearly only, so no need for cache (capacity of 1
basically means no cache except for the currently active chunk).
mount can do random access too, so cache the last 8 chunks for a
possible mild performance improvement.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:44 +02:00
Stefan Reiter 10351f7075 backup: add AsyncRead/Seek to CachedChunkReader
Implemented as a separate struct SeekableCachedChunkReader that contains
the original as an Arc, since the read_at future captures the
CachedChunkReader, which would otherwise not work with the lifetimes
required by AsyncRead. This is also the reason we cannot use a shared
read buffer and have to allocate a new one for every read. It also means
that the struct items required for AsyncRead/Seek do not need to be
included in a regular CachedChunkReader.

This is intended as a replacement for AsyncIndexReader, so we have less
code duplication and can utilize the LRU cache there too (even though
actual request concurrency is not supported in these traits).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:40 +02:00
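
A condensed sketch of the layout described above (simplified types, assuming tokio's AsyncRead; AsyncSeek, which only updates `position`, is omitted): the Arc lets the stored read future own the reader and be 'static, which is also why each read allocates its own buffer.

    use std::future::Future;
    use std::io;
    use std::pin::Pin;
    use std::sync::Arc;
    use std::task::{Context, Poll};
    use tokio::io::{AsyncRead, ReadBuf};

    struct CachedChunkReader; // stand-in for the real cached reader

    impl CachedChunkReader {
        // taking `self: Arc<Self>` makes the returned future own the
        // reader, so it can be stored across poll_read calls
        async fn read_at(self: Arc<Self>, buf: Vec<u8>, _offset: u64) -> io::Result<(Vec<u8>, usize)> {
            Ok((buf, 0)) // stub: the real code fills buf via the LRU cache
        }
    }

    type ReadFuture = Pin<Box<dyn Future<Output = io::Result<(Vec<u8>, usize)>> + Send>>;

    pub struct SeekableCachedChunkReader {
        inner: Arc<CachedChunkReader>,
        position: u64,
        // a fresh buffer is allocated per read: the future owns it, so
        // a shared buffer cannot be borrowed for the future's lifetime
        pending: Option<ReadFuture>,
    }

    impl AsyncRead for SeekableCachedChunkReader {
        fn poll_read(
            self: Pin<&mut Self>,
            cx: &mut Context<'_>,
            buf: &mut ReadBuf<'_>,
        ) -> Poll<io::Result<()>> {
            let this = self.get_mut();
            if this.pending.is_none() {
                let fut = Arc::clone(&this.inner)
                    .read_at(vec![0u8; buf.remaining()], this.position);
                this.pending = Some(Box::pin(fut));
            }
            match this.pending.as_mut().unwrap().as_mut().poll(cx) {
                Poll::Pending => Poll::Pending,
                Poll::Ready(res) => {
                    this.pending = None;
                    let (data, len) = res?;
                    buf.put_slice(&data[..len]);
                    this.position += len as u64;
                    Poll::Ready(Ok(()))
                }
            }
        }
    }
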
Stefan Reiter 70a152deb7 backup: add CachedChunkReader utilizing AsyncLruCache
Provides a fast arbitrary read implementation with full async and
concurrency support.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:37 +02:00
Stefan Reiter 5446bfbba8 tools: add AsyncLruCache as a wrapper around sync LruCache
Supports concurrent 'access' calls to the same key via a
BroadcastFuture. These are stored in a separate HashMap; the LruCache
underneath is only modified once a valid value has been retrieved.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:34 +02:00
Stefan Reiter 400885e620 tools/BroadcastFuture: add testcase for better understanding
Explicitly test that data will stay available and can be retrieved
immediately via listen(), even if the future producing the data and
notifying the consumers was already run in the past.

Wasn't broken or anything, but helps with understanding IMO.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:29 +02:00
Dominik Csapak f960fc3b6f fix #3433: use PVE's wearout logic in PBS
in PVE, the logic for how the wearout is read from the smartctl output
was changed from a vendor -> ID map to a sorted list of specific
attribute field names.

copy that list to pbs (in the same order), and use that to get the
wearout

in the future we might want to split the disk logic into its own crate
and reuse it in pve

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-08 08:31:37 +02:00
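
A sketch of the lookup (the attribute names shown are common examples; the authoritative ordered list lives in the PVE/PBS sources):

    use std::collections::HashMap;

    // checked in order; the first attribute present in the smartctl
    // output wins
    const WEAROUT_FIELD_ORDER: &[&str] = &[
        "Media_Wearout_Indicator",
        "SSD_Life_Left",
        "Wear_Leveling_Count",
        "Percent_Lifetime_Remain",
    ];

    fn wearout(attributes: &HashMap<String, f64>) -> Option<f64> {
        WEAROUT_FIELD_ORDER
            .iter()
            .find_map(|name| attributes.get(*name).copied())
    }
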
Thomas Lamprecht ddfa4d679a ui: tape: DriveSelector: make wider and fine-tune column flex
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-07 16:57:45 +02:00
Thomas Lamprecht 10e8026786 ui: tape: DriveSelector: code cleanup, group config together
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-07 16:30:57 +02:00
Dominik Csapak 2527c039df ui: tape: TapeBackupJob: use correct default value for pbsUserSelector
if we want the empty value as a valid default value in a combogrid,
we have to explicitly select 'null', else the field will be marked as
dirty

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-07 16:30:29 +02:00
Dominik Csapak 93d8a2044e ui: tape: DriveSelector: do not autoselect the drive
in case an invalid drive was configured, now it marks the field
invalid instead of autoselecting the first valid one

this could have led to users configuring the wrong drive in a
tape-backup-job when they edited one

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-07 16:30:13 +02:00
Dominik Csapak d2354a16cd client/pull: log snapshots that are skipped because of time
we skip snapshots that are older than the newest snapshot of the group in
the target datastore; log this so the user knows why they were not synced

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-07 10:51:25 +02:00
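
A sketch of the decision with its new log line (illustrative types; bare timestamps stand in for full snapshot identifiers):

    // Returns the remote snapshots that still need to be pulled, logging
    // each one skipped because it is not newer than the newest snapshot
    // already present in the target group.
    fn snapshots_to_pull(remote: &[i64], newest_local: Option<i64>) -> Vec<i64> {
        remote
            .iter()
            .copied()
            .filter(|&backup_time| match newest_local {
                Some(newest) if backup_time <= newest => {
                    eprintln!("skipping snapshot {backup_time}, older than newest local snapshot");
                    false
                }
                _ => true,
            })
            .collect()
    }
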
Dominik Csapak 34ee1f1c76 ui: DataStoreList: add remove button
so that a user can remove a datastore from the GUI.
note that no data is deleted; this has to be done elsewhere (for now)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-04 09:41:52 +02:00
Dominik Csapak 2de4dc3a81 backup/chunk_store: optionally log progress on creation
and enable it for the worker variants

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-04 09:32:09 +02:00
Dietmar Maurer b90036dadd cleanup: factor out config::datastore::lock_config() 2021-06-04 09:04:14 +02:00
Dominik Csapak 4708f4fc21 api2/config/datastore: change create datastore api call to a worker
so that longer-running creates (e.g. on slow storage) do not run
into a timeout, and we can follow the creation progress

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-06-04 09:02:05 +02:00
Dominik Csapak 062cf75cdf proxmox-backup-proxy: fix leftover references on datastore removal
when we remove a datastore via the api/cli, the proxy sometimes has
leftover references to that datastore in its DATASTORE_MAP, which
include an open filehandle on the '.lock' file

this prevents unmounting/exporting the datastore even after removal;
only a reload/restart of the proxy helped

add a command to our command socket which removes all non-configured
datastores from the map, dropping the open filehandle

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-06-04 08:22:53 +02:00
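
A sketch of the command-socket handler's core (names follow the commit message, types are simplified): dropping a DataStore entry closes its '.lock' filehandle.

    use std::collections::{HashMap, HashSet};
    use std::sync::Mutex;

    struct DataStore; // holds the open '.lock' filehandle in the real code

    static DATASTORE_MAP: Mutex<Option<HashMap<String, DataStore>>> = Mutex::new(None);

    // invoked via the proxy's command socket after config changes
    fn remove_unconfigured_datastores(configured: &HashSet<String>) {
        if let Some(map) = DATASTORE_MAP.lock().unwrap().as_mut() {
            // dropping the removed entries closes their lock files, so
            // the underlying filesystem can be unmounted/exported again
            map.retain(|name, _| configured.contains(name));
        }
    }
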
Dominik Csapak e5950360ca tape/drive: improve tape device locking behaviour
by implementing a custom error type that is either 'TimeOut' or
'Other'.

In the api, check in the worker loop for exactly 'TimeOut' errors and
continue only then. All other errors lead to an aborted task.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-02 17:08:00 +02:00
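
A sketch of the error split and the worker loop's reaction (illustrative signatures):

    use std::{thread, time::Duration};

    enum TapeLockError {
        TimeOut,       // lock attempt ran into its timeout: keep waiting
        Other(String), // any real error: abort the task
    }

    fn worker_loop(try_lock: impl Fn() -> Result<(), TapeLockError>) -> Result<(), String> {
        loop {
            match try_lock() {
                Ok(()) => return Ok(()), // got the drive
                Err(TapeLockError::TimeOut) => {
                    thread::sleep(Duration::from_millis(100)); // retry
                }
                Err(TapeLockError::Other(err)) => return Err(err),
            }
        }
    }
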
Dominik Csapak 5b358ff0b1 server/prune_job: fix locking during prune jobs
removing the backup dir must acquire the snapshot lock, else it can
happen that we remove a snapshot while it is being restored
or backed up to tape

the original commit that adds the force flag
(c9756b40d1)
mentions that the prune checks itself if the snapshot is in use,
but i could not find such code, so simply set force to false

to avoid failing and aborting the prune job, warn if a snapshot could
not be removed, and continue

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-02 17:04:49 +02:00