debugging history showed that it's quite useful to have more log output
about when things happen (and thus fail)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
now required as we always enforce lock files to be owned by the
backup user, and the restore code uses such code indirectly as the
REST server module is reused from proxmox-backup-server. Once that is
refactored out we may do away with such things, but until then we need to
have a somewhat complete system env.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
pbs-datastore now ended up depending on tokio after all, but
that's fine for now
for the fuse code I added pbs-fuse-loop (containing the old
fuse_loop and its 'loopdev' module)
ultimately only binaries should depend on this to avoid the
library link
the only things remaining to move out of the client binary are
the api method return types; those will need to be moved to
pbs-api-types...
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
by using the api macro on the async method and reusing the PruneOptions
from pbs-datastore with 'flatten: true'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
LVM replaces any dashes '-' in a VG or LV name with two dashes '--' for the
created device node in /dev/mapper/, to distinguish them from the separating
character between the VG and LV name.
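As a rough sketch of that escaping rule (a hypothetical helper, not the code
in this patch):

    fn mapper_node(vg: &str, lv: &str) -> String {
        // device-mapper doubles dashes inside each name and uses a single
        // dash as the separator between the VG and LV name
        format!("/dev/mapper/{}-{}", vg.replace('-', "--"), lv.replace('-', "--"))
    }

    fn main() {
        assert_eq!(
            mapper_node("pve-vg", "vm-100-disk-0"),
            "/dev/mapper/pve--vg-vm--100--disk--0"
        );
    }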
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
This lock is held during VM startup, so that concurrent calls will not
start a VM twice. But this means that the timeout needs to incorporate
the time it might take a VM to boot, so increase it quite a bit.
This could previously lead to "interrupted system call" errors when
accessing backups with many disks.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
The new kernel has stricter checks on tmpfs directories with the sticky bit
set, so some commands (e.g. proxmox-tape changer status) fail when executed
as root, because the permission check fails when locking the drive.
This patch moves the drive locks to /run/proxmox-backup/drive-lock.
Note: This is incompatible with the old locking mechanism, so users should
not run tape backups during the update (or a running backup can fail).
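Roughly, the lock path construction now looks like this (hypothetical helper
and layout below the lock directory; the real code may differ):

    use std::path::PathBuf;

    // build the per-drive lock file path below the run directory instead of
    // a sticky-bit tmpfs location
    fn drive_lock_path(drive: &str) -> PathBuf {
        PathBuf::from("/run/proxmox-backup/drive-lock").join(drive)
    }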
move key_derivation to pbs-datastore
pbs-api-types should only contain "basic" types which
* are usually required by clients
* don't depend on pbs-related code directly
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
These are mostly tokio-specific "hacks" or "workarounds" we
only really need/want in our binaries, without pulling them in
via our library crates.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
During startup most things happen within milliseconds (or less), so the
timestamp granularity of seconds made it hard to tell whether the previous
command required 990ms or 1ms, which is quite the difference in the
restore daemon context.
Using microseconds would not add much useful information; a millisecond is
already an acceptable lower resolution for logging, so switch only to
millis for now.
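A minimal sketch of a millisecond-granularity log prefix (not the actual
daemon code):

    use std::time::{SystemTime, UNIX_EPOCH};

    // format the current unix time with millisecond precision, e.g. as a
    // log line prefix
    fn log_prefix() -> String {
        let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
        format!("[{}.{:03}]", now.as_secs(), now.subsec_millis())
    }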
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
fixes file restore again.
The new Memcom tracking file lives in `/run/proxmox-backup` and is
always created on REST interaction, as CachedUserInfo uses it to
efficiently track config changes, and such a cache is used in each
REST handle_request.
Further, the Memcom infra expects the base PBS run dir to exist already,
which is an OK assumption to have, but the file-restore daemon runs in a
significantly more minimal environment where the run dir simply was not
needed so far; there /run isn't even a tmpfs yet.
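So the restore daemon now has to create that directory itself early on,
along the lines of (sketch only):

    use std::fs;

    fn main() -> std::io::Result<()> {
        // make sure the base run directory exists before any reused REST
        // code tries to create files below it
        fs::create_dir_all("/run/proxmox-backup")?;
        Ok(())
    }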
Fixes fda19dcc6f ("fix CachedUserInfo by using a shared memory version counter")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Parses JSON output from 'pvs' and 'lvs' LVM utils and does two passes:
one to scan for thinpools and create a device node for their
metadata_lv, and a second to load all LVs, thin-provisioned or not.
Should support every LV type that LVM supports, as we only parse the output
of the LVM tools and use 'vgscan --mknodes' to create the device nodes for us.
Produces a two-layer BucketComponent hierarchy with VGs followed by LVs;
PVs are mapped to their respective disk node.
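For illustration, the second pass boils down to something like this (a
simplified sketch assuming serde_json, not the actual implementation):

    use serde_json::Value;
    use std::process::Command;

    // list (vg_name, lv_name) pairs by parsing the JSON report of 'lvs'
    fn list_lvs() -> Result<Vec<(String, String)>, Box<dyn std::error::Error>> {
        let output = Command::new("lvs")
            .args(["--reportformat", "json", "-o", "vg_name,lv_name"])
            .output()?;
        let report: Value = serde_json::from_slice(&output.stdout)?;
        let mut lvs = Vec::new();
        for lv in report["report"][0]["lv"].as_array().into_iter().flatten() {
            lvs.push((
                lv["vg_name"].as_str().unwrap_or_default().to_string(),
                lv["lv_name"].as_str().unwrap_or_default().to_string(),
            ));
        }
        Ok(lvs)
    }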
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Prefix zpool mount paths to avoid clashing with other mount namespaces
(like LVM).
Also ignore "already-mounted" error and return it as success instead -
as we always assume that a mount path is unique, this is a safe
assumption, as nothing else could have been mounted here.
This fixes an issue where a mountpoint=legacy subvol might be available
on different disks, and thus have different Bucket instances that don't
share the mountpoint cache, which could lead to an error if the user
tried opening it multiple times on different disks.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
otherwise the path ends in an array ["foo", "bar"] instead of "foo/bar"
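i.e. roughly (illustrative only, made-up variable names):

    fn main() {
        let components = vec!["foo", "bar"];
        // join the components into a single path string instead of handing
        // back the raw array
        assert_eq!(components.join("/"), "foo/bar");
    }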
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
To support nested BucketComponents, it is necessary to dedup them, as
otherwise two components like:
/foo/bar
/foo/baz
will result in /foo being shown twice at the first hierarchy level.
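A rough sketch of the dedup on the first level (made-up names, not the real
BucketComponent type):

    use std::collections::BTreeSet;

    // collect the unique first path components of a set of nested paths
    fn first_level(paths: &[&str]) -> Vec<String> {
        let set: BTreeSet<String> = paths
            .iter()
            .filter_map(|p| p.trim_start_matches('/').split('/').next())
            .map(str::to_string)
            .collect();
        set.into_iter().collect()
    }

    fn main() {
        assert_eq!(first_level(&["/foo/bar", "/foo/baz"]), vec!["foo"]);
    }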
Also make the size property index-based and optional, as /foo in the
example above might not have a size, while bar and baz might have
differing sizes.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
These require mounting using the regular 'mount' syscall.
Auto-generates an appropriate mount path.
Note that subvols with mountpoint=none cannot be mounted this way, and
would require setting the mountpoint property, which is not possible as
the zpools have to be imported with readonly=on.
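Roughly (a sketch assuming the nix crate; the actual code may differ):

    use nix::mount::{mount, MsFlags};

    // mount a mountpoint=legacy ZFS dataset read-only below a generated path
    fn mount_legacy(dataset: &str, target: &str) -> nix::Result<()> {
        mount(
            Some(dataset),
            target,
            Some("zfs"),
            MsFlags::MS_RDONLY,
            None::<&str>,
        )
    }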
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Uses the ZFS utils to detect, import and mount zpools. These are
available as a new Bucket type 'zpool'.
Requires some minor changes to the existing disk and partition detection
code, so the ZFS-specific part can use the information gathered in the
previous pass to associate drive names with their 'drive-xxxN.img.fidx'
node.
For detecting the size, the zpool has to be imported. This is only done for
pools containing 5 or fewer disks, as anything else might take too long
(and should seldom be found within VMs).
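The import itself is essentially (simplified sketch, not the exact
invocation used here):

    use std::process::Command;

    // import a pool read-only, with its mount paths prefixed via an altroot
    fn import_readonly(pool: &str, mount_prefix: &str) -> std::io::Result<bool> {
        let status = Command::new("zpool")
            .args(["import", "-o", "readonly=on", "-R", mount_prefix, pool])
            .status()?;
        Ok(status.success())
    }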
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Even with best efforts at keeping it small, including the ZFS tools
in the initramfs seems to have exhausted the small overhead we had left
- give it a bit more RAM to compensate.
Also disable the ZFS ARC, as it's of no use in such a memory-constrained
environment, and we cache on the QEMU/Rust layer anyway.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
since the output:
Result: "<UPID>"
is not really interesting, instead show the task log while the
datastore is being created, since that now runs in a worker
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
admin/datastore only reads linearly, so there is no need for a cache (a
capacity of 1 basically means no cache except for the currently active
chunk).
mount can do random access too, so cache the last 8 chunks for a possible
mild performance improvement.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
when we remove a datastore via api/cli, the proxy sometimes has
leftover references to that datastore in its DATASTORE_MAP, which
include an open filehandle on the '.lock' file
this prevents unmounting/exporting the datastore even after removal;
only a reload/restart of the proxy helped
add a command to our command socket which removes all non-configured
datastores from the map, dropping the open filehandle
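In essence the new command does something like this (names simplified, not
the actual types):

    use std::collections::HashMap;

    struct DataStoreHandle; // stand-in for the real cached handle (holds the '.lock' fd)

    // drop all cached datastores whose name is no longer in the configuration,
    // which also closes their open lock file handles
    fn prune_datastore_map(map: &mut HashMap<String, DataStoreHandle>, configured: &[String]) {
        map.retain(|name, _| configured.contains(name));
    }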
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>