Allows fixed-index .img files (usually from VM backups) to be mapped
to a local loopback device.
The architecture uses a FUSE-backed temp file mapped to a loopdev:
/dev/loopX -> FUSE /run/pbs-loopdev/xxx -> backup client -> PBS
Since unmapping requires some cleanup (unmap the loopdev, stop FUSE,
remove the temp files), a special 'unmap' command is added. It uses a
PID file to send SIGINT to the backup-client instance started with
'map', which then handles the cleanup itself.
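
A rough sketch of the 'unmap' side, assuming a hypothetical PID file
path and using the nix crate for signalling:

    use std::fs;

    use nix::sys::signal::{kill, Signal};
    use nix::unistd::Pid;

    /// Read the PID written by 'map' and ask that instance to clean up.
    /// The pidfile path is hypothetical; the real client derives it per
    /// mapping.
    fn unmap(pidfile: &str) -> Result<(), Box<dyn std::error::Error>> {
        let pid: i32 = fs::read_to_string(pidfile)?.trim().parse()?;
        // SIGINT triggers the same cleanup as Ctrl-C on a foreground 'map'
        kill(Pid::from_raw(pid), Signal::SIGINT)?;
        Ok(())
    }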
The polling with select! in mount.rs needs to be split in two, since
there is a chicken-and-egg problem between running FUSE and setting up
the loop device: they have to run concurrently until the loopdev is
assigned, at which point we can report success and daemonize, and then
continue polling the FUSE loop future.
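
Roughly, the two-phase polling looks like this (the future names are
made up for illustration):

    use std::future::Future;

    use futures::{pin_mut, select, FutureExt};

    // `fuse_loop` serves the FUSE file, `setup_loopdev` resolves once
    // /dev/loopX has been attached to it - both names are hypothetical
    async fn map_image(
        fuse_loop: impl Future<Output = Result<(), std::io::Error>>,
        setup_loopdev: impl Future<Output = Result<String, std::io::Error>>,
    ) -> Result<(), std::io::Error> {
        let fuse_loop = fuse_loop.fuse();
        let setup_loopdev = setup_loopdev.fuse();
        pin_mut!(fuse_loop, setup_loopdev);

        // phase 1: poll both until the loopdev is assigned
        select! {
            res = fuse_loop => return res, // FUSE failed before mapping
            dev = setup_loopdev => {
                println!("mapped to {}", dev?);
                // here the real code reports success and daemonizes
            }
        }

        // phase 2: keep polling the FUSE loop future until unmapped
        fuse_loop.await
    }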
A loopdev module is added to tools containing all required functions for
mapping a loop device to the FUSE file, with the ioctls moved into an
inline module to avoid exposing them directly.
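
A condensed sketch of that mapping, with the ioctls kept in an inline
module as described (nix ioctl macros, anyhow for errors; details may
differ from the actual module):

    use std::fs::{File, OpenOptions};
    use std::os::unix::io::AsRawFd;

    use anyhow::Error;

    mod loop_ioctl {
        use nix::{ioctl_none, ioctl_write_int_bad, request_code_none};

        const LOOP_IOCTL_MAGIC: u8 = 0x4C; // 'L'
        // 0x4C82: LOOP_CTL_GET_FREE, 0x4C00: LOOP_SET_FD
        ioctl_none!(loop_ctl_get_free, LOOP_IOCTL_MAGIC, 0x82);
        ioctl_write_int_bad!(loop_set_fd, request_code_none!(LOOP_IOCTL_MAGIC, 0x00));
    }

    /// Attach `file` to the first free loop device, return its path.
    pub fn map_loop_device(file: &File) -> Result<String, Error> {
        let ctrl = OpenOptions::new().read(true).write(true)
            .open("/dev/loop-control")?;
        let num = unsafe { loop_ioctl::loop_ctl_get_free(ctrl.as_raw_fd())? };
        let path = format!("/dev/loop{}", num);
        let dev = OpenOptions::new().read(true).write(true).open(&path)?;
        unsafe { loop_ioctl::loop_set_fd(dev.as_raw_fd(), file.as_raw_fd())? };
        Ok(path)
    }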
The client code is placed in the 'mount' module, which, while
admittedly a loose fit, allows reuse of the daemonizing code.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
while restructuring the docs, the explicit title wasn't included in
the correct file
fixes commit 04e24b14f0
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
previously it only looked for the first instance. this behavior
became an issue when trying to add multiple onlineHelp buttons
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
if the archive file does not exist yet, we cannot rotate it, but it's not
actually an error, so just return Ok(false) to indicate no rotation took
place
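
the gist, as a standalone sketch (the actual rotation code does more,
e.g. handling multiple generations):

    use std::io;
    use std::path::Path;

    /// rotate `path` to `path.1`; Ok(false) means nothing to rotate
    fn rotate(path: &Path) -> Result<bool, io::Error> {
        if !path.exists() {
            // no archive written yet - not an error, just nothing to do
            return Ok(false);
        }
        let mut target = path.as_os_str().to_owned();
        target.push(".1");
        std::fs::rename(path, target)?;
        Ok(true)
    }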
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This removes the "Backup Management" first level heading in the docs,
and either uses the sub headings contained within it as first level
headings, or groups previous sections logically under new headings.
The administration-guide.rst file is also removed. Its contents are
instead separated into various files, that relate to their respective
first level heading.
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
while we probably do not add much more to them, it still looks ugly.
If this was done so that adding a World readable API call is "hard"
and not done by accident, it should rather be done as a test at build
time. But, IMO, the API permission schema definitions are easy to
review, and not often changed/added - so any wrong World readable API
call will normally still be caught.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In theory, one can call std::mem::forget and thereby skip the drop
handler. Combined with the lifetime hack, this could result in a
crash. So we simply require a 'static lifetime now (futures also need
that).
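
A contrived example of the hazard:

    struct Guard<'a> {
        data: &'a mut Vec<u8>,
    }

    impl Drop for Guard<'_> {
        fn drop(&mut self) {
            self.data.clear(); // cleanup that unsafe code might rely on
        }
    }

    fn main() {
        let mut buf = vec![1, 2, 3];
        let guard = Guard { data: &mut buf };
        // Safe code may skip Drop entirely, so no cleanup ever runs.
        // Anything unsafe that assumed the drop handler restored an
        // invariant (as the lifetime hack did) can then crash.
        std::mem::forget(guard);
    }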
Causes a panic if last_update is smaller than RRD_DATA_ENTRIES*reso,
which (I believe) can happen when inserting the first value for a DB.
Clamp the value to 0 in that case.
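
The clamping boils down to a saturating subtraction (names and the
entry count here are illustrative):

    const RRD_DATA_ENTRIES: u64 = 70; // illustrative value

    fn window_start(last_update: u64, reso: u64) -> u64 {
        // clamp to 0 instead of panicking on underflow
        last_update.saturating_sub(RRD_DATA_ENTRIES * reso)
    }

    fn main() {
        // first value for a DB: last_update may predate a full window
        assert_eq!(window_start(100, 60), 0);
    }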
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Fix a potential bug where errors that happen after the SendHandle has
been dropped while doing the thread join might have been ignored.
Requires internal check_abort to be moved out of 'impl SendHandle' since
we only have the Mutex left, not the SendHandle.
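
As a sketch of the resulting shape (not the actual code), check_abort
becomes a free function over the shared error state:

    use std::sync::Mutex;

    use anyhow::{bail, Error};

    // free function instead of a method on SendHandle: after the handle
    // is dropped, the join path still holds the Arc<Mutex<..>> and can
    // surface errors raised late by the worker threads
    fn check_abort(abort: &Mutex<Option<String>>) -> Result<(), Error> {
        if let Some(msg) = abort.lock().unwrap().as_deref() {
            bail!("worker thread failed: {}", msg);
        }
        Ok(())
    }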
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
This can slow things down by a lot on setups with (relatively) high
seek time, in the order of doubling the backup times if the cache
isn't populated with the last backup's chunk inode info.
Effectively there's nothing known this protects us from in the
codebase. The only thing which was theorized about was the case
where a really long running backup job (over 24 hours) is still
running and writing new chunks, not indexed yet anywhere, then an
update (or manual action) triggers a reload of the proxy. There was
some theory that then a GC in the new daemon would not know about the
oldest writer in the old one, and thus use a less strict atime limit
for chunk sweeping - opening up a window for deleting chunks from the
long running backup.
But this simply cannot happen, as we have a per-datastore,
process-wide flock, which is acquired shared by backup jobs and
exclusive by GC. Within the same process, GC and backup can both get
it, as it has process locking granularity. If there's an old daemon
with a writer, that daemon also holds the lock shared, so no GC in the
new process can get exclusive access to it.
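
Sketched with flock(2) via the nix crate (helpers are illustrative):

    use std::fs::File;
    use std::os::unix::io::AsRawFd;

    use nix::fcntl::{flock, FlockArg};

    // Backup writers take the per-datastore lock shared, GC takes it
    // exclusive. The process acquires the lock once and shares it among
    // its workers, but a writer in the old, pre-reload daemon still
    // holds it shared and blocks any exclusive GC lock in the new one.
    fn lock_for_backup(lockfile: &File) -> nix::Result<()> {
        flock(lockfile.as_raw_fd(), FlockArg::LockShared)
    }

    fn lock_for_gc(lockfile: &File) -> nix::Result<()> {
        flock(lockfile.as_raw_fd(), FlockArg::LockExclusive)
    }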
So, with that confirmed, we have no need for a "half-assed"
verification in the backup finish step. Rather, we plan to add an
opt-in "full verify each backup on finish" option (see #2988).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
use our hostport regexes to parse out a potential port from the host field
and send it individually
this makes for a simpler and cleaner ui
this additionally checks the field for valid input before sending it to
the backend
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We forgot to put parentheses around the DNS_NAME regex, and in
DNS_NAME_OR_IP_REGEX. This is wrong because the regex
^foo|bar$
matches 'foo' at the beginning and 'bar' at the end, so both
foobaz
bazbar
would match. Only
^(foo|bar)$
matches exactly 'foo' and 'bar'.
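
the difference, demonstrated with the regex crate:

    use regex::Regex;

    fn main() {
        // without a group, the anchors bind to the alternatives separately
        let ungrouped = Regex::new(r"^foo|bar$").unwrap();
        assert!(ungrouped.is_match("foobaz")); // '^foo' matches the prefix
        assert!(ungrouped.is_match("bazbar")); // 'bar$' matches the suffix

        // with a group, the anchors apply to the whole alternation
        let grouped = Regex::new(r"^(foo|bar)$").unwrap();
        assert!(grouped.is_match("foo") && grouped.is_match("bar"));
        assert!(!grouped.is_match("foobaz") && !grouped.is_match("bazbar"));
    }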
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
* add square brackets to IPv6 addresses in BackupRepository if they
  don't already have some (we save them without brackets in the remote
  config) - see the sketch after this list
* in get_pull_parameters, we now create a BackupRepository first and
  use its values (which does the [] mapping); this also has the
  advantage that we have one place fewer where we hardcode 8007 as the
  port
* in the UI, add square brackets to IPv6 addresses for remotes
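
a sketch of the bracket mapping (hypothetical helper name):

    // hosts are stored without brackets in the remote config, so add
    // them for IPv6 addresses before building "host:port" strings
    fn format_host(host: &str) -> String {
        // a colon can only come from an IPv6 address here, since the
        // port is stored separately
        if host.contains(':') && !host.starts_with('[') {
            format!("[{}]", host)
        } else {
            host.to_string()
        }
    }

    fn main() {
        assert_eq!(format_host("fe80::1"), "[fe80::1]");
        assert_eq!(format_host("192.168.1.1"), "192.168.1.1");
        assert_eq!(format_host("[fe80::1]"), "[fe80::1]");
    }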
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by setting maxHeight+scrollable
(I used 500px so it is still visible on our 'min screen size' of
1280x720), and by disabling emptyText deferral, which now shows the
text instantly
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when upgrading from a version where we stored all tasks in the
'active' file, we did not completely account for finished tasks still
in there. we should update the file when encountering any finished
task in 'active', as well as filter them out on the API call (in case
they get through)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this adds the ability to specify port numbers in the backup repo spec
as well as in remotes, so that users who are behind a
NAT/firewall/reverse proxy can still use it
also adds some explanation and examples to the docs to make it clearer
for the h2 client I left the localhost:8007 part, since where we bind
to is not configurable
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
When creating a new zpool for a datastore, also instantiate an
import-unit for it. This helps in cases where '/etc/zfs/zpool.cache'
gets corrupted and thus the pool is not imported upon boot.
This patch needs the corresponding addition of 'zfs-import@.service' in
the zfsonlinux repository.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
we need this because we append the port to the address to get a
target URL, e.g. we print
format!("https://{}:8007/", address)
if the address is an IPv6 address (e.g. fe80::1), it would become
https://fe80::1:8007/, where fe80::1:8007 is a valid IPv6 address on
its own. by using square brackets we get
https://[fe80::1]:8007/, which connects to the correct IP and port
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since len() and MAX_INDEX_TASKS are both usize, subtracting them
underflows instead of producing negative values
instead, check the sizes and set them accordingly
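
a sketch of the guarded computation (names mirror the description
above; the real code differs):

    const MAX_INDEX_TASKS: usize = 1000; // illustrative value

    // usize subtraction panics in debug builds and wraps in release
    // builds instead of going negative, so compare before subtracting
    fn start_and_count(len: usize) -> (usize, usize) {
        if len > MAX_INDEX_TASKS {
            (len - MAX_INDEX_TASKS, MAX_INDEX_TASKS) // keep newest entries
        } else {
            (0, len)
        }
    }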
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>