this is intended to be a generic helper to (de)serialize job states
(e.g., sync, verify, and so on)
writes a json file into '/var/lib/proxmox-backup/jobstates/TYPE-ID.json'
the api creates the directory with the correct permissions, like
the rrd directory
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
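A minimal sketch of what such a helper could look like (the JobState fields and helper names are illustrative, not the actual proxmox-backup code):

```rust
use serde::{Deserialize, Serialize};
use std::path::PathBuf;

// illustrative state payload; the real helper is generic over job types
#[derive(Serialize, Deserialize)]
struct JobState {
    state: String,
    endtime: i64,
}

// derive the per-job file path from the job type and id
fn state_path(jobtype: &str, id: &str) -> PathBuf {
    PathBuf::from(format!(
        "/var/lib/proxmox-backup/jobstates/{}-{}.json",
        jobtype, id
    ))
}

fn save_state(jobtype: &str, id: &str, state: &JobState) -> std::io::Result<()> {
    let data = serde_json::to_vec(state)?; // serde_json::Error converts into io::Error
    std::fs::write(state_path(jobtype, id), data)
}

fn load_state(jobtype: &str, id: &str) -> std::io::Result<JobState> {
    let data = std::fs::read(state_path(jobtype, id))?;
    Ok(serde_json::from_slice(&data)?)
}
```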
the endtime should be the timestamp of the last log line,
or, if there is no log at all, the starttime
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
representing a state via an enum makes more sense in this case
we also implement FromStr and Display to make it easy to convert from/to
a string
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
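A minimal sketch of the pattern (variant names are illustrative):

```rust
use std::fmt;
use std::str::FromStr;

#[derive(Debug, PartialEq)]
enum TaskState {
    Ok,
    Error,
}

impl FromStr for TaskState {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "OK" => Ok(TaskState::Ok),
            "ERROR" => Ok(TaskState::Error),
            other => Err(format!("unknown state: {}", other)),
        }
    }
}

impl fmt::Display for TaskState {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            TaskState::Ok => write!(f, "OK"),
            TaskState::Error => write!(f, "ERROR"),
        }
    }
}
```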
To prevent forgetting the base snapshot of a running backup, and to
catch the case when it still happens (e.g. via manual rm), at least
error out instead of storing a potentially invalid backup.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
This reverts commit d53fbe2474.
The HashSet and "register" function are unnecessary, as we already know
which backup is the one we need to check: the last one, stored as
'last_backup'.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
An flock on the snapshot dir itself is used in addition to the group dir
lock. The lock is used to avoid races with forget and prune, while
having more granularity than the group lock (i.e. the group lock is
necessary to prevent more than one backup per group, but the snapshot
lock still allows backups unrelated to the currently running one to be
forgotten/pruned).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
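A minimal sketch of taking such a per-snapshot lock with a non-blocking exclusive flock (using the nix crate's flock wrapper; names are illustrative, not the real locking code):

```rust
use std::fs::File;
use std::os::unix::io::AsRawFd;
use std::path::Path;

use nix::fcntl::{flock, FlockArg};

// lock a snapshot directory; the lock is held as long as the returned
// File is kept alive, and released automatically when it is dropped
fn lock_snapshot_dir(path: &Path) -> Result<File, Box<dyn std::error::Error>> {
    let file = File::open(path)?;
    // fail immediately instead of blocking if forget/prune holds the lock
    flock(file.as_raw_fd(), FlockArg::LockExclusiveNonblock)?;
    Ok(file)
}
```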
Attempt to lock the backup directory to be deleted; if that works, keep
the lock until the deletion is complete. This way we ensure that no other
locking operation (e.g. using a snapshot as base for another backup) can
happen concurrently.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
an encrypted Index should never reference a plain-text chunk, and an
unencrypted Index should never reference an encrypted chunk.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
these checks were already in place for regular downloading of backed-up
files; also do them when attempting to decode a catalog, or when
downloading decoded files referenced by a pxar index.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
If the datastore holds broken backups for some reason, do not attempt to
base following snapshots on those. This would lead to an error on
/previous, leaving the client no choice but to upload all chunks, even
though there might be potential for incremental savings.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
When uploading an RSA encoded key alongside the backup,
the backup would fail with the error message: "wrong blob
file extension".
Adding the '.blob' extension to rsa-encrypted.key before the call
to upload_blob_from_data(), rather than after, fixes the issue.
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
Commit 9fa55e09 "finish_backup: test/verify manifest at server side"
moved the finished-marking above some checks, which means if those fail
the backup would still be marked as successful on the server.
Revert that part and comment the line for the future.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
instead of bailing and stopping the entire GC process, warn about the
missing chunks and continue.
this results in "TASK WARNINGS: X" as the status.
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Used chunks are marked in phase1 of the garbage collection process by
using the atime property. Each used chunk gets touched so that the atime
gets updated (if older than 24h, see relatime).
Should there ever be a situation in which the phase1 in the GC run needs
a very long time to finish, it could happen that the grace period
calculated in phase2 is not long enough and thus the marking of the
chunks (atime) becomes invalid. This would result in the removal of
needed chunks.
Even though the likelihood of this happening is very low, using the
timestamp from right before phase1 starts to calculate the grace
period in phase2 should avoid this situation.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
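A minimal sketch of the idea (function names are illustrative, not the real GC code): the reference time is captured before phase1 starts, so a long-running phase1 cannot shrink the effective grace period.

```rust
use std::time::{Duration, SystemTime};

fn garbage_collect() {
    // take the timestamp before marking starts
    let phase1_start = SystemTime::now();

    mark_used_chunks(); // phase 1: touch (atime) every referenced chunk

    // phase 2: only sweep chunks whose atime is older than the grace
    // period relative to when phase 1 *began*, not to the current time
    let grace = Duration::from_secs(24 * 3600 + 5 * 60); // illustrative value
    let cutoff = phase1_start - grace;
    sweep_unused_chunks(cutoff);
}

fn mark_used_chunks() { /* walk all indexes, touch each referenced chunk */ }
fn sweep_unused_chunks(_cutoff: SystemTime) { /* remove chunks with atime < cutoff */ }
```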
and not just of previously synced ones.
we can't use BackupManifest::verify_file as the archive is still stored
under the tmp path at this point.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
for encrypted chunks this is currently not possible, as we need the key
to decode the chunk.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
regular chunks are only decoded when their contents are accessed, in
which case we need to have the key anyway and want to verify the digest.
for blobs we need to verify beforehand, since their checksums are always
calculated based on their raw content, and stored in the manifest.
manifests are also stored as blobs, but don't have a digest in the
traditional sense (they might have a signature covering parts of their
contents, but that is verified already when loading the manifest).
this commit does not cover pull/sync code which copies blobs and chunks
as-is without decoding them.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Errors while applying metadata will not be considered fatal
by default using `pxar extract` unless `--strict` was passed
in which case it'll bail out immediately.
It'll still return an error exit status if something
failed along the way.
Note that most other errors will still cause it to bail out
(e.g. errors creating files, or I/O errors while writing
the contents).
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
'username' here is without realm, but we really want to use user@realm
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
tokio's kill_on_drop sometimes leaves zombies around, especially
when there is no other tokio::process::Command spawned afterwards.
so instead of relying on the 'kill_on_drop' feature, we explicitly
kill the child on a worker abort. to be able to do this
we have to use 'tokio::select' instead of 'futures::select', since
the latter requires the future to be fused, which consumes the
child handle, leaving us no possibility to kill it after fusing.
(tokio::select does not need the futures to be fused, so we
can reuse the child future after the select again)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
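A minimal sketch of the pattern, assuming a current tokio (the abort signal is modeled here as a oneshot channel; the real worker abort mechanism differs):

```rust
use tokio::process::Command;
use tokio::sync::oneshot;

async fn run_with_abort(mut abort_rx: oneshot::Receiver<()>) -> std::io::Result<()> {
    let mut child = Command::new("sleep").arg("60").spawn()?;

    // tokio::select! polls both futures without requiring them to be
    // fused, so the child handle stays usable after the select
    tokio::select! {
        status = child.wait() => {
            println!("child exited: {:?}", status?);
        }
        _ = &mut abort_rx => {
            // explicitly kill instead of relying on kill_on_drop
            child.kill().await?;
        }
    }
    Ok(())
}
```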
so that we can print a list at the end of the worker of which backups
are corrupt.
this is useful if there are many snapshots and some in between had an
error. Before this patch, the task log simply says to 'look in the logs'
but if the log is very long it makes it hard to see what exactly failed.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this makes it easier to see which chunks are corrupt
(and enables us in the future to build a 'complete' list of
corrupt chunks)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The extraction algorithm has a state (bool) indicating
whether we're currently in a positive or negative match.
This has always been initialized to true at the beginning,
but when the user provides a `--pattern` argument we need to
start out with a negative match.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This should never trigger if everything else works correctly, but it is
still a very cheap check to avoid wrongly marking a backup as "OK" when
in fact some chunks might be missing.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Multiple backups within one backup group don't really make sense, but
break all sorts of guarantees (e.g. a second backup started after a
first would use a "known-chunks" list from the previous unfinished one,
which would be empty - but using the list from the last finished one is
not a fix either, as that one could be deleted or pruned once the first
simultaneous backup is finished).
Fix it by only allowing one backup per backup group at a time. This is
done via a flock on the backup group directory, thus remaining intact
even after a reload.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
To prevent a race with a background GC operation, do not allow deletion
of backups whose index might currently be referenced as the "known chunk
list" for successive backups. Otherwise the GC could delete chunks it
thinks are no longer referenced, while at the same time telling the
client that it doesn't need to upload said chunks because they already
exist.
Additionally, prevent deletion of whole backup groups, if there are
snapshots contained that appear to be currently in-progress. This is
currently unlikely to trigger, as that function is only used for sync
jobs, but it's a useful safeguard either way.
Deleting a single snapshot has a 'force' parameter, which is necessary
to allow deleting incomplete snapshots on an aborted backup. Pruning
also sets force=true to avoid the check, since it calculates which
snapshots to keep on its own.
To avoid code duplication, the is_finished method is factored out.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Also swap the order of a couple of `.map_err().await` to
`.await.map_err()` since that's generally more efficient.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
a blob can be empty (e.g. an empty pct fw conf), so we
have to set the minimum size to the header size
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
And make verify_crc private for now. We always call load_from_reader() to
verify the CRC.
Also add load_chunk() to datastore.rs (from chunk_store::read_chunk())
is a helper to spawn an internal tokio task without it showing up
in the task list
it is still tracked for reload and notifies the last_worker_listeners
this enables the console to survive a reload of proxmox-backup-proxy
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
using micros vs. as_secs_f64 allows having it calculated as usize
bytes, which is easier to handle - this was also used when it still lived in
upload_chunk_info_stream
Co-authored-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
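A minimal sketch of the integer-only calculation (illustrative, not the actual upload_chunk_info_stream code):

```rust
use std::time::Duration;

// bytes per second computed entirely in integer arithmetic
fn bytes_per_second(bytes: usize, elapsed: Duration) -> usize {
    let micros = elapsed.as_micros().max(1) as usize; // avoid division by zero
    bytes.saturating_mul(1_000_000) / micros
}
```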
Even though it has nothing to do with vnc, we keep the name of the api
call for compatibility with our xtermjs client.
termproxy:
verifies that the user is allowed to open a console and starts
termproxy with the correct parameters
starts a TcpListener on "localhost:0" so that the kernel decides the
port (instead of trying to reserve one like in pve). Then it
leaves the fd open for termproxy and gives the number as port
and tells it via '--port-as-fd' that it should interpret this
as an open fd
the vncwebsocket api call checks the 'vncticket' (name for compatibility)
and connects the remote side (after an Upgrade) with a local TcpStream
connecting to the port given via WebSocket from the proxmox crate
to make sure that only the client that called termproxy can connect,
and that no one can connect to an arbitrary port on the host, we have
to include the port in the ticket data
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
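A minimal sketch of the fd-passing idea (the termproxy argument order and flag are taken from the commit message, error handling is simplified; the fcntl call is needed because Rust opens sockets with close-on-exec set):

```rust
use std::net::TcpListener;
use std::os::unix::io::IntoRawFd;
use std::process::Command;

use nix::fcntl::{fcntl, FcntlArg, FdFlag};

fn spawn_termproxy() -> Result<(), Box<dyn std::error::Error>> {
    // port 0: let the kernel pick a free port
    let listener = TcpListener::bind("localhost:0")?;
    println!("kernel picked port {}", listener.local_addr()?.port());

    let fd = listener.into_raw_fd();
    // clear FD_CLOEXEC so the child actually inherits the listener fd
    fcntl(fd, FcntlArg::F_SETFD(FdFlag::empty()))?;

    Command::new("termproxy")
        .arg(fd.to_string())
        .arg("--port-as-fd") // interpret the number as an open fd, not a port
        .spawn()?;
    Ok(())
}
```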
modeled after pve's/pmg's vncticket (i substituted the vnc with term)
by putting the path and username as secret data in the ticket
when sending the ticket to /access/ticket it only verifies it,
checks the privs on the path and does not generate a new ticket
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of exposing handlebars itself, offer a register_template and
a render_template ourselves.
render_template checks if the template file was modified since
the last render and reloads it when necessary
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Depends on patched apt-pkg-native-rs. Changelog-URL detection is
inspired by PVE perl code for now, though marked with fixme to use 'apt
changelog' later on, if/when our repos have APT-compatible changelogs
set up.
list_installed_apt_packages iterates all packages and creates an
APTUpdateInfo with detailed information for every package matched by the
given filter Fn.
Sadly, libapt-pkg has some questionable design choices regarding its
use of 'iterators', which means quite a bit of nesting...
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
useful to get info like whether the previous snapshot was encrypted,
for libproxmox-backup-qemu
Requested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
I mean the user expects that we know what archives, fidx or didx, are
in a backup, so this is internal info and should not be logged by
default
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Verbosity needs to be a non-binary level, as this now is just
debug/development info, which is normally too much for end users.
We want to have it available, but with a much higher verbosity level.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Track reused size and chunk counts.
Log reused size and use pretty print for all sizes and bandwidth
metrics.
Calculate speed over the actually uploaded size, as otherwise it can be
skewed really badly (showing something like terabytes per second).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Requires updating the AsyncRead implementation to cope with byte-wise
seeks to intra-chunk positions.
Uses chunk_from_offset to get locations within chunks, but tries to
avoid it for sequential read to not reduce performance from before.
AsyncSeek needs to use the temporary seek_to_pos to avoid changing the
position in case an invalid seek is given and it needs to error in
poll_complete.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
We support using an ext4 mountpoint directly as datastore and even do
so ourself when creating one through the disk manage code.
Such ext4 mountpoints have a lost+found directory which only root can
traverse into. As the GC's listing of images is done as the
backup:backup user, walkdir gets an error.
We cannot ignore just all permission errors, as they could lead to
missing some backup indexes and thus possibly sweeping more chunks
than desired. While *normally* that should not happen through our
stack, we already had users report that they did rsyncs to move a
datastore from an old to a new server and got the permissions wrong.
So for now be still very strict, only allow a "lost+found" directory
as immediate child of the datastore base directory, nothing else.
If deemed safe, this can always be made less strict. Possibly by
filtering the known backup-types on the highest level first.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
otherwise we leak those descriptors and run into EMFILE when a backup
group contains many snapshots.
fcntl::openat and Dir::openat are not the same ;)
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
also when they have been removed/forgotten since we retrieved the
snapshot list for the currently syncing backup group.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
This aligns it with PVE and allows the widget toolkit's update window
"refresh" to work without modifications once POST /apt/update is
implemented.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
While we do not yet support the date specs for CalendarEvent, the left
out "weekly" special expression[0] does not require that support.
It is specified to be equivalent to `Mon *-*-* 00:00:00` [0] and
this can be implemented with the weekday and time support we already
have.
[0]: https://www.freedesktop.org/software/systemd/man/systemd.time.html#Calendar%20Events
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
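A minimal sketch of the expansion (illustrative; the real implementation parses directly into the existing weekday/time fields):

```rust
// map the "weekly" shorthand onto the spec it is defined to equal
fn expand_special(event: &str) -> Option<&'static str> {
    match event {
        "weekly" => Some("Mon *-*-* 00:00:00"),
        _ => None,
    }
}
```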
when a datastore has enough data to calculate the estimated full date,
but always has exactly the same usage, the factor b of the regression
is '0'
return 0 for that case so that the gui can show 'never' instead of
'not enough data'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
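A minimal sketch of the special case, assuming a linear fit usage(t) = a + b*t (names and the "0 means never" convention are taken from the commit message):

```rust
// returns the estimated epoch at which the store is full, or 0 for "never"
fn estimate_full_epoch(a: f64, b: f64, total: f64) -> i64 {
    if b == 0.0 {
        // usage is flat: it will never fill up, the GUI renders "never"
        return 0;
    }
    ((total - a) / b) as i64
}
```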
JSON keys MUST be quoted. this is a one-time break in signature
validation for backups created with the broken canonicalization code.
QEMU backups are not affected, as libproxmox-backup-qemu never linked
the broken versions.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Make it actually do the correct cast by using `libc::c_char`.
Fixes issues when building on other platforms, e.g., the aarch64
client-only build on Arch Linux ARM I tested in my free time.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Especially helpful for requests not coming from browsers (where the
URL is normally easy to find out).
Makes it easier to detect if one triggered a request with an old
client, or so.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
it is nice to have a command to exit from the shell instead of
only allowing ctrl+d or ctrl+c
the api method is just for documentation/help purposes and does nothing
by itself, the real logic is directly in the read loop
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Change the .pxarexclude parser to byte based parsing with
`.split(b'\n')` instead of `.lines()`, to not panic on
non-utf8 paths.
Specially deal with absolute paths by prefixing them with
the current directory.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
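A minimal sketch of the byte-based splitting (the real parser does more, e.g. the absolute-path handling mentioned above):

```rust
// split on raw newline bytes so non-UTF-8 path entries never force a
// (possibly panicking) UTF-8 conversion
fn exclude_lines(content: &[u8]) -> impl Iterator<Item = &[u8]> {
    content
        .split(|&b| b == b'\n')
        .filter(|line| !line.is_empty() && line[0] != b'#')
}
```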
As else this is really user-unfriendly, and not printing it has no
advantage. If one doesn't want to leak resource existence, they just
need to *always* check permissions before checking if the requested
resource exists; if that's not done, one can leak information also
without getting the path returned (as the system will either print
"resource doesn't exist" or "no permissions" respectively).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When creating a new datastore the basedir is only owned by the backup
user if it did not exist beforehand (create_path chowns only if it
creates the directory, and returns false if it did not create it).
This improves the experience when adding a new datastore on a fresh
disk or existing directory (not owned by backup) - backups/pulls can
be run instead of terminating with EPERM.
Tested on my local test install with a new disk, and an existing directory.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
it does not make sense to check if the worker is running if we already
have an endtime and state
our 'worker_is_active_local' heuristic returns true for tasks that are
not process-local, so we got 'running' for all tasks that were not
started by 'our' pid and were still running.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the **/ is not required and currently also mistakenly
doesn't match /lost+found, which is probably a bug on the
pathpatterns crate side and needs fixing there
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
zfs does not have to be installed, so simply log an error and
continue, users still get an error when clicking directly on
ZFS
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
not exhaustive of what zfs allows (space is missing), but this
can be done easily without problems
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
those names are allowed for zpools
these will fail for now, but it will be fixed in the next commit
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
* don't clone hash keys, just use references
* we don't need a String; stick to Vec<u8> and use
  serde_json::to_writer to avoid temporary strings
  altogether
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
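A minimal sketch of the Vec<u8> approach (illustrative; the real code also canonicalizes the output, which plain serde_json::to_writer does not do):

```rust
use serde_json::Value;

// serialize straight into a byte buffer, skipping the String round-trip
fn value_to_bytes(value: &Value) -> serde_json::Result<Vec<u8>> {
    let mut buf = Vec::new();
    serde_json::to_writer(&mut buf, value)?;
    Ok(buf)
}
```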
This is a more convenient way to pass along the key when
creating encrypted backups of unprivileged containers in PVE
where the unprivileged user namespace cannot access
`/etc/pve/priv`.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Have a single common function to get the BaseDirectories
instance and a wrapper for `find()` and `place()` which
wrap the error with some context.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
place() is used when creating a file, as it will create
intermediate directories; only use it when actually placing
a new file.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This also replaces the recently introduced --encryption
parameter on the client with a --crypt-mode parameter.
This can be "none", "encrypt" or "sign-only".
Note that this introduces various changes in the API types
which previously did not take the above distinction into
account properly:
Both `BackupContent` and the manifest's `FileInfo`:
* lose `encryption: Option<bool>`
* gain `crypt_mode: Option<CryptMode>`
Within the backup manifest itself, the "crypt-mode" property
will always be set.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
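A minimal sketch of the mode type (assuming the serialized names match the CLI values "none", "encrypt" and "sign-only"):

```rust
use serde::{Deserialize, Serialize};

#[derive(Clone, Copy, Debug, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
enum CryptMode {
    None,     // "none"
    Encrypt,  // "encrypt"
    SignOnly, // "sign-only"
}
```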
This can be used to explicitly disable encryption even if a
default key file exists in ~/.config.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
instead of checking for '1' or 'true', check that the parameter is
present and not '0' or 'false'. this allows simply using
https://foo:8007/?debug
instead of
https://foo:8007/?debug=1
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
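A minimal sketch of the lenient check (illustrative, not the actual parameter parsing):

```rust
// a bare `?debug` arrives as an empty value and counts as enabled;
// only explicit "0" or "false" disable it
fn debug_enabled(param: Option<&str>) -> bool {
    match param {
        Some("0") | Some("false") | None => false,
        Some(_) => true,
    }
}
```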
And use it for fixed and dynamic index. Please note that this
changes checksums for fixed indexes, so restoring older backups
will fail now (not backward compatible).
To support incremental backups (where not all chunks are sent to the
server), a new parameter "reuse-csum" is introduced on the
"create_fixed_index" API call. When set and equal to last backups'
checksum, the backup writer clones the data from the last index of this
archive file, and only updates chunks it actually receives.
In incremental mode some checks usually done on closing an index cannot
be made, since they would be inaccurate.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
if *only* data chunks are registered (high chance during incremental
backup), then chunk_count might be one lower than upload_stat.count
because of the zero chunk being unconditionally uploaded but not used.
Thus when subtracting the two, an overflow would occur.
In general, don't let the client make the server panic; instead just
set duplicates to 0.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
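A minimal sketch of the defensive calculation (names illustrative):

```rust
// the zero chunk may be uploaded without ever being referenced, so the
// uploaded count can exceed the registered chunk count by one; clamp to
// zero instead of panicking on integer underflow
fn duplicate_chunks(chunk_count: u64, uploaded_count: u64) -> u64 {
    chunk_count.saturating_sub(uploaded_count)
}
```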
and if it does, bail, because otherwise we would get an
error on mounting and have a zpool that is not imported
and disks that are used
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
returns the dir listing of the given filepath of the backup snapshot
the filepath has to be base64 encoded or 'root'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we want to get a string representation of the DirEntryAttribute
like 'f' for file, etc. and since we have such a mapping already
in the CatalogEntryType, use that
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
mostly copied from BufferedDynamicReadAt from proxmox-backup-client
but the reader is wrapped in an Arc in addition to the Mutex
we will use this for local access to a pxar behind a didx file
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We get the path to our executable via a readlink() on
"/proc/self/exe", which appends a " (deleted)" during
package reloads.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
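A minimal sketch of reading and cleaning the link target (illustrative; non-UTF-8 paths are glossed over via to_string_lossy here):

```rust
use std::path::PathBuf;

fn current_exe_path() -> std::io::Result<PathBuf> {
    let path = std::fs::read_link("/proc/self/exe")?;
    // during a package reload the kernel reports "/path/to/exe (deleted)"
    let cleaned = path.to_string_lossy().trim_end_matches(" (deleted)").to_string();
    Ok(PathBuf::from(cleaned))
}
```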
implements AsyncRead as well as Stream for an IndexFile and a store
that implements AsyncReadChunk
we can use this to asyncread or stream the content of a FixedIndex or
DynamicIndex
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we want to save whether a file of a backup is encrypted, so that we can
* show that info on the gui
* can later decide if we need to decrypt the backup
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
does the same, except for the manual drop, but that's handled there by
letting the value go out of scope
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
copy_nonoverlapping is basically a memcpy; the same can be done
via copy_from_slice, which is not unsafe
(copy_from_slice uses copy_nonoverlapping internally)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
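A minimal sketch of the safe replacement:

```rust
// safe equivalent of ptr::copy_nonoverlapping for slices; panics if the
// destination range and source length disagree, instead of invoking UB
fn fill_prefix(dst: &mut [u8], src: &[u8]) {
    dst[..src.len()].copy_from_slice(src);
}
```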
These aren't installed and are only used for manual testing,
so there's no reason to force them to be built all the time.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
for now mostly copy/paste from nodes/nodename/tasks
(without the parameters)
but we should replace the 'read_task_list' with a method
that gives us the tasks since some timestamp
so that we can get a longer list of tasks than for the node
(we could of course embed this then in the nodes/node/task api call and
remove this again as long as the api is not stable)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
The download methods used to take the destination by value
and return them again, since this was required when using
combinators before we had `async fn`.
But this is just an ugly left-over now.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
else we start a dynamic writer and never close it, leading to a backup error
this fixes an issue with backing up vm templates
(and possibly vms without disks)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
we check if all dynamic_writers are closed and if the backup contains
any valid files; we can only mark the backup finished after those
checks, else the backup task gets marked as OK even though it
is not finished and no cleanups run
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
and drop the now unused extract_lists function
this also fixes a bug where we did not add the datastore to the list at
all when there was no rrd data
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
there is now an 'extract_cached_data' which just returns
the data of the specified field, and an api function that converts
a list of fields to the correct serde value
this way we do not have to create a serde value in rrd/cache.rs
(makes for a better interface)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Report vanished files (instead of erroring out on them),
also only warn about files inaccessible due to permissions
instead of bailing out.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This used to be default-off and was accidentally set to
on-by-default with the pxar crate update.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
returns a list of the datastores and their usages, a list of usages of
the past month (for the gui) and an estimation of when it's full
(using the linear regression)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
provides some basic statistics functions (sum, mean, etc.)
and a function to return the parameters of the linear regression of
two variables
implemented using num_traits to be more flexible for the types
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
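A minimal sketch of the least-squares fit y = a + b*x returning (a, b); the real helper is generic via num_traits, this sketch sticks to f64:

```rust
fn linear_regression(x: &[f64], y: &[f64]) -> Option<(f64, f64)> {
    if x.len() != y.len() || x.is_empty() {
        return None;
    }
    let n = x.len() as f64;
    let mean_x = x.iter().sum::<f64>() / n;
    let mean_y = y.iter().sum::<f64>() / n;

    let mut num = 0.0; // sum of (x - mean_x) * (y - mean_y)
    let mut den = 0.0; // sum of (x - mean_x)^2
    for (&xi, &yi) in x.iter().zip(y) {
        num += (xi - mean_x) * (yi - mean_y);
        den += (xi - mean_x) * (xi - mean_x);
    }
    if den == 0.0 {
        return None; // all x equal: slope undefined
    }
    let b = num / den;
    let a = mean_y - b * mean_x;
    Some((a, b))
}
```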
this is an interface to simply get the Vec<Option<f64>> out of rrd
without going through serde values
we return a list of timestamps and a HashMap with the lists we could find
(otherwise it is not in the map)
if no lists could be extracted, the time list is also empty
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
disk_usage returned the same values as defined in StorageStatus,
so simply use that
with that we can replace the logic of the datastore status with that
function and also use it for root disk usage of the nodes
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the last chunk does not have to be as big as the chunk_size,
just use the already available 'chunk_end' function which does the
correct thing
this fixes restoration of images whose sizes are not a multiple of
'chunk_size' as well
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
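A minimal sketch of the clamping idea (illustrative; the real chunk_end lives on the index types):

```rust
// end offset of chunk `idx`: the last chunk may be shorter than chunk_size
fn chunk_end(idx: u64, chunk_size: u64, image_size: u64) -> u64 {
    ((idx + 1) * chunk_size).min(image_size)
}
```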
if the last sync job is too far in the past (or there was none at all
so far), we run it at the next iteration, so we want to show that.
we now calculate the next_run by using either the real last endtime
as time or 0.
then in the frontend, we check if the next_run is < now and show 'pending'
(we do it this way also for replication on pve)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
two things were wrong here:
* the range (x..y) does not include y, so the range
(day_num+1..6) goes from (day_num+1) to 5 (but sunday is 6)
* WeekDays.bits() does not return the 'day_num' of that day, but
the bit value (e.g. 64 for SUNDAY), which was treated as the index
of the day of the week
to fix this, we drop the map to WeekDays and use the 'indices'
directly
this patch makes the test work again
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
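A minimal sketch of iterating day indices directly (illustrative bitmask layout: bit 0 = Monday ... bit 6 = Sunday):

```rust
// find the next day (starting today) whose bit is set in the mask,
// wrapping over the week; returns a day index 0..=6
fn next_matching_day(today: u32, days_mask: u8) -> Option<u32> {
    (0..7)
        .map(|offset| (today + offset) % 7)
        .find(|&day| days_mask & (1u8 << day) != 0)
}
```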
If one executes a client command like
# proxmox-backup-client files <snapshot> --repository ...
the files shown already have the '.fidx' or '.blob' file ending, so
if a user just copy-pastes one of those, the client would always add
.blob, and the server would not find that file.
So avoid adding file endings if it is already a known OK one.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
will be extended in a later patch.
Also drop a dead else branch; it can never get hit as we always add
.blob as fallback.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
while touching it, make columns and tbar in DataStoreContent.js
declarative members and remove the (now) unnecessary initComponent
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this returns the list of syncjobs with status, as opposed to
config/sync (which is just the config)
also adds an api call where users can run the job manually under
/admin/sync/$ID/run
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
'sync' is used for manually pulling a remote datastore
changing it for a scheduled sync to 'syncjob' so that we can
differentiate between both types of syncs
this also adds a separate task description for it
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The repo URL consists of
* optional userid
* optional host
* datastore name
All three have defined regex or format, but none of that is used, so
for example not all valid datastore names are accepted.
Move definition of the regex over to api2::types where we can access
all required regexes easily.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
since the target side wants this to be a boolean and
serde interprets a None value as 'null', we have to only
add this when it is really set via the cli
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
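One way to express the same idea in serde (a sketch; the field name is illustrative and the actual client builds the parameter object manually):

```rust
use serde::Serialize;

#[derive(Serialize)]
struct PullParams {
    // omit the key entirely when unset instead of sending `null`
    #[serde(skip_serializing_if = "Option::is_none")]
    remove_vanished: Option<bool>,
}
```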
when starting a new task, we do two things to keep track of tasks
(in that order):
* updating the 'active' file with a list of tasks with
'update_active_workers'
* updating the WORKER_TASK_LIST
the second also updates the status of running tasks in the file by
checking if it is still running by checking the WORKER_TASK_LIST
since those two things are not locked, it can happen that
we update the file, and before updating the WORKER_TASK_LIST,
another thread calls update_active_workers and tries to
get the status from the task log, which won't have any data yet,
so the status is 'unknown'
(we do not ever update that status, likely for performance reasons,
so we have to fix this here)
by switching the order of the two operations, we make sure that only
tasks reach the 'active' file which are inserted in the WORKER_TASK_LIST
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
to avoid having arbitrary characters in the config (e.g. newlines)
note that this breaks existing configs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
with a catch: password is in the struct, but we do not want it to be
returned via the api, so we only 'serialize' it when the string is not
empty (this can only happen when the format is not checked by us, in
other words when it's returned from the api) and set it manually to ""
when we return remotes from the api
this way we can still use the type but do not return the password
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
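A minimal sketch of the serialization trick (the field set is illustrative):

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct Remote {
    host: String,
    // only serialized when non-empty; API handlers blank it out before
    // returning remotes, so the password never leaves the server
    #[serde(skip_serializing_if = "String::is_empty", default)]
    password: String,
}
```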
since we want to set the owner of the acl config to 'root'
which is only possible when using a protected api call
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we added a userid attribute to the User struct, but missed that we
created the default user without that attribute via the json! macro,
which led to a runtime panic on the deserialization
in the future
with this change, we can remove the serde_json import here
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
using a handlebars instance in ApiConfig to cache the templates
as long as possible. this is currently ok, as the index template
can only change when the whole package changes
if we split this in the future, we have to trigger a reload of
the daemon on gui package upgrade (so that the template gets reloaded)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We don't want to leak the full configuration to users with limited access permissions.
Please use the api2::config::datastore api to get the full configuration.
Make `flistxattr()` return a `ListXAttr` helper which
provides an iterator over `&CStr`.
This exposes the property that xattr names are a
zero-terminated string without simply being an opaque
"byte vector". Using &[u8] as a type here is too lax.
Also let `fgetxattr` take a `CStr`. While this may be a
burden on the caller, we usually already have
zero-terminated strings on the call site. Currently we only
use this method coming from `flistxattr` after all.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
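A minimal sketch of the helper's shape (illustrative, not the real ListXAttr):

```rust
use std::ffi::CStr;

// buffer of concatenated zero-terminated xattr names, as returned by
// flistxattr(2)
struct ListXAttr {
    data: Vec<u8>,
}

impl ListXAttr {
    fn iter(&self) -> impl Iterator<Item = &CStr> {
        self.data
            .split_inclusive(|&b| b == 0)
            .map(|name| CStr::from_bytes_with_nul(name).expect("names are NUL-terminated"))
    }
}
```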