When creating a new datastore, the basedir only ends up owned by the
backup user if it did not exist beforehand (create_path chowns only if
it creates the directory, and returns false if it did not create it).
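
A rough sketch of the implied fix (pseudo-code; create_path() is the
proxmox helper, the remaining names are placeholders):

    let created = create_path(&base_dir, None, Some(dir_opts))?;
    if !created {
        // the directory existed already, so create_path() did not
        // chown it; hand it to the backup user explicitly
        nix::unistd::chown(&base_dir, Some(backup_uid), Some(backup_gid))?;
    }
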
This improves the experience when adding a new datastore on a fresh
disk or an existing directory (not owned by backup) - backups/pulls can
be run instead of terminating with EPERM.
Tested on my local test install with a new disk and an existing directory.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
It does not make sense to check whether the worker is running if we
already have an endtime and state.

Our 'worker_is_active_local' heuristic returns true for
non-process-local tasks, so we got 'running' for all tasks that were
not started by 'our' pid and were still running.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The '**/' is not required, and it currently also mistakenly doesn't
match /lost+found, which is probably a bug on the pathpatterns crate
side and needs fixing there.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
ZFS does not have to be installed, so simply log an error and
continue; users still get an error when clicking directly on
ZFS.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Not exhaustive of what ZFS allows (space is missing, for example), but
that can be added easily later without problems.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Those names are allowed for zpools.
These will fail for now, but that will be fixed in the next commit.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
* don't clone hash keys, just use references
* we don't need a String; stick to Vec<u8> and use
  serde_json::to_writer to avoid temporary strings
  altogether (see the sketch below)
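
For illustration, the pattern looks roughly like this (variable names
are made up):

    // serialize straight into the byte buffer instead of
    // building an intermediate String first
    let mut buf: Vec<u8> = Vec::new();
    serde_json::to_writer(&mut buf, &entry)?;
    buf.push(b'\n');
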
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This is a more convenient way to pass along the key when
creating encrypted backups of unprivileged containers in PVE
where the unprivileged user namespace cannot access
`/etc/pve/priv`.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Have a single common function to get the BaseDirectories
instance, and a wrapper for `find()` and `place()` which
wraps the error with some context.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
place() is used when creating a file, as it will create
intermediate directories; only use it when actually placing
a new file.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This also replaces the recently introduced --encryption
parameter on the client with a --crypt-mode parameter.
This can be "none", "encrypt" or "sign-only".
Note that this introduces various changes in the API types
which previously did not take the above distinction into
account properly:
Both `BackupContent` and the manifest's `FileInfo`:
* lose `encryption: Option<bool>`
* gain `crypt_mode: Option<CryptMode>`
Within the backup manifest itself, the "crypt-mode" property
will always be set.
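
As a sketch, the new mode roughly corresponds to an enum like this
(the derive attributes here are assumptions):

    #[derive(Clone, Copy, Debug, Serialize, Deserialize)]
    #[serde(rename_all = "kebab-case")]
    pub enum CryptMode {
        None,      // "none": neither encrypted nor signed
        Encrypt,   // "encrypt": data is encrypted
        SignOnly,  // "sign-only": data is only signed, not encrypted
    }
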
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This can be used to explicitly disable encryption even if a
default key file exists in ~/.config.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Instead of checking for '1' or 'true', check that the parameter is
present and not '0' or 'false'. This allows simply using
https://foo:8007/?debug
instead of
https://foo:8007/?debug=1
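
Roughly (illustrative only, not the actual handler code):

    // the parameter counts as enabled when present,
    // unless it is explicitly set to "0" or "false"
    let debug = match query_params.get("debug") {
        Some(value) => !matches!(value.as_str(), "0" | "false"),
        None => false,
    };
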
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
And use it for the fixed and dynamic index. Please note that this
changes checksums for fixed indexes, so restoring older backups
will fail now (not backward compatible).
To support incremental backups (where not all chunks are sent to the
server), a new parameter "reuse-csum" is introduced on the
"create_fixed_index" API call. When set and equal to the last backup's
checksum, the backup writer clones the data from the last index of this
archive file and only updates chunks it actually receives.
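
Roughly, the server-side idea looks like this (the helper names here
are invented for illustration, not the real API):

    if let Some(reuse_csum) = reuse_csum {
        // only reuse when the client refers to the previous
        // backup's exact index
        let last_index = load_last_fixed_index(&archive_name)?;  // hypothetical
        if reuse_csum != hex::encode(last_index.csum()) {
            bail!("reuse-csum does not match last backup's checksum");
        }
        // start from a copy of the old index; chunks uploaded
        // afterwards overwrite their entries
        writer.clone_entries_from(&last_index)?;                 // hypothetical
    }
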
In incremental mode some checks usually done on closing an index cannot
be made, since they would be inaccurate.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
If *only* data chunks are registered (high chance during incremental
backup), then chunk_count might be one lower than upload_stat.count
because of the zero chunk being unconditionally uploaded but not used.
Thus when subtracting the two, an overflow would occur.

In general, don't let the client make the server panic; instead just
set duplicates to 0.
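
A sketch of the defensive calculation (field names follow the message,
not necessarily the actual code):

    // never panic on underflow; just report zero duplicates
    let duplicates = chunk_count.saturating_sub(upload_stat.count);
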
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
and if it does, bail, because otherwise we would get an
error on mounting and end up with a zpool that is not imported
and disks that are in use
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Returns the directory listing for the given filepath of the backup
snapshot. The filepath has to be base64 encoded, or 'root'.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We want to get a string representation of the DirEntryAttribute,
like 'f' for a file, etc., and since we already have such a mapping
in CatalogEntryType, use that.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Mostly copied from BufferedDynamicReadAt from proxmox-backup-client,
but the reader is wrapped in an Arc in addition to the Mutex.
We will use this for local access to a pxar archive behind a didx file.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We get the path to our executable via a readlink() on
"/proc/self/exe", which appends a " (deleted)" during
package reloads.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Implements AsyncRead as well as Stream for an IndexFile and a store
that implements AsyncReadChunk.

We can use this to asynchronously read or stream the content of a
FixedIndex or DynamicIndex.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We want to record whether a file of a backup is encrypted, so that we can
* show that info on the GUI
* later decide if we need to decrypt the backup
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Does the same, except for the manual drop, but that's handled there by
letting the value go out of scope.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
copy_nonoverlapping is basically a memcpy, which can also be done
via copy_from_slice, which is not unsafe
(copy_from_slice uses copy_nonoverlapping internally).
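
For illustration:

    // before: unsafe memcpy
    // unsafe { std::ptr::copy_nonoverlapping(src.as_ptr(), dst.as_mut_ptr(), len) };

    // after: the same copy, bounds-checked and safe
    dst[..len].copy_from_slice(&src[..len]);
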
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
These aren't installed and are only used for manual testing,
so there's no reason to force them to be built all the time.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
For now this is mostly copy/paste from nodes/nodename/tasks
(without the parameters), but we should replace 'read_task_list'
with a method that gives us the tasks since some timestamp, so
that we can get a longer list of tasks than for the node.
(We could of course then embed this in the nodes/node/task api call
and remove this again, as long as the api is not stable.)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
The download methods used to take the destination by value
and return it again, since this was required when using
combinators before we had `async fn`.
But this is just an ugly left-over now.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Otherwise we start a dynamic writer and never close it, leading to a
backup error.

This fixes an issue with backing up VM templates
(and possibly VMs without disks).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
We check whether all dynamic_writers are closed and whether the backup
contains any valid files. We can only mark the backup finished after
those checks; otherwise the backup task gets marked as OK even though
it is not finished and no cleanups run.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
and drop the now unused extract_lists function.

This also fixes a bug where we did not add the datastore to the list at
all when there was no rrd data.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
There is now an 'extract_cached_data' which just returns
the data of the specified field, and an api function that converts
a list of fields to the correct serde value.

This way we do not have to create a serde value in rrd/cache.rs
(makes for a better interface).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Report vanished files (instead of erroring out on them);
also, only warn about files inaccessible due to permissions
instead of bailing out.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This used to be default-off and was accidentally set to
on-by-default with the pxar crate update.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Returns a list of the datastores and their usage, a list of usage
values for the past month (for the GUI), and an estimate of when each
will be full (using linear regression).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Provides some basic statistics functions (sum, mean, etc.)
and a function to return the parameters of the linear regression of
two variables.

Implemented using num_traits to be more flexible about the types.
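
A minimal sketch of such a least-squares fit, generic over
num_traits::Float (illustrative, not the exact function added here):

    use num_traits::Float;

    /// Returns (intercept, slope) of the best-fit line
    /// y = intercept + slope * x, or None if the input is unusable.
    fn linear_regression<T: Float>(x: &[T], y: &[T]) -> Option<(T, T)> {
        if x.is_empty() || x.len() != y.len() {
            return None;
        }
        let n = T::from(x.len())?;
        let mean_x = x.iter().fold(T::zero(), |a, &b| a + b) / n;
        let mean_y = y.iter().fold(T::zero(), |a, &b| a + b) / n;

        let mut covariance = T::zero();
        let mut variance = T::zero();
        for (&xi, &yi) in x.iter().zip(y.iter()) {
            covariance = covariance + (xi - mean_x) * (yi - mean_y);
            variance = variance + (xi - mean_x) * (xi - mean_x);
        }
        if variance == T::zero() {
            return None;
        }
        let slope = covariance / variance;
        Some((mean_y - slope * mean_x, slope))
    }
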
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This is an interface to simply get the Vec<Option<f64>> out of rrd
without going through serde values.

We return a list of timestamps and a HashMap with the lists we could
find (lists we could not find are not in the map).
If no lists could be extracted, the timestamp list is also empty.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
disk_usage returned the same values as defined in StorageStatus,
so simply use that.

With that we can replace the logic of the datastore status with that
function, and also use it for the root disk usage of the nodes.
The last chunk does not have to be as big as chunk_size;
just use the already available 'chunk_end' function, which does the
correct thing.

This also fixes restoration of images whose sizes are not a multiple of
'chunk_size'.
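
For illustration, the logic amounts to (not the literal implementation):

    // every chunk ends at (pos + 1) * chunk_size, except the last
    // one, which is clamped to the total image size
    fn chunk_end(pos: u64, chunk_size: u64, image_size: u64) -> u64 {
        ((pos + 1) * chunk_size).min(image_size)
    }
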
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
If the last sync job is too far in the past (or there was none at all
so far), we run it at the next iteration, so we want to show that.

We now calculate next_run by using either the real last endtime or 0
as the base time.

In the frontend we then check whether next_run is < now and show
'pending' (we do it this way for replication on PVE as well).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Two things were wrong here:
* the range (x..y) does not include y, so the range
  (day_num+1..6) goes from (day_num+1) to 5 (but Sunday is 6)
* WeekDays.bits() does not return the 'day_num' of that day, but
  the bit value (e.g. 64 for SUNDAY), which was treated as the index
  of the day of the week

To fix this, we drop the map to WeekDays and use the 'indices'
directly (see the example below).
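
Illustrated with made-up values (assuming Monday == 0, Sunday == 6):

    let day_num: u32 = 2; // e.g. Wednesday

    // exclusive range: stops at 5, so Sunday (6) is never visited
    for _day in (day_num + 1)..6 {}

    // inclusive range over the day indices covers Sunday as well
    for _day in (day_num + 1)..=6 {}

    // also, WeekDays::SUNDAY.bits() would yield the flag value
    // (e.g. 64), not the weekday index 6, so bits() must not be
    // used as an index
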
this patch makes the test work again
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
If one executes a client command like
# proxmox-backup-client files <snapshot> --repository ...
the files shown already have the '.fidx' or '.blob' file ending, so
if a user just copy-pastes one of them, the client would always add
.blob and the server would not find that file.
So avoid adding a file ending if the name already has a known one.
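
Sketched (the set of accepted endings here is an assumption):

    let archive_name = if file_name.ends_with(".blob")
        || file_name.ends_with(".fidx")
        || file_name.ends_with(".didx")
    {
        file_name.to_owned()
    } else {
        format!("{}.blob", file_name)
    };
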
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Will be extended in a following patch.

Also drop a dead else branch; it can never get hit, as we always add
.blob as a fallback.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
while touching it, make columns and tbar in DataStoreContent.js
declarative members and remove the (now) unnecessary initComponent
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This returns the list of sync jobs with status, as opposed to
config/sync (which is just the config).

It also adds an api call where users can run the job manually under
/admin/sync/$ID/run
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
'sync' is used for manually pulling a remote datastore.
Change it to 'syncjob' for a scheduled sync, so that we can
differentiate between both types of syncs.

This also adds a separate task description for it.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The repo URL consists of
* optional userid
* optional host
* datastore name
All three have a defined regex or format, but none of that is used
here, so for example not all valid datastore names are accepted.

Move the definition of the regex over to api2::types, where we can
access all required regexes easily.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since the target side wants this to be a boolean, and
serde serializes a None value as 'null', we have to only
add this when it is really set via the CLI.
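
The pattern, roughly (the parameter name is a placeholder):

    let mut args = serde_json::json!({});
    if let Some(value) = cli_flag {
        // only present in the request when the user actually passed
        // it, so it never ends up as JSON null on the target side
        args["some-flag"] = serde_json::Value::from(value);
    }
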
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
When starting a new task, we do two things to keep track of tasks
(in that order):
* update the 'active' file with a list of tasks via
  'update_active_workers'
* update the WORKER_TASK_LIST

The second also updates the status of running tasks in the file, by
checking against the WORKER_TASK_LIST whether they are still running.

Since those two things are not locked, it can happen that we update
the file, and before updating the WORKER_TASK_LIST, another thread
calls update_active_workers and tries to get the status from the task
log, which won't have any data yet, so the status is 'unknown'.
(We never update that status afterwards, likely for performance
reasons, so we have to fix this here.)

By switching the order of the two operations, we make sure that only
tasks which are already inserted in the WORKER_TASK_LIST reach the
'active' file.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
To avoid having arbitrary characters in the config (e.g. newlines).

Note that this breaks existing configs.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
With a catch: the password is in the struct, but we do not want to
return it via the api, so we only 'serialize' it when the string is
not empty (this can only happen when the format is not checked by us,
i.e. when it is returned from the api), and we set it manually to ""
when we return remotes from the api.

This way we can still use the type but do not return the password.
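
The serde side of this looks roughly like the following (the field
selection is illustrative):

    #[derive(Serialize, Deserialize)]
    pub struct Remote {
        pub name: String,
        pub host: String,
        pub userid: String,
        // only written out when non-empty; the api sets it to ""
        // before returning remotes
        #[serde(skip_serializing_if = "String::is_empty", default)]
        pub password: String,
    }
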
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>