a reader connection should not be allowed to read arbitrary chunks in
the datastore, but only those that were previously registered by opening
the corresponding index files.
this mechanism is needed to allow unprivileged users (that don't have
full READ permissions on the whole datastore) access to their own
backups via a reader environment.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
not triggered by any current code, but this would lead to stack
exhaustion, since borrow would call deref, which would call borrow again.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Previously only Datastore.Modify was required for creating a new
datastore.
But that endpoint allows one to pass an arbitrary path, of which all
parent directories will be created; this can allow any user with the
"Datastore Admin" role on "/datastores" to do some damage to the
system. Further, it is effectively a side channel for revealing the
system's directory structure through educated guessing and error
handling.
Add a new privilege "Datastore.Allocate" which, for now, is used
specifically for the create datastore API endpoint.
Add it only to the "Admin" role.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
avoiding the need for reshuffling all bits when a new privilege is
added at the start or in the middle of this definition.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If a fuse_loop instance dies suddenly (e.g. SIGKILL), the FUSE mount and
loop device assignment are left behind. We can determine this scenario
on specific unmap, when the PID file is either missing or contains a PID
of a non-running process, but the backing file and potentially loop
device are still there.
If that's the case, do an "emergency cleanup", by unassigning the
loopdev, calling 'fusermount -u' and then cleaning any leftover files
manually.
With this in place, pretty much any situation is now recoverable via
the 'proxmox-backup-client' binary alone, by calling 'unmap' with or
without parameters.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
On unmap, only report success if the instance we are killing actually
terminates. This is especially important so that cleanup routines can be
assured that /run files are actually cleaned up after calling
cleanup_unused_run_files.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
A 'map' call will only clean up what it needs, that is, only leftover
files or dangling instances of its own name.
For a full cleanup the user can call 'unmap' without any arguments.
The 'cleanup on error' behaviour of map_loop is removed. It is no longer
needed (since the next call will clean up anyway), and removing it in
fact fixes a bug where trying to map an image twice would result in an
error, but also clean up the .pid file of the running instance, causing
'unmap' to fail afterwards.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
So the user doesn't need to remember which loop devices they have mapped
to what.
systemd unit encoding is used to transform a unique identifier for the
mapped image into a suitable name. The files created in /run/pbs-loopdev
will be named accordingly.
The encoding all happens outside fuse_loop.rs, so the fuse_loop module
does not need to care about encodings - it can always assume a name is a
valid filename.
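A rough sketch of the kind of escaping this relies on (the exact rules
and helper used in the tree may differ; this is just an approximation of
systemd's unit-name escaping):

    // '/' maps to '-', bytes outside [A-Za-z0-9:_.] become "\xNN",
    // and a leading '.' is escaped as well
    fn unit_escape(name: &str) -> String {
        let mut out = String::new();
        for (i, b) in name.bytes().enumerate() {
            match b {
                b'/' => out.push('-'),
                b'.' if i > 0 => out.push('.'),
                b if b.is_ascii_alphanumeric() || b == b':' || b == b'_' => out.push(b as char),
                b => out.push_str(&format!("\\x{:02x}", b)),
            }
        }
        out
    }

    // unit_escape("store/vm/100") == "store-vm-100"
    // unit_escape("drive-scsi0.img") == "drive\x2dscsi0.img"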
'unmap' without a parameter displays all current mappings. Its
autocompletion handler will list the names of all currently mapped
images for easy selection. Unmapping by /dev/loopX or loopdev number is
still supported, as those can be distinguished from mapping names.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
the same as the regular TaskState, but without its fields, so that
we can use the api macro and use it as an API call parameter
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Allows fixed-index .img files (usually from VM backups) to be mapped to
a local loopback device.
The architecture uses a FUSE-backed temp file mapped to a loopdev:
/dev/loopX -> FUSE /run/pbs-loopdev/xxx -> backup client -> PBS
Since unmapping requires some cleanup (unmap the loopdev, stop FUSE,
remove the temp files) a special 'unmap' command is added, which uses a
PID file to send SIGINT to the backup-client instance started with
'map', which will handle the cleanup itself.
The polling with select! in mount.rs needs to be split in two, since we
have a chicken and egg problem between running FUSE and setting up the
loop device - so we need to do them concurrently, until the loopdev is
assigned, at which point we can report success and daemonize, and then
continue polling the FUSE loop future.
A loopdev module is added to tools containing all required functions for
mapping a loop device to the FUSE file, with the ioctls moved into an
inline module to avoid exposing them directly.
The client code is placed in the 'mount' module, which, while
admittedly a loose fit, allows reuse of the daemonizing code.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
if the archive file does not exist yet, we cannot rotate it, but it's not
actually an error, so just return Ok(false) to indicate no rotation took
place
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
while we probably will not add much more to them, it still looks ugly.
If the intent was to make adding a world-readable API call "hard" and
not something done by accident, that should rather be done as a test at
build time. But, IMO, the API permission schema definitions are easy to
review, and not changed/added often - so any wrong world-readable API
call will normally still be caught.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In theory, one can call std::mem::forget and skip the drop handler. With
the lifetime hack, this could result in a crash.
So we simply require a 'static lifetime now (the futures also need that).
Causes a panic if last_update is smaller than RRD_DATA_ENTRIES*reso,
which (I believe) can happen when inserting the first value for a DB.
Clamp the value to 0 in that case.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Fix a potential bug where errors that happen after the SendHandle has
been dropped while doing the thread join might have been ignored.
Requires internal check_abort to be moved out of 'impl SendHandle' since
we only have the Mutex left, not the SendHandle.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
This can slow things down by a lot on setups with (relatively) high
seek times, on the order of doubling the backup time if the cache isn't
populated with the last backup's chunk inode info.
Effectively, there is nothing known in the codebase that this protects
us from. The only thing that was theorized about was the case where a
really long-running backup job (over 24 hours) is still running and
writing new chunks, not yet indexed anywhere, when an update (or manual
action) triggers a reload of the proxy. There was some theory that a GC
in the new daemon would then not know about the oldest writer in the old
one, and thus use a less strict atime limit for chunk sweeping - opening
up a window for deleting chunks from the long-running backup.
But this simply cannot happen, as we have a per-datastore, process-wide
flock, which is acquired shared by backup jobs and exclusively by GC.
Within the same process, GC and backup can both get it, as it has
process-level locking granularity. If there is an old daemon with a
writer, that one also holds the lock shared, and so no GC in the new
process can get exclusive access to it.
So, with that confirmed we have no need for a "half-assed"
verification in the backup finish step. Rather, we plan to add an
opt-in "full verify each backup on finish" option (see #2988)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We forgot to put parentheses around the DNS_NAME regex, and in
DNS_NAME_OR_IP_REGEX.
This is wrong because the regex
    ^foo|bar$
matches 'foo' at the beginning and 'bar' at the end, so both
    foobaz
    bazbar
would match. Only
    ^(foo|bar)$
matches nothing but 'foo' and 'bar'.
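For illustration, with the regex crate:

    use regex::Regex;

    fn main() {
        // unanchored alternation: '^' binds only to "foo", '$' only to "bar"
        let broken = Regex::new(r"^foo|bar$").unwrap();
        assert!(broken.is_match("foobaz"));
        assert!(broken.is_match("bazbar"));

        // grouped alternation: matches exactly "foo" or "bar"
        let fixed = Regex::new(r"^(foo|bar)$").unwrap();
        assert!(!fixed.is_match("foobaz"));
        assert!(!fixed.is_match("bazbar"));
        assert!(fixed.is_match("foo"));
    }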
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
* add square brackets to IPv6 addresses in BackupRepository if they do
  not already have them (we save them without brackets in the remote
  config)
* in get_pull_parameters, we now create a BackupRepository first and use
  its values (which does the [] mapping); this also has the advantage
  that there is one less place where we hardcode 8007 as the port
* in the UI, add square brackets to IPv6 addresses of remotes
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when upgrading from a version where we stored all tasks in the 'active'
file, we did not completely account for finished tasks still in there.
we should update the file when encountering any finished task in
'active', as well as filter them out on the API call (if they get through)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this adds the ability to specify port numbers in the backup repo spec
as well as for remotes, so that users who are behind a
NAT/firewall/reverse proxy can still use it.
also adds some explanation and examples to the docs to make it clearer.
for the h2 client I left the localhost:8007 part, since where we bind to
is not configurable
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
When creating a new zpool for a datastore, also instantiate an
import-unit for it. This helps in cases where '/etc/zfs/zpool.cache'
gets corrupted and thus the pool is not imported upon boot.
This patch needs the corresponding addition of 'zfs-import@.service' in
the zfsonlinux repository.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
we need this because we append the port to this address to get a target
URL, e.g. we print
    format!("https://{}:8007/", address)
if address is an IPv6 address (e.g. fe80::1), it would become
    https://fe80::1:8007/
which is a valid IPv6 address on its own.
by using square brackets we get
    https://[fe80::1]:8007/
which now connects to the correct IP and port
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since len() and MAX_INDEX_TASKS are both usize, the subtraction
underflows instead of producing a negative value.
instead, check the sizes first and set them accordingly
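A minimal illustration of the problem and the guard (names made up):

    fn main() {
        let list_len: usize = 100;
        const MAX_INDEX_TASKS: usize = 1000;

        // `list_len - MAX_INDEX_TASKS` cannot go negative: it panics in
        // debug builds and wraps to a huge number in release builds,
        // so check/saturate instead of subtracting blindly
        let surplus = list_len.saturating_sub(MAX_INDEX_TASKS);
        assert_eq!(surplus, 0);
    }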
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this starts a task once a day at "00:00" that rotates the task log
archive if it is bigger than 500k
if we want, we can make the schedule/size limit/etc. configurable,
but for now it's ok to set fixed values for that
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since there are no users of this anymore and we now have a nicer
TaskListInfoIterator to use, we can drop this function
this also means that 'update_active_workers' does not need to return
a list anymore since we never used that result besides in
read_task_list
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this means that limiting with an epoch now works correctly.
also change the API type to i64, since that is what the start time is
saved as
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this makes the filtering/limiting much nicer and more readable.
since we now have a potentially 'infinite' amount of tasks to iterate
over, and cannot know beforehand how many there are, we return the total
count as always one higher than requested iff we are not at the end (we
are at the end when the number of returned entries is smaller than the
requested limit)
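A sketch of that 'one more than requested' idea (hypothetical helper,
ignoring the start offset):

    /// Return up to `limit` entries; report `limit + 1` as the total iff
    /// the iterator still has more, i.e. we are not at the end.
    fn page<I: Iterator<Item = String>>(mut iter: I, limit: usize) -> (Vec<String>, usize) {
        let entries: Vec<String> = iter.by_ref().take(limit).collect();
        let total = if iter.next().is_some() {
            limit + 1
        } else {
            entries.len()
        };
        (entries, total)
    }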
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this is an iterator that reads/parses/updates the task list as
necessary and returns the tasks in descending order (newest first).
it does this by using our logrotate iterator and a VecDeque.
we can use this to iterate over all tasks, even if they are in the
archive, and even if the archive has been logrotated, while only reading
as much as we need
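A very rough sketch of the shape of such an iterator (parsing, locking
and the .zst decompression of rotated archives omitted; names are made
up):

    use std::collections::VecDeque;
    use std::fs::File;
    use std::io::{BufRead, BufReader};
    use std::path::PathBuf;

    struct TaskLines {
        files: Vec<PathBuf>,   // newest file first: the current file, then rotated archives
        buf: VecDeque<String>, // lines of the currently open file, newest first
    }

    impl Iterator for TaskLines {
        type Item = String;

        fn next(&mut self) -> Option<String> {
            loop {
                if let Some(line) = self.buf.pop_front() {
                    return Some(line); // newest remaining entry
                }
                if self.files.is_empty() {
                    return None; // all files exhausted
                }
                // lazily open the next (older) file and reverse its lines,
                // since entries are appended oldest-to-newest within a file
                let file = File::open(self.files.remove(0)).ok()?;
                for line in BufReader::new(file).lines().flatten() {
                    self.buf.push_front(line);
                }
            }
        }
    }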
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of removing tasks beyond the 1000 that are in the index
write them into an archive file by appending them at the end
this way we can later still read them
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
one for only the active tasks and one for up to 1000 finished tasks.
factor out the parsing of a task file (we will need this again later)
and use iterator combinators for easier code.
we now sort the tasks in ascending order (this will become important in
a later patch) but reverse it (for now) to keep compatibility.
this code also omits converting into an intermediate hash, since we
cannot really get duplicate tasks in this list (the call is protected by
an flock, and it is the only place where we write into the lists)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this is a helper to rotate and iterate over log files.
there is an iterator for open file handles as well as one for
only the file names.
it also has the possibility to rotate them;
zstd is used for compression
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
also changes:
* correct the comment about reset (replace 'sync' with 'action')
* check for a schedule change correctly (only when it actually changed)
with these changes, we can drop the 'lookup_last_worker' method
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we rely on the jobstate handling to write the error of the worker
into its state file, but we used '?' here in a block, which does not
return the error to the block but to the function/closure instead.
so if a prune job failed because of such a '?', we did not write
into the state file and got a wrong state there.
instead, use our try_block! macro, which wraps the code in a closure
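The difference, sketched with a plain immediately-invoked closure
(roughly what such a macro expands to; names are made up):

    use anyhow::{bail, Error};

    fn do_prune() -> Result<(), Error> {
        bail!("prune failed") // stand-in for the real job body
    }

    fn run_prune_job() -> Result<(), Error> {
        // a bare `do_prune()?` here would return from run_prune_job()
        // immediately and skip the state update below; the closure keeps
        // the Result local so it can still be recorded
        let result: Result<(), Error> = (|| {
            do_prune()?;
            Ok(())
        })();

        println!("recording job state: {:?}", result); // stand-in for the jobstate update
        result
    }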
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
like the sync jobs, so that if an admin configures a schedule it
really starts the next time that time is reached not immediately
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
listing, updating or deleting a user is now possible for the user
itself, in addition to higher-privileged users that have appropriate
privileges on '/access/users'.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
filtered by those they are privileged enough to read individually. this
allows such users to configure prune/GC schedules via the GUI (the API
already allowed it previously).
permission-wise, a user with this privilege can already:
- list all stores they have access to (returns just name/comment)
- read the config of each store they have access to individually
  (returns the full config of that datastore + digest of the whole config)
this just combines them into:
- read the configs of all datastores they have access to (returns the
  full config of those datastores + digest of the whole config)
users that have AUDIT on just /datastore without propagate can now no
longer read all configurations (but this could be added back; it just
seems to make little sense to me).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
by packing the auth into a RwLock and starting a background
future that renews the ticket every 15 minutes.
we still use the BroadcastFuture for the first ticket, and only once
that is finished do we start the scheduled future.
we have to store an abort handle for the renewal future and abort it when
the http client is dropped, so we do not request new tickets forever
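Roughly the shape of that renewal task (simplified sketch; the helper
names are placeholders):

    use std::time::Duration;
    use futures::future::{abortable, AbortHandle};

    async fn renew_ticket() { /* placeholder for the actual re-authentication */ }

    fn spawn_ticket_renewal() -> AbortHandle {
        let (task, abort_handle) = abortable(async {
            loop {
                tokio::time::sleep(Duration::from_secs(15 * 60)).await;
                renew_ticket().await;
            }
        });
        tokio::spawn(task); // runs until aborted
        abort_handle        // keep this and call .abort() when the client is dropped
    }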
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
like we do for PVE. this is visible on the dashboard, and caused 403 on
each update which bothers me when looking at the dev console.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
A client can omit uploading chunks in the "known_chunks" list; those
then also won't be written on the server side. Check all those chunks
mentioned in the index but not uploaded for existence, and report an
error if they don't exist, instead of marking a potentially broken backup
as "successful".
This is only important if the base snapshot references corrupted chunks,
but has not been negatively verified. Also, it is important to only
verify this at the end, *after* all index writers are closed, since only
then can it be guaranteed that no GC will sweep referenced chunks away.
If a chunk is found missing, also mark the previous backup with a
verification failure, since we know the missing chunk has to be
referenced in it (that is the only way it could have been inserted into
known_chunks with checked=false). This has the benefit of automatically
doing a full-upload backup if the user attempts to retry after seeing the
new error, instead of requiring a manual verify or forget.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Do not allow clients to reuse chunks from the previous backup if it has
a failed validation result. This would result in a new "successful"
backup that potentially references broken chunks.
If the previous backup has not been verified, assume it is fine and
continue on.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
- remove chrono dependency
- depend on proxmox 0.3.8
- remove epoch_now, epoch_now_u64 and epoch_now_f64
- remove tm_editor (moved to proxmox crate)
- use new helpers from proxmox 0.3.8
* epoch_i64 and epoch_f64
* parse_rfc3339
* epoch_to_rfc3339_utc
* strftime_local
- BackupDir changes:
* store epoch and rfc3339 string instead of DateTime
* backup_time_to_string now returns a Result
* remove unnecessary TryFrom<(BackupGroup, i64)> for BackupDir
- DynamicIndexHeader: change ctime to i64
- FixedIndexHeader: change ctime to i64
since converting from i64 epoch timestamp to DateTime is not always
possible. previously, passing invalid backup-time from client to server
(or vice-versa) panicked the corresponding tokio task. now we get proper
error messages including the invalid timestamp.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
otherwise operations like catalog shell panic when viewing pxar archives
containing such entries, e.g. with mtime very far ahead into the future.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
by either printing the original, out-of-range timestamp as-is, or
bailing with a proper error message instead of panicking.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
even if it can't be handled by chrono. silently replacing it with epoch
0 is confusing..
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
else we get the default of 16k, which is quite low for our use case.
this improves the TLS upload benchmark speed by about 30-40% for me.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
fixes the error "manifest does not contain file 'X.pxar'" that occurs
when trying to mount a pxar archive with 'proxmox-backup-client mount'.
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
similar to the other fix: if we do not set the buffer size manually,
we get better performance for high latency connections.

restore benchmark from f.gruenbichler:
    no delay, without patch:   ~50MB/s
    no delay, with patch:      ~50MB/s
    25ms delay, without patch: ~11MB/s
    25ms delay, with patch:    ~50MB/s

my own restore benchmark:
    no delay, without patch:   ~1.5GiB/s
    no delay, with patch:      ~1.5GiB/s
    25ms delay, without patch: 30MiB/s
    25ms delay, with patch:    ~950MiB/s
for some more details about those benchmarks see
https://lists.proxmox.com/pipermail/pbs-devel/2020-September/000600.html
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by leaving the buffer sizes at their default, we get much better TCP
performance for high latency links.
throughput is still impacted by latency, but much less so when
leaving the sizes at the default.
the disadvantage is a slightly higher memory usage of the server
(details below)
my local benchmarks (proxmox-backup-client benchmark):
pbs client:
  PVE host, Epyc 7351P (16 cores/32 threads), 64GB memory
pbs server:
  VM on that host, 1 socket, 4 cores (host CPU type), 4GB memory

average of 3 runs, rounded to MB/s:

                     | no delay | 1ms     | 5ms     | 10ms    | 25ms    |
  without this patch | 230MB/s  | 55MB/s  | 13MB/s  | 7MB/s   | 3MB/s   |
  with this patch    | 293MB/s  | 293MB/s | 249MB/s | 241MB/s | 104MB/s |

memory usage (resident memory) of proxmox-backup-proxy:

                     | peak during benchmarks | after benchmarks |
  without this patch | 144MB                  | 100MB            |
  with this patch    | 145MB                  | 130MB            |
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We need to update the atime of chunk files if they already exist,
otherwise a concurrently running GC could sweep them away.
This is protected with ChunkStore.mutex, so the fstat/unlink does not
race with touching.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
The iterator of get_chunk_iterator is extended with a third parameter
indicating whether the current file is a chunk (false) or a .bad file
(true).
Count their sizes towards the total of removed bytes, since deleting
them also frees disk space.
.bad files are only deleted if the corresponding chunk exists, i.e. has
been rewritten. Otherwise we might delete data that was only marked bad
because of transient errors.
While at it, also clean up and use nix::unistd::unlinkat instead of
unsafe libc calls.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
This ensures that following backups will always upload the chunk,
thereby replacing it with a correct version again.
Format for renaming is <digest>.<counter>.bad where <counter> is used if
a chunk is found to be bad again before a GC cleans it up.
Care has been taken to deliberately only rename a chunk in conditions
where it is guaranteed to be an error in the chunk itself. Otherwise a
broken index file could lead to an unwanted mass-rename of chunks.
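A sketch of how such a target name could be picked (not the actual
code):

    use std::path::{Path, PathBuf};

    /// Pick "<digest>.<counter>.bad", bumping <counter> if an older .bad
    /// file from a previous failed verification is still lying around.
    fn bad_file_name(chunk_path: &Path) -> PathBuf {
        for counter in 0u32.. {
            let candidate = chunk_path.with_extension(format!("{}.bad", counter));
            if !candidate.exists() {
                return candidate;
            }
        }
        unreachable!()
    }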
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
a range from high to low in Rust results in an empty range
(see the std::ops::Range documentation),
so we need to generate the range from 0..data.len() and then reverse it.
also, the task log contains a newline at the end, so we have to remove
that (should it exist)
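For illustration:

    fn main() {
        let data: &[u8] = b"...task log...\nlast line\n";

        // a high..low range is simply empty, it does not iterate backwards
        assert!((data.len()..0).next().is_none());

        // strip the trailing newline (should it exist), then walk 0..len reversed
        let data = data.strip_suffix(b"\n").unwrap_or(data);
        let start = (0..data.len())
            .rev()
            .find(|&i| data[i] == b'\n')
            .map(|i| i + 1)
            .unwrap_or(0);
        assert_eq!(&data[start..], b"last line");
    }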
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this implements parsing and calculating calendar events that have a
basic date component (year-mon-day) with the usual syntax options
(*, ranges, lists)
and some special events:
monthly
yearly/annually (like systemd)
quarterly
semiannually,semi-annually (like systemd)
includes some regression tests
the ~ syntax for days (the last x days of the month) is not yet
implemented
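For reference, the shorthands presumably expand like their systemd
counterparts (see systemd.time(7)):

    monthly          -> *-*-01 00:00:00
    yearly, annually -> *-01-01 00:00:00
    quarterly        -> *-01,04,07,10-01 00:00:00
    semiannually     -> *-01,07-01 00:00:00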
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of using 'as' and silently converting incorrectly,
use the TryInto trait and raise an error if we cannot convert.
this should only happen if we have a negative year,
but this is expected (we do not want schedules from before the year 0)
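A minimal example of the difference:

    use std::convert::TryInto;

    fn main() {
        let year: i64 = -5;

        // `as` silently wraps the negative value into something huge
        assert_eq!(year as u32, 4294967291);

        // TryInto turns it into an error we can report instead
        let converted: Result<u32, _> = year.try_into();
        assert!(converted.is_err());
    }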
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
add_* are modeled after add_days.
subtract one for set_mon to have a consistent interface for all fields
(i.e. getters/setters return/expect the 'real' number, not the one
stored in the tm struct)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the tm struct stores the year minus 1900, but we added that offset on
top of the value kept in the struct.
if we want to use the libc normalization correctly, the tm struct must
contain the year in libc's convention, else the computations for
timezones, etc. fail.
instead, add a getter that adds the offset and a setter that subtracts
it again
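Sketch of the getter/setter pair (using libc::tm directly; the real
tm_editor wraps more than just the year):

    /// keep tm_year in libc's convention (year - 1900) so that
    /// mktime()/timegm() normalization works; convert only at the boundary
    struct TmEditor {
        t: libc::tm,
    }

    impl TmEditor {
        fn year(&self) -> libc::c_int {
            self.t.tm_year + 1900
        }

        fn set_year(&mut self, year: libc::c_int) {
            self.t.tm_year = year - 1900;
        }
    }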
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if we give multiple options/ranges for a value, e.g.
    2,4,8
we always choose the biggest, instead of the smallest one that is next.
this happens because in DateTimeValue::find_next(value),
'next' can be set multiple times, and we set it whenever the new
value was *bigger* than the last found 'next' value, when in reality
we have to choose the *smallest* next we can find.
reverse the comparison operator to fix this
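In essence (simplified, with a made-up signature):

    /// From a list like [2, 4, 8], pick the *smallest* option that is
    /// still >= `value` - not the biggest one.
    fn find_next(options: &[u32], value: u32) -> Option<u32> {
        let mut next = None;
        for &opt in options {
            if opt >= value && next.map_or(true, |n| opt < n) {
                next = Some(opt);
            }
        }
        next
    }

    // find_next(&[2, 4, 8], 3) == Some(4), not Some(8)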
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we never passed 'false' to it anyway, so remove it
(we can add it again if we ever need it).
also remove the adding of wday (it gets normalized anyway)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we want to use dates for the calendarspec, and with that there are some
impossible combinations that cannot be detected during parsing
(e.g. some datetimes do not exist in some timezones, and the timezone
can change after setting the schedule).
so finding no timestamp is not an error anymore but a valid result.
we omit logging in that case (since it is not an error anymore)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
mktime/gmtime can normalize time and can even handle special timezone
cases, like the fact that the time 2:30 does not exist on specific
day/timezone combinations.
we have to change the signature of all functions that use
normalize_time, since mktime/gmtime can return EOVERFLOW,
but if that happens there is no way we can find a good time anyway.
since normalize_time will always set wday according to the rest of the
time, remove set_wday
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
while it was correct, there was no measurable speed gain
(a benchmark yielded 2.8 ms for a spec that did not find a timestamp either way)
so remove it for simpler code
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when trying to parse the task status, we seek 8k from the end,
which may land in the middle of a line, so the datetime parsing
can fail (when the log message contains ': ').
This patch does a fast search for the last line, and avoids the
'lines' iterator.
for datastores where the requesting user has read or write permissions,
since the API method itself filters by that already. this is the same
permission setting and filtering that the datastore list API endpoint
does.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Because if not, the backups it creates have bogus permissions and may
seem like they got broken once the daemon is started again with the
correct user/group.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Save the state ("ok" or "failed") and the UPID of the respective
verify task. With this we can easily allow to open the relevant task
log and show when the last verify happened.
As we already load the manifest when listing the snapshots, just add
it there directly.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It's a string-type.
Implement Serialize via Display, Deserialize via FromStr and
add an API_SCHEMA so that it can be used as a type within
the #[api] macro.
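The general pattern looks roughly like this (with a made-up string-type
standing in for the real one; the API_SCHEMA part is omitted):

    use std::fmt;
    use std::str::FromStr;

    use anyhow::{bail, Error};
    use serde::{de, Deserialize, Deserializer, Serialize, Serializer};

    /// made-up string-type with a restricted text format
    struct NodeName(String);

    impl fmt::Display for NodeName {
        fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
            f.write_str(&self.0)
        }
    }

    impl FromStr for NodeName {
        type Err = Error;
        fn from_str(s: &str) -> Result<Self, Error> {
            if s.is_empty() || !s.chars().all(|c| c.is_ascii_alphanumeric() || c == '-') {
                bail!("invalid node name {:?}", s);
            }
            Ok(NodeName(s.to_string()))
        }
    }

    // Serialize via Display ...
    impl Serialize for NodeName {
        fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
            serializer.collect_str(self)
        }
    }

    // ... and Deserialize via FromStr
    impl<'de> Deserialize<'de> for NodeName {
        fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {
            let text = String::deserialize(deserializer)?;
            text.parse().map_err(de::Error::custom)
        }
    }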
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
to avoid `map_struct` which is actually unsafe because it
does not verify alignment constraints at all
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
There is no requirement to have at least
a blank line, attribute or comment in between two
interface definitions, e.g.
iface lo inet loopback
iface lo inet6 loopback
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
it really is not necessary, since the only time we are interested in
loading the state from the file is when we list it, and there
we use JobState::load directly to avoid the lock
we still need to create the file on syncjob creation though, so
that we have the correct time for the schedule
to do this we add a new create_state_file that overwrites it on creation
of a syncjob
for safety, we subtract 30 seconds from the in-memory state in case
the statefile is missing
since we call create_state_file from proxmox-backup-api,
we have to chown the lock file to the backup user after creating it,
else the sync job scheduling cannot acquire the lock.
also, we remove the lock file on statefile removal
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
so that we can log whether it was triggered by a schedule, and write to
a jobstate file.
it also now correctly polls the abort_future of the worker, so that
users can stop a sync
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
and move the pull parameters into the worker, so that the task log
contains the error if there is one
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this is intended to be a generic helper to (de)serialize job states
(e.g., sync, verify, and so on)
writes a json file into '/var/lib/proxmox-backup/jobstates/TYPE-ID.json'
the api creates the directory with the correct permissions, like
the rrd directory
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the endtime should be the timestamp of the last log line
or if there is no log at all, the starttime
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
representing a state via an enum makes more sense in this case.
we also implement FromStr and Display to make it easy to convert from/to
a string
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
To prevent forgetting the base snapshot of a running backup, and to
catch the case when it still happens (e.g. via a manual rm), so we at
least error out instead of storing a potentially invalid backup.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
This reverts commit d53fbe2474.
The HashSet and "register" function are unnecessary, as we already know
which backup is the one we need to check: the last one, stored as
'last_backup'.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
An flock on the snapshot dir itself is used in addition to the group dir
lock. The lock is used to avoid races with forget and prune, while
having more granularity than the group lock (i.e. the group lock is
necessary to prevent more than one backup per group, but the snapshot
lock still allows backups unrelated to the currently running one to be
forgotten/pruned).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>