Used chunks are marked in phase1 of the garbage collection process by
using the atime property. Each used chunk gets touched so that its atime
gets updated (if older than 24h, see relatime).
Should phase1 of a GC run ever take a very long time to finish, the
grace period calculated in phase2 might not be long enough, and thus
the marking of the chunks (atime) would become invalid. This would
result in the removal of chunks that are still needed.
Even though the likelihood of this happening is very low, calculating
the grace period in phase2 from a timestamp taken right before phase1
starts avoids this situation.
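A minimal sketch of the idea (function names are hypothetical, not the
actual GC code):

    use std::time::{Duration, SystemTime};

    // Hypothetical helpers standing in for the real GC phases.
    fn mark_used_chunks() { /* phase1: touch atime of every referenced chunk */ }
    fn sweep_unused_chunks(_cutoff: SystemTime) { /* phase2: remove chunks with atime < cutoff */ }

    fn garbage_collect() {
        // Capture the timestamp *before* phase1, so a long-running phase1
        // cannot eat into the grace period used by phase2.
        let phase1_start = SystemTime::now();
        mark_used_chunks();

        // 24h (matching the relatime update threshold) plus a safety margin.
        let grace = Duration::from_secs(24 * 3600 + 300);
        sweep_unused_chunks(phase1_start - grace);
    }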
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
just because we can't verify the signature does not mean the contents
are not accessible. it might make sense to make it obvious with a hint
or click-through warning that no signature verification can take place,
both for this and for downloading.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
and not just of previously synced ones.
we can't use BackupManifest::verify_file as the archive is still stored
under the tmp path at this point.
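a rough sketch of the check, assuming the expected digest and size come
from the manifest entry (names are illustrative):

    use std::path::Path;
    use anyhow::{bail, Error};
    use openssl::sha;

    // verify a freshly downloaded archive while it still lives under the
    // tmp path, instead of going through BackupManifest::verify_file.
    fn verify_tmp_archive(
        tmp_path: &Path,
        expected_digest: &[u8; 32],
        expected_size: u64,
    ) -> Result<(), Error> {
        let data = std::fs::read(tmp_path)?;
        if data.len() as u64 != expected_size {
            bail!("wrong size for {:?}", tmp_path);
        }
        if &sha::sha256(&data) != expected_digest {
            bail!("wrong checksum for {:?}", tmp_path);
        }
        Ok(())
    }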
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
for encrypted chunks this is currently not possible, as we need the key
to decode the chunk.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
regular chunks are only decoded when their contents are accessed, in
which case we need to have the key anyway and want to verify the digest.
for blobs we need to verify beforehand, since their checksums are always
calculated based on their raw content, and stored in the manifest.
manifests are also stored as blobs, but don't have a digest in the
traditional sense (they might have a signature covering parts of their
contents, but that is verified already when loading the manifest).
this commit does not cover pull/sync code which copies blobs and chunks
as-is without decoding them.
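to illustrate the distinction, a hedged sketch (helper name
hypothetical): a blob's digest covers its raw on-disk bytes, so it can
be checked before decoding and without any key:

    use anyhow::{bail, Error};
    use openssl::sha;

    // blob checksums in the manifest are calculated over the raw content,
    // so verification needs neither decompression nor a crypto key.
    fn verify_blob_raw(raw: &[u8], manifest_digest: &[u8; 32]) -> Result<(), Error> {
        if &sha::sha256(raw) != manifest_digest {
            bail!("blob checksum mismatch");
        }
        Ok(())
    }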
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
By default, `pxar extract` does not treat errors while applying
metadata as fatal, unless `--strict` is passed, in which case it
bails out immediately.
It still returns an error exit status if something failed
along the way.
Note that most other errors will still cause it to bail out
(eg. errors creating files, or I/O errors while writing
the contents).
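A simplified sketch of the policy (names illustrative, not the actual
pxar internals):

    use anyhow::{bail, Error};

    fn try_apply_metadata() -> Result<(), Error> { Ok(()) }

    // Metadata errors are only fatal in --strict mode; otherwise they are
    // logged, and a flag ensures a non-zero exit status at the end.
    fn apply_metadata(strict: bool, had_errors: &mut bool) -> Result<(), Error> {
        if let Err(err) = try_apply_metadata() {
            if strict {
                bail!("failed to apply metadata: {}", err);
            }
            eprintln!("warning: {}", err);
            *had_errors = true;
        }
        Ok(())
    }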
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Fix typos and grammatical errors.
Reword some sentences for better readability.
Clean up the list found under "Software Stack" so that it maintains a
consistent style throughout.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
'username' here is without realm, but we really want to use user@realm
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
tokio's kill_on_drop sometimes leaves zombies around, especially
when no other tokio::process::Command is spawned afterwards.
so instead of relying on the 'kill_on_drop' feature, we explicitly
kill the child on a worker abort. to be able to do this
we have to use 'tokio::select' instead of 'futures::select', since
the latter requires the future to be fused, which consumes the
child handle, leaving us no possibility to kill it after fusing.
(tokio::select does not need the futures to be fused, so we
can reuse the child future after the select again)
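a sketch of the pattern with current tokio (names simplified):

    use tokio::process::Command;
    use tokio::sync::oneshot;

    // tokio::select! does not require fused futures, so the child handle
    // stays usable in the abort branch and can be killed explicitly.
    async fn run_with_abort(abort: oneshot::Receiver<()>) -> std::io::Result<()> {
        let mut child = Command::new("sleep").arg("60").spawn()?;
        tokio::select! {
            status = child.wait() => {
                println!("child exited: {:?}", status?);
            }
            _ = abort => {
                child.kill().await?; // kill and reap, leaving no zombie
            }
        }
        Ok(())
    }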
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
so that we can print a list at the end of the worker task of which
backups are corrupt.
this is useful if there are many snapshots and some in between had an
error. Before this patch, the task log simply says to 'look in the logs',
but if the log is very long it is hard to see what exactly failed.
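roughly (names illustrative):

    // collect failed snapshots during verification and print a summary at
    // the end of the worker task instead of only logging inline errors.
    fn verify_all(snapshots: &[String]) -> Result<(), String> {
        let mut corrupt = Vec::new();
        for snap in snapshots {
            if let Err(err) = verify_snapshot(snap) {
                eprintln!("verification of {} failed: {}", snap, err);
                corrupt.push(snap.as_str());
            }
        }
        if !corrupt.is_empty() {
            return Err(format!("corrupt snapshots: {}", corrupt.join(", ")));
        }
        Ok(())
    }

    fn verify_snapshot(_snap: &str) -> Result<(), String> { Ok(()) }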
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this makes it easier to see which chunks are corrupt
(and enables us in the future to build a 'complete' list of
corrupt chunks)
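for example (a sketch, not the actual verify code):

    use std::collections::HashSet;

    // remember corrupt chunk digests so each is logged once and a complete
    // list can be assembled later.
    fn record_corrupt_chunk(corrupt_chunks: &mut HashSet<[u8; 32]>, digest: [u8; 32]) {
        if corrupt_chunks.insert(digest) {
            let hex: String = digest.iter().map(|b| format!("{:02x}", b)).collect();
            eprintln!("corrupt chunk: {}", hex);
        }
    }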
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The extraction algorithm has a state (bool) indicating
whether we're currently in a positive or negative match.
It has always been initialized to true at the beginning,
but when the user provides a `--pattern` argument we need to
start out with a negative match.
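In essence (sketch):

    // With no patterns everything matches from the start; with patterns we
    // start in a negative match and let matching patterns flip the state.
    fn initial_match_state(patterns: &[String]) -> bool {
        patterns.is_empty()
    }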
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This should never trigger if everything else works correctly, but it is
still a very cheap check to avoid wrongly marking a backup as "OK" when
in fact some chunks might be missing.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Multiple backups within one backup group don't really make sense and
break all sorts of guarantees (e.g. a second backup started after a
first would use a "known-chunks" list from the previous unfinished one,
which would be empty - but using the list from the last finished one is
not a fix either, as that one could be deleted or pruned once the first
simultaneous backup is finished).
Fix it by only allowing one backup per backup group at a time. This is
done via a flock on the backup group directory, thus remaining intact
even after a reload.
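A minimal sketch using the nix crate (the actual code uses its own
locking helpers):

    use std::fs::File;
    use std::os::unix::io::AsRawFd;
    use std::path::Path;
    use anyhow::{anyhow, Error};
    use nix::fcntl::{flock, FlockArg};

    // Take an exclusive, non-blocking flock on the backup group directory.
    // The lock is held as long as the returned File is kept alive.
    fn lock_backup_group(group_dir: &Path) -> Result<File, Error> {
        let file = File::open(group_dir)?;
        flock(file.as_raw_fd(), FlockArg::LockExclusiveNonblock)
            .map_err(|_| anyhow!("backup group {:?} is locked by another backup", group_dir))?;
        Ok(file)
    }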
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
To prevent a race with a background GC operation, do not allow deletion
of backups whose index might currently be referenced as the "known chunk
list" for successive backups. Otherwise the GC could delete chunks it
thinks are no longer referenced, while at the same time telling the
client that it doesn't need to upload said chunks because they already
exist.
Additionally, prevent deletion of whole backup groups if they contain
snapshots that appear to be currently in progress. This is
currently unlikely to trigger, as that function is only used for sync
jobs, but it's a useful safeguard either way.
Deleting a single snapshot has a 'force' parameter, which is necessary
to allow deleting incomplete snapshots on an aborted backup. Pruning
also sets force=true to avoid the check, since it calculates which
snapshots to keep on its own.
To avoid code duplication, the is_finished method is factored out.
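Sketched, assuming a snapshot counts as finished once its manifest blob
exists (the manifest is written last; the file name is used here as an
example):

    use std::path::Path;

    fn is_finished(snapshot_dir: &Path) -> bool {
        snapshot_dir.join("index.json.blob").exists()
    }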
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Also swap the order of a couple of `.map_err().await` to
`.await.map_err()` since that's generally more efficient.
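For illustration, a minimal before/after (hypothetical request
function):

    // before: wraps the whole future in a futures::TryFutureExt::map_err
    // adapter just to transform the output:
    //     do_request().map_err(|e| format!("request failed: {}", e)).await
    // after: awaits first, then maps the plain Result, avoiding the adapter
    async fn fetch() -> Result<String, String> {
        do_request().await.map_err(|e| format!("request failed: {}", e))
    }

    async fn do_request() -> Result<String, std::io::Error> {
        Ok("ok".to_string())
    }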
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
...when an entry is selected that doesn't exist after the reload.
E.g. when one selects a file within a snapshot and then clicks
the delete icon for said snapshot, focusRow would then fail and the
loading mask would stay on until a reload.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
a blob can be empty (e.g. an empty pct fw conf), so we
have to set the minimum size to the header size
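for example (the header layout here is an assumption for illustration):

    use anyhow::{bail, Error};

    // a blob header: 8 byte magic + 4 byte crc; the payload may be empty,
    // so this is the only hard lower bound for a raw blob.
    const BLOB_HEADER_SIZE: usize = 12;

    fn check_blob_size(raw: &[u8]) -> Result<(), Error> {
        if raw.len() < BLOB_HEADER_SIZE {
            bail!("blob too small ({} bytes)", raw.len());
        }
        Ok(())
    }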
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>