To prevent a race with a background GC operation, do not allow deletion
of backups whose index might currently be referenced as the "known chunk
list" for successive backups. Otherwise the GC could delete chunks it
thinks are no longer referenced, while at the same time telling the
client that it doesn't need to upload said chunks because they already
exist.
Additionally, prevent deletion of whole backup groups if they contain
snapshots that appear to be currently in progress. This is
currently unlikely to trigger, as that function is only used for sync
jobs, but it's a useful safeguard either way.
Deleting a single snapshot has a 'force' parameter, which is necessary
to allow deleting incomplete snapshots on an aborted backup. Pruning
also sets force=true to avoid the check, since it calculates which
snapshots to keep on its own.
To avoid code duplication, the is_finished method is factored out.
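A minimal sketch of the idea, assuming a snapshot counts as finished once its
manifest has been written (names and the manifest file name here are
illustrative, not the actual implementation):

    use std::path::PathBuf;

    /// Hypothetical sketch: a snapshot is "finished" once its manifest exists.
    pub struct BackupDir {
        full_path: PathBuf,
    }

    impl BackupDir {
        pub fn is_finished(&self) -> bool {
            // the manifest is only written when the backup completes successfully
            self.full_path.join("index.json.blob").exists()
        }
    }

    fn remove_backup_dir(dir: &BackupDir, force: bool) -> Result<(), String> {
        if !force && !dir.is_finished() {
            return Err("cannot remove backup that is currently in progress".into());
        }
        // ... actually remove the snapshot directory here
        Ok(())
    }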
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
And make verify_crc private for now. We always call load_from_reader() to
verify the CRC.
Also add load_chunk() to datastore.rs (from chunk_store::read_chunk()),
useful to get info like whether the previous snapshot was encrypted in
libproxmox-backup-qemu.
Requested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Requires updating the AsyncRead implementation to cope with byte-wise
seeks to intra-chunk positions.
Uses chunk_from_offset to get locations within chunks, but tries to
avoid it for sequential reads so as not to reduce performance compared
to before.
AsyncSeek needs to use the temporary seek_to_pos to avoid changing the
position in case an invalid seek is given and it needs to error in
poll_complete.
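For a fixed-size chunk index, the mapping done by chunk_from_offset boils down
to plain division; a rough sketch of the idea (function and parameter names are
assumptions, not the actual reader API):

    /// Sketch: map a byte offset to (chunk index, offset inside that chunk)
    /// for a fixed-size chunk index. Names are illustrative only.
    fn chunk_from_offset(chunk_size: u64, image_size: u64, offset: u64) -> Option<(usize, u64)> {
        if offset >= image_size {
            return None; // seeking past the end is handled by the caller
        }
        let idx = (offset / chunk_size) as usize;
        let inside = offset % chunk_size;
        Some((idx, inside))
    }

    fn main() {
        // 4 MiB chunks: byte 10_000_000 lies in chunk 2, 1_611_392 bytes in
        assert_eq!(
            chunk_from_offset(4 * 1024 * 1024, 16 * 1024 * 1024, 10_000_000),
            Some((2, 10_000_000 - 2 * 4 * 1024 * 1024))
        );
    }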
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
We support using an ext4 mountpoint directly as datastore and even do
so ourself when creating one through the disk manage code.
Such ext4 mountpoints have a lost+found directory which only root can
traverse into. As the GC's listing of images is done as the backup:backup
user, walkdir gets an error.
We cannot simply ignore all permission errors, as they could lead to
missing some backup indexes and thus possibly sweeping more chunks
than desired. While *normally* that should not happen through our
stack, we have already had user reports of rsyncing a datastore from an
old to a new server and getting the permissions wrong.
So for now, stay very strict: only allow a "lost+found" directory
as an immediate child of the datastore base directory, nothing else.
If deemed safe, this can always be made less strict. Possibly by
filtering the known backup-types on the highest level first.
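A rough sketch of the intended filter when walking the datastore (illustrative
only, not the exact GC code):

    use std::path::Path;

    /// Sketch: only skip a permission error for a "lost+found" directory that
    /// is an immediate child of the datastore base; everything else stays a
    /// hard error.
    fn may_ignore_permission_error(base: &Path, failed: &Path) -> bool {
        failed.file_name().map(|n| n == "lost+found").unwrap_or(false)
            && failed.parent() == Some(base)
    }

    fn main() {
        let base = Path::new("/mnt/datastore");
        assert!(may_ignore_permission_error(base, Path::new("/mnt/datastore/lost+found")));
        assert!(!may_ignore_permission_error(base, Path::new("/mnt/datastore/vm/lost+found")));
    }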
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
otherwise we leak those descriptors and run into EMFILE when a backup
group contains many snapshots.
fcntl::openat and Dir::openat are not the same ;)
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
JSON keys MUST be quoted. This is a one-time break in signature
validation for backups created with the broken canonicalization code.
QEMU backups are not affected, as libproxmox-backup-qemu never linked
the broken versions.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
It is nice to have a command to exit from the shell instead of
only allowing Ctrl+D or Ctrl+C.
The API method exists just for documentation/help purposes and does
nothing by itself; the real logic lives directly in the read loop.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
When creating a new datastore, the base directory is only owned by the
backup user if it did not exist beforehand (create_path chowns only if
it creates the directory, and returns false if it did not create it).
This improves the experience when adding a new datastore on a fresh
disk or existing directory (not owned by backup) - backups/pulls can
be run instead of terminating with EPERM.
Tested on my local test install with a new disk and an existing directory.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
* don't clone hash keys, just use references
* we don't need a String, stick to Vec<u8> and use
  serde_json::to_writer to avoid temporary strings
  altogether (see the sketch below)
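As a generic illustration of writing straight into a Vec<u8> with
serde_json::to_writer (not the patched code itself):

    use serde_json::{json, Value};

    fn main() -> Result<(), serde_json::Error> {
        let value: Value = json!({ "name": "backup", "size": 123 });

        // Vec<u8> implements std::io::Write, so we can serialize without
        // building an intermediate String first.
        let mut output: Vec<u8> = Vec::new();
        serde_json::to_writer(&mut output, &value)?;

        assert_eq!(output.as_slice(), br#"{"name":"backup","size":123}"#);
        Ok(())
    }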
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This is a more convenient way to pass along the key when
creating encrypted backups of unprivileged containers in PVE
where the unprivileged user namespace cannot access
`/etc/pve/priv`.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
place() is used when creating a file, as it will create
intermediate directories, only use it when actually placing
a new file.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This also replaces the recently introduced --encryption
parameter on the client with a --crypt-mode parameter.
This can be "none", "encrypt" or "sign-only".
Note that this introduces various changes in the API types
which previously did not take the above distinction into
account properly:
Both `BackupContent` and the manifest's `FileInfo`:
* lose `encryption: Option<bool>`
* gain `crypt_mode: Option<CryptMode>`
Within the backup manifest itself, the "crypt-mode" property
will always be set.
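A trimmed-down sketch of the resulting types (derive attributes and the field
set here are illustrative, not the exact definitions):

    use serde::{Deserialize, Serialize};

    /// Sketch of the three-way crypt mode distinction described above.
    #[derive(Clone, Copy, Debug, PartialEq, Serialize, Deserialize)]
    #[serde(rename_all = "kebab-case")]
    pub enum CryptMode {
        None,
        Encrypt,
        SignOnly,
    }

    #[derive(Serialize, Deserialize)]
    pub struct FileInfo {
        pub filename: String,
        pub size: u64,
        // replaces the old `encryption: Option<bool>`
        #[serde(skip_serializing_if = "Option::is_none")]
        pub crypt_mode: Option<CryptMode>,
    }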
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
And use it for fixed and dynamic indexes. Please note that this
changes checksums for fixed indexes, so restoring older backups
will fail now (not backward compatible).
To support incremental backups (where not all chunks are sent to the
server), a new parameter "reuse-csum" is introduced on the
"create_fixed_index" API call. When set and equal to last backups'
checksum, the backup writer clones the data from the last index of this
archive file, and only updates chunks it actually receives.
In incremental mode some checks usually done on closing an index cannot
be made, since they would be inaccurate.
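Roughly, the server-side decision could look like this (a simplified sketch;
names and return values are assumptions, not the actual API):

    /// Simplified sketch of the "reuse-csum" decision on create_fixed_index.
    fn create_fixed_index(
        reuse_csum: Option<&str>,
        last_backup_csum: Option<&str>,
    ) -> Result<&'static str, String> {
        match (reuse_csum, last_backup_csum) {
            (Some(reuse), Some(last)) if reuse == last => {
                // clone the index of the previous snapshot; only chunks the
                // client actually uploads overwrite the cloned entries
                Ok("incremental: cloned last index")
            }
            (Some(_), _) => Err("reuse-csum does not match last backup".into()),
            (None, _) => Ok("full: start with an empty index"),
        }
    }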
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
We want to get a string representation of the DirEntryAttribute,
like 'f' for file, etc. Since we have such a mapping already
in CatalogEntryType, use that.
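A trimmed-down sketch of the mapping idea (both enums are reduced to a few
variants for illustration):

    enum CatalogEntryType { Directory, File, Symlink }

    impl CatalogEntryType {
        fn type_char(&self) -> char {
            match self {
                CatalogEntryType::Directory => 'd',
                CatalogEntryType::File => 'f',
                CatalogEntryType::Symlink => 'l',
            }
        }
    }

    enum DirEntryAttribute { Directory, File { size: u64 }, Symlink }

    // reuse the existing CatalogEntryType mapping for DirEntryAttribute
    impl From<&DirEntryAttribute> for CatalogEntryType {
        fn from(attr: &DirEntryAttribute) -> Self {
            match attr {
                DirEntryAttribute::Directory => CatalogEntryType::Directory,
                DirEntryAttribute::File { .. } => CatalogEntryType::File,
                DirEntryAttribute::Symlink => CatalogEntryType::Symlink,
            }
        }
    }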
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Mostly copied from BufferedDynamicReadAt in proxmox-backup-client,
but the reader is wrapped in an Arc in addition to the Mutex.
We will use this for local access to a pxar archive behind a didx file.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Implements AsyncRead as well as Stream for an IndexFile and a store
that implements AsyncReadChunk.
We can use this to async-read or stream the content of a FixedIndex or
DynamicIndex.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We want to save whether a file of a backup is encrypted, so that we can
* show that info on the GUI
* later decide if we need to decrypt the backup
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
copy_nonoverlapping is basically a memcpy, which can also be done
via copy_from_slice, which is not unsafe
(copy_from_slice uses copy_nonoverlapping internally)
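As a generic illustration of the change (not the exact code from the patch):

    fn main() {
        let src: [u8; 4] = [1, 2, 3, 4];
        let mut dst = [0u8; 4];

        // instead of the unsafe variant:
        // unsafe { std::ptr::copy_nonoverlapping(src.as_ptr(), dst.as_mut_ptr(), 4) };

        // the safe equivalent (panics if the slice lengths differ):
        dst.copy_from_slice(&src);

        assert_eq!(dst, src);
    }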
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The last chunk does not have to be as big as the chunk_size;
just use the already available 'chunk_end' function, which does the
correct thing.
This also fixes restoration of images whose sizes are not a multiple of
'chunk_size'.
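A minimal sketch of the idea behind chunk_end() (illustrative, not the actual
index reader method):

    /// Sketch: the last chunk is capped at the image size instead of always
    /// being a full chunk_size long.
    fn chunk_end(chunk_size: u64, image_size: u64, idx: u64) -> u64 {
        let end = (idx + 1) * chunk_size;
        if end > image_size { image_size } else { end }
    }

    fn main() {
        // 10 MiB image with 4 MiB chunks: the third chunk ends at 10 MiB, not 12 MiB
        let (cs, size) = (4 * 1024 * 1024, 10 * 1024 * 1024);
        assert_eq!(chunk_end(cs, size, 0), 4 * 1024 * 1024);
        assert_eq!(chunk_end(cs, size, 2), 10 * 1024 * 1024);
    }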
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This is basically a rewrite of the current logic for navigating the catalog,
but in addition allows following symlinks.
Following symlinks introduces the issue that generation of canonical paths
(needed in the actual pxar archive) is more complex, as symlinks have to be
resolved and loops avoided.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
'clear-selected' allows clearing all the match patterns from the list of
patterns for a subsequent restore.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
'list-selected' now shows the filenames matching the patterns for a restore
instead of the patterns themselves.
The patterns can be displayed by passing the '--pattern' flag.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Two or more successive slashes should be allowed and treated as a single slash.
We also do not treat two successive slashes at the beginning of a path any
differently.
Details are found here:
https://pubs.opengroup.org/onlinepubs/000095399/basedefs/xbd_chap04.html#tag_04_11
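A minimal sketch of that normalization over path components (illustrative, not
the actual shell code):

    /// Sketch: split on '/' and drop empty components, so "a//b", "/a/b" and
    /// "//a/b" all yield the same component list ["a", "b"].
    fn normalize_components(path: &str) -> Vec<&str> {
        path.split('/').filter(|c| !c.is_empty()).collect()
    }

    fn main() {
        assert_eq!(normalize_components("/etc//ssh/"), vec!["etc", "ssh"]);
        assert_eq!(normalize_components("//etc/ssh"), vec!["etc", "ssh"]);
    }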
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
last_successful_backup: Returns the time of the last successful backup
group_path: Returns the absolute path for a backup_group
snapshot_path: Returns the absolute path for a backup_dir
The -sys, -tools and -api crates have now been merged into
the proxmox crate directly. Only the macro crates are separate
(but still re-exported by the proxmox crate in their
designated locations).
When we need to depend on "parts" of the crate later on
we'll just have to use features.
The reason is mostly that these modules had
inter-dependencies which really make them not independent
enough to be their own crates.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
By reading and including xattrs and payload size in struct `DirectoryEntry`,
the tuple of return types is avoided and the code is simpler.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The find matching was incorrectly performed starting from the parent directory
and not as intended from the entries of the parent directory.
Further, the match pattern passed from the catalog shell contains the absolute
path of the search entry point as prefix, so find() must always start from the
archive root. This is because the match pattern has to be stored in the selected
list for a subsequent restore-selected command in the shell.
All matching paths are shown as absolute paths with all contents in the subdir,
equal to what would be restored by the given pattern.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Implements the find command, which allows finding and selecting files for
subsequent restore.
Files selected for restore are now stored in a Vec instead of a HashSet.
This is needed since, instead of the full paths for each file, selected files
are now identified by a list of match patterns, where ordering matters.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
find() iterates over the file tree and matches each node against a list of
match patterns provided at the function call.
For each matching node, a callback function is called with the current
directory stack.
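Conceptually the traversal looks roughly like this (a simplified sketch; the
real catalog/pxar types are omitted):

    /// Simplified sketch of the find() idea: walk a tree of entries, keep the
    /// current directory stack, and invoke the callback for every match.
    struct Entry {
        name: String,
        children: Vec<Entry>,
    }

    fn find<F>(entry: &Entry, stack: &mut Vec<String>, patterns: &[String], callback: &mut F)
    where
        F: FnMut(&[String]),
    {
        stack.push(entry.name.clone());
        if patterns.iter().any(|p| p == &entry.name) {
            callback(&stack[..]);
        }
        for child in &entry.children {
            find(child, stack, patterns, callback);
        }
        stack.pop();
    }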
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
This major refactoring of the catalog based shell utilizes the new API macro and
the API Schema as well as rustyline instead of the old GNU readline C API.
The code now has these 3 main components:
* The `Shell` which handles the readline loop via rustyline.
* The shell functions defined via the API macro.
* The `Context` which holds catalog and decoder instances.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
By changing the way shell commands are defined and parsed, it becomes more
straightforward to extend the current functionality.
The readline input is parsed based on the provided command definition and the
given parameters and options are passed to a command specific callback function.
In addition, the provided command definition including its description is used
to generate a help string to display.
The help command shows a list of all supported commands or the help string for
the provided command.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
And use an extra function set_callback() to configure that.
Also rewrite pxar/fuse.rs and implement a generic Session (will get
further cleanups with next patches).
In order to provide the context needed for tab completion via the readline
callback, the needed mut ref is passed via a thread local storage key.
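A minimal sketch of the thread-local idea (names are illustrative; the real
code passes a mutable reference to the actual shell context):

    use std::cell::RefCell;

    // Sketch: the completion callback cannot receive extra arguments, so the
    // shell context is parked in a thread-local slot while readline runs.
    struct Context {
        current_dir: String,
        // catalog reader, decoder, ...
    }

    thread_local! {
        static SHELL_CONTEXT: RefCell<Option<Context>> = RefCell::new(None);
    }

    fn complete(line: &str) -> Vec<String> {
        SHELL_CONTEXT.with(|slot| match slot.borrow().as_ref() {
            Some(ctx) => vec![format!("{}/{}", ctx.current_dir, line)],
            None => Vec::new(),
        })
    }

    fn main() {
        SHELL_CONTEXT.with(|slot| {
            *slot.borrow_mut() = Some(Context { current_dir: "/etc".to_string() });
        });
        assert_eq!(complete("ssh"), vec!["/etc/ssh".to_string()]);
    }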
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
This is needed in order to explicitly clone the values when needed in the
catalog shell implementation.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
proxmox::tools now has a Uuid module using the native
libuuid.
Adds build dependency: libuuid1 (which is a Pre-Depends of
util-linux, so always installed anyway).
Drops uuid + 16 more crate dependencies.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Counts the bytes written by the CatalogBlobWriter in order to obtain the
stream position, which is needed to get the offsets used to reference catalog items.
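The byte counting amounts to a small Write wrapper; an illustrative sketch (not
the actual CatalogBlobWriter):

    use std::io::{self, Write};

    /// Sketch: wrap any writer and count the bytes that pass through, so the
    /// current stream position is available for catalog item offsets.
    struct CountingWriter<W: Write> {
        inner: W,
        written: u64,
    }

    impl<W: Write> CountingWriter<W> {
        fn new(inner: W) -> Self {
            Self { inner, written: 0 }
        }
        fn position(&self) -> u64 {
            self.written
        }
    }

    impl<W: Write> Write for CountingWriter<W> {
        fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
            let n = self.inner.write(buf)?;
            self.written += n as u64;
            Ok(n)
        }
        fn flush(&mut self) -> io::Result<()> {
            self.inner.flush()
        }
    }

    fn main() -> io::Result<()> {
        let mut w = CountingWriter::new(Vec::new());
        w.write_all(b"catalog entry")?;
        assert_eq!(w.position(), 13);
        Ok(())
    }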
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The writer.write_all() call accessed data marked as
undefined to valgrind. Note that we shouldn't write out
uninitialized memory for security reasons anyway.
(note that vec::undefined already did zero-initialize the
data, but also marked it as undefined for valgrind when
compiling with the valgrind feature)
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
The type was sized properly, but the number was still wrong;
fixed this.
TODO! Once unions with non-Copy values are stable make this
a `union { full: [u8; 4096], data: TheActualHeader }`;
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
AFAICT we have no use for it anymore, its api entry points
are gone. If we do end up needing something from it, it's
still in the git history anyway. (And about two thirds of it
can be made much less awkward by utilizing async-await
anyway, so no love lost there...)
Moved the chunker back into src/backup/chunker.rs
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
We can now use iter::from_fn(), which makes for much nicer
logic. The only thing better is going to be when we can use
generators with `yield`.
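For illustration, a small counter built with iter::from_fn() (a generic
example, not the refactored code):

    fn main() {
        // iter::from_fn() builds an iterator from a closure that keeps its own
        // state, which reads much nicer than a hand-rolled Iterator impl.
        let mut count = 0u32;
        let counter = std::iter::from_fn(move || {
            count += 1;
            if count <= 3 { Some(count) } else { None }
        });

        assert_eq!(counter.collect::<Vec<_>>(), vec![1, 2, 3]);
    }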
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>