like the sync jobs, so that if an admin configures a schedule it
really starts the next time that time is reached, not immediately
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
listing, updating or deleting a user is now possible for the user
itself, in addition to higher-privileged users that have appropriate
privileges on '/access/users'.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
filtered by those they are privileged enough to read individually. this
allows such users to configure prune/GC schedules via the GUI (the API
already allowed it previously).
permission-wise, a user with this privilege can already:
- list all stores they have access to (returns just name/comment)
- read the config of each store they have access to individually
(returns full config of that datastore + digest of whole config)
but this change just combines them to
- read configs of all datastores they have access to (returns full
config of those datastores + digest of whole config)
users that have AUDIT on just /datastore without propagate can now no
longer read all configurations (this could be added back, it just
seems to make little sense to me).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
by packing the auth into a RwLock and starting a background
future that renews the ticket every 15 minutes
we still use the BroadcastFuture for the first ticket, and only
once that is finished do we start the scheduled future
we have to store an abort handle for the renewal future and abort it when
the http client is dropped, so we do not request new tickets forever
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
like we do for PVE. this is visible on the dashboard, and caused a 403 on
each update, which bothers me when looking at the dev console.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
A client can omit uploading chunks in the "known_chunks" list; those
then also won't be written on the server side. Check all those chunks
mentioned in the index but not uploaded for existence, and report an
error if they don't exist, instead of marking a potentially broken backup
as "successful".
This is only important if the base snapshot references corrupted chunks,
but has not been negatively verified. Also, it is important to only
verify this at the end, *after* all index writers are closed, since only
then can it be guaranteed that no GC will sweep referenced chunks away.
If a chunk is found missing, also mark the previous backup with a
verification failure, since we know the missing chunk has to be referenced
in it (the only way it could have been inserted into known_chunks with
checked=false). This has the benefit of automatically doing a
full-upload backup if the user attempts to retry after seeing the new
error, instead of requiring a manual verify or forget.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Do not allow clients to reuse chunks from the previous backup if it has
a failed validation result. This would result in a new "successful"
backup that potentially references broken chunks.
If the previous backup has not been verified, assume it is fine and
continue on.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
- remove chrono dependency
- depend on proxmox 0.3.8
- remove epoch_now, epoch_now_u64 and epoch_now_f64
- remove tm_editor (moved to proxmox crate)
- use new helpers from proxmox 0.3.8
* epoch_i64 and epoch_f64
* parse_rfc3339
* epoch_to_rfc3339_utc
* strftime_local
- BackupDir changes:
* store epoch and rfc3339 string instead of DateTime
* backup_time_to_string now returns a Result
* remove unnecessary TryFrom<(BackupGroup, i64)> for BackupDir
- DynamicIndexHeader: change ctime to i64
- FixedIndexHeader: change ctime to i64
since converting from i64 epoch timestamp to DateTime is not always
possible. previously, passing an invalid backup-time from client to server
(or vice-versa) panicked the corresponding tokio task. now we get proper
error messages including the invalid timestamp.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
otherwise operations like catalog shell panic when viewing pxar archives
containing such entries, e.g. with mtime very far ahead into the future.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
by either printing the original, out-of-range timestamp as-is, or
bailing with a proper error message instead of panicking.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
even if it can't be handled by chrono. silently replacing it with epoch
0 is confusing.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
else we get the default of 16k, which is quite low for our use case.
this improves the TLS upload benchmark speed by about 30-40% for me.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
fixes the error, "manifest does not contain
file 'X.pxar'", that occurs when trying to mount
a pxar archive with 'proxmox-backup-client mount':
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
similar to the other fix, if we do not set the buffer size manually,
we get better performance for high latency connections
restore benchmark from f.gruenbichler:
no delay, without patch: ~50MB/s
no delay, with patch: ~50MB/s
25ms delay, without patch: ~11MB/s
25ms delay, with patch: ~50MB/s
my own restore benchmark:
no delay, without patch: ~1.5GiB/s
no delay, with patch: ~1.5GiB/s
25ms delay, without patch: 30MiB/s
25ms delay, with patch: ~950MiB/s
for some more details about those benchmarks see
https://lists.proxmox.com/pipermail/pbs-devel/2020-September/000600.html
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by leaving the buffer sizes on default, we get much better tcp performance
for high latency links
throughput is still impacted by latency, but much less so when
leaving the sizes at default.
the disadvantage is slightly higher memory usage of the server
(details below)
my local benchmarks (proxmox-backup-client benchmark):
pbs client:
PVE Host
Epyc 7351P (16core/32thread)
64GB Memory
pbs server:
VM on Host
1 Socket, 4 Cores (Host CPU type)
4GB Memory
average of 3 runs, rounded to MB/s
                   | no delay | 1ms     | 5ms     | 10ms    | 25ms    |
without this patch | 230MB/s  | 55MB/s  | 13MB/s  | 7MB/s   | 3MB/s   |
with this patch    | 293MB/s  | 293MB/s | 249MB/s | 241MB/s | 104MB/s |
memory usage (resident memory) of proxmox-backup-proxy:
                   | peak during benchmarks | after benchmarks |
without this patch | 144MB                   | 100MB            |
with this patch    | 145MB                   | 130MB            |
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We need to update the atime of chunk files if they already exist,
otherwise a concurrently running GC could sweep them away.
This is protected with ChunkStore.mutex, so the fstat/unlink does not
race with touching.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
The iterator of get_chunk_iterator is extended with a third parameter
indicating whether the current file is a chunk (false) or a .bad file
(true).
Count their sizes towards the total of removed bytes, since deleting them
also frees disk space.
.bad files are only deleted if the corresponding chunk exists, i.e. has
been rewritten. Otherwise we might delete data only marked bad because
of transient errors.
While at it, also clean up and use nix::unistd::unlinkat instead of
unsafe libc calls.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
This ensures that following backups will always upload the chunk,
thereby replacing it with a correct version again.
Format for renaming is <digest>.<counter>.bad where <counter> is used if
a chunk is found to be bad again before a GC cleans it up.
Care has been taken to deliberately only rename a chunk in conditions
where it is guaranteed to be an error in the chunk itself. Otherwise a
broken index file could lead to an unwanted mass-rename of chunks.
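a hypothetical helper just to illustrate the naming scheme (not the actual implementation):

    use std::path::{Path, PathBuf};

    // pick the next free "<digest>.<counter>.bad" name next to the chunk
    // and rename the corrupt chunk to it
    fn rename_corrupt_chunk(chunk_path: &Path, digest_hex: &str) -> std::io::Result<PathBuf> {
        let dir = chunk_path.parent().unwrap_or_else(|| Path::new("."));
        let mut counter = 0u64;
        loop {
            let candidate = dir.join(format!("{}.{}.bad", digest_hex, counter));
            if !candidate.exists() {
                std::fs::rename(chunk_path, &candidate)?;
                return Ok(candidate);
            }
            counter += 1;
        }
    }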
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
a range from high to low in rust results in an empty range
(see std::ops::Range documentation)
so we need to generate the range from 0..data.len() and then reverse it
also, the task log contains a newline at the end, so we have to remove
that (should it exist)
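a minimal sketch of the two points (not the actual task log code, just the idea):

    fn main() {
        let data = b"first line\nlast line\n";

        // a descending literal like (data.len()..0) is an empty range, so build
        // the ascending range and reverse it to walk the buffer from the end
        let reversed: Vec<usize> = (0..data.len()).rev().collect();
        assert_eq!(reversed.first(), Some(&(data.len() - 1)));

        // drop a single trailing newline, should it exist
        let text = std::str::from_utf8(data).unwrap();
        let text = text.strip_suffix('\n').unwrap_or(text);
        assert_eq!(text, "first line\nlast line");
    }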
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this implements parsing and calculating calendarevents that have a
basic date component (year-mon-day) with the usual syntax options
(*, ranges, lists)
and some special events:
monthly
yearly/annually (like systemd)
quarterly
semiannually, semi-annually (like systemd)
includes some regression tests
the ~ syntax for days (the last x days of the month) is not yet
implemented
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of using 'as' and silently converting incorrectly,
use the TryInto trait and raise an error if we cannot convert
this should only happen if we have a negative year,
but this is expected (we do not want schedules from before the year 0)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
add_* are modeled after add_days
subtract one for set_mon to have a consistent interface for all fields
(i.e. getter/setter return/expect the 'real' number, not the ones
in the tm struct)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the tm struct contains the year - 1900 but we added that
if we want to use the libc normalization correctly, the tm struct
must have the correct year in it, else the computations for timezones,
etc. fail
instead add a getter that adds the years and a setter that subtracts it again
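roughly like this (struct and method names are assumptions, the libc crate is assumed):

    // libc::tm stores tm_year as "years since 1900", so expose the real year
    // through a getter/setter pair instead of adjusting it at every call site
    struct TmEditor {
        t: libc::tm,
    }

    impl TmEditor {
        fn year(&self) -> libc::c_int {
            self.t.tm_year + 1900
        }

        fn set_year(&mut self, year: libc::c_int) {
            self.t.tm_year = year - 1900;
        }
    }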
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if we give multiple options/ranges for a value, e.g.
2,4,8
we always choose the biggest, instead of the smallest that is next
this happens because in DateTimeValue::find_next(value)
'next' can be set multiple times and we set it when the new
value was *bigger* than the last found 'next' value, when in reality
we have to choose the *smallest* next we can find
reverse the comparison operator to fix this
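a simplified sketch of the fixed selection (the real DateTimeValue variants with ranges and repetitions are omitted):

    // among all candidate values >= the current one, keep the *smallest*
    // match instead of the last (largest) one seen
    fn find_next(options: &[i64], value: i64) -> Option<i64> {
        let mut next: Option<i64> = None;
        for &opt in options {
            if opt >= value {
                match next {
                    // previously the comparison went the other way, keeping the biggest
                    Some(n) if opt < n => next = Some(opt),
                    None => next = Some(opt),
                    _ => {}
                }
            }
        }
        next
    }

    // find_next(&[2, 4, 8], 3) now yields Some(4) instead of Some(8)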
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we never passed 'false' to it anyway so remove it
(we can add it again if we should ever need it)
also remove the adding of wday (gets normalized anyway)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we want to use dates for the calendarspec, and with that there are some
impossible combinations that cannot be detected during parsing
(e.g. some datetimes do not exist in some timezones, and the timezone
can change after setting the schedule)
so finding no timestamp is not an error anymore but a valid result
we omit logging in that case (since it is not an error anymore)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
mktime/gmtime can normalize time and can even handle special timezone
cases, like the fact that the time 2:30 on specific day/timezone combos
does not exist
we have to convert the signature of all functions that use
normalize_time since mktime/gmtime can return an EOVERFLOW
but if this happens there is no way we can find a good time anyway
since normalize_time will always set wday according to the rest of the
time, remove set_wday
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
while it was correct, there was no measurable speed gain
(a benchmark yielded 2.8 ms for a spec that did not find a timestamp either way)
so remove it for simpler code
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when trying to parse the task status, we seek 8k from the end
which may be into the middle of a line, so the datetime parsing
can fail (when the log message contains ': ')
This patch does a fast search for the last line, and avoids the
'lines' iterator.
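the idea, sketched on a plain byte buffer (not the actual parsing code):

    // given the tail of a task log (e.g. the last 8k read from the end),
    // find the final complete line without going through `lines()`
    fn last_line(buf: &[u8]) -> &[u8] {
        // ignore a single trailing newline, then search backwards for the
        // previous newline; everything after it is the last line
        let end = if buf.ends_with(b"\n") { buf.len() - 1 } else { buf.len() };
        match buf[..end].iter().rposition(|&b| b == b'\n') {
            Some(pos) => &buf[(pos + 1)..end],
            None => &buf[..end],
        }
    }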
for datastores where the requesting user has read or write permissions,
since the API method itself filters by that already. this is the same
permission setting and filtering that the datastore list API endpoint
does.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Because if not, the backups it creates have bogus permissions and may
seem like they got broken once the daemon is started again with the
correct user/group.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Save the state ("ok" or "failed") and the UPID of the respective
verify task. With this we can easily allow to open the relevant task
log and show when the last verify happened.
As we already load the manifest when listing the snapshots, just add
it there directly.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It's a string-type.
Implement Serialize via Display, Deserialize via FromStr and
add an API_SCHEMA so that it can be used as a type within
the #[api] macro.
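a rough shape such impls can take (the newtype name below is hypothetical, serde is assumed; the API_SCHEMA part is not shown):

    use std::fmt;
    use std::str::FromStr;

    use serde::{Deserialize, Deserializer, Serialize, Serializer};

    struct Tag(String);

    impl fmt::Display for Tag {
        fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
            f.write_str(&self.0)
        }
    }

    impl FromStr for Tag {
        type Err = std::convert::Infallible;
        fn from_str(s: &str) -> Result<Self, Self::Err> {
            Ok(Tag(s.to_owned()))
        }
    }

    // Serialize via Display ...
    impl Serialize for Tag {
        fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
            serializer.collect_str(self)
        }
    }

    // ... and Deserialize via FromStr
    impl<'de> Deserialize<'de> for Tag {
        fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {
            let s = String::deserialize(deserializer)?;
            s.parse().map_err(serde::de::Error::custom)
        }
    }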
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
to avoid `map_struct` which is actually unsafe because it
does not verify alignment constraints at all
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
There is no requirement to have at least
a blank line, attribute or comment in between two
interface definitions, e.g.
iface lo inet loopback
iface lo inet6 loopback
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
it really is not necessary, since the only time we are interested in
loading the state from the file is when we list it, and there
we use JobState::load directly to avoid the lock
we still need to create the file on syncjob creation though, so
that we have the correct time for the schedule
to do this we add a new create_state_file that overwrites it on creation
of a syncjob
for safety, we subtract 30 seconds from the in-memory state in case
the statefile is missing
since we call create_state_file from proxmox-backup-api,
we have to chown the lock file after creating to the backup user,
else the sync job scheduling cannot acquire the lock
also we remove the lock file on statefile removal
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
so that we can log whether it was triggered by a schedule, and write to a
jobstate file
it also now correctly polls the abort_future of the worker, so that
users can stop a sync
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
and move the pull parameters into the worker, so that the task log
contains the error if there is one
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this is intended to be a generic helper to (de)serialize job states
(e.g., sync, verify, and so on)
writes a json file into '/var/lib/proxmox-backup/jobstates/TYPE-ID.json'
the api creates the directory with the correct permissions, like
the rrd directory
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the endtime should be the timestamp of the last log line
or if there is no log at all, the starttime
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
representing a state via an enum makes more sense in this case
we also implement FromStr and Display to make it easy to convert from/to
a string
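a sketch of that pattern (the variant names and string values are assumed, not the actual definition):

    use std::fmt;
    use std::str::FromStr;

    enum JobState {
        Ok,
        Failed,
    }

    impl fmt::Display for JobState {
        fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
            match self {
                JobState::Ok => write!(f, "ok"),
                JobState::Failed => write!(f, "failed"),
            }
        }
    }

    impl FromStr for JobState {
        type Err = String;
        fn from_str(s: &str) -> Result<Self, Self::Err> {
            match s {
                "ok" => Ok(JobState::Ok),
                "failed" => Ok(JobState::Failed),
                other => Err(format!("invalid state: {}", other)),
            }
        }
    }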
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
To prevent forgetting the base snapshot of a running backup, and catch
the case when it still happens (e.g. via manual rm) to at least error
out instead of storing a potentially invalid backup.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
This reverts commit d53fbe2474.
The HashSet and "register" function are unnecessary, as we already know
which backup is the one we need to check: the last one, stored as
'last_backup'.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
A flock on the snapshot dir itself is used in addition to the group dir
lock. The lock is used to avoid races with forget and prune, while
having more granularity than the group lock (i.e. the group lock is
necessary to prevent more than one backup per group, but the snapshot
lock still allows backups unrelated to the currently running one to be
forgotten/pruned).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Attempt to lock the backup directory to be deleted, if it works keep the
lock until the deletion is complete. This way we ensure that no other
locking operation (e.g. using a snapshot as base for another backup) can
happen concurrently.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
an encrypted Index should never reference a plain-text chunk, and an
unencrypted Index should never reference an encrypted chunk.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
these checks were already in place for regular downloading of backed up
files, also do them when attempting to decode a catalog, or when
downloading decoded files referenced by a pxar index.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
If the datastore holds broken backups for some reason, do not attempt to
base following snapshots on those. This would lead to an error on
/previous, leaving the client no choice but to upload all chunks, even
though there might be potential for incremental savings.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
When uploading an RSA encoded key alongside the backup,
the backup would fail with the error message: "wrong blob
file extension".
Adding the '.blob' extension to rsa-encrypted.key before the
call to upload_blob_from_data(), rather than after, fixes
the issue.
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
Commit 9fa55e09 "finish_backup: test/verify manifest at server side"
moved the finished-marking above some checks, which means if those fail
the backup would still be marked as successful on the server.
Revert that part and comment the line for the future.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
instead of bailing and stopping the entire GC process, warn about the
missing chunks and continue.
this results in "TASK WARNINGS: X" as the status.
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Used chunks are marked in phase1 of the garbage collection process by
using the atime property. Each used chunk gets touched so that the atime
gets updated (if older than 24h, see relatime).
Should there ever be a situation in which the phase1 in the GC run needs
a very long time to finish, it could happen that the grace period
calculated in phase2 is not long enough and thus the marking of the
chunks (atime) becomes invalid. This would result in the removal of
needed chunks.
Even though the likelihood of this happening is very low, using the
timestamp from right before phase1 is started to calculate the grace
period in phase2 should avoid this situation.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
and not just of previously synced ones.
we can't use BackupManifest::verify_file as the archive is still stored
under the tmp path at this point.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
for encrypted chunks this is currently not possible, as we need the key
to decode the chunk.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
regular chunks are only decoded when their contents are accessed, in
which case we need to have the key anyway and want to verify the digest.
for blobs we need to verify beforehand, since their checksums are always
calculated based on their raw content, and stored in the manifest.
manifests are also stored as blobs, but don't have a digest in the
traditional sense (they might have a signature covering parts of their
contents, but that is verified already when loading the manifest).
this commit does not cover pull/sync code which copies blobs and chunks
as-is without decoding them.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Errors while applying metadata will not be considered fatal
by default using `pxar extract` unless `--strict` was passed
in which case it'll bail out immediately.
It'll still return an error exit status if something had
failed along the way.
Note that most other errors will still cause it to bail out
(eg. errors creating files, or I/O errors while writing
the contents).
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
'username' here is without realm, but we really want to use user@realm
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
tokio's kill_on_drop sometimes leaves zombies around, especially
when no other tokio::process::Command is spawned afterwards
so instead of relying on the 'kill_on_drop' feature, we explicitly
kill the child on a worker abort. to be able to do this
we have to use 'tokio::select' instead of 'futures::select' since
the latter requires the future to be fused, which consumes the
child handle, leaving us no possibility to kill it after fusing.
(tokio::select does not need the futures to be fused, so we
can reuse the child future after the select again)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
so that we can print a list at the end of the worker which backups
are corrupt.
this is useful if there are many snapshots and some in between had an
error. Before this patch, the task log simply says to 'look in the logs'
but if the log is very long it makes it hard to see what exactly failed.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this makes it easier to see which chunks are corrupt
(and enables us in the future to build a 'complete' list of
corrupt chunks)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The extraction algorithm has a state (bool) indicating
whether we're currently in a positive or negative match
which has always been initialized to true at the beginning,
but when the user provides a `--pattern` argument we need to
start out with a negative match.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This should never trigger if everything else works correctly, but it is
still a very cheap check to avoid wrongly marking a backup as "OK" when
in fact some chunks might be missing.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Multiple backups within one backup group don't really make sense, but
break all sorts of guarantees (e.g. a second backup started after a
first would use a "known-chunks" list from the previous unfinished one,
which would be empty - but using the list from the last finished one is
not a fix either, as that one could be deleted or pruned once the first
simultaneous backup is finished).
Fix it by only allowing one backup per backup group at one time. This is
done via a flock on the backup group directory, thus remaining intact
even after a reload.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
To prevent a race with a background GC operation, do not allow deletion
of backups whose index might currently be referenced as the "known chunk
list" for successive backups. Otherwise the GC could delete chunks it
thinks are no longer referenced, while at the same time telling the
client that it doesn't need to upload said chunks because they already
exist.
Additionally, prevent deletion of whole backup groups, if there are
snapshots contained that appear to be currently in-progress. This is
currently unlikely to trigger, as that function is only used for sync
jobs, but it's a useful safeguard either way.
Deleting a single snapshot has a 'force' parameter, which is necessary
to allow deleting incomplete snapshots on an aborted backup. Pruning
also sets force=true to avoid the check, since it calculates which
snapshots to keep on its own.
To avoid code duplication, the is_finished method is factored out.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Also swap the order of a couple of `.map_err().await` to
`.await.map_err()` since that's generally more efficient.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
a blob can be empty (e.g. an empty pct fw conf), so we
have to set the minimum size to the header size
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
And make verify_crc private for now. We always call load_from_reader() to
verify the CRC.
Also add load_chunk() to datastore.rs (from chunk_store::read_chunk())
is a helper to spawn an internal tokio task without it showing up
in the task list
it is still tracked for reload and notifies the last_worker_listeners
this enables the console to survive a reload of proxmox-backup-proxy
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
using micros vs. as_secs_f64 allows it to be calculated as usize
bytes, which is easier to handle - this was also used when it still lived in
upload_chunk_info_stream
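the idea, sketched with assumed names (not the actual helper):

    use std::time::Duration;

    // integer bytes/s from elapsed microseconds, avoiding f64 entirely
    fn bytes_per_second(size: usize, elapsed: Duration) -> usize {
        let micros = elapsed.as_micros().max(1); // u128, avoids division by zero
        ((size as u128) * 1_000_000 / micros) as usize
    }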
Co-authored-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Even though it has nothing to do with vnc, we keep the name of the api
call for compatibility with our xtermjs client.
termproxy:
verifies that the user is allowed to open a console and starts
termproxy with the correct parameters
starts a TcpListener on "localhost:0" so that the kernel decides the
port (instead of trying to reserve one like in pve). Then it
leaves the fd open for termproxy and gives the number as port
and tells it via '--port-as-fd' that it should interpret this
as an open fd
the vncwebsocket api call checks the 'vncticket' (name for compatibility)
and connects the remote side (after an Upgrade) with a local TcpStream
connecting to the port given via WebSocket from the proxmox crate
to make sure that only the client can connect that called termproxy and
no one can connect to an arbitrary port on the host we have to include
the port in the ticket data
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
modeled after PVE's/PMG's vncticket (i substituted the vnc with term)
by putting the path and username as secret data in the ticket
when sending the ticket to /access/ticket it only verifies it,
checks the privs on the path and does not generate a new ticket
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of exposing handlebars itself, offer a register_template and
a render_template ourselves.
render_template checks if the template file was modified since
the last render and reloads it when necessary
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Depends on patched apt-pkg-native-rs. Changelog-URL detection is
inspired by PVE perl code for now, though marked with fixme to use 'apt
changelog' later on, if/when our repos have APT-compatible changelogs
set up.
list_installed_apt_packages iterates all packages and creates an
APTUpdateInfo with detailed information for every package matched by the
given filter Fn.
Sadly, libapt-pkg has some questionable design choices regarding their
use of 'iterators', which means quite a bit of nesting...
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
useful to get info like whether the previous snapshot was encrypted, in
libproxmox-backup-qemu
Requested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
I mean the user expects that we know what archives, fidx or didx, are
in a backup, so this is internal info and should not be logged by
default
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Verbosity needs to be a non-binary level, as this now is just
debug/development info, which is normally too much for end users.
We want to have it available, but with a much higher verbosity level.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Track reused size and chunk counts.
Log reused size and use pretty print for all sizes and bandwidth
metrics.
Calculate speed over the actually uploaded size, as otherwise it can be
skewed really badly (showing something like terabytes per second)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Requires updating the AsyncRead implementation to cope with byte-wise
seeks to intra-chunk positions.
Uses chunk_from_offset to get locations within chunks, but tries to
avoid it for sequential read to not reduce performance from before.
AsyncSeek needs to use the temporary seek_to_pos to avoid changing the
position in case an invalid seek is given and it needs to error in
poll_complete.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
We support using an ext4 mountpoint directly as datastore and even do
so ourself when creating one through the disk manage code.
Such ext4 mountpoints have a lost+found directory which only root can
traverse into. As the GC's listing of images is done as the backup:backup
user, walkdir gets an error.
We cannot just ignore all permission errors, as they could lead to
missing some backup indexes and thus possibly sweeping more chunks
than desired. While *normally* that should not happen through our
stack, we already had user reports of rsyncs done to move a
datastore from an old to a new server that got the permissions wrong.
So for now be still very strict, only allow a "lost+found" directory
as immediate child of the datastore base directory, nothing else.
If deemed safe, this can always be made less strict. Possibly by
filtering the known backup-types on the highest level first.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
otherwise we leak those descriptors and run into EMFILE when a backup
group contains many snapshots.
fcntl::openat and Dir::openat are not the same ;)
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
also when they have been removed/forgotten since we retrieved the
snapshot list for the currently syncing backup group.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
This aligns it with PVE and allows the widget toolkit's update window
"refresh" to work without modifications once POST /apt/update is
implemented.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
While we do not yet support the date specs for CalendarEvent, the left
out "weekly" special expression[0] does not require that support.
It is specified to be equivalent to `Mon *-*-* 00:00:00` [0] and
this can be implemented with the weekday and time support we already
have.
[0]: https://www.freedesktop.org/software/systemd/man/systemd.time.html#Calendar%20Events
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
when a datastore has enough data to calculate the estimated full date,
but always has exactly the same usage, the factor b of the regression
is '0'
return 0 for that case so that the gui can show 'never' instead of
'not enough data'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
JSON keys MUST be quoted. this is a one-time break in signature
validation for backups created with the broken canonicalization code.
QEMU backups are not affected, as libproxmox-backup-qemu never linked
the broken versions.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Make it actually do the correct cast by using `libc::c_char`.
Fixes issues when building on other platforms, e.g., the aarch64
client-only build on Arch Linux ARM that I tested in my free time.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Especially helpful for requests not coming from browsers (where the
URL is normally easy to find out).
Makes it easier to detect if one triggered a request with an old
client, or so..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
it is nice to have a command to exit from the shell instead of
only allowing ctrl+d or ctrl+c
the api method is just for documentation/help purposes and does nothing
by itself, the real logic is directly in the read loop
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Change the .pxarexclude parser to byte based parsing with
`.split(b'\n')` instead of `.lines()`, to not panic on
non-utf8 paths.
Specially deal with absolute paths by prefixing them with
the current directory.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
As otherwise this is really user unfriendly, and not printing it has no
advantage. If one doesn't want to leak resource existence they just
need to *always* check permissions before checking if the requested
resource exists; if that's not done one can leak information even
without getting the path returned (as the system will either print
"resource doesn't exist" or "no permissions" respectively)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When creating a new datastore the basedir is only owned by the backup
user if it did not exist beforehand (create_path chowns only if it
creates the directory, and returns false if it did not create the
directory).
This improves the experience when adding a new datastore on a fresh
disk or existing directory (not owned by backup) - backups/pulls can
be run instead of terminating with EPERM.
Tested on my local test install with a new disk, and an existing directory:
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
it does not make sense to check if the worker is running if we already
have an endtime and state
our 'worker_is_active_local' heuristic returns true for non
process-local tasks, so we got 'running' for all tasks that were not
started by 'our' pid and were still running
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the **/ is not required and currently also mistakenly
doesn't match /lost+found which is probably buggy on the
pathpatterns crate side and needs fixing there
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
zfs does not have to be installed, so simply log an error and
continue, users still get an error when clicking directly on
ZFS
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
not exhaustive of what zfs allows (space is missing), but this
can be done easily without problems
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
those names are allowed for zpools
these will fail for now, but it will be fixed in the next commit
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
* don't clone hash keys, just use references
* we don't need a String, stick to Vec<u8> and use
serde_json::to_writer to avoid temporary strings
altogether
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This is a more convenient way to pass along the key when
creating encrypted backups of unprivileged containers in PVE
where the unprivileged user namespace cannot access
`/etc/pve/priv`.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Have a single common function to get the BaseDirectories
instance and a wrapper for `find()` and `place()` which
wrap the error with some context.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
place() is used when creating a file, as it will create
intermediate directories, only use it when actually placing
a new file.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This also replaces the recently introduced --encryption
parameter on the client with a --crypt-mode parameter.
This can be "none", "encrypt" or "sign-only".
Note that this introduces various changes in the API types
which previously did not take the above distinction into
account properly:
Both `BackupContent` and the manifest's `FileInfo`:
lose `encryption: Option<bool>`
gain `crypt_mode: Option<CryptMode>`
Within the backup manifest itself, the "crypt-mode" property
will always be set.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This can be used to explicitly disable encryption even if a
default key file exists in ~/.config.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
instead of checking for '1' or 'true', check that it is there and not
'0' or 'false'. this allows simply using
https://foo:8007/?debug
instead of
https://foo:8007/?debug=1
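roughly this check (helper name assumed, not the actual code):

    // the parameter counts as set when present, unless it is explicitly
    // '0' or 'false'
    fn param_enabled(value: Option<&str>) -> bool {
        match value {
            None => false,
            Some("0") | Some("false") => false,
            Some(_) => true,
        }
    }

    // param_enabled(Some("")) == true, so "?debug" behaves like "?debug=1"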
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
And use it for fixed and dynamic index. Please note that this
changes checksums for fixed indexes, so restoring older backups
will fail now (not backward compatible).
To support incremental backups (where not all chunks are sent to the
server), a new parameter "reuse-csum" is introduced on the
"create_fixed_index" API call. When set and equal to last backups'
checksum, the backup writer clones the data from the last index of this
archive file, and only updates chunks it actually receives.
In incremental mode some checks usually done on closing an index cannot
be made, since they would be inaccurate.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
if *only* data chunks are registered (high chance during incremental
backup), then chunk_count might be one lower than upload_stat.count
because of the zero chunk being unconditionally uploaded but not used.
Thus when subtracting the two, an overflow would occur.
In general, don't let the client make the server panic, instead just set
duplicates to 0.
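the idea in isolation (the concrete counter values are made up for illustration):

    fn main() {
        // e.g. the zero chunk was uploaded unconditionally but never referenced
        let chunk_count: u64 = 9;
        let upload_count: u64 = 10;

        // a plain `chunk_count - upload_count` underflows (and panics in debug
        // builds); clamp the duplicate count to 0 instead
        let duplicates = chunk_count.saturating_sub(upload_count);
        assert_eq!(duplicates, 0);
    }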
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
and if it does, bail, because otherwise we would get an
error on mounting and have a zpool that is not imported
and disks that are in use
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
returns the dir listing of the given filepath of the backup snapshot
the filepath has to be base64 encoded or 'root'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we want to get a string representation of the DirEntryAttribute
like 'f' for file, etc. and since we have such a mapping already
in the CatalogEntryType, use that
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
mostly copied from BufferedDynamicReadAt from proxmox-backup-client
but the reader is wrapped in an Arc in addition to the Mutex
we will use this for local access to a pxar behind a didx file
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We get the path to our executable via a readlink() on
"/proc/self/exe", which appends a " (deleted)" during
package reloads.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
implements AsyncRead as well as Stream for an IndexFile and a store
that implements AsyncReadChunk
we can use this to asyncread or stream the content of a FixedIndex or
DynamicIndex
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we want to save if a file of a backup is encrypted, so that we can
* show that info on the gui
* can later decide if we need to decrypt the backup
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
does the same, except for the manual drop, but that's handled there by
letting the value go out of scope
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
copy_nonoverlapping is basically a memcpy which can also be done
via copy_from_slice which is not unsafe
(copy_from_slice uses copy_nonoverlapping internally)
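a minimal illustration of the swap (not the actual call site):

    fn main() {
        let src = [1u8, 2, 3, 4];
        let mut dst = [0u8; 4];

        // instead of unsafe { ptr::copy_nonoverlapping(src.as_ptr(), dst.as_mut_ptr(), 4) }
        dst.copy_from_slice(&src);

        assert_eq!(dst, src);
    }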
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
These aren't installed and are only used for manual testing,
so there's no reason to force them to be built all the time.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
for now mostly copy/paste from nodes/nodename/tasks
(without the parameters)
but we should replace the 'read_task_list' with a method
that gives us the tasks since some timestamp
so that we can get a longer list of tasks than for the node
(we could of course embed this then in the nodes/node/task api call and
remove this again as long as the api is not stable)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
The download methods used to take the destination by value
and return them again, since this was required when using
combinators before we had `async fn`.
But this is just an ugly left-over now.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
else we start a dynamic writer and never close it, leading to a backup error
this fixes an issue with backing up vm templates
(and possibly vms without disks)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
we check if all dynamic_writers are closed and if the backup contains
any valid files; we can only mark the backup finished after those
checks, else the backup task gets marked as OK, even though it
is not finished and no cleanups run
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
and drop the now unused extract_lists function
this also fixes a bug, where we did not add the datastore to the list at
all when there was no rrd data
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
there is now a 'extract_cached_data' which just returns
the data of the specified field, and an api function that converts
a list of fields to the correct serde value
this way we do not have to create a serde value in rrd/cache.rs
(makes for a better interface)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Report vanished files (instead of erroring out on them),
also only warn about files inaccessible due to permissions
instead of bailing out.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This used to be default-off and was accidentally set to
on-by-default with the pxar crate update.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
returns a list of the datastores and their usages, a list of usages of
the past month (for the gui) and an estimation of when it's full
(using the linear regression)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
provides some basic statistics functions (sum, mean, etc.)
and a function to return the parameters of the linear regression of
two variables
implemented using num_traits to be more flexible for the types
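a simplified least-squares sketch for y = a + b * x using plain f64 (the generic num_traits version is omitted here):

    fn linear_regression(x: &[f64], y: &[f64]) -> Option<(f64, f64)> {
        if x.len() != y.len() || x.is_empty() {
            return None;
        }
        let n = x.len() as f64;
        let mean_x = x.iter().sum::<f64>() / n;
        let mean_y = y.iter().sum::<f64>() / n;

        let mut covar = 0.0;
        let mut var_x = 0.0;
        for (&xi, &yi) in x.iter().zip(y.iter()) {
            covar += (xi - mean_x) * (yi - mean_y);
            var_x += (xi - mean_x) * (xi - mean_x);
        }
        if var_x == 0.0 {
            return None;
        }
        let b = covar / var_x;
        Some((mean_y - b * mean_x, b))
    }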
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this is an interface to simply get the Vec<Option<f64>> out of rrd
without going through serde values
we return a list of timestamps and a HashMap with the lists we could find
(otherwise it is not in the map)
if no lists could be extracted, the time list is also empty
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
disk_usage returned the same values as defined in StorageStatus,
so simply use that
with that we can replace the logic of the datastore status with that
function and also use it for root disk usage of the nodes
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the last chunk does not have to be as big as the chunk_size,
just use the already available 'chunk_end' function which does the
correct thing
this fixes restoration of images whose sizes are not a multiple of
'chunk_size' as well
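the idea behind chunk_end, sketched as a free function (the real signature differs):

    // the end of chunk `idx` is clamped to the image size, so the last
    // chunk may be shorter than `chunk_size`
    fn chunk_end(idx: u64, chunk_size: u64, image_size: u64) -> u64 {
        ((idx + 1) * chunk_size).min(image_size)
    }

    fn main() {
        // 10 MiB image with 4 MiB chunks: the third chunk ends at 10 MiB, not 12 MiB
        let mib: u64 = 1024 * 1024;
        assert_eq!(chunk_end(2, 4 * mib, 10 * mib), 10 * mib);
    }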
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if the last sync job is too far in the past (or there was none at all
for now) we run it at the next iteration, so we want to show that
we now calculate the next_run by using either the real last endtime
as time or 0
then in the frontend, we check if the next_run is < now and show 'pending'
(we do it this way also for replication on pve)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
two things were wrong here:
* the range (x..y) does not include y, so the range
(day_num+1..6) goes from (day_num+1) to 5 (but sunday is 6)
* WeekDays.bits() does not return the 'day_num' of that day, but
the bit value (e.g. 64 for SUNDAY) but was treated as the index of
the day of the week
to fix this, we drop the map to WeekDays and use the 'indices'
directly
this patch makes the test work again
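both mistakes in isolation (the concrete numbering is only for illustration):

    fn main() {
        let day_num = 2u32; // e.g. wednesday, with 0 = monday .. 6 = sunday

        // `day_num + 1..6` stops at 5 and never yields sunday; the exclusive
        // end has to be 7 to cover the whole rest of the week
        let rest_of_week: Vec<u32> = (day_num + 1..7).collect();
        assert_eq!(rest_of_week, vec![3, 4, 5, 6]);

        // and a bitmask value like 1 << 6 (= 64, SUNDAY) is not the index 6
        assert_ne!(1u32 << 6, 6);
    }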
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
If one executes a client command like
# proxmox-backup-client files <snapshot> --repository ...
the files shown already have the '.fidx' or '.blob' file ending, so
if a user just copy-pastes one of those, the client would always add
.blob, and the server would not find that file.
So avoid adding file endings if it is already a known OK one.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
will be extended in a next patch.
Also drop a dead else branch, can never get hit as we always add
.blob as fallback
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
while touching it, make columns and tbar in DataStoreContent.js
declarative members and remove the (now) unnecessary initComponent
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this returns the list of syncjobs with status, as opposed to
config/sync (which is just the config)
also adds an api call where users can run the job manually under
/admin/sync/$ID/run
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
'sync' is used for manually pulling a remote datastore
changing it for a scheduled sync to 'syncjob' so that we can
differentiate between both types of syncs
this also adds a separate task description for it
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The repo URL consists of
* optional userid
* optional host
* datastore name
All three have defined regex or format, but none of that is used, so
for example not all valid datastore names are accepted.
Move definition of the regex over to api2::types where we can access
all required regexes easily.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
since the target side wants this to be a boolean and
serde interprets a None Value as 'null' we have to only
add this when it is really set via cli
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when starting a new task, we do two things to keep track of tasks
(in that order):
* updating the 'active' file with a list of tasks with
'update_active_workers'
* updating the WORKER_TASK_LIST
the second also updates the status of running tasks in the file by
checking if it is still running by checking the WORKER_TASK_LIST
since those two things are not locked, it can happen that
we update the file, and before updating the WORKER_TASK_LIST,
another thread calls update_active_workers and tries to
get the status from the task log, which won't have any data yet
so the status is 'unknown'
(we do not update that status ever, likely for performance reasons,
so we have to fix this here)
by switching the order of the two operations, we make sure that only
tasks reach the 'active' file which are inserted in the WORKER_TASK_LIST
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
to avoid having arbitrary characters in the config (e.g. newlines)
note that this breaks existing configs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
with a catch: the password is in the struct but we do not want to return it
via the api, so we only 'serialize' it when the string is not empty
(this can only happen when the format is not checked by us, iow.
when it's returned from the api) and set it manually to ""
when we return remotes from the api
this way we can still use the type but do not return the password
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since we want to set the owner of the acl config to 'root'
which is only possible when using a protected api call
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we added a userid attribute to the User struct, but missed that we
created the default user without that attribute via the json! macro,
which led to a runtime panic on the deserialization
by using the struct directly, such errors will be caught by the compiler
in the future
with this change, we can remove the serde_json import here
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
using a handlebars instance in ApiConfig, to cache the templates
as long as possible, this is currently ok, as the index template
can only change when the whole package changes
if we split this in the future, we have to trigger a reload of
the daemon on gui package upgrade (so that the template gets reloaded)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We don't want to leak the full configuration to users with limited access permissions.
Please use the api2::config::datastore api to get the full configuration.
Make `flistxattr()` return a `ListXAttr` helper which
provides an iterator over `&CStr`.
This exposes the property that xattr names are a
zero-terminated string without simply being an opaque
"byte vector". Using &[u8] as a type here is too lax.
Also let `fgetxattr` take a `CStr`. While this may be a
burden on the caller, we usually already have
zero-terminated strings on the call site. Currently we only
use this method coming from `flistxattr` after all.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This is basically a rewrite of the current logic for navigating the catalog,
but in addition allows to follow symlinks.
Following symlinks introduces the issue that generation of canonical paths
(needed in the actual pxar archive) is more complex, as symlinks have to be
resolved and loops avoided.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
'clear-selected' allows to clear all the match patterns from the list of
patterns for a subsequent restore.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
'list-selected' now shows the filenames matching the patterns for a restore
instead of the patterns themselves.
The patterns can be displayed by passing the '--pattern' flag.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
This also eliminates repeated calls to readlink in fuse, which occur when the
preallocated buffer to store the symlink target path is too small.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
entry() allows looking up the position where an entry belongs and updating/inserting
it in the HashMap more efficiently than get_mut() and insert().
Details: https://gankra.github.io/blah/hashbrown-insert/
In addition, use the struct LinkedList and remove the outdated code.
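the general pattern, reduced to a toy example (key and value types here are made up):

    use std::collections::HashMap;

    fn main() {
        let mut counts: HashMap<&str, u64> = HashMap::new();

        // one lookup: update the value if the key exists, insert it otherwise
        *counts.entry("example").or_insert(0) += 1;

        // the get_mut()/insert() equivalent hashes the key up to twice
        match counts.get_mut("example") {
            Some(v) => *v += 1,
            None => {
                counts.insert("example", 1);
            }
        }

        assert_eq!(counts["example"], 2);
    }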
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In addition to the .pxarexclude files, glob match patterns can be passed to pxar
also via cli parameters.
Therefore the warning is rephrased to be more ambiguous.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
- do not "double"-block_in_place() (it may not be nested)
- do not call block_in_place() in non-worker threads
is_in_tokio() isn't sufficient, we need to actually know
that we're in a worker-thread, so we do this by remembering
that we're blocking.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
tokio now has Handle::try_current() allowing us to
generally check for a tokio runtime even if spawned by
someone else
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
We want to avoid pbs if possible and also avoid placing internal
binaries, not intended for human direct use, in /bin or /sbin paths.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Modeled after the one from PVE, but using rust instead of perl for
resolving the nodename and writing to /etc/issue
Behavior differs a bit. We write all non-loopback addresses to this
file, as the gui accepts connections from them all, so limiting it to
the first one is not really sensible.
Further, an error during resolving, or getting only loopback addresses,
won't write out an empty /etc/issue file, but a note about the error at
the place where the address would be displayed.
Named it "pbsbanner", not "proxmox-backup-banner" as it's rather an
internal tool anyway and mirrors pvebanner, pmgbanner
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Two or more successive slashes should be allowed and treated as a single slash.
We also do not treat two successive slashes at the beginning of a path any
different.
Details are found here:
https://pubs.opengroup.org/onlinepubs/000095399/basedefs/xbd_chap04.html#tag_04_11
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Context::find_goodbye_entry() is removed and incorporated into the lookup
callback in order to take advantage of the entry_cache and since it is only used
inside this callback.
All entries read on lookup are cached.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
By storing the payload start offset in the `DirectoryEntry` and passing this
information to `Decoder::read()`, the payload can be read directly and a repeated
re-reading of the entry information is avoided.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
listxattr must only return the name list, no extended attribute values.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
ACLs are stored separately in the pxar archive. This implements the functionality
needed to read the ACLs and return them as extended attributes in the getxattr
callback.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
These helpers are used to construct the extended attribute values from
the ACLs stored in the pxar archive.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
They are not only needed by the pxar::sequential_decoder but also for the fuse
xattr impl, so it makes more sense to have them there.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
The goodbye table of directory entries is cached in a LRU cache to speed up
subsequent accesses.
This is especially important for directories with many entries, as then the
readdirplus callback is called repeatedly because of the limited reply buffer
size.
`DirectoryEntry`s are cached for subsequent access in their own LRU cache,
independent of the goodbye tables.
In order to avoid borrow conflicts, the `Context` provides a fn as_mut_refs
as well as a fn run_with_context_refs.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
By passing `&DirectoryEntry` to stat, the function interface is simplified.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This will return a mutable reference just like get_mut, but on a cache miss
it will get and insert the missing value via the fetch method provided via the
Cacher trait.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
last_successful_backup: Returns the time of the last successful backup
group_path: Returns the absolute path for a backup_group
snapshot_path: Returns the absolute path for a backup_dir
It's a bit dangerous as it points to all the saved backups, so they
would be seemingly lost after updating the path.
Follow our logic from other products, e.g. in PVE we do not allow to
update the backing path/location of a storage either for similar
reasons.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Implements a cache with least recently used cache replacement policy.
Internally the state is tracked by a HashMap (for fast access) and a doubly
linked list (for the access order).
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The -sys, -tools and -api crates have now been merged into
the proxmox crate directly. Only macro crates are separate
(but still reexported by the proxmox crate in their
designated locations).
When we need to depend on "parts" of the crate later on
we'll just have to use features.
The reason is mostly that these modules had
inter-dependencies which really make them not independent
enough to be their own crates.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This allows to read the target path of a symbolic link in the
Decoder::read_directory_entry() function and stores it in the DirectoryEntry.
By this the Decoder::read_link() function becomes obsolete and is therefore
removed.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
By ambiguously using the Decoder::read_directory_entry() the code is simplified
and reading of the DirectoryEntry is concentrated into Context::run_in_context().
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Previously it was discriminated based on the entry mode.
For directories, the inode was the offset of the corresponding
goodbye tail mark while for all others it was the offset of the filename.
By simply using the start offset as calculated from the corresponding
goodbye table entry (which yields the archive offset of the filename),
the code is simplified and the more ambiguous read_directory_entry()
function can be used.
The disadvantage of this approach is the need to keep track of the
start and end offsets for each entry, as the end offset is needed in
order to access the goodbye table of directory entries.
The root node still has to be treated specially, as its inode is 1 as per fuse
definition and it has no filename as per the pxar file format definition.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Use Decoder::read_directory_entry() instead of Decoder::attributes() as this
already returns the needed DirectoryEntry.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
By not implementing readdir but only readdirplus, the FUSE_CAP_READDIRPLUS flag
is set while the FUSE_CAP_READDIRPLUS_AUTO flag is not set.
Thereby the kernel will issue only readdirplus calls.
Documentation at:
https://libfuse.github.io/doxygen/fuse-3_88_80_2include_2fuse__common_8h.html#a9b90333ad08d0e1c2ed0134d9305ee87
As the expensive part of accessing and reading the attributes is seeking and
decoding each directory entry, it is useful to force readdirplus calls.
By this, a struct `EntryParam` is returned for each entry, thereby avoiding a
subsequent lookup call.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
add a helper to perform some basic checks on password prompts.
- verification (asks for a 2nd time)
- check length
also use the new helper where password input in tty is taken to reduce
duplicate code.
this helper should be used when creating keys, changing passphrases etc.
note: this helper can be extended later on to provide better checks for
password strength.
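A minimal sketch of what such a helper could look like, assuming a caller-supplied prompt function (the names here are hypothetical; the real helper handles the tty directly):
    // prompt: caller-supplied function that asks the user and returns the input
    fn get_new_password<F>(mut prompt: F, min_len: usize) -> Result<String, String>
    where
        F: FnMut(&str) -> String,
    {
        let first = prompt("password: ");
        if first.len() < min_len {
            return Err(format!("password too short, minimum {} characters", min_len));
        }
        // verification: ask a second time and compare
        let second = prompt("verify password: ");
        if first != second {
            return Err("passwords do not match".to_string());
        }
        Ok(first)
    }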
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
readdirplus returns the entries together with their `EntryParam`, so subsequent
lookups for each of the entries are avoided.
In order to reduce code duplication, the code for filling the reply buffer is
moved into a macro.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Remove the current caching of attrs and goodbye tables as it is broken anyway.
This will be replaced with a LRU cache.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
By moving the HashMap into `Context`, the use of lazy_static as well as the
additional Mutex can be avoided (`Context` is already guarded by a Mutex).
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
By reading and including xattrs and payload size in struct `DirectoryEntry`,
the tuple of return types is avoided and the code is simpler.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Limit the total number of entries and therefore the approximate memory
consumption, instead of doing this on a per-directory basis as was done previously.
This makes more sense as it limits not only the width but also the depth of the
directory tree.
Further, instead of hardcoding this value, allow passing it as an
additional optional parameter 'entries-max'.
By this, creating an archive with directories containing a large number of
entries becomes possible.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
If, during creation of the archive, files/dirs with lacking read permissions are
encountered, the user is shown a warning and the archive is created without
including the file/dir.
Previously this resulted in an error and the archive creation failed.
In order to implement this also for the .pxarexclude files, the Error type of
MatchPattern::from_file() and MatchPattern::from_line() was adapted accordingly.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
If nodes are excluded by feature flags, they must not appear in the goodbye table.
This is fixed by continuing with the next entry in the for loop.
Further, the relative path buffer is now popped in order to correctly display the path.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
for now, forbid all control characters[0] in the comment value. The
section config writer cannot cope with newlines in the value: it
writes them out literally, allowing "injection" or breaking the whole
config.
In the webinterface, also use a textfield, not a textarea.
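A minimal sketch of the kind of check this implies (illustrative only, not the actual verifier):
    // reject any control character (newlines included), since the section config
    // writer would emit them literally and corrupt the config
    fn verify_comment(comment: &str) -> Result<(), String> {
        if comment.chars().any(char::is_control) {
            return Err("comment must not contain control characters".to_string());
        }
        Ok(())
    }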
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The find matching was incorrectly performed starting from the parent directory
and not, as intended, from the entries of the parent directory.
Further, the match pattern passed from the catalog shell contains the absolute
path of the search entry point as prefix, so find() must always start from the
archive root. This is because the match pattern has to be stored in the selected
list for a subsequent restore-selected command in the shell.
All matching paths are shown as absolute paths, including all contents of a
matched subdir, equal to what would be restored by the given pattern.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Similar to PVE and PMG, for quick access when one has the basic
webinterface open anyway. Should move to the "proxmoxHelpButton" once
we have an onlineHelp mapping to the docs.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
some fitting rules copied over from PVE's ext6-pve.css file.
simply place it in the css subfolder where the proxmox-backup-gui.js
file is hosted and add a "css/" alias for that directory; the
formatter then uses the right content type for it.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Implements the find command, which allows finding and selecting files for
subsequent restore.
Files selected for restore are now stored in a Vec instead of a HashSet.
This is needed since, instead of the full paths for each file, selected files are
now identified by a list of match patterns, where ordering matters.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
find() iterates over the file tree and matches each node against a list of match
patterns provided as arguments.
For each matching node, a callback function is called with the current directory
stack.
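Roughly, the shape is the following; this is a toy sketch over a simple in-memory tree with plain name matching, while the real code walks the catalog and uses MatchPattern:
    struct Node {
        name: String,
        children: Vec<Node>,
    }

    // walk the tree depth-first; for every node whose name matches one of the
    // patterns, invoke the callback with the current directory stack
    fn find<F: FnMut(&[String])>(
        node: &Node,
        stack: &mut Vec<String>,
        patterns: &[String],
        callback: &mut F,
    ) {
        stack.push(node.name.clone());
        if patterns.iter().any(|p| p == &node.name) {
            callback(stack.as_slice());
        }
        for child in &node.children {
            find(child, stack, patterns, callback);
        }
        stack.pop();
    }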
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The MatchPattern impl made heavy use of copies and was therefore inefficient in
terms of memory management.
This patch introduces MatchPatternSlice as a struct to avoid copies while providing
the same pattern matching functionality.
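Conceptually, the change boils down to adding a borrowing view next to the owned type (fields simplified and hypothetical, not the actual definitions):
    // owned pattern, as before
    pub struct MatchPattern {
        pattern: Vec<u8>,
        match_positive: bool,
    }

    // borrowing view used for the actual matching, so no copies are needed
    pub struct MatchPatternSlice<'a> {
        pattern: &'a [u8],
        match_positive: bool,
    }

    impl MatchPattern {
        pub fn as_slice(&self) -> MatchPatternSlice<'_> {
            MatchPatternSlice {
                pattern: &self.pattern,
                match_positive: self.match_positive,
            }
        }
    }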
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
futures-0.3 has a futures::future::abortable() function
which does exactly the same: it returns an Abortable future together with
an AbortHandle providing an abort() method.
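For reference, a small usage example of that futures-0.3 API:
    use futures::executor::block_on;
    use futures::future::abortable;

    fn main() {
        // wrap any future; the handle can cancel it from elsewhere
        let (fut, handle) = abortable(async { 42 });
        handle.abort();
        // once aborted, awaiting the future yields Err(Aborted)
        assert!(block_on(fut).is_err());
    }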
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
We used to await all the futures via the runtime's shutdown
method, which doesn't exist anymore, so await all the join
handles instead.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
else we get an error from this call: using a 16 byte (128 bit) nonce
is currently only supported by the still-in-draft
XChaCha20-Poly1305, not by the current default specified by RFC 7539[0],
which uses a 12 byte (96 bit) nonce.
Fixes the following error:
> thread 'main' panicked at 'called `Result::unwrap()` on an `Err`
> value: ErrorStack([])', src/libcore/result.rs:1165:5
[0]: https://tools.ietf.org/html/rfc7539
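An illustrative check with the rust-openssl API, assuming the RFC 7539 ChaCha20-Poly1305 cipher is the one in use here:
    use openssl::symm::Cipher;

    fn main() {
        let cipher = Cipher::chacha20_poly1305();
        // RFC 7539 ChaCha20-Poly1305 expects a 12 byte (96 bit) nonce
        assert_eq!(cipher.iv_len(), Some(12));
    }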
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The match_filename() functions in sequential_decoder and encoder are moved to be
static functions of MatchPattern.
This allows reusing the code in the catalog find implementation as well.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Cache not only the goodbye table for the last directory but for each opened
directory.
The opendir fuse callback will fill the cache with the goodbye table and
releasedir will remove it from the cache.
This should reduce the number of chunks fetched from the server in some cases.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The goodbye table is laid out as a binary search tree based on the hash, so use
this to be more efficient when looking up a hash in the table for directories
with a large number of entries.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
This major refactoring of the catalog based shell utilizes the new API macro and
the API Schema as well as rustyline instead of the old GNU readline C API.
The code now has these 3 main components:
* The `Shell` which handles the readline loop via rustyline.
* The shell functions defined via the API macro.
* The `Context` which holds catalog and decoder instances.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
This reverts commit a9aa52e6a8.
We do not want to use macros for the backup protocol for now,
and it also crashes backup tasks for some unknown reason.
The api macro now supports hyphens in parameter names and
referencing externally defined `Schema`s, so here's an
example.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
The decoder's read must check if the file is a hardlink and, if so, read the data
from the corresponding offset.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The returned filename should be that of the file at the given offset, not that of
the file the hardlink points to.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
`match` is a bit more readable than the if-else chains,
also replace
space_chars.iter().any(|s| c == *s)
with
space_chars.contains(&c)
which is also more readable.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Changing the way shell commands are defined and parsed makes it more
straightforward to extend the current functionality.
The readline input is parsed based on the provided command definition and the
given parameters and options are passed to a command specific callback function.
In addition, the provided command definition including its description is used
to generate a help string to display.
The help command shows a list of all supported commands or the help string for
the provided command.
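A very rough sketch of the idea (illustrative only, not the actual definitions used in the shell):
    use std::collections::HashMap;

    type CommandFn = fn(&[&str]) -> Result<(), String>;

    struct CommandDef {
        description: &'static str, // also used to generate the help output
        callback: CommandFn,       // command specific callback
    }

    fn help_command(_args: &[&str]) -> Result<(), String> {
        // would print the list of commands or a specific command's help
        Ok(())
    }

    fn command_definitions() -> HashMap<&'static str, CommandDef> {
        let mut map = HashMap::new();
        map.insert(
            "help",
            CommandDef {
                description: "list all commands or show the help for one command",
                callback: help_command,
            },
        );
        map
    }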
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
And use an extra function set_callback() to configure that.
Also rewrite pxar/fuse.rs and implement a generic Session (will get
further cleanups with next patches).
In order to provide the context needed for tab completion via the readline
callback, the needed mut ref is passed via a thread local storage key.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
This is needed in order to explicitly clone the values when needed in the
catalog shell implementation.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Create the full target path and do not fail if an intermediate directory does not
exist.
This is needed in order to restore multiple archives via the catalog, where the
target should further contain each archive name as a subdir.
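In standard-library terms this behaviour amounts to something like the following (illustration only; the actual code may build the path differently):
    use std::path::Path;

    fn ensure_target(path: &Path) -> std::io::Result<()> {
        // creates all missing intermediate directories; succeeds if they already exist
        std::fs::create_dir_all(path)
    }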
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
proxmox::tools now has a Uuid module using the native
libuuid.
Adds build dependency: libuuid1 (which is a Pre-Depends of
util-linux, so always installed anyway).
Drops uuid + 16 more crate dependencies.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Allows looking up an entry in a directory based on the provided `DirectoryEntry`.
This is needed to navigate the filesystem based on `DirectoryEntry`s and is similar
to the find_goodbye_entry() function in src/pxar/fuse.rs.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
As it turns out the original implementation was correct and the start in
`DirectoryEntry` points to the `PxarEntry` and not as wrongly stated to the
filename.
This reverts the incorrect code and adds comments to the fields clarifying this.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
`Decoder::restore()` calls the `SequentialDecoder::restore()` which expects to
encounter a `PxarEntry` at first. But the start of `DirectoryEntry` points to the
filename (except for the root dir), so skip over it.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Counts the bytes written by the CatalogBlobWriter in order to obtain the
stream position, which is needed to get the offset for referencing catalog items.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The expensive call to Decoder::read_directory_entry() can be omitted as
Decoder::attributes() returns all the information the fuse response needs.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
This exposes the option to pass a list of exclude MatchPatterns via the
'--exclude' option.
The list is encoded as the file '.pxarexclude-cli' in the archive's root directory.
If such a file is present in the filesystem, it is skipped and not included in
the archive in order to avoid conflicting information.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
This provides the functionality needed to encode MatchPatterns passed on the cli
in the root directory.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
and add Session::from_decoder() in order to be able to create a fuse session
with a `Decoder` given as argument.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
This patch introduces `Context` to hold the decoder, ino_offset and caches for
the attributes and the goodbye table.
By caching, certain callbacks can be handled without the need to read additional
data via the decoder, which improves performance.
The searching of the goodbye table is refactored as well, avoiding recursive
function calls in case of a hash collision.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The hash of the filename in the goodbye table items allows quickly comparing
against a hashed filename.
Unfortunately, a matching hash is no guarantee for matching filenames, as hash
collisions are possible.
This patch fixes such possible collisions by further checking the filenames once
a matching hash has been found.
This introduces no significant extra cost (except for the filename comparison)
for cases with matching hashes, as the lookup call has to seek and read the file
attributes (including the filename) anyway.
In cases with a hash collision, the next matching item is read and treated
analogously (which means we need at least one extra seek).
As collisions should not be that frequent, this should be an acceptable penalty.
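The gist of the fix, in simplified form (types and names are illustrative, not the actual goodbye table structures):
    // a matching hash is only a candidate; the filename must be compared as well
    fn find_entry<'a>(
        entries: &'a [(u64, String)], // (filename hash, filename)
        hash: u64,
        filename: &str,
    ) -> Option<&'a (u64, String)> {
        entries
            .iter()
            .filter(|(h, _)| *h == hash)                 // candidates with matching hash
            .find(|(_, name)| name.as_str() == filename) // resolve collisions by name
    }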
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The functionality of stat is split into smaller sub-functions, which allows
reusing them more flexibly, as the code flow is similar but not always the same.
By this, the ugly and incorrect re-setting of the i-node in the lookup callback
function is avoided.
The correct i-node is now calculated beforehand and stat simply creates a
`libc::stat` struct from the provided parameters.
Also, this fixes incorrect i-node assignments in the readdir callback function.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The lookup call checks if the given filename is found in the directory referenced
by the i-node by calculating the filename's hash and looking it up in the
directory's goodbye table.
If found, the entry's parameters are returned.
In order to be able to lookup the parent offset by a given file offset in the
readdir callback, this also stores the corresponding values in a HashMap.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In order to read the contents of the goodbye table while keeping the
functionality of list_dir in place as is.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
... and thereby allow it to read a single directory entry based on the
start and end archive offsets.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
... as they are not needed with the latest iteration of the fuse callback
function impl (which never made it into the repos).
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The previous implementation simply skipped over `size` bytes, which is not
correct as the size also includes the header.
By relying on `SequentialDecoder`'s read_filename function, this is taken care of
correctly, plus some more integrity checks are performed.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Also, removes an unused println statement in the decoder callback function and
fixes a typo.
Further, use ABI compatible Option<&T> for FFI to avoid use of raw pointers.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Implements the functions attributes, open, read, read_link and get_dir
to be used by the fuse implementation, which uses file offsets within the archive
as inodes to reference the archive's items.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
By this it is possible to read and check just the first u64 of the corresponding
structs in order to identify the items.
This is needed for the fuse implementation in order to get entries based on the
archive offset, used as inode.
Directories are referenced by the offset to the goodbye tail while other items
are referenced by the offset of the filename followed by the entry.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
We can use .await now, which means the whole connection
state machine doesn't need to be typed out as "types"
anymore, so, at least until hyper_openssl gets updated to
proper dependencies, let's drop it.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
The HttpsConnector will use this. Instead of implementing a
specialized MaybeTlsStream, this is simply a generic "either
this or that kind of Async Read/Write type".
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
The writer.write_all() call accessed data marked as
undefined for valgrind. Note that we shouldn't write out
uninitialized memory for security reasons anyway.
(note that vec::undefined already did zero-initialize the
data, but also marked it as undefined for valgrind when
compiling with the valgrind feature)
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
The type was sized properly but the number was still wrong,
fixed this.
TODO! Once unions with non-Copy values are stable make this
a `union { full: [u8; 4096], data: TheActualHeader }`;
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
AFAICT we have no use for it anymore, its api entry points
are gone. If we do end up needing something from it, it's
still in the git history anyway. (And about two thirds of it
can be made much less awkward by utilizing async-await
anyway, so no love lost there...)
Moved the chunker back into src/backup/chunker.rs
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
... and use std::mem::MaybeUninit or proxmox::tools::vec::uninitialized() instead.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
/var/run is considered deprecated, and lintian complains about it, for
instance in systemd unit files...
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
By taking ownership it is easier to move the decoder into another struct,
e.g. into a session context in fuse.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Allows mounting an archive to a specified mountpoint via the cli.
Once the archive is mounted, the process is sent to the background.
By passing the --verbose flag, the process is kept in the foreground and
debug output is provided.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
This adds the basic code in order to create a fuse session and mount an archive.
It adds libfuse3-3 as runtime dependency and libfuse3-dev as build dependency.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The flag CA_FORMAT_SHA512_256 is used to switch between sha512 and sha256 for
calculating the digest in casync.
As we use sha256, we can get rid of this flag for now.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The original name PxarExcludePattern makes no sense anymore, as the patterns are
also used to match filenames during restore of the archive.
Therefore, exclude_pattern.rs is moved to match_pattern.rs and PxarExcludePattern
is renamed to MatchPattern.
Further, the MatchTypes are now declared as None, Positive, Negative,
PartialPositive or PartialNegative, as this makes more sense and seems more
readable.
Positive matches are those without a '!' prefix, negative ones those with a '!'
prefix.
This also makes the filename matching in the encoder/decoder more intuitive, and
the logic was adapted accordingly.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Partial extraction of an archive with a glob pattern, e.g. '**/*.conf', led to
the unexpected behaviour of restoring all partially matched directories (in this
example all of them).
This patch fixes this unexpected behaviour by only restoring those directories
where the directory or one of its sub-items fully matched the pattern and should
therefore be restored.
To achieve this behaviour, directory metadata is pushed onto a stack and restored
on demand.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In order to restore only directories when some of their content fully matched
a match pattern on partial restores, these directories and their metadata are
pushed onto this buffer and only restored successively on demand.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
This structure contains all the attributes, allowing them to be easily stored
within e.g. a dir buffer.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
By borrowing these objects we preserve the functionality but make sure
that ownership doesn't change, avoiding problems when contained within other
structs such as e.g. a buffer storing these attributes.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
By default, restoring an archive will fail if files with the same filename
already exist in the target directory.
By setting the allow_existing_dirs flag, the restore will not fail if an
existing directory is encountered.
The metadata (permissions, acls, ...) of the existing directory will be set
to that from the archive.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
To improve usability it is now possible to directly pass paths or match patterns
as arguments to pxar extract to partially restore an archive.
The patterns provided via the CLI are appended to the ones read from the file
given by the --files-from option, so they take priority over those.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Allows partially restoring an archive by passing match patterns to the restore
function.
The whole restore is performed sequentially, therefore the whole archive has to
be read.
By wrapping the RawFd in an Option, it can be controlled whether the corresponding
part is restored (in case of Some(fd)) or whether the reader reads over it
without restoring (in case of None).
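Schematically, and greatly simplified (names are illustrative, not the actual restore code):
    use std::os::unix::io::RawFd;

    // Some(fd): this part of the archive is restored into the given file descriptor.
    // None: the payload is still read (to advance the decoder) but nothing is written.
    fn handle_payload(target: Option<RawFd>, _data: &[u8]) {
        match target {
            Some(_fd) => {
                // write the decoded payload to the file behind _fd
            }
            None => {
                // skip: consume the payload without restoring it
            }
        }
    }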
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
match_exclude_pattern() does not need a '&mut self' reference to the encoder,
so move it out of the impl.
Further, this patch contains some naming and formatting cosmetics.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
dir_count was used to track the number of directory entries to store in the
archive and bail if the maximum is exceeded.
As the number of entries can equally be obtained from the list of filenames to
include, use that instead.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Partial matches only make sense for directories; files are always leaves of the
tree. Take this into account in order to avoid restoring files which only
matched the front of a match pattern.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In order to improve usability, the target for archive extraction will be the
current working directory by default.
A different target can be provided via the optional --target <PATH> parameter.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
This splits the functionality of restore_sequential() into several smaller
functions in order to allow reusing them when restoring by seeking based on
the goodbye table offsets.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Commit cd7dc87903 introduced the special treatment
for .pxarexclude files when stored in the archive.
The incorrect placement of a code snippet from this patch led to an incorrect
offset and size being stored in the goodbye table.
This fix moves the start to the correct position, restoring the previously
correct behaviour.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
We can now use iter::from_fn(), which makes for much nicer
logic. The only thing better is going to be when we can use
generators with `yield`.
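For reference, std::iter::from_fn() builds an iterator from a closure, for example:
    fn main() {
        let mut count = 0;
        // yields 1, 2, 3 and then stops
        let counter = std::iter::from_fn(move || {
            count += 1;
            if count <= 3 { Some(count) } else { None }
        });
        assert_eq!(counter.collect::<Vec<_>>(), vec![1, 2, 3]);
    }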
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
.pxarexclude files allow excluding or including parts of a subtree by matching
with a glob pattern. The globs are matched according to fnmatch semantics.
In addition, '**' can be used to match multiple directories within the path.
The order of the entries matters, as later ones win over previous ones.
As the .pxarexclude files can be placed at any node of the directory hierarchy,
this implies that matching child entries win over parent entries.
The only exception to this behaviour is when a parent entry has already fully
matched the path, thereby excluding the child entries which would otherwise
match.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>