by packing the auth into a RwLock and starting a background
future that renews the ticket every 15 minutes.

we still use the BroadcastFuture for the first ticket, and only
once that has finished do we start the scheduled renewal future.

we have to store an abort handle for the renewal future and abort it
when the http client is dropped, so we do not keep requesting new
tickets forever.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
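Roughly, the pattern looks like the following minimal sketch (names such
as AuthInfo and renew_ticket are made up for illustration; construction
and the BroadcastFuture handling of the first ticket are omitted):

    use std::sync::Arc;
    use std::time::Duration;

    use futures::future::{AbortHandle, Abortable};
    use tokio::sync::RwLock;

    struct AuthInfo {
        ticket: String,
    }

    struct HttpClient {
        auth: Arc<RwLock<AuthInfo>>,
        renewal_abort: AbortHandle,
    }

    impl HttpClient {
        // spawn the background renewal loop and hand back its abort handle
        fn spawn_renewal(auth: Arc<RwLock<AuthInfo>>) -> AbortHandle {
            let (handle, registration) = AbortHandle::new_pair();
            let task = async move {
                loop {
                    tokio::time::sleep(Duration::from_secs(15 * 60)).await;
                    // renew_ticket() stands in for the real ticket request
                    if let Ok(new_ticket) = renew_ticket().await {
                        auth.write().await.ticket = new_ticket;
                    }
                }
            };
            tokio::spawn(Abortable::new(task, registration));
            handle
        }
    }

    impl Drop for HttpClient {
        fn drop(&mut self) {
            // stop the renewal loop so we do not request tickets forever
            self.renewal_abort.abort();
        }
    }

    async fn renew_ticket() -> Result<String, anyhow::Error> {
        Ok("new-ticket".to_string()) // placeholder
    }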
like we do for PVE. this is visible on the dashboard, and caused a 403
on each update, which bothers me when looking at the dev console.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
reStructuredText's comment syntax, i.e. everything it cannot parse is
a comment, is a real PITA!
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
A client can omit uploading chunks in the "known_chunks" list; those
then also won't be written on the server side. Check that all chunks
mentioned in the index but not uploaded actually exist, and report an
error if they don't, instead of marking a potentially broken backup
as "successful".
This is only important if the base snapshot references corrupted chunks,
but has not been negatively verified. Also, it is important to only
verify this at the end, *after* all index writers are closed, since only
then can it be guaranteed that no GC will sweep referenced chunks away.
If a chunk is found missing, also mark the previous backup with a
verification failure, since we know the missing chunk has to be referenced
in it (the only way it could have been inserted into known_chunks with
checked=false). This has the benefit of automatically doing a
full-upload backup if the user attempts to retry after seeing the new
error, instead of requiring a manual verify or forget.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
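A rough sketch of that final check (the names known_chunks and
chunk_exists are hypothetical; the real backup environment also records
which previous snapshot to flag with the verification failure):

    use std::collections::HashMap;

    use anyhow::{bail, Error};

    type ChunkDigest = [u8; 32];

    /// `checked == false` means the chunk was only referenced via the
    /// previous snapshot's index and never uploaded in this session.
    fn verify_known_chunks(
        known_chunks: &HashMap<ChunkDigest, bool>,
        chunk_exists: impl Fn(&ChunkDigest) -> bool,
    ) -> Result<(), Error> {
        for (digest, &checked) in known_chunks {
            if !checked && !chunk_exists(digest) {
                // a missing chunk can only have entered known_chunks via
                // the previous backup, so that one is broken as well
                bail!("backup references missing chunk {:02x?}", digest);
            }
        }
        Ok(())
    }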
Do not allow clients to reuse chunks from the previous backup if it has
a failed validation result. This would result in a new "successful"
backup that potentially references broken chunks.
If the previous backup has not been verified, assume it is fine and
continue on.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
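The decision boils down to something like this sketch (the VerifyState
enum here is illustrative, not the server's actual manifest types):

    enum VerifyState {
        Ok,
        Failed,
    }

    fn may_reuse_previous_chunks(previous: Option<VerifyState>) -> bool {
        match previous {
            // never base a new backup on a snapshot known to be broken
            Some(VerifyState::Failed) => false,
            // verified ok, or not verified at all: assume it is fine
            Some(VerifyState::Ok) | None => true,
        }
    }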
- remove chrono dependency
- depend on proxmox 0.3.8
- remove epoch_now, epoch_now_u64 and epoch_now_f64
- remove tm_editor (moved to proxmox crate)
- use new helpers from proxmox 0.3.8:
  * epoch_i64 and epoch_f64
  * parse_rfc3339
  * epoch_to_rfc3339_utc
  * strftime_local
- BackupDir changes:
  * store epoch and rfc3339 string instead of DateTime
  * backup_time_to_string now returns a Result
  * remove unnecessary TryFrom<(BackupGroup, i64)> for BackupDir
- DynamicIndexHeader: change ctime to i64
- FixedIndexHeader: change ctime to i64
since converting from an i64 epoch timestamp to a DateTime is not always
possible. previously, passing an invalid backup-time from client to
server (or vice versa) panicked the corresponding tokio task. now we get
a proper error message including the invalid timestamp.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
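A usage sketch of the new helpers, assuming the proxmox 0.3.8 module
path and signatures (they may differ slightly):

    use proxmox::tools::time::{epoch_i64, epoch_to_rfc3339_utc, parse_rfc3339};

    fn roundtrip() -> Result<(), anyhow::Error> {
        // current time as a plain i64 epoch, no chrono involved
        let now: i64 = epoch_i64();

        // conversions can fail, so an invalid backup-time surfaces as a
        // proper error instead of a panic in the handling tokio task
        let rfc3339: String = epoch_to_rfc3339_utc(now)?;
        let parsed: i64 = parse_rfc3339(&rfc3339)?;
        assert_eq!(now, parsed);

        Ok(())
    }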
otherwise operations like the catalog shell panic when viewing pxar
archives containing such entries, e.g. with an mtime very far in the
future.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
by either printing the original, out-of-range timestamp as-is, or
bailing with a proper error message instead of panicking.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
even if it can't be handled by chrono. silently replacing it with
epoch 0 is confusing.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
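The fallback idea as a sketch, assuming strftime_local takes a format
string and an i64 epoch (format_mtime is a made-up helper name):

    // format the timestamp if possible, otherwise print the raw epoch
    // value as-is instead of panicking or substituting epoch 0
    fn format_mtime(epoch: i64) -> String {
        match proxmox::tools::time::strftime_local("%Y-%m-%d %H:%M:%S", epoch) {
            Ok(s) => s,
            Err(_) => format!("{}", epoch),
        }
    }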
and move the info defined in 'Schedules' there.
the explanation of calendar events is inspired by the systemd.time
manpage and the PVE docs (the examples especially are mostly
copied/adapted from there)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
*/x is valid syntax for us, but not for systemd, so to avoid confusing
users write it the way systemd would accept it.
also, a timespec must have at least hours and minutes.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
avoid hardcoding the width in the docs themselves, so that other
render outputs can choose another size.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It's not all perfect (yet), but way cleaner and simpler to use than
the Sphinx one.
Handle the scrolling for the fixed sidebar with custom code and make
some other slight adjustments.
The main issue for now is that the "Developer Appendix" is always
shown in the navigation tree, but we only include that toctree for
devbuilds...
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
else we get the default of 16k, which is quite low for our use case.
this improves the TLS upload benchmark speed by about 30-40% for me.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
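A sketch assuming the 16k default refers to the HTTP/2 maximum frame
size (the protocol default is 16384 bytes) and that the relevant knob is
the h2 crate's client builder setting; the 4M value is illustrative:

    use bytes::Bytes;
    use tokio::net::TcpStream;

    async fn connect_h2(
        stream: TcpStream,
    ) -> Result<h2::client::SendRequest<Bytes>, h2::Error> {
        let mut builder = h2::client::Builder::new();
        // raise the frame size well above the 16k protocol default
        builder.max_frame_size(4 * 1024 * 1024);

        let (send_request, connection) = builder.handshake::<_, Bytes>(stream).await?;

        // drive the connection in the background
        tokio::spawn(async move {
            let _ = connection.await;
        });

        Ok(send_request)
    }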
fixes the error "manifest does not contain file 'X.pxar'" that occurs
when trying to mount a pxar archive with 'proxmox-backup-client mount'.
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
if we have a stale backup without a manifest, we do not count
the remaining files in the backup dir anymore, but this means
we now have to check here whether there are really any encrypted files.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
similar to the other fix: if we do not set the buffer size manually,
we get better performance for high-latency connections.

restore benchmark from f.gruenbichler:

  no delay, without patch: ~50MB/s
  no delay, with patch: ~50MB/s
  25ms delay, without patch: ~11MB/s
  25ms delay, with patch: ~50MB/s

my own restore benchmark:

  no delay, without patch: ~1.5GiB/s
  no delay, with patch: ~1.5GiB/s
  25ms delay, without patch: 30MiB/s
  25ms delay, with patch: ~950MiB/s
for some more details about those benchmarks see
https://lists.proxmox.com/pipermail/pbs-devel/2020-September/000600.html
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by leaving the buffer sizes at their defaults, we get much better TCP
performance for high-latency links.

throughput is still impacted by latency, but much less so when leaving
the sizes at the default.

the disadvantage is slightly higher memory usage on the server
(details below). a sketch of the general idea follows after the
benchmark data.
my local benchmarks (proxmox-backup-client benchmark):

pbs client:
  PVE Host
  Epyc 7351P (16core/32thread)
  64GB Memory

pbs server:
  VM on Host
  1 Socket, 4 Cores (Host CPU type)
  4GB Memory

average of 3 runs, rounded to MB/s:
                   | no delay | 1ms     | 5ms     | 10ms    | 25ms    |
without this patch | 230MB/s  | 55MB/s  | 13MB/s  | 7MB/s   | 3MB/s   |
with this patch    | 293MB/s  | 293MB/s | 249MB/s | 241MB/s | 104MB/s |

memory usage (resident memory) of proxmox-backup-proxy:

                   | peak during benchmarks | after benchmarks |
without this patch | 144MB                  | 100MB            |
with this patch    | 145MB                  | 130MB            |
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
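The underlying idea of the buffer-size changes above, sketched with the
socket2 crate (illustrative only; the commented-out sizes are made up,
the actual change simply stops setting the sizes explicitly):

    use socket2::{Domain, Socket, Type};

    fn make_socket() -> std::io::Result<Socket> {
        let socket = Socket::new(Domain::IPV4, Type::STREAM, None)?;

        // explicitly setting SO_RCVBUF/SO_SNDBUF like this disables the
        // kernel's TCP buffer autotuning and caps the window that can be
        // reached on high-latency links:
        //
        //     socket.set_recv_buffer_size(64 * 1024)?;
        //     socket.set_send_buffer_size(64 * 1024)?;
        //
        // leaving the sizes at their defaults lets the kernel grow them
        // as needed, at the cost of somewhat higher memory usage.

        Ok(socket)
    }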