verifying a single snapshot is now never skipped because of a recent verification
verifying a group will now re-verify after 29 days, to be consistent
with the 'All OK (old)' display
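The cutoff logic, as a minimal sketch (names are hypothetical, the real
check lives in the verify/scheduling code):

    // hypothetical helper: a group gets re-verified once the last verification
    // is older than 29 days, matching the 'All OK (old)' cutoff in the UI
    const OUTDATED_AFTER_SECONDS: i64 = 29 * 24 * 60 * 60;

    fn verification_is_outdated(last_verify_epoch: i64, now_epoch: i64) -> bool {
        now_epoch - last_verify_epoch > OUTDATED_AFTER_SECONDS
    }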
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Avoids latency for restore VMs that finish fast but are not ready yet in
the first round, while not checking too often for slower ones; in other
words, we assume that the start-up time distribution looks like a
chi-squared Χ² with k=3.
With a delay of 25 ms * round we get at most 45 rounds, totalling
25.875 s of delay with at most 1.125 s between two rounds, which still
provides an ok reaction time.
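Roughly, the readiness polling then looks like this sketch (the
`is_ready` probe and the loop placement are assumptions, only the
25 ms * round delay is taken from above):

    use std::time::Duration;

    // hypothetical probe: returns true once the restore VM answers
    fn wait_for_ready(mut is_ready: impl FnMut() -> bool) -> bool {
        for round in 1u64..=45 {
            if is_ready() {
                return true;
            }
            // linearly growing delay: 25 ms * round, i.e. at most 1.125 s
            // between two checks and 25.875 s in total over 45 rounds
            std::thread::sleep(Duration::from_millis(25 * round));
        }
        false
    }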
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the timeout for connecting may be much shorter if we get a response
(which doesn't need to be Ok, e.g., "Connection refused"), so instead
of trying a fixed 60 times, let's try for 25 s, independent of how many
attempts that ends up being.
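As a sketch, the loop now bounds total elapsed time instead of the
attempt count (connection target and error handling are simplified
placeholders):

    use std::net::TcpStream;
    use std::time::{Duration, Instant};

    // keep retrying for 25 s of wall-clock time, however many attempts that
    // turns out to be; a quick "Connection refused" just means we retry sooner
    fn connect_with_deadline(addr: &str) -> std::io::Result<TcpStream> {
        let deadline = Instant::now() + Duration::from_secs(25);
        loop {
            match TcpStream::connect(addr) {
                Ok(stream) => return Ok(stream),
                Err(err) if Instant::now() >= deadline => return Err(err),
                Err(_) => std::thread::sleep(Duration::from_millis(100)),
            }
        }
    }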
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
glibc's malloc has a misguided heuristic to detect transient allocations
that results in allocation sizes below 32 MiB never using mmap.
That in turn means that those relatively big allocations land on the heap,
where cleaning up and returning memory to the OS is harder to do and easily
blocked by long-living, small allocations at the top (end) of the heap.
Observing the malloc size distribution in a file-level backup run:
@size:
[0] 14 | |
[1] 25214 |@@@@@ |
[2, 4) 9090 |@ |
[4, 8) 12987 |@@ |
[8, 16) 93453 |@@@@@@@@@@@@@@@@@@@@ |
[16, 32) 30255 |@@@@@@ |
[32, 64) 237445 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[64, 128) 32692 |@@@@@@@ |
[128, 256) 22296 |@@@@ |
[256, 512) 16177 |@@@ |
[512, 1K) 5139 |@ |
[1K, 2K) 3352 | |
[2K, 4K) 214 | |
[4K, 8K) 1568 | |
[8K, 16K) 95 | |
[16K, 32K) 3457 | |
[32K, 64K) 3175 | |
[64K, 128K) 161 | |
[128K, 256K) 453 | |
[256K, 512K) 93 | |
[512K, 1M) 74 | |
[1M, 2M) 774 | |
[2M, 4M) 319 | |
[4M, 8M) 700 | |
[8M, 16M) 93 | |
[16M, 32M) 18 | |
We see that all allocations will be on the heap, and that while most
allocations are small, the relatively few big ones still make up most of
the RSS and, if blocked from being released back to the OS, result in a
much higher peak and average memory usage for the program than actually
required.
Avoiding the "dynamic" mmap-threshold increase algorithm and fixing the
threshold at the original default of 128 KiB reduces the RSS size by a
factor of 10-20 when running backups. With memory mappings, other
mappings or the heap can never block freeing the memory fully back to
the OS.
But the drawback of using mmap is more wasted space for unaligned or
small allocation sizes, and the fact that the kernel allegedly zeroes
out the data before giving it to user space. The former doesn't really
matter for us when using mmap only for allocations bigger than 128 KiB,
and the latter is a trade-off: using 10 to 20 times less memory brings
its own performance improvement possibilities for the whole system,
after all ;-)
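A sketch of pinning the threshold from Rust via the libc crate; the
exact call site during start-up is an assumption:

    // fix the mmap threshold at the old 128 KiB default instead of letting
    // glibc's heuristic raise it dynamically (up to 32 MiB), so big
    // allocations go through mmap and can always be returned to the OS
    fn fixup_mmap_threshold() {
        unsafe {
            libc::mallopt(libc::M_MMAP_THRESHOLD, 128 * 1024);
        }
    }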
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
[ Thomas: added to comment & commit message + extra-empty-line fixes ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this fixes the memory leak in the cache: we only freed the pointers in
the map/list, not the underlying chunks
moves the 'clear' implementation out of the trait bounds so that Drop
can reuse it
this is used e.g. for file downloads from a pxar archive
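The shape of the fix as a stand-alone sketch; the types and names are
hypothetical stand-ins for the real cache:

    use std::collections::HashMap;

    // hypothetical cache owning chunks behind raw pointers
    struct ChunkCache {
        map: HashMap<u64, *mut Vec<u8>>,
    }

    impl ChunkCache {
        // free the chunks themselves, not just the map entries; kept out of
        // the trait so that Drop below can reuse it
        fn clear(&mut self) {
            for (_, ptr) in self.map.drain() {
                // SAFETY: pointers were created via Box::into_raw and are
                // exclusively owned by the cache
                unsafe { drop(Box::from_raw(ptr)) };
            }
        }
    }

    impl Drop for ChunkCache {
        fn drop(&mut self) {
            self.clear();
        }
    }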
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
rustdoc lints detected that two external hyperlinks were not clickable.
The shortcut used is only available for internal links; otherwise one
needs to use the Markdown syntax, so either [Text](URL) or <URL>.
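For illustration only (not the actual affected doc comments):

    /// The shortcut form only works for intra-doc links, e.g. [SomeType].
    ///
    /// External links need Markdown syntax to be clickable, so either
    /// <https://pbs.proxmox.com> or [the docs](https://pbs.proxmox.com).
    pub struct SomeType;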
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
[ T: commit message text width, mention markdown ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
these need to be checked (and are) via libssl anyway before persisting,
and newer versions might contain new ciphers/variants/... (and things
like @STRENGTH or @SECLEVEL=n were missing).
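Conceptually the validation now just asks libssl whether it accepts the
string; a sketch using the openssl crate, with a hypothetical helper
name:

    use openssl::error::ErrorStack;
    use openssl::ssl::{SslContext, SslMethod};

    // let libssl decide whether a cipher-list string (including things like
    // @STRENGTH or @SECLEVEL=n) is valid, instead of matching it against a
    // hand-maintained list
    fn validate_cipher_list(ciphers: &str) -> Result<(), ErrorStack> {
        let mut builder = SslContext::builder(SslMethod::tls())?;
        builder.set_cipher_list(ciphers)
    }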
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
there isn't really a concept of 'nodes' in PBS (yet) anyway - and if
there ever is, it needs to be handled by the rest-server / specific API
endpoints (like in PVE), and not by the schema.
this allows dropping proxmox-sys from pbs-api-types (and thus nix and
some other transitive deps as well).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
it's the only thing requiring openssl in pbs-api-types, and it's only
used by the client to pretty-print the 'master' key, which is
client-specific.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Too much wasted space otherwise.
Also rename "Options" to "Others"; while it's not _that_ much better,
it's slightly more intuitive than config -> options (which has some
redundancy)...
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
also fix up the missing emptyText for the fingerprint field (adapted
from PVE's PBS storage addition) and the code-style in surrounding
areas a bit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Recently, ZFS removed the pool-global io stats from
/proc/spl/kstat/zfs/POOL/io with no replacement.
To gather stats about the datastores, now access the objset-specific
entries there. To be able to do that efficiently, cache a map of
dataset <-> objset ids, so that we do not have to parse all files each
time.
We update the cache each time we try to get the info for a dataset for
which we do not have a mapping yet.
We cannot update it on datastore add/remove, since that happens in the
proxmox-backup daemon, while we need the info here in
proxmox-backup-proxy.
Sadly, with this we lose the io wait metric, but it seems that this is
no longer tracked in zfs at all, so there is nothing we can do about
that.
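A sketch of the caching idea (names and the objset file format handling
are assumptions):

    use std::collections::HashMap;
    use std::fs;

    // hypothetical cache: dataset name -> objset kstat file name, refreshed
    // only when a dataset is not yet known
    struct ObjsetCache {
        map: HashMap<String, String>,
    }

    impl ObjsetCache {
        fn objset_file(&mut self, pool: &str, dataset: &str) -> Option<String> {
            if !self.map.contains_key(dataset) {
                self.refresh(pool); // parse the objset-* files once on a miss
            }
            self.map.get(dataset).cloned()
        }

        fn refresh(&mut self, pool: &str) {
            let Ok(entries) = fs::read_dir(format!("/proc/spl/kstat/zfs/{pool}")) else {
                return;
            };
            for entry in entries.flatten() {
                let file_name = entry.file_name().to_string_lossy().into_owned();
                if !file_name.starts_with("objset-") {
                    continue;
                }
                let Ok(content) = fs::read_to_string(entry.path()) else { continue };
                // assumption: each objset file has a "dataset_name <type> <name>" line
                if let Some(name) = content
                    .lines()
                    .find(|line| line.starts_with("dataset_name"))
                    .and_then(|line| line.split_whitespace().last())
                {
                    self.map.insert(name.to_string(), file_name);
                }
            }
        }
    }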
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the part number cannot go above 255 at the moment, but if it ever gets
bumped to a bigger integer type, this boundary wouldn't cause a compile
error. explicitly checking for an overflowing u8 makes this a bit more
future-proof, and shuts up clippy as well ;)
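The idea as a small stand-alone sketch (hypothetical helper, the
surrounding code is omitted):

    // explicit check that the part number still fits into a u8, independent
    // of whatever integer type the caller uses for counting parts
    fn part_number_byte(part_number: usize) -> Result<u8, String> {
        u8::try_from(part_number)
            .map_err(|_| format!("part number {part_number} exceeds the maximum of 255"))
    }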
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>