instead of getting the 'peer_addr()' from the socket.
The advantage is that this value is always available, so we can drop the
mapping from Result -> Option, the testing for None, and a test case.
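A sketch of the simplification, with hypothetical function names and assuming
the peer address is handed over together with the socket (as tokio's
listener.accept() already does):

```rust
use std::net::SocketAddr;
use tokio::net::TcpStream;

// before: fallible lookup on the socket, mapped from Result to Option,
// with None handling (and a test case for it) downstream
fn client_ip_before(socket: &TcpStream) -> Option<SocketAddr> {
    socket.peer_addr().ok()
}

// after: the accept loop already yields the peer address, so it is
// always present and the Option goes away
fn client_ip_after(peer: SocketAddr) -> SocketAddr {
    peer
}
```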
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
glibc's malloc has a misguided heuristic to detect transient allocations
which results in allocation sizes below 32 MiB never using mmap.
That in turn means that those relatively big allocations end up on the heap,
where cleanup and returning memory to the OS is harder to do and easily
blocked by long-living, small allocations at the top (end) of the heap.
Observing the malloc size distribution in a file-level backup run:
@size:
[0] 14 | |
[1] 25214 |@@@@@ |
[2, 4) 9090 |@ |
[4, 8) 12987 |@@ |
[8, 16) 93453 |@@@@@@@@@@@@@@@@@@@@ |
[16, 32) 30255 |@@@@@@ |
[32, 64) 237445 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[64, 128) 32692 |@@@@@@@ |
[128, 256) 22296 |@@@@ |
[256, 512) 16177 |@@@ |
[512, 1K) 5139 |@ |
[1K, 2K) 3352 | |
[2K, 4K) 214 | |
[4K, 8K) 1568 | |
[8K, 16K) 95 | |
[16K, 32K) 3457 | |
[32K, 64K) 3175 | |
[64K, 128K) 161 | |
[128K, 256K) 453 | |
[256K, 512K) 93 | |
[512K, 1M) 74 | |
[1M, 2M) 774 | |
[2M, 4M) 319 | |
[4M, 8M) 700 | |
[8M, 16M) 93 | |
[16M, 32M) 18 | |
We see that all allocations will be on the heap, and that while most
allocations are small, the relatively few big ones still make up most of
the RSS and, if blocked from being released back to the OS, result in much
higher peak and average usage for the program than actually required.
Avoiding the "dynamic" mmap-threshold increase algorithm and fixing the
threshold at the original default of 128 KiB reduces the RSS size by a
factor of 10-20 when running backups. With memory mappings, neither other
mappings nor the heap can ever block returning the memory fully to the OS.
But the drawback of using mmap is more wasted space for unaligned or small
allocation sizes, and the fact that the kernel zeros out the data before
handing it to user space. The former doesn't really matter for us when
using mmap only for allocations bigger than 128 KiB, and the latter is a
trade-off: using 10 to 20 times less memory brings its own
performance-improvement possibilities for the whole system, after all ;-)
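A minimal sketch of fixing the threshold via glibc's mallopt(3), assuming the
standard libc crate bindings (the actual call site in the daemons may differ):

```rust
fn main() {
    // fix the mmap threshold at the original 128 KiB default, so glibc never
    // "learns" a bigger threshold; allocations above it go through mmap and
    // are returned to the OS in full on free()
    unsafe {
        libc::mallopt(libc::M_MMAP_THRESHOLD, 128 * 1024);
    }

    // ... the program then allocates as usual ...
}
```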
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
[ Thomas: added to comment & commit message + extra-empty-line fixes ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
rustdoc lints detected that two external hyperlinks were not
clickable.
The shortcut syntax used is only available for internal (intra-doc) links;
for external ones one needs to use Markdown syntax, so either [Text](URL)
or <URL>.
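For illustration, a doc comment showing both variants (the URL is just an
example):

```rust
/// Not clickable, and flagged by rustdoc's bare-URL lint:
/// https://pve.proxmox.com/wiki
///
/// Clickable, using Markdown syntax:
/// [Proxmox VE wiki](https://pve.proxmox.com/wiki) or <https://pve.proxmox.com/wiki>
pub struct Example;
```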
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
[ T: commit message text width, mention markdown ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Recently, ZFS removed the pool-global io stats from
/proc/spl/kstat/zfs/POOL/io with no replacement.
To gather stats about the datastores, now access the objset-specific
entries there. To make that efficient, cache a map of
dataset <-> objset ids, so that we do not have to parse all files each time.
We update the cache each time we try to get the info for a dataset for
which we do not have a mapping yet.
We cannot update it on datastore add/remove, since that happens in the
proxmox-backup daemon, while we need the info here in proxmox-backup-proxy.
Sadly we lose the io wait metric with this, but it seems that this is no
longer tracked in ZFS at all, so there is nothing we can do about that.
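A sketch of the cache refresh, assuming the objset kstat files keep the
`dataset_name` entry in the usual name/type/data column layout (paths and
parsing details may differ from the actual implementation):

```rust
use std::collections::HashMap;
use std::fs::{self, File};
use std::io::{BufRead, BufReader};
use std::path::PathBuf;

// scan /proc/spl/kstat/zfs/<pool>/objset-0x* once and map each dataset name
// to its objset stats file, so later lookups avoid re-parsing everything
fn scan_objset_map(pool: &str) -> std::io::Result<HashMap<String, PathBuf>> {
    let mut map = HashMap::new();
    for entry in fs::read_dir(format!("/proc/spl/kstat/zfs/{pool}"))? {
        let path = entry?.path();
        let file_name = path.file_name().and_then(|n| n.to_str()).unwrap_or("");
        if !file_name.starts_with("objset-") {
            continue;
        }
        for line in BufReader::new(File::open(&path)?).lines() {
            let line = line?;
            if let Some(rest) = line.strip_prefix("dataset_name") {
                // the last column holds the dataset name
                if let Some(dataset) = rest.split_whitespace().last() {
                    map.insert(dataset.to_string(), path.clone());
                }
                break;
            }
        }
    }
    Ok(map)
}
```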
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the part number cannot go above 255 at the moment, but if it ever gets
bumped to a bigger integer type, this boundary wouldn't cause a
compile error. explicitly checking for an overflowing u8 makes this a bit
more future-proof, and shuts up clippy as well ;)
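A minimal sketch of the idea (hypothetical helper, not the actual call site):

```rust
// even if the counter is later widened beyond u8, this conversion keeps
// failing at runtime instead of the check silently becoming dead code
fn checked_part_number(next: usize) -> Result<u8, String> {
    u8::try_from(next).map_err(|_| format!("part number {next} exceeds 255"))
}
```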
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
some operations (e.g. garbage collection/restore/etc.) are very
read-intensive on the chunks, and having atime=on with relatime=off (the
ZFS default) turns those into write-intensive operations too. Additionally,
'ext4' defaults to relatime, so also change the default for api-created
zpools.
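A hypothetical sketch of the changed default for api-created pools (pool
name, device, and surrounding options are placeholders):

```rust
use std::process::Command;

// create the pool with relatime=on set on its root dataset, mirroring the
// ext4 default, so read-heavy chunk operations don't turn write-heavy
fn zpool_create_cmd(pool: &str, device: &str) -> Command {
    let mut cmd = Command::new("zpool");
    cmd.args(["create", "-O", "relatime=on", pool, device]);
    cmd
}
```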
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
It is not necessary, so avoid it. The client can now be used
with multiple threads (without using a Mutex).
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
the 'utc' flag is now contained in the event itself and not given
as a flag to 'compute_next_event' anymore
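The call-site change, sketched as comments (function names from the commit,
exact signatures assumed):

```rust
// before: the caller had to pass the flag separately
//     compute_next_event(&event, last_run, utc)?;
//
// after: the parsed calendar event carries its own 'utc' flag
//     compute_next_event(&event, last_run)?;
```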
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
- imported pbs-api-types/src/common_regex.rs from the old proxmox crate
- use the hex crate to generate/parse hex digests
- removed all references to the proxmox crate (use proxmox-sys and
  proxmox-serde instead)
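A minimal sketch of the hex crate usage replacing the old helpers:

```rust
// generate a hex digest and parse it back with the hex crate
fn hex_roundtrip(digest: &[u8; 32]) -> Result<Vec<u8>, hex::FromHexError> {
    let hex_digest = hex::encode(digest); // 64 lowercase hex characters
    hex::decode(&hex_digest)              // back to raw bytes
}
```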
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
commit c42a54795d introduced a bug by
using fp.to_string(). Replace this with fp.signature(), which correctly
returns the full fingerprint instead of the short version.
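The fix, sketched as comments (fp being the Fingerprint value in question):

```rust
// before (buggy): Display yields the shortened fingerprint form
//     let fp_string = fp.to_string();
//
// after: signature() returns the full fingerprint
//     let fp_string = fp.signature();
```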
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
by returning default data in case the challenge data is not parseable.
this allows a new challenge to be started for the userid in question
without manual cleanup.
currently this can be triggered if an ongoing challenge created with
webauthn-rs 0.2.5 is stored in /run and an attempt is made to read it
post-upgrade.
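A minimal sketch of the fallback, with a placeholder struct standing in for
the real challenge data:

```rust
use serde::Deserialize;

// placeholder type: the actual stored challenge layout differs
#[derive(Default, Deserialize)]
struct ChallengeData {
    challenges: Vec<String>,
}

// unparseable (e.g. pre-upgrade) data yields default data instead of an
// error, so a new challenge can be started without manual cleanup
fn load_challenge_data(raw: &str) -> ChallengeData {
    serde_json::from_str(raw).unwrap_or_default()
}
```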
Reported-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
with current proxmox-tfa this became a hard error, since origin and rp
are not both Strings anymore.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
we even use that for basically all the related schema names; "groups"
alone is just not that telling, i.e., "groups" what?
While `group-filter` is, due to its additive nature, not the best possible
name for passing multiple arguments on the CLI (the web-ui can present
this in a more UX-friendly way anyway), the possible confusion about
whether the filters act like AND vs. OR can be documented, and even if a
user is confused they are still safe, as more gets synced rather than
less. Also, the original param name wasn't really _that_ much better in
that regard.
Dietmar also suggested using the singular for the CLI option; while there
can be more than one filter, they are passed by repeating the option, each
with a single filter.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and add a completion handler to complete the backup groups
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this fixes bug #3533: a user can now back up a single datastore to
multiple tape media pools in parallel, e.g. vms on one pool, cts on
another.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and convert existing (manually created/edited) jobs to the previous
default value of 'true'. the GUI has always set this value and defaults
to 'false'.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
like for manual pulls, but persisted in the sync job config and visible
in the relevant GUI parts.
GUI is read-only for now (and defaults to no filtering on creation), as
this is a rather advanced feature that requires a complex GUI to be
user-friendly (regex-freeform, type-combobox, remote group scanning +
selector with additional freeform input).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
without requiring workarounds based on ownership and limited
visibility/access.
if a group filter is set, remove_vanished will only consider filtered
groups for removal, to prevent concurrent disjunct filters from trashing
each other's synced groups.
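A sketch of the removal guard with hypothetical types (the real matching
covers the type/group/regex filter variants):

```rust
// only vanished groups that match this job's own filter are candidates for
// removal, so concurrent jobs with disjunct filters leave each other's
// synced groups alone
fn vanished_candidates<'a, F>(
    local_groups: &'a [String],
    remote_groups: &[String],
    filter: F,
) -> Vec<&'a String>
where
    F: Fn(&str) -> bool,
{
    local_groups
        .iter()
        .filter(|g| !remote_groups.contains(g)) // vanished on the remote
        .filter(|g| filter(g.as_str()))         // and covered by our filter
        .collect()
}
```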
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this is basically the sync job config without the ID and with some fields
already converted, plus a convenient helper to generate the HTTP client
from it.
Suggested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
instead of `GroupListItem`s. we convert it anyway, so might as well do
that at the start.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
for getting/setting the protected flag for snapshots (akin to notes)
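A minimal sketch, assuming the flag is backed by a marker file inside the
snapshot directory (file name assumed):

```rust
use std::fs;
use std::io::{self, ErrorKind};
use std::path::Path;

fn is_protected(snapshot_dir: &Path) -> bool {
    snapshot_dir.join(".protected").exists()
}

fn set_protected(snapshot_dir: &Path, protected: bool) -> io::Result<()> {
    let marker = snapshot_dir.join(".protected");
    if protected {
        fs::File::create(&marker).map(|_| ())
    } else {
        match fs::remove_file(&marker) {
            Err(err) if err.kind() == ErrorKind::NotFound => Ok(()), // already off
            other => other,
        }
    }
}
```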
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
when a group could not be completely removed due to a protected snapshot,
throw an error
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
and log if a vanished group could not be completely deleted because it
contains protected snapshots
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
adds the info that a snapshot is protected to:
* snapshot list
* manual pruning (also dry-run)
* prune jobs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
some custom ACME endpoints do not have a TOS; interpret this as
'the user has accepted the TOS', like we do for PVE.
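Sketched as a plain decision helper (hypothetical names):

```rust
// a directory without a TOS URL has nothing to accept, so treat it as if
// the user had already accepted the TOS
fn needs_tos_acceptance(tos_url: Option<&str>, already_accepted: bool) -> bool {
    match tos_url {
        Some(_) => !already_accepted,
        None => false,
    }
}
```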
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
our export code can handle the tape being inside the drive, so unloading
it first has no benefit; it even makes the export slower, since we first
unload the tape into its original slot and then move it to an
import/export slot.
so drop the code that unloads the tape from the drive, and let the export
code itself handle that.
change the 'eject' into a 'rewind' and add a comment on why we do that
first.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>