We can probably combine the base permission + owner check, but for
now add explicit ones upfront so that the change stays simpler, as
only one thing is done at a time.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
allow listing any namespace the user has privileges on, and allow
creating and deleting namespaces if the user has modify permissions
on the parent namespace.
Creation is only allowed if the parent NS already exists.
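A minimal sketch of that rule, with made-up names and privilege bits
(not the actual proxmox-backup permission API), assuming anyhow for
errors:

```rust
use anyhow::{bail, Error};

// Illustrative only: the privilege constant and helper are assumptions.
const PRIV_DATASTORE_MODIFY: u64 = 1 << 2;

/// Hypothetical check: creating a namespace requires modify privileges
/// on the parent, and the parent must already exist.
fn check_ns_creation(parent_privs: u64, parent_exists: bool, name: &str) -> Result<(), Error> {
    if parent_privs & PRIV_DATASTORE_MODIFY == 0 {
        bail!("missing modify permission on parent namespace");
    }
    if !parent_exists {
        bail!("cannot create '{name}': parent namespace does not exist");
    }
    Ok(())
}
```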
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
on depth == 0 we only yield the anchor namespace; this simplifies
usage in the API, as for e.g. list-snapshots a depth of 0 means only
the snapshots from the passed namespace, nothing below.
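To make the depth semantics concrete, here is a simplified,
self-contained sketch (flat string paths instead of the real
namespace iterator types):

```rust
// depth == 0 yields only the anchor namespace itself, nothing below it.
fn list_namespaces(all: &[&str], anchor: &str, max_depth: usize) -> Vec<String> {
    let mut result = Vec::new();
    for ns in all {
        if *ns == anchor {
            // the anchor is always yielded
            result.push(ns.to_string());
        } else if let Some(rest) = ns.strip_prefix(anchor).and_then(|r| r.strip_prefix('/')) {
            // children are only yielded while within `max_depth` levels below the anchor
            if rest.split('/').count() <= max_depth {
                result.push(ns.to_string());
            }
        }
    }
    result
}

fn main() {
    let all = ["a", "a/b", "a/b/c"];
    assert_eq!(list_namespaces(&all, "a", 0), vec!["a"]);        // only the anchor
    assert_eq!(list_namespaces(&all, "a", 1), vec!["a", "a/b"]); // anchor + direct children
}
```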
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[ T: renamed from NAMESPACE_RECURSION_DEPTH_SCHEMA & moved from
jobs to datastore ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
given a namespace, a source prefix and a target prefix, this helper
strips the source prefix and replaces it with the target one
(erroring out if the prefix doesn't match).
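Roughly like this (a string-based sketch with a made-up function
name; the real helper operates on the namespace type):

```rust
use anyhow::{bail, Error};

// Strip `source_prefix` from `ns` and replace it with `target_prefix`,
// erroring out if the namespace is not below the source prefix.
fn map_prefix(ns: &str, source_prefix: &str, target_prefix: &str) -> Result<String, Error> {
    match ns.strip_prefix(source_prefix) {
        Some(rest) => Ok(format!("{target_prefix}{rest}")),
        None => bail!("namespace '{ns}' does not start with prefix '{source_prefix}'"),
    }
}

fn main() -> Result<(), Error> {
    assert_eq!(map_prefix("a/b/c", "a/b", "x/y")?, "x/y/c");
    assert!(map_prefix("a/b/c", "z", "x").is_err());
    Ok(())
}
```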
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
these are the ones for non-#[api] methods; also fill in the
namespace in prune operations
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The behavior on "any snapshot was protected" isn't ideal yet, as we
then do not clean up any (sub-)namespace, even if some of them were
completely cleaned of groups & snapshots. But that isn't easy to do
with our current depth-first pre-order iterator behavior, and it's
also not completely wrong either; the user can re-do the removal on
the sub-namespaces, so leave that for later.
Should get moved to a datastore::BackupNamespace type once/if we get
one
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Make it easier by adding a helper that accepts either a group or a
directory.
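One way such a helper can be shaped (an illustrative sketch; the type
and function names are assumptions, not the actual proxmox-backup
API):

```rust
// Stand-in types for illustration only.
struct BackupGroup { ty: String, id: String }
struct BackupDir { group: BackupGroup, time: i64 }

/// Hypothetical wrapper so a single helper can accept either one.
enum GroupOrDir<'a> {
    Group(&'a BackupGroup),
    Dir(&'a BackupDir),
}

impl<'a> From<&'a BackupGroup> for GroupOrDir<'a> {
    fn from(g: &'a BackupGroup) -> Self { Self::Group(g) }
}
impl<'a> From<&'a BackupDir> for GroupOrDir<'a> {
    fn from(d: &'a BackupDir) -> Self { Self::Dir(d) }
}

/// The helper itself needs only one implementation for both cases.
fn relative_path<'a>(item: impl Into<GroupOrDir<'a>>) -> String {
    match item.into() {
        GroupOrDir::Group(g) => format!("{}/{}", g.ty, g.id),
        GroupOrDir::Dir(d) => format!("{}/{}/{}", d.group.ty, d.group.id, d.time),
    }
}
```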
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The idea is to have namespaces in a datastore to allow grouping and
namespacing backups from different (but similarly trusted) sources,
e.g., different PVE clusters, geo sites, use-cases or company
service-branches, without separating the underlying
deduplication domain and thus blowing up data and (GC/verify)
resource usage.
To avoid namespace ID clashes with anything existing or future use
cases, use an intermediate `ns` level at *each* depth.
The current implementation treats that as internal and thus hides it
from the API, i.e., the namespace path the user passes along or gets
returned won't include the `ns` levels; they do not matter there at
all.
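For illustration, the mapping between the user-facing path and the
on-disk layout could be sketched like this (the function name is made
up; the real logic lives in the namespace type):

```rust
use std::path::PathBuf;

// Build the on-disk path from the user-visible namespace components,
// inserting the internal `ns` level at each depth.
fn namespace_relative_path(components: &[&str]) -> PathBuf {
    let mut path = PathBuf::new();
    for component in components {
        path.push("ns");      // internal level, hidden from the API
        path.push(component); // user-visible namespace name
    }
    path
}

fn main() {
    // the user passes "a/b"; on disk this becomes "ns/a/ns/b"
    assert_eq!(namespace_relative_path(&["a", "b"]), PathBuf::from("ns/a/ns/b"));
}
```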
The max-depth of 8 is chosen with the following in mind:
- assume that end-users often already sit at a deeper level of a
  hierarchy; most often they'll start at level one or two, as the
  higher ones are used by the seller/admin to namespace different
  users/groups, so lower than four would be very limiting for a lot
  of target use cases
- what's more, a PBS could be used as a huge second-level archive in
  a big company, so one could imagine a namespace structure like:
  /<state>/<intra-state-location>/<datacenter>/<company-branch>/<workload-type>/<service-type>/
  e.g.: /us/east-coast/dc12345/financial/report-storage/cassandra/
  that's six levels one can imagine for a reasonable use case,
  leaving some room for the ones harder to imagine ;-)
- on the other hand, we do not want to allow unlimited levels, as we
  have request parameter limits and deep nesting can create other
  issues as well (e.g., stack exhaustion), so by doubling the minimum
  level of 4 (1st point) we get room to breathe even for the more odd
  (or huge) use cases (2nd point)
- a per-level length of 32 (-1 due to the separator) is enough for
  telling names, making the lives of users and admins simpler, while
  not blowing up the total parameter length at the max depth of 8
- 8 * 32 = 256, which is a nice buffer size
Many thanks to Wolfgang for all the great work on the type
implementation and for assisting greatly with the design.
Co-authored-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Co-authored-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and issue a warning. We can do this because we know an empty chunk
cannot be valid, and we (presumably) have a valid chunk in memory.
Having empty chunks on disk is currently possible when PBS crashes
while the rename of the chunk was already flushed to disk but the
actual data was not.
If it's not empty but there is a size mismatch, return an error.
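Condensed, the decision looks roughly like this (a simplified sketch,
not the real chunk store code):

```rust
use anyhow::{bail, Error};

// Decide what to do when a chunk file already exists on disk.
// Returns Ok(true) if the caller should overwrite the chunk.
fn check_existing_chunk(on_disk_len: u64, expected_len: u64) -> Result<bool, Error> {
    if on_disk_len == 0 {
        // an empty chunk can never be valid: warn and overwrite it with
        // the (presumably valid) chunk we still have in memory
        eprintln!("WARN: found empty chunk on disk, overwriting it");
        return Ok(true);
    }
    if on_disk_len != expected_len {
        // non-empty but wrong size: something is seriously off, error out
        bail!("chunk size mismatch: {on_disk_len} on disk, expected {expected_len}");
    }
    Ok(false) // chunk already present and plausible, nothing to do
}
```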
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
used_datastores returned the 'target', but in the full_restore_worker
we interpreted it as the source and searched for a mapping
(which we then locked).
since we cannot return a HashSet of Arc<T> (missing Hash trait on DataStore),
we now have a map of source -> target
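In other words, something along these lines (a simplified sketch;
`DataStore` here is just a stand-in type):

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Stand-in: the real DataStore does not implement Hash, so
// Arc<DataStore> cannot be collected into a HashSet.
struct DataStore;

// Map the *source* datastore name to the (locked) *target* datastore,
// so the restore worker can look up targets by the source it sees.
fn used_datastores(mappings: &[(String, Arc<DataStore>)]) -> HashMap<String, Arc<DataStore>> {
    mappings
        .iter()
        .map(|(source, target)| (source.clone(), Arc::clone(target)))
        .collect()
}
```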
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
thanks to commit 70142e607dda43fc778f39d52dc7bb3bba088cd3 from the
proxmox repo's proxmox-http crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and remove already-fixed FIXMEs.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[ T: squash in cargo fmt fixup for some trailing ws ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
pull_store is the entry point used by other code; the rest does not
need to be visible at all.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
else this might remove groups which are not part of the pull scope. note
that setting/using remove_vanished already checks the required privs
earlier.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
On reload the old process hands over to the new process, but needs
to keep running until all its worker tasks are finished to avoid
breaking an in-progress action like an xterm.js web shell or a
backup creation/restore.
During that wait time the receiving channel was already closed, but
the TCP socket accept listener was still left active by mistake.
That, paired with `SO_REUSEPORT` being set on the underlying socket,
made the kernel choose either the old or the new process for new
incoming connections; both were still listening for them after all,
and reuse-port + multiple processes is often used as a load-balancing
mechanism.
As the old proxy accepted connections but didn't process them anymore,
one could observe sporadic connection failures on any API call, well,
any new connection to the proxy, depending on which process got it
assigned.
The fix is to stop accepting new connections once we shut down, so
poll the shutdown_future too during accept and just exit the
accept loop on shutdown.
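Conceptually, the accept loop then looks something like this (a
simplified tokio-based sketch, not the actual proxy code):

```rust
use tokio::net::TcpListener;
use tokio::sync::watch;

// Poll the shutdown signal alongside accept() and leave the loop once
// shutdown is requested, instead of accepting new connections forever.
async fn accept_loop(listener: TcpListener, mut shutdown: watch::Receiver<bool>) {
    loop {
        tokio::select! {
            // shutdown requested: stop accepting; the old process only
            // finishes its already running worker tasks
            _ = shutdown.changed() => break,
            res = listener.accept() => match res {
                Ok((conn, peer)) => {
                    // hand the connection off to the HTTP stack here
                    let _ = (conn, peer);
                }
                Err(err) => eprintln!("accept failed: {err}"),
            },
        }
    }
}
```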
Note: Neither this part of the code nor other parts that could
influence it were changed at all in recent times, so it's still
unresolved why this pops up only now.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Co-authored-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
[ T: add more (root cause) info and reword a bit ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>