this check is not perfect since there are often multiple device
nodes per drive/changer, but the scan api should always return the
same one, so for an api user this should be enough
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
so that an api user can get the drives belonging to a changer
without having to parse the config listing themselves
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
similar to the changers, create a listing at /tape/drive and put
the specific api calls below that
move the scan api call up one level
remove the status info from the config listing
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
so that an api user can see which drive belongs to which drivenum of a
changer, for changers with multiple drives
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
add a changer listing here (copied from api2/config/changer)
and put the status and transfer api calls below that
puts the changer scan into the top level tape api
and removes the (now redundant) info from the config api path
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
all the verify methods pass along the following:
- task worker
- datastore
- corrupt and verified chunks
might as well pull that out into a common type, with the added bonus of
now having a single point of construction instead of copying the
default capacities in three different modules..
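a minimal sketch of such a type (WorkerTask and DataStore stand in for
the crate-local types the verify methods already pass around; the
capacities are illustrative):

    use std::collections::HashSet;
    use std::sync::{Arc, Mutex};

    pub struct VerifyWorker {
        worker: Arc<WorkerTask>,
        datastore: Arc<DataStore>,
        verified_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
        corrupt_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
    }

    impl VerifyWorker {
        fn new(worker: Arc<WorkerTask>, datastore: Arc<DataStore>) -> Self {
            Self {
                worker,
                datastore,
                // single point for the default capacities
                verified_chunks: Arc::new(Mutex::new(HashSet::with_capacity(1024 * 16))),
                corrupt_chunks: Arc::new(Mutex::new(HashSet::with_capacity(64))),
            }
        }
    }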
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
it's needed to derive Hash, and we always compare Authids or their
Userid components, never just the Tokenname part anyway..
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the chunk_stream one can be collapsed, since split() is equivalent to
split_to(at) with `at` set to buffer.len() anyway.
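for illustration, with the bytes crate (snippet assumed inside a fn):

    use bytes::BytesMut;

    let mut buffer = BytesMut::from(&b"chunk data"[..]);
    let len = buffer.len();

    // split() detaches the whole filled part ...
    let whole = buffer.split();
    assert_eq!(whole.len(), len);
    assert!(buffer.is_empty());

    // ... which is exactly split_to(at) with at == buffer.len()
    let mut buffer = BytesMut::from(&b"chunk data"[..]);
    let same = buffer.split_to(buffer.len());
    assert_eq!(same, whole);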
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Currently there's not yet a node config and the WA config is
somewhat "tightly coupled" to the user entries in that
changing it can lock them all out, so for now I opted for
less reorganization and just use a digest of the
canonicalized config here, and keep it all in the tfa.json
file.
Experimentally using the flatten feature on the methods with
an `Updater` struct similar to what the api macro is supposed
to be able to derive on its own in the future.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
modifying @pam users' credentials should only be possible for root@pam,
otherwise it can have unintended consequences.
also enforce the same limit on user creation (except for the self_service
check, since it makes no sense during user creation)
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
to wrap a Receiver in a Stream. this will likely move back into tokio
proper once we have a std Stream..
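a minimal sketch of such a wrapper, assuming tokio's
mpsc::Receiver::poll_recv:

    use std::pin::Pin;
    use std::task::{Context, Poll};

    use futures::stream::Stream;
    use tokio::sync::mpsc::Receiver;

    pub struct ReceiverStream<T>(Receiver<T>);

    impl<T> Stream for ReceiverStream<T> {
        type Item = T;

        fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<T>> {
            // Receiver is Unpin, so we can poll it directly
            self.0.poll_recv(cx)
        }
    }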
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Child itself is no longer a Future, but it has a new wait() async fn
that does the same thing
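call sites change roughly like this (sketch, inside an async fn
returning Result):

    // before: Child implemented Future and was awaited directly:
    //     let status = child.await?;
    // now:
    let mut child = tokio::process::Command::new("true").spawn()?;
    let status = child.wait().await?;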
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the old variant attempted to parse a tokenid as userid and returned the
cryptic parsing error to the client, which is rather confusing.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
percentage of verified groups, interpolating based on snapshot count
within the group. in most cases, this will also be closer to 'real'
progress since added snapshots (those which will be verified) in active
backup groups will be roughly evenly distributed, while the total number
of snapshots per group will be heavily skewed towards those groups which
have existed the longest, even though most of those old snapshots will
only be re-verified very infrequently.
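a sketch of the interpolation (names assumed):

    // fraction of verified groups, interpolated within the current group
    fn progress(done_groups: usize, total_groups: usize,
                done_snapshots: usize, group_snapshots: usize) -> f64 {
        if total_groups == 0 {
            return 1.0; // nothing to verify
        }
        let within = if group_snapshots == 0 {
            0.0
        } else {
            done_snapshots as f64 / group_snapshots as f64
        };
        (done_groups as f64 + within) / total_groups as f64
    }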
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
before adding more fields to the tuple, let's just create the struct
inside the match arms to improve readability.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
and use this information to add more information to client backup log
and guide the download manifest decision.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
else users have to manually search through a potentially very long task
log to find the entries that are different.. this is the same summary
printed at the end of a manual verify task.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
unprivileged users should only see the counts related to their part of
the datastore.
while we're at it, switch to a list groups, filter groups, count
snapshots approach (like list_snapshots) to speedup calls to this
endpoint when many unprivileged users share a datastore.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
used in the PBS GUI, but also for PVE usage queries which don't need all
the extra expensive information..
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
by listing groups first, then filtering, then listing group snapshots.
this cuts down the number of openat/getdirents calls for users that just
have a partial view of the datastore.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Add an optional string field to APTUpdateInfo which can be used for
extra information.
This is used for passing running kernel and running version information
in the versions API call together with proxmox-backup and
proxmox-backup-server.
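the shape of the addition, sketched (other fields elided, names
illustrative except extra_info):

    pub struct APTUpdateInfo {
        pub package: String,
        // ... other fields elided ...
        /// additional info, e.g. the running kernel or proxmox-backup version
        pub extra_info: Option<String>,
    }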
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
for now this only does the 'postfix' -> 'postfix@-' conversion;
this fixes the issue that we only showed the 'postfix' service syslog
(which is rather empty in a default setup) instead of the instance one
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
log invalid owners to system log, and continue with next group just as
if permission checks fail for the following operations:
- verify store with limited permissions
- list store groups
- list store snapshots
all other call sites either handle it correctly already (sync/pull), or
operate on a single group/snapshot and can bubble up the error.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
similar to what we do for zfs. By bailing before partitioning, the disk is
still considered unused after a failed attempt.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
If a package is or will be installed from the enterprise repo, retrieve
the changelog from there as well (securely via HTTPS and authenticated
with the subscription key).
Extends the get_string method to take additional headers, in this case
used for 'Authorization'. Hyper does not have built-in basic auth
support AFAICT but it's simple enough to just build the header manually.
Take the opportunity and also set the User-Agent sensibly for GET
requests, just like for POST.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
to easily check the store of a worker_id
this fixes the issue that one could not filter by type 'syncjob' and
datastore simultaneously
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
very basic, based on the API/concepts of the PVE one.
Still missing: adding an extra_info string option to APTUpdateInfo
and passing along the running kernel/PBS version there.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
with remote Authids, not local Userids.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
if the user/token could have either configured/manually executed the
task, but it was either executed via the schedule (root@pam) or
another user/token.
without this change, semi-privileged users (that cannot read all tasks
globally, but are DatastoreAdmin) could schedule jobs, but not read
their logs once the schedule executes them. it also makes sense for
multiple such users to see each other's manually executed jobs, as long as
the privilege level on the datastore (or remote/remote_store/local
store) itself is sufficient.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
since the store is not a path parameter, we need to do manual instead of
schema checks. also dropping Datastore.Backup here
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to allow on-demand scanning of remote datastores accessible for the
configured remote user.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
we have information here not available in the access log, especially
if the /api2/extjs formatter is used, which encapsulates errors in a
200 response.
So keep the auth log for now, but extend its use from create ticket
calls to all authentication failures for API calls; this ensures one
can also fail2ban tokens.
Do that logging in a central place, which makes it simple but means
that we do not have the user ID information available to include in
the log.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and change all users of the /status/tasks api call to this.
with this change we can now delete the /status/tasks api call
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of returning 0 elements (which does not really make sense anyway),
change it so that there is no limit anymore (besides usize::MAX)
this is technically a breaking change for the api, but i guess
no one is using limit=0 for anything sensible anyway
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
should cover all the current scenarios. remote server-side checks can't
be meaningfully unit-tested, but they are simple enough that they should
hopefully never break.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
instead of manually, this has the advantage that we now set
the jobstate correctly and can return with an error if it is
currently running (instead of failing in the task)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by requiring
- Datastore.Backup permission for target datastore
- Remote.Read permission for source remote/datastore
- Datastore.Prune if vanished snapshots should be removed
- Datastore.Modify if another user should own the freshly synced
snapshots
reading a sync job entry only requires knowing about both the source
remote and the target datastore.
note that this does not affect the Authid used to authenticate with the
remote, which of course also needs permissions to access the source
datastore.
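a hedged sketch of those checks, loosely following the crate's
CachedUserInfo helper (exact paths, flags and names assumed):

    user_info.check_privs(&auth_id, &["datastore", &store], PRIV_DATASTORE_BACKUP, false)?;
    user_info.check_privs(&auth_id, &["remote", &remote, &remote_store], PRIV_REMOTE_READ, false)?;
    if delete_vanished {
        user_info.check_privs(&auth_id, &["datastore", &store], PRIV_DATASTORE_PRUNE, false)?;
    }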
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
instead of hard-coding 'backup@pam'. this allows a bit more flexibility
(e.g., syncing to a datastore that can directly be used as restore
source) without overly complicating things.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
again, base idea copied off PVE, but we save the information about
which pending version we already sent a mail for in a separate
object, to keep the api return type APTUpdateInfo clean.
This also makes a few things a bit easier, as we can update the
package status without saving/restoring the notify information.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
apt changes some of its state/cache also if it errors out, most of
the time, so we actually want to print both stderr and stdout.
Further, only warn if its exit code is non-zero, for the same
rationale: it may still make updates available even if it errors (e.g.,
because a future pbs-enterprise repo is additionally configured but
not accessible).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
equivalent to verifying a whole datastore, except for reading job
(entries), which is accessible to regular Datastore.Audit/Backup users
as well.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
for verifying a whole datastore. Datastore.Backup now allows verifying
only backups owned by the triggering user.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
they are returned when reading the manifest, which just requires
Datastore.Audit as well. Datastore.Read is for reading backup contents,
not metadata.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
even for otherwise unprivileged users.
since effective privileges of an API token are always intersected with
those of their owning user, this does not allow an unprivileged user to
elevate their privileges in practice, but avoids the need to involve a
privileged user to deploy API tokens.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
a user should be allowed to read/list/overwrite backups owned by their
own tokens, but a token should not be able to read/list/overwrite
backups owned by their owning user.
when changing ownership of a backup group, a user should be able to
transfer ownership to/from their own tokens if the backup is owned by
them (or one of their tokens).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
since it's not possible to extend existing structs, UserWithTokens
duplicates most of user::User.. to avoid duplicating user::ApiToken as
well, this returns full API token IDs, not just the token name part.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
in most generic places. this is accompanied by a change in
RpcEnvironment to purposefully break existing call sites.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
with an optional Tokenname, appended with '!' as delimiter in the string
representation like for PVE.
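for example (illustrative, assuming the FromStr/Display impls
described above):

    let user: Authid = "admin@pbs".parse()?;          // plain Userid
    let token: Authid = "admin@pbs!mytoken".parse()?; // Userid + Tokenname
    assert_eq!(token.to_string(), "admin@pbs!mytoken");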
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
by moving the properties of the storage status out again to the top
level object.
also introduce proper structs for the types used, to get type-safety
and better documentation for the api calls
this changes the backup counts from an array of [groups,snapshots] to
an object/struct with { groups, snapshots } and includes 'other' types
(though we do not have any at the moment);
this way it is better documented
this also adapts the ui code to cope with the api changes
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This is intended to be used for the PVE storage library, replacing
the misuse of the much more expensive status API call.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
For proxmox packages it works the same way as PVE, by retrieving the
changelog URL and issuing a HTTP GET to it, forwarding the output to the
client. As this is only supposed to be a workaround to be removed in
the future, a simple block_on is used to avoid async.
For Debian packages we can simply call 'apt-get changelog' and forward
its output.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
As always, libapt is mocking us with complexity, but we can get the
approximate result we want by retrieving dependencies of all
to-be-updated packages and then seeing if they are missing.
If they are, we assume they will be installed.
For this, query_detailed_info is extended to allow reading details for
non-installed packages, and this is also exposed in
list_installed_apt_packages via 'all_versions_for'. This is necessary so
we can retrieve changelogs for such packages.
Note that we cannot retrieve all that information all the time, as
querying details for packages that aren't installed takes a rather long
time.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Avoids custom hardcoded logic, but can only be used for debian packages
as of now. Adds a FIXME to switch over to use --print-uris only once our
package repos support that changelog format.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
To get package details for a specific version instead of only the
candidate.
Also cleanup filter function with extra struct instead of unnamed &str
parameters.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
gets rid of the return value and moving around of the zip
and decoder data
avoids cloning the path prefix on every recursion
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
by using the new ZipEncoder and recursively add files to it
the zip only contains directories, normal files and hardlinks (by simply
copying the content), no symlinks, etc.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
To cater to the paranoid, a new datastore-wide setting "verify-new" is
introduced. When set, a verify job will be spawned right after a new
backup is added to the store (only verifying the added snapshot).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Commit 9070d11f4c introduced this change for other call sites;
assuming it is correct, this one was missed.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
for now log auth errors also to the syslog; on a protected (LAN
and/or firewalled) setup these should normally happen due to
misconfiguration, not attempts to break in.
This reduces syslog noise *a lot*. A current full journal output from
the current boot here has 72066 lines, of which 71444 (>99% !!) are
"successful auth for user ..." messages
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
needs new proxmox dependency to get the RpcEnvironment changes,
adding client_ip getter and setter.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Avoid races when updating manifest data by flocking a lock file.
update_manifest is used to ensure updates always happen with the lock
held.
Snapshot deletion also acquires the lock, so it cannot interfere with an
outstanding manifest write.
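a minimal sketch of the pattern (helper names assumed):

    pub fn update_manifest(
        &self,
        backup_dir: &BackupDir,
        update_fn: impl FnOnce(&mut BackupManifest),
    ) -> Result<(), Error> {
        let _guard = self.lock_manifest(backup_dir)?; // flock on a lock file
        let mut manifest = self.load_manifest(backup_dir)?;
        update_fn(&mut manifest);
        // write back while still holding the lock
        self.store_manifest(backup_dir, &manifest)
    }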
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
There's no point in having that as a separate method, just parse the
thing into a struct and write it back out correctly.
Also makes further changes to the method simpler.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
...to avoid it being forgotten or pruned while in use.
Update lock error message for deletions to be consistent.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
To allow other reading operations on the base snapshot as well. No
semantic changes with this patch alone, as all other locks on snapshots
are exclusive.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
A removal can fail if the snapshot is already gone (this is fine, our
job is done either way) or we couldn't get a lock (also fine, it can't
be removed then, just warn the user so he knows what happened and why it
wasn't removed) - keep going either way.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
A snapshot that's currently being read can still appear in the prune
list, but should not be removed.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
To untangle the server code from the actual backup
implementation.
It would be ideal if the whole backup/ dir could become its
own crate with minimal dependencies, certainly without
depending on the actual api server. That would then also be
used more easily to create forensic tools for all the data
file types we have in the backup repositories.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
via HTTP2/backup reader protocol. they already could do so via the plain
HTTP download-file/.. API calls that the GUI uses, but the reader
environment required READ permission on the whole datastore instead of
just BACKUP on the backup group itself.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
a reader connection should not be allowed to read arbitrary chunks in
the datastore, but only those that were previously registered by opening
the corresponding index files.
this mechanism is needed to allow unprivileged users (that don't have
full READ permissions on the whole datastore) access to their own
backups via a reader environment.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
not triggered by any current code, but this would lead to a stack
exhaustion since borrow would call deref which would call borrow again..
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Previously only Datastore.Modify was required for creating a new
datastore.
But, that endpoint allows one to pass an arbitrary path, of which all
parent directories will be created; this can allow any user with the
"Datastore Admin" role on "/datastores" to do some damage to the
system. Further, it is effectively a side channel for revealing the
system's directory structure through educated guessing and error
handling.
Add a new privilege "Datastore.Allocate" which, for now, is used
specifically for the create datastore API endpoint.
Add it only to the "Admin" role.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the same as the regular TaskState, but without its fields, so that
we can use the api macro and use it as an api call parameter
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This can slow things down by a lot on setups with (relatively) high
seek time, on the order of doubling the backup times if the cache isn't
populated with the last backup's chunk inode info.
Effectively there's nothing known this protects us from in the
codebase. The only thing which was theorized about was the case
where a really long running backup job (over 24 hours) is still
running and writing new chunks, not indexed yet anywhere, then an
update (or manual action) triggers a reload of the proxy. There was
some theory that then a GC in the new daemon would not know about the
oldest writer in the old one, and thus use a less strict atime limit
for chunk sweeping - opening up a window for deleting chunks from the
long running backup.
But, this simply cannot happen as we have a per datastore process
wide flock, which is acquired shared by backup jobs and exclusive by
GC. In the same process GC and backup can both get it, as it has a
process locking granularity. If there's an old daemon with a writer,
that also has the lock open shared, and so no GC in the new process
can get exclusive access to it.
So, with that confirmed we have no need for a "half-assed"
verification in the backup finish step. Rather, we plan to add an
opt-in "full verify each backup on finish" option (see #2988)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We forgot to put parentheses around the DNS_NAME regex, and in
DNS_NAME_OR_IP_REGEX. this is wrong, because the regex
    ^foo|bar$
matches 'foo' at the beginning and 'bar' at the end, so both of
    foobaz
    bazbar
would match. only
    ^(foo|bar)$
matches exactly 'foo' and 'bar'
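for illustration, with the regex crate:

    use regex::Regex;

    let unanchored = Regex::new(r"^foo|bar$").unwrap();
    assert!(unanchored.is_match("foobaz")); // '^foo' matches alone
    assert!(unanchored.is_match("bazbar")); // 'bar$' matches alone

    let anchored = Regex::new(r"^(foo|bar)$").unwrap();
    assert!(!anchored.is_match("foobaz"));
    assert!(anchored.is_match("foo"));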
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
* add square brackets to ipv6 addresses in BackupRepository if they do
  not already have some (we save them without brackets in the remote config)
* in get_pull_parameters, we now create a BackupRepository first and use
  those values (which does the [] mapping); this also has the advantage
  that we have one place less where we hardcode 8007 as the port
* in the ui, add square brackets for ipv6 addresses for remotes
when upgrading from a version where we stored all tasks in the 'active' file,
we did not completely account for finished tasks still there.
we should update the file when encountering any finished task in
'active', as well as filter them out on the api call (if they get through)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this adds the ability to add port numbers in the backup repo spec
as well as in remotes, so that users that are behind a
NAT/Firewall/Reverse proxy can still use it
also adds some explanation and examples to the docs to make it clearer
for the h2 client i left the localhost:8007 part, since it is not
configurable where we bind to
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
When creating a new zpool for a datastore, also instantiate an
import-unit for it. This helps in cases where '/etc/zfs/zpool.cache'
gets corrupted and thus the pool is not imported upon boot.
This patch needs the corresponding addition of 'zfs-import@.service' in
the zfsonlinux repository.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
we need this, because we append the port to this to get a target url
e.g. we print
format!("https://{}:8007/", address)
if address is now an ipv6 (e.g. fe80::1) it would become
https://fe80::1:8007/ which is a valid ipv6 on its own
by using square brackets we get:
https://[fe80::1]:8007/ which now connects to the correct ip/port
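a sketch of the mapping (helper name assumed):

    fn display_host(address: &str) -> String {
        // IPv6 addresses contain ':', so wrap them to keep the port unambiguous
        if address.contains(':') && !address.starts_with('[') {
            format!("[{}]", address)
        } else {
            address.to_string()
        }
    }

    assert_eq!(
        format!("https://{}:8007/", display_host("fe80::1")),
        "https://[fe80::1]:8007/"
    );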
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this means that limiting with epoch now works correctly.
also change the api type to i64, since that is what the starttime is
saved as
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this makes the filtering/limiting much nicer and more readable.
since we now have a potentially 'infinite' amount of tasks we iterate
over, and cannot know beforehand how many there are, we return the total
count as always 1 higher than requested iff we are not at the end (this
is the case when the amount of entries is smaller than the requested limit)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
also changes:
* correct comment about reset (replace 'sync' with 'action')
* check schedule change correctly (only when it is actually changed)
with these changes, we can drop the 'lookup_last_worker' method
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
like the sync jobs, so that if an admin configures a schedule it
really starts the next time that point in time is reached, not immediately
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
listing, updating or deleting a user is now possible for the user
itself, in addition to higher-privileged users that have appropriate
privileges on '/access/users'.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
filtered by those they are privileged enough to read individually. this
allows such users to configure prune/GC schedules via the GUI (the API
already allowed it previously).
permission-wise, a user with this privilege can already:
- list all stores they have access to (returns just name/comment)
- read the config of each store they have access to individually
(returns full config of that datastore + digest of whole config)
but combines them to
- read configs of all datastores they have access to (returns full
config of those datastores + digest of whole config)
users that have AUDIT on just /datastore without propagate can now no
longer read all configurations (but this could be added back, it just
seems to make little sense to me).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
like we do for PVE. this is visible on the dashboard, and caused 403 on
each update which bothers me when looking at the dev console.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
A client can omit uploading chunks in the "known_chunks" list, those
then also won't be written on the server side. Check all those chunks
mentioned in the index but not uploaded for existence and report an
error if they don't exist instead of marking a potentially broken backup
as "successful".
This is only important if the base snapshot references corrupted chunks,
but has not been negatively verified. Also, it is important to only
verify this at the end, *after* all index writers are closed, since only
then can it be guaranteed that no GC will sweep referenced chunks away.
If a chunk is found missing, also mark the previous backup with a
verification failure, since we know the missing chunk has to be referenced
in it (the only way it could have been inserted into known_chunks with
checked=false). This has the benefit of automatically doing a
full-upload backup if the user attempts to retry after seeing the new
error, instead of requiring a manual verify or forget.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Do not allow clients to reuse chunks from the previous backup if it has
a failed validation result. This would result in a new "successful"
backup that potentially references broken chunks.
If the previous backup has not been verified, assume it is fine and
continue on.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
- remove chrono dependency
- depend on proxmox 0.3.8
- remove epoch_now, epoch_now_u64 and epoch_now_f64
- remove tm_editor (moved to proxmox crate)
- use new helpers from proxmox 0.3.8
* epoch_i64 and epoch_f64
* parse_rfc3339
* epoch_to_rfc3339_utc
* strftime_local
- BackupDir changes:
* store epoch and rfc3339 string instead of DateTime
* backup_time_to_string now return a Result
* remove unnecessary TryFrom<(BackupGroup, i64)> for BackupDir
- DynamicIndexHeader: change ctime to i64
- FixedIndexHeader: change ctime to i64
since converting from i64 epoch timestamp to DateTime is not always
possible. previously, passing invalid backup-time from client to server
(or vice-versa) panicked the corresponding tokio task. now we get proper
error messages including the invalid timestamp.
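usage of the new helpers looks roughly like this (module path assumed;
helper names from the list above):

    use proxmox::tools::time::{epoch_i64, epoch_to_rfc3339_utc, parse_rfc3339};

    let now: i64 = epoch_i64();
    let text = epoch_to_rfc3339_utc(now)?;   // e.g. "2020-10-16T08:39:17Z"
    let roundtrip = parse_rfc3339(&text)?;   // proper error instead of a panic
    assert_eq!(now, roundtrip);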
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
else we get the default of 16k, which is quite low for our use case.
this improves the TLS upload benchmark speed by about 30-40% for me.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The iterator of get_chunk_iterator is extended with a third parameter
indicating whether the current file is a chunk (false) or a .bad file
(true).
Count their sizes towards the total of removed bytes, since they also
free disk space.
.bad files are only deleted if the corresponding chunk exists, i.e. has
been rewritten. Otherwise we might delete data only marked bad because
of transient errors.
While at it, also clean up and use nix::unistd::unlinkat instead of
unsafe libc calls.
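the removal itself then looks roughly like this (variable names
assumed):

    use nix::unistd::{unlinkat, UnlinkatFlags};

    // only delete the .bad file if the chunk was rewritten in the meantime
    if is_bad_file && chunk_exists {
        unlinkat(Some(dirfd), file_name, UnlinkatFlags::NoRemoveDir)?;
        removed_bytes += file_size; // .bad files free disk space as well
    }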
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
we want to use dates for the calendarspec, and with that there are some
impossible combinations that cannot be detected during parsing
(e.g. some datetimes do not exist in some timezones, and the timezone
can change after setting the schedule)
so finding no timestamp is not an error anymore but a valid result.
we omit logging in that case (since it is not an error anymore)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
for datastores where the requesting user has read or write permissions,
since the API method itself filters by that already. this is the same
permission setting and filtering that the datastore list API endpoint
does.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Save the state ("ok" or "failed") and the UPID of the respective
verify task. With this we can easily allow to open the relevant task
log and show when the last verify happened.
As we already load the manifest when listing the snapshots, just add
it there directly.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
it really is not necessary, since the only time we are interested in
loading the state from the file is when we list it, and there
we use JobState::load directly to avoid the lock.
we still need to create the file on syncjob creation though, so
that we have the correct time for the schedule
to do this we add a new create_state_file that overwrites it on creation
of a syncjob
for safety, we subtract 30 seconds from the in-memory state in case
the statefile is missing
since we call create_state_file from proxmox-backup-api,
we have to chown the lock file to the backup user after creating it,
else the sync job scheduling cannot acquire the lock
also we remove the lock file on statefile removal
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
so that we can log if it was triggered by a schedule, and write to a
jobstate file. also, the abort_future of the worker is now correctly
polled, so that users can stop a sync
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
and move the pull parameters into the worker, so that the task log
contains the error if there is one
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the endtime should be the timestamp of the last log line
or if there is no log at all, the starttime
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
representing a state via an enum makes more sense in this case.
we also implement FromStr and Display to make it easy to convert from/to
a string
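a minimal sketch of the pattern (variant names illustrative):

    use std::fmt;
    use std::str::FromStr;

    #[derive(Debug, Clone, Copy, PartialEq)]
    enum JobState { Ok, Failed }

    impl FromStr for JobState {
        type Err = String;
        fn from_str(s: &str) -> Result<Self, Self::Err> {
            match s {
                "ok" => Ok(JobState::Ok),
                "failed" => Ok(JobState::Failed),
                other => Err(format!("invalid state '{}'", other)),
            }
        }
    }

    impl fmt::Display for JobState {
        fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
            match self {
                JobState::Ok => write!(f, "ok"),
                JobState::Failed => write!(f, "failed"),
            }
        }
    }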
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
To prevent forgetting the base snapshot of a running backup, and catch
the case when it still happens (e.g. via manual rm) to at least error
out instead of storing a potentially invalid backup.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
This reverts commit d53fbe2474.
The HashSet and "register" function are unnecessary, as we already know
which backup is the one we need to check: the last one, stored as
'last_backup'.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
An flock on the snapshot dir itself is used in addition to the group dir
lock. The lock is used to avoid races with forget and prune, while
having more granularity than the group lock (i.e. the group lock is
necessary to prevent more than one backup per group, but the snapshot
lock still allows backups unrelated to the currently running one to be
forgotten/pruned).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
an encrypted Index should never reference a plain-text chunk, and an
unencrypted Index should never reference an encrypted chunk.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
these checks were already in place for regular downloading of backed up
files, also do them when attempting to decode a catalog, or when
downloading decoded files referenced by a pxar index.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
If the datastore holds broken backups for some reason, do not attempt to
base following snapshots on those. This would lead to an error on
/previous, leaving the client no choice but to upload all chunks, even
though there might be potential for incremental savings.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Commit 9fa55e09 "finish_backup: test/verify manifest at server side"
moved the finished-marking above some checks, which means if those fail
the backup would still be marked as successful on the server.
Revert that part and comment the line for the future.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
'username' here is without realm, but we really want to use user@realm
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
tokio's kill_on_drop sometimes leaves zombies around, especially
when there is not another tokio::process::Command spawned afterwards.
so instead of relying on the 'kill_on_drop' feature, we explicitly
kill the child on a worker abort. to be able to do this
we have to use 'tokio::select' instead of 'futures::select' since
the latter requires the future to be fused, which consumes the
child handle, leaving us no possibility to kill it after fusing.
(tokio::select does not need the futures to be fused, so we
can reuse the child future after the select again)
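the resulting pattern, sketched with the tokio 1.x API (abort_future
stands in for the worker's abort future; inside a fallible async fn):

    let mut child = tokio::process::Command::new("some-command").spawn()?;

    let aborted = tokio::select! {
        // select! polls the futures unfused, so `child` stays usable afterwards
        status = child.wait() => { println!("exited: {:?}", status?); false }
        _ = abort_future => true,
    };
    if aborted {
        child.kill().await?; // explicit kill instead of kill_on_drop
    }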
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
so that we can print a list at the end of the worker showing which
backups are corrupt.
this is useful if there are many snapshots and some in between had an
error. Before this patch, the task log simply says to 'look in the logs'
but if the log is very long it makes it hard to see what exactly failed.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This should never trigger if everything else works correctly, but it is
still a very cheap check to avoid wrongly marking a backup as "OK" when
in fact some chunks might be missing.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Multiple backups within one backup group don't really make sense, but
break all sorts of guarantees (e.g. a second backup started after a
first would use a "known-chunks" list from the previous unfinished one,
which would be empty - but using the list from the last finished one is
not a fix either, as that one could be deleted or pruned once the first
simultaneous backup is finished).
Fix it by only allowing one backup per backup group at one time. This is
done via a flock on the backup group directory, thus remaining intact
even after a reload.
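a minimal sketch of the lock (using nix::fcntl::flock; error handling
shortened, names assumed):

    use std::fs::File;
    use std::os::unix::io::AsRawFd;

    use anyhow::{bail, Error};
    use nix::fcntl::{flock, FlockArg};

    /// keep the returned File alive for the whole backup; the flock lives as
    /// long as the fd is open, so it stays intact across a proxy reload while
    /// the old worker still runs
    fn lock_backup_group(group_path: &str) -> Result<File, Error> {
        let file = File::open(group_path)?;
        if flock(file.as_raw_fd(), FlockArg::LockExclusiveNonblock).is_err() {
            bail!("backup already running in group '{}'", group_path);
        }
        Ok(file)
    }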
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
To prevent a race with a background GC operation, do not allow deletion
of backups whose index might currently be referenced as the "known chunk
list" for successive backups. Otherwise the GC could delete chunks it
thinks are no longer referenced, while at the same time telling the
client that it doesn't need to upload said chunks because they already
exist.
Additionally, prevent deletion of whole backup groups, if there are
snapshots contained that appear to be currently in-progress. This is
currently unlikely to trigger, as that function is only used for sync
jobs, but it's a useful safeguard either way.
Deleting a single snapshot has a 'force' parameter, which is necessary
to allow deleting incomplete snapshots on an aborted backup. Pruning
also sets force=true to avoid the check, since it calculates which
snapshots to keep on its own.
To avoid code duplication, the is_finished method is factored out.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Also swap the order of a couple of `.map_err().await` to
`.await.map_err()` since that's generally more efficient.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
a blob can be empty (e.g. an empty pct fw conf), so we
have to set the minimum size to the header size
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
And make verify_crc private for now. We always call load_from_reader() to
verify the CRC.
Also add load_chunk() to datastore.rs (from chunk_store::read_chunk())
is a helper to spawn an internal tokio task without it showing up
in the task list
it is still tracked for reload and notifies the last_worker_listeners.
this enables the console to survive a reload of proxmox-backup-proxy
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Even though it has nothing to do with vnc, we keep the name of the api
call for compatibility with our xtermjs client.
termproxy:
verifies that the user is allowed to open a console and starts
termproxy with the correct parameters
starts a TcpListener on "localhost:0" so that the kernel decides the
port (instead of trying to reserve one like in pve). Then it
leaves the fd open for termproxy and gives the number as port
and tells it via '--port-as-fd' that it should interpret this
as an open fd
the vncwebsocket api call checks the 'vncticket' (name for compatibility)
and connects the remote side (after an Upgrade) with a local TcpStream
connecting to the port given via WebSocket from the proxmox crate
to make sure that only the client can connect that called termproxy and
no one can connect to an arbitrary port on the host we have to include
the port in the ticket data
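the listener setup, sketched (inside a fallible fn):

    use std::os::unix::io::IntoRawFd;

    // let the kernel pick a free port instead of reserving one ourselves
    let listener = std::net::TcpListener::bind("localhost:0")?;
    let port = listener.local_addr()?.port(); // included in the ticket data
    let fd = listener.into_raw_fd();          // left open, handed to termproxy
    // termproxy is then spawned with the fd number as port plus --port-as-fd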
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
modeled after pve's/pmg's vncticket (i substituted the vnc with term)
by putting the path and username as secret data in the ticket.
when sending the ticket to /access/ticket it only verifies it,
checks the privs on the path and does not generate a new ticket
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Depends on patched apt-pkg-native-rs. Changelog-URL detection is
inspired by PVE perl code for now, though marked with fixme to use 'apt
changelog' later on, if/when our repos have APT-compatible changelogs
set up.
list_installed_apt_packages iterates all packages and creates an
APTUpdateInfo with detailed information for every package matched by the
given filter Fn.
Sadly, libapt-pkg has some questionable design choices regarding their
use of 'iterators', which means quite a bit of nesting...
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
This aligns it with PVE and allows the widget toolkit's update window
"refresh" to work without modifications once POST /apt/update is
implemented.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
when a datastore has enough data to calculate the estimated full date,
but always has exactly the same usage, the factor b of the regression
is '0'.
return 0 for that case so that the gui can show 'never' instead of
'not enough data'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This also replaces the recently introduced --encryption
parameter on the client with a --crypt-mode parameter.
This can be "none", "encrypt" or "sign-only".
Note that this introduces various changes in the API types
which previously did not take the above distinction into
account properly:
Both `BackupContent` and the manifest's `FileInfo`:
lose `encryption: Option<bool>`
gain `crypt_mode: Option<CryptMode>`
Within the backup manifest itself, the "crypt-mode" property
will always be set.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
To support incremental backups (where not all chunks are sent to the
server), a new parameter "reuse-csum" is introduced on the
"create_fixed_index" API call. When set and equal to last backups'
checksum, the backup writer clones the data from the last index of this
archive file, and only updates chunks it actually receives.
In incremental mode some checks usually done on closing an index cannot
be made, since they would be inaccurate.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
if *only* data chunks are registered (high chance during incremental
backup), then chunk_count might be one lower than upload_stat.count
because of the zero chunk being unconditionally uploaded but not used.
Thus when subtracting the two, an overflow would occur.
In general, don't let the client make the server panic, instead just set
duplicates to 0.
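i.e., something like:

    // chunk_count can be one lower than upload_stat.count (zero chunk),
    // so avoid the underflow panic and clamp to 0 instead:
    let duplicates = chunk_count.saturating_sub(upload_stat.count);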
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
and if it does, bail, because otherwise we would get an
error on mounting and have a zpool that is not imported
and disks that are in use
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
returns the dir listing of the given filepath of the backup snapshot.
the filepath has to be base64 encoded or 'root'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we want to save whether a file of a backup is encrypted, so that we can
* show that info on the gui
* later decide if we need to decrypt the backup
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
for now mostly copy/paste from nodes/nodename/tasks
(without the parameters)
but we should replace the 'read_task_list' with a method
that gives us the tasks since some timestamp
so that we can get a longer list of tasks than for the node
(we could of course embed this then in the nodes/node/task api call and
remove this again as long as the api is not stable)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
we check if all dynamic_writers are closed and if the backup contains
any valid files; we can only mark the backup finished after those
checks, else the backup task gets marked as OK, even though it
is not finished and no cleanups run
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
and drop the now unused extract_lists function.
this also fixes a bug where we did not add the datastore to the list at
all when there was no rrd data
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
there is now an 'extract_cached_data' which just returns
the data of the specified field, and an api function that converts
a list of fields to the correct serde value.
this way we do not have to create a serde value in rrd/cache.rs
(makes for a better interface)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
returns a list of the datastores and their usages, a list of usages of
the past month (for the gui) and an estimation of when it's full
(using the linear regression)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
disk_usage returned the same values as defined in StorageStatus,
so simply use that.
with that we can replace the logic of the datastore status with that
function and also use it for the root disk usage of the nodes
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if the last sync job is too far in the past (or there was none at all
so far) we run it at the next iteration, so we want to show that.
we now calculate the next_run by using either the real last endtime
or 0 as the base time.
then in the frontend, we check if the next_run is < now and show 'pending'
(we do it this way also for replication on pve)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
while touching it, make columns and tbar in DataStoreContent.js
declarative members and remove the (now) unnecessary initComponent
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this returns the list of syncjobs with status, as opposed to
config/sync (which is just the config).
also adds an api call where users can run the job manually under
/admin/sync/$ID/run
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The repo URL consists of
* optional userid
* optional host
* datastore name
All three have defined regex or format, but none of that is used, so
for example not all valid datastore names are accepted.
Move definition of the regex over to api2::types where we can access
all required regexes easily.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to avoid having arbitrary characters in the config (e.g. newlines)
note that this breaks existing configs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
with a catch: the password is in the struct but we do not want it to be
returned via the api, so we only 'serialize' it when the string is not
empty (this can only happen when the format is not checked by us, iow.
when it is returned from the api), and we set it manually to ""
when we return remotes from the api
this way we can still use the type but do not return the password
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>