for getting/setting the protected flag for snapshots (akin to notes)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
when a group could not be completely removed due to a protected snapshot,
throw an error
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
and log if a vanished group could not be completely deleted because it
contains protected snapshots
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
adds the info that a snapshot is protected to:
* snapshot list
* manual pruning (also dry-run)
* prune jobs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
some custom ACME endpoints do not have a TOS; interpret this as
'the user has accepted the TOS', like we do for PVE.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
our export code can handle the case where the tape is still inside the drive,
so unloading it first has no benefit; it even makes the export slower, since
we first unload it into its original slot and then move it
to an import/export slot
so drop the code that unloads the tape from the drive, and let the
export code handle that itself
change the 'eject' into a 'rewind' and add a comment why we do that first
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
use fadvise(.., POSIX_FADV_DONTNEED) for RRD files. We read those files only once,
and always rewrite them.
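A rough sketch of the idea (calling libc directly here for illustration; the
actual helper code may differ):

    use std::fs::File;
    use std::os::unix::io::AsRawFd;

    /// after we are done with an RRD file we will not read it again before
    /// rewriting it, so the kernel may drop its page-cache entries for it
    fn drop_page_cache(file: &File) {
        // offset 0 with len 0 covers the whole file; a failure here is harmless
        unsafe {
            libc::posix_fadvise(file.as_raw_fd(), 0, 0, libc::POSIX_FADV_DONTNEED);
        }
    }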
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
'export_media' can handle the case where the tape is in either a normal slot of the
library, or in the drive assigned to the current pool writer
(because we need to lock the drive).
if it is, for some reason, in a different drive, the error message
'media is not online'
could be slightly confusing for a user, since the media would still appear in the drive list.
add 'or a different drive' to the message to make that clearer
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Because the types used inside the RRD have different requirements
than the API types:
- different serialization format
- the API may not support all RRD features
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Store many more data points now to get better graphs.
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
allow the current thread to do some other work in-between
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
And initialize only with proxmox-backup-proxy. Other binaries don't need it.
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Append pending changes in a simple text based format that allows for
lockless appends as long as we stay below 4 KiB data per write.
Apply the journal every 30 minutes and on daemon startup.
Note that we do not ensure that the journal is synced; this is a
performance optimization we can make because the kernel defaults to
writing back in-flight data every 30s (sysctl vm/dirty_expire_centisecs)
anyway, so we lose at most half a minute of data on a crash. Keep in
mind that we normally expose 1 minute as the finest granularity anyway,
so not much is really lost.
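A minimal sketch of the lockless-append idea (the journal path and record
format here are assumptions, not the actual on-disk format):

    use std::fs::OpenOptions;
    use std::io::Write;

    fn append_journal_entry(path: &str, time: f64, value: f64) -> std::io::Result<()> {
        // with O_APPEND, a single write below 4 KiB is appended atomically,
        // so concurrent writers need no lock for the journal
        let mut journal = OpenOptions::new().create(true).append(true).open(path)?;
        let entry = format!("{}:{}\n", time, value);
        journal.write_all(entry.as_bytes())
    }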
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
send_command serializes everything, so it cannot be used to send a
raw, optimized command. Normally that means we get an error like
> 'unable to parse parameters (expected json object)'
when used that way.
Switch over to send_raw_command, which does not re-serialize the
command.
Fixes: 45b8a032 ("refactor send_command")
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
until now, we manually polled the systemd service state during a reload
so that the sd_notify messages get processed in the correct order
(RELOAD(old) -> MAINPID(old) -> READY(new))
with systemd >= 246 there is now 'sd_notify_barrier', which
blocks until systemd has processed all prior messages
with that change, the daemon does not need to know the service name anymore
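For illustration, sd_notify messages are plain datagrams sent to
$NOTIFY_SOCKET; a conceptual sketch of the ordering (not the daemon's actual
code, abstract sockets and the barrier's fd passing omitted):

    use std::os::unix::net::UnixDatagram;

    fn sd_notify(msg: &str) -> std::io::Result<()> {
        let socket_path = std::env::var("NOTIFY_SOCKET").expect("not started by systemd");
        let sock = UnixDatagram::unbound()?;
        sock.send_to(msg.as_bytes(), socket_path)?;
        Ok(())
    }

    // old daemon:  sd_notify("RELOADING=1")
    // old daemon:  sd_notify(&format!("MAINPID={}", new_pid))
    // new daemon:  sd_notify("READY=1")
    //
    // sd_notify_barrier (systemd >= 246) additionally sends "BARRIER=1" together
    // with a pipe file descriptor and blocks until systemd closes it, which
    // guarantees that all earlier messages have been processed.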
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
With the merger the shop got moved from shop.maurer-it to
shop.proxmox.com; while we transparently redirect for now, we also want to
stop doing that in a few years, so use the new domain.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It also used `CertInfo` from pbs-tools, which is also server-specific.
The original helper is now in the main crate's
client_helpers instead.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
nothing from here is used anymore, so remove it
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
hyper's AddrIncoming has a 'from_listener' constructor (taking a
tokio::net::TcpListener) since 0.14.5 in the 'tcp' feature (we use 'full',
which includes that; since 0.14.13 it is not behind a feature flag anymore).
this makes it possible to create a hyper server without our
'StaticIncoming' wrapper and thus makes that wrapper unnecessary.
The only other thing we have to do is change the Service impl from
tokio::net::TcpStream to hyper::server::conn::AddrStream to fulfill the trait
requirements.
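A standalone sketch of that hyper 0.14 API (names like 'handle' are
placeholders, not our actual service code):

    use std::convert::Infallible;

    use hyper::server::conn::{AddrIncoming, AddrStream};
    use hyper::service::{make_service_fn, service_fn};
    use hyper::{Body, Request, Response, Server};

    async fn handle(_req: Request<Body>) -> Result<Response<Body>, Infallible> {
        Ok(Response::new(Body::from("ok")))
    }

    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error>> {
        // bind with tokio, then hand the listener to hyper directly -
        // no custom accept/incoming wrapper needed
        let listener = tokio::net::TcpListener::bind("127.0.0.1:8007").await?;
        let incoming = AddrIncoming::from_listener(listener)?;

        let make_svc = make_service_fn(|_conn: &AddrStream| async {
            Ok::<_, Infallible>(service_fn(handle))
        });

        Server::builder(incoming).serve(make_svc).await?;
        Ok(())
    }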
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
only bits 0-2 are fatal errors, bits 3-7 are used to indicate
some drive conditions. for details see the smartctl(8) manpage
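Roughly, the check boils down to masking the exit status (illustrative sketch,
not the actual code):

    /// bits 0-2 of smartctl's exit status signal fatal errors (command line did
    /// not parse, device open failed, a command to the device failed), while
    /// bits 3-7 only report drive conditions like logged errors
    fn smartctl_error_is_fatal(exit_code: i32) -> bool {
        (exit_code & 0b0000_0111) != 0
    }

    fn main() {
        assert!(smartctl_error_is_fatal(1 << 1));  // bit 1: device open failed
        assert!(!smartctl_error_is_fatal(1 << 6)); // bit 6: error log has entries
    }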
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ Thomas: resolved merge-conflict due to moved run_command ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In order to avoid name conflicts with WorkerTaskContext
- renamed WorkerTask::log to WorkerTask::log_message
Note: the methods have different function signatures
Also renamed WorkerTask::warn to WorkerTask::log_warning for
consistency reasons.
Use the task_log!() and task_warn!() macros more often.
Applications now need to call init_worker_tasks() before using
worker tasks.
Notable changes:
- need to call init_worker_tasks() before using worker tasks.
- create_task_log_dirs() is called inside init_worker_tasks()
- removed UpidExt trait
- use atomic_open_or_create_file()
- remove pbs_config and pbs_buildcfg dependency
it's only used for generating the docs for the interactive-shell
parts of the client.
Ideally we'd avoid that whole separate binary in the first place and
let the client dump it, but we'd need to have some more elaborate
"hide this command from the help/usage" mechanisms in the CLI
helper/formatter code to make that play out more nicely.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
some workers did not log when called via cli
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to have the proper link between the token list and the sub routes
in the api, include the 'tokenname' property in the token listing
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this provides some generic api call mechanisms like pvesh/pmgsh.
by default it uses the https api on localhost (creating a token
if called as root, else requesting the root@pam password interactively)
this is mainly intended for debugging, but it is also useful for
situations where some api calls do not have an equivalent in a binary
and a user does not want to go through the api
not implemented are the http2 api calls (since it is a separate api and
it wouldn't be that easy to do)
there are a few quirks though, related to the 'ls' command:
I extract the 'child-link' from the property name of the
'match_all' statement of the router, but this does not
always match the property from the relevant 'get' api call,
so it fails there (e.g. /tape/drive)
this can be fixed in the respective api calls (e.g. by renaming
the parameter that comes from the path)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we want to add something to it that needs access to the
proxmox_backup::api2 stuff, so it cannot live in a sub crate
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
basically a (semantic) revert of commit
991be99c37 "buildsys: workaround
linkage issues from openid/curl build server stuff separate"
This is no longer required because we moved the proxmox_restore_daemon
code into an extra crate (previous commit)
Originally-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Handle auth logs the same way as the access log.
- Configure with ApiConfig
- CommandoSocket command to reload auth-logs "api-auth-log-reopen"
Inside API calls, we now access the ApiConfig using the RestEnvironment.
The openid_login API now also logs failed logins and returns
http_err!(UNAUTHORIZED, ..) on failure.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This needs impl UserInformation for Arc<CachedUserInfo>, which is implemented
with proxmox 0.13.2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
ApiConfig: avoid using pbs_config::backup_user()
CommandoSocket: avoid using pbs_config::backup_user()
FileLogger: avoid using pbs_config::backup_user()
- use atomic_open_or_create_file()
Auth Trait: moved definitions to proxmox-rest-server/src/lib.rs
- removed CachedUserInfo parameter
- return user as String (not Authid)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
so that we actually have the property that 'match_all' refers to for
the templated API path.
This is mostly for improving usage of the WIP pbs-shell, i.e., its
`ls` command; it has no other functional/semantic impact.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Also moved pbs-datastore/src/crypt_config.rs to pbs-tools/src/crypt_config.rs.
We do not want to depend on pbs-api-types there, so I use [u8;32] instead of
Fingerprint.
locking during the tests as a regular user failed because we try to
chown to the backup user (which is not always possible).
Instead, do not lock at all, by implementing 'open_backup_lockfile' with
'create_mocked_lock'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This also moves a couple of required utilities such as
logrotate and some file descriptor methods to pbs-tools.
Note that the logrotate usage and run-dir handling should be
improved to work as a regular user as this *should* (IMHO)
be a regular unprivileged command (including running
qemu given the kvm privileges...)
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Adds the possibility to recover data from an index file. Options:
- chunks: path to the directory where the chunks are saved
- file: the index file that should be recovered (must be either .fidx or
  .didx)
- [opt] keyfile: path to a keyfile; if the data was encrypted, a keyfile is
  needed
- [opt] skip-crc: boolean, if true, read chunks won't be verified with their
  crc-sum, which increases the restore speed by a lot
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Adds the possibility to inspect .blob, .fidx and .didx files. For index
files a list of the referenced chunks will be printed in addition to
some other information. .blob files can be decoded into a file or directly
to stdout. Without decode the tool just prints the size and encryption
mode of the blob file. Options:
- file: path to the file
- [opt] decode: path to a file or stdout (-); if specified, the file will be
  decoded to the specified location [only for blob files, no effect
  with index files]
- [opt] keyfile: path to a keyfile, needed if decode is specified and the
  data was encrypted
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Adds the possibility to inspect chunks and find indexes that reference the
chunk. Options:
- chunk: path to the chunk file
- [opt] decode: path to a file or to stdout (-); if specified, the
  chunk will be decoded to the specified location
- [opt] digest: needed when searching for references; if set, it will
  be used for verification when decoding
- [opt] keyfile: path to a keyfile, needed if decode is specified and
  the data was encrypted
- [opt] reference-filter: path in which indexes that reference the
  chunk should be searched; can be a group, a snapshot or the whole
  datastore; if not specified, no references will be searched
- [default=true] use-filename-as-digest: use the chunk filename as digest
  if no digest is specified
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Defined a new struct RemoteConfig (without name and password). This makes it
possible to base64-encode the password in the config, but still allow plain
passwords with the API.
otherwise a user might get a task log like this:
-----
...
found 7 groups
TASK OK
-----
which could confuse users as to why no snapshots were backed up
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
it seems that for some actions or in some circumstances, two minutes is
simply too short and the command aborts. Increase the default timeout to
10 minutes.
While it should give most commands enough time to finish, in case of a real
failure the procedure now takes up to 5 times longer, but IMHO that's an
OK tradeoff.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this should make the api call much faster, since it is not reading
the whole catalog anymore
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
For some parts of the ui, we only need the snapshot list from the catalog,
and reading the whole catalog (which can be several hundred MiB) is not
really necessary.
Instead, we write the list of snapshots into a separate .index file. This file
is generated on demand and is much smaller and thus faster to read.
a test for a valid status_page, one with excess data
(in the descriptor as well as in the page as a whole),
and a test with too little data
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if the library sends more data than advertised, simply cut it off,
but if it sends less data, bail out (depending on how much data is
missing, trying to parse it could lead to a panic, so bail out early)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
in 'restore_archive', we reach that 'catalog.commit()' for
* every skipped snapshot (we already call 'commit_if_large' before that)
* every skipped chunk archive (no change in catalog since we do not read
the chunk archive in that case)
* after reading a catalog (no change in catalog)
in all other cases, we call 'commit_if_large' and return early,
meaning that the 'commit' there was executed too often and
unnecessarily, so move it after the loop over the files, before
finishing the temporary database.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of having a public start/end_chunk_archive and register_chunks,
simply expose a 'register_chunk_archive' method, since we always have
a list of chunks wherever we want to add them
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We get the descriptor length from the library and use it in
'chunks_exact', which panics on length 0. Catch that case
and bail out, since it makes no sense here anyway.
This prevents a panic in case a library sends wrong data.
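A minimal sketch of the guard (names are illustrative, not the actual parsing
code):

    use anyhow::{bail, Error};

    fn split_descriptors(data: &[u8], descriptor_len: usize) -> Result<Vec<&[u8]>, Error> {
        if descriptor_len == 0 {
            // chunks_exact() panics when given a chunk size of 0, and a
            // zero-length descriptor is bogus data from the library anyway
            bail!("got descriptor length 0 from changer");
        }
        Ok(data.chunks_exact(descriptor_len).collect())
    }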
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
debugging history showed that it's surely nice to have more logs about
when stuff happens (and thus fails)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
now required as we always enforce lock files to be owned by the
backup user, and the restore code uses such code indirectly as the
REST server module is reused from proxmox-backup-server. Once that is
refactored out we may do away with such things, but until then we need to
have a somewhat complete system env.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
instead of 'blindly' trusting the changer to deliver the fields written
in the specification, trust the length data it returns in the header.
we slice the descriptor data into equal-sized chunks of the correct
size, then we do not have to care about the len and empty checks anymore
this also makes the code that reads the rest of the page obsolete,
since the next descriptor is at the correct offset anyway
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
to 255. 8 drives per changer was a rather arbitrary limitation and could
well be reached in practice with big libraries.
Although 255 is still an arbitrary limitation, it is much less likely
to be reached in practice.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
pbs-datastore now ended up depending on tokio after all, but
that's fine for now
for the fuse code I added pbs-fuse-loop (has the old
fuse_loop and its 'loopdev' module)
ultimately only binaries should depend on this to avoid the
library link
the only things remaining to move out of the client binary are
the api method return types, those will need to be moved to
pbs-api-types...
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Factor out the open_backup_lockfile() method to acquire locks owned by
the backup user with permission 0660.
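As a rough idea of what such a helper does (simplified sketch; the real
open_backup_lockfile() differs in details like the actual locking and
timeouts):

    use std::fs::{File, OpenOptions};
    use std::os::unix::fs::OpenOptionsExt;
    use std::path::Path;

    use anyhow::Error;
    use nix::unistd::{chown, Gid, Uid};

    fn open_lockfile(path: &Path, backup_uid: Uid, backup_gid: Gid) -> Result<File, Error> {
        // create the lock file (if missing) with mode 0660 ...
        let file = OpenOptions::new()
            .read(true)
            .write(true)
            .create(true)
            .mode(0o660)
            .open(path)?;
        // ... and hand it over to the backup user and group
        chown(path, Some(backup_uid), Some(backup_gid))?;
        Ok(file)
    }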
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to prune the whole datastore at once, with the given parameters.
We need a new api call since this can take a while and we need to start
a worker for this. The existing api call returns a list of removed/kept
snapshots and is synchronous.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
checks for PRIV_DATASTORE_MODIFY, or else whether the auth_id is the backup
owner, and skips the group if neither is the case.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
it is the same as when pruning single groups.
for prune_jobs, we never start the worker if there is no prune option set,
but if we want to call 'prune_datastore' from somewhere else, we
have to check that here again
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>