Compare commits


652 Commits

Author SHA1 Message Date
497a7b3f8e bump version to 2.0.10-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-22 11:36:42 +02:00
71549afa3f cargo: switch from proc-macro pin-project to declarative pin-project-lite
In our simple use cases they both should generate the same code, see
[0] for notable differences. While we cannot drop proc-macro due to
that switch, all of our dependencies that use pinning already use
pin-project-lite, so this allows us to drop a whole crate in general
while not losing anything.

[0]: https://github.com/taiki-e/pin-project-lite#pin-project-vs-pin-project-lite

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-22 11:15:40 +02:00
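For illustration, a minimal sketch of what a pinned wrapper looks like with
pin-project-lite's declarative macro (illustrative only, not code from this
repository):

    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    use pin_project_lite::pin_project;

    pin_project! {
        // The macro generates the projection method without a proc-macro dependency.
        struct Wrapper<F> {
            #[pin]
            inner: F,
        }
    }

    impl<F: Future> Future for Wrapper<F> {
        type Output = F::Output;

        fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
            // project() yields a Pin<&mut F> for the #[pin] field.
            self.project().inner.poll(cx)
        }
    }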
a294588409 docs: troubleshooting: reformat & adapt
Text width should be 80 characters in the docs.

Avoid using relative paths in examples; they only confuse users, as
one has a less specific idea of what the example may do. Rather, use a
"descriptive" example path.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-22 07:41:47 +02:00
5a83930667 docs/technical-overview: add troubleshooting section 2021-09-22 07:29:00 +02:00
c25ea25f0a debug: api ls: make path optional and default to "/"
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 15:11:36 +02:00
f7885eb263 docs: proxmox-backup-debug: add info about the 'api' subcommand
and mention PROXMOX_DEBUG_API_CODE and that it is dangerous.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 15:10:48 +02:00
a48d534d39 docs: add proxmox-backup-debug to the list of command line tools
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 15:10:48 +02:00
bfa942c0cf api: make some workers log on CLI
some workers did not log when called via the CLI

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 15:10:37 +02:00
f54634a890 api: add missing token list match_all property
to have the proper link between the token list and the sub routes
in the api, include the 'tokenname' property in the token listing

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 15:10:30 +02:00
efb7c5348c proxmox-backup-debug: add 'api' subcommands
this provides some generic api call mechanisms like pvesh/pmgsh.
by default it uses the https api on localhost (creating a token
if called as root, else requesting the root@pam password interactively)

this is mainly intended for debugging, but it is also useful for
situations where some api calls do not have an equivalent in a binary
and a user does not want to go through the api

not implemented are the http2 api calls (since it is a separate api and
it wouldn't be that easy to do)

there are a few quirks though, related to the 'ls' command:
I extract the 'child-link' from the property name of the
'match_all' statement of the router, but this does not
always match the property from the relevant 'get' api call,
so it fails there (e.g., /tape/drive)

this can be fixed in the respective api calls (e.g. by renaming
the parameter that comes from the path)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 15:10:30 +02:00
d6fcc1170a move proxmox-backup-debug back to main crate
we want to add something to it that needs access to the
proxmox_backup::api2 stuff, so it cannot live in a sub crate

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 15:10:30 +02:00
3f742f952a server: refactor abort_local_worker
we'll need this outside the module

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 15:10:30 +02:00
84af82e8cf rename pbs-systemd to proxmox-systemd 2021-09-21 10:06:27 +02:00
48109c5354 pbs-systemd: do not depend on pbs-tools
Instead, copy a few lines of nom helper code, and implement
a simple run_command helper.
2021-09-21 10:06:27 +02:00
fd18775ac1 worker_state: move tasktype() code to src/api2/node/tasks.rs
Because this is API-related code, and only used there.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:47:48 +02:00
e678a50ea1 buildsys: drop double-build hack to avoid linkage issues
basically a (semantic) revert of commit
991be99c37 "buildsys: workaround
linkage issues from openid/curl build server stuff separate"

This is no longer required because we moved the proxmox_restore_daemon
code into an extra crate (previous commit).

Originally-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
6523588c8d move proxmox_restore_daemon code into extra crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
6fbf0acc76 move src/server/rest.rs to proxmox-rest-server crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
36b7085ec2 rest server: cleanup auth-log handling
Handle auth logs the same way as the access log.
- Configure with ApiConfig
- CommandoSocket command to reload auth-logs "api-auth-log-reopen"

Inside API calls, we now access the ApiConfig using the RestEnvironment.

The openid_login API call now also logs failed logins and returns
http_err!(UNAUTHORIZED, ..) for them.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
1b1a553741 rest server: do not use pbs_api_types::Authid
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
98b7d58b94 rest server: return UserInformation from ApiAuth::check_auth
This needs impl UserInformation for Arc<CachedUserInfo>, which is implemented
in proxmox 0.13.2.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
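A rough sketch of the kind of forwarding impl this relies on; the actual
UserInformation trait lives in the proxmox crate, and the method shown here is
only an assumed placeholder:

    use std::sync::Arc;

    trait UserInformation {
        fn is_superuser(&self, auth_id: &str) -> bool;
    }

    // Forward the trait through Arc so an Arc<CachedUserInfo> can be handed out
    // where `impl UserInformation` is expected.
    impl<T: UserInformation> UserInformation for Arc<T> {
        fn is_superuser(&self, auth_id: &str) -> bool {
            (**self).is_superuser(auth_id)
        }
    }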
7fa9a37c7c make get_index an ApiConfig property (callback)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
f533d16ef6 rest server: simplify get_index() method signature
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
778c7d954b move normalize_uri_path and extract_cookie to proxmox-rest-server crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
605fe2e7e7 move src/tools/compression.rs to proxmox-rest-server crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
1b552c109d move src/server/formatter.rs to proxmox-rest-server crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
d4d49f7325 move src/server/environment.rs to proxmox-rest-server crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
8bca935f08 move src/tools/daemon.rs to proxmox-rest-server workspace
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
fd6d243843 move ApiConfig, FileLogger and CommandoSocket to proxmox-rest-server workspace
ApiConfig: avoid using  pbs_config::backup_user()
CommandoSocket: avoid using  pbs_config::backup_user()
FileLogger: avoid using  pbs_config::backup_user()
- use atomic_open_or_create_file()

Auth Trait: moved definitions to proxmox-rest-server/src/lib.rs
- removed CachedUserInfo parameter
- return user as String (not Authid)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
037f6b6d5e start new proxmox-rest-server workspace
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
8eef31724f api: disks/directory: add 'name' property to list of mounts
so that we actually have the property that 'match_all' refers to for
the templated API path.

This is mostly for improving usage of the WIP pbs-shell, i.e., its
`ls` command; it has no other functional/semantic impact.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-15 11:59:02 +02:00
2de1b06a06 api: disks/directory: factor out BASE_MOUNT_DIR path
will be reused in the next commit

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-15 11:54:18 +02:00
a332040a7f api: nodes: rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-15 11:42:28 +02:00
957133077f api2: nodes: add missing node list api call
to have an api call for api path traversal

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-09-15 11:32:58 +02:00
36c6e7bb82 fix tests/worker-task-abort.rs - correctly spawn command socket
And wait for the task.

Note: The test is still ignored (but works now when run as root)
2021-09-14 10:42:44 +02:00
ccc3896ff3 avoid type re-exports 2021-09-14 08:35:43 +02:00
cef5c72682 move src/tape/helpers/snapshot_reader.rs to src/backup/snapshot_reader.rs 2021-09-14 07:42:06 +02:00
51a2d9e375 fix refs in generated docs 2021-09-13 13:40:20 +02:00
048b43af24 split tape code into new pbs_tape workspace 2021-09-13 12:54:59 +02:00
bfd2b47649 buildsys: cargo build: avoid redundant "--bin pxar" argument
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-13 12:11:24 +02:00
67a5cf4714 fix regression tests 2021-09-10 12:45:06 +02:00
6227654ad8 more api type cleanups: avoid re-exports 2021-09-10 12:25:32 +02:00
e384f16a19 proxmox-tape: add 'force-media-set' also to cli
we have it in the api and gui, but it was missing from the cli

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-09-10 10:06:21 +02:00
89725197c0 move PruneOptions to pbs_api_types workspace 2021-09-10 09:21:27 +02:00
e7d4be9d85 move datastore config to pbs_config workspace 2021-09-10 08:40:58 +02:00
ba3d7e19fb move user configuration to pbs_config workspace
Also moved memcom.rs and cached_user_info.rs
2021-09-10 07:09:04 +02:00
b65dfff574 cleanup User configuration: use Updater 2021-09-09 13:14:28 +02:00
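The Updater pattern used by these cleanup commits boils down to pairing a config
struct with an all-optional "updater" twin; in the real code the twin is generated
by proxmox's derive macro, the hand-written sketch below only illustrates the
shape (all names are assumptions):

    #[derive(Debug, Clone)]
    struct UserConfig {
        comment: String,
        enable: bool,
    }

    // What the derived updater conceptually looks like: every field becomes
    // optional, so an update API call only sends the fields it wants to change.
    #[derive(Debug, Default)]
    struct UserConfigUpdater {
        comment: Option<String>,
        enable: Option<bool>,
    }

    impl UserConfig {
        fn update_from(&mut self, updater: UserConfigUpdater) {
            if let Some(comment) = updater.comment {
                self.comment = comment;
            }
            if let Some(enable) = updater.enable {
                self.enable = enable;
            }
        }
    }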
8cc3760e74 move acl to pbs_config workspaces, pbs_api_types cleanups 2021-09-09 10:50:08 +02:00
1cb08a0a05 move token_shadow to pbs_config workspace
Also moved out crypt.rs (libcrypt bindings) to pbs_tools workspace.
2021-09-08 14:00:14 +02:00
6f4228809e move network config to pbs_config workspace 2021-09-08 12:22:48 +02:00
5af3bcf062 changer config cleanup: use Updater 2021-09-08 09:29:01 +02:00
67d00d5c0e drop proxmox-backup-debug package, use server package instead
The datastore/backup debug helpers should always be available; they
can help a lot in dire times, so making them available directly via
the server package (alongside the manager CLI tool) is nicer for the
user.

Additionally, building a package can be quite time consuming in this
repo, as some tools like dwarves and other debug-symbol tooling have
to scan the quite big Rust binaries. So dropping a binary package
shaves off a noticeable bit of build time too.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-08 09:11:04 +02:00
cdc83c4eb2 tape job cleanup: use Updater 2021-09-08 08:55:55 +02:00
ffa403b5fd verify job cleanup: use Updater/flatten 2021-09-08 08:40:32 +02:00
5bd77f00e2 sync job cleanup: use Updater/flatten 2021-09-08 08:28:09 +02:00
802189f7f5 move verify.rs to pbs_config workspace 2021-09-08 08:01:07 +02:00
a4e5a0fc9f move sync.rs to pbs_config workspace 2021-09-08 06:57:23 +02:00
58bfa3b19c remove dead code
backup_user() and backup_group() are now in pbs_config workspace
2021-09-08 06:34:44 +02:00
f9c0a94140 buildsys: set pkg-buildcfg version automatically
the 'build' target now fixates the pbs-buildcfg version to
$(DEB_VERSION_UPSTREAM)

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-07 13:54:20 +02:00
e3619d4101 moved tape_job.rs to pbs_config workspace 2021-09-07 12:40:15 +02:00
5839c469c1 move tape_encryption_keys.rs to pbs_config workspace 2021-09-07 10:37:08 +02:00
bbdda58b35 moved key_derivation.rs from pbs_datastore to pbs-config/src/key_config.rs
Also moved pbs-datastore/src/crypt_config.rs to pbs-tools/src/crypt_config.rs.
We do not want to depend on pbs-api-types there, so I use [u8;32] instead of
Fingerprint.
2021-09-07 10:12:17 +02:00
ed2080762c move data_blob encode/decode from crypt_config.rs to data_blob.rs 2021-09-07 10:00:05 +02:00
45d5d873ce move Kdf and KeyInfo to pbs_api_types workspace 2021-09-07 09:59:59 +02:00
f46806414a tape/inventory: fix the tape tests as user, by mocking the lock
locking during the tests as a regular user failed because we try to
chown to the backup user (which is not always possible).

Instead, do not lock at all, by implementing 'open_backup_lockfile' with
'create_mocked_lock'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-07 08:42:04 +02:00
ebf34e7edd pbs-config: add 'create_mocked_lock' helper
by making the field an Option and making it None in the mocked case.
This function is only intended for testing and is hidden from the docs.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-07 08:42:02 +02:00
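A minimal sketch of the mocked-lock idea described above, assuming a guard type
that wraps an optional file handle (the actual names in pbs-config may differ):

    use std::fs::File;

    // None means "mocked": nothing is locked, but callers still get a guard value.
    pub struct BackupLockGuard {
        _file: Option<File>,
    }

    /// Only intended for tests, therefore hidden from the generated docs.
    #[doc(hidden)]
    pub fn create_mocked_lock() -> BackupLockGuard {
        BackupLockGuard { _file: None }
    }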
aad2d162ab move media_pool config to pbs_config workspace 2021-09-06 08:56:04 +02:00
68149b9045 zsh: fix completions
seems like there was a typo in these from the beginning.

also fixes the wrong function name for proxmox-file-restore completion

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2021-09-03 10:29:48 +02:00
1ce8e905ea move drive config to pbs_config workspace
Also moved the tape type definitions to pbs_api_types.
2021-09-03 09:10:18 +02:00
ccb3b45e18 add missing file pbs-api-types/src/remote.rs 2021-09-02 17:36:13 +02:00
6afdda8832 move remote config into pbs-config workspace 2021-09-02 14:25:15 +02:00
2121174827 start new pbs-config workspace
moved src/config/domains.rs
2021-09-02 12:58:20 +02:00
df12c9ec4e add proxmox-backup-debug debian package
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-02 11:25:50 +02:00
4c1b776168 another import cleanup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-01 14:46:01 +02:00
42dad3abd3 fixup imports in tests and examples
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-01 12:32:21 +02:00
6c76aa434d split proxmox-file-restore into its own crate
This also moves a couple of required utilities such as
logrotate and some file descriptor methods to pbs-tools.

Note that the logrotate usage and run-dir handling should be
improved to work as a regular user as this *should* (IMHO)
be a regular unprivileged command (including running
qemu given the kvm privileges...)

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-01 12:23:29 +02:00
e5f9b7f79e split out proxmox-backup-debug binary
and introduce pbs_tools::cli module

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-31 14:45:48 +02:00
dd2162f6bd more import cleanups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-31 14:01:03 +02:00
cabdabba3d fixup imports in debug binary
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-31 13:29:06 +02:00
3e593a2459 add index recovery to pb-debug
Adds the possibility to recover data from an index file. Options:
 - chunks: path to the directory where the chunks are saved
 - file: the index file that should be recovered (must be either .fidx or
   .didx)
 - [opt] keyfile: path to a keyfile; if the data was encrypted, a keyfile is
   needed
 - [opt] skip-crc: boolean; if true, read chunks won't be verified against their
   CRC sum, which increases the restore speed by a lot

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-31 13:19:56 +02:00
7c5287bb95 add file inspection to pb-debug
Adds the possibility to inspect .blob, .fidx and .didx files. For index
files a list of the referenced chunks will be printed in addition to
some other information. .blob files can be decoded into a file or directly
into stdout. Without decode the tool just prints the size and encryption
mode of the blob file. Options:
 - file: path to the file
 - [opt] decode: path to a file or stdout (-); if specified, the file will be
   decoded into the specified location [only for blob files, no effect
   with index files]
 - [opt] keyfile: path to a keyfile, needed if decode is specified and the
   data was encrypted

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-31 13:19:54 +02:00
7c72ae04f1 add chunk inspection to pb-debug
Adds the possibility to inspect chunks and find indexes that reference the
chunk. Options:
 - chunk: path to the chunk file
 - [opt] decode: path to a file or to stdout (-); if specified, the
   chunk will be decoded into the specified location
 - [opt] digest: needed when searching for references; if set, it will
   be used for verification when decoding
 - [opt] keyfile: path to a keyfile, needed if decode is specified and
   the data was encrypted
 - [opt] reference-filter: path in which indexes that reference the
   chunk should be searched; can be a group, a snapshot or the whole
   datastore. If not specified, no references will be searched
 - [default=true] use-filename-as-digest: use the chunk filename as digest
   if no digest is specified

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-31 13:19:51 +02:00
86582454e8 make api2::helpers::list_dir_content a CatalogReader method
this is its natural place and everything required is already
part of the catalog module

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-31 11:29:17 +02:00
013b1e8bca move some more API types
ArchiveEntry -> pbs-datastore
RestoreDaemonStatus -> pbs-api-types

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-31 11:29:17 +02:00
40ff84b138 ui: fix order of prune keep reasons
two things wrong with the old code:
 * the sort function wants -1, 0 and 1 as a return value for a<b, a==b and a>b
   respectively, not a bool (which a < b returns)
 * we have to sort the newest backups first, since the first reason is
   'keep-last'. until now, we sorted the oldest backup first, resulting
   in the older backups getting the 'keep-last' reason

reported by a user in the forum:
https://forum.proxmox.com/threads/prune-ui-and-prune-schedule-simulator-dont-match.94944/

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-08-30 15:28:25 +02:00
b2065dc7d2 cleanup proxmox_backup::backup module
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-30 14:14:04 +02:00
97dfc62f0d remote config: derive and use Updater
Defined a new struct RemoteConfig (without name and password). This makes it
possible to base64-encode the password in the config, but still allow plain
passwords with the API.
2021-08-30 12:48:45 +02:00
e351ac786d split out proxmox-backup-client binary
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-30 11:39:01 +02:00
7b570c177d move some API return types to pbs-api-types
they'll be required by the api client

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-30 11:39:01 +02:00
6838b75904 Cargo.toml: drop features in 'patch' section
the features array does not need to be repeated here

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-30 11:39:01 +02:00
dbda1513c5 tape: media_pool: derive and use Updater 2021-08-30 11:17:14 +02:00
c62a6acb2e drive config cleanup: derive and use Updater 2021-08-30 10:50:20 +02:00
e4a5c072b4 openid cleanup: derive and use Updater 2021-08-30 09:48:53 +02:00
80f950c05d more Updatable -> UpdaterType fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-25 10:43:58 +02:00
4933b853cd d/control bump
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-25 10:43:58 +02:00
aec1b91eb8 bump proxmox-openid dependency to 0.7.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-25 10:43:58 +02:00
2e2d64fdba bump proxmox dependency to 0.13.0
and with it:
* bump proxmox-http dependency to 0.4.0
* bump proxmox-apt dependency to 0.7.0

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-25 10:43:58 +02:00
a37c8d2431 use ApiType trait
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-25 10:43:58 +02:00
a8a20e9210 use new api updater features
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-25 10:43:58 +02:00
be5b468975 bump version to 2.0.9-2
rebuild to include openid and to actually have a correct pbs-buildcfg
Cargo.toml version..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-08-24 14:48:57 +02:00
9789461363 bump version to 2.0.9-1 2021-08-09 09:54:33 +02:00
9f58e312d7 tape/pool_writer: fix typo
s/wrinting/writing/

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-08-09 09:36:38 +02:00
cffe0b81e3 tape backup: mention groups that were empty
otherwise a user might get a task log like this:

-----
...
found 7 groups
TASK OK
-----

which could confuse users as to why there were no snapshots backed up

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-08-09 09:28:01 +02:00
bb14ed8cab cleanup: simplify next_expired_media() 2021-08-04 11:01:18 +02:00
023adb5945 ui: display next-media-label for tape backup jobs 2021-08-04 09:59:12 +02:00
e5545c9804 cli: proxmox-tape backup-job list: use status api and display next-run an d next-media-label 2021-08-04 09:59:12 +02:00
efe96ec039 tape: compute next-media-label for each tape backup job 2021-08-04 09:59:12 +02:00
1d3ae83359 tape: media_pool: implement guess_next_writable_media() 2021-08-04 09:59:12 +02:00
4bb3876352 tape: lto: increase default timeout to 10 minutes
it seems that for some actions or in some circumstances, two minutes is
simply too short and the command aborts. Increase the default timeout to
10 minutes.

While it should give most commands enough time to finish, in case of a real
failure the procedure now takes up to 5 times longer, but IMHO that's an
OK tradeoff.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-08-03 09:19:13 +02:00
400e90cfbe docs/file-formats: fix typo
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2021-08-03 09:17:33 +02:00
e16c289f50 bump version to 2.0.8-1 2021-08-02 10:35:16 +02:00
140c159b36 bump proxmox-apt to 0.6 in debian/control
Build deps could not be installed

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2021-08-02 09:32:27 +02:00
8be69a8453 api/ui: allow zstd compression for new zpools
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-30 17:51:13 +02:00
9ba4833f3c cargo: update proxmox-apt to v0.6.0
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-30 10:43:40 +02:00
0b12a5a698 api: apt: adapt to further proxmox-apt back-end changes
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-30 10:37:27 +02:00
2eac359430 api: apt: adapt to proxmox-apt back-end changes
It's up to the caller to provide the current release for standard
repository detection/addition.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-30 10:37:27 +02:00
855b55dc14 api2: tape: media: use MediaCatalog::snapshot_list for content listing
this should make the api call much faster, since it is not reading
the whole catalog anymore

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-07-29 13:34:36 +02:00
5ad40a3dd1 tape: media_catalog: add snapshot list cache for catalog
For some parts of the ui, we only need the snapshot list from the catalog,
and reading the whole catalog (which can be multiple hundred MiB) is not
really necessary.

Instead, we write the list of snapshots into a separate .index file. This file
is generated on demand and is much smaller and thus faster to read.
2021-07-29 13:34:31 +02:00
7116a2d9da tape: lock media_catalog file to get a consistent view with load_catalog 2021-07-29 13:34:25 +02:00
0d5e990a62 cleanup: factor out tape catalog path helpers 2021-07-29 13:34:18 +02:00
4f57f4ad84 tape: changer: add tests for decode_element_status_page
a test for a valid status_page, one with excess data
(in the descriptor as well as in the page as a whole)
and a test with too little data

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-28 12:23:21 +02:00
13e13d836f tape: changer: handle libraries that send a wrong amount of data
if the library sends more data than advertised, simply cut it off,
but if it sends less data, bail out (depending on how much data is
missing, trying to parse it could lead to a panic, so bail out early)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-28 12:22:48 +02:00
3ab2432ab6 tape: changer: remove unnecessary inquiry parameter
this is never used, so remove it.
OK, since these are only non-public functions.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-28 12:17:07 +02:00
76e8565076 api2: tape/restore: commit temporary catalog at the end
in 'restore_archive', we reach that 'catalog.commit()' for
* every skipped snapshot (we already call 'commit_if_large' before that)
* every skipped chunk archive (no change in catalog since we do not read
  the chunk archive in that case)
* after reading a catalog (no change in catalog)

in all other cases, we call 'commit_if_large' and return early,
meaning that the 'commit' there was executed too often and
unnecessarily, so move it after the loop over the files, before
finishing the temporary database.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-28 11:28:03 +02:00
a5f30a562b docs: tape: add instructions on how to restore the catalog
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-26 13:41:38 +02:00
a2ef36d445 tape: media_catalog: improve chunk_archive interface
instead of having a public start/end_chunk_archive and register_chunks,
simply expose a 'register_chunk_archive' method, since we always have
a list of chunks wherever we want to add them

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-26 10:18:13 +02:00
9a1ecae0b7 ui: tape/ChangerStatus: improve layout for large libraries
instead of having the grid be as tall as possible and the containing
panel scroll, limit the grid's height to the panel size and scroll the
grid.

this has two advantages:
* if a user has many slots, it is now possible to navigate the other
  grids to the wanted position
* having the grids scroll means it can use ExtJS' buffered renderer,
  which makes the view much more responsive (in case of hundreds of
  slots)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-26 10:12:03 +02:00
42b010174e tape: changer: handle invalid descriptor data from library in status page
We get the descriptor length from the library and use that in
'chunks_exact', which panics on length 0. Catch that case
and bail out, since that makes no sense here anyway.

This could prevent a panic, in case a library sends wrong data.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-26 10:05:37 +02:00
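The panic mentioned here comes from the standard library: slice::chunks_exact()
panics when the chunk size is 0. A small illustrative guard (not the actual
changer code, names are assumptions):

    // Split raw descriptor data into per-descriptor slices, but bail out instead
    // of panicking if the library reported a descriptor length of 0.
    fn split_descriptors(data: &[u8], descriptor_len: usize) -> Result<Vec<&[u8]>, String> {
        if descriptor_len == 0 {
            return Err("library reported a descriptor length of 0".to_string());
        }
        Ok(data.chunks_exact(descriptor_len).collect())
    }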
68e77657e6 datastore config: cleanup code (use flatten attribute) 2021-07-23 12:43:33 +02:00
1b2f851e42 bump version to 2.0.7-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 08:44:41 +02:00
cc99866ea3 restore daemon: rust format
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 08:26:10 +02:00
1ea3f23f7e file restore: improve some comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 08:25:34 +02:00
3f780ddf73 restore daemon: log about doing basic system env setup
debugging history showed that it's surely nice to have more logs about
when stuff happens (and thus fails)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 08:24:30 +02:00
9edf96e6b6 restore daemon: setup backup system user and group
now required as we always enforce lock files to be owned by the
backup user, and the restore code uses such code indirectly as the
REST server module is reused from proxmox-backup-server. Once that is
refactored out we may do away with such things, but until then we need to
have a somewhat complete system env.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 08:19:38 +02:00
73e1ba65ca restore daemon: add setup_system_env helper
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 08:10:55 +02:00
02631056b8 tape: changer: handle missing dvcid information
the dvcid information is not always available, so skip it if it is missing

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-22 12:00:30 +02:00
131d0f10c2 tape: changer: improve error message on wrong counts
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-22 11:37:14 +02:00
f9aa980c7d tape: changer: correctly consume data in decode_element_status_page
instead of 'blindly' trusting the changer to deliver the fields written
in the specification, trust the length data it returns in the header.

we slice the descriptor data into equal-sized chunks of the correct
size, so we do not have to care about the len and empty checks anymore.

this also makes the code that reads the rest of the page obsolete,
since the next descriptor is at the correct offset anyway

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-22 11:37:14 +02:00
ad0364c558 tools: xattr: don't test things beyond our control
whether the kernel allows super-long names or weird
namespace prefixes is not our concern...

also the latter fails under fakeroot

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-22 11:34:40 +02:00
76486eb3d1 bump version to 2.0.6-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-22 10:22:33 +02:00
65ab4ca976 docs: simplify list of ENV var alternative
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-22 10:21:40 +02:00
99a73fad15 doc: Document new environment variables to specify secret values
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-22 10:10:40 +02:00
16a01c19dd support new ENV vars to get secret values through a file or a command
We want to allow passing a secret not only directly through the
environment value, but also indirectly through a file path, an open
file descriptor or a command that can write it to standard out.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-22 10:09:53 +02:00
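A hypothetical sketch of the lookup order such a helper could use; the concrete
variable names and the _FILE/_CMD suffixes are illustrative, not necessarily
the ones the client implements:

    use std::process::Command;

    fn get_secret(name: &str) -> std::io::Result<Option<String>> {
        // 1) direct value in the environment
        if let Ok(value) = std::env::var(name) {
            return Ok(Some(value));
        }
        // 2) value stored in a file
        if let Ok(path) = std::env::var(format!("{}_FILE", name)) {
            return Ok(Some(std::fs::read_to_string(path)?.trim_end().to_string()));
        }
        // 3) value produced by a command writing to standard out
        if let Ok(cmd) = std::env::var(format!("{}_CMD", name)) {
            let output = Command::new("sh").arg("-c").arg(cmd).output()?;
            return Ok(Some(String::from_utf8_lossy(&output.stdout).trim_end().to_string()));
        }
        Ok(None)
    }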
86b8ba448c ui: server administration: repos: add online help
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-22 10:09:53 +02:00
9b8e8012a7 cargo: update proxmox to 0.12.1
For the FS compat improvement in the atomic create file helper

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-22 10:09:53 +02:00
b29292a87b build: unbreak 'nocheck'
to skip test cases for faster builds or in case your local system does
not support running (all) tests..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-21 17:22:18 +02:00
c1feb447e8 tape: changer: sg_pt: fix typo
OK, since it's a private struct

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-21 17:02:16 +02:00
62a0e190cb tape: changer: sg_pt: add SCSI_VOLUME_TAG_LEN const
so that we have fewer 'magic' constants without description

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-21 17:01:37 +02:00
5890143920 api: types: CHANGER_DRIVENUM_SCHEMA: increase maximum drives per changer
to 255. 8 drives per changer was a rather arbitrary limitation and could
well be reached in practice with big libraries.

Although 255 is still an arbitrary limitation, this is much less likely
to be reached in practice.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-21 16:59:02 +02:00
ef4df211ab move CachedChunkReader to pbs-datastore
this was actually still missing from the previous commit

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-21 14:20:03 +02:00
eb5e0ae65a move remaining client tools to pbs-tools/datastore
pbs-datastore now ended up depending on tokio after all, but
that's fine for now

for the fuse code I added pbs-fuse-loop (has the old
fuse_loop and its 'loopdev' module)
ultimately only binaries should depend on this to avoid the
library link

the only things remaining to move out of the client binary are
the api method return types, those will need to be moved to
pbs-api-types...

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-21 14:12:24 +02:00
bbc71e3b02 client: fix panic message
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-21 13:28:55 +02:00
ac81ed17b9 fix regression test file permission problems
By simply using the current user/group instead of backup:backup
2021-07-21 09:30:22 +02:00
89145cde34 bump version to 2.0.5-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-21 09:12:46 +02:00
ef4b2c2470 buildcfg: fix version
now set here, but we really need to automate this soon, it is just too
easy to forget.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-21 09:11:57 +02:00
7190cbf2ac buildsys: run test before compile to avoid clobbering the openid build binaries
dh_auto_test also checks for the build flags used, including any
`--cfg`, so it rebuilds and overwrites our carefully assembled daemon
binaries with openid support as it is run after build and before
install.
So manually ensure the order of first test, then build (argh, hacks
of hacks >.<)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 20:27:40 +02:00
f726e1e0ea buildsys: cargo build target: one binary per line
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 20:27:40 +02:00
6d81e65986 bump version to 2.0.5-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 19:34:49 +02:00
ba5f5083c3 d/control: record fonts-font-awesome dependency for docs
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 19:34:49 +02:00
314db4072c docs: add missing font-awesome link for lto-barcode generator
else it cannot load the icons and does not show them in the action column

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-20 19:34:49 +02:00
baff2324f3 pbs-tools: fix doctest reference to moved cache modules
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 19:15:28 +02:00
02eae829f7 tests: move pxar test to its crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 18:54:23 +02:00
bb77143108 d/control: update build dependencies
needs to be done manually for now..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 18:54:23 +02:00
02cb5b5f80 cargo: bump proxmox-http to 0.3.0
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 18:54:23 +02:00
a301c362e3 add helpers to write configuration files 2021-07-20 18:54:23 +02:00
7526d86419 use new atomic_open_or_create_file
Factor out open_backup_lockfile() method to acquire locks owned by
user backup with permission 0660.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 18:54:23 +02:00
a00888e93f fixup examples
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-20 15:26:25 +02:00
fc5870be53 move channel/stream helpers to pbs-tools
pbs_tools
  ::blocking: std/async wrapping with block_in_place
  ::stream: stream <-> AsyncRead/AsyncWrite wrapping

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-20 11:27:40 +02:00
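A rough sketch of the std/async wrapping idea behind pbs_tools::blocking
(assumed shape, not the actual helper): run blocking std I/O from async code via
tokio's block_in_place, which requires the multi-threaded runtime.

    use std::io::Read;

    async fn read_all_blocking<R: Read>(mut reader: R) -> std::io::Result<Vec<u8>> {
        // Tell the tokio scheduler that this task will block the current worker
        // thread while the synchronous read runs.
        tokio::task::block_in_place(move || {
            let mut buf = Vec::new();
            reader.read_to_end(&mut buf)?;
            Ok(buf)
        })
    }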
3c8c2827cb move required_X_param to pbs_tools::json
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-20 11:09:52 +02:00
6c221244df move lru cachers to pbs-tools
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-20 10:57:22 +02:00
38629c3961 move ChunkStream to pbs-client
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-20 10:52:21 +02:00
513d019ac3 issue banner: avoid depending on proxmox crate for hostname
While this slightly duplicates code, we just do not profit from the
central, lazy static variant here, as that is only really useful in
daemons to avoid doing frequent syscalls there.

proxmox just pulls in far too much (e.g., tokio) and duplicating that
one line of simple code has no real maintenance cost, so just go for
that and use the nix crate directly.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-19 16:32:50 +02:00
3fa1b4b48c cleanup unused imports
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 15:55:19 +02:00
a6eac535e4 Makefile: fix build.rs reference
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 15:54:53 +02:00
58a3fae773 move pxar binary to separate crate
and move its few remaining proxmox_backup deps out to
pbs-tools

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 15:53:43 +02:00
0889806a3c resolve some more client imports
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 15:03:24 +02:00
51ec8a3c62 move some api types to pbs-api-types
and resolve some imports in the client binary

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 15:01:03 +02:00
a12b1be728 move build.rs and friends to pbs-buildcfg
with this the main crate won't be re-compiled every time a
*binary* is modified

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 14:59:18 +02:00
4d04cd9ab9 comment on test output paths
cargo should be getting a new env var for this soon

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 14:24:13 +02:00
a3399f4337 doc and tests fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 14:16:28 +02:00
2b7f8dd5ea move client to pbs-client subcrate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 12:58:43 +02:00
72fbe9ffa5 move 'wait_for_local_worker' from client to server
this just made no sense in the client

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 10:44:44 +02:00
0be8bce718 d/control: fixup proxmox feature flags
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 10:09:43 +02:00
4805edc4ec move more tools for the client into subcrates
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 10:07:12 +02:00
9eb784076c move more helpers to pbs-tools
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 10:07:12 +02:00
b9c5cd8291 add proxmox-backup-banner binary crate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 10:07:12 +02:00
9008c0c177 bump proxmox-apt dependency
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 10:07:12 +02:00
f027c2146e ui: datastore/Prune: improve title of group prune window
we are not actually pruning the whole datastore, but only the single
group, so set that as a title

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:42:30 +02:00
afbf2e10f3 ui: datastore/Content: add 'Prune All' button
since the api call always starts a real worker, we cannot have a
preview. It would also be very hard to show that for all groups in a
non-confusing way. We reuse the pbsPruneInputPanel and add the dry-run
field there conditionally.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:42:09 +02:00
9805207aa5 api: admin/datastore: add new 'prune-datastore' api call
to prune the whole datastore at once, with the given parameters.
We need a new api call since this can take a while and we need to start
a worker for this. The existing api call returns a list of removed/kept
snapshots and is synchronous.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:40:05 +02:00
8e0b852f24 server/prune_job: add proper permission checks to 'prune_datastore'
checks for PRIV_DATASTORE_MODIFY, or else if the auth_id is the backup
owner, and skips the group if not.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:39:01 +02:00
0052dc6d28 server/prune_job: add 'keep_all' logic to 'prune_datastore'
it is the same as when pruning single groups.
for prune_jobs, we never start the worker if there is no prune option set.
but if we want to call 'prune_datastore' from somewhere else, we
have to check it here again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:38:28 +02:00
61f05679d2 server/prune_job: factor out 'prune_datastore'
we want to use that outside of a prune job

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:36:45 +02:00
9751ef4b36 backup/datastore: refactor check_backup_owner there
and add an 'owns_backup' convenience function

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:36:02 +02:00
0a240aaa9a api: admin/datastore: simplify prune api call
by using the api macro and reusing the PruneOptions from pbs-datastore

this means we can now drop the 'add_common_prune_prameters' macro

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:34:36 +02:00
e0665a64bd client: simplify prune api method
by using the api macro on the async method and reusing the PruneOptions
from pbs-datastore with 'flatten: true'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:34:28 +02:00
dc46aa9a00 pbs-datastore/prune: make PruneOptions an api type
so that we can reuse it from here

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:34:18 +02:00
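A serde-level sketch of the reuse idea in this series: keep the prune options in
one struct and flatten it into the parameter structs that need it. The real code
wires this up through the proxmox #[api] macro and its schemas; the names and
fields below are assumptions:

    use serde::{Deserialize, Serialize};

    #[derive(Serialize, Deserialize, Default)]
    #[serde(rename_all = "kebab-case")]
    pub struct PruneOptions {
        #[serde(skip_serializing_if = "Option::is_none")]
        pub keep_last: Option<u64>,
        #[serde(skip_serializing_if = "Option::is_none")]
        pub keep_daily: Option<u64>,
    }

    #[derive(Serialize, Deserialize)]
    #[serde(rename_all = "kebab-case")]
    pub struct PruneDatastoreParams {
        pub store: String,
        // The options appear as top-level parameters of the API call.
        #[serde(flatten)]
        pub options: PruneOptions,
    }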
ced694589d api-types: move PRUNE_SCHEMA_KEEP_* to pbs-api-types
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:26:09 +02:00
6c053ffc89 tape: changer: sg_pt: make extra scsi request for dvcid
some libraries cannot handle a request with volume tags and DVCID set at
the same time.

So we make 2 separate requests and merge them, since we want to keep
the vendor/model/serial data.

to not overcomplicate the code, add another special type to ElementType

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 08:46:06 +02:00
9f5b57a348 buildsys: Prepare new way for path dependencies
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-15 09:56:32 +02:00
f1c4b8df34 features update
so we can drop default-features in proxmox for build-deps to
be more lean

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-15 09:56:05 +02:00
269e274bb5 d/control: update proxmox b-d
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-07-14 13:51:38 +02:00
bfd357c5a1 depend on proxmox 0.11.6 (changed make_tmp_file() return type) 2021-07-14 13:37:26 +02:00
9517a5759a fix #3526: correctly filter tasks with 'since' and 'until'
The previous assumption was that the Tasks returned by the Iterator are
sorted by the starttime, but that is not actually the case, and
could never have been, since we append the tasks into the log when
they are finished (not started) and running tasks are always iterated
first.

To correctly filter (and simplify the api call) we forgo the
combinators and use a for loop instead. This way we have to do
the since/until checks only once per Task, but have to do the
start/limit counting ourselves.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-14 09:39:14 +02:00
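A simplified sketch of the loop-based filtering described above (field and
parameter names are assumptions): check since/until once per task and do the
start/limit bookkeeping by hand instead of with iterator combinators.

    struct TaskEntry {
        starttime: i64,
    }

    fn filter_tasks(
        tasks: impl Iterator<Item = TaskEntry>,
        since: Option<i64>,
        until: Option<i64>,
        start: usize,
        limit: usize,
    ) -> Vec<TaskEntry> {
        let mut result = Vec::new();
        let mut skipped = 0;
        for task in tasks {
            // since/until are checked exactly once per task
            if since.map_or(false, |s| task.starttime < s) {
                continue;
            }
            if until.map_or(false, |u| task.starttime > u) {
                continue;
            }
            // manual start/limit handling
            if skipped < start {
                skipped += 1;
                continue;
            }
            if result.len() >= limit {
                break;
            }
            result.push(task);
        }
        result
    }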
a5d51b0c4f docs: tape: drop technology preview admonitions
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-13 16:48:05 +02:00
d9822cd3cb fix #3515: file-restore-daemon: allow LVs/PVs with dash in name
LVM replaces any dashes '-' in an LV or PV name with two '--' for the
created device node in /dev/mapper/ to distinguish the separating
character between the PV and LV name.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-07-13 12:07:51 +02:00
66501529a2 file-restore: increase lock timeout on QEMU map
This lock is held during VM startup, so that multiple calls will not
start VMs twice. But this means that the timeout needs to incorporate
the time it might take a VM to boot, so increase it quite a bit.

This could previously lead to "interrupted system call" errors when
accessing backups with many disks.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-07-13 12:07:23 +02:00
2072dede4a api2: tape: restore: add warning for list restore
if an error occurs, the snapshot dirs will already be created, and we
do not clean them up (some might already be finished).

Warn the user that they are not cleaned up.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-13 12:02:01 +02:00
31c94d1645 chunk_store/insert_chunk: add more information to file errors
otherwise this context is missing in some tasks (e.g. tape restore)
and it is unclear where it came from

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-13 11:55:33 +02:00
9ee4c23833 tape: changer: sg_pt: always retry until timeout 2021-07-13 10:39:28 +02:00
a14a1c7b90 ui: tape/BackupOverview: increase timeout for media-set content
a single catalog can be over 100MiB, and a media-set can have multiple
catalogs to read (no technical upper limit). On slow disks, this can
take much longer than 30 seconds (the default timeout).

The real solution would be to have some kind of index only for the
GUI-relevant part, e.g. a table at the beginning of the catalog, or
alternatively a separate file with that info. Until we have such a
solution, increase the timeout as a stopgap.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-13 09:44:17 +02:00
9ef88578af bump version to 2.0.4-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 18:51:41 +02:00
c4c4b5a3ef auth: 'crypt' is not thread safe
According to crypt(3):
"crypt places its result in a static storage area, which will be
overwritten by subsequent calls to crypt. It is not safe to call crypt
from multiple threads simultaneously."

This means that multiple login calls as a PBS-realm user can collide and
produce intermittent authentication failures. A visible case is for
file-restore, where VMs with many disks lead to just as many auth-calls
at the same time, as the GUI tries to expand each tree element on load.

Instead, use the thread-safe variant 'crypt_r', which places the result
into a pre-allocated buffer of type 'crypt_data'. The C struct is laid
out according to 'lib/crypt.h.in' and the man page mentioned above.

Use the opportunity and make both arguments to the rust 'crypt' function
take a &[u8].

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-07-12 18:38:48 +02:00
0ed40b19c7 tape: changer: sg_pt: query element types separately
Some changers do not like the DVCID bit when querying non-drives;
this includes querying 'all' elements.

To circumvent this, we query each type by itself (like mtx does it),
and only add the DVCID bit for drives (Data Transfer Elements).

Reported by a user in the forum:
https://forum.proxmox.com/threads/ibm-3584-ts3500-support.92291/

and limit to 1000 elements per request.
(Because some changers limit that request with the options we set)

Instead of checking if the data len was equal to the allocation_len
to get more data, we count the returned elements and compare
that with the number we requested.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-07-12 18:19:26 +02:00
a0cd0f9cec change tape drive lock path
The new kernel has stricter checks on tmpfs with the sticky bit on directories, so some
commands (e.g. proxmox-tape changer status) fail when executed as root, because
permission checks fail when locking the drive.

This patch moves the drive locks to /run/proxmox-backup/drive-lock.

Note: This is incompatible with the old locking mechanism, so users should not
run tape backups during the update (or running backups can fail).
2021-07-12 17:26:49 +02:00
49e47c491b d/postinst: drop some legacy update handling
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 16:14:28 +02:00
424d2d68d3 buildsys: try to avoid duplicate build due to "phony" docs dependency
Make the docs target depend directly on the docs-only required
binaries and add a new intermediate ".do-cargo-build" target that is
explicitly not a PHONY target.

That avoids one extra set of full builds.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 13:19:20 +02:00
415690a0e7 bump version to 2.0.3-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 09:53:07 +02:00
2c0abe9234 Revert "api: access: domains: add ExtraRealmInfo and RealmInfo structs"
This reverts commit da7ec1d2af.

not necessary, since we have the api in config/access/openid

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 09:53:07 +02:00
2649c89358 Revert "api: access: domains: add get/create/update/delete domain call"
This reverts commit 5117cf4f17.

we already have that in api2/config/access

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 09:53:07 +02:00
bbd34d70d5 api: config: access: openid: use better Privilege Realm.Allocate
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 09:53:07 +02:00
9779ad0b00 api: config: access: openid: use correct parameter for matching
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 09:53:07 +02:00
70fd0652a1 ui: panel/AccessControl: define baseUrl and useTypeInUrl for AuthView
both are not the default

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 09:53:07 +02:00
6b85671dd2 buildsys: fixup clean target
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 09:35:05 +02:00
82bdf6b5e7 api: tfa: module path cleanup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-12 08:43:14 +02:00
ba2679c9d7 ui: datastore content: style edit notes pencil like action-col icon
as those have a hover effect and use dark-grey vs. the quite "harsh"
looking plain black. We need to override the margin though, as otherwise
the floated layout adds another line.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 07:57:40 +02:00
8866cbccc8 ui: update group notes: fix obj access and rewrite to async
eslint is configured to not allow using quoted object keys if they
could be just passed in dot notation, e.g.,
wrong: `group["comment"]`
good:  `group.comment`

It's not a big problem, but eslint fails the build with the wrong one,
so this needs to be fixed anyway.

Also, rewrite to async, shorter and less indentation

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 07:55:02 +02:00
b3477d286f d/control: bump versioned dependency to widget-toolkit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 07:20:45 +02:00
68e2ea99ba ui: add support for notes on backup groups
Currently done in a slightly hacky way, in a separate API call following the
initial list_snapshots, as we previously didn't call list_groups at all
and instead calculated the groups from the snapshots.

This calls it async and updates the view with group comments when data
arrives. The editor is simply reused with the 'group-notes' API call,
since the semantics are the same.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-07-12 07:13:44 +02:00
d6688884f6 api: add support for notes on backup groups
Stored in atomically-updated 'notes' file in backup group directory.
Available via dedicated GET/PUT API calls, as well as the first line
being included in list_groups (similar to list_snapshots).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 07:13:28 +02:00
7d3482f5bf ui: node status: fix font-awesome icon size
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 06:56:09 +02:00
7a39b41c20 ui: node status: reduce padding like in PVE
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 06:55:30 +02:00
4672273fe6 ui: dashboard: show node's repository/subscription status
Mostly copied from PVE, slightly adapted to be consistent with other
things in the dashboard, e.g. use a store for the repository info.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-12 06:29:24 +02:00
01284de0b2 ui: window/Settings: add summarycolumns settings
like in pve

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 06:29:21 +02:00
b20368ee1b ui: panel/NodeInfo: make it like in pve
this changes the node info panel to a similar layout as in pve,
with the ksm sharing and version field removed

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 06:29:18 +02:00
e584593cb5 ui: factor out NodeInfoPanel
so that Dashboard.js will be less cluttered when we add more information
there.

No functional change, but reworked the fingerprint button disabling to
use a property of the view instead of a viewmodel

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 06:29:14 +02:00
069a6e28a7 ui: tapeRestore: make window non-resizable
While it would be nice to be able to resize that window for more
snapshots/datastores in view, this would need quite some reworking on the
input panel side. So for now, disable resizing of that window, otherwise
the grids look weird as they only scale horizontally but not vertically.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 06:24:01 +02:00
8fab19da73 fix #3447: ui: Dashboard: disallow selection of datastore statistics row
since we cannot do anything with a selected row anyway, simply
disallow it

this avoids having the row in the same color as the progressbar, without
being able to deselect the row again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 06:18:45 +02:00
991be99c37 buildsys: workaround linkage issues from openid/curl build server stuff separate
this blows up build times, but we do not plan on using it longer
than required (i.e., the server is finally split into its own binary
crate providing only those binaries).

Note, using `cargo b --release` to build is naturally unaffected by
this change, so for dev builds just continue to use that.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 06:16:32 +02:00
1900d7810c buildsys: mark clean targets phony and split out deb pkg-clean
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-11 13:25:39 +02:00
6b5013edb3 rest: log response: avoid unnecessary mut on variable
a match expresses the fallback slightly nicer and needs no mut,
which is always nice to avoid.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-11 13:05:19 +02:00
f313494d48 d/rules: drop dh_dwz override, handled now better
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=933541#17

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-11 13:00:12 +02:00
353dcf1d13 ui: access: revert icon swap, use realm over auth.
Moving icons around is not too ideal for people accustomed to the old
ones, at least if they are used for a new component on the same view.

Rather use the address-book icon, which is also used for adding a new
realm in PVE, we can rather switch over PVE to that and the text
"Realms", as that is also the label one sees when logging in, so a
better fit to keep that consistent.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-11 12:53:45 +02:00
3006d70ebe ui: use Async tools from widget toolkit
The api2 one passes the whole response (for more flexibility) on
reject, so we need to adapt to that.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 16:53:12 +02:00
681e096448 ui: adapt to widget toolkit changes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 16:52:25 +02:00
ac9a9e8002 ui: add /access/domains to PermissionPathsStore
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-09 15:36:54 +02:00
ecbc385b7b ui: add Authentication tab to Access Control
so that users can add/edit/delete realms

changes the icon of tfa to 'id-badge' so that we can keep the same icon
for authentication as pve and not have duplicate icons

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-09 15:36:54 +02:00
5117cf4f17 api: access: domains: add get/create/update/delete domain call
modeled like our other section config api calls.
Two drawbacks of doing it this way:
* we have to copy some api properties again for the update call,
  since not all of them are updateable (username-claim)
* we only handle openid for now, which we would have to change
  when we add ldap/ad

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-09 15:36:54 +02:00
da7ec1d2af api: access: domains: add ExtraRealmInfo and RealmInfo structs
these will be used as parameters/return types for the read/create/etc.
calls for realms

for now we copy the necessary attributes (only from openid) since
our api macros/tools are not good enough to generate the necessary
api definitions for section configs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-09 15:36:54 +02:00
934de1d691 config: acl: add PRIV_REALM_ALLOCATE
will be used for realm creation/update/deletion

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-09 15:36:54 +02:00
0c27d880b0 api: access: domains: add BasicRealmInfo struct and use it
to have better type safety and as preparation for adding more types

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-09 15:36:54 +02:00
be3a0295b6 client: import updates
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-09 15:32:12 +02:00
aa2838c27a move client::pull to server::pull
it's not used by the client and not part of the client, it
just makes use *of* the client, but is used on the
datastore/server...

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-09 15:17:53 +02:00
ea584a7510 move more api types for the client
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-09 15:17:53 +02:00
ba0ccc5991 move some tools used by the client
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-09 15:17:52 +02:00
75f83c6a81 move some api types and resolve imports
in preparation of moving client & proxmox_client_tools out
into a pbs-client subcrate

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-09 15:17:52 +02:00
0dda5a6695 ui: add APT repositories
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-09 13:45:45 +02:00
289738dc1a api: apt: add endpoints for adding/changing repositories
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-09 13:45:45 +02:00
d830804f02 api: apt: add repositories call
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-09 13:45:45 +02:00
82cc4b56e5 depend on proxmox-apt
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-09 13:45:45 +02:00
923f94a4d7 api: access: openid: add PROXMOX_BACKUP_RUN_DIR_M
otherwise it does not compile with 'RUSTFLAGS="--cfg openid"'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-09 13:03:32 +02:00
bbff317aa7 api: disk list: sort by name
So callers get more stable results. Most noticeable, the disk list in
the web UI doesn't jump around upon reloading, and while sorting could
be done directly there, like this other callers get the benefit too.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-09 13:02:30 +02:00
20429238e0 disks: also check for file systems with lsblk
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-09 13:02:30 +02:00
364299740f disks: refactor partition type handling
in preparation to also get the file system type from lsblk.

Co-developed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-09 13:02:29 +02:00
b81818b6ad subscription: set higher-level error to message instead of bailing
While the PVE one "bails" too, it has an eval around those and moves
the error to the message property, so let's do so too, to ensure a user
can force an update on a too-old subscription

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 12:43:23 +02:00
2f02e431b0 moving more code to pbs-datastore
prune and fixed/dynamic index

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-09 10:40:14 +02:00
e64f38cb6b move chunk_stat, read_chunk to pbs-datastore
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-09 10:40:14 +02:00
ae24382634 bump version to 2.0.2-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-08 14:44:26 +02:00
82cae19d19 ui: datastore/OptionView: only navigate up when we removed the datastore
and not on window close

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-08 14:41:13 +02:00
3f5fbc5620 ui: datastore edit: make keep-last label like the others
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-08 14:06:26 +02:00
000e6cad5c ui: TapeRestore: mark datastore selector as 'not a form field'
since extjs 7.0 those will get picked up by our query logic and
sent to the backend. Prevent that by setting isFormField to false
(we assemble the values differently)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-08 14:05:21 +02:00
49f44cedbf api: config: delete datastore: also remove tape backup jobs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-08 12:15:59 +02:00
eb1c59cc2a api: add keep-job-configs flag to datastore remove endpoint
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Suggested Fixes:
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-08 12:15:50 +02:00
c7d032fc17 ui: use task list component from widget toolkit
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-08 11:47:44 +02:00
73b77d4787 ui: tasks: use format_task_status
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-08 11:43:43 +02:00
67466ce564 ui: MainView: fix redirectTo call
redirectTo now takes an object parameter in extjs 7.0

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-08 11:43:43 +02:00
4e0faf5ef3 ui: use isActionDisabled
isDisabled is deprecated for actions in actioncolumns
(it produces a warning for now)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-08 11:43:43 +02:00
c23192d34e move chunk_store to pbs-datastore
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 14:37:47 +02:00
83771aa037 move tools::process_locker to pbs-tools
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 14:16:34 +02:00
95f9d67ce9 move UPID to pbs-api-types, add UPIDExt
pbs-server side related methods are added via the UPIDExt
trait

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 13:51:03 +02:00
314d360fcd buildsys: run tests on entire workspace by default
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 12:17:10 +02:00
f8a74456cc test fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 12:17:10 +02:00
4906bac10f linking fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 11:59:33 +02:00
86c831a5c3 fixup examples
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 11:49:42 +02:00
a5951b4f38 move manifest and backup_info to pbs-datastore
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 11:34:56 +02:00
f75292bd8d move tools::json to pbs-tools
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 11:34:56 +02:00
bfff4eaa7f move backup id related types to pbs-api-types
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 11:34:56 +02:00
067dc06dba add pbs-systemd: move string and unit handling there
the systemd config/unit parsing stays in pbs for now since
that's not usually required and uses our section config
parser

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 11:34:56 +02:00
18cdf20afc move tools::nom to pbs-tools
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 10:08:26 +02:00
e57841c442 move run_command to pbs-tools
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 10:04:05 +02:00
751f6b6148 move userid types to pbs-api-types
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 09:53:48 +02:00
3c430e9a55 move id and single line comment format to pbs-api-types
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 09:49:38 +02:00
155f657f6b move TaskState trait to pbs-datastore
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 09:24:39 +02:00
86fb38776b add pbs-api-types subcrate, move key_derivation
move key_derivation to pbs-datastore

pbs-api-types should only contain "basic" types which
* are usually required by clients
* don't depend on pbs-related code directly

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 09:04:09 +02:00
f323e90602 add pbs-datastore module
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-06 15:11:52 +02:00
770a36e53a add pbs-tools subcrate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-06 15:10:37 +02:00
d420962fbc split out pbs-runtime module
These are mostly tokio specific "hacks" or "workarounds" we
only really need/want in our binaries without pulling them in
via our library crates.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-06 14:52:25 +02:00
01fd2447b2 buildsys: don't use debcargo
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-06 14:50:46 +02:00
85beb7d875 tree-wide: switch to using mod.rs
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-06 12:04:52 +02:00
af06decd1b split out pbs-buildcfg module
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-06 12:00:14 +02:00
aceae32baa Cargo.toml: regroup imports
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-06 11:48:28 +02:00
74a4f9efc9 bump version to 2.0.1-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 23:14:54 +02:00
fb1e7a86f4 ui: minimally increase font-size of product title and version
Similar to what we did for Proxmox VE's manager. The main title and
version should stand out a bit more compared to simple nav/button
texts.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 23:13:33 +02:00
dc99315cf9 ui: app: fix openID helper usage and rework style
one really does not need an if and an extra intermediate variable for
assigning a simple bool...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 23:12:08 +02:00
34bd1109b0 bump version to 2.0.1-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:45:00 +02:00
13a2445744 buildsys: docs: clean: also clean generated JS files
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:44:13 +02:00
c968da789e acme: nit code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:44:13 +02:00
3f84541412 fix #3496: acme: plugin: add sleep for dns propagation
the dns plugin config allows for a specified amount of time to wait for
the TXT record to be set and propagated through DNS.

This patch adds a sleep for this amount of time.
The log message was taken from the perl implementation in proxmox-acme
for consistency.

Tested with the powerdns plugin in my test setup.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2021-07-03 21:44:13 +02:00
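
For illustration, a minimal sketch of such a propagation wait, assuming the
delay is given in seconds as validation_delay; the function name and log
message are hypothetical, not the actual plugin code:

    use std::time::Duration;

    /// Wait for the configured DNS propagation delay before validation.
    async fn wait_for_dns_propagation(validation_delay: u64) {
        // log message modeled after the perl implementation mentioned above
        println!(
            "Sleeping {} seconds to wait for TXT record propagation",
            validation_delay
        );
        tokio::time::sleep(Duration::from_secs(validation_delay)).await;
    }
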
4d8bd03668 config: acme: make validation_delay crate public
we need the setting in acme::plugin.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2021-07-03 21:44:13 +02:00
f9bd5e1691 acme: plugin: fix error message
extract_challenge is used by both dns-01 and http-01 challenges.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2021-07-03 21:44:13 +02:00
ecd66ecaf6 restore daemon: use millisecond log resolution
During startup most of the stuff is happening in milliseconds (or
less), so the timestamp granularity of seconds made it hard to tell
if the previous command required 990ms or 1ms, which is quite the
difference in the restore daemon context.

Using micros does not seem to bring much additional information; a
millisecond is already an OK lower time resolution for logging, so
switch only to millis for now.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:44:13 +02:00
33d7292f29 restore daemon: create /run/proxmox-backup on startup
fixes file restore again.

The new Memcom tracking file lives in `/run/proxmox-backup` and is
always created on REST interaction, as CachedUserInfo uses it to
efficiently track config changes, and such a cache is used in each
REST handle_request.

Further, the Memcom infra expects the base run PBS dir to exist
already, which is an OK assumption to have, but in the file-restore
daemon we have a significantly more minimal environment, and the run
dir was simply not required there, where even /run isn't a tmpfs yet.

Fixes fda19dcc6f ("fix CachedUserInfo by using a shared memory version counter")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:43:07 +02:00
f4d371d2d2 REST: set error message extension for bad-request response log
We already send it to the user via the response body, but
log_response does not have (nor wants to have, FWIW) access to the
async body stream, so pass it through the ErrorMessageExtension
mechanism like we do elsewhere.

Note that this is not only useful for PBS API proxy/daemon but also
the REST server of the file-restore daemon running inside the restore
VM, and it really is *very* helpful to debug things there..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:34:03 +02:00
835d0e5dd3 memcom: rustfmt + (trailing) whitespace cleanups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:34:03 +02:00
9a06eb1618 file restore daemon: log about basic steps
to make the log more useful..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:34:03 +02:00
309e14ebb7 file restore daemon: reword warning about manual execution
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:34:03 +02:00
2d48533378 REST: rust fmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:34:03 +02:00
fffd6874e6 bump version to 2.0.0-2
only for the file-restore daemon, other packages were not uploaded!

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 02:15:24 +02:00
0ddd48f0b5 d/control: record breaks for older proxmox-backup-restore-image
> requires a Breaks on the old restore image (else the restore daemon
> crashes because of missing lock/LVM support).
- F.G., mailing list

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 02:13:56 +02:00
cb590dbc07 file-restore-daemon/disk: add LVM (thin) support
Parses JSON output from 'pvs' and 'lvs' LVM utils and does two passes:
one to scan for thinpools and create a device node for their
metadata_lv, and a second to load all LVs, thin-provisioned or not.

Should support every LV-type that LVM supports, as we only parse LVM
tools and use 'vgscan --mknodes' to create device nodes for us.

Produces a two-layer BucketComponent hierarchy with VGs followed by LVs,
PVs are mapped to their respective disk node.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-07-03 01:54:31 +02:00
6c4f762c49 file-restore-daemon/disk: ignore already-mounted error and prefix zpool
Prefix zpool mount paths to avoid clashing with other mount namespaces
(like LVM).

Also ignore "already-mounted" error and return it as success instead -
as we always assume that a mount path is unique, this is a safe
assumption, as nothing else could have been mounted here.

This fixes an issue where a mountpoint=legacy subvol might be available
on different disks, and thus have different Bucket instances that don't
share the mountpoint cache, which could lead to an error if the user
tried opening it multiple times on different disks.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-07-03 01:54:23 +02:00
7a0afee391 file-restore-daemon/disk: fix component path errors
otherwise the path ends in an array ["foo", "bar"] instead of "foo/bar"

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-07-03 01:54:19 +02:00
0dda883994 file-restore-daemon/disk: dedup BucketComponents and make size optional
To support nested BucketComponents, it is necessary to dedup them, as
otherwise two components like:
  /foo/bar
  /foo/baz
will result in /foo being shown twice at the first hierarchy.

Also make the size property based on index and optional, as for example
/foo in the example above might not have a size, and bar/baz might have
differing sizes.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-07-03 01:54:13 +02:00
c2e2078b3f openid: conditionally disable api endpoint
since it pulls in lots of additional linked libraries for all binaries
compiled as part of proxmox-backup. it can easily be re-enabled with
`--cfg openid` added to the RUSTFLAGS env variable.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-07-03 01:52:01 +02:00
26a3450f19 openid: move helper from config to api2
it's not really needed in the config module, and this makes it easier to
disable the proxmox-openid dependency linkage as a stop-gap measure.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-07-03 01:52:01 +02:00
324c069848 d/control: bump versioned dependency for proxmox-widget-toolkit to 3.2-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 01:50:12 +02:00
bd4c5607ca d/control: commit update
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 01:49:50 +02:00
e1d85f1840 ui: login: cleanups, mostly openID related
similar to what was done in PVE.

 - factor out openid_login_param to widget-toolkit as
   getOpenIDRedirectionAuthorization and use it
 - use camel case to match our JS style guide and our framework (and
   basically the rest of the JS world)
 - minor cleanups like moving variable definitions into the single if
   branch they're used in

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 01:46:24 +02:00
1ce1a5e5cc docs: installation: drop debian-release specific note
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 01:44:53 +02:00
6f66a0ca71 docs: faq: update support table
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 01:44:09 +02:00
62a5b3907b docs: initial update to repositories for bullseye
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-02 19:02:13 +02:00
85b6c4ead4 ui: login: fix another bogus gettext usage
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-02 09:31:50 +02:00
a190979c04 ui: login: fix bogus gettext usage
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-02 08:40:14 +02:00
4a489ae3de ui: dashboard/DataStoreStatistics: fix closing <i> tag
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-01 07:51:00 +02:00
9ac8b73e07 tape/drive: fix logging when requesting media
we try to load the correct media in a loop until we find the correct tape.
When encountering an error or a wrong tape, we want to log that (and send
an email, if one is set) to request the correct tape.

While trying to avoid printing the same errors more than once in a row,
we had at least one case (starting with an empty tape in the drive)
which would not print/send any tape request.

Rework that code to use a custom 'TapeRequest' enum, which contains
the state + error message, and a helper that prints and sends an email
when the state changes.

this reduces the change check/log to a single variable, instead of 4
(tried, last_media_uuid, last_error, failure_reason)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-06-30 10:25:48 +02:00
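
A rough sketch of the single-variable state tracking described above; the
variant names and the notify callback are assumptions for illustration, not
the actual proxmox-backup code:

    #[derive(PartialEq)]
    enum TapeRequest {
        None,
        EmptyTape,
        WrongTape(String), // label of the wrongly loaded tape
        Error(String),     // error message from the load attempt
    }

    /// Print/notify only when the request state actually changes.
    fn on_state_change(last: &mut TapeRequest, new: TapeRequest, notify: impl Fn(&str)) {
        if *last != new {
            match &new {
                TapeRequest::None => {}
                TapeRequest::EmptyTape => {
                    notify("drive is empty, please insert the requested tape")
                }
                TapeRequest::WrongTape(label) => notify(&format!(
                    "wrong tape '{}' loaded, please insert the requested tape",
                    label
                )),
                TapeRequest::Error(err) => notify(&format!("error while loading tape: {}", err)),
            }
            *last = new;
        }
    }
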
414be8b675 tape: fix LTO locate_file for HP drives
Add test code to the first locate_file command, compute locate_offset.
Subsequent locate_file commands use that offset.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-06-30 09:08:58 +02:00
fda19dcc6f fix CachedUserInfo by using a shared memory version counter 2021-06-30 08:54:30 +02:00
cd975e5787 ui: implement OpenId login 2021-06-30 08:54:30 +02:00
3b7b1dfb8e api: add openid redirect/login API 2021-06-30 08:54:30 +02:00
d8a47ec649 cleanup user/token is_active() check 2021-06-30 08:54:30 +02:00
252cd3b781 implement new helper is_active_user_id() 2021-06-30 08:54:30 +02:00
0decd11efb cli: add CLI to manage openid realms. 2021-06-30 08:54:30 +02:00
b84d2592fb add API to manage openid realms 2021-06-30 08:54:30 +02:00
0219ba2cc5 check_acl_path: add /access/domains and /access/openid 2021-06-30 08:54:30 +02:00
bbff6c4968 config: new domains.cfg to configure openid realm
Or other realmy types...
2021-06-30 08:54:30 +02:00
bb88c6a29d depend on proxmox-openid-rs 2021-06-30 08:54:30 +02:00
a02466966d update enterprise repository to bullseye
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-28 19:57:50 +02:00
b0fc11804e d/changelog: add actual changelog for initial 2.0 build
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-28 19:40:44 +02:00
d9d81741e3 buildsys: switch to bullseye as upload dist target
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-28 19:10:26 +02:00
9678366102 bump version to 2.0.0-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-28 19:07:46 +02:00
a2c73c78dd buildsys: call dpkg-buildpackage directly in deb-all
else we may double-build

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-28 19:07:15 +02:00
c6a0e7d98e d/control: bump versioned dependency for ExtJS 7.0
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-28 19:05:58 +02:00
85417b2a88 docs: build api-viewer from widget-toolkit-dev
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 14:26:40 +02:00
d738669066 docs: add Toolkit.js to lto-barcode
and generate a single js file for it

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 14:26:40 +02:00
442d6da8fb docs: add Toolkit.js to prune simulator
from proxmox-widget-toolkit-dev and not as normal dependency,
else we would have to ship widget-toolkit on the wiki

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 14:26:40 +02:00
62f10a01db docs/prune-simulator: remove displayField for Calendar Field
in extjs 7.0, specifying displayField overwrites the displayTpl,
which we want to use here, so remove it

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 14:26:40 +02:00
5667b76381 fix #3393 (again): pxar/create: try to read xattrs/fcaps/acls by default
we have a static list of filesystems and their capabilities regarding
file attributes and fs features (e.g. sockets/fifos/etc) which also
includes xattrs, acls and fcaps

if we did not know a filesystem by its magic number (for example cephfs),
we did not even attempt to read xattrs, etc.

this patch adds those flags by default to unknown filesystems, and
removes them when we encounter EOPNOTSUPP (to reduce the number
of syscalls)

with this, we should be able to catch xattrs/acls/fcaps on all
(unknown) fs types that support them

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-28 14:04:22 +02:00
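
A simplified sketch of the optimistic-attempt pattern described above; the
flag constants and the try_getxattr stub are hypothetical stand-ins for the
pxar feature flags and syscall wrappers:

    const FLAG_XATTRS: u32 = 1 << 0;
    const FLAG_ACLS: u32 = 1 << 1;
    const FLAG_FCAPS: u32 = 1 << 2;

    /// Unknown filesystems optimistically get all attribute flags.
    fn default_flags(known_flags: Option<u32>) -> u32 {
        known_flags.unwrap_or(FLAG_XATTRS | FLAG_ACLS | FLAG_FCAPS)
    }

    /// Stand-in for the real getxattr wrapper.
    fn try_getxattr(_path: &std::path::Path) -> std::io::Result<Vec<u8>> {
        Ok(Vec::new())
    }

    fn read_xattrs(path: &std::path::Path, flags: &mut u32) -> std::io::Result<Vec<u8>> {
        match try_getxattr(path) {
            // the filesystem does not support xattrs: drop the flag so we
            // avoid repeating the syscall for every further entry
            Err(err) if err.raw_os_error() == Some(libc::EOPNOTSUPP) => {
                *flags &= !FLAG_XATTRS;
                Ok(Vec::new())
            }
            other => other,
        }
    }
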
d9b318a444 file-restore/disk: support ZFS subvols with mountpoint=legacy
These require mounting using the regular 'mount' syscall.
Auto-generates an appropriate mount path.

Note that subvols with mountpoint=none cannot be mounted this way, and
would require setting the mountpoint property, which is not possible as
the zpools have to be imported with readonly=on.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-28 13:58:41 +02:00
86ce56f193 file-restore/disk: support ZFS pools
Uses the ZFS utils to detect, import and mount zpools. These are
available as a new Bucket type 'zpool'.

Requires some minor changes to the existing disk and partition detection
code, so the ZFS-specific part can use the information gathered in the
previous pass to associate drive names with their 'drive-xxxN.img.fidx'
node.

For detecting size, the zpool has to be imported. This is only done with
pools containing 5 or less disks, as anything else might take too long
(and should be seldomly found within VMs).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-28 13:58:41 +02:00
8d72c2c32e file-restore: increase RAM for ZFS and disable ARC
Even with best efforts at keeping it small, including the ZFS tools
in the initramfs seems to have exhausted the small overhead we had left
- give it a bit more RAM to compensate.

Also disable the ZFS ARC, as it's no use in such a memory constrained
environment, and we cache on the QEMU/rust layer anyway.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-28 13:58:41 +02:00
c48c38ab8c async_lru_cache: fix handling of errors in fetch
The future needs to be removed from the pending map in any case, even if
it returned an error, else all upcoming calls to access this key will
always return the same error.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-28 13:48:26 +02:00
3d3769830b tape/helpers/snapshot_reader: sort chunks by inode (per index)
sort the chunks we want to back up to tape by inode, to gain some
speed on spinning disks. This is done per index, not globally.

This costs a bit of memory, but not too much: about 16 bytes per chunk,
which would mean ~4MiB for a 1TiB index with 4MiB chunks.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 12:16:14 +02:00
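
For illustration, a small sketch of the per-index reordering, assuming each
chunk can be stat'ed via its path; this is a simplification, not the actual
datastore code:

    use std::os::unix::fs::MetadataExt;
    use std::path::PathBuf;

    /// Return (original index, path) pairs reordered by on-disk inode number.
    fn chunks_in_inode_order(chunks: Vec<(usize, PathBuf)>) -> Vec<(usize, PathBuf)> {
        // roughly 16 bytes of sort key per chunk, i.e. ~4MiB for a 1TiB
        // index with 4MiB chunks, as noted above
        let mut keyed: Vec<(u64, usize, PathBuf)> = chunks
            .into_iter()
            .map(|(idx, path)| {
                let ino = std::fs::metadata(&path).map(|m| m.ino()).unwrap_or(u64::MAX);
                (ino, idx, path)
            })
            .collect();
        keyed.sort_by_key(|entry| entry.0);
        keyed.into_iter().map(|(_, idx, path)| (idx, path)).collect()
    }
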
4921a411ad backup/datastore: refactor chunk inode sorting to the datastore
so that we can reuse that information

removing the code that adds chunks to the corrupted list is ok, since
'get_chunks_in_order' returns them at the end of the list,
and we do the same if the loading fails later in 'verify_index_chunks',
so we still mark them corrupt
(assuming that the load will fail if the stat does)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 12:14:52 +02:00
81c767efce proxmox-backup-manager: show task log on datastore create
since the output:
Result: "<UPID>"
is not really interesting, instead show the task log while
the datastore is being created, since it now runs in a worker

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 12:09:55 +02:00
60abf03f05 close #3459: manager: add --ignore-verified and --outdated-after parameters
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Reviewed-By: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 11:03:51 +02:00
dcbf29e71b api: add ignore-verified and outdated-after to datastore verify endpoint
preparatory change for fixing #3459

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Reviewed-By: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 11:03:51 +02:00
037e6c0ca8 verify-job: move snapshot filter into function
preparatory steps for fixing #3459

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Reviewed-By: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 11:03:44 +02:00
c7024b282a d/control: set R-R-R to run binary d/rules targets as root
the build still requires root to make helper binaries setuid

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-06-14 13:02:20 +02:00
90ff75f85c update to zstd 0.6
compatible with libzstd from bullseye.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-06-14 13:01:43 +02:00
2165f0d450 api: define and use REALM_ID_SCHEMA 2021-06-10 11:10:00 +02:00
1e7639bfc4 fixup minimum lru capacity
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-08 10:13:46 +02:00
4121628d99 tools/lru_cache: make minimum capacity 1
Setting this to 0 is not just useless, but breaks the logic horribly
enough to cause random segfaults - better forbid this, to avoid someone
else having to debug it again ;)

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:55 +02:00
da78b90f9c backup: remove AsyncIndexReader
superseded by CachedChunkReader, with less code and more speed

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:46 +02:00
1ef6e8b6a7 replace AsyncIndexReader with SeekableCachedChunkReader
admin/datastore reads linearly only, so no need for cache (capacity of 1
basically means no cache except for the currently active chunk).
mount can do random access too, so cache last 8 chunks for possibly a
mild performance improvement.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:44 +02:00
10351f7075 backup: add AsyncRead/Seek to CachedChunkReader
Implemented as a separate struct SeekableCachedChunkReader that contains
the original as an Arc, since the read_at future captures the
CachedChunkReader, which would otherwise not work with the lifetimes
required by AsyncRead. This is also the reason we cannot use a shared
read buffer and have to allocate a new one for every read. It also means
that the struct items required for AsyncRead/Seek do not need to be
included in a regular CachedChunkReader.

This is intended as a replacement for AsyncIndexReader, so we have less
code duplication and can utilize the LRU cache there too (even though
actual request concurrency is not supported in these traits).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:40 +02:00
70a152deb7 backup: add CachedChunkReader utilizing AsyncLruCache
Provides a fast arbitrary read implementation with full async and
concurrency support.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:37 +02:00
5446bfbba8 tools: add AsyncLruCache as a wrapper around sync LruCache
Supports concurrent 'access' calls to the same key via a
BroadcastFuture. These are stored in a separate HashMap; the LruCache
underneath is only modified once a valid value has been retrieved.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:34 +02:00
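
A condensed sketch of that wrapper idea, with futures::future::Shared
standing in for the BroadcastFuture and a plain HashMap standing in for the
sync LruCache (both are simplifications, not the actual types); it also shows
the pending-map entry being removed on success and on error, matching the
fetch error-handling fix further up in this log:

    use std::collections::HashMap;
    use std::future::Future;
    use std::sync::{Arc, Mutex};

    use futures::future::{BoxFuture, FutureExt, Shared};

    type Value = Arc<Vec<u8>>;
    type PendingFuture = Shared<BoxFuture<'static, Result<Value, String>>>;

    #[derive(Default)]
    struct AsyncLruCache {
        cache: Mutex<HashMap<u64, Value>>,           // stands in for the sync LruCache
        pending: Mutex<HashMap<u64, PendingFuture>>, // in-flight fetches, shared per key
    }

    impl AsyncLruCache {
        async fn access<F, Fut>(&self, key: u64, fetch: F) -> Result<Value, String>
        where
            F: FnOnce() -> Fut,
            Fut: Future<Output = Result<Value, String>> + Send + 'static,
        {
            if let Some(hit) = self.cache.lock().unwrap().get(&key).cloned() {
                return Ok(hit);
            }
            // concurrent callers for the same key share one fetch future
            let fut = {
                let mut pending = self.pending.lock().unwrap();
                pending.entry(key).or_insert_with(|| fetch().boxed().shared()).clone()
            };
            let result = fut.await;
            // always drop the pending entry, even on error, so later callers
            // retry instead of receiving the same failure forever
            self.pending.lock().unwrap().remove(&key);
            if let Ok(ref value) = result {
                // the cache underneath is only touched once a valid value exists
                self.cache.lock().unwrap().insert(key, value.clone());
            }
            result
        }
    }
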
400885e620 tools/BroadcastFuture: add testcase for better understanding
Explicitly test that data will stay available and can be retrieved
immediately via listen(), even if the future producing the data and
notifying the consumers was already run in the past.

Wasn't broken or anything, but helps with understanding IMO.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 09:42:29 +02:00
f960fc3b6f fix #3433: use PVE's wearout logic in PBS
in PVE, the logic for how wearout gets read from the smartctl output was
changed from a vendor -> id map to a sorted list of specific
attribute field names.

copy that list to pbs (in the same order), and use that to get the
wearout

in the future we might want to split the disk logic into its own crate
and reuse it in pve

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-08 08:31:37 +02:00
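
For illustration, a sketch of the field-name based lookup; the list below is
an indicative subset and not necessarily the exact list copied from PVE:

    /// Ordered list of smartctl attribute names that may report wearout.
    static WEAROUT_FIELDS: &[&str] = &[
        "Media_Wearout_Indicator",
        "SSD_Life_Left",
        "Wear_Leveling_Count",
        "Percent_Lifetime_Remain",
    ];

    /// Pick the first matching attribute, in the fixed order above.
    fn wearout_from_smart(attributes: &[(String, f64)]) -> Option<f64> {
        for field in WEAROUT_FIELDS {
            if let Some((_, value)) =
                attributes.iter().find(|(name, _)| name.as_str() == *field)
            {
                return Some(*value);
            }
        }
        None
    }
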
ddfa4d679a ui: tape: DriveSelector: make wider and fine-tune column flex
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-07 16:57:45 +02:00
10e8026786 ui: tape: DriveSelector: code cleanup, group config together
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-07 16:30:57 +02:00
2527c039df ui: tape: TapeBackupJob: use correct default value for pbsUserSelector
if we want the empty value as a valid default value in a combogrid,
we have to explicitly select 'null', else the field will be marked as
dirty

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-07 16:30:29 +02:00
93d8a2044e ui: tape: DriveSelector: do not autoselect the drive
in case an invalid drive was configured, now it marks the field
invalid instead of autoselecting the first valid one

this could have led to users configuring the wrong drive in a
tape-backup-job when they edited one

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-07 16:30:13 +02:00
d2354a16cd client/pull: log snapshots that are skipped because of time
we skip snapshots that are older than the newest snapshot of the group in
the target datastore; log it so the user can know why it is not synced

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-07 10:51:25 +02:00
34ee1f1c76 ui: DataStoreList: add remove button
so that a user can remove a datastore from the gui,
though no data is deleted, this has to be done elsewhere (for now)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-04 09:41:52 +02:00
2de4dc3a81 backup/chunk_store: optionally log progress on creation
and enable it for the worker variants

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-04 09:32:09 +02:00
b90036dadd cleanup: factor out config::datastore::lock_config() 2021-06-04 09:04:14 +02:00
4708f4fc21 api2/config/datastore: change create datastore api call to a worker
so that longer-running creates (e.g. on slow storage) do not
run into a timeout, and we can follow the creation

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-06-04 09:02:05 +02:00
062cf75cdf proxmox-backup-proxy: fix leftover references on datastore removal
when we remove a datastore via api/cli, the proxy
sometimes has leftover references to that datastore in its
DATASTORE_MAP, which include an open filehandle on the
'.lock' file

this prevents unmounting/exporting the datastore even after removal;
only a reload/restart of the proxy helped

add a command to our command socket, which removes all non
configured datastores from the map, dropping the open filehandle

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-06-04 08:22:53 +02:00
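
A minimal sketch of the command's effect on the map; the types are simplified
placeholders for the proxy's actual DATASTORE_MAP handling:

    use std::collections::{HashMap, HashSet};
    use std::sync::{Arc, Mutex};

    struct DataStore; // in the real code this holds the open '.lock' filehandle

    /// Drop every cached datastore that is no longer present in the config.
    fn drop_unconfigured(
        map: &Mutex<HashMap<String, Arc<DataStore>>>,
        configured: &HashSet<String>,
    ) {
        // dropping the Arc closes the '.lock' filehandle once no running
        // task holds another reference to the store
        map.lock().unwrap().retain(|name, _| configured.contains(name));
    }
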
e5950360ca tape/drive: improve tape device locking behaviour
by implementing a custom error type that is either 'TimeOut' or
'Other'.

In the api, check in the worker loop for exactly 'TimeOut' errors and continue only
then. All other errors lead to an aborted task.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-02 17:08:00 +02:00
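
A sketch of the retry logic described above; the names and the sleep interval
are assumptions:

    use std::time::Duration;

    enum LockError {
        TimeOut,
        Other(String),
    }

    /// Keep retrying while the lock merely times out; abort on real errors.
    fn acquire_drive_lock(try_lock: impl Fn() -> Result<(), LockError>) -> Result<(), String> {
        loop {
            match try_lock() {
                Ok(()) => return Ok(()),
                Err(LockError::TimeOut) => {
                    // keep the worker waiting for the drive to become free
                    std::thread::sleep(Duration::from_secs(1));
                }
                Err(LockError::Other(msg)) => return Err(msg), // aborts the task
            }
        }
    }
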
5b358ff0b1 server/prune_job: fix locking during prune jobs
removing the backup dir must acquire the snapshot lock, else it can
happen that we remove a snapshot while it is being restored
or backed up to tape

the original commit that adds the force flag
(c9756b40d1)
mentions that the prune itself checks if the snapshot is in use,
but I could not find such code, so simply set force to false

to avoid failing and aborting the prune job, warn if a snapshot could
not be removed and continue

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-02 17:04:49 +02:00
4c00391d78 ui: dashboard/TaskSummary: add type 'close' to the close tool
otherwise the button is not visible

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-01 16:38:07 +02:00
9594362e35 ui: datastore/DataStoreListSummary: catch and show errors per datastore
so that the update does not get canceled because of a bad datastore;
hide the irrelevant fields in that case

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-01 16:38:06 +02:00
3420029b5e Revert "file-restore-daemon: work around tokio DuplexStream bug"
This reverts commit 75f9f40922, which is
no longer needed now that we use tokio >= 1.6 which contains the proper
fix.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-06-01 10:31:19 +02:00
f432a1c927 bump tokio dependency to 1.6
it contains a bug fix that allows dropping the workaround in

75f9f40922 file-restore-daemon: work around tokio DuplexStream bug

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-06-01 10:30:57 +02:00
e8b32f2d87 bump version to 1.1.9-1 2021-06-01 08:27:18 +02:00
3e3b505cc8 reorder serde usage/derive
this is deprecated with rustc 1.52+, and will become a hard error at
some point:

https://github.com/rust-lang/rust/issues/79202

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-31 14:53:08 +02:00
0bca966ec5 fix typo: s/dies/does/ 2021-05-31 11:01:15 +02:00
84737fb33f lto/sg_tape/encryption: remove non lto-4 supported byte
from the SspDataEncryptionCapabilityPage

it seems we do not need it, since the EXTDECC flag is only used for
determining if the drive is capable of being configured via
ADI (Automation/Drive Interface) which we do not use at all.

this makes the call work with LTO-4 again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-31 10:58:38 +02:00
e21a15ab17 ui: tape: s/Restore Wizard/Restore/
Mostly to avoid an extra translation text for basically no gain.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-27 11:26:46 +02:00
90066d22a0 ui: MainView: use new beforeChangePath signature
subpath can be optional in extjs 7.0, so handle that

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 19:18:40 +02:00
dbf5dad1c4 ui: css: fix text-align pmx-button-badge
this was previously set on the button class, but has since been removed;
add it here to have the badge number centered again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 19:18:40 +02:00
c793da1edc ui: MainView: navigation: use different ui class
by default the treelist gets the 'nav' ui, which in newer extjs
versions has a custom styling (unlike before)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 19:18:40 +02:00
f8735e5988 ui: datastore/Summary: change destroy listener
by using beforedestroy instead of destroy (like we do everywhere else)
to avoid a race condition when the controller has
already removed some handlers on destruction

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 19:18:40 +02:00
e9805b2486 ui: panel/UsageChart: change downloadServerUrl
to not have the sencha url by default

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 19:18:40 +02:00
eb90405a78 ui: form/CalendarEvent: do not set displayField
we use displayTpl here, setting displayField will override the template

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 19:18:40 +02:00
ecf5f468c3 ui: MainView: do not use unnecessary panels
using container here is fine, we do not need panel behaviour which
is more bloated. Removes two ARIA warnings.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 19:18:40 +02:00
51aee8cac8 ui: tape restore wizard: set emptyText to media set selector
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-26 19:10:16 +02:00
7d5049c350 ui: tape restore wizard: always show snapshot grid
looks (almost confusingly) empty otherwise, and there is no real
disadvantage in showing the disabled one until a media-set is selected
and loaded

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-26 19:09:36 +02:00
01a99f5651 ui: tape overview: rename to "Restore Wizard" and use icons
To create a correlation with the restore action column

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-26 19:05:31 +02:00
2914e99ff3 ui: tape overview: use correct icon for Media-Pools
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-26 19:04:07 +02:00
f9b824ac30 ui: tape overview: include more context in restore tooltips
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-26 19:03:35 +02:00
9a535ec77b ui: tape restore: small code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-26 19:00:12 +02:00
ffba023c91 ui: tape/TapeRestore: fix some properties
remove leftovers from when it was a Proxmox.window.Edit, and
add the missing 'modal'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 18:10:57 +02:00
e01689978e ui: tape/TapeRestore: allow preselecting a datastore
for that we need to split the prefilter additions, else
we always filter the snapshots too, and giving 'undefined' filters
all snapshots...

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 18:10:57 +02:00
68ac8976eb ui: tape/TapeRestore: don't send snapshotlist when restoring whole datastores
for the case that the user selects only whole datastores, we do not
want to send an (exhaustive) list of snapshots that get restored,
but we only want to honor the mapping the user gives

this avoids using the backup restore codepath that iterates twice
over the tapes and would generally be slower for a lot of snapshots

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 18:10:57 +02:00
afb790db73 ui: tape/BackupOverview: add generic 'Restore' button
this will open the restore window without anything preselected

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 18:10:57 +02:00
0732de361a ui: tape/TapeRestore: add MediaSetSelector
when no uuid/mediaset is given.
we change a bit how we use the uuid by moving it into the viewmodel
(instead of a simple property on the view) so that we can always
use the selected one

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 18:10:57 +02:00
d455270fa1 ui: tape: add MediaSetSelector
so that we can let the user select a media-set

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 18:10:57 +02:00
1336be16c9 ui: tape/BackupOverview: rename action column to restore
to make it clear that this button is for restore and for
now we do not have any plans to add buttons here

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 18:10:57 +02:00
03380db560 api2/tape: add api call to list media sets
we want a 'media-set' selector in the gui, this makes it
very easy to do and is not as costly as reusing the media list,
since we do not need to iterate over all media (e.g. unassigned)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 18:10:57 +02:00
927ebc702c ui: tape/BackupOverview: expand pools by default
normally, users will not have many tape media pools,
and are more interested in the actual media-sets, so
expand those nodes by default

if the list gets very long, the user can collapse some pools anyway

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-26 18:10:57 +02:00
c24cb13382 api: node/journal: fix parameter extraction of /nodes/node/journal
by extracting them via the api macro into the function signature

this fixes an issue where given 'since' and 'until' values were not
used, since we tried to extract them as 'str' while they were numbers.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-25 13:26:51 +02:00
3a804a8a20 file-restore-daemon: limit concurrent download calls
While the issue with vsock packets starving kernel memory is mostly
worked around by the '64k -> 4k buffer' patch in
'proxmox-backup-restore-image', let's be safe and also limit the number
of concurrent transfers. 8 downloads per VM seems like a fair value.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-25 11:56:43 +02:00
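
As a sketch, the limit can be expressed with a tokio semaphore shared by all
download handlers; the wrapper below is illustrative, not the daemon's actual
handler code:

    use std::sync::Arc;
    use tokio::sync::Semaphore;

    const MAX_CONCURRENT_DOWNLOADS: usize = 8;

    /// Run a download future, but never more than 8 at the same time.
    async fn limited_download<F>(limit: Arc<Semaphore>, download: F) -> Result<(), String>
    where
        F: std::future::Future<Output = Result<(), String>>,
    {
        // waits here until one of the 8 permits is free
        let _permit = limit.acquire().await.map_err(|err| err.to_string())?;
        download.await
    }

    // created once per restore VM, e.g.:
    // let limit = Arc::new(Semaphore::new(MAX_CONCURRENT_DOWNLOADS));
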
1fde4167ea file-restore-daemon: watchdog: add inhibit for long downloads
The extract API call may be active for more than the watchdog timeout,
so a simple ping is not enough.

This adds an "inhibit" API, which will stop the watchdog from completing
as long as at least one WatchdogInhibitor instance is alive. Keep one in
the download task, so it will be dropped once it completes (or errors).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-25 11:56:43 +02:00
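
A compact sketch of such an RAII inhibitor, assuming a simple global counter;
the real daemon's watchdog state is more involved:

    use std::sync::atomic::{AtomicUsize, Ordering};

    static INHIBIT_COUNT: AtomicUsize = AtomicUsize::new(0);

    pub struct WatchdogInhibitor;

    /// Keep the watchdog from firing while the returned value is alive.
    pub fn watchdog_inhibit() -> WatchdogInhibitor {
        INHIBIT_COUNT.fetch_add(1, Ordering::SeqCst);
        WatchdogInhibitor
    }

    impl Drop for WatchdogInhibitor {
        fn drop(&mut self) {
            INHIBIT_COUNT.fetch_sub(1, Ordering::SeqCst);
        }
    }

    /// The watchdog may only trigger when no inhibitor is alive.
    pub fn watchdog_expired(timed_out: bool) -> bool {
        timed_out && INHIBIT_COUNT.load(Ordering::SeqCst) == 0
    }
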
75f9f40922 file-restore-daemon: work around tokio DuplexStream bug
See this PR for more info: https://github.com/tokio-rs/tokio/pull/3756

As a workaround use a pair of connected unix sockets - this obviously
incurs some overhead, albeit not measurable on my machine. Once tokio
includes the fix we can go back to a DuplexStream for performance and
simplicity.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-25 11:56:43 +02:00
e9c2638f90 apt: fix removal of non-existent http-proxy config
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-25 11:54:46 +02:00
338c545f85 tasks: fix typos in API description
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2021-05-25 07:54:57 +02:00
e379b4a31c file-restore-daemon: disk: add RawFs bucket type
Used to specify a filesystem placed directly on a disk, without a
partition table in between. Detected by simply attempting to mount the
disk itself.

A helper "make_dev_node" is extracted to avoid code duplication.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-25 07:53:22 +02:00
3d7ca2bdb9 file-restore-daemon: disk: allow arbitrary component count per bucket
A bucket might contain multiple (or 0) layers of components in its path
specification, so allow a mapping between bucket type strings and
expected component depth. For partitions, this is 1, as there is only
the partition number layer below the "part" node.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-25 07:53:22 +02:00
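
A sketch of the type-to-depth mapping; only the "part" value of 1 is taken
from the description above, the other entries are illustrative assumptions:

    /// Expected number of path components below a bucket type node.
    fn component_depth(bucket_type: &str) -> Option<usize> {
        match bucket_type {
            "part" => Some(1),  // only the partition number below the "part" node
            "raw" => Some(0),   // assumed: filesystem directly on the disk
            "zpool" => Some(1), // assumed
            "lvm" => Some(2),   // assumed: VG, then LV
            _ => None,
        }
    }
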
d34019e246 file-restore-daemon: disk: ignore "invalid fs" error
Mainly just causes log spam, we print a more useful error in the end if
all mounts fail anyway.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-25 07:53:22 +02:00
7cb2ebba79 bump version to 1.1.8-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 18:25:31 +02:00
4e8581950e cargo: bump proxmox-http version to 0.2.1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 18:25:31 +02:00
2a9a3d632e ui: config: move node ops (http-proxy) into existing "Authentication"
Mainly as Config -> Option is a weird name; Authentication has only
one obj. grid, the node options are only the http-proxy for now, and
that is a sort of authentication, so good enough for me for now, but
it should be rethought for 2.0 and/or once more node opts are added

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 18:23:35 +02:00
b6d07fa038 d/control: bump versioned dependency for proxmox-widget-toolkit
for the new gridRows feature the ObjectGrid gained.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 18:23:35 +02:00
4599e7959c ui: rework node-config to static
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 18:23:35 +02:00
82ed13c7d7 ui: add node options under 'Configuration -> Options'
for now only http-proxy lives there, but we will add more options later,
such as
* email from
* default gui language

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-21 18:23:35 +02:00
5aaa81ab89 docs: add short initial http-proxy docs
better than nothing and something to point to in the UI

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 18:23:35 +02:00
8a06d1935e ui: webauthn view: code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 18:23:35 +02:00
f44254b4bd ui: hyphenate "Media-Set" to make it clearer that its one noun
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 15:53:50 +02:00
07875ce13e ui: tape content: set icon-class for reload button
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 15:51:57 +02:00
98dc770efa ui: tape restore: drop (now) unused references
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 15:45:15 +02:00
8848f1d487 ui: tape restore: avoid component/value lookup, use parameters
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 15:44:16 +02:00
5128ae48a0 tape: restore: cope with not fully instantiated components
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 15:24:52 +02:00
104ae6093a ui: tape: small code/style cleanups
It was not actually bad, so these are quite opinionated to be honest,
but at least xtype props must go first, and variable declarations
should be as near as possible to their actual use as long as the
code stays sensibly readable/short.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 15:19:51 +02:00
e830d63f6a ui: tape restore: update datastore map emptyText depending on default
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 14:31:05 +02:00
ce32cd487a ui: webauthn: drop bogus destroy stopStore call
will only result in an exception (in debug mode at least)...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 14:29:11 +02:00
f36c659365 ui: tape/BackupOverview: do not reload on restore
a restore does not change the tape content, so a reload has no benefit here.
since we're touching those lines, change to 'autoShow' property

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-21 13:25:45 +02:00
47e5cbdb03 ui: tape/BackupOverview: also allow to filter by group for restore
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-21 13:25:45 +02:00
4923a76f22 ui: tape/window/TapeRestore: enable selecting multiple snapshots
by including the new snapshotselector. If a whole media-set is to be
restored, select all snapshots

to achieve this, we drop the 'restoreid' and 'datastores' properties
for the restore window, and replace them by a 'prefilter' object
(with 'store' and 'snapshot' properties)

to be able to show the snapshots, we now have to always load the
content of that media-set, so drop the short-circuit if we have
the datastores already.

change the layout of the restore window into a two-step window
so that the first tab is the selection what to restore, and on the
second tab the user chooses where to restore (drive, datastore, etc.)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-21 13:25:45 +02:00
e01ca6a2dd ui: tape/TapeRestore: improve SnapshotGrid
* handle not rendered call of getErrors
* return 'all' as value if all snapshots were selected
  (for better distinction)
* remove the default height
* add checkChange on stores filterChange
  (now change also fires on the gridfilter plugin change)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-21 13:25:45 +02:00
5e989333cd ui: tape/TapeRestore: fix small DataStoreMappingGrid bugs
enable scrolling by default, and handle the case that getErrors gets
called when the component is not yet rendered

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-21 13:25:45 +02:00
af39c399bc ui: dashboard statistics: visualize datastores where querying the usage failed
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 13:22:07 +02:00
64591e731e api: status: graceful-degrade when a datastore lookup fails
This can happen if the underlying storage failed, in which case we do
not want to fail the whole API call, as it should report the status
of all datastores. So rather add the error inline to the related
store entry and continue.

Allows to nicely visualize those stores in the gui.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 13:20:55 +02:00
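
For illustration, a sketch of the inline-error pattern; the struct is a
simplified stand-in for the datastore-usage return type:

    /// Per-datastore status entry; on failure only `error` is filled.
    struct StoreStatus {
        store: String,
        total: Option<u64>,
        used: Option<u64>,
        error: Option<String>,
    }

    fn collect_status(
        stores: &[String],
        lookup: impl Fn(&str) -> Result<(u64, u64), String>,
    ) -> Vec<StoreStatus> {
        stores
            .iter()
            .map(|name| match lookup(name.as_str()) {
                Ok((total, used)) => StoreStatus {
                    store: name.clone(),
                    total: Some(total),
                    used: Some(used),
                    error: None,
                },
                // a failed lookup no longer fails the whole call; the error is
                // attached to this entry so the UI can visualize it
                Err(err) => StoreStatus {
                    store: name.clone(),
                    total: None,
                    used: None,
                    error: Some(err),
                },
            })
            .collect()
    }
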
5658504b90 dashboard statistics: prepare a more graceful error handling in datastore-usage
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 13:20:55 +02:00
64e0786aa9 api: datastore status: refactor reused rrd get-data code into closure
Nicer and shorter than just using a variable for the common parameters

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 13:20:55 +02:00
90761f0f62 api: datastore status: code cleanup, reduce indentation level
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-21 13:20:55 +02:00
74f74d1e64 ui: tape/window/TapeRestore: add SnapshotGrid Component
this will be used for letting the user select multiple, individual
snapshots on restore (instead of having a single or the whole media-set)

if a 'prefilter' object is given, we filter the grid by those
values using the gridfilter plugins (like in pve's bulk action windows)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-18 07:51:23 +02:00
4db4b9706c ui: tape/BackupOverview: move restore buttons inline
instead of having them in the toolbar. This makes the UI more consistent
with the datastore content view.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-18 07:51:23 +02:00
00a5072ad3 ui: tape/BackupOverview: fix wrong media-set text for singlerestore
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-18 07:51:23 +02:00
3d3d698bb3 buildsys: split long debcargo invocation into multiple lines
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-18 07:49:01 +02:00
1b9521bb87 bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-17 11:46:27 +02:00
1d781c5b20 update proxmox-http dependency
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-17 11:29:24 +02:00
8e8836d1ea d/control: update after http refactoring
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-17 11:02:57 +02:00
a904e3755d ui: datastore/Content: change group remove to SafeDestroy Window
so that a user does not accidentally remove a whole group instead
of a snapshot

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-17 10:42:03 +02:00
7ba99fef86 ui: datastore/Content: fix wrong tooltip for forgetting
sometimes it's a group, not a snapshot

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-17 10:41:44 +02:00
7d2be91bc9 move SimpleHttp to proxmox_http
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-17 10:32:33 +02:00
578895336a SimpleHttp: factor out product-specific bits
in preparation of moving the abstraction to proxmox_http

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-17 10:32:22 +02:00
8c090937f5 move tools::http to proxmox_http
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-17 10:31:54 +02:00
4229633d98 move ProxyConfig to proxmox_http
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-17 10:31:27 +02:00
3ed7e87538 HttpsConnector: make keepalive configurable
it's the only PBS-specific part in there, so let's make it
product-agnostic before moving it off to proxmox-http.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-17 10:31:15 +02:00
5b43cc4487 move MaybeTlsStream wrapper to proxmox_http
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-17 10:30:05 +02:00
3241392117 refactor: move socket helper to proxmox crate
and constant to tools module.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-17 10:29:42 +02:00
c474a66b41 move websocket to new 'proxmox_http' crate
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-17 10:26:41 +02:00
b32cf6a1e0 ui: datastore/Content: add forget button for groups
since we can remove whole groups via api now

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-17 08:45:10 +02:00
f32791b4b2 api2/admin/datastore: add delete for groups
so that a user can delete a whole group at once, until now, the fastest
way for this was to prune to one snapshot, and delete that

code is basically a copy/paste from the snapshot delete, sans
the 'backup-time' parameter

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-17 08:45:10 +02:00
8f33fe8e59 d/control: update proxmox tools dependency
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-14 13:19:20 +02:00
d19010481d tape/test: repair tests after changing 'start_write_session'
I added a parameter and forgot to adapt the tests

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-14 10:01:54 +02:00
6b11524a8b ui: tape: add 'Force new Media Set' checkbox to manual backup
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-14 08:58:46 +02:00
e953029e8f api2/tape/backup: add 'force-media-set' parameter to manual backup
so that a user can force a new media set, e.g. if he uses the
allocation policy 'continue', but wants to manually start a new
media-set.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-14 08:58:33 +02:00
10f788b7eb ui: tape: ChangerStatus fixup for empty barcode
empty barcode means that label-text is '', not undefined;
we forgot this line

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-14 08:48:10 +02:00
9348544e46 ui: tape: TapeRestoreWindow: fix button text
s/Create/Restore/

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-12 21:20:20 +02:00
126ccbcfa6 acme: improve errors when account loading fails
if the account does not exist, error with its name
if file loading fails, the error includes the full path
if the content fails to parse, show file & parse error
and in each case mention that it's about loading the acme account file

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-12 12:22:21 +02:00
440472cb32 correctly set apt proxy configuration 2021-05-12 12:19:24 +02:00
4ce7da516d reload cert inside command socket handler 2021-05-12 12:03:27 +02:00
a7f8efcf35 ui: add task descriptions for ACME related tasks
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-11 18:08:10 +02:00
9fe4c79005 api: acme accounts: use name as worker ID
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-11 18:07:03 +02:00
f09f4d5fd5 config: acme: rust format
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-11 17:35:54 +02:00
38b4f9b534 config: acme: fall-back to the "default" account
syncs behavior with both, the displayed state in the PBS
web-interface, and the behavior of PVE/PMG.

Without this a standard setup would result in a Error like:
> TASK ERROR: no acme client configured

which was pretty confusing, as the actual error was something else
(no account configured), and the web-interface showed "default" as
selected account, so a user had no idea what actually was wrong and
how to fix it.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-11 17:33:07 +02:00
fca1cef29f hot-reload proxy certificate when updating via the API
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-11 16:28:08 +02:00
45b8a0327f refactor send_command
- refactor the combinators,
- make it take a `&T: Serialize` instead of a Value, and
  allow sending the raw string via `send_raw_command`.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-11 16:28:08 +02:00
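A rough sketch of what such a generic signature can look like (the socket handling is elided and the exact types are assumptions for illustration, not the actual code):

    use anyhow::Error;
    use serde::Serialize;

    // Accept any serializable command instead of a pre-built JSON value ...
    pub async fn send_command<T: Serialize>(socket: &str, cmd: &T) -> Result<String, Error> {
        let raw = serde_json::to_string(cmd)?;
        send_raw_command(socket, &raw).await
    }

    // ... and keep a raw-string variant for callers that already have JSON.
    pub async fn send_raw_command(socket: &str, raw: &str) -> Result<String, Error> {
        // here: write `raw` to the unix command socket at `socket` and read the reply
        let _ = (socket, raw);
        todo!("talk to the command socket")
    }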
a723c08715 proxy: implement 'reload-certificate' command
to be used via the command socket

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-11 16:28:08 +02:00
c381a162fb proxy: factor out tls acceptor creation
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-11 16:28:08 +02:00
b4931192c3 proxy: Arc usage cleanup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-11 16:28:08 +02:00
cc269b9ff9 proxy: "continue on error" for the accept call, too
as this gets rid of 2 levels of indentation

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-11 16:28:08 +02:00
a5e3be4992 proxy: factor out accept_connection
no functional changes, moved code and named the channel's
type for more readability

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-11 16:28:08 +02:00
137309cc4e bump version to 1.1.7-1 2021-05-11 13:23:29 +02:00
85f4e834d8 client: use stderr for all fingerprint confirm msgs
an interactive client might still want machine-readable output on
stdout.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-11 13:12:54 +02:00
065013ccec client: refactor verification callback
return a result with optional fingerprint instead of tuple, allowing
easy extraction of a meaningful error message.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-11 13:12:54 +02:00
56d98ba966 client: improve fingerprint variable names
and pass as reference instead of cloning.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-11 13:12:54 +02:00
dda1b4fa44 fix #3391: improve mismatched fingerprint handling
if the expected fingerprint and the one returned by the server don't
match, print a warning and allow confirmation and proceeding if running
interactive.

previous:

$ proxmox-backup-client ...
Error: error trying to connect: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:../ssl/statem/statem_clnt.c:1915:

new:

$ proxmox-backup-client ...
WARNING: certificate fingerprint does not match expected fingerprint!
expected:    ac:cb:6a:bc:d6:b7:b4:77:3e:17:05:d6:b6:29:dd:1f:05:9c:2b:3a:df:84:3b:4d:f9:06:2c:be:da:06:52:12
fingerprint: ab:cb:6a:bc:d6:b7:b4:77:3e:17:05:d6:b6:29:dd:1f:05:9c:2b:3a:df:84:3b:4d:f9:06:2c:be:da:06:52:12
Are you sure you want to continue connecting? (y/n): n
Error: error trying to connect: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:../ssl/statem/statem_clnt.c:1915:

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-11 13:12:54 +02:00
68b102269f ui: tape: add single snapshot restore
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-11 12:57:33 +02:00
0ecdaa0dc0 bin/proxmox-tape: add optional snapshots to restore command
and add the appropriate completion helper

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-11 12:57:14 +02:00
13f435caab tape/inventory: add completion helper for tape snapshots
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-11 12:56:55 +02:00
ff99780303 api2/tape/restore: add optional snapshots to 'restore'
this makes it possible to only restore some snapshots from a tape media-set
instead of the whole set. If the user selects only a small part, this will
probably be faster (and definitely uses less space on the target
datastores).

the user has to provide a list of snapshots to restore in the form of
'store:type/group/id'
e.g. 'mystore:ct/100/2021-01-01T00:00:00Z'

we achieve this by first restoring the index to a temp dir, retrieving
the list of chunks and, using the catalog, generating a list of
media/files that we need to (partially) restore.

finally, we copy the snapshots to the correct dir in the datastore,
and clean up the temp dir

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-11 12:53:38 +02:00
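The 'store:type/group/id' format can be pictured with a small, hypothetical parser (an illustrative sketch only, not the schema code from the patch):

    // Hypothetical parser for a 'store:type/group/id' snapshot spec, e.g.
    // "mystore:ct/100/2021-01-01T00:00:00Z".
    fn parse_snapshot_spec(spec: &str) -> Option<(&str, &str)> {
        let (store, snapshot) = spec.split_once(':')?;
        // `snapshot` must be exactly "type/group/id"
        if snapshot.split('/').count() == 3 {
            Some((store, snapshot))
        } else {
            None
        }
    }

    fn main() {
        assert_eq!(
            parse_snapshot_spec("mystore:ct/100/2021-01-01T00:00:00Z"),
            Some(("mystore", "ct/100/2021-01-01T00:00:00Z"))
        );
    }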
fa9507020a api2/tape/restore: refactor restore code into its own function
and create the 'email' and 'restore_owner' variables at the beginning,
so that we can reuse them and do not have to pass the sources of those
through too many functions

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-11 12:53:25 +02:00
1bff50afea tape locate_file: fix off by one error 2021-05-11 12:37:04 +02:00
37ff72720b docs/api-viewer: improve rendering of array format
by showing
'[format, ...]'
where 'format' is the simple format from the type of the items

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-11 09:12:55 +02:00
2d5d264f99 tape/pool_writer: do not unwrap on channel send
if the reader thread is already gone, we panic here, resulting in
a nondescript error message, so simply ignore/warn in that case and
return gracefully

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-11 09:07:45 +02:00
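The general pattern, warning instead of panicking when the receiver is gone, looks roughly like this generic std::sync::mpsc sketch (not the pool_writer code itself):

    use std::sync::mpsc::SyncSender;

    // Instead of `tx.send(data).unwrap()`, handle a closed channel gracefully:
    // the reader thread may already have exited (e.g. due to an error).
    fn send_data(tx: &SyncSender<Vec<u8>>, data: Vec<u8>) -> bool {
        if let Err(err) = tx.send(data) {
            eprintln!("warning: reader thread is gone, dropping data: {}", err);
            return false;
        }
        true
    }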
c9c07445b7 ui: window/SyncJobEdit: disable autoSelect for remote datastore
when changing the remote, there is a high chance that there are different
datastores, and if a user does not pay attention, the first store
of the new remote is now selected instead of the one with the same name

disable autoSelect and let the user manually select a remote datastore

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-10 16:56:42 +02:00
a4388ffc36 ui: tape: rename 'Datastore' to 'Target Datastore'
we have 2 modes in that window:
* backup has multiple datastores
* backup has single datastore

In the first case we show a 'mapping' grid so that
the user can only restore a part. Here a user sees all source
Datastores and can select a target for each one.

In the second case we only have a single 'Datastore' selector, but
we do not show the source. Because of this, the naming is slightly ambiguous
(is it the 'Source' or the 'Target'?), so rename it to 'Target Datastore'.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-10 16:56:37 +02:00
ea1458923e manager: acme plugin: auto-complete available DNS challenge types
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 15:55:49 +02:00
e857f1fae8 completion: ACME plugin type: comment out http type for now, not useful
It may make sense in the future, e.g., if the built-in standalone
type is not enough, e.g., when HTTPS, HTTP/2 or even QUIC (HTTP/3)
is wanted in some setups, but for now there's no scenario where one
would profit from adding a new HTTP plugin, especially as it requires
the `data` property to be set, which makes no sense.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 15:50:08 +02:00
3ec42e81b1 manager: acme plugin: remove ID completion helper from add command
we cannot add a plugin with an existing ID so this completion helper
is rather counterproductive...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 15:47:37 +02:00
be1163acfe config: acme: drop now unused foreach_dns_plugin
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 14:41:44 +02:00
d308dc8af7 acme: use proxmox-acme-plugins and load schema from there
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 14:41:12 +02:00
60643023ad api: move AcmeChallengeSchema to acme types module
It will be reused in a later patch in another module which should not
depend on the actual API implementation (ugly and cyclic)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 14:39:07 +02:00
875d53ef6c api: acme: rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-10 11:56:38 +02:00
b41f9e9fec acme: fix bad nonce retry counter
Actually return the error on the 3rd try.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-10 11:52:04 +02:00
a1b71c3c7d fix #3296: use proxy client to retrieve changelog
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-05-10 08:48:52 +02:00
013fa2d886 fix #3296: use proxy for subscriptions
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-05-10 08:48:05 +02:00
72e311c6b2 fix 3296: add http_proxy to node config, and provide a cli
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-05-10 08:37:46 +02:00
2732c47466 cleanup src/api2/node/config.rs
- add return type
- fix permissions
- fix descriptions
2021-05-10 08:25:43 +02:00
0466089316 move api related type/regex definition from backup_info.rs to src/api2/types/mod.rs 2021-05-07 12:45:44 +02:00
5e42d38598 api2/types: add TAPE_RESTORE_SNAPSHOT_SCHEMA
which is 'store:type/id/time'

needed to refactor SNAPSHOT_PATH_REGEX_STR from backup_info

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-07 12:27:50 +02:00
82a4bb5e80 api2/tape/restore: return backup manifest in try_restore_snapshot_archive
we'll use that for partial snapshot restore

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-07 12:25:30 +02:00
94bc7957c1 progress: shorter format
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-07 12:14:37 +02:00
c9e6b07145 progress: add current group to output
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-07 12:14:23 +02:00
3c06eba17a docs: online help info: suppress warnings during scan
We get lots of warnings due to sphinx complaining about missing
includes for generated synopsis. We do not reference any of those
for now, so we can ignore that and suppress all standard and
warning output.

Note: Errors are still reported.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-07 11:51:30 +02:00
8081e4aa7b fix #3331: improve progress for last snapshot in group
especially for the last group, without this the progress would report:

"percentage done: 100.00% (1 of 2 groups, 1 of 1 group snapshots)"

instead of the more logical

"percentage done: 100.00% (2 of 2 groups)"

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-05-07 11:20:17 +02:00
d8769d659e use build.rs to pass REPOID to rustc-env 2021-05-07 10:11:39 +02:00
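The mechanism behind this is cargo's `cargo:rustc-env=` build-script directive; a minimal build.rs sketch could look like the following (how the repository id is actually determined in the project may differ):

    // build.rs (sketch): export the current git commit id as REPOID so that
    // binaries can embed it at compile time via env!("REPOID").
    use std::process::Command;

    fn main() {
        let repoid = Command::new("git")
            .args(&["rev-parse", "HEAD"])
            .output()
            .ok()
            .filter(|output| output.status.success())
            .map(|output| String::from_utf8_lossy(&output.stdout).trim().to_string())
            .unwrap_or_else(|| "unknown".to_string());
        println!("cargo:rustc-env=REPOID={}", repoid);
    }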
572cd0381b file-restore: add debug mode with serial access
Set PBS_QEMU_DEBUG=1 on a command that starts a VM and then connect to
the debug root shell via:
  minicom -D \unix#/run/proxmox-backup/file-restore-serial-10.sock
or similar.

Note that this requires 'proxmox-backup-restore-image-debug' to work;
the postinst script is updated to also generate the corresponding image.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-07 10:00:12 +02:00
5e91b40087 d/control: update for cargo manifest update
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-07 09:26:41 +02:00
936eceda61 file-restore: support more drives
A PCI bus can only support up to 32 devices, so excluding built-in
devices that left us with a maximum of about 25 drives. By adding a new
PCI bridge every 32 devices (starting at bridge ID 2 to avoid conflicts
with automatic bridges), we can theoretically support up to 8096 drives.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-07 09:03:17 +02:00
61c4087041 file-restore: add more RAM for VMs with many drives or debug
The guest kernel requires more memory depending on how many disks are
attached. 256 seems to be enough for basically any reasonable and
unreasonable amount of disks though.

For debug instances, make it 1G, as these are never started automatically
anyway, and need at least 512MB since the initramfs (especially when
including a debug build of the daemon) is substantially bigger.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-07 09:03:17 +02:00
7d39e47182 file-restore: try to kill VM when stale
Helps to clean up a VM that has crashed, is not responding to vsock API
calls, but still has a running QEMU instance.

We always check the process commandline to ensure we don't kill a random
process that took over the PID.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-07 09:03:17 +02:00
c4e1af3069 make sure URI paths start with a slash
Otherwise we get an empty error message.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-07 08:46:47 +02:00
3e234af16e tape: improve inline docs for READ POSITION LONG 2021-05-06 11:45:40 +02:00
bbbf662d20 tape: use LOCATE(16) SCSI command
Turns out this works on LTO4 and newer.
2021-05-06 10:51:59 +02:00
25d78b1068 client: use build_authority in build_uri
so we don't need to also duplicate the IPv6 bracket logic

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-06 10:27:40 +02:00
78bf292343 call create_run_dir() at daemon startup 2021-05-06 10:23:54 +02:00
e5ef69ecf7 cleanup: split SimpleHttp client into extra file 2021-05-06 10:22:24 +02:00
b7b9a57425 api2/tape/restore: remove unnecessary params from (try_)restore_snapshot_archive
we do not need them

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-06 08:02:14 +02:00
c4a04b7c62 api2/tape/restore: factor out check_datastore_privs
so that we can reuse it

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-06 08:01:31 +02:00
2e41dbe828 tape/media_catalog: add helpers to look for snapshot/chunk files
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-06 07:58:03 +02:00
56d36ca439 tape/drive: add 'move_to_file' to TapeDriver trait
so that we can directly move to a specified file on the tape

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-06 07:55:08 +02:00
e0ba5553be http proxy: add necessary brackets for IPv6 proxy 2021-05-05 11:57:04 +02:00
8d6fb677c1 proxmox_restore_daemon: mount ntfs with 'utf8' option
otherwise, the kernel driver exposes file names as ISO 8859-1,
but we want to have them as UTF-8.

This mapping should always work, since UTF-16 can be cleanly converted
to UTF-8.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-05 11:07:31 +02:00
a2daecc25d client/http_client: add necessary brackets
if we are given a 'naked' IPv6 address without square brackets around it,
we need to add them ourselves, since the address is otherwise ambiguous
once we add the port.

e.g. giving 'fe80::1' as address we arrive at the url (with the default port)
'https://fe80::1:8007/'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-05-05 10:30:39 +02:00
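The bracket handling can be pictured with a simplified sketch (not the actual http client code):

    // Wrap naked IPv6 addresses in brackets so that appending ":<port>"
    // stays unambiguous; hostnames and IPv4 addresses are left untouched.
    fn format_host(host: &str) -> String {
        if host.contains(':') && !host.starts_with('[') {
            format!("[{}]", host)
        } else {
            host.to_string()
        }
    }

    fn main() {
        assert_eq!(format_host("fe80::1"), "[fe80::1]");
        assert_eq!(format_host("127.0.0.1"), "127.0.0.1");
        println!("https://{}:8007/", format_host("fe80::1"));
    }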
ee0c5c8e01 use api_string_type macro
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-05-05 08:24:37 +02:00
ae5b1e188f docs: tape: clarify LTO-4/5 support
some features we need (e.g. READ POSITION long form) are only officially
available with LTO-5, but work on many LTO-4 drives, so move LTO-4 to
'best-effort' support.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-04 13:21:44 +02:00
49f9aca627 tape/restore: optimize chunk restore behaviour
by checking the 'checked_chunks' before trying to write to disk
and by doing the existence check in the parallel handler. This way,
we do not have to check the existence of a chunk multiple times
(if multiple source datastores get restored to the same target
datastore) and also we do not have to wait on the stat before reading
the next chunk.

We have to change the &WorkerTask to an Arc though, otherwise we
cannot log to the worker from the parallel handler

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-04 13:06:31 +02:00
4cba875379 bump version to 1.1.6-2 2021-05-04 12:25:16 +02:00
7ab4382476 update debian/control 2021-05-04 12:24:16 +02:00
eaef6c8d00 Revert "temporarily disable broken test"
This reverts commit 888d89e2dd.

The code this depends on should now be available.
2021-05-04 12:11:35 +02:00
95f3692545 fix permissions set in create_run_dir
This directory needs to be owned by the backup user instead
of root.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 12:11:35 +02:00
686173dc2a bump version to 1.1.6-1 2021-05-04 12:09:56 +02:00
39c5db7f0f move basic ACME types into src/api2/types/acme.rs
And rename AccountName into AcmeAccountName.
2021-05-04 11:32:18 +02:00
603aa09d54 tape restore: do not verify restored files
Because this is too slow and causes the tape motor to stop. Instead,
remove the verify_state from the manifest.
2021-05-04 11:05:32 +02:00
88aa3076f0 tape restore: add restore speed to logs 2021-05-04 11:05:32 +02:00
5400fe171c tape restore: write datastore in separate thread 2021-05-04 11:05:32 +02:00
87bf9f569f tape restore: split restore_chunk_archive
Split out a separate function scan_chunk_archive() for catalog restores.

Note: Required, because we need to optimize restore_chunk_archive() to
write the datastore in separate threads (else the tape drive will stop during restore)
2021-05-04 11:05:32 +02:00
8fb24a2c0a daily-update: check acme certificates
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 09:43:50 +02:00
4b5d9b6e64 ui: add certificate & acme view
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 09:42:50 +02:00
72bd8293e3 add acme commands to proxmox-backup-manager
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 09:39:16 +02:00
09989d9963 add node/{node}/config api path
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 09:32:42 +02:00
4088d5bc62 add node/{node}/certificates api call
API like in PVE:

GET    .../info             => current cert information
POST   .../custom           => upload custom certificate
DELETE .../custom           => delete custom certificate
POST   .../acme/certificate => order acme certificate
PUT    .../acme/certificate => renew expiring acme cert

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 09:31:30 +02:00
d4b84c1dec add config/acme api path
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 09:30:49 +02:00
426847e1ce node config cleanups 2021-05-04 09:29:31 +02:00
79b902d512 add node config
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 09:29:31 +02:00
73c607497e cleanup acme client 2021-05-04 09:28:53 +02:00
f2f526b61d add acme client
This is the high-level part using proxmox-acme-rs to create
requests and our hyper code to issue them to the acme
server.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 07:56:52 +02:00
cb67ecaddb add acme config
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-04 07:43:43 +02:00
5bf9b0b0bb docs: user-management: add note about untrusted certificates for webauthn
Currently it works fine with untrusted certs, but that may change
anytime.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-03 12:01:23 +02:00
7a61f89e5a tape backup job: fix typo in error message
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-03 12:01:23 +02:00
671c6a96e7 bin: use extract_output_format where necessary
else we sometimes forget to remove it from the 'params' variable
and use that further, running into 'invalid parameter' errors

found by giving the 'output-format' parameter to proxmox-tape status

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-05-03 08:58:35 +02:00
f0d23e5370 add ctime and size function to IndexFile trait
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2021-04-30 11:40:45 +02:00
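Sketched as trait methods, such accessors might look like this (the signatures are illustrative assumptions, not necessarily the actual trait definition):

    /// Hypothetical sketch of an index-file abstraction gaining metadata
    /// accessors in addition to its existing chunk/digest methods.
    trait IndexFile {
        /// number of chunks referenced by this index
        fn index_count(&self) -> usize;
        /// creation time of the index, as a unix epoch in seconds
        fn index_ctime(&self) -> i64;
        /// total size in bytes of the data the index covers
        fn index_bytes(&self) -> u64;
    }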
d1bee4344d ui: tape: handle tapes in changers without barcode
by checking for definedness of the label (tapes without barcode
have the empty string as label-text) and falling back to the
source slot for the load action

Note: Changed the load-slot API from PUT to POST

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-04-30 10:23:53 +02:00
d724116c0c add dns alias schema
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-30 08:10:57 +02:00
888d89e2dd temporarily disable broken test
this test was added before the NodeConfig schema it uses was committed,
so it cannot work...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-29 16:18:20 +02:00
a6471bc346 bump version to 1.1.5-3
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-29 15:26:24 +02:00
6b1da1c166 file restore: log which filesystems we support
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-29 15:24:45 +02:00
18210d8958 file-restore: use 'norecovery' for xfs filesystem
This allows mounting XFS partitions with 'dirty' states, like from a
running VM. Otherwise XFS tries to write recovery information, which
fails on a read-only mount.

Tested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-29 15:09:09 +02:00
bc5c1a9aa6 add 'config file format' to tools::config
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-29 12:11:54 +02:00
3df77ef5da config::acl: make /system/certificates a valid path
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-29 12:08:00 +02:00
e8d9d9adfa bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-29 12:05:22 +02:00
01d152720f Cargo.toml: depend on proxmox-acme-rs
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-29 12:04:28 +02:00
5e58381ea9 catalog shell: replace LoopState with ControlFlow
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-29 11:17:21 +02:00
0b6d9442bd tools: add ControlFlow type
modeled after std::ops::ControlFlow

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-29 11:15:20 +02:00
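std::ops::ControlFlow was only stabilized later (Rust 1.55), which is presumably why a local type was added; a simplified sketch (not necessarily the exact definition used):

    // A minimal ControlFlow look-alike: `Break` stops an iteration early and
    // may carry a value, `Continue` keeps going.
    enum ControlFlow<B, C = ()> {
        Continue(C),
        Break(B),
    }

    fn first_negative(items: &[i64]) -> ControlFlow<i64> {
        for &item in items {
            if item < 0 {
                return ControlFlow::Break(item);
            }
        }
        ControlFlow::Continue(())
    }

    fn main() {
        match first_negative(&[3, 7, -2, 5]) {
            ControlFlow::Break(v) => println!("stopped early at {}", v),
            ControlFlow::Continue(()) => println!("walked everything"),
        }
    }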
134ed9e14f CertInfo: add is_expired_after_epoch
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-29 11:07:04 +02:00
0796b642de CertInfo: add not_{after, before}_unix
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-29 10:59:02 +02:00
f912ba6a3e config: factor out certificate writing
for reuse in the certificate api

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-28 12:58:41 +02:00
a576e6685b tools::fs::scan_subdir: use nix::Error instead of anyhow
allows using SysError trait on it

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-28 12:32:57 +02:00
b1c793cfa5 systemd: add reload_unit
via try-reload-or-restart

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-28 12:15:26 +02:00
c0147e49c4 tools/http: make user agent configurable 2021-04-28 12:15:26 +02:00
d52b120905 tools/http: set USER_AGENT inside request 2021-04-28 12:15:26 +02:00
84c8a580b5 bump version to 1.1.5-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-28 11:35:12 +02:00
467bd01cdf api: add schema for http proxy configuration - HTTP_PROXY_SCHEMA 2021-04-28 11:23:06 +02:00
7a7fcb4715 http: add helper to parse proxy configuration 2021-04-28 11:23:06 +02:00
cf8e44bc30 HttpsConnector: add proxy authorization support 2021-04-28 11:23:06 +02:00
279e7eb497 buildsys: add pbs-client repo in upload target
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-28 09:41:45 +02:00
606828cc65 file-restore: strip .img.fidx suffix from drive serials
Drive serials have a character limit of 20, longer names like
"drive-virtio0.img.fidx" or "drive-efidisk0.img.fidx" would get cut off.

Fix this by removing the suffix, it is not necessary to uniquely
identify an image.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-27 16:41:29 +02:00
aac424674c bump version to 1.1.5-1 2021-04-27 12:21:08 +02:00
8fd1e10830 tools/sgutils2: add size workaround for mode_sense
Some drives will always return the number of bytes given in the
allocation_length field, but correctly report the data len in the mode
sense header. Simply ignore the excess data.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-27 11:37:03 +02:00
12509a6d9e tape: improve inline docs 2021-04-27 11:37:03 +02:00
5e169f387c tape: add read_medium_configuration_page() to detect WORM media
And use it inside format_media().
2021-04-27 11:37:03 +02:00
8369ade880 file-restore: fix package name for kernel/initramfs image
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-27 11:17:38 +02:00
73cef112eb tape: remove MediumType struct, which is only valid on IBM drives
HP drives do not return this information.

Note: This breaks format on WORM media, because we have no way
to detect WORM media (how?).
2021-04-27 09:58:27 +02:00
4a0132382a bump version to 1.1.4-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-27 08:41:05 +02:00
6ee69fccd3 tools/sgutils2: improve error messages
include the expected and unexpected sizes in the error message,
so that it's easier to debug in case of an error

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-27 08:24:50 +02:00
a862835be2 file-restore: use less memory for VM and reboot on panic
With the vsock-pkt-buffer fix in proxmox-backup-restore-image, we can
use way less memory for the VM without risking any crashes. 128 MiB
seems to be the lowest it will go and still be fully reliable.

While at it, add the "panic=1" argument to the kernel command line, so
in case the kernel *does* run out of memory, it will at least restart
automatically.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-26 15:46:37 +02:00
ddbd63ed5f file-restore: exit with code 1 in case streaming fails
This way the task gets marked as "failed" in PVE.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-26 15:46:37 +02:00
6a59fa0e18 file-restore: add size to image files and components
Read image sizes (.pxar.fidx/.img.didx) from manifest and partition
sizes from /sys/...

Requires a change to ArchiveEntry, as DirEntryAttribute::Directory
does not have a size associated with it (and that's probably good).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-26 15:46:37 +02:00
1ed9069ad3 http proxy: improve response parser
Avoid strange error message in case of connect error (only parse status + headers).
We are not interested in the response body, so simply ignore it.
2021-04-26 11:21:11 +02:00
a588b67906 api2/config/datastore: use update_job_last_run_time for schedules
this way, the api call does not error out when the file is currently
locked (which means the job is running and we do not need
to update the time)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-26 10:51:06 +02:00
37a634f550 server/jobstate: improve name of 'try_update_state_file'
and improve comment

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-26 10:50:36 +02:00
951fe0cb7d server/jobstate: add 'updated' to Finish variant
when a user updates a job schedule, we want to save that point in time
to calculate future runs, otherwise when a user updates a schedule to
a time that would have been between the last run and 'now' the
schedule is triggered instantly

for example:
schedule 08:00
last run today 08:00
now it is 12:00

before this patch:
update schedule to 11:00
 -> triggered instantly since we calculate from 08:00

after this patch:
update schedule to 11:00
 -> triggered tomorrow 11:00 since we calculate from today 12:00

the change in the enum type is ok, since by default serde does not
error on unknown fields and the new field is optional

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-26 09:48:34 +02:00
4ca3f0c6ae api2/tape/backup: list backed up snapshots on failed backup notification
if a backup task failed (e.g. it was aborted), show the snapshots
which were successfully backed up in the notification

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-23 16:25:17 +02:00
69e5ba29c4 ui: tape: reload drive status on user actions
when the user starts an action that we know locks the drive,
reload the tape store so that the state is refreshed

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-23 16:25:17 +02:00
e045d154e9 file-restore: avoid unnecessary clone
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-04-23 13:22:30 +02:00
6526709d48 file-restore: add context to b64-decode error
to make the following cryptic error:

 proxmox-file-restore failed: Error: Invalid byte 46, offset 5.

more understandable:

 proxmox-file-restore failed: Error: Failed base64-decoding path '/root.pxar.didx' - Invalid byte 46, offset 5.

when a user passes in a non-base64 path but sets `--base64`.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-04-23 13:19:40 +02:00
603f80d813 bump version to 1.1.3-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-23 10:52:17 +02:00
398636b61c api2/node/status: extend node status
to be more on par with pve

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-23 10:30:30 +02:00
eb70464839 api2/nodes/status: use NodeStatus struct
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-23 10:30:30 +02:00
75054859ff api2/types: add necessary types for node status
we want to use concrete types instead of value

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-23 10:30:30 +02:00
8e898895cc tape: do not query density_code in SgTape::new()
Because this can fail with NoSense/MediumChanged and other informational
Sense codes.
2021-04-23 09:56:44 +02:00
4be6beab6f tape: format_media - implement special case for WORM media 2021-04-23 08:33:13 +02:00
a3b4b5b50e tape: define and use MediumType enum 2021-04-23 07:54:42 +02:00
33b8d7e5e8 tape: use loaded media_type in format_media (instead of drive_density)
Required to format LTO4 media loaded in an LTO5 drive.

Also contains some SCSI code cleanups.
2021-04-23 07:27:30 +02:00
f2f43e1904 server/rest: fix new type ambiguity
basically the same as commit eeff085d9d. Will be required once we get
to use a newer rustc; at least the client build for Arch Linux was
broken due to this.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-22 21:24:44 +02:00
c002d48b0c bump version to 1.1.3-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-22 20:15:18 +02:00
15998ed12a file-restore: support encrypted VM backups
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-22 17:55:30 +02:00
9d8ab62769 client-tools: add crypto_parameters_keep_fd
same functionality as crypto_parameters, except it keeps the file
descriptor passed as "keyfd" open (and seeks to the beginning after
reading), if one is given.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-22 17:55:30 +02:00
3526a76ef3 file-restore: don't force PBS_FINGERPRINT env var
It is valid to not set it, in case the server has a valid certificate.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-22 17:55:30 +02:00
b9e0fcbdcd tape: implement report_density 2021-04-22 13:54:31 +02:00
a7188b3a75 tape: fix FORMAT for LTO-4 drives
FORMAT requires LTO-5 or newer, so we do a rewind/erase if FORMAT fails.
2021-04-22 11:44:49 +02:00
b6c06dce9d http proxy: implement read_connect_response()
Limit memory usage in case we get strange data from proxy.
2021-04-22 10:06:14 +02:00
4adf47b606 file-restore: allow extracting a full pxar archive
If the path within the archive is empty, assume "/" to extract all
of it.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-21 17:20:54 +02:00
4d0dc29951 file-restore: Add 'v' (Virtual) ArchiveEntry type
For the actual partitions and blockdevices in a backup, which the
user sees as folders in the file-restore ui

Encoded as "None", to avoid cluttering DirEntryAttribute, where it
wouldn't make any sense to have.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-21 17:19:40 +02:00
1011fb552b file-restore: print warnings on stderr
as we print JSON on stdout to be parsed

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-21 17:18:12 +02:00
2fd2d29281 file-restore: don't list non-pxar/-img *idx archives
These can't be entered or restored anyway, and cause issues with catalog
files for example.

Also a clippy fix.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-21 17:18:06 +02:00
9104152a83 HttpsConnector: add proxy support 2021-04-21 15:29:17 +02:00
02a58862dd HttpsConnector: code cleanup 2021-04-21 15:29:17 +02:00
26153589ba new http client implementation SimpleHttp (avoid static HTTP_CLIENT)
This one will have proxy support.
2021-04-21 15:29:17 +02:00
17b3e4451f MaybeTlsStream: implement poll_write_vectored()
This is just a performance optimization.
2021-04-21 15:29:17 +02:00
a2072cc346 http: rename EitherStream to MaybeTlsStream
And rename the enum values. Added an additional variant called Proxied.

The enum is now more specialized, but we only use it for the http client anyway.
2021-04-21 15:29:17 +02:00
fea23d0323 fix #3393: tools/xattr: allow xattr 'security.NTACL'
in some configurations, samba stores NTFS-ACLs in this xattr[0], so
we should back it up (if we can)

although the 'security' namespace is special (e.g. in use by
selinux, etc.) this value is normally only used by samba and we
should be able to back it up.

to restore it, the user needs at least 'CAP_SYS_ADMIN' rights, otherwise
it cannot be set

0: https://www.samba.org/samba/docs/current/man-html/vfs_acl_xattr.8.html

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-21 14:49:46 +02:00
71e83e1b1f tape/changer/sg_pt_changer: read whole descriptor size for each entry
Some changers seem to append more data than we expect, but correctly
annotate that size in the subheader.

For each descriptor entry, read as much as the size given in the
subheader (or until the end of the reader), else our position in
the reader is wrong for the next entry, and we will parse
incorrect data.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-21 14:07:41 +02:00
28570d19a6 tape restore: avoid multiple stat calls for same chunk 2021-04-16 13:17:17 +02:00
1369bcdbba tape restore: verify if all chunks exist 2021-04-16 12:20:44 +02:00
5e4d81e957 tape restore: simplify log (list datastores on single line) 2021-04-16 11:35:05 +02:00
0f4721f305 tape restore: fix datastore locking 2021-04-16 09:09:05 +02:00
483 changed files with 24659 additions and 13644 deletions

Cargo.toml

@ -1,6 +1,6 @@
[package]
name = "proxmox-backup"
version = "1.1.2"
version = "2.0.10"
authors = [
"Dietmar Maurer <dietmar@proxmox.com>",
"Dominik Csapak <d.csapak@proxmox.com>",
@ -18,6 +18,26 @@ homepage = "https://www.proxmox.com"
exclude = [ "build", "debian", "tests/catar_data/test_symlink/symlink1"]
[workspace]
members = [
"pbs-buildcfg",
"pbs-client",
"pbs-config",
"pbs-datastore",
"pbs-fuse-loop",
"pbs-runtime",
"proxmox-rest-server",
"proxmox-systemd",
"pbs-tape",
"pbs-tools",
"proxmox-backup-banner",
"proxmox-backup-client",
"proxmox-file-restore",
"proxmox-restore-daemon",
"pxar-bin",
]
[lib]
name = "proxmox_backup"
path = "src/lib.rs"
@ -48,22 +68,13 @@ openssl = "0.10"
pam = "0.7"
pam-sys = "0.5"
percent-encoding = "2.1"
pin-utils = "0.1.0"
pin-project = "1.0"
pathpatterns = "0.1.2"
proxmox = { version = "0.11.1", features = [ "sortable-macro", "api-macro", "websocket" ] }
#proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox-fuse = "0.1.1"
pxar = { version = "0.10.1", features = [ "tokio-io" ] }
#pxar = { path = "../pxar", features = [ "tokio-io" ] }
regex = "1.2"
rustyline = "7"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
siphasher = "0.3"
syslog = "4.0"
tokio = { version = "1.0", features = [ "fs", "io-util", "io-std", "macros", "net", "parking_lot", "process", "rt", "rt-multi-thread", "signal", "time" ] }
tokio = { version = "1.6", features = [ "fs", "io-util", "io-std", "macros", "net", "parking_lot", "process", "rt", "rt-multi-thread", "signal", "time" ] }
tokio-openssl = "0.6.1"
tokio-stream = "0.1.0"
tokio-util = { version = "0.6", features = [ "codec", "io" ] }
@ -74,10 +85,39 @@ url = "2.1"
walkdir = "2"
webauthn-rs = "0.2.5"
xdg = "2.2"
zstd = { version = "0.4", features = [ "bindgen" ] }
nom = "5.1"
crossbeam-channel = "0.5"
# Used only by examples currently:
zstd = { version = "0.6", features = [ "bindgen" ] }
pathpatterns = "0.1.2"
pxar = { version = "0.10.1", features = [ "tokio-io" ] }
proxmox = { version = "0.13.3", features = [ "sortable-macro", "api-macro", "cli", "router", "tfa" ] }
proxmox-acme-rs = "0.2.1"
proxmox-apt = "0.7.0"
proxmox-http = { version = "0.4.0", features = [ "client", "http-helpers", "websocket" ] }
proxmox-openid = "0.7.0"
pbs-api-types = { path = "pbs-api-types" }
pbs-buildcfg = { path = "pbs-buildcfg" }
pbs-client = { path = "pbs-client" }
pbs-config = { path = "pbs-config" }
pbs-datastore = { path = "pbs-datastore" }
pbs-runtime = { path = "pbs-runtime" }
proxmox-rest-server = { path = "proxmox-rest-server" }
proxmox-systemd = { path = "proxmox-systemd" }
pbs-tools = { path = "pbs-tools" }
pbs-tape = { path = "pbs-tape" }
# Local path overrides
# NOTE: You must run `cargo update` after changing this for it to take effect!
[patch.crates-io]
#proxmox = { path = "../proxmox/proxmox" }
#proxmox-http = { path = "../proxmox/proxmox-http" }
#pxar = { path = "../pxar" }
[features]
default = []
#valgrind = ["valgrind_request"]

Makefile

@ -17,7 +17,8 @@ USR_BIN := \
# Binaries usable by admins
USR_SBIN := \
proxmox-backup-manager
proxmox-backup-manager \
proxmox-backup-debug \
# Binaries for services:
SERVICE_BIN := \
@ -30,6 +31,24 @@ SERVICE_BIN := \
RESTORE_BIN := \
proxmox-restore-daemon
SUBCRATES := \
pbs-api-types \
pbs-buildcfg \
pbs-client \
pbs-config \
pbs-datastore \
pbs-fuse-loop \
pbs-runtime \
proxmox-rest-server \
proxmox-systemd \
pbs-tape \
pbs-tools \
proxmox-backup-banner \
proxmox-backup-client \
proxmox-file-restore \
proxmox-restore-daemon \
pxar-bin
ifeq ($(BUILD_MODE), release)
CARGO_BUILD_ARGS += --release
COMPILEDIR := target/release
@ -57,13 +76,15 @@ RESTORE_DBG_DEB=proxmox-backup-file-restore-dbgsym_${DEB_VERSION}_${ARCH}.deb
DOC_DEB=${PACKAGE}-docs_${DEB_VERSION}_all.deb
DEBS=${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB} \
${RESTORE_DEB} ${RESTORE_DBG_DEB}
${RESTORE_DEB} ${RESTORE_DBG_DEB} ${DEBUG_DEB} ${DEBUG_DBG_DEB}
DSC = rust-${PACKAGE}_${DEB_VERSION}.dsc
DESTDIR=
all: cargo-build $(SUBDIRS)
tests ?= --workspace
all: $(SUBDIRS)
.PHONY: $(SUBDIRS)
$(SUBDIRS):
@ -75,19 +96,23 @@ test:
$(CARGO) test $(tests) $(CARGO_BUILD_ARGS)
doc:
$(CARGO) doc --no-deps $(CARGO_BUILD_ARGS)
$(CARGO) doc --workspace --no-deps $(CARGO_BUILD_ARGS)
# always re-create this dir
.PHONY: build
build:
@echo "Setting pkg-buildcfg version to: $(DEB_VERSION_UPSTREAM)"
sed -i -e 's/^version =.*$$/version = "$(DEB_VERSION_UPSTREAM)"/' \
pbs-buildcfg/Cargo.toml
rm -rf build
rm -f debian/control
debcargo package --config debian/debcargo.toml --changelog-ready --no-overlay-write-back --directory build proxmox-backup $(shell dpkg-parsechangelog -l debian/changelog -SVersion | sed -e 's/-.*//')
sed -e '1,/^$$/ ! d' build/debian/control > build/debian/control.src
cat build/debian/control.src build/debian/control.in > build/debian/control
rm build/debian/control.in build/debian/control.src
cp build/debian/control debian/control
rm build/Cargo.lock
mkdir build
cp -a debian \
Cargo.toml src \
$(SUBCRATES) \
docs etc examples tests www zsh-completions \
defines.mk Makefile \
./build/
rm -f build/Cargo.lock
find build/debian -name "*.hint" -delete
$(foreach i,$(SUBDIRS), \
$(MAKE) -C build/$(i) clean ;)
@ -107,7 +132,9 @@ deb: build
lintian $(DEBS)
.PHONY: deb-all
deb-all: $(DOC_DEB) $(DEBS)
deb-all: build
cd build; dpkg-buildpackage -b -us -uc --no-pre-clean
lintian $(DEBS) $(DOC_DEB)
.PHONY: dsc
dsc: $(DSC)
@ -115,27 +142,61 @@ $(DSC): build
cd build; dpkg-buildpackage -S -us -uc -d -nc
lintian $(DSC)
.PHONY: clean distclean deb clean
distclean: clean
clean:
clean: clean-deb
$(foreach i,$(SUBDIRS), \
$(MAKE) -C $(i) clean ;)
$(CARGO) clean
rm -rf *.deb *.dsc *.tar.gz *.buildinfo *.changes build
rm -f .do-cargo-build
find . -name '*~' -exec rm {} ';'
# allows one to avoid running cargo clean when one just wants to tidy up after a package build
clean-deb:
rm -rf *.deb *.dsc *.tar.gz *.buildinfo *.changes build/
.PHONY: dinstall
dinstall: ${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB}
dinstall: ${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB} \
${DEBUG_DEB} ${DEBUG_DBG_DEB}
dpkg -i $^
# make sure we build binaries before docs
docs: cargo-build
docs: $(COMPILEDIR)/dump-catalog-shell-cli $(COMPILEDIR)/docgen
.PHONY: cargo-build
cargo-build:
$(CARGO) build $(CARGO_BUILD_ARGS)
rm -f .do-cargo-build
$(MAKE) $(COMPILED_BINS)
$(COMPILED_BINS) $(COMPILEDIR)/dump-catalog-shell-cli $(COMPILEDIR)/docgen: .do-cargo-build
.do-cargo-build:
$(CARGO) build $(CARGO_BUILD_ARGS) \
--bin proxmox-backup-api \
--bin proxmox-backup-proxy \
--bin proxmox-backup-manager \
--bin docgen \
--package proxmox-backup-banner \
--bin proxmox-backup-banner \
--package proxmox-backup-client \
--bin proxmox-backup-client \
--bin proxmox-backup-debug \
--package proxmox-file-restore \
--bin proxmox-file-restore \
--package pxar-bin \
--bin pxar \
--package pbs-tape \
--bin pmt \
--bin pmtx \
--package proxmox-restore-daemon \
--bin proxmox-restore-daemon \
--package proxmox-backup \
--bin dump-catalog-shell-cli \
--bin proxmox-daily-update \
--bin proxmox-file-restore \
--bin proxmox-tape \
--bin sg-tape-cmd
touch "$@"
$(COMPILED_BINS): cargo-build
.PHONY: lint
lint:
@ -161,12 +222,16 @@ install: $(COMPILED_BINS)
install -m755 $(COMPILEDIR)/$(i) $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/ ;)
$(MAKE) -C www install
$(MAKE) -C docs install
ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS)))
$(MAKE) test # HACK, only test now to avoid clobbering build files with wrong config
endif
.PHONY: upload
upload: ${SERVER_DEB} ${CLIENT_DEB} ${RESTORE_DEB} ${DOC_DEB}
upload: ${SERVER_DEB} ${CLIENT_DEB} ${RESTORE_DEB} ${DOC_DEB} ${DEBUG_DEB}
# check if working directory is clean
git diff --exit-code --stat && git diff --exit-code --stat --staged
tar cf - ${SERVER_DEB} ${SERVER_DBG_DEB} ${DOC_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB} | \
ssh -X repoman@repo.proxmox.com upload --product pbs --dist buster
tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pve,pmg" --dist buster
tar cf - ${RESTORE_DEB} ${RESTORE_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pve" --dist buster
tar cf - ${SERVER_DEB} ${SERVER_DBG_DEB} ${DOC_DEB} ${CLIENT_DEB} \
${CLIENT_DBG_DEB} ${DEBUG_DEB} ${DEBUG_DBG_DEB} \
| ssh -X repoman@repo.proxmox.com upload --product pbs --dist bullseye
tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pve,pmg,pbs-client" --dist bullseye
tar cf - ${RESTORE_DEB} ${RESTORE_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pve" --dist bullseye

debian/changelog

@ -1,3 +1,459 @@
rust-proxmox-backup (2.0.10-1) UNRELEASED; urgency=medium
* ui: fix order of prune keep reasons
* server: add proxmox-backup-debug binary with chunk/file inspection, an API
shell with completion support
* restructured code base to reduce linkage and library ABI version
constraints for all non-server binaries (client, pxar, file-restore)
* zsh: fix passing parameters in auto-completion scripts
* tape: also add 'force-media-set' to available CLI options
* api: nodes: add missing node list (index) api endpoint
* docs: proxmox-backup-debug: add info about the new 'api' subcommand
* docs/technical-overview: add troubleshooting section
-- Proxmox Support Team <support@proxmox.com> Tue, 21 Sep 2021 14:00:48 +0200
rust-proxmox-backup (2.0.9-2) bullseye; urgency=medium
* tape backup: mention groups that were empty
* tape: compute next-media-label for each tape backup job
* tape: lto: increase default timeout to 10 minutes
* ui: display next-media-label for tape backup jobs
* cli: proxmox-tape backup-job list: use status api and display next-run
and next-media-label
-- Proxmox Support Team <support@proxmox.com> Tue, 24 Aug 2021 14:44:12 +0200
rust-proxmox-backup (2.0.8-1) bullseye; urgency=medium
* update proxmox-apt to 0.6
* api: apt: adapt to proxmox-apt back-end changes
* api/ui: allow zstd compression for new zpools
* tape: media_catalog: add snapshot list cache for catalog
* api2: tape: media: use MediaCatalog::snapshot_list for content listing
* tape: lock media_catalog file to get a consistent view with load_catalog
* tape: changer: handle libraries that send a wrong amount of data
* tape: changer: remove unnecessary inquiry parameter
* api2: tape/restore: commit temporary catalog at the end
* docs: tape: add instructions on how to restore the catalog
* ui: tape/ChangerStatus: improve layout for large libraries
* tape: changer: handle invalid descriptor data from library in status page
* datastore config: cleanup code (use flatten attribute)
-- Proxmox Support Team <support@proxmox.com> Mon, 02 Aug 2021 10:34:55 +0200
rust-proxmox-backup (2.0.7-1) bullseye; urgency=medium
* tape changer: better cope with models that are not following spec
proposals when returning the status page
* tape changer: make DVCID information optional, not all devices return it
* restore daemon: set up the 'backup' system user and group in the minimal
restore environment, as we like to ensure that all state files are owned
by them.
-- Proxmox Support Team <support@proxmox.com> Fri, 23 Jul 2021 08:43:51 +0200
rust-proxmox-backup (2.0.6-1) bullseye; urgency=medium
* increase maximum drives per changer to 255
* allow one to pass a secret not only directly through the environment value,
but also indirectly through a file path, an open file descriptor or a
command that can write the secret to standard out.
* pull in new proxmox library version to improve the file system
compatibility on creation of atomic files, e.g., lock files.
-- Proxmox Support Team <support@proxmox.com> Thu, 22 Jul 2021 10:22:19 +0200
rust-proxmox-backup (2.0.5-2) bullseye; urgency=medium
* ui: tape: backup overview: increase timeout for media-set content
* tape: changer: always retry until timeout
* file-restore: increase lock timeout on QEMU map
* fix #3515: file-restore-daemon: allow LVs/PVs with dash in name
* fix #3526: correctly filter tasks with 'since' and 'until'
* tape: changer: make scsi request for DVCID a separate one, as some
libraries cannot handle requesting that combined with volume tags in one
go
* api, ui: datastore: add new 'prune-datastore' api call and expose it with
a 'Prune All' button
* make creating log files more robust so that they are always owned by the
less privileged `backup` user
-- Proxmox Support Team <support@proxmox.com> Wed, 21 Jul 2021 09:12:39 +0200
rust-proxmox-backup (2.0.4-1) bullseye; urgency=medium
* change tape drive lock path to avoid issues with sticky bit on tmpfs
mountpoint
* tape: changer: query transport-element types separately
* auth: improve thread safety of 'crypt' C-library
-- Proxmox Support Team <support@proxmox.com> Mon, 12 Jul 2021 18:51:21 +0200
rust-proxmox-backup (2.0.3-1) bullseye; urgency=medium
* api: apt: add repositories info and update calls
* ui: administration: add APT repositories status and update panel
* api: access domains: add get/create/update/delete endpoints for realms
* ui: access control: add 'Realm' tab for adding and editing OpenID Connect
identity provider
* fix #3447: ui: Dashboard: disallow selection of datastore statistics row
* ui: tapeRestore: make window non-resizable
* ui: dashboard: rework resource-load panel to a more detailed status panel,
showing, among other things, uptime, Kernel version, CPU info and
repository status.
* ui: administration/dashboard: auto-scale columns count and add
browser-local setting to override that to a fixed value of columns.
* fix #3212: api, ui: add support for notes on backup groups
-- Proxmox Support Team <support@proxmox.com> Mon, 12 Jul 2021 08:07:41 +0200
rust-proxmox-backup (2.0.2-1) bullseye; urgency=medium
* ui: use task list component from widget toolkit
* api: add keep-job-configs flag to datastore remove endpoint
* api: config: delete datastore: also remove tape backup jobs
* ui: tape restore: mark datastore selector as 'not a form field' to fix
compatibility with ExtJS 7.0
* ui: datastore removal: only navigate away when the user actually confirmed
the removal of that datastore
-- Proxmox Support Team <support@proxmox.com> Thu, 08 Jul 2021 14:44:12 +0200
rust-proxmox-backup (2.0.1-2) bullseye; urgency=medium
* file restore daemon: log basic startup steps
* REST-API: set error message extension for bad-request response log to
ensure the actual error is logged in any (access) log, making debugging
such issues easier
* restore daemon: create /run/proxmox-backup on startup as there's now some
runtime state saved there, which failed all API requests to the restore
daemon otherwise
* restore daemon: use millisecond log resolution
* fix #3496: acme: plugin: actually sleep after setting the TXT record,
ensuring DNS propagation of that record. This makes it catch up with the
docs/web-interface, where the option was already available.
* docs: initial update to repositories for bullseye
-- Proxmox Support Team <support@proxmox.com> Sat, 03 Jul 2021 23:14:49 +0200
rust-proxmox-backup (2.0.0-2) bullseye; urgency=medium
* file-restore-daemon/disk: add LVM (thin) support
-- Proxmox Support Team <support@proxmox.com> Sat, 03 Jul 2021 02:15:16 +0200
rust-proxmox-backup (2.0.0-1) bullseye; urgency=medium
* initial bump for Debian 11 Bullseye / Proxmox Backup Server 2.0
* ui: datastore list summary: catch and show errors per datastore
* ui: dashboard: task summary: add a 'close' tool to the header
* ensure that backups which are currently being restored or backed up to a
tape won't get pruned
* improve error handling when locking a tape drive for a backup job
* client/pull: log snapshots that are skipped because of creation time being
older than last sync time
* ui: datastore options: add remove button to drop a datastore from the
configuration, without removing any actual data
* ui: tape: drive selector: do not auto select the drive
* ui: tape: backup job: use correct default value for pbsUserSelector
* fix #3433: disks: port over Proxmox VE's S.M.A.R.T wearout logic
* backup: add helpers for async least recently used (LRU) caches for chunk
and index reading of backup snapshot
* fix #3459: manager: add --ignore-verified and --outdated-after parameters
* proxmox-backup-manager: show task log on datastore create
* tape: snapshot reader: read chunks sorted by inode (per index) to improve
sequential reads when backing up data from slow spinning disks to tape.
* file-restore: support ZFS pools
* improve fix for #3393: pxar create: try to read xattrs/fcaps/acls by default
* fix compatibility with ExtJS 7.0
* docs: build api-viewer from widget-toolkit-dev
-- Proxmox Support Team <support@proxmox.com> Mon, 28 Jun 2021 19:35:40 +0200
rust-proxmox-backup (1.1.9-1) stable; urgency=medium
* lto/sg_tape/encryption: remove non lto-4 supported byte
* ui: improve tape restore
* ui: panel/UsageChart: change downloadServerUrl
* ui: css fixes and cleanups
* api2/tape: add api call to list media sets
* ui: tape/BackupOverview: expand pools by default
* api: node/journal: fix parameter extraction of /nodes/node/journal
* file-restore-daemon: limit concurrent download calls
* file-restore-daemon: watchdog: add inhibit for long downloads
* file-restore-daemon: work around tokio DuplexStream bug
* apt: fix removal of non-existent http-proxy config
* file-restore-daemon: disk: add RawFs bucket type
* file-restore-daemon: disk: ignore "invalid fs" error
-- Proxmox Support Team <support@proxmox.com> Tue, 01 Jun 2021 08:24:01 +0200
rust-proxmox-backup (1.1.8-1) stable; urgency=medium
* api-proxy: implement 'reload-certificate' command and hot-reload proxy
certificate when updating via the API
* ui: add task descriptions for ACME/Let's Encrypt related tasks
* correctly set apt proxy configuration
* ui: configuration: support setting a HTTP proxy for APT and subscription
checks.
* ui: tape: add 'Force new Media-Set' checkbox to manual backup
* ui: datastore/Content: add forget (delete) button for whole backup groups
* ui: tape: backup overview: move restore buttons inline to action-buttons,
making the UX more similar to the datastore content tree-view
* ui: tape restore: enable selecting multiple snapshots
* ui: dashboards statistics: visualize datastores where querying the usage
failed
-- Proxmox Support Team <support@proxmox.com> Fri, 21 May 2021 18:21:28 +0200
rust-proxmox-backup (1.1.7-1) unstable; urgency=medium
* client: use stderr for all fingerprint confirm msgs
* fix #3391: improve mismatched fingerprint handling
* tape: add single snapshot restore
* docs/api-viewer: improve rendering of array format
* tape/pool_writer: do not unwrap on channel send
* ui: window/SyncJobEdit: disable autoSelect for remote datastore
* ui: tape: rename 'Datastore' to 'Target Datastore'
* manager: acme plugin: auto-complete available DNS challenge types
* manager: acme plugin: remove ID completion helper from add command
* completion: ACME plugin type: comment out http type for now, not useful
* acme: use proxmox-acme-plugins and load schema from there
* fix 3296: add http_proxy to node config, and provide a cli
* fix #3331: improve progress for last snapshot in group
* file-restore: add debug mode with serial access
* file-restore: support more drives
* file-restore: add more RAM for VMs with many drives or debug
* file-restore: try to kill VM when stale
* make sure URI paths start with a slash
* tape: use LOCATE(16) SCSI command
* call create_run_dir() at daemon startup
* tape/drive: add 'move_to_file' to TapeDriver trait
* proxmox_restore_daemon: mount ntfs with 'utf8' option
* client/http_client: add necessary brackets for ipv6
* docs: tape: clarify LTO-4/5 support
* tape/restore: optimize chunk restore behaviour
-- Proxmox Support Team <support@proxmox.com> Tue, 11 May 2021 13:22:49 +0200
rust-proxmox-backup (1.1.6-2) unstable; urgency=medium
* fix permissions set in create_run_dir
-- Proxmox Support Team <support@proxmox.com> Tue, 04 May 2021 12:25:00 +0200
rust-proxmox-backup (1.1.6-1) unstable; urgency=medium
* tape restore: do not verify restored files
* tape restore: add restore speed to logs
* tape restore: write datastore in separate thread
* add ACME support
* add node config
* docs: user-management: add note about untrusted certificates for
webauthn
* bin: use extract_output_format where necessary
* add ctime and size function to IndexFile trait
* ui: tape: handle tapes in changers without barcode
-- Proxmox Support Team <support@proxmox.com> Tue, 04 May 2021 12:09:25 +0200
rust-proxmox-backup (1.1.5-3) stable; urgency=medium
* file-restore: use 'norecovery' for XFS filesystems to allow mounting
those which were not unmounted during backup
-- Proxmox Support Team <support@proxmox.com> Thu, 29 Apr 2021 15:26:13 +0200
rust-proxmox-backup (1.1.5-2) stable; urgency=medium
* file-restore: strip .img.fidx suffix from drive serials to avoid running
into the 20 character limit that SCSI serial values have.
-- Proxmox Support Team <support@proxmox.com> Wed, 28 Apr 2021 11:15:08 +0200
rust-proxmox-backup (1.1.5-1) unstable; urgency=medium
* tools/sgutils2: add size workaround for mode_sense
* tape: add read_medium_configuration_page() to detect WORM media
* file-restore: fix package name for kernel/initramfs image
* tape: remove MediumType struct, which is only valid on IBM drives
-- Proxmox Support Team <support@proxmox.com> Tue, 27 Apr 2021 12:20:04 +0200
rust-proxmox-backup (1.1.4-1) unstable; urgency=medium
* file-restore: add size to image files and components
* file-restore: exit with code 1 in case streaming fails
* file-restore: use less memory for VM (now 128 MiB) and reboot on panic
* ui: tape: improve reload drive-status logic on user actions
* tape backup: list the snapshots we could back up on failed backup
notification
* Improve on a scheduling issue when updating the calendar event such that
it would have triggered between the last run and now. Use the next future
event as the actual next trigger instead.
* SCSI mode sense: include the expected and unexpected sizes in the error
message, to allow easier debugging
-- Proxmox Support Team <support@proxmox.com> Tue, 27 Apr 2021 08:27:10 +0200
rust-proxmox-backup (1.1.3-2) unstable; urgency=medium
* improve check for LTO4 tapes
* api: node status: return further information about SWAP, IO-wait, CPU info
and Kernel version
-- Proxmox Support Team <support@proxmox.com> Fri, 23 Apr 2021 10:52:08 +0200
rust-proxmox-backup (1.1.3-1) unstable; urgency=medium
* tape restore: improve datastore locking when GC runs at the same time
* tape restore: always do quick chunk verification
* tape: improve compatibility with some changers
* tape: work around missing format command on LTO-4 drives, fall back to
slower rewind erase
* fix #3393: pxar: allow and save the 'security.NTACL' extended attribute
* file-restore: support encrypted VM backups
-- Proxmox Support Team <support@proxmox.com> Thu, 22 Apr 2021 20:14:58 +0200
rust-proxmox-backup (1.1.2-1) unstable; urgency=medium
* backup verify: always re-check if we can skip a chunk in the actual verify

debian/control

@ -1,8 +1,8 @@
Source: rust-proxmox-backup
Section: admin
Priority: optional
Build-Depends: debhelper (>= 11),
dh-cargo (>= 18),
Build-Depends: debhelper (>= 12),
dh-cargo (>= 24),
cargo:native,
rustc:native,
libstd-rust-dev,
@ -17,6 +17,7 @@ Build-Depends: debhelper (>= 11),
librust-endian-trait-0.6+default-dev,
librust-env-logger-0.7+default-dev,
librust-flate2-1+default-dev,
librust-foreign-types-0.3+default-dev,
librust-futures-0.3+default-dev,
librust-h2-0.3+default-dev,
librust-h2-0.3+stream-dev,
@ -36,13 +37,21 @@ Build-Depends: debhelper (>= 11),
librust-pam-sys-0.5+default-dev,
librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
librust-percent-encoding-2+default-dev (>= 2.1-~~),
librust-pin-project-1+default-dev,
librust-pin-utils-0.1+default-dev,
librust-proxmox-0.11+api-macro-dev (>= 0.11.1-~~),
librust-proxmox-0.11+default-dev (>= 0.11.1-~~),
librust-proxmox-0.11+sortable-macro-dev (>= 0.11.1-~~),
librust-proxmox-0.11+websocket-dev (>= 0.11.1-~~),
librust-pin-project-lite-0.2+default-dev,
librust-proxmox-0.13+api-macro-dev,
librust-proxmox-0.13+cli-dev,
librust-proxmox-0.13+default-dev,
librust-proxmox-0.13+router-dev,
librust-proxmox-0.13+sortable-macro-dev,
librust-proxmox-0.13+tfa-dev,
librust-proxmox-acme-rs-0.2+default-dev (>= 0.2.1-~~),
librust-proxmox-apt-0.7+default-dev,
librust-proxmox-fuse-0.1+default-dev (>= 0.1.1-~~),
librust-proxmox-http-0.4+client-dev,
librust-proxmox-http-0.4+default-dev ,
librust-proxmox-http-0.4+http-helpers-dev,
librust-proxmox-http-0.4+websocket-dev,
librust-proxmox-openid-0.7+default-dev,
librust-pxar-0.10+default-dev (>= 0.10.1-~~),
librust-pxar-0.10+tokio-io-dev (>= 0.10.1-~~),
librust-regex-1+default-dev (>= 1.2-~~),
@ -53,18 +62,18 @@ Build-Depends: debhelper (>= 11),
librust-siphasher-0.3+default-dev,
librust-syslog-4+default-dev,
librust-thiserror-1+default-dev,
librust-tokio-1+default-dev,
librust-tokio-1+fs-dev,
librust-tokio-1+io-std-dev,
librust-tokio-1+io-util-dev,
librust-tokio-1+macros-dev,
librust-tokio-1+net-dev,
librust-tokio-1+parking-lot-dev,
librust-tokio-1+process-dev,
librust-tokio-1+rt-dev,
librust-tokio-1+rt-multi-thread-dev,
librust-tokio-1+signal-dev,
librust-tokio-1+time-dev,
librust-tokio-1+default-dev (>= 1.6-~~),
librust-tokio-1+fs-dev (>= 1.6-~~),
librust-tokio-1+io-std-dev (>= 1.6-~~),
librust-tokio-1+io-util-dev (>= 1.6-~~),
librust-tokio-1+macros-dev (>= 1.6-~~),
librust-tokio-1+net-dev (>= 1.6-~~),
librust-tokio-1+parking-lot-dev (>= 1.6-~~),
librust-tokio-1+process-dev (>= 1.6-~~),
librust-tokio-1+rt-dev (>= 1.6-~~),
librust-tokio-1+rt-multi-thread-dev (>= 1.6-~~),
librust-tokio-1+signal-dev (>= 1.6-~~),
librust-tokio-1+time-dev (>= 1.6-~~),
librust-tokio-openssl-0.6+default-dev (>= 0.6.1-~~),
librust-tokio-stream-0.1+default-dev,
librust-tokio-util-0.6+codec-dev,
@ -76,8 +85,8 @@ Build-Depends: debhelper (>= 11),
librust-walkdir-2+default-dev,
librust-webauthn-rs-0.2+default-dev (>= 0.2.5-~~),
librust-xdg-2+default-dev (>= 2.2-~~),
librust-zstd-0.4+bindgen-dev,
librust-zstd-0.4+default-dev,
librust-zstd-0.6+bindgen-dev,
librust-zstd-0.6+default-dev,
libacl1-dev,
libfuse3-dev,
libsystemd-dev,
@ -91,6 +100,7 @@ Build-Depends: debhelper (>= 11),
graphviz <!nodoc>,
latexmk <!nodoc>,
patchelf,
proxmox-widget-toolkit-dev <!nodoc>,
pve-eslint (>= 7.18.0-1),
python3-docutils,
python3-pygments,
@ -101,16 +111,18 @@ Build-Depends: debhelper (>= 11),
texlive-xetex <!nodoc>,
xindy <!nodoc>
Maintainer: Proxmox Support Team <support@proxmox.com>
Standards-Version: 4.4.1
Standards-Version: 4.5.1
Vcs-Git: git://git.proxmox.com/git/proxmox-backup.git
Vcs-Browser: https://git.proxmox.com/?p=proxmox-backup.git;a=summary
Homepage: https://www.proxmox.com
Rules-Requires-Root: binary-targets
Package: proxmox-backup-server
Architecture: any
Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1),
libjs-extjs (>= 7~),
libjs-qrcodejs (>= 1.20201119),
libproxmox-acme-plugins,
libsgutils2-2,
libzstd1 (>= 1.3.8),
lvm2,
@ -119,7 +131,7 @@ Depends: fonts-font-awesome,
postfix | mail-transport-agent,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.5-1),
proxmox-widget-toolkit (>= 3.3-2),
pve-xtermjs (>= 4.7.0-1),
sg3-utils,
smartmontools,
@ -143,7 +155,8 @@ Description: Proxmox Backup Client tools
Package: proxmox-backup-docs
Build-Profiles: <!nodoc>
Section: doc
Depends: libjs-extjs,
Depends: fonts-font-awesome,
libjs-extjs,
libjs-mathjax,
${misc:Depends},
Architecture: all
@ -156,6 +169,7 @@ Depends: ${misc:Depends},
${shlibs:Depends},
Recommends: pve-qemu-kvm (>= 5.0.0-9),
proxmox-backup-restore-image,
Breaks: proxmox-backup-restore-image (<< 0.3.1)
Description: Proxmox Backup single file restore tools for pxar and block device backups
This package contains the Proxmox Backup single file restore client for
restoring individual files and folders from both host/container and VM/block

debian/control.in

@ -1,54 +0,0 @@
Package: proxmox-backup-server
Architecture: any
Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1),
libjs-qrcodejs (>= 1.20201119),
libsgutils2-2,
libzstd1 (>= 1.3.8),
lvm2,
openssh-server,
pbs-i18n,
postfix | mail-transport-agent,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.5-1),
pve-xtermjs (>= 4.7.0-1),
sg3-utils,
smartmontools,
${misc:Depends},
${shlibs:Depends},
Recommends: zfsutils-linux,
ifupdown2,
Description: Proxmox Backup Server daemon with tools and GUI
This package contains the Proxmox Backup Server daemons and related
tools. This includes a web-based graphical user interface.
Package: proxmox-backup-client
Architecture: any
Depends: qrencode,
${misc:Depends},
${shlibs:Depends},
Description: Proxmox Backup Client tools
This package contains the Proxmox Backup client, which provides a
simple command line tool to create and restore backups.
Package: proxmox-backup-docs
Build-Profiles: <!nodoc>
Section: doc
Depends: libjs-extjs,
libjs-mathjax,
${misc:Depends},
Architecture: all
Description: Proxmox Backup Documentation
This package contains the Proxmox Backup Documentation files.
Package: proxmox-backup-file-restore
Architecture: any
Depends: ${misc:Depends},
${shlibs:Depends},
Recommends: pve-qemu-kvm (>= 5.0.0-9),
proxmox-backup-restore-image,
Description: Proxmox Backup single file restore tools for pxar and block device backups
This package contains the Proxmox Backup single file restore client for
restoring individual files and folders from both host/container and VM/block
device backups. It includes a block device restore driver using QEMU.

debian/debcargo.toml

@ -1,42 +0,0 @@
overlay = "."
crate_src_path = ".."
whitelist = ["tests/*.c"]
maintainer = "Proxmox Support Team <support@proxmox.com>"
[source]
vcs_git = "git://git.proxmox.com/git/proxmox-backup.git"
vcs_browser = "https://git.proxmox.com/?p=proxmox-backup.git;a=summary"
section = "admin"
build_depends = [
"bash-completion",
"debhelper (>= 12~)",
"fonts-dejavu-core <!nodoc>",
"fonts-lato <!nodoc>",
"fonts-open-sans <!nodoc>",
"graphviz <!nodoc>",
"latexmk <!nodoc>",
"patchelf",
"pve-eslint (>= 7.18.0-1)",
"python3-docutils",
"python3-pygments",
"python3-sphinx <!nodoc>",
"rsync",
"texlive-fonts-extra <!nodoc>",
"texlive-fonts-recommended <!nodoc>",
"texlive-xetex <!nodoc>",
"xindy <!nodoc>",
]
build_depends_excludes = [
"debhelper (>=11)",
]
[packages.lib]
depends = [
"libacl1-dev",
"libfuse3-dev",
"libsystemd-dev",
"uuid-dev",
"libsgutils2-dev",
]

debian/postinst

@ -26,43 +26,7 @@ case "$1" in
fi
deb-systemd-invoke $_dh_action proxmox-backup.service proxmox-backup-proxy.service >/dev/null || true
# FIXME: Remove with 1.1
if test -n "$2"; then
if dpkg --compare-versions "$2" 'lt' '0.9.4-1'; then
if grep -s -q -P -e '^\s+verify-schedule ' /etc/proxmox-backup/datastore.cfg; then
echo "NOTE: drop all verify schedules from datastore config."
echo "You can now add more flexible verify jobs"
flock -w 30 /etc/proxmox-backup/.datastore.lck \
sed -i '/^\s\+verify-schedule /d' /etc/proxmox-backup/datastore.cfg || true
fi
fi
if dpkg --compare-versions "$2" 'le' '0.9.5-1'; then
chown --quiet backup:backup /var/log/proxmox-backup/api/auth.log || true
fi
if dpkg --compare-versions "$2" 'le' '0.9.7-1'; then
if [ -e /etc/proxmox-backup/remote.cfg ]; then
echo "NOTE: Switching over remote.cfg to new field names.."
flock -w 30 /etc/proxmox-backup/.remote.lck \
sed -i \
-e 's/^\s\+userid /\tauth-id /g' \
/etc/proxmox-backup/remote.cfg || true
fi
fi
if dpkg --compare-versions "$2" 'le' '1.0.14-1'; then
# FIXME: Remove with 2.0
if grep -s -q -P -e '^linux:' /etc/proxmox-backup/tape.cfg; then
echo "========="
echo "= NOTE: You have now unsupported 'linux' tape drives configured."
echo "= * Execute 'udevadm control --reload-rules && udevadm trigger' to update /dev"
echo "= * Edit '/etc/proxmox-backup/tape.cfg', remove 'linux' entries and re-add over CLI/GUI"
echo "========="
fi
fi
# FIXME: remove with 2.0
if [ -d "/var/lib/proxmox-backup/tape" ] &&
[ "$(stat --printf '%a' '/var/lib/proxmox-backup/tape')" != "750" ]; then
chmod 0750 /var/lib/proxmox-backup/tape || true
fi
# FIXME: Remove in future version once we're sure no broken entries remain in anyone's files
if grep -q -e ':termproxy::[^@]\+: ' /var/log/proxmox-backup/tasks/active; then
echo "Fixing up termproxy user id in task log..."

debian/proxmox-backup-debug.bc (new file)

@ -0,0 +1,8 @@
# proxmox-backup-debug bash completion
# see http://tiswww.case.edu/php/chet/bash/FAQ
# and __ltrim_colon_completions() in /usr/share/bash-completion/bash_completion
# this modifies global var, but I found no better way
COMP_WORDBREAKS=${COMP_WORDBREAKS//:}
complete -C 'proxmox-backup-debug bashcomplete' proxmox-backup-debug

View File

@ -1,5 +1,6 @@
/usr/share/doc/proxmox-backup/proxmox-backup.pdf /usr/share/doc/proxmox-backup/html/proxmox-backup.pdf
/usr/share/javascript/extjs /usr/share/doc/proxmox-backup/html/prune-simulator/extjs
/usr/share/javascript/extjs /usr/share/doc/proxmox-backup/html/lto-barcode/extjs
/usr/share/fonts-font-awesome/ /usr/share/doc/proxmox-backup/html/lto-barcode/font-awesome
/usr/share/javascript/extjs /usr/share/doc/proxmox-backup/html/api-viewer/extjs
/usr/share/javascript/mathjax /usr/share/doc/proxmox-backup/html/_static/mathjax

View File

@ -6,6 +6,7 @@ update_initramfs() {
# regenerate initramfs for single file restore VM
INST_PATH="/usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore"
CACHE_PATH="/var/cache/proxmox-backup/file-restore-initramfs.img"
CACHE_PATH_DBG="/var/cache/proxmox-backup/file-restore-initramfs-debug.img"
# cleanup first, in case proxmox-file-restore was uninstalled since we do
# not want an unusable image lying around
@ -20,7 +21,7 @@ update_initramfs() {
# avoid leftover temp file
cleanup() {
rm -f "$CACHE_PATH.tmp"
rm -f "$CACHE_PATH.tmp" "$CACHE_PATH_DBG.tmp"
}
trap cleanup EXIT
@ -34,6 +35,15 @@ update_initramfs() {
| cpio -o --format=newc -A -F "$CACHE_PATH.tmp" )
mv -f "$CACHE_PATH.tmp" "$CACHE_PATH"
if [ -f "$INST_PATH/initramfs-debug.img" ]; then
echo "Updating file-restore debug initramfs..."
cp "$INST_PATH/initramfs-debug.img" "$CACHE_PATH_DBG.tmp"
( cd "$INST_PATH"; \
printf "./proxmox-restore-daemon" \
| cpio -o --format=newc -A -F "$CACHE_PATH_DBG.tmp" )
mv -f "$CACHE_PATH_DBG.tmp" "$CACHE_PATH_DBG"
fi
trap - EXIT
}

View File

@ -1,4 +1,5 @@
debian/proxmox-backup-manager.bc proxmox-backup-manager
debian/proxmox-backup-debug.bc proxmox-backup-debug
debian/proxmox-tape.bc proxmox-tape
debian/pmtx.bc pmtx
debian/pmt.bc pmt

View File

@ -9,6 +9,7 @@ usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-proxy
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-banner
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-daily-update
usr/lib/x86_64-linux-gnu/proxmox-backup/sg-tape-cmd
usr/sbin/proxmox-backup-debug
usr/sbin/proxmox-backup-manager
usr/bin/pmtx
usr/bin/pmt
@ -17,6 +18,7 @@ usr/share/javascript/proxmox-backup/index.hbs
usr/share/javascript/proxmox-backup/css/ext6-pbs.css
usr/share/javascript/proxmox-backup/images
usr/share/javascript/proxmox-backup/js/proxmox-backup-gui.js
usr/share/man/man1/proxmox-backup-debug.1
usr/share/man/man1/proxmox-backup-manager.1
usr/share/man/man1/proxmox-backup-proxy.1
usr/share/man/man1/proxmox-tape.1
@ -31,6 +33,7 @@ usr/share/man/man5/verification.cfg.5
usr/share/man/man5/media-pool.cfg.5
usr/share/man/man5/tape.cfg.5
usr/share/man/man5/tape-job.cfg.5
usr/share/zsh/vendor-completions/_proxmox-backup-debug
usr/share/zsh/vendor-completions/_proxmox-backup-manager
usr/share/zsh/vendor-completions/_proxmox-tape
usr/share/zsh/vendor-completions/_pmtx

debian/rules

@ -32,6 +32,9 @@ override_dh_auto_build:
override_dh_missing:
dh_missing --fail-missing
override_dh_auto_test:
# ignore here to avoid rebuilding the binaries with the wrong target
override_dh_auto_install:
dh_auto_install -- \
PROXY_USER=backup \
@ -45,11 +48,6 @@ override_dh_installsystemd:
override_dh_fixperms:
dh_fixperms --exclude sg-tape-cmd
# workaround https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=933541
# TODO: remove once available (Debian 11 ?)
override_dh_dwz:
dh_dwz --no-dwz-multifile
override_dh_strip:
dh_strip
for exe in $$(find \

View File

@ -5,6 +5,7 @@ GENERATED_SYNOPSIS := \
proxmox-backup-client/synopsis.rst \
proxmox-backup-client/catalog-shell-synopsis.rst \
proxmox-backup-manager/synopsis.rst \
proxmox-backup-debug/synopsis.rst \
proxmox-file-restore/synopsis.rst \
pxar/synopsis.rst \
pmtx/synopsis.rst \
@ -27,7 +28,8 @@ MAN1_PAGES := \
proxmox-backup-proxy.1 \
proxmox-backup-client.1 \
proxmox-backup-manager.1 \
proxmox-file-restore.1
proxmox-file-restore.1 \
proxmox-backup-debug.1
MAN5_PAGES := \
media-pool.cfg.5 \
@ -46,23 +48,35 @@ PRUNE_SIMULATOR_FILES := \
prune-simulator/clear-trigger.png \
prune-simulator/prune-simulator.js
PRUNE_SIMULATOR_JS_SOURCE := \
/usr/share/javascript/proxmox-widget-toolkit-dev/Toolkit.js \
prune-simulator/prune-simulator_source.js
LTO_BARCODE_JS_SOURCE := \
/usr/share/javascript/proxmox-widget-toolkit-dev/Toolkit.js \
lto-barcode/code39.js \
lto-barcode/prefix-field.js \
lto-barcode/label-style.js \
lto-barcode/tape-type.js \
lto-barcode/paper-size.js \
lto-barcode/page-layout.js \
lto-barcode/page-calibration.js \
lto-barcode/label-list.js \
lto-barcode/label-setup.js \
lto-barcode/lto-barcode.js
LTO_BARCODE_FILES := \
lto-barcode/index.html \
lto-barcode/code39.js \
lto-barcode/prefix-field.js \
lto-barcode/label-style.js \
lto-barcode/tape-type.js \
lto-barcode/paper-size.js \
lto-barcode/page-layout.js \
lto-barcode/page-calibration.js \
lto-barcode/label-list.js \
lto-barcode/label-setup.js \
lto-barcode/lto-barcode.js
lto-barcode/lto-barcode-generator.js
API_VIEWER_SOURCES= \
api-viewer/index.html \
api-viewer/apidoc.js
API_VIEWER_FILES := \
api-viewer/apidata.js \
/usr/share/javascript/proxmox-widget-toolkit-dev/APIViewer.js \
# Sphinx documentation setup
SPHINXOPTS =
SPHINXBUILD = sphinx-build
@ -187,17 +201,32 @@ proxmox-file-restore/synopsis.rst: ${COMPILEDIR}/proxmox-file-restore
proxmox-file-restore.1: proxmox-file-restore/man1.rst proxmox-file-restore/description.rst proxmox-file-restore/synopsis.rst
rst2man $< >$@
proxmox-backup-debug/synopsis.rst: ${COMPILEDIR}/proxmox-backup-debug
${COMPILEDIR}/proxmox-backup-debug printdoc > proxmox-backup-debug/synopsis.rst
proxmox-backup-debug.1: proxmox-backup-debug/man1.rst proxmox-backup-debug/description.rst proxmox-backup-debug/synopsis.rst
rst2man $< >$@
.PHONY: onlinehelpinfo
onlinehelpinfo:
@echo "Generating OnlineHelpInfo.js..."
$(SPHINXBUILD) -b proxmox-scanrefs $(ALLSPHINXOPTS) $(BUILDDIR)/scanrefs
$(SPHINXBUILD) -b proxmox-scanrefs -Q $(ALLSPHINXOPTS) $(BUILDDIR)/scanrefs
@echo "Build finished. OnlineHelpInfo.js is in $(BUILDDIR)/scanrefs."
api-viewer/apidata.js: ${COMPILEDIR}/docgen
${COMPILEDIR}/docgen apidata.js >$@
api-viewer/apidoc.js: api-viewer/apidata.js api-viewer/PBSAPI.js
cat api-viewer/apidata.js api-viewer/PBSAPI.js >$@
api-viewer/apidoc.js: ${API_VIEWER_FILES}
cat ${API_VIEWER_FILES} >$@.tmp
mv $@.tmp $@
prune-simulator/prune-simulator.js: ${PRUNE_SIMULATOR_JS_SOURCE}
cat ${PRUNE_SIMULATOR_JS_SOURCE} >$@.tmp
mv $@.tmp $@
lto-barcode/lto-barcode-generator.js: ${LTO_BARCODE_JS_SOURCE}
cat ${LTO_BARCODE_JS_SOURCE} >$@.tmp
mv $@.tmp $@
.PHONY: html
html: ${GENERATED_SYNOPSIS} images/proxmox-logo.svg custom.css conf.py ${PRUNE_SIMULATOR_FILES} ${LTO_BARCODE_FILES} ${API_VIEWER_SOURCES}
@ -228,6 +257,7 @@ epub3: ${GENERATED_SYNOPSIS}
clean:
rm -r -f *~ *.1 ${BUILDDIR} ${GENERATED_SYNOPSIS} api-viewer/apidata.js
rm -f api-viewer/apidoc.js lto-barcode/lto-barcode-generator.js prune-simulator/prune-simulator.js
install_manual_pages: ${MAN1_PAGES} ${MAN5_PAGES}

View File

@ -1,511 +0,0 @@
// avoid errors when running without development tools
if (!Ext.isDefined(Ext.global.console)) {
var console = {
dir: function() {},
log: function() {}
};
}
Ext.onReady(function() {
Ext.define('pve-param-schema', {
extend: 'Ext.data.Model',
fields: [
'name', 'type', 'typetext', 'description', 'verbose_description',
'enum', 'minimum', 'maximum', 'minLength', 'maxLength',
'pattern', 'title', 'requires', 'format', 'default',
'disallow', 'extends', 'links',
{
name: 'optional',
type: 'boolean'
}
]
});
var store = Ext.define('pve-updated-treestore', {
extend: 'Ext.data.TreeStore',
model: Ext.define('pve-api-doc', {
extend: 'Ext.data.Model',
fields: [
'path', 'info', 'text',
]
}),
proxy: {
type: 'memory',
data: pbsapi
},
sorters: [{
property: 'leaf',
direction: 'ASC'
}, {
property: 'text',
direction: 'ASC'
}],
filterer: 'bottomup',
doFilter: function(node) {
this.filterNodes(node, this.getFilters().getFilterFn(), true);
},
filterNodes: function(node, filterFn, parentVisible) {
var me = this,
bottomUpFiltering = me.filterer === 'bottomup',
match = filterFn(node) && parentVisible || (node.isRoot() && !me.getRootVisible()),
childNodes = node.childNodes,
len = childNodes && childNodes.length, i, matchingChildren;
if (len) {
for (i = 0; i < len; ++i) {
matchingChildren = me.filterNodes(childNodes[i], filterFn, match || bottomUpFiltering) || matchingChildren;
}
if (bottomUpFiltering) {
match = matchingChildren || match;
}
}
node.set("visible", match, me._silentOptions);
return match;
},
}).create();
var render_description = function(value, metaData, record) {
var pdef = record.data;
value = pdef.verbose_description || value;
// TODO: try to render asciidoc correctly
metaData.style = 'white-space:pre-wrap;'
return Ext.htmlEncode(value);
};
var render_type = function(value, metaData, record) {
var pdef = record.data;
return pdef['enum'] ? 'enum' : (pdef.type || 'string');
};
var render_format = function(value, metaData, record) {
var pdef = record.data;
metaData.style = 'white-space:normal;'
if (pdef.typetext)
return Ext.htmlEncode(pdef.typetext);
if (pdef['enum'])
return pdef['enum'].join(' | ');
if (pdef.format)
return pdef.format;
if (pdef.pattern)
return Ext.htmlEncode(pdef.pattern);
return '';
};
var real_path = function(path) {
return path.replace(/^.*\/_upgrade_(\/)?/, "/");
};
var permission_text = function(permission) {
let permhtml = "";
if (permission.user) {
if (!permission.description) {
if (permission.user === 'world') {
permhtml += "Accessible without any authentication.";
} else if (permission.user === 'all') {
permhtml += "Accessible by all authenticated users.";
} else {
permhtml += 'Onyl accessible by user "' +
permission.user + '"';
}
}
} else if (permission.check) {
permhtml += "<pre>Check: " +
Ext.htmlEncode(Ext.JSON.encode(permission.check)) + "</pre>";
} else if (permission.userParam) {
permhtml += `<div>Check if user matches parameter '${permission.userParam}'`;
} else if (permission.or) {
permhtml += "<div>Or<div style='padding-left: 10px;'>";
Ext.Array.each(permission.or, function(sub_permission) {
permhtml += permission_text(sub_permission);
})
permhtml += "</div></div>";
} else if (permission.and) {
permhtml += "<div>And<div style='padding-left: 10px;'>";
Ext.Array.each(permission.and, function(sub_permission) {
permhtml += permission_text(sub_permission);
})
permhtml += "</div></div>";
} else {
//console.log(permission);
permhtml += "Unknown syntax!";
}
return permhtml;
};
var render_docu = function(data) {
var md = data.info;
// console.dir(data);
var items = [];
var clicmdhash = {
GET: 'get',
POST: 'create',
PUT: 'set',
DELETE: 'delete'
};
Ext.Array.each(['GET', 'POST', 'PUT', 'DELETE'], function(method) {
var info = md[method];
if (info) {
var usage = "";
usage += "<table><tr><td>HTTP:&nbsp;&nbsp;&nbsp;</td><td>"
+ method + " " + real_path("/api2/json" + data.path) + "</td></tr>";
var sections = [
{
title: 'Description',
html: Ext.htmlEncode(info.description),
bodyPadding: 10
},
{
title: 'Usage',
html: usage,
bodyPadding: 10
}
];
if (info.parameters && info.parameters.properties) {
var pstore = Ext.create('Ext.data.Store', {
model: 'pve-param-schema',
proxy: {
type: 'memory'
},
groupField: 'optional',
sorters: [
{
property: 'name',
direction: 'ASC'
}
]
});
Ext.Object.each(info.parameters.properties, function(name, pdef) {
pdef.name = name;
pstore.add(pdef);
});
pstore.sort();
var groupingFeature = Ext.create('Ext.grid.feature.Grouping',{
enableGroupingMenu: false,
groupHeaderTpl: '<tpl if="groupValue">Optional</tpl><tpl if="!groupValue">Required</tpl>'
});
sections.push({
xtype: 'gridpanel',
title: 'Parameters',
features: [groupingFeature],
store: pstore,
viewConfig: {
trackOver: false,
stripeRows: true
},
columns: [
{
header: 'Name',
dataIndex: 'name',
flex: 1
},
{
header: 'Type',
dataIndex: 'type',
renderer: render_type,
flex: 1
},
{
header: 'Default',
dataIndex: 'default',
flex: 1
},
{
header: 'Format',
dataIndex: 'type',
renderer: render_format,
flex: 2
},
{
header: 'Description',
dataIndex: 'description',
renderer: render_description,
flex: 6
}
]
});
}
if (info.returns) {
var retinf = info.returns;
var rtype = retinf.type;
if (!rtype && retinf.items)
rtype = 'array';
if (!rtype)
rtype = 'object';
var rpstore = Ext.create('Ext.data.Store', {
model: 'pve-param-schema',
proxy: {
type: 'memory'
},
groupField: 'optional',
sorters: [
{
property: 'name',
direction: 'ASC'
}
]
});
var properties;
if (rtype === 'array' && retinf.items.properties) {
properties = retinf.items.properties;
}
if (rtype === 'object' && retinf.properties) {
properties = retinf.properties;
}
Ext.Object.each(properties, function(name, pdef) {
pdef.name = name;
rpstore.add(pdef);
});
rpstore.sort();
var groupingFeature = Ext.create('Ext.grid.feature.Grouping',{
enableGroupingMenu: false,
groupHeaderTpl: '<tpl if="groupValue">Optional</tpl><tpl if="!groupValue">Obligatory</tpl>'
});
var returnhtml;
if (retinf.items) {
returnhtml = '<pre>items: ' + Ext.htmlEncode(JSON.stringify(retinf.items, null, 4)) + '</pre>';
}
if (retinf.properties) {
returnhtml = returnhtml || '';
returnhtml += '<pre>properties:' + Ext.htmlEncode(JSON.stringify(retinf.properties, null, 4)) + '</pre>';
}
var rawSection = Ext.create('Ext.panel.Panel', {
bodyPadding: '0px 10px 10px 10px',
html: returnhtml,
hidden: true
});
sections.push({
xtype: 'gridpanel',
title: 'Returns: ' + rtype,
features: [groupingFeature],
store: rpstore,
viewConfig: {
trackOver: false,
stripeRows: true
},
columns: [
{
header: 'Name',
dataIndex: 'name',
flex: 1
},
{
header: 'Type',
dataIndex: 'type',
renderer: render_type,
flex: 1
},
{
header: 'Default',
dataIndex: 'default',
flex: 1
},
{
header: 'Format',
dataIndex: 'type',
renderer: render_format,
flex: 2
},
{
header: 'Description',
dataIndex: 'description',
renderer: render_description,
flex: 6
}
],
bbar: [
{
xtype: 'button',
text: 'Show RAW',
handler: function(btn) {
rawSection.setVisible(!rawSection.isVisible());
btn.setText(rawSection.isVisible() ? 'Hide RAW' : 'Show RAW');
}}
]
});
sections.push(rawSection);
}
if (!data.path.match(/\/_upgrade_/)) {
var permhtml = '';
if (!info.permissions) {
permhtml = "Root only.";
} else {
if (info.permissions.description) {
permhtml += "<div style='white-space:pre-wrap;padding-bottom:10px;'>" +
Ext.htmlEncode(info.permissions.description) + "</div>";
}
permhtml += permission_text(info.permissions);
}
// we do not have this information for PBS api
//if (!info.allowtoken) {
// permhtml += "<br />This API endpoint is not available for API tokens."
//}
sections.push({
title: 'Required permissions',
bodyPadding: 10,
html: permhtml
});
}
items.push({
title: method,
autoScroll: true,
defaults: {
border: false
},
items: sections
});
}
});
var ct = Ext.getCmp('docview');
ct.setTitle("Path: " + real_path(data.path));
ct.removeAll(true);
ct.add(items);
ct.setActiveTab(0);
};
Ext.define('Ext.form.SearchField', {
extend: 'Ext.form.field.Text',
alias: 'widget.searchfield',
emptyText: 'Search...',
flex: 1,
inputType: 'search',
listeners: {
'change': function(){
var value = this.getValue();
if (!Ext.isEmpty(value)) {
store.filter({
property: 'path',
value: value,
anyMatch: true
});
} else {
store.clearFilter();
}
}
}
});
var tree = Ext.create('Ext.tree.Panel', {
title: 'Resource Tree',
tbar: [
{
xtype: 'searchfield',
}
],
tools: [
{
type: 'expand',
tooltip: 'Expand all',
tooltipType: 'title',
callback: (tree) => tree.expandAll(),
},
{
type: 'collapse',
tooltip: 'Collapse all',
tooltipType: 'title',
callback: (tree) => tree.collapseAll(),
},
],
store: store,
width: 200,
region: 'west',
split: true,
margins: '5 0 5 5',
rootVisible: false,
listeners: {
selectionchange: function(v, selections) {
if (!selections[0])
return;
var rec = selections[0];
render_docu(rec.data);
location.hash = '#' + rec.data.path;
}
}
});
Ext.create('Ext.container.Viewport', {
layout: 'border',
renderTo: Ext.getBody(),
items: [
tree,
{
xtype: 'tabpanel',
title: 'Documentation',
id: 'docview',
region: 'center',
margins: '5 5 5 0',
layout: 'fit',
items: []
}
]
});
var deepLink = function() {
var path = window.location.hash.substring(1).replace(/\/\s*$/, '')
var endpoint = store.findNode('path', path);
if (endpoint) {
tree.getSelectionModel().select(endpoint);
tree.expandPath(endpoint.getPath());
render_docu(endpoint.data);
}
}
window.onhashchange = deepLink;
deepLink();
});

View File

@ -49,15 +49,31 @@ Environment Variables
When set, this value is used for the password required for the backup server.
You can also set this to an API token secret.
``PBS_PASSWORD_FD``, ``PBS_PASSWORD_FILE``, ``PBS_PASSWORD_CMD``
Like ``PBS_PASSWORD``, but read data from an open file descriptor, a file
name or from the `stdout` of a command, respectively. The first defined
environment variable from the order above is preferred.
``PBS_ENCRYPTION_PASSWORD``
When set, this value is used to access the secret encryption key (if
protected by password).
``PBS_ENCRYPTION_PASSWORD_FD``, ``PBS_ENCRYPTION_PASSWORD_FILE``, ``PBS_ENCRYPTION_PASSWORD_CMD``
Like ``PBS_ENCRYPTION_PASSWORD``, but read data from an open file descriptor,
a file name or from the `stdout` of a command, respectively. The first
defined environment variable from the order above is preferred.
``PBS_FINGERPRINT``
When set, this value is used to verify the server
certificate (only used if the system CA certificates cannot validate the
certificate).
.. Note:: Passwords must be valid UTF-8 and may not contain
newlines. For your convenience, we just use the first line as
the password, so you can add arbitrary comments after the
first newline.
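The resolution order for these variants can be sketched in a few lines of
client-side code. This is only an illustration; the helper below is
hypothetical and not part of the actual client:

.. code-block:: rust

   use std::io::Read;
   use std::os::unix::io::{FromRawFd, RawFd};
   use std::process::Command;
   use std::{env, fs};

   /// Hypothetical helper: resolve a secret from the `_FD`, `_FILE` or
   /// `_CMD` variant of an environment variable (in that order), falling
   /// back to the plain variable. Only the first line counts as the secret.
   fn resolve_secret(base: &str) -> Option<String> {
       let raw = if let Ok(fd) = env::var(format!("{}_FD", base)) {
           let fd: RawFd = fd.parse().ok()?;
           let mut file = unsafe { fs::File::from_raw_fd(fd) };
           let mut buf = String::new();
           file.read_to_string(&mut buf).ok()?;
           buf
       } else if let Ok(path) = env::var(format!("{}_FILE", base)) {
           fs::read_to_string(path).ok()?
       } else if let Ok(cmd) = env::var(format!("{}_CMD", base)) {
           let output = Command::new("sh").arg("-c").arg(cmd).output().ok()?;
           String::from_utf8(output.stdout).ok()?
       } else {
           // assumption: the plain variable is only consulted when no variant is set
           env::var(base).ok()?
       };
       // comments after the first newline are ignored
       Some(raw.lines().next().unwrap_or("").to_string())
   }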
Output Format
-------------

View File

@ -21,3 +21,7 @@ Command Line Tools
.. include:: pxar/description.rst
``proxmox-backup-debug``
~~~~~~~~~~~~~~~~~~~~~~~~
.. include:: proxmox-backup-debug/description.rst

View File

@ -24,11 +24,13 @@ future plans to support 32-bit processors.
How long will my Proxmox Backup Server version be supported?
------------------------------------------------------------
+-----------------------+--------------------+---------------+------------+--------------------+
|Proxmox Backup Version | Debian Version | First Release | Debian EOL | Proxmox Backup EOL |
+=======================+====================+===============+============+====================+
|Proxmox Backup 1.x | Debian 10 (Buster) | 2020-11 | tba | tba |
+-----------------------+--------------------+---------------+------------+--------------------+
+-----------------------+----------------------+---------------+------------+--------------------+
|Proxmox Backup Version | Debian Version | First Release | Debian EOL | Proxmox Backup EOL |
+=======================+======================+===============+============+====================+
|Proxmox Backup 2.x | Debian 11 (Bullseye) | 2021-07 | tba | tba |
+-----------------------+----------------------+---------------+------------+--------------------+
|Proxmox Backup 1.x | Debian 10 (Buster) | 2020-11 | ~Q2/2022 | Q2-Q3/2022 |
+-----------------------+----------------------+---------------+------------+--------------------+
Can I copy or synchronize my datastore to another location?

View File

@ -51,7 +51,7 @@ data:
* - ``MAGIC: [u8; 8]``
* - ``CRC32: [u8; 4]``
* - ``ÌV: [u8; 16]``
* - ``IV: [u8; 16]``
* - ``TAG: [u8; 16]``
* - ``Data: (max 16MiB)``
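For illustration, the fixed-size part of this layout can be written down as a
plain Rust struct. This is a sketch mirroring the field order listed above,
not the actual implementation:

.. code-block:: rust

   /// Sketch of the header preceding the encrypted payload (illustrative).
   #[repr(C)]
   pub struct EncryptedBlobHeader {
       pub magic: [u8; 8], // file-type magic number
       pub crc32: [u8; 4], // CRC-32 checksum
       pub iv: [u8; 16],   // initialization vector of the AEAD cipher
       pub tag: [u8; 16],  // authentication tag of the AEAD cipher
       // ... followed by the encrypted data (max 16 MiB)
   }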

View File

@ -19,7 +19,7 @@ for various management tasks such as disk management.
`Proxmox Backup`_ without the server part.
The disk image (ISO file) provided by Proxmox includes a complete Debian system
("buster" for version 1.x) as well as all necessary packages for the `Proxmox Backup`_ server.
as well as all necessary packages for the `Proxmox Backup`_ server.
The installer will guide you through the setup process and allow
you to partition the local disk(s), apply basic system configurations

View File

@ -34,17 +34,7 @@
</style>
<link rel="stylesheet" type="text/css" href="font-awesome/css/font-awesome.css"/>
<script type="text/javascript" src="extjs/ext-all.js"></script>
<script type="text/javascript" src="code39.js"></script>
<script type="text/javascript" src="prefix-field.js"></script>
<script type="text/javascript" src="label-style.js"></script>
<script type="text/javascript" src="tape-type.js"></script>
<script type="text/javascript" src="paper-size.js"></script>
<script type="text/javascript" src="page-layout.js"></script>
<script type="text/javascript" src="page-calibration.js"></script>
<script type="text/javascript" src="label-list.js"></script>
<script type="text/javascript" src="label-setup.js"></script>
<script type="text/javascript" src="lto-barcode.js"></script>
<script type="text/javascript" src="lto-barcode-generator.js"></script>
</head>
<body>
</body>

View File

@ -1,7 +1,5 @@
// FIXME: HACK! Makes scrolling in number spinner work again. fixed in ExtJS >= 6.1
if (Ext.isFirefox) {
Ext.$eventNameMap.DOMMouseScroll = 'DOMMouseScroll';
}
// for toolkit.js
function gettext(val) { return val; };
function draw_labels(target_id, label_list, page_layout, calibration) {
let max_labels = compute_max_labels(page_layout);

View File

@ -17,15 +17,13 @@ update``.
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list``
deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib
deb http://ftp.debian.org/debian bullseye main contrib
deb http://ftp.debian.org/debian bullseye-updates main contrib
# security updates
deb http://security.debian.org/debian-security buster/updates main contrib
deb http://security.debian.org/debian-security bullseye-security main contrib
.. FIXME for 7.0: change security update suite to bullseye-security
In addition, you need a package repository from Proxmox to get Proxmox Backup
updates.
@ -45,31 +43,21 @@ key with the following commands:
.. code-block:: console
# wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
# wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
Verify the SHA512 checksum afterwards with:
Verify the SHA512 checksum afterwards with the expected output below:
.. code-block:: console
# sha512sum /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
# sha512sum /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
7fb03ec8a1675723d2853b84aa4fdb49a46a3bb72b9951361488bfd19b29aab0a789a4f8c7406e71a69aabbc727c936d3549731c4659ffa1a08f44db8fdcebfa /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
The output should be:
and the md5sum, with the expected output below:
.. code-block:: console
acca6f416917e8e11490a08a1e2842d500b3a5d9f322c6319db0927b2901c3eae23cfb5cd5df6facf2b57399d3cfa52ad7769ebdd75d9b204549ca147da52626 /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
and the md5sum:
.. code-block:: console
# md5sum /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
Here, the output should be:
.. code-block:: console
f3f6c5a3a67baf38ad178e5ff1ee270c /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
# md5sum /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
bcc35c7173e0845c0d6ad6470b70f50e /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
.. _sysadmin_package_repos_enterprise:
@ -84,7 +72,7 @@ enabled by default:
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list.d/pbs-enterprise.list``
deb https://enterprise.proxmox.com/debian/pbs buster pbs-enterprise
deb https://enterprise.proxmox.com/debian/pbs bullseye pbs-enterprise
To never miss important security fixes, the superuser (``root@pam`` user) is
@ -114,15 +102,15 @@ We recommend to configure this repository in ``/etc/apt/sources.list``.
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list``
deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib
deb http://ftp.debian.org/debian bullseye main contrib
deb http://ftp.debian.org/debian bullseye-updates main contrib
# PBS pbs-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pbs buster pbs-no-subscription
deb http://download.proxmox.com/debian/pbs bullseye pbs-no-subscription
# security updates
deb http://security.debian.org/debian-security buster/updates main contrib
deb http://security.debian.org/debian-security bullseye-security main contrib
`Proxmox Backup`_ Test Repository
@ -140,7 +128,7 @@ You can access this repository by adding the following line to
.. code-block:: sources.list
:caption: sources.list entry for ``pbstest``
deb http://download.proxmox.com/debian/pbs buster pbstest
deb http://download.proxmox.com/debian/pbs bullseye pbstest
.. _package_repositories_client_only:
@ -161,6 +149,26 @@ APT-based Proxmox Backup Client Repository
For modern Linux distributions using `apt` as their package manager, as all
Debian and Ubuntu derivatives do, you may be able to use the APT-based repository.
In order to configure this repository you need to first :ref:`setup the Proxmox
release key <package_repos_secure_apt>`. After that, add the repository URL to
the APT sources lists.
**Repositories for Debian 11 (Bullseye) based releases**
This repository is tested with:
- Debian Bullseye
Edit the file ``/etc/apt/sources.list.d/pbs-client.list`` and add the following
snippet:
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list.d/pbs-client.list``
deb http://download.proxmox.com/debian/pbs-client bullseye main
**Repositories for Debian 10 (Buster) based releases**
This repository is tested with:
- Debian Buster
@ -168,9 +176,6 @@ This repository is tested with:
It may work with older versions, and should work with more recently released ones.
In order to configure this repository you need to first :ref:`setup the Proxmox
release key <package_repos_secure_apt>`. After that, add the repository URL to
the APT sources lists.
Edit the file ``/etc/apt/sources.list.d/pbs-client.list`` and add the following
snippet:
@ -178,3 +183,19 @@ snipped
:caption: File: ``/etc/apt/sources.list.d/pbs-client.list``
deb http://download.proxmox.com/debian/pbs-client buster main
.. _node_options_http_proxy:
Repository Access Behind HTTP Proxy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some setups have restricted access to the internet, sometimes only through a
central proxy. You can set up an HTTP proxy through the Proxmox Backup Server's
web-interface in the `Configuration -> Authentication` tab.
Once configured, this proxy will be used for apt network requests and for
checking a Proxmox Backup Server support subscription.
Standard HTTP proxy configurations are accepted: `[http://]<host>[:port]`, where
the `<host>` part may include an authorization, for example:
`http://user:pass@proxy.example.org:12345`
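A rough sketch of how such a proxy string could be split into its parts (the
helper is hypothetical and, for brevity, does not handle bracketed IPv6 hosts):

.. code-block:: rust

   /// Hypothetical helper: split "[http://]<auth@><host>[:port]" into
   /// (authorization, host, port); assumes port 80 when none is given.
   fn split_proxy_spec(spec: &str) -> (Option<&str>, &str, u16) {
       let rest = spec.strip_prefix("http://").unwrap_or(spec);
       let (auth, host_port) = match rest.rsplit_once('@') {
           Some((auth, host_port)) => (Some(auth), host_port),
           None => (None, rest),
       };
       let (host, port) = match host_port.rsplit_once(':') {
           Some((host, port)) => (host, port.parse().unwrap_or(80)),
           None => (host_port, 80),
       };
       (auth, host, port)
   }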

View File

@ -0,0 +1,14 @@
Implements debugging functionality to inspect Proxmox Backup datastore
files and verify the integrity of chunks.
Also contains an 'api' subcommand for calling arbitrary API paths
(get/create/set/delete), as well as displaying their parameters (usage) and
their child links (ls).
By default, it connects to the proxmox-backup-proxy on localhost via https,
but by setting the environment variable `PROXMOX_DEBUG_API_CODE` to `1` the
tool directly calls the corresponding code.
.. WARNING:: Using `PROXMOX_DEBUG_API_CODE` can be dangerous and is only intended
for debugging purposes. It is not intended for use on a production system.

View File

@ -0,0 +1,33 @@
==========================
proxmox-backup-debug
==========================
.. include:: ../epilog.rst
-------------------------------------------------------------
Debugging command line tool for Backup and Restore
-------------------------------------------------------------
:Author: |AUTHOR|
:Version: Version |VERSION|
:Manual section: 1
Synopsis
==========
.. include:: synopsis.rst
Common Options
==============
.. include:: ../output-format.rst
Description
============
.. include:: description.rst
.. include:: ../pbs-copyright.rst

View File

@ -1,7 +1,5 @@
// FIXME: HACK! Makes scrolling in number spinner work again. fixed in ExtJS >= 6.1
if (Ext.isFirefox) {
Ext.$eventNameMap.DOMMouseScroll = 'DOMMouseScroll';
}
// for Toolkit.js
function gettext(val) { return val; };
Ext.onReady(function() {
const NOW = new Date();
@ -37,7 +35,6 @@ Ext.onReady(function() {
editable: true,
displayField: 'text',
valueField: 'value',
queryMode: 'local',

View File

@ -3,9 +3,6 @@
Tape Backup
===========
.. CAUTION:: Tape Backup is a technical preview feature, not meant for
production use.
.. image:: images/screenshots/pbs-gui-tape-changer-overview.png
:align: right
:alt: Tape Backup: Tape changer overview
@ -67,8 +64,10 @@ tape compression feature has no advantage.
Supported Hardware
------------------
Proxmox Backup Server supports `Linear Tape-Open`_ generation 4 (LTO-4)
or later.
Proxmox Backup Server supports `Linear Tape-Open`_ generation 5 (LTO-5)
or later and has best-effort support for generation 4 (LTO-4). While
many LTO-4 systems are known to work, some might need firmware updates or
do not implement necessary features to work with Proxmox Backup Server.
Tape changing is carried out using the SCSI Medium Changer protocol,
so all modern tape libraries should work.
@ -846,6 +845,17 @@ Update Inventory
Restore Catalog
~~~~~~~~~~~~~~~
To restore a catalog from an existing tape, just insert the tape into the drive
and execute:
.. code-block:: console
# proxmox-tape catalog
You can restore from a tape even without an existing catalog, but only the
whole media set. If you do this, the catalog will be automatically created.
Encryption Key Management
~~~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -164,3 +164,66 @@ Verification of encrypted chunks
For encrypted chunks, only the checksum of the original (plaintext) data is
available, making it impossible for the server (without the encryption key), to
verify its content against it. Instead only the CRC-32 checksum gets checked.
Troubleshooting
---------------
Index files (.fidx, .didx) contain information about how to rebuild a file. More
precisely, they contain an ordered list of references to the chunks that the
original file was split into. If there is something wrong with a snapshot, it
might be useful to find out which chunks are referenced in this specific
snapshot, and check whether all of them are present and intact. The command for
getting the list of referenced chunks could look something like this:
.. code-block:: console
# proxmox-backup-debug inspect file drive-scsi0.img.fidx
The same command can be used to look at a .blob file; without ``--decode``, just
the size and the encryption type, if any, are printed. If ``--decode`` is set,
the blob file is decoded into the specified file ('-' will decode it directly to
stdout).
.. code-block:: console
# proxmox-backup-debug inspect file qemu-server.conf.blob --decode -
would print the decoded contents of `qemu-server.conf.blob`. If the file you're
trying to inspect is encrypted, a path to the keyfile has to be provided using
``--keyfile``.
Checking in which index files a specific chunk file is referenced can be done
with:
.. code-block:: console
# proxmox-backup-debug inspect chunk b531d3ffc9bd7c65748a61198c060678326a431db7eded874c327b7986e595e0 --reference-filter /path/in/a/datastore/directory
Here ``--reference-filter`` specifies where index files should be searched; this
can be an arbitrary path. If, for some reason, the filename of the chunk was
changed, you can explicitly specify the digest using ``--digest``; by default,
the chunk filename is used as the digest to look for. Specifying no
``--reference-filter`` will just print the CRC and encryption status of the
chunk. You can also decode chunks; to do so, ``--decode`` has to be set. If the
chunk is encrypted, a ``--keyfile`` has to be provided for decoding.
Restore without a running PBS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is possible to restore specific files from snapshots without a running PBS
using the `recover` sub-command, provided you have access to the intact index
and chunk files. Note that you also need the corresponding key file if the
backup was encrypted.
.. code-block:: console
# proxmox-backup-debug recover index drive-scsi0.img.fidx /path/to/.chunks
In the above example, the `/path/to/.chunks` argument is the path to the
directory that contains the chunks, and `drive-scsi0.img.fidx` is the index file
of the file you'd like to restore. Both paths can be absolute or relative. With
``--skip-crc`` it is possible to disable the CRC checks of the chunks; this will
speed up the process slightly and allows trying to restore (partially) corrupt
chunks. It's recommended to always try without the skip-CRC option first.

View File

@ -360,7 +360,9 @@ WebAuthn
For WebAuthn to work, you need to have two things:
* a trusted HTTPS certificate (for example, by using `Let's Encrypt
<https://pbs.proxmox.com/wiki/index.php/HTTPS_Certificate_Configuration>`_)
<https://pbs.proxmox.com/wiki/index.php/HTTPS_Certificate_Configuration>`_).
While it probably works with an untrusted certificate, some browsers may warn
or refuse WebAuthn operations if it is not trusted.
* setup the WebAuthn configuration (see *Configuration -> Authentication* in the
Proxmox Backup Server web-interface). This can be auto-filled in most setups.

View File

@ -1 +1 @@
deb https://enterprise.proxmox.com/debian/pbs buster pbs-enterprise
deb https://enterprise.proxmox.com/debian/pbs bullseye pbs-enterprise

View File

@ -2,8 +2,8 @@ use std::io::Write;
use anyhow::{Error};
use proxmox_backup::api2::types::Authid;
use proxmox_backup::client::{HttpClient, HttpClientOptions, BackupReader};
use pbs_api_types::Authid;
use pbs_client::{HttpClient, HttpClientOptions, BackupReader};
pub struct DummyWriter {
bytes: usize,
@ -59,7 +59,7 @@ async fn run() -> Result<(), Error> {
}
fn main() {
if let Err(err) = proxmox_backup::tools::runtime::main(run()) {
if let Err(err) = pbs_runtime::main(run()) {
eprintln!("ERROR: {}", err);
}
println!("DONE");

View File

@ -69,7 +69,7 @@ fn send_request(
}
fn main() -> Result<(), Error> {
proxmox_backup::tools::runtime::main(run())
pbs_runtime::main(run())
}
async fn run() -> Result<(), Error> {

View File

@ -69,7 +69,7 @@ fn send_request(
}
fn main() -> Result<(), Error> {
proxmox_backup::tools::runtime::main(run())
pbs_runtime::main(run())
}
async fn run() -> Result<(), Error> {

View File

@ -6,10 +6,10 @@ use hyper::{Body, Request, Response};
use openssl::ssl::{SslAcceptor, SslFiletype, SslMethod};
use tokio::net::{TcpListener, TcpStream};
use proxmox_backup::configdir;
use pbs_buildcfg::configdir;
fn main() -> Result<(), Error> {
proxmox_backup::tools::runtime::main(run())
pbs_runtime::main(run())
}
async fn run() -> Result<(), Error> {

View File

@ -5,7 +5,7 @@ use hyper::{Body, Request, Response};
use tokio::net::{TcpListener, TcpStream};
fn main() -> Result<(), Error> {
proxmox_backup::tools::runtime::main(run())
pbs_runtime::main(run())
}
async fn run() -> Result<(), Error> {

View File

@ -5,7 +5,7 @@ extern crate proxmox_backup;
use anyhow::{Error};
use std::io::{Read, Write};
use proxmox_backup::backup::*;
use pbs_datastore::Chunker;
struct ChunkWriter {
chunker: Chunker,

View File

@ -1,7 +1,6 @@
extern crate proxmox_backup;
//use proxmox_backup::backup::chunker::*;
use proxmox_backup::backup::*;
use pbs_datastore::Chunker;
fn main() {

View File

@ -3,7 +3,7 @@ use futures::*;
extern crate proxmox_backup;
use proxmox_backup::backup::*;
use pbs_client::ChunkStream;
// Test Chunker with real data read from a file.
//
@ -13,7 +13,7 @@ use proxmox_backup::backup::*;
// Note: I can currently get about 830MB/s
fn main() {
if let Err(err) = proxmox_backup::tools::runtime::main(run()) {
if let Err(err) = pbs_runtime::main(run()) {
panic!("ERROR: {}", err);
}
}

View File

@ -1,7 +1,7 @@
use anyhow::{Error};
use proxmox_backup::api2::types::Authid;
use proxmox_backup::client::*;
use pbs_client::{HttpClient, HttpClientOptions, BackupWriter};
use pbs_api_types::Authid;
async fn upload_speed() -> Result<f64, Error> {
@ -27,7 +27,7 @@ async fn upload_speed() -> Result<f64, Error> {
}
fn main() {
match proxmox_backup::tools::runtime::main(upload_speed()) {
match pbs_runtime::main(upload_speed()) {
Ok(mbs) => {
println!("average upload speed: {} MB/s", mbs);
}

pbs-api-types/Cargo.toml (new file)

@ -0,0 +1,20 @@
[package]
name = "pbs-api-types"
version = "0.1.0"
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2018"
description = "general API type helpers for PBS"
[dependencies]
anyhow = "1.0"
lazy_static = "1.4"
libc = "0.2"
nix = "0.19.1"
openssl = "0.10"
regex = "1.2"
serde = { version = "1.0", features = ["derive"] }
proxmox = { version = "0.13.3", default-features = false, features = [ "api-macro" ] }
proxmox-systemd = { path = "../proxmox-systemd" }
pbs-tools = { path = "../pbs-tools" }

pbs-api-types/src/acl.rs (new file)

@ -0,0 +1,284 @@
use std::str::FromStr;
use serde::{Deserialize, Serialize};
use serde::de::{value, IntoDeserializer};
use proxmox::api::api;
use proxmox::api::schema::{
ApiStringFormat, BooleanSchema, EnumEntry, Schema, StringSchema,
};
use proxmox::{constnamedbitmap, const_regex};
const_regex! {
pub ACL_PATH_REGEX = concat!(r"^(?:/|", r"(?:/", PROXMOX_SAFE_ID_REGEX_STR!(), ")+", r")$");
}
// define Privilege bitfield
constnamedbitmap! {
/// Contains a list of privilege name to privilege value mappings.
///
/// The names are used when displaying/persisting privileges anywhere, the values are used to
/// allow easy matching of privileges as bitflags.
PRIVILEGES: u64 => {
/// Sys.Audit allows knowing about the system and its status
PRIV_SYS_AUDIT("Sys.Audit");
/// Sys.Modify allows modifying system-level configuration
PRIV_SYS_MODIFY("Sys.Modify");
/// Sys.PowerManagement allows powering off / rebooting the system
PRIV_SYS_POWER_MANAGEMENT("Sys.PowerManagement");
/// Datastore.Audit allows knowing about a datastore,
/// including reading the configuration entry and listing its contents
PRIV_DATASTORE_AUDIT("Datastore.Audit");
/// Datastore.Allocate allows creating or deleting datastores
PRIV_DATASTORE_ALLOCATE("Datastore.Allocate");
/// Datastore.Modify allows modifying a datastore and its contents
PRIV_DATASTORE_MODIFY("Datastore.Modify");
/// Datastore.Read allows reading arbitrary backup contents
PRIV_DATASTORE_READ("Datastore.Read");
/// Allows verifying a datastore
PRIV_DATASTORE_VERIFY("Datastore.Verify");
/// Datastore.Backup allows Datastore.Read|Verify and creating new snapshots,
/// but also requires backup ownership
PRIV_DATASTORE_BACKUP("Datastore.Backup");
/// Datastore.Prune allows deleting snapshots,
/// but also requires backup ownership
PRIV_DATASTORE_PRUNE("Datastore.Prune");
/// Permissions.Modify allows modifying ACLs
PRIV_PERMISSIONS_MODIFY("Permissions.Modify");
/// Remote.Audit allows reading remote.cfg and sync.cfg entries
PRIV_REMOTE_AUDIT("Remote.Audit");
/// Remote.Modify allows modifying remote.cfg
PRIV_REMOTE_MODIFY("Remote.Modify");
/// Remote.Read allows reading data from a configured `Remote`
PRIV_REMOTE_READ("Remote.Read");
/// Sys.Console allows access to the system's console
PRIV_SYS_CONSOLE("Sys.Console");
/// Tape.Audit allows reading tape backup configuration and status
PRIV_TAPE_AUDIT("Tape.Audit");
/// Tape.Modify allows modifying tape backup configuration
PRIV_TAPE_MODIFY("Tape.Modify");
/// Tape.Write allows writing tape media
PRIV_TAPE_WRITE("Tape.Write");
/// Tape.Read allows reading tape backup configuration and media contents
PRIV_TAPE_READ("Tape.Read");
/// Realm.Allocate allows viewing, creating, modifying and deleting realms
PRIV_REALM_ALLOCATE("Realm.Allocate");
}
}
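// Illustrative example (not part of the original file): since privileges are
// plain bit flags, a role is just a `u64` value and privilege checks reduce
// to bitwise tests.
#[cfg(test)]
mod privilege_bitflag_example {
    use super::*;

    #[test]
    fn roles_are_bitflag_combinations() {
        // a hypothetical custom role combining two privileges
        let role: u64 = PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_BACKUP;
        assert!(role & PRIV_DATASTORE_AUDIT != 0); // privilege granted
        assert_eq!(role & PRIV_PERMISSIONS_MODIFY, 0); // privilege not granted
    }
}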
/// Admin always has all privileges. It can do everything except a few actions
/// which are limited to the `root@pam` superuser.
pub const ROLE_ADMIN: u64 = std::u64::MAX;
/// NoAccess can be used to remove privileges from specific (sub-)paths
pub const ROLE_NO_ACCESS: u64 = 0;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Audit can view configuration and status information, but not modify it.
pub const ROLE_AUDIT: u64 = 0
| PRIV_SYS_AUDIT
| PRIV_DATASTORE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Admin can do anything on the datastore.
pub const ROLE_DATASTORE_ADMIN: u64 = 0
| PRIV_DATASTORE_AUDIT
| PRIV_DATASTORE_MODIFY
| PRIV_DATASTORE_READ
| PRIV_DATASTORE_VERIFY
| PRIV_DATASTORE_BACKUP
| PRIV_DATASTORE_PRUNE;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Reader can read/verify datastore content and do restore
pub const ROLE_DATASTORE_READER: u64 = 0
| PRIV_DATASTORE_AUDIT
| PRIV_DATASTORE_VERIFY
| PRIV_DATASTORE_READ;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Backup can do backup and restore, but no prune.
pub const ROLE_DATASTORE_BACKUP: u64 = 0
| PRIV_DATASTORE_BACKUP;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.PowerUser can do backup, restore, and prune.
pub const ROLE_DATASTORE_POWERUSER: u64 = 0
| PRIV_DATASTORE_PRUNE
| PRIV_DATASTORE_BACKUP;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Audit can audit the datastore.
pub const ROLE_DATASTORE_AUDIT: u64 = 0
| PRIV_DATASTORE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Remote.Audit can audit the remote
pub const ROLE_REMOTE_AUDIT: u64 = 0
| PRIV_REMOTE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Remote.Admin can do anything on the remote.
pub const ROLE_REMOTE_ADMIN: u64 = 0
| PRIV_REMOTE_AUDIT
| PRIV_REMOTE_MODIFY
| PRIV_REMOTE_READ;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Remote.SyncOperator can do read and prune on the remote.
pub const ROLE_REMOTE_SYNC_OPERATOR: u64 = 0
| PRIV_REMOTE_AUDIT
| PRIV_REMOTE_READ;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Audit can audit the tape backup configuration and media content
pub const ROLE_TAPE_AUDIT: u64 = 0
| PRIV_TAPE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Admin can do anything on the tape backup
pub const ROLE_TAPE_ADMIN: u64 = 0
| PRIV_TAPE_AUDIT
| PRIV_TAPE_MODIFY
| PRIV_TAPE_READ
| PRIV_TAPE_WRITE;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Operator can do tape backup and restore (but no configuration changes)
pub const ROLE_TAPE_OPERATOR: u64 = 0
| PRIV_TAPE_AUDIT
| PRIV_TAPE_READ
| PRIV_TAPE_WRITE;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Reader can do read and inspect tape content
pub const ROLE_TAPE_READER: u64 = 0
| PRIV_TAPE_AUDIT
| PRIV_TAPE_READ;
/// NoAccess can be used to remove privileges from specific (sub-)paths
pub const ROLE_NAME_NO_ACCESS: &str = "NoAccess";
#[api(
type_text: "<role>",
)]
#[repr(u64)]
#[derive(Serialize, Deserialize)]
/// Enum representing roles via their [PRIVILEGES] combination.
///
/// Since privileges are implemented as bitflags, each unique combination of privileges maps to a
/// single, unique `u64` value that is used in this enum definition.
pub enum Role {
/// Administrator
Admin = ROLE_ADMIN,
/// Auditor
Audit = ROLE_AUDIT,
/// Disable Access
NoAccess = ROLE_NO_ACCESS,
/// Datastore Administrator
DatastoreAdmin = ROLE_DATASTORE_ADMIN,
/// Datastore Reader (inspect datastore content and do restores)
DatastoreReader = ROLE_DATASTORE_READER,
/// Datastore Backup (backup and restore owned backups)
DatastoreBackup = ROLE_DATASTORE_BACKUP,
/// Datastore PowerUser (backup, restore and prune owned backups)
DatastorePowerUser = ROLE_DATASTORE_POWERUSER,
/// Datastore Auditor
DatastoreAudit = ROLE_DATASTORE_AUDIT,
/// Remote Auditor
RemoteAudit = ROLE_REMOTE_AUDIT,
/// Remote Administrator
RemoteAdmin = ROLE_REMOTE_ADMIN,
/// Synchronisation Operator
RemoteSyncOperator = ROLE_REMOTE_SYNC_OPERATOR,
/// Tape Auditor
TapeAudit = ROLE_TAPE_AUDIT,
/// Tape Administrator
TapeAdmin = ROLE_TAPE_ADMIN,
/// Tape Operator
TapeOperator = ROLE_TAPE_OPERATOR,
/// Tape Reader
TapeReader = ROLE_TAPE_READER,
}
impl FromStr for Role {
type Err = value::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Self::deserialize(s.into_deserializer())
}
}
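// Illustrative example (not part of the original file): role names parse back
// into their `Role` variant via the serde-based `FromStr` impl above. The
// assumption here is that the external names equal the variant identifiers,
// as produced by the plain `Deserialize` derive.
#[cfg(test)]
mod role_parse_example {
    use super::*;
    use std::str::FromStr;

    #[test]
    fn parse_role_from_its_name() {
        let role = Role::from_str("DatastoreBackup").expect("known role name");
        assert_eq!(role as u64, ROLE_DATASTORE_BACKUP);
    }
}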
pub const ACL_PATH_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&ACL_PATH_REGEX);
pub const ACL_PATH_SCHEMA: Schema = StringSchema::new(
"Access control path.")
.format(&ACL_PATH_FORMAT)
.min_length(1)
.max_length(128)
.schema();
pub const ACL_PROPAGATE_SCHEMA: Schema = BooleanSchema::new(
"Allow to propagate (inherit) permissions.")
.default(true)
.schema();
pub const ACL_UGID_TYPE_SCHEMA: Schema = StringSchema::new(
"Type of 'ugid' property.")
.format(&ApiStringFormat::Enum(&[
EnumEntry::new("user", "User"),
EnumEntry::new("group", "Group")]))
.schema();
#[api(
properties: {
propagate: {
schema: ACL_PROPAGATE_SCHEMA,
},
path: {
schema: ACL_PATH_SCHEMA,
},
ugid_type: {
schema: ACL_UGID_TYPE_SCHEMA,
},
ugid: {
type: String,
description: "User or Group ID.",
},
roleid: {
type: Role,
}
}
)]
#[derive(Serialize, Deserialize)]
/// ACL list entry.
pub struct AclListItem {
pub path: String,
pub ugid: String,
pub ugid_type: String,
pub propagate: bool,
pub roleid: String,
}
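// Illustrative example (not part of the original file): an ACL entry as it
// would appear in the ACL listing, here granting a (hypothetical) user the
// DatastoreBackup role on a datastore path.
#[cfg(test)]
mod acl_list_item_example {
    use super::*;

    #[test]
    fn build_acl_entry() {
        let entry = AclListItem {
            path: "/datastore/store1".to_string(),
            ugid: "backup-user@pbs".to_string(),
            ugid_type: "user".to_string(),
            propagate: true,
            roleid: "DatastoreBackup".to_string(),
        };
        assert!(entry.propagate);
        assert_eq!(entry.roleid, "DatastoreBackup");
    }
}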

View File

@ -0,0 +1,57 @@
use std::fmt::{self, Display};
use anyhow::Error;
use serde::{Deserialize, Serialize};
use proxmox::api::api;
use pbs_tools::format::{as_fingerprint, bytes_as_fingerprint};
#[api(default: "encrypt")]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
/// Defines whether data is encrypted (using an AEAD cipher), only signed, or neither.
pub enum CryptMode {
/// Don't encrypt.
None,
/// Encrypt.
Encrypt,
/// Only sign.
SignOnly,
}
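// Illustrative example (not part of the original file): due to the kebab-case
// rename, the on-the-wire names are "none", "encrypt" and "sign-only".
#[cfg(test)]
mod crypt_mode_name_example {
    use super::*;
    use serde::de::value::{Error as ValueError, StrDeserializer};
    use serde::de::IntoDeserializer;
    use serde::Deserialize;

    #[test]
    fn kebab_case_name_maps_to_variant() {
        let de: StrDeserializer<ValueError> = "sign-only".into_deserializer();
        let mode = CryptMode::deserialize(de).expect("valid crypt mode name");
        assert_eq!(mode, CryptMode::SignOnly);
    }
}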
#[derive(Debug, Eq, PartialEq, Hash, Clone, Deserialize, Serialize)]
#[serde(transparent)]
/// 32-byte fingerprint, usually calculated with SHA256.
pub struct Fingerprint {
#[serde(with = "bytes_as_fingerprint")]
bytes: [u8; 32],
}
impl Fingerprint {
pub fn new(bytes: [u8; 32]) -> Self {
Self { bytes }
}
pub fn bytes(&self) -> &[u8; 32] {
&self.bytes
}
}
/// Display as short key ID
impl Display for Fingerprint {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", as_fingerprint(&self.bytes[0..8]))
}
}
impl std::str::FromStr for Fingerprint {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Error> {
let mut tmp = s.to_string();
tmp.retain(|c| c != ':');
let bytes = proxmox::tools::hex_to_digest(&tmp)?;
Ok(Fingerprint::new(bytes))
}
}
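// Illustrative example (not part of the original file): fingerprints parse
// from the usual colon-separated hex form; `Display` then shows the short
// key ID derived from the first 8 bytes.
#[cfg(test)]
mod fingerprint_example {
    use super::*;

    #[test]
    fn parse_colon_separated_fingerprint() {
        let full = "00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:\
                    00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff";
        let fp: Fingerprint = full.parse().expect("valid fingerprint");
        assert_eq!(fp.bytes()[0], 0x00);
        assert_eq!(fp.bytes()[31], 0xff);
        println!("short key id: {}", fp); // e.g. "00:11:22:33:44:55:66:77"
    }
}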

View File

@ -0,0 +1,622 @@
use serde::{Deserialize, Serialize};
use proxmox::api::api;
use proxmox::api::schema::{
ApiStringFormat, ApiType, ArraySchema, EnumEntry, IntegerSchema, ReturnType, Schema,
StringSchema, Updater,
};
use proxmox::const_regex;
use crate::{
PROXMOX_SAFE_ID_FORMAT, SHA256_HEX_REGEX, SINGLE_LINE_COMMENT_SCHEMA, CryptMode, UPID,
Fingerprint, Userid, Authid,
GC_SCHEDULE_SCHEMA, DATASTORE_NOTIFY_STRING_SCHEMA, PRUNE_SCHEDULE_SCHEMA,
};
const_regex!{
pub BACKUP_TYPE_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), r")$");
pub BACKUP_ID_REGEX = concat!(r"^", BACKUP_ID_RE!(), r"$");
pub BACKUP_DATE_REGEX = concat!(r"^", BACKUP_TIME_RE!() ,r"$");
pub GROUP_PATH_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), r")$");
pub BACKUP_FILE_REGEX = r"^.*\.([fd]idx|blob)$";
pub SNAPSHOT_PATH_REGEX = concat!(r"^", SNAPSHOT_PATH_REGEX_STR!(), r"$");
pub DATASTORE_MAP_REGEX = concat!(r"(:?", PROXMOX_SAFE_ID_REGEX_STR!(), r"=)?", PROXMOX_SAFE_ID_REGEX_STR!());
}
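// Illustrative example (not part of this patch): what the group and snapshot
// path regexes accept; const_regex! exposes them as lazily compiled Regex
// statics, so is_match() can be used directly.
#[cfg(test)]
mod path_regex_example {
    use super::{GROUP_PATH_REGEX, SNAPSHOT_PATH_REGEX};

    #[test]
    fn path_formats() {
        assert!(GROUP_PATH_REGEX.is_match("vm/100"));
        assert!(SNAPSHOT_PATH_REGEX.is_match("vm/100/2021-09-22T07:41:47Z"));
        assert!(!SNAPSHOT_PATH_REGEX.is_match("vm/100")); // backup time missing
    }
}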
pub const CHUNK_DIGEST_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&SHA256_HEX_REGEX);
pub const DIR_NAME_SCHEMA: Schema = StringSchema::new("Directory name")
.min_length(1)
.max_length(4096)
.schema();
pub const BACKUP_ARCHIVE_NAME_SCHEMA: Schema = StringSchema::new("Backup archive name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.schema();
pub const BACKUP_ID_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&BACKUP_ID_REGEX);
pub const BACKUP_ID_SCHEMA: Schema = StringSchema::new("Backup ID.")
.format(&BACKUP_ID_FORMAT)
.schema();
pub const BACKUP_TYPE_SCHEMA: Schema = StringSchema::new("Backup type.")
.format(&ApiStringFormat::Enum(&[
EnumEntry::new("vm", "Virtual Machine Backup"),
EnumEntry::new("ct", "Container Backup"),
EnumEntry::new("host", "Host Backup"),
]))
.schema();
pub const BACKUP_TIME_SCHEMA: Schema = IntegerSchema::new("Backup time (Unix epoch.)")
.minimum(1_547_797_308)
.schema();
pub const DATASTORE_SCHEMA: Schema = StringSchema::new("Datastore name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const CHUNK_DIGEST_SCHEMA: Schema = StringSchema::new("Chunk digest (SHA256).")
.format(&CHUNK_DIGEST_FORMAT)
.schema();
pub const DATASTORE_MAP_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&DATASTORE_MAP_REGEX);
pub const DATASTORE_MAP_SCHEMA: Schema = StringSchema::new("Datastore mapping.")
.format(&DATASTORE_MAP_FORMAT)
.min_length(3)
.max_length(65)
.type_text("(<source>=)?<target>")
.schema();
pub const DATASTORE_MAP_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Datastore mapping list.", &DATASTORE_MAP_SCHEMA)
.schema();
pub const DATASTORE_MAP_LIST_SCHEMA: Schema = StringSchema::new(
"A list of Datastore mappings (or single datastore), comma separated. \
For example 'a=b,e' maps the source datastore 'a' to target 'b and \
all other sources to the default 'e'. If no default is given, only the \
specified sources are mapped.")
.format(&ApiStringFormat::PropertyString(&DATASTORE_MAP_ARRAY_SCHEMA))
.schema();
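// Illustrative example (not part of this patch): the two entry forms accepted
// by DATASTORE_MAP_REGEX, matching the 'a=b,e' example from the schema text.
#[cfg(test)]
mod datastore_map_example {
    use super::DATASTORE_MAP_REGEX;

    #[test]
    fn mapping_entries() {
        assert!(DATASTORE_MAP_REGEX.is_match("e"));   // bare default target
        assert!(DATASTORE_MAP_REGEX.is_match("a=b")); // map source 'a' to target 'b'
    }
}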
pub const PRUNE_SCHEMA_KEEP_DAILY: Schema = IntegerSchema::new("Number of daily backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_HOURLY: Schema =
IntegerSchema::new("Number of hourly backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_LAST: Schema = IntegerSchema::new("Number of backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_MONTHLY: Schema =
IntegerSchema::new("Number of monthly backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_WEEKLY: Schema =
IntegerSchema::new("Number of weekly backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_YEARLY: Schema =
IntegerSchema::new("Number of yearly backups to keep.")
.minimum(1)
.schema();
#[api(
properties: {
"keep-last": {
schema: PRUNE_SCHEMA_KEEP_LAST,
optional: true,
},
"keep-hourly": {
schema: PRUNE_SCHEMA_KEEP_HOURLY,
optional: true,
},
"keep-daily": {
schema: PRUNE_SCHEMA_KEEP_DAILY,
optional: true,
},
"keep-weekly": {
schema: PRUNE_SCHEMA_KEEP_WEEKLY,
optional: true,
},
"keep-monthly": {
schema: PRUNE_SCHEMA_KEEP_MONTHLY,
optional: true,
},
"keep-yearly": {
schema: PRUNE_SCHEMA_KEEP_YEARLY,
optional: true,
},
}
)]
#[derive(Serialize, Deserialize, Default)]
#[serde(rename_all = "kebab-case")]
/// Common pruning options
pub struct PruneOptions {
#[serde(skip_serializing_if="Option::is_none")]
pub keep_last: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_hourly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_daily: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_weekly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_monthly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_yearly: Option<u64>,
}
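// Illustrative example (not part of this patch): PruneOptions serializes with
// kebab-case keys and omits unset fields; a minimal sketch assuming serde_json
// is available as a (dev-)dependency.
#[cfg(test)]
mod prune_options_example {
    use super::PruneOptions;

    #[test]
    fn only_set_fields_are_emitted() {
        let opts = PruneOptions { keep_last: Some(3), keep_weekly: Some(2), ..Default::default() };
        let json = serde_json::to_value(&opts).unwrap();
        assert_eq!(json, serde_json::json!({ "keep-last": 3, "keep-weekly": 2 }));
    }
}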
#[api(
properties: {
name: {
schema: DATASTORE_SCHEMA,
},
path: {
schema: DIR_NAME_SCHEMA,
},
"notify-user": {
optional: true,
type: Userid,
},
"notify": {
optional: true,
schema: DATASTORE_NOTIFY_STRING_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
"gc-schedule": {
optional: true,
schema: GC_SCHEDULE_SCHEMA,
},
"prune-schedule": {
optional: true,
schema: PRUNE_SCHEDULE_SCHEMA,
},
"keep-last": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_LAST,
},
"keep-hourly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_HOURLY,
},
"keep-daily": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_DAILY,
},
"keep-weekly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_WEEKLY,
},
"keep-monthly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_MONTHLY,
},
"keep-yearly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_YEARLY,
},
"verify-new": {
description: "If enabled, all new backups will be verified right after completion.",
optional: true,
type: bool,
},
}
)]
#[derive(Serialize,Deserialize,Updater)]
#[serde(rename_all="kebab-case")]
/// Datastore configuration properties.
pub struct DataStoreConfig {
#[updater(skip)]
pub name: String,
#[updater(skip)]
pub path: String,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub gc_schedule: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub prune_schedule: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_last: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_hourly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_daily: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_weekly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_monthly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_yearly: Option<u64>,
/// If enabled, all backups will be verified right after completion.
#[serde(skip_serializing_if="Option::is_none")]
pub verify_new: Option<bool>,
/// Send job email notification to this user
#[serde(skip_serializing_if="Option::is_none")]
pub notify_user: Option<Userid>,
/// Send notification only for job errors
#[serde(skip_serializing_if="Option::is_none")]
pub notify: Option<String>,
}
#[api(
properties: {
store: {
schema: DATASTORE_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Basic information about a datastore.
pub struct DataStoreListItem {
pub store: String,
pub comment: Option<String>,
}
#[api(
properties: {
"filename": {
schema: BACKUP_ARCHIVE_NAME_SCHEMA,
},
"crypt-mode": {
type: CryptMode,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Basic information about archive files inside a backup snapshot.
pub struct BackupContent {
pub filename: String,
/// Info if file is encrypted, signed, or neither.
#[serde(skip_serializing_if = "Option::is_none")]
pub crypt_mode: Option<CryptMode>,
/// Archive size (from backup manifest).
#[serde(skip_serializing_if = "Option::is_none")]
pub size: Option<u64>,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Result of a verify operation.
pub enum VerifyState {
/// Verification was successful
Ok,
/// Verification reported one or more errors
Failed,
}
#[api(
properties: {
upid: {
type: UPID,
},
state: {
type: VerifyState,
},
},
)]
#[derive(Serialize, Deserialize)]
/// Task properties.
pub struct SnapshotVerifyState {
/// UPID of the verify task
pub upid: UPID,
/// State of the verification. Enum.
pub state: VerifyState,
}
#[api(
properties: {
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
},
"backup-time": {
schema: BACKUP_TIME_SCHEMA,
},
comment: {
schema: SINGLE_LINE_COMMENT_SCHEMA,
optional: true,
},
verification: {
type: SnapshotVerifyState,
optional: true,
},
fingerprint: {
type: String,
optional: true,
},
files: {
items: {
schema: BACKUP_ARCHIVE_NAME_SCHEMA
},
},
owner: {
type: Authid,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Basic information about backup snapshot.
pub struct SnapshotListItem {
pub backup_type: String, // enum
pub backup_id: String,
pub backup_time: i64,
/// The first line from manifest "notes"
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
/// The result of the last run verify task
#[serde(skip_serializing_if = "Option::is_none")]
pub verification: Option<SnapshotVerifyState>,
/// Fingerprint of encryption key
#[serde(skip_serializing_if = "Option::is_none")]
pub fingerprint: Option<Fingerprint>,
/// List of contained archive files.
pub files: Vec<BackupContent>,
/// Overall snapshot size (sum of all archive sizes).
#[serde(skip_serializing_if = "Option::is_none")]
pub size: Option<u64>,
/// The owner of the snapshots group
#[serde(skip_serializing_if = "Option::is_none")]
pub owner: Option<Authid>,
}
#[api(
properties: {
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
},
"last-backup": {
schema: BACKUP_TIME_SCHEMA,
},
"backup-count": {
type: Integer,
},
files: {
items: {
schema: BACKUP_ARCHIVE_NAME_SCHEMA
},
},
owner: {
type: Authid,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Basic information about a backup group.
pub struct GroupListItem {
pub backup_type: String, // enum
pub backup_id: String,
pub last_backup: i64,
/// Number of contained snapshots
pub backup_count: u64,
/// List of contained archive files.
pub files: Vec<String>,
/// The owner of group
#[serde(skip_serializing_if = "Option::is_none")]
pub owner: Option<Authid>,
/// The first line from group "notes"
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
}
#[api(
properties: {
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
},
"backup-time": {
schema: BACKUP_TIME_SCHEMA,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Prune result.
pub struct PruneListItem {
pub backup_type: String, // enum
pub backup_id: String,
pub backup_time: i64,
/// Keep snapshot
pub keep: bool,
}
#[api(
properties: {
ct: {
type: TypeCounts,
optional: true,
},
host: {
type: TypeCounts,
optional: true,
},
vm: {
type: TypeCounts,
optional: true,
},
other: {
type: TypeCounts,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize, Default)]
/// Counts of groups/snapshots per BackupType.
pub struct Counts {
/// The counts for CT backups
pub ct: Option<TypeCounts>,
/// The counts for Host backups
pub host: Option<TypeCounts>,
/// The counts for VM backups
pub vm: Option<TypeCounts>,
/// The counts for other backup types
pub other: Option<TypeCounts>,
}
#[api()]
#[derive(Serialize, Deserialize, Default)]
/// Backup Type group/snapshot counts.
pub struct TypeCounts {
/// The number of groups of the type.
pub groups: u64,
/// The number of snapshots of the type.
pub snapshots: u64,
}
#[api(
properties: {
"upid": {
optional: true,
type: UPID,
},
},
)]
#[derive(Clone, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Garbage collection status.
pub struct GarbageCollectionStatus {
pub upid: Option<String>,
/// Number of processed index files.
pub index_file_count: usize,
/// Sum of bytes referred by index files.
pub index_data_bytes: u64,
/// Bytes used on disk.
pub disk_bytes: u64,
/// Chunks used on disk.
pub disk_chunks: usize,
/// Sum of removed bytes.
pub removed_bytes: u64,
/// Number of removed chunks.
pub removed_chunks: usize,
/// Sum of pending bytes (pending removal - kept for safety).
pub pending_bytes: u64,
/// Number of pending chunks (pending removal - kept for safety).
pub pending_chunks: usize,
/// Number of chunks marked as .bad by verify that have been removed by GC.
pub removed_bad: usize,
/// Number of chunks still marked as .bad after garbage collection.
pub still_bad: usize,
}
impl Default for GarbageCollectionStatus {
fn default() -> Self {
GarbageCollectionStatus {
upid: None,
index_file_count: 0,
index_data_bytes: 0,
disk_bytes: 0,
disk_chunks: 0,
removed_bytes: 0,
removed_chunks: 0,
pending_bytes: 0,
pending_chunks: 0,
removed_bad: 0,
still_bad: 0,
}
}
}
#[api(
properties: {
"gc-status": {
type: GarbageCollectionStatus,
optional: true,
},
counts: {
type: Counts,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// Overall Datastore status and useful information.
pub struct DataStoreStatus {
/// Total space (bytes).
pub total: u64,
/// Used space (bytes).
pub used: u64,
/// Available space (bytes).
pub avail: u64,
/// Status of last GC
#[serde(skip_serializing_if="Option::is_none")]
pub gc_status: Option<GarbageCollectionStatus>,
/// Group/Snapshot counts
#[serde(skip_serializing_if="Option::is_none")]
pub counts: Option<Counts>,
}
pub const ADMIN_DATASTORE_LIST_SNAPSHOTS_RETURN_TYPE: ReturnType = ReturnType {
optional: false,
schema: &ArraySchema::new(
"Returns the list of snapshots.",
&SnapshotListItem::API_SCHEMA,
).schema(),
};
pub const ADMIN_DATASTORE_LIST_SNAPSHOT_FILES_RETURN_TYPE: ReturnType = ReturnType {
optional: false,
schema: &ArraySchema::new(
"Returns the list of archive files inside a backup snapshots.",
&BackupContent::API_SCHEMA,
).schema(),
};
pub const ADMIN_DATASTORE_LIST_GROUPS_RETURN_TYPE: ReturnType = ReturnType {
optional: false,
schema: &ArraySchema::new(
"Returns the list of backup groups.",
&GroupListItem::API_SCHEMA,
).schema(),
};
pub const ADMIN_DATASTORE_PRUNE_RETURN_TYPE: ReturnType = ReturnType {
optional: false,
schema: &ArraySchema::new(
"Returns the list of snapshots and a flag indicating if there are kept or removed.",
&PruneListItem::API_SCHEMA,
).schema(),
};


@ -1,7 +1,8 @@
use serde::{Deserialize, Serialize};
use proxmox::api::api;
#[api()]
#[api]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// General status information about a running VM file-restore daemon
@ -12,4 +13,3 @@ pub struct RestoreDaemonStatus {
/// not set, as then the status call will have reset the timer before returning the value
pub timeout: i64,
}

pbs-api-types/src/jobs.rs (new file, 392 lines)

@ -0,0 +1,392 @@
use serde::{Deserialize, Serialize};
use proxmox::const_regex;
use proxmox::api::{api, schema::*};
use crate::{
Userid, Authid, REMOTE_ID_SCHEMA, DRIVE_NAME_SCHEMA, MEDIA_POOL_NAME_SCHEMA,
SINGLE_LINE_COMMENT_SCHEMA, PROXMOX_SAFE_ID_FORMAT, DATASTORE_SCHEMA,
};
const_regex!{
/// Regex for verification jobs 'DATASTORE:ACTUAL_JOB_ID'
pub VERIFICATION_JOB_WORKER_ID_REGEX = concat!(r"^(", PROXMOX_SAFE_ID_REGEX_STR!(), r"):");
/// Regex for sync jobs 'REMOTE:REMOTE_DATASTORE:LOCAL_DATASTORE:ACTUAL_JOB_ID'
pub SYNC_JOB_WORKER_ID_REGEX = concat!(r"^(", PROXMOX_SAFE_ID_REGEX_STR!(), r"):(", PROXMOX_SAFE_ID_REGEX_STR!(), r"):(", PROXMOX_SAFE_ID_REGEX_STR!(), r"):");
}
pub const JOB_ID_SCHEMA: Schema = StringSchema::new("Job ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const SYNC_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run sync job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(proxmox_systemd::time::verify_calendar_event))
.type_text("<calendar-event>")
.schema();
pub const GC_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run garbage collection job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(proxmox_systemd::time::verify_calendar_event))
.type_text("<calendar-event>")
.schema();
pub const PRUNE_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run prune job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(proxmox_systemd::time::verify_calendar_event))
.type_text("<calendar-event>")
.schema();
pub const VERIFICATION_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run verify job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(proxmox_systemd::time::verify_calendar_event))
.type_text("<calendar-event>")
.schema();
pub const REMOVE_VANISHED_BACKUPS_SCHEMA: Schema = BooleanSchema::new(
"Delete vanished backups. This remove the local copy if the remote backup was deleted.")
.default(true)
.schema();
#[api(
properties: {
"next-run": {
description: "Estimated time of the next run (UNIX epoch).",
optional: true,
type: Integer,
},
"last-run-state": {
description: "Result of the last run.",
optional: true,
type: String,
},
"last-run-upid": {
description: "Task UPID of the last run.",
optional: true,
type: String,
},
"last-run-endtime": {
description: "Endtime of the last run.",
optional: true,
type: Integer,
},
}
)]
#[derive(Serialize,Deserialize,Default)]
#[serde(rename_all="kebab-case")]
/// Job Scheduling Status
pub struct JobScheduleStatus {
#[serde(skip_serializing_if="Option::is_none")]
pub next_run: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
pub last_run_state: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub last_run_upid: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub last_run_endtime: Option<i64>,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// When do we send notifications
pub enum Notify {
/// Never send notification
Never,
/// Send notifications for failed and successful jobs
Always,
/// Send notifications for failed jobs only
Error,
}
#[api(
properties: {
gc: {
type: Notify,
optional: true,
},
verify: {
type: Notify,
optional: true,
},
sync: {
type: Notify,
optional: true,
},
},
)]
#[derive(Debug, Serialize, Deserialize)]
/// Datastore notify settings
pub struct DatastoreNotify {
/// Garbage collection settings
pub gc: Option<Notify>,
/// Verify job setting
pub verify: Option<Notify>,
/// Sync job setting
pub sync: Option<Notify>,
}
pub const DATASTORE_NOTIFY_STRING_SCHEMA: Schema = StringSchema::new(
"Datastore notification setting")
.format(&ApiStringFormat::PropertyString(&DatastoreNotify::API_SCHEMA))
.schema();
pub const IGNORE_VERIFIED_BACKUPS_SCHEMA: Schema = BooleanSchema::new(
"Do not verify backups that are already verified if their verification is not outdated.")
.default(true)
.schema();
pub const VERIFICATION_OUTDATED_AFTER_SCHEMA: Schema = IntegerSchema::new(
"Days after that a verification becomes outdated")
.minimum(1)
.schema();
#[api(
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
store: {
schema: DATASTORE_SCHEMA,
},
"ignore-verified": {
optional: true,
schema: IGNORE_VERIFIED_BACKUPS_SCHEMA,
},
"outdated-after": {
optional: true,
schema: VERIFICATION_OUTDATED_AFTER_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
schedule: {
optional: true,
schema: VERIFICATION_SCHEDULE_SCHEMA,
},
}
)]
#[derive(Serialize,Deserialize,Updater)]
#[serde(rename_all="kebab-case")]
/// Verification Job
pub struct VerificationJobConfig {
/// unique ID to address this job
#[updater(skip)]
pub id: String,
/// the datastore ID this verification job affects
pub store: String,
#[serde(skip_serializing_if="Option::is_none")]
/// If not set to false, check the age of the last snapshot verification to filter
/// out recently verified ones, depending on the 'outdated_after' configuration.
pub ignore_verified: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
/// Reverify snapshots after X days, never if 0. Ignored if 'ignore_verified' is false.
pub outdated_after: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// when to schedule this job in calendar event notation
pub schedule: Option<String>,
}
#[api(
properties: {
config: {
type: VerificationJobConfig,
},
status: {
type: JobScheduleStatus,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all="kebab-case")]
/// Status of Verification Job
pub struct VerificationJobStatus {
#[serde(flatten)]
pub config: VerificationJobConfig,
#[serde(flatten)]
pub status: JobScheduleStatus,
}
#[api(
properties: {
store: {
schema: DATASTORE_SCHEMA,
},
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
drive: {
schema: DRIVE_NAME_SCHEMA,
},
"eject-media": {
description: "Eject media upon job completion.",
type: bool,
optional: true,
},
"export-media-set": {
description: "Export media set upon job completion.",
type: bool,
optional: true,
},
"latest-only": {
description: "Backup latest snapshots only.",
type: bool,
optional: true,
},
"notify-user": {
optional: true,
type: Userid,
},
}
)]
#[derive(Serialize,Deserialize,Clone,Updater)]
#[serde(rename_all="kebab-case")]
/// Tape Backup Job Setup
pub struct TapeBackupJobSetup {
pub store: String,
pub pool: String,
pub drive: String,
#[serde(skip_serializing_if="Option::is_none")]
pub eject_media: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub export_media_set: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub latest_only: Option<bool>,
/// Send job email notification to this user
#[serde(skip_serializing_if="Option::is_none")]
pub notify_user: Option<Userid>,
}
#[api(
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
setup: {
type: TapeBackupJobSetup,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
schedule: {
optional: true,
schema: SYNC_SCHEDULE_SCHEMA,
},
}
)]
#[derive(Serialize,Deserialize,Clone,Updater)]
#[serde(rename_all="kebab-case")]
/// Tape Backup Job
pub struct TapeBackupJobConfig {
#[updater(skip)]
pub id: String,
#[serde(flatten)]
pub setup: TapeBackupJobSetup,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub schedule: Option<String>,
}
#[api(
properties: {
config: {
type: TapeBackupJobConfig,
},
status: {
type: JobScheduleStatus,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all="kebab-case")]
/// Status of Tape Backup Job
pub struct TapeBackupJobStatus {
#[serde(flatten)]
pub config: TapeBackupJobConfig,
#[serde(flatten)]
pub status: JobScheduleStatus,
/// Next tape used (best guess)
#[serde(skip_serializing_if="Option::is_none")]
pub next_media_label: Option<String>,
}
#[api(
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
store: {
schema: DATASTORE_SCHEMA,
},
"owner": {
type: Authid,
optional: true,
},
remote: {
schema: REMOTE_ID_SCHEMA,
},
"remote-store": {
schema: DATASTORE_SCHEMA,
},
"remove-vanished": {
schema: REMOVE_VANISHED_BACKUPS_SCHEMA,
optional: true,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
schedule: {
optional: true,
schema: SYNC_SCHEDULE_SCHEMA,
},
}
)]
#[derive(Serialize,Deserialize,Clone,Updater)]
#[serde(rename_all="kebab-case")]
/// Sync Job
pub struct SyncJobConfig {
#[updater(skip)]
pub id: String,
pub store: String,
#[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<Authid>,
pub remote: String,
pub remote_store: String,
#[serde(skip_serializing_if="Option::is_none")]
pub remove_vanished: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub schedule: Option<String>,
}
#[api(
properties: {
config: {
type: SyncJobConfig,
},
status: {
type: JobScheduleStatus,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all="kebab-case")]
/// Status of Sync Job
pub struct SyncJobStatus {
#[serde(flatten)]
pub config: SyncJobConfig,
#[serde(flatten)]
pub status: JobScheduleStatus,
}


@ -0,0 +1,56 @@
use serde::{Deserialize, Serialize};
use proxmox::api::api;
use crate::CERT_FINGERPRINT_SHA256_SCHEMA;
#[api(default: "scrypt")]
#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
#[serde(rename_all = "lowercase")]
/// Key derivation function for password protected encryption keys.
pub enum Kdf {
/// Do not encrypt the key.
None,
/// Encrypt the key with a password using SCrypt.
Scrypt,
/// Encrypt the key with a password using PBKDF2.
PBKDF2,
}
impl Default for Kdf {
#[inline]
fn default() -> Self {
Kdf::Scrypt
}
}
#[api(
properties: {
kdf: {
type: Kdf,
},
fingerprint: {
schema: CERT_FINGERPRINT_SHA256_SCHEMA,
optional: true,
},
},
)]
#[derive(Deserialize, Serialize)]
/// Encryption Key Information
pub struct KeyInfo {
/// Path to key (if stored in a file)
#[serde(skip_serializing_if="Option::is_none")]
pub path: Option<String>,
pub kdf: Kdf,
/// Key creation time
pub created: i64,
/// Key modification time
pub modified: i64,
/// Key fingerprint
#[serde(skip_serializing_if="Option::is_none")]
pub fingerprint: Option<String>,
/// Password hint
#[serde(skip_serializing_if="Option::is_none")]
pub hint: Option<String>,
}

pbs-api-types/src/lib.rs (new file, 399 lines)

@ -0,0 +1,399 @@
//! Basic API types used by most of the PBS code.
use serde::{Deserialize, Serialize};
use anyhow::bail;
use proxmox::api::api;
use proxmox::api::schema::{ApiStringFormat, ArraySchema, Schema, StringSchema};
use proxmox::const_regex;
use proxmox::{IPRE, IPRE_BRACKET, IPV4OCTET, IPV4RE, IPV6H16, IPV6LS32, IPV6RE};
#[rustfmt::skip]
#[macro_export]
macro_rules! PROXMOX_SAFE_ID_REGEX_STR { () => { r"(?:[A-Za-z0-9_][A-Za-z0-9._\-]*)" }; }
#[rustfmt::skip]
#[macro_export]
macro_rules! BACKUP_ID_RE { () => (r"[A-Za-z0-9_][A-Za-z0-9._\-]*") }
#[rustfmt::skip]
#[macro_export]
macro_rules! BACKUP_TYPE_RE { () => (r"(?:host|vm|ct)") }
#[rustfmt::skip]
#[macro_export]
macro_rules! BACKUP_TIME_RE { () => (r"[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z") }
#[rustfmt::skip]
#[macro_export]
macro_rules! SNAPSHOT_PATH_REGEX_STR {
() => (
concat!(r"(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), ")/(", BACKUP_TIME_RE!(), r")")
);
}
mod acl;
pub use acl::*;
mod datastore;
pub use datastore::*;
mod jobs;
pub use jobs::*;
mod key_derivation;
pub use key_derivation::{Kdf, KeyInfo};
mod network;
pub use network::*;
#[macro_use]
mod userid;
pub use userid::Authid;
pub use userid::Userid;
pub use userid::{Realm, RealmRef};
pub use userid::{Tokenname, TokennameRef};
pub use userid::{Username, UsernameRef};
pub use userid::{PROXMOX_GROUP_ID_SCHEMA, PROXMOX_TOKEN_ID_SCHEMA, PROXMOX_TOKEN_NAME_SCHEMA};
#[macro_use]
mod user;
pub use user::*;
pub mod upid;
pub use upid::*;
mod crypto;
pub use crypto::{CryptMode, Fingerprint};
pub mod file_restore;
mod remote;
pub use remote::*;
mod tape;
pub use tape::*;
mod zfs;
pub use zfs::*;
#[rustfmt::skip]
#[macro_use]
mod local_macros {
macro_rules! DNS_LABEL { () => (r"(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)") }
macro_rules! DNS_NAME { () => (concat!(r"(?:(?:", DNS_LABEL!() , r"\.)*", DNS_LABEL!(), ")")) }
macro_rules! CIDR_V4_REGEX_STR { () => (concat!(r"(?:", IPV4RE!(), r"/\d{1,2})$")) }
macro_rules! CIDR_V6_REGEX_STR { () => (concat!(r"(?:", IPV6RE!(), r"/\d{1,3})$")) }
macro_rules! DNS_ALIAS_LABEL { () => (r"(?:[a-zA-Z0-9_](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)") }
macro_rules! DNS_ALIAS_NAME {
() => (concat!(r"(?:(?:", DNS_ALIAS_LABEL!() , r"\.)*", DNS_ALIAS_LABEL!(), ")"))
}
}
const_regex! {
pub IP_V4_REGEX = concat!(r"^", IPV4RE!(), r"$");
pub IP_V6_REGEX = concat!(r"^", IPV6RE!(), r"$");
pub IP_REGEX = concat!(r"^", IPRE!(), r"$");
pub CIDR_V4_REGEX = concat!(r"^", CIDR_V4_REGEX_STR!(), r"$");
pub CIDR_V6_REGEX = concat!(r"^", CIDR_V6_REGEX_STR!(), r"$");
pub CIDR_REGEX = concat!(r"^(?:", CIDR_V4_REGEX_STR!(), "|", CIDR_V6_REGEX_STR!(), r")$");
pub HOSTNAME_REGEX = r"^(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)$";
pub DNS_NAME_REGEX = concat!(r"^", DNS_NAME!(), r"$");
pub DNS_ALIAS_REGEX = concat!(r"^", DNS_ALIAS_NAME!(), r"$");
pub DNS_NAME_OR_IP_REGEX = concat!(r"^(?:", DNS_NAME!(), "|", IPRE!(), r")$");
pub SHA256_HEX_REGEX = r"^[a-f0-9]{64}$"; // fixme: define in common_regex ?
pub PASSWORD_REGEX = r"^[[:^cntrl:]]*$"; // everything but control characters
pub UUID_REGEX = r"^[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}$";
pub SYSTEMD_DATETIME_REGEX = r"^\d{4}-\d{2}-\d{2}( \d{2}:\d{2}(:\d{2})?)?$"; // fixme: define in common_regex ?
pub FINGERPRINT_SHA256_REGEX = r"^(?:[0-9a-fA-F][0-9a-fA-F])(?::[0-9a-fA-F][0-9a-fA-F]){31}$";
/// Regex for safe identifiers.
///
/// This
/// [article](https://dwheeler.com/essays/fixing-unix-linux-filenames.html)
/// contains further information on why it is reasonable to restrict
/// names this way. This is not only useful for filenames, but for
/// any identifier that command-line tools work with.
pub PROXMOX_SAFE_ID_REGEX = concat!(r"^", PROXMOX_SAFE_ID_REGEX_STR!(), r"$");
pub SINGLE_LINE_COMMENT_REGEX = r"^[[:^cntrl:]]*$";
pub BACKUP_REPO_URL_REGEX = concat!(
r"^^(?:(?:(",
USER_ID_REGEX_STR!(), "|", APITOKEN_ID_REGEX_STR!(),
")@)?(",
DNS_NAME!(), "|", IPRE_BRACKET!(),
"):)?(?:([0-9]{1,5}):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$"
);
pub BLOCKDEVICE_NAME_REGEX = r"^(:?(:?h|s|x?v)d[a-z]+)|(:?nvme\d+n\d+)$";
pub SUBSCRIPTION_KEY_REGEX = concat!(r"^pbs(?:[cbsp])-[0-9a-f]{10}$");
}
pub const IP_V4_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&IP_V4_REGEX);
pub const IP_V6_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&IP_V6_REGEX);
pub const IP_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&IP_REGEX);
pub const CIDR_V4_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&CIDR_V4_REGEX);
pub const CIDR_V6_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&CIDR_V6_REGEX);
pub const CIDR_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&CIDR_REGEX);
pub const PVE_CONFIG_DIGEST_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&SHA256_HEX_REGEX);
pub const PASSWORD_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&PASSWORD_REGEX);
pub const UUID_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&UUID_REGEX);
pub const BLOCKDEVICE_NAME_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&BLOCKDEVICE_NAME_REGEX);
pub const SUBSCRIPTION_KEY_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&SUBSCRIPTION_KEY_REGEX);
pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&SYSTEMD_DATETIME_REGEX);
pub const HOSTNAME_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&HOSTNAME_REGEX);
pub const DNS_ALIAS_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DNS_ALIAS_REGEX);
pub const SEARCH_DOMAIN_SCHEMA: Schema =
StringSchema::new("Search domain for host-name lookup.").schema();
pub const FIRST_DNS_SERVER_SCHEMA: Schema =
StringSchema::new("First name server IP address.")
.format(&IP_FORMAT)
.schema();
pub const SECOND_DNS_SERVER_SCHEMA: Schema =
StringSchema::new("Second name server IP address.")
.format(&IP_FORMAT)
.schema();
pub const THIRD_DNS_SERVER_SCHEMA: Schema =
StringSchema::new("Third name server IP address.")
.format(&IP_FORMAT)
.schema();
pub const HOSTNAME_SCHEMA: Schema = StringSchema::new("Hostname (as defined in RFC1123).")
.format(&HOSTNAME_FORMAT)
.schema();
pub const DNS_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DNS_NAME_REGEX);
pub const DNS_NAME_OR_IP_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DNS_NAME_OR_IP_REGEX);
pub const DNS_NAME_OR_IP_SCHEMA: Schema = StringSchema::new("DNS name or IP address.")
.format(&DNS_NAME_OR_IP_FORMAT)
.schema();
pub const NODE_SCHEMA: Schema = StringSchema::new("Node name (or 'localhost')")
.format(&ApiStringFormat::VerifyFn(|node| {
if node == "localhost" || node == proxmox::tools::nodename() {
Ok(())
} else {
bail!("no such node '{}'", node);
}
}))
.schema();
pub const TIME_ZONE_SCHEMA: Schema = StringSchema::new(
"Time zone. The file '/usr/share/zoneinfo/zone.tab' contains the list of valid names.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
pub const BLOCKDEVICE_NAME_SCHEMA: Schema = StringSchema::new("Block device name (/sys/block/<name>).")
.format(&BLOCKDEVICE_NAME_FORMAT)
.min_length(3)
.max_length(64)
.schema();
pub const DISK_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Disk name list.", &BLOCKDEVICE_NAME_SCHEMA)
.schema();
pub const DISK_LIST_SCHEMA: Schema = StringSchema::new(
"A list of disk names, comma separated.")
.format(&ApiStringFormat::PropertyString(&DISK_ARRAY_SCHEMA))
.schema();
pub const PASSWORD_SCHEMA: Schema = StringSchema::new("Password.")
.format(&PASSWORD_FORMAT)
.min_length(1)
.max_length(1024)
.schema();
pub const PBS_PASSWORD_SCHEMA: Schema = StringSchema::new("User Password.")
.format(&PASSWORD_FORMAT)
.min_length(5)
.max_length(64)
.schema();
pub const REALM_ID_SCHEMA: Schema = StringSchema::new("Realm name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(2)
.max_length(32)
.schema();
pub const FINGERPRINT_SHA256_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&FINGERPRINT_SHA256_REGEX);
pub const CERT_FINGERPRINT_SHA256_SCHEMA: Schema =
StringSchema::new("X509 certificate fingerprint (sha256).")
.format(&FINGERPRINT_SHA256_FORMAT)
.schema();
pub const PROXMOX_SAFE_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_SAFE_ID_REGEX);
pub const SINGLE_LINE_COMMENT_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SINGLE_LINE_COMMENT_REGEX);
pub const SINGLE_LINE_COMMENT_SCHEMA: Schema = StringSchema::new("Comment (single line).")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.schema();
pub const SUBSCRIPTION_KEY_SCHEMA: Schema = StringSchema::new("Proxmox Backup Server subscription key.")
.format(&SUBSCRIPTION_KEY_FORMAT)
.min_length(15)
.max_length(16)
.schema();
pub const SERVICE_ID_SCHEMA: Schema = StringSchema::new("Service ID.")
.max_length(256)
.schema();
pub const PROXMOX_CONFIG_DIGEST_SCHEMA: Schema = StringSchema::new(
"Prevent changes if current configuration file has different \
SHA256 digest. This can be used to prevent concurrent \
modifications.",
)
.format(&PVE_CONFIG_DIGEST_FORMAT)
.schema();
/// API schema format definition for repository URLs
pub const BACKUP_REPO_URL: ApiStringFormat = ApiStringFormat::Pattern(&BACKUP_REPO_URL_REGEX);
// Complex type definitions
#[api()]
#[derive(Default, Serialize, Deserialize)]
/// Storage space usage information.
pub struct StorageStatus {
/// Total space (bytes).
pub total: u64,
/// Used space (bytes).
pub used: u64,
/// Available space (bytes).
pub avail: u64,
}
pub const PASSWORD_HINT_SCHEMA: Schema = StringSchema::new("Password hint.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(1)
.max_length(64)
.schema();
#[api]
#[derive(Deserialize, Serialize)]
/// RSA public key information
pub struct RsaPubKeyInfo {
/// Path to key (if stored in a file)
#[serde(skip_serializing_if="Option::is_none")]
pub path: Option<String>,
/// RSA exponent
pub exponent: String,
/// Hex-encoded RSA modulus
pub modulus: String,
/// Key (modulus) length in bits
pub length: usize,
}
impl std::convert::TryFrom<openssl::rsa::Rsa<openssl::pkey::Public>> for RsaPubKeyInfo {
type Error = anyhow::Error;
fn try_from(value: openssl::rsa::Rsa<openssl::pkey::Public>) -> Result<Self, Self::Error> {
let modulus = value.n().to_hex_str()?.to_string();
let exponent = value.e().to_dec_str()?.to_string();
let length = value.size() as usize * 8;
Ok(Self {
path: None,
exponent,
modulus,
length,
})
}
}
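// Illustrative example (not part of this patch): deriving RsaPubKeyInfo from a
// freshly generated RSA key via the TryFrom impl above; key size and variable
// names are arbitrary.
#[cfg(test)]
mod rsa_pub_key_info_example {
    use std::convert::TryFrom;

    use super::RsaPubKeyInfo;

    #[test]
    fn from_generated_key() {
        let key = openssl::rsa::Rsa::generate(2048).unwrap();
        let public = openssl::rsa::Rsa::from_public_components(
            key.n().to_owned().unwrap(),
            key.e().to_owned().unwrap(),
        )
        .unwrap();
        let info = RsaPubKeyInfo::try_from(public).unwrap();
        assert_eq!(info.length, 2048); // modulus length in bits
    }
}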
#[api()]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "PascalCase")]
/// Describes a package for which an update is available.
pub struct APTUpdateInfo {
/// Package name
pub package: String,
/// Package title
pub title: String,
/// Package architecture
pub arch: String,
/// Human readable package description
pub description: String,
/// New version to be updated to
pub version: String,
/// Old version currently installed
pub old_version: String,
/// Package origin
pub origin: String,
/// Package priority in human-readable form
pub priority: String,
/// Package section
pub section: String,
/// URL under which the package's changelog can be retrieved
pub change_log_url: String,
/// Custom extra field for additional package information
#[serde(skip_serializing_if="Option::is_none")]
pub extra_info: Option<String>,
}
#[api()]
#[derive(Copy, Clone, Serialize, Deserialize)]
#[serde(rename_all = "UPPERCASE")]
pub enum RRDMode {
/// Maximum
Max,
/// Average
Average,
}
#[api()]
#[repr(u64)]
#[derive(Copy, Clone, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum RRDTimeFrameResolution {
/// 1 min => last 70 minutes
Hour = 60,
/// 30 min => last 35 hours
Day = 60*30,
/// 3 hours => about 8 days
Week = 60*180,
/// 12 hours => last 35 days
Month = 60*720,
/// 1 week => last 490 days
Year = 60*10080,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Node Power command type.
pub enum NodePowerCommand {
/// Restart the server
Reboot,
/// Shutdown the server
Shutdown,
}


@ -0,0 +1,308 @@
use serde::{Deserialize, Serialize};
use proxmox::api::{api, schema::*};
use crate::{
PROXMOX_SAFE_ID_REGEX,
IP_V4_FORMAT, IP_V6_FORMAT, IP_FORMAT,
CIDR_V4_FORMAT, CIDR_V6_FORMAT, CIDR_FORMAT,
};
pub const NETWORK_INTERFACE_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_SAFE_ID_REGEX);
pub const IP_V4_SCHEMA: Schema =
StringSchema::new("IPv4 address.")
.format(&IP_V4_FORMAT)
.max_length(15)
.schema();
pub const IP_V6_SCHEMA: Schema =
StringSchema::new("IPv6 address.")
.format(&IP_V6_FORMAT)
.max_length(39)
.schema();
pub const IP_SCHEMA: Schema =
StringSchema::new("IP (IPv4 or IPv6) address.")
.format(&IP_FORMAT)
.max_length(39)
.schema();
pub const CIDR_V4_SCHEMA: Schema =
StringSchema::new("IPv4 address with netmask (CIDR notation).")
.format(&CIDR_V4_FORMAT)
.max_length(18)
.schema();
pub const CIDR_V6_SCHEMA: Schema =
StringSchema::new("IPv6 address with netmask (CIDR notation).")
.format(&CIDR_V6_FORMAT)
.max_length(43)
.schema();
pub const CIDR_SCHEMA: Schema =
StringSchema::new("IP address (IPv4 or IPv6) with netmask (CIDR notation).")
.format(&CIDR_FORMAT)
.max_length(43)
.schema();
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Interface configuration method
pub enum NetworkConfigMethod {
/// Configuration is done manually using other tools
Manual,
/// Define interfaces with statically allocated addresses.
Static,
/// Obtain an address via DHCP
DHCP,
/// Define the loopback interface.
Loopback,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
#[allow(non_camel_case_types)]
#[repr(u8)]
/// Linux Bond Mode
pub enum LinuxBondMode {
/// Round-robin policy
balance_rr = 0,
/// Active-backup policy
active_backup = 1,
/// XOR policy
balance_xor = 2,
/// Broadcast policy
broadcast = 3,
/// IEEE 802.3ad Dynamic link aggregation
#[serde(rename = "802.3ad")]
ieee802_3ad = 4,
/// Adaptive transmit load balancing
balance_tlb = 5,
/// Adaptive load balancing
balance_alb = 6,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
#[allow(non_camel_case_types)]
#[repr(u8)]
/// Bond Transmit Hash Policy for LACP (802.3ad)
pub enum BondXmitHashPolicy {
/// Layer 2
layer2 = 0,
/// Layer 2+3
#[serde(rename = "layer2+3")]
layer2_3 = 1,
/// Layer 3+4
#[serde(rename = "layer3+4")]
layer3_4 = 2,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Network interface type
pub enum NetworkInterfaceType {
/// Loopback
Loopback,
/// Physical Ethernet device
Eth,
/// Linux Bridge
Bridge,
/// Linux Bond
Bond,
/// Linux VLAN (eth.10)
Vlan,
/// Interface Alias (eth:1)
Alias,
/// Unknown interface type
Unknown,
}
pub const NETWORK_INTERFACE_NAME_SCHEMA: Schema = StringSchema::new("Network interface name.")
.format(&NETWORK_INTERFACE_FORMAT)
.min_length(1)
.max_length(libc::IFNAMSIZ-1)
.schema();
pub const NETWORK_INTERFACE_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Network interface list.", &NETWORK_INTERFACE_NAME_SCHEMA)
.schema();
pub const NETWORK_INTERFACE_LIST_SCHEMA: Schema = StringSchema::new(
"A list of network devices, comma separated.")
.format(&ApiStringFormat::PropertyString(&NETWORK_INTERFACE_ARRAY_SCHEMA))
.schema();
#[api(
properties: {
name: {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
},
"type": {
type: NetworkInterfaceType,
},
method: {
type: NetworkConfigMethod,
optional: true,
},
method6: {
type: NetworkConfigMethod,
optional: true,
},
cidr: {
schema: CIDR_V4_SCHEMA,
optional: true,
},
cidr6: {
schema: CIDR_V6_SCHEMA,
optional: true,
},
gateway: {
schema: IP_V4_SCHEMA,
optional: true,
},
gateway6: {
schema: IP_V6_SCHEMA,
optional: true,
},
options: {
description: "Option list (inet)",
type: Array,
items: {
description: "Optional attribute line.",
type: String,
},
},
options6: {
description: "Option list (inet6)",
type: Array,
items: {
description: "Optional attribute line.",
type: String,
},
},
comments: {
description: "Comments (inet, may span multiple lines)",
type: String,
optional: true,
},
comments6: {
description: "Comments (inet6, may span multiple lines)",
type: String,
optional: true,
},
bridge_ports: {
schema: NETWORK_INTERFACE_ARRAY_SCHEMA,
optional: true,
},
slaves: {
schema: NETWORK_INTERFACE_ARRAY_SCHEMA,
optional: true,
},
bond_mode: {
type: LinuxBondMode,
optional: true,
},
"bond-primary": {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
optional: true,
},
bond_xmit_hash_policy: {
type: BondXmitHashPolicy,
optional: true,
},
}
)]
#[derive(Debug, Serialize, Deserialize)]
/// Network Interface configuration
pub struct Interface {
/// Autostart interface
#[serde(rename = "autostart")]
pub autostart: bool,
/// Interface is active (UP)
pub active: bool,
/// Interface name
pub name: String,
/// Interface type
#[serde(rename = "type")]
pub interface_type: NetworkInterfaceType,
#[serde(skip_serializing_if="Option::is_none")]
pub method: Option<NetworkConfigMethod>,
#[serde(skip_serializing_if="Option::is_none")]
pub method6: Option<NetworkConfigMethod>,
#[serde(skip_serializing_if="Option::is_none")]
/// IPv4 address with netmask
pub cidr: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// IPv4 gateway
pub gateway: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// IPv6 address with netmask
pub cidr6: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// IPv6 gateway
pub gateway6: Option<String>,
#[serde(skip_serializing_if="Vec::is_empty")]
pub options: Vec<String>,
#[serde(skip_serializing_if="Vec::is_empty")]
pub options6: Vec<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub comments: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub comments6: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// Maximum Transmission Unit
pub mtu: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub bridge_ports: Option<Vec<String>>,
/// Enable bridge vlan support.
#[serde(skip_serializing_if="Option::is_none")]
pub bridge_vlan_aware: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub slaves: Option<Vec<String>>,
#[serde(skip_serializing_if="Option::is_none")]
pub bond_mode: Option<LinuxBondMode>,
#[serde(skip_serializing_if="Option::is_none")]
#[serde(rename = "bond-primary")]
pub bond_primary: Option<String>,
pub bond_xmit_hash_policy: Option<BondXmitHashPolicy>,
}
impl Interface {
pub fn new(name: String) -> Self {
Self {
name,
interface_type: NetworkInterfaceType::Unknown,
autostart: false,
active: false,
method: None,
method6: None,
cidr: None,
gateway: None,
cidr6: None,
gateway6: None,
options: Vec::new(),
options6: Vec::new(),
comments: None,
comments6: None,
mtu: None,
bridge_ports: None,
bridge_vlan_aware: None,
slaves: None,
bond_mode: None,
bond_primary: None,
bond_xmit_hash_policy: None,
}
}
}
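// Illustrative example (not part of this patch): building an Interface by hand;
// names and addresses are made up, everything not set explicitly keeps the
// Interface::new() defaults.
#[cfg(test)]
mod interface_example {
    use super::{Interface, NetworkConfigMethod, NetworkInterfaceType};

    #[test]
    fn static_bridge() {
        let mut iface = Interface::new("vmbr0".to_string());
        iface.interface_type = NetworkInterfaceType::Bridge;
        iface.autostart = true;
        iface.method = Some(NetworkConfigMethod::Static);
        iface.cidr = Some("192.0.2.10/24".to_string());
        iface.gateway = Some("192.0.2.1".to_string());
        iface.bridge_ports = Some(vec!["eth0".to_string()]);
        assert_eq!(iface.name, "vmbr0");
    }
}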


@ -0,0 +1,86 @@
use serde::{Deserialize, Serialize};
use super::*;
use proxmox::api::{api, schema::*};
pub const REMOTE_PASSWORD_SCHEMA: Schema = StringSchema::new("Password or auth token for remote host.")
.format(&PASSWORD_FORMAT)
.min_length(1)
.max_length(1024)
.schema();
pub const REMOTE_PASSWORD_BASE64_SCHEMA: Schema = StringSchema::new("Password or auth token for remote host (stored as base64 string).")
.format(&PASSWORD_FORMAT)
.min_length(1)
.max_length(1024)
.schema();
pub const REMOTE_ID_SCHEMA: Schema = StringSchema::new("Remote ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
#[api(
properties: {
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
host: {
schema: DNS_NAME_OR_IP_SCHEMA,
},
port: {
optional: true,
description: "The (optional) port",
type: u16,
},
"auth-id": {
type: Authid,
},
fingerprint: {
optional: true,
schema: CERT_FINGERPRINT_SHA256_SCHEMA,
},
},
)]
#[derive(Serialize,Deserialize,Updater)]
#[serde(rename_all = "kebab-case")]
/// Remote configuration properties.
pub struct RemoteConfig {
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
pub host: String,
#[serde(skip_serializing_if="Option::is_none")]
pub port: Option<u16>,
pub auth_id: Authid,
#[serde(skip_serializing_if="Option::is_none")]
pub fingerprint: Option<String>,
}
#[api(
properties: {
name: {
schema: REMOTE_ID_SCHEMA,
},
config: {
type: RemoteConfig,
},
password: {
schema: REMOTE_PASSWORD_BASE64_SCHEMA,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Remote properties.
pub struct Remote {
pub name: String,
// Note: The stored password is base64 encoded
#[serde(skip_serializing_if="String::is_empty")]
#[serde(with = "proxmox::tools::serde::string_as_base64")]
pub password: String,
#[serde(flatten)]
pub config: RemoteConfig,
}


@ -10,10 +10,11 @@ use proxmox::api::{
ArraySchema,
IntegerSchema,
StringSchema,
Updater,
},
};
use crate::api2::types::{
use crate::{
PROXMOX_SAFE_ID_FORMAT,
OptionalDeviceIdentification,
};
@ -62,10 +63,11 @@ Import/Export, i.e. any media in those slots are considered to be
},
},
)]
#[derive(Serialize,Deserialize)]
#[derive(Serialize,Deserialize,Updater)]
#[serde(rename_all = "kebab-case")]
/// SCSI tape changer
pub struct ScsiTapeChanger {
#[updater(skip)]
pub name: String,
pub path: String,
#[serde(skip_serializing_if="Option::is_none")]


@ -6,10 +6,10 @@ use serde::{Deserialize, Serialize};
use proxmox::api::{
api,
schema::{Schema, IntegerSchema, StringSchema},
schema::{Schema, IntegerSchema, StringSchema, Updater},
};
use crate::api2::types::{
use crate::{
PROXMOX_SAFE_ID_FORMAT,
CHANGER_NAME_SCHEMA,
OptionalDeviceIdentification,
@ -28,7 +28,7 @@ pub const LTO_DRIVE_PATH_SCHEMA: Schema = StringSchema::new(
pub const CHANGER_DRIVENUM_SCHEMA: Schema = IntegerSchema::new(
"Associated changer drive number (requires option changer)")
.minimum(0)
.maximum(8)
.maximum(255)
.default(0)
.schema();
@ -69,10 +69,11 @@ pub struct VirtualTapeDrive {
},
}
)]
#[derive(Serialize,Deserialize)]
#[derive(Serialize,Deserialize,Updater)]
#[serde(rename_all = "kebab-case")]
/// Lto SCSI tape driver
pub struct LtoTapeDrive {
#[updater(skip)]
pub name: String,
pub path: String,
#[serde(skip_serializing_if="Option::is_none")]


@ -1,17 +1,46 @@
use ::serde::{Deserialize, Serialize};
use proxmox::{
api::api,
api::{api, schema::*},
tools::Uuid,
};
use crate::api2::types::{
MEDIA_UUID_SCHEMA,
MEDIA_SET_UUID_SCHEMA,
use crate::{
UUID_FORMAT,
MediaStatus,
MediaLocation,
};
pub const MEDIA_SET_UUID_SCHEMA: Schema =
StringSchema::new("MediaSet Uuid (We use the all-zero Uuid to reseve an empty media for a specific pool).")
.format(&UUID_FORMAT)
.schema();
pub const MEDIA_UUID_SCHEMA: Schema =
StringSchema::new("Media Uuid.")
.format(&UUID_FORMAT)
.schema();
#[api(
properties: {
"media-set-uuid": {
schema: MEDIA_SET_UUID_SCHEMA,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Media Set list entry
pub struct MediaSetListEntry {
/// Media set name
pub media_set_name: String,
pub media_set_uuid: Uuid,
/// MediaSet creation time stamp
pub media_set_ctime: i64,
/// Media Pool
pub pool: String,
}
#[api(
properties: {
location: {


@ -9,7 +9,7 @@ use proxmox::api::{
},
};
use crate::api2::types::{
use crate::{
PROXMOX_SAFE_ID_FORMAT,
CHANGER_NAME_SCHEMA,
};
@ -35,8 +35,8 @@ pub enum MediaLocation {
proxmox::forward_deserialize_to_from_str!(MediaLocation);
proxmox::forward_serialize_to_display!(MediaLocation);
impl MediaLocation {
pub const API_SCHEMA: Schema = StringSchema::new(
impl proxmox::api::schema::ApiType for MediaLocation {
const API_SCHEMA: Schema = StringSchema::new(
"Media location (e.g. 'offline', 'online-<changer_name>', 'vault-<vault_name>')")
.format(&ApiStringFormat::VerifyFn(|text| {
let location: MediaLocation = text.parse()?;


@ -4,28 +4,23 @@
//! so we cannot use them directly for the API. Instead, we represent
//! them as String.
use anyhow::Error;
use std::str::FromStr;
use anyhow::Error;
use serde::{Deserialize, Serialize};
use proxmox::api::{
api,
schema::{Schema, StringSchema, ApiStringFormat},
schema::{Schema, StringSchema, ApiStringFormat, Updater},
};
use proxmox_systemd::time::{parse_calendar_event, parse_time_span, CalendarEvent, TimeSpan};
use crate::{
tools::systemd::time::{
CalendarEvent,
TimeSpan,
parse_time_span,
parse_calendar_event,
},
api2::types::{
PROXMOX_SAFE_ID_FORMAT,
SINGLE_LINE_COMMENT_FORMAT,
SINGLE_LINE_COMMENT_SCHEMA,
TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA,
},
PROXMOX_SAFE_ID_FORMAT,
SINGLE_LINE_COMMENT_FORMAT,
SINGLE_LINE_COMMENT_SCHEMA,
TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA,
};
pub const MEDIA_POOL_NAME_SCHEMA: Schema = StringSchema::new("Media pool name.")
@ -138,10 +133,11 @@ impl std::str::FromStr for RetentionPolicy {
},
},
)]
#[derive(Serialize,Deserialize)]
#[derive(Serialize,Deserialize,Updater)]
/// Media pool configuration
pub struct MediaPoolConfig {
/// The pool name
#[updater(skip)]
pub name: String,
/// Media Set allocation policy
#[serde(skip_serializing_if="Option::is_none")]


@ -0,0 +1,94 @@
//! Types for tape backup API
mod device;
pub use device::*;
mod changer;
pub use changer::*;
mod drive;
pub use drive::*;
mod media_pool;
pub use media_pool::*;
mod media_status;
pub use media_status::*;
mod media_location;
pub use media_location::*;
mod media;
pub use media::*;
use ::serde::{Deserialize, Serialize};
use proxmox::api::api;
use proxmox::api::schema::{Schema, StringSchema, ApiStringFormat};
use proxmox::tools::Uuid;
use proxmox::const_regex;
use crate::{
FINGERPRINT_SHA256_FORMAT, BACKUP_ID_SCHEMA, BACKUP_TYPE_SCHEMA,
};
const_regex!{
pub TAPE_RESTORE_SNAPSHOT_REGEX = concat!(r"^", PROXMOX_SAFE_ID_REGEX_STR!(), r":", SNAPSHOT_PATH_REGEX_STR!(), r"$");
}
pub const TAPE_RESTORE_SNAPSHOT_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&TAPE_RESTORE_SNAPSHOT_REGEX);
pub const TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA: Schema = StringSchema::new(
"Tape encryption key fingerprint (sha256)."
)
.format(&FINGERPRINT_SHA256_FORMAT)
.schema();
pub const TAPE_RESTORE_SNAPSHOT_SCHEMA: Schema = StringSchema::new(
"A snapshot in the format: 'store:type/id/time")
.format(&TAPE_RESTORE_SNAPSHOT_FORMAT)
.type_text("store:type/id/time")
.schema();
#[api(
properties: {
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
optional: true,
},
"label-text": {
schema: MEDIA_LABEL_SCHEMA,
optional: true,
},
"media": {
schema: MEDIA_UUID_SCHEMA,
optional: true,
},
"media-set": {
schema: MEDIA_SET_UUID_SCHEMA,
optional: true,
},
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
optional: true,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
optional: true,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all="kebab-case")]
/// Content list filter parameters
pub struct MediaContentListFilter {
pub pool: Option<String>,
pub label_text: Option<String>,
pub media: Option<Uuid>,
pub media_set: Option<Uuid>,
pub backup_type: Option<String>,
pub backup_id: Option<String>,
}

pbs-api-types/src/upid.rs (new file, 203 lines)

@ -0,0 +1,203 @@
use std::sync::atomic::{AtomicUsize, Ordering};
use anyhow::{bail, Error};
use serde::{Deserialize, Serialize};
use proxmox::api::api;
use proxmox::api::schema::{ApiStringFormat, ApiType, Schema, StringSchema, ArraySchema, ReturnType};
use proxmox::const_regex;
use proxmox::sys::linux::procfs;
use crate::Authid;
/// Unique Process/Task Identifier
///
/// We use this to uniquely identify worker tasks. UPIDs have a short
/// string representation, which gives additional information about the
/// type of the task. For example:
/// ```text
/// UPID:{node}:{pid}:{pstart}:{task_id}:{starttime}:{worker_type}:{worker_id}:{userid}:
/// UPID:elsa:00004F37:0039E469:00000000:5CA78B83:garbage_collection::root@pam:
/// ```
/// Please note that we use tokio, so a single thread can run multiple
/// tasks.
// #[api] - manually implemented API type
#[derive(Debug, Clone)]
pub struct UPID {
/// The Unix PID
pub pid: libc::pid_t,
/// The Unix process start time from `/proc/pid/stat`
pub pstart: u64,
/// The task start time (Epoch)
pub starttime: i64,
/// The task ID (inside the process/thread)
pub task_id: usize,
/// Worker type (arbitrary ASCII string)
pub worker_type: String,
/// Worker ID (arbitrary ASCII string)
pub worker_id: Option<String>,
/// The authenticated entity who started the task
pub auth_id: Authid,
/// The node name.
pub node: String,
}
proxmox::forward_serialize_to_display!(UPID);
proxmox::forward_deserialize_to_from_str!(UPID);
const_regex! {
pub PROXMOX_UPID_REGEX = concat!(
r"^UPID:(?P<node>[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?):(?P<pid>[0-9A-Fa-f]{8}):",
r"(?P<pstart>[0-9A-Fa-f]{8,9}):(?P<task_id>[0-9A-Fa-f]{8,16}):(?P<starttime>[0-9A-Fa-f]{8}):",
r"(?P<wtype>[^:\s]+):(?P<wid>[^:\s]*):(?P<authid>[^:\s]+):$"
);
}
pub const PROXMOX_UPID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_UPID_REGEX);
pub const UPID_SCHEMA: Schema = StringSchema::new("Unique Process/Task Identifier")
.min_length("UPID:N:12345678:12345678:12345678:::".len())
.max_length(128) // arbitrary
.format(&PROXMOX_UPID_FORMAT)
.schema();
impl ApiType for UPID {
const API_SCHEMA: Schema = UPID_SCHEMA;
}
impl UPID {
/// Create a new UPID
pub fn new(
worker_type: &str,
worker_id: Option<String>,
auth_id: Authid,
) -> Result<Self, Error> {
let pid = unsafe { libc::getpid() };
let bad: &[_] = &['/', ':', ' '];
if worker_type.contains(bad) {
bail!("illegal characters in worker type '{}'", worker_type);
}
static WORKER_TASK_NEXT_ID: AtomicUsize = AtomicUsize::new(0);
let task_id = WORKER_TASK_NEXT_ID.fetch_add(1, Ordering::SeqCst);
Ok(UPID {
pid,
pstart: procfs::PidStat::read_from_pid(nix::unistd::Pid::from_raw(pid))?.starttime,
starttime: proxmox::tools::time::epoch_i64(),
task_id,
worker_type: worker_type.to_owned(),
worker_id,
auth_id,
node: proxmox::tools::nodename().to_owned(),
})
}
}
impl std::str::FromStr for UPID {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
if let Some(cap) = PROXMOX_UPID_REGEX.captures(s) {
let worker_id = if cap["wid"].is_empty() {
None
} else {
let wid = proxmox_systemd::unescape_unit(&cap["wid"])?;
Some(wid)
};
Ok(UPID {
pid: i32::from_str_radix(&cap["pid"], 16).unwrap(),
pstart: u64::from_str_radix(&cap["pstart"], 16).unwrap(),
starttime: i64::from_str_radix(&cap["starttime"], 16).unwrap(),
task_id: usize::from_str_radix(&cap["task_id"], 16).unwrap(),
worker_type: cap["wtype"].to_string(),
worker_id,
auth_id: cap["authid"].parse()?,
node: cap["node"].to_string(),
})
} else {
bail!("unable to parse UPID '{}'", s);
}
}
}
impl std::fmt::Display for UPID {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
let wid = if let Some(ref id) = self.worker_id {
proxmox_systemd::escape_unit(id, false)
} else {
String::new()
};
// Note: pstart can be > 32bit if uptime > 497 days, so this can result in
// more than 8 characters for pstart
write!(f, "UPID:{}:{:08X}:{:08X}:{:08X}:{:08X}:{}:{}:{}:",
self.node, self.pid, self.pstart, self.task_id, self.starttime, self.worker_type, wid, self.auth_id)
}
}
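// Illustrative example (not part of this patch): round trip through the UPID
// string representation; the sample string is the one from the doc comment
// above, not a real task.
#[cfg(test)]
mod upid_example {
    use super::UPID;

    #[test]
    fn parse_and_format() {
        let text = "UPID:elsa:00004F37:0039E469:00000000:5CA78B83:garbage_collection::root@pam:";
        let upid: UPID = text.parse().unwrap();
        assert_eq!(upid.node, "elsa");
        assert_eq!(upid.worker_type, "garbage_collection");
        assert!(upid.worker_id.is_none());
        assert_eq!(upid.to_string(), text);
    }
}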
#[api()]
#[derive(Eq, PartialEq, Debug, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum TaskStateType {
/// Ok
OK,
/// Warning
Warning,
/// Error
Error,
/// Unknown
Unknown,
}
#[api(
properties: {
upid: { schema: UPID::API_SCHEMA },
},
)]
#[derive(Serialize, Deserialize)]
/// Task properties.
pub struct TaskListItem {
pub upid: String,
/// The node name where the task is running on.
pub node: String,
/// The Unix PID
pub pid: i64,
/// The Unix process start time from `/proc/pid/stat`
pub pstart: u64,
/// The task start time (Epoch)
pub starttime: i64,
/// Worker type (arbitrary ASCII string)
pub worker_type: String,
/// Worker ID (arbitrary ASCII string)
pub worker_id: Option<String>,
/// The authenticated entity who started the task
pub user: Authid,
/// The task end time (Epoch)
#[serde(skip_serializing_if="Option::is_none")]
pub endtime: Option<i64>,
/// Task end status
#[serde(skip_serializing_if="Option::is_none")]
pub status: Option<String>,
}
pub const NODE_TASKS_LIST_TASKS_RETURN_TYPE: ReturnType = ReturnType {
optional: false,
schema: &ArraySchema::new(
"A list of tasks.",
&TaskListItem::API_SCHEMA,
).schema(),
};

pbs-api-types/src/user.rs (new file, 208 lines)

@ -0,0 +1,208 @@
use serde::{Deserialize, Serialize};
use proxmox::api::api;
use proxmox::api::schema::{
BooleanSchema, IntegerSchema, Schema, StringSchema, Updater,
};
use super::{SINGLE_LINE_COMMENT_FORMAT, SINGLE_LINE_COMMENT_SCHEMA};
use super::userid::{Authid, Userid, PROXMOX_TOKEN_ID_SCHEMA};
pub const ENABLE_USER_SCHEMA: Schema = BooleanSchema::new(
"Enable the account (default). You can set this to '0' to disable the account.")
.default(true)
.schema();
pub const EXPIRE_USER_SCHEMA: Schema = IntegerSchema::new(
"Account expiration date (seconds since epoch). '0' means no expiration date.")
.default(0)
.minimum(0)
.schema();
pub const FIRST_NAME_SCHEMA: Schema = StringSchema::new("First name.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
pub const LAST_NAME_SCHEMA: Schema = StringSchema::new("Last name.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
pub const EMAIL_SCHEMA: Schema = StringSchema::new("E-Mail Address.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
#[api(
properties: {
userid: {
type: Userid,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
optional: true,
schema: ENABLE_USER_SCHEMA,
},
expire: {
optional: true,
schema: EXPIRE_USER_SCHEMA,
},
firstname: {
optional: true,
schema: FIRST_NAME_SCHEMA,
},
lastname: {
schema: LAST_NAME_SCHEMA,
optional: true,
},
email: {
schema: EMAIL_SCHEMA,
optional: true,
},
tokens: {
type: Array,
optional: true,
description: "List of user's API tokens.",
items: {
type: ApiToken
},
},
}
)]
#[derive(Serialize,Deserialize)]
/// User properties with added list of ApiTokens
pub struct UserWithTokens {
pub userid: Userid,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub enable: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub expire: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
pub firstname: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub lastname: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub email: Option<String>,
#[serde(skip_serializing_if="Vec::is_empty", default)]
pub tokens: Vec<ApiToken>,
}
#[api(
properties: {
tokenid: {
schema: PROXMOX_TOKEN_ID_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
optional: true,
schema: ENABLE_USER_SCHEMA,
},
expire: {
optional: true,
schema: EXPIRE_USER_SCHEMA,
},
}
)]
#[derive(Serialize,Deserialize)]
/// ApiToken properties.
pub struct ApiToken {
pub tokenid: Authid,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub enable: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub expire: Option<i64>,
}
impl ApiToken {
pub fn is_active(&self) -> bool {
if !self.enable.unwrap_or(true) {
return false;
}
if let Some(expire) = self.expire {
let now = proxmox::tools::time::epoch_i64();
if expire > 0 && expire <= now {
return false;
}
}
true
}
}
#[api(
properties: {
userid: {
type: Userid,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
optional: true,
schema: ENABLE_USER_SCHEMA,
},
expire: {
optional: true,
schema: EXPIRE_USER_SCHEMA,
},
firstname: {
optional: true,
schema: FIRST_NAME_SCHEMA,
},
lastname: {
schema: LAST_NAME_SCHEMA,
optional: true,
},
email: {
schema: EMAIL_SCHEMA,
optional: true,
},
}
)]
#[derive(Serialize,Deserialize,Updater)]
/// User properties.
pub struct User {
#[updater(skip)]
pub userid: Userid,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub enable: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub expire: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
pub firstname: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub lastname: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub email: Option<String>,
}
impl User {
pub fn is_active(&self) -> bool {
if !self.enable.unwrap_or(true) {
return false;
}
if let Some(expire) = self.expire {
let now = proxmox::tools::time::epoch_i64();
if expire > 0 && expire <= now {
return false;
}
}
true
}
}
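A small, hypothetical illustration (not in the diff; the user id and timestamps are invented) of how the enable/expire semantics of is_active() combine:

#[test]
fn user_is_active_sketch() -> Result<(), anyhow::Error> {
    let mut user = User {
        userid: "alice@pbs".parse()?,
        comment: None, enable: None, expire: None,
        firstname: None, lastname: None, email: None,
    };
    assert!(user.is_active());    // enable defaults to true, no expiration set
    user.expire = Some(1);        // an epoch timestamp in the past
    assert!(!user.is_active());
    user.expire = Some(0);        // 0 means "no expiration date"
    assert!(user.is_active());
    user.enable = Some(false);    // an explicit disable always wins
    assert!(!user.is_active());
    Ok(())
}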

View File

@ -30,7 +30,7 @@ use lazy_static::lazy_static;
use serde::{Deserialize, Serialize};
use proxmox::api::api;
use proxmox::api::schema::{ApiStringFormat, Schema, StringSchema};
use proxmox::api::schema::{ApiStringFormat, ApiType, Schema, StringSchema, UpdaterType};
use proxmox::const_regex;
// we only allow a limited set of characters
@ -38,10 +38,15 @@ use proxmox::const_regex;
// colon separated lists)!
// slash is not allowed because it is used as pve API delimiter
// also see "man useradd"
#[macro_export]
macro_rules! USER_NAME_REGEX_STR { () => (r"(?:[^\s:/[:cntrl:]]+)") }
#[macro_export]
macro_rules! GROUP_NAME_REGEX_STR { () => (USER_NAME_REGEX_STR!()) }
#[macro_export]
macro_rules! TOKEN_NAME_REGEX_STR { () => (PROXMOX_SAFE_ID_REGEX_STR!()) }
#[macro_export]
macro_rules! USER_ID_REGEX_STR { () => (concat!(USER_NAME_REGEX_STR!(), r"@", PROXMOX_SAFE_ID_REGEX_STR!())) }
#[macro_export]
macro_rules! APITOKEN_ID_REGEX_STR { () => (concat!(USER_ID_REGEX_STR!() , r"!", TOKEN_NAME_REGEX_STR!())) }
const_regex! {
@ -93,7 +98,6 @@ pub const PROXMOX_AUTH_REALM_STRING_SCHEMA: StringSchema =
.max_length(32);
pub const PROXMOX_AUTH_REALM_SCHEMA: Schema = PROXMOX_AUTH_REALM_STRING_SCHEMA.schema();
#[api(
type: String,
format: &PROXMOX_USER_NAME_FORMAT,
@ -393,19 +397,21 @@ impl<'a> TryFrom<&'a str> for &'a TokennameRef {
}
/// A complete user id consisting of a user name and a realm
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
#[derive(Clone, Debug, PartialEq, Eq, Hash, UpdaterType)]
pub struct Userid {
data: String,
name_len: usize,
}
impl Userid {
pub const API_SCHEMA: Schema = StringSchema::new("User ID")
impl ApiType for Userid {
const API_SCHEMA: Schema = StringSchema::new("User ID")
.format(&PROXMOX_USER_ID_FORMAT)
.min_length(3)
.max_length(64)
.schema();
}
impl Userid {
const fn new(data: String, name_len: usize) -> Self {
Self { data, name_len }
}
@ -522,19 +528,21 @@ impl PartialEq<String> for Userid {
}
/// A complete authentication id consisting of a user id and an optional token name.
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
#[derive(Clone, Debug, Eq, PartialEq, Hash, UpdaterType)]
pub struct Authid {
user: Userid,
tokenname: Option<Tokenname>
}
impl Authid {
pub const API_SCHEMA: Schema = StringSchema::new("Authentication ID")
impl ApiType for Authid {
const API_SCHEMA: Schema = StringSchema::new("Authentication ID")
.format(&PROXMOX_AUTH_ID_FORMAT)
.min_length(3)
.max_length(64)
.schema();
}
impl Authid {
const fn new(user: Userid, tokenname: Option<Tokenname>) -> Self {
Self { user, tokenname }
}

pbs-api-types/src/zfs.rs Normal file
View File

@ -0,0 +1,81 @@
use serde::{Deserialize, Serialize};
use proxmox::api::{api, schema::*};
use proxmox::const_regex;
const_regex! {
pub ZPOOL_NAME_REGEX = r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$";
}
pub const ZFS_ASHIFT_SCHEMA: Schema = IntegerSchema::new(
"Pool sector size exponent.")
.minimum(9)
.maximum(16)
.default(12)
.schema();
pub const ZPOOL_NAME_SCHEMA: Schema = StringSchema::new("ZFS Pool Name")
.format(&ApiStringFormat::Pattern(&ZPOOL_NAME_REGEX))
.schema();
#[api(default: "On")]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// The ZFS compression algorithm to use.
pub enum ZfsCompressionType {
/// Gnu Zip
Gzip,
/// LZ4
Lz4,
/// LZJB
Lzjb,
/// ZLE
Zle,
/// ZStd
ZStd,
/// Enable compression using the default algorithm.
On,
/// Disable compression.
Off,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// The ZFS RAID level to use.
pub enum ZfsRaidLevel {
/// Single Disk
Single,
/// Mirror
Mirror,
/// Raid10
Raid10,
/// RaidZ
RaidZ,
/// RaidZ2
RaidZ2,
/// RaidZ3
RaidZ3,
}
#[api()]
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// zpool list item
pub struct ZpoolListItem {
/// zpool name
pub name: String,
/// Health
pub health: String,
/// Total size
pub size: u64,
/// Used size
pub alloc: u64,
/// Free space
pub free: u64,
/// ZFS fragmentation level
pub frag: u64,
/// ZFS deduplication ratio
pub dedup: f64,
}

pbs-buildcfg/Cargo.toml Normal file
View File

@ -0,0 +1,9 @@
[package]
name = "pbs-buildcfg"
version = "2.0.10"
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2018"
description = "macros used for pbs related paths such as configdir and rundir"
build = "build.rs"
[dependencies]

pbs-buildcfg/build.rs Normal file
View File

@ -0,0 +1,24 @@
// build.rs
use std::env;
use std::process::Command;
fn main() {
let repoid = match env::var("REPOID") {
Ok(repoid) => repoid,
Err(_) => {
match Command::new("git")
.args(&["rev-parse", "HEAD"])
.output()
{
Ok(output) => {
String::from_utf8(output.stdout).unwrap()
}
Err(err) => {
panic!("git rev-parse failed: {}", err);
}
}
}
};
println!("cargo:rustc-env=REPOID={}", repoid);
}

View File

@ -1,9 +1,24 @@
//! Exports configuration data from the build system
pub const PROXMOX_PKG_VERSION: &str =
concat!(
env!("CARGO_PKG_VERSION_MAJOR"),
".",
env!("CARGO_PKG_VERSION_MINOR"),
);
pub const PROXMOX_PKG_RELEASE: &str = env!("CARGO_PKG_VERSION_PATCH");
pub const PROXMOX_PKG_REPOID: &str = env!("REPOID");
/// The configured configuration directory
pub const CONFIGDIR: &str = "/etc/proxmox-backup";
pub const JS_DIR: &str = "/usr/share/javascript/proxmox-backup";
/// Unix system user used by proxmox-backup-proxy
pub const BACKUP_USER_NAME: &str = "backup";
/// Unix system group used by proxmox-backup-proxy
pub const BACKUP_GROUP_NAME: &str = "backup";
#[macro_export]
macro_rules! PROXMOX_BACKUP_RUN_DIR_M { () => ("/run/proxmox-backup") }
@ -43,6 +58,10 @@ pub const PROXMOX_BACKUP_API_PID_FN: &str = concat!(PROXMOX_BACKUP_RUN_DIR_M!(),
pub const PROXMOX_BACKUP_INITRAMFS_FN: &str =
concat!(PROXMOX_BACKUP_CACHE_DIR_M!(), "/file-restore-initramfs.img");
/// filename of the cached initramfs to use for debugging single file restore
pub const PROXMOX_BACKUP_INITRAMFS_DBG_FN: &str =
concat!(PROXMOX_BACKUP_CACHE_DIR_M!(), "/file-restore-initramfs-debug.img");
/// filename of the kernel to use for booting single file restore VMs
pub const PROXMOX_BACKUP_KERNEL_FN: &str =
concat!(PROXMOX_BACKUP_FILE_RESTORE_BIN_DIR_M!(), "/bzImage");
@ -52,7 +71,7 @@ pub const PROXMOX_BACKUP_KERNEL_FN: &str =
/// This is a simple way to get the full path for configuration files.
/// #### Example:
/// ```
/// # #[macro_use] extern crate proxmox_backup;
/// use pbs_buildcfg::configdir;
/// let cert_path = configdir!("/proxy.pfx");
/// ```
#[macro_export]
@ -66,6 +85,6 @@ macro_rules! configdir {
#[macro_export]
macro_rules! rundir {
($subdir:expr) => {
concat!(PROXMOX_BACKUP_RUN_DIR_M!(), $subdir)
concat!($crate::PROXMOX_BACKUP_RUN_DIR_M!(), $subdir)
};
}
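A short note on the $crate:: change above (a sketch, not from the diff): it lets downstream crates expand rundir! without importing PROXMOX_BACKUP_RUN_DIR_M themselves, e.g.:

// in a crate that depends on pbs-buildcfg:
let task_dir: &str = pbs_buildcfg::rundir!("/tasks"); // expands to "/run/proxmox-backup/tasks"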

pbs-client/Cargo.toml Normal file
View File

@ -0,0 +1,40 @@
[package]
name = "pbs-client"
version = "0.1.0"
authors = ["Wolfgang Bumiller <w.bumiller@proxmox.com>"]
edition = "2018"
description = "The main proxmox backup client crate"
[dependencies]
anyhow = "1.0"
bitflags = "1.2.1"
bytes = "1.0"
futures = "0.3"
h2 = { version = "0.3", features = [ "stream" ] }
http = "0.2"
hyper = { version = "0.14", features = [ "full" ] }
lazy_static = "1.4"
libc = "0.2"
nix = "0.19.1"
openssl = "0.10"
percent-encoding = "2.1"
pin-project-lite = "0.2"
regex = "1.2"
rustyline = "7"
serde_json = "1.0"
tokio = { version = "1.6", features = [ "fs", "signal" ] }
tokio-stream = "0.1.0"
tower-service = "0.3.0"
xdg = "2.2"
pathpatterns = "0.1.2"
proxmox = { version = "0.13.3", default-features = false, features = [ "cli" ] }
proxmox-fuse = "0.1.1"
proxmox-http = { version = "0.4.0", features = [ "client", "http-helpers", "websocket" ] }
pxar = { version = "0.10.1", features = [ "tokio-io" ] }
pbs-api-types = { path = "../pbs-api-types" }
pbs-buildcfg = { path = "../pbs-buildcfg" }
pbs-datastore = { path = "../pbs-datastore" }
pbs-runtime = { path = "../pbs-runtime" }
pbs-tools = { path = "../pbs-tools" }

View File

@ -9,10 +9,15 @@ use serde_json::{json, Value};
use proxmox::tools::digest_to_hex;
use crate::{
tools::compute_file_csum,
backup::*,
};
use pbs_tools::crypt_config::CryptConfig;
use pbs_tools::sha::sha256;
use pbs_datastore::{PROXMOX_BACKUP_READER_PROTOCOL_ID_V1, BackupManifest};
use pbs_datastore::data_blob::DataBlob;
use pbs_datastore::data_blob_reader::DataBlobReader;
use pbs_datastore::dynamic_index::DynamicIndexReader;
use pbs_datastore::fixed_index::FixedIndexReader;
use pbs_datastore::index::IndexFile;
use pbs_datastore::manifest::MANIFEST_BLOB_NAME;
use super::{HttpClient, H2Client};
@ -148,7 +153,7 @@ impl BackupReader {
&self,
manifest: &BackupManifest,
name: &str,
) -> Result<DataBlobReader<File>, Error> {
) -> Result<DataBlobReader<'_, File>, Error> {
let mut tmpfile = std::fs::OpenOptions::new()
.write(true)
@ -158,7 +163,8 @@ impl BackupReader {
self.download(name, &mut tmpfile).await?;
let (csum, size) = compute_file_csum(&mut tmpfile)?;
tmpfile.seek(SeekFrom::Start(0))?;
let (csum, size) = sha256(&mut tmpfile)?;
manifest.verify_file(name, &csum, size)?;
tmpfile.seek(SeekFrom::Start(0))?;

View File

@ -3,12 +3,7 @@ use std::fmt;
use anyhow::{format_err, Error};
use proxmox::api::schema::*;
use crate::api2::types::*;
/// API schema format definition for repository URLs
pub const BACKUP_REPO_URL: ApiStringFormat = ApiStringFormat::Pattern(&BACKUP_REPO_URL_REGEX);
use pbs_api_types::{BACKUP_REPO_URL_REGEX, IP_V6_REGEX, Authid, Userid};
/// Reference remote backup locations
///

View File

@ -1,12 +1,12 @@
use std::collections::HashSet;
use std::future::Future;
use std::os::unix::fs::OpenOptionsExt;
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use anyhow::{bail, format_err, Error};
use futures::future::AbortHandle;
use futures::stream::Stream;
use futures::*;
use futures::future::{self, AbortHandle, Either, FutureExt, TryFutureExt};
use futures::stream::{Stream, StreamExt, TryStreamExt};
use serde_json::{json, Value};
use tokio::io::AsyncReadExt;
use tokio::sync::{mpsc, oneshot};
@ -14,9 +14,16 @@ use tokio_stream::wrappers::ReceiverStream;
use proxmox::tools::digest_to_hex;
use pbs_tools::crypt_config::CryptConfig;
use pbs_tools::format::HumanByte;
use pbs_datastore::{CATALOG_NAME, PROXMOX_BACKUP_PROTOCOL_ID_V1};
use pbs_datastore::data_blob::{ChunkInfo, DataBlob, DataChunkBuilder};
use pbs_datastore::dynamic_index::DynamicIndexReader;
use pbs_datastore::fixed_index::FixedIndexReader;
use pbs_datastore::index::IndexFile;
use pbs_datastore::manifest::{ArchiveType, BackupManifest, MANIFEST_BLOB_NAME};
use super::merge_known_chunks::{MergeKnownChunks, MergedChunkInfo};
use crate::backup::*;
use crate::tools::format::HumanByte;
use super::{H2Client, HttpClient};
@ -282,7 +289,7 @@ impl BackupWriter {
if let Some(manifest) = options.previous_manifest {
// try, but ignore errors
match archive_type(archive_name) {
match ArchiveType::from_path(archive_name) {
Ok(ArchiveType::FixedIndex) => {
let _ = self
.download_previous_fixed_index(
@ -333,7 +340,7 @@ impl BackupWriter {
let archive = if self.verbose {
archive_name.to_string()
} else {
crate::tools::format::strip_server_file_extension(archive_name)
pbs_tools::format::strip_server_file_extension(archive_name)
};
if archive_name != CATALOG_NAME {
let speed: HumanByte =
@ -452,7 +459,7 @@ impl BackupWriter {
.and_then(move |(merged_chunk_info, response): (MergedChunkInfo, Option<h2::client::ResponseFuture>)| {
match (response, merged_chunk_info) {
(Some(response), MergedChunkInfo::Known(list)) => {
future::Either::Left(
Either::Left(
response
.map_err(Error::from)
.and_then(H2Client::h2api_response)
@ -462,7 +469,7 @@ impl BackupWriter {
)
}
(None, MergedChunkInfo::Known(list)) => {
future::Either::Right(future::ok(MergedChunkInfo::Known(list)))
Either::Right(future::ok(MergedChunkInfo::Known(list)))
}
_ => unreachable!(),
}
@ -735,7 +742,7 @@ impl BackupWriter {
let new_info = MergedChunkInfo::Known(vec![(offset, digest)]);
future::Either::Left(h2.send_request(request, upload_data).and_then(
Either::Left(h2.send_request(request, upload_data).and_then(
move |response| async move {
upload_queue
.send((new_info, Some(response)))
@ -746,7 +753,7 @@ impl BackupWriter {
},
))
} else {
future::Either::Right(async move {
Either::Right(async move {
upload_queue
.send((merged_chunk_info, None))
.await

View File

@ -18,12 +18,14 @@ use proxmox::api::cli::{self, CliCommand, CliCommandMap, CliHelper, CommandLineI
use proxmox::tools::fs::{create_path, CreateOptions};
use pxar::{EntryKind, Metadata};
use crate::backup::catalog::{self, DirEntryAttribute};
use pbs_runtime::block_in_place;
use pbs_datastore::catalog::{self, DirEntryAttribute};
use pbs_tools::ops::ControlFlow;
use crate::pxar::Flags;
use crate::pxar::fuse::{Accessor, FileEntry};
use crate::tools::runtime::block_in_place;
type CatalogReader = crate::backup::CatalogReader<std::fs::File>;
type CatalogReader = pbs_datastore::catalog::CatalogReader<std::fs::File>;
const MAX_SYMLINK_COUNT: usize = 40;
@ -77,13 +79,13 @@ pub fn catalog_shell_cli() -> CommandLineInterface {
"restore-selected",
CliCommand::new(&API_METHOD_RESTORE_SELECTED_COMMAND)
.arg_param(&["target"])
.completion_cb("target", crate::tools::complete_file_name),
.completion_cb("target", pbs_tools::fs::complete_file_name),
)
.insert(
"restore",
CliCommand::new(&API_METHOD_RESTORE_COMMAND)
.arg_param(&["target"])
.completion_cb("target", crate::tools::complete_file_name),
.completion_cb("target", pbs_tools::fs::complete_file_name),
)
.insert(
"find",
@ -984,7 +986,8 @@ impl Shell {
.metadata()
.clone();
let extractor = crate::pxar::extract::Extractor::new(rootdir, root_meta, true, Flags::DEFAULT);
let extractor =
crate::pxar::extract::Extractor::new(rootdir, root_meta, true, Flags::DEFAULT);
let mut extractor = ExtractorState::new(
&mut self.catalog,
@ -998,11 +1001,6 @@ impl Shell {
}
}
enum LoopState {
Break,
Continue,
}
struct ExtractorState<'a> {
path: Vec<u8>,
path_len: usize,
@ -1060,8 +1058,8 @@ impl<'a> ExtractorState<'a> {
let entry = match self.read_dir.next() {
Some(entry) => entry,
None => match self.handle_end_of_directory()? {
LoopState::Break => break, // done with root directory
LoopState::Continue => continue,
ControlFlow::Break(()) => break, // done with root directory
ControlFlow::Continue(()) => continue,
},
};
@ -1079,11 +1077,11 @@ impl<'a> ExtractorState<'a> {
Ok(())
}
fn handle_end_of_directory(&mut self) -> Result<LoopState, Error> {
fn handle_end_of_directory(&mut self) -> Result<ControlFlow<()>, Error> {
// go up a directory:
self.read_dir = match self.read_dir_stack.pop() {
Some(r) => r,
None => return Ok(LoopState::Break), // out of root directory
None => return Ok(ControlFlow::Break(())), // out of root directory
};
self.matches = self
@ -1102,7 +1100,7 @@ impl<'a> ExtractorState<'a> {
self.extractor.leave_directory()?;
Ok(LoopState::Continue)
Ok(ControlFlow::CONTINUE)
}
async fn handle_new_directory(

View File

@ -6,7 +6,7 @@ use anyhow::{Error};
use futures::ready;
use futures::stream::{Stream, TryStream};
use super::Chunker;
use pbs_datastore::Chunker;
/// Split input stream into dynamic sized chunks
pub struct ChunkStream<S: Unpin> {

View File

@ -0,0 +1,230 @@
use std::io::{self, Seek, SeekFrom};
use std::ops::Range;
use std::sync::{Arc, Mutex};
use std::task::Context;
use std::pin::Pin;
use anyhow::{bail, format_err, Error};
use pxar::accessor::{MaybeReady, ReadAt, ReadAtOperation};
use pbs_datastore::dynamic_index::DynamicIndexReader;
use pbs_datastore::read_chunk::ReadChunk;
use pbs_datastore::index::IndexFile;
use pbs_tools::lru_cache::LruCache;
struct CachedChunk {
range: Range<u64>,
data: Vec<u8>,
}
impl CachedChunk {
/// Perform sanity checks on the range and data size:
pub fn new(range: Range<u64>, data: Vec<u8>) -> Result<Self, Error> {
if data.len() as u64 != range.end - range.start {
bail!(
"read chunk with wrong size ({} != {})",
data.len(),
range.end - range.start,
);
}
Ok(Self { range, data })
}
}
pub struct BufferedDynamicReader<S> {
store: S,
index: DynamicIndexReader,
archive_size: u64,
read_buffer: Vec<u8>,
buffered_chunk_idx: usize,
buffered_chunk_start: u64,
read_offset: u64,
lru_cache: LruCache<usize, CachedChunk>,
}
struct ChunkCacher<'a, S> {
store: &'a mut S,
index: &'a DynamicIndexReader,
}
impl<'a, S: ReadChunk> pbs_tools::lru_cache::Cacher<usize, CachedChunk> for ChunkCacher<'a, S> {
fn fetch(&mut self, index: usize) -> Result<Option<CachedChunk>, Error> {
let info = match self.index.chunk_info(index) {
Some(info) => info,
None => bail!("chunk index out of range"),
};
let range = info.range;
let data = self.store.read_chunk(&info.digest)?;
CachedChunk::new(range, data).map(Some)
}
}
impl<S: ReadChunk> BufferedDynamicReader<S> {
pub fn new(index: DynamicIndexReader, store: S) -> Self {
let archive_size = index.index_bytes();
Self {
store,
index,
archive_size,
read_buffer: Vec::with_capacity(1024 * 1024),
buffered_chunk_idx: 0,
buffered_chunk_start: 0,
read_offset: 0,
lru_cache: LruCache::new(32),
}
}
pub fn archive_size(&self) -> u64 {
self.archive_size
}
fn buffer_chunk(&mut self, idx: usize) -> Result<(), Error> {
//let (start, end, data) = self.lru_cache.access(
let cached_chunk = self.lru_cache.access(
idx,
&mut ChunkCacher {
store: &mut self.store,
index: &self.index,
},
)?.ok_or_else(|| format_err!("chunk not found by cacher"))?;
// fixme: avoid copy
self.read_buffer.clear();
self.read_buffer.extend_from_slice(&cached_chunk.data);
self.buffered_chunk_idx = idx;
self.buffered_chunk_start = cached_chunk.range.start;
//println!("BUFFER {} {}", self.buffered_chunk_start, end);
Ok(())
}
}
impl<S: ReadChunk> pbs_tools::io::BufferedRead for BufferedDynamicReader<S> {
fn buffered_read(&mut self, offset: u64) -> Result<&[u8], Error> {
if offset == self.archive_size {
return Ok(&self.read_buffer[0..0]);
}
let buffer_len = self.read_buffer.len();
let index = &self.index;
// optimization for sequential read
if buffer_len > 0
&& ((self.buffered_chunk_idx + 1) < index.index().len())
&& (offset >= (self.buffered_chunk_start + (self.read_buffer.len() as u64)))
{
let next_idx = self.buffered_chunk_idx + 1;
let next_end = index.chunk_end(next_idx);
if offset < next_end {
self.buffer_chunk(next_idx)?;
let buffer_offset = (offset - self.buffered_chunk_start) as usize;
return Ok(&self.read_buffer[buffer_offset..]);
}
}
if (buffer_len == 0)
|| (offset < self.buffered_chunk_start)
|| (offset >= (self.buffered_chunk_start + (self.read_buffer.len() as u64)))
{
let end_idx = index.index().len() - 1;
let end = index.chunk_end(end_idx);
let idx = index.binary_search(0, 0, end_idx, end, offset)?;
self.buffer_chunk(idx)?;
}
let buffer_offset = (offset - self.buffered_chunk_start) as usize;
Ok(&self.read_buffer[buffer_offset..])
}
}
impl<S: ReadChunk> std::io::Read for BufferedDynamicReader<S> {
fn read(&mut self, buf: &mut [u8]) -> Result<usize, std::io::Error> {
use pbs_tools::io::BufferedRead;
use std::io::{Error, ErrorKind};
let data = match self.buffered_read(self.read_offset) {
Ok(v) => v,
Err(err) => return Err(Error::new(ErrorKind::Other, err.to_string())),
};
let n = if data.len() > buf.len() {
buf.len()
} else {
data.len()
};
buf[0..n].copy_from_slice(&data[0..n]);
self.read_offset += n as u64;
Ok(n)
}
}
impl<S: ReadChunk> std::io::Seek for BufferedDynamicReader<S> {
fn seek(&mut self, pos: SeekFrom) -> Result<u64, std::io::Error> {
let new_offset = match pos {
SeekFrom::Start(start_offset) => start_offset as i64,
SeekFrom::End(end_offset) => (self.archive_size as i64) + end_offset,
SeekFrom::Current(offset) => (self.read_offset as i64) + offset,
};
use std::io::{Error, ErrorKind};
if (new_offset < 0) || (new_offset > (self.archive_size as i64)) {
return Err(Error::new(
ErrorKind::Other,
format!(
"seek is out of range {} ([0..{}])",
new_offset, self.archive_size
),
));
}
self.read_offset = new_offset as u64;
Ok(self.read_offset)
}
}
/// This is a workaround until we have cleaned up the chunk/reader/... infrastructure for better
/// async use!
///
/// Ideally BufferedDynamicReader gets replaced so the LruCache maps to `BroadcastFuture<Chunk>`,
/// so that we can properly access it from multiple threads simultaneously while not issuing
/// duplicate simultaneous reads over http.
#[derive(Clone)]
pub struct LocalDynamicReadAt<R: ReadChunk> {
inner: Arc<Mutex<BufferedDynamicReader<R>>>,
}
impl<R: ReadChunk> LocalDynamicReadAt<R> {
pub fn new(inner: BufferedDynamicReader<R>) -> Self {
Self {
inner: Arc::new(Mutex::new(inner)),
}
}
}
impl<R: ReadChunk> ReadAt for LocalDynamicReadAt<R> {
fn start_read_at<'a>(
self: Pin<&'a Self>,
_cx: &mut Context,
buf: &'a mut [u8],
offset: u64,
) -> MaybeReady<io::Result<usize>, ReadAtOperation<'a>> {
use std::io::Read;
MaybeReady::Ready(tokio::task::block_in_place(move || {
let mut reader = self.inner.lock().unwrap();
reader.seek(SeekFrom::Start(offset))?;
Ok(reader.read(buf)?)
}))
}
fn poll_complete<'a>(
self: Pin<&'a Self>,
_op: ReadAtOperation<'a>,
) -> MaybeReady<io::Result<usize>, ReadAtOperation<'a>> {
panic!("LocalDynamicReadAt::start_read_at returned Pending");
}
}
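A rough usage sketch (not part of the diff): `index` and `store` are placeholders for an already-opened DynamicIndexReader and any ReadChunk implementation; reads are served through the 32-entry LRU chunk cache above:

fn read_first_kb<S: ReadChunk>(index: DynamicIndexReader, store: S) -> Result<Vec<u8>, Error> {
    use std::io::Read;
    let mut reader = BufferedDynamicReader::new(index, store);
    let len = reader.archive_size().min(1024) as usize;
    let mut buf = vec![0u8; len];
    reader.read_exact(&mut buf)?; // buffer_chunk() transparently fills read_buffer
    Ok(buf)
}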

View File

@ -20,14 +20,17 @@ use proxmox::{
tools::fs::{file_get_json, replace_file, CreateOptions},
};
use proxmox_http::client::HttpsConnector;
use proxmox_http::uri::build_authority;
use pbs_api_types::{Authid, Userid};
use pbs_tools::broadcast_future::BroadcastFuture;
use pbs_tools::json::json_object_to_query;
use pbs_tools::ticket;
use pbs_tools::percent_encoding::DEFAULT_ENCODE_SET;
use super::pipe_to_stream::PipeToSendStream;
use crate::api2::types::{Authid, Userid};
use crate::tools::{
self,
BroadcastFuture,
DEFAULT_ENCODE_SET,
http::HttpsConnector,
};
use super::PROXMOX_BACKUP_TCP_KEEPALIVE_TIME;
/// Timeout used for several HTTP operations that are expected to finish quickly but may block in
/// certain error conditions. Keep it generous, to avoid false positives under high load.
@ -233,7 +236,7 @@ fn store_ticket_info(prefix: &str, server: &str, username: &str, ticket: &str, t
let mut new_data = json!({});
let ticket_lifetime = tools::ticket::TICKET_LIFETIME - 60;
let ticket_lifetime = ticket::TICKET_LIFETIME - 60;
let empty = serde_json::map::Map::new();
for (server, info) in data.as_object().unwrap_or(&empty) {
@ -259,7 +262,7 @@ fn load_ticket_info(prefix: &str, server: &str, userid: &Userid) -> Option<(Stri
let path = base.place_runtime_file("tickets").ok()?;
let data = file_get_json(&path, None).ok()?;
let now = proxmox::tools::time::epoch_i64();
let ticket_lifetime = tools::ticket::TICKET_LIFETIME - 60;
let ticket_lifetime = ticket::TICKET_LIFETIME - 60;
let uinfo = data[server][userid.as_str()].as_object()?;
let timestamp = uinfo["timestamp"].as_i64()?;
let age = now - timestamp;
@ -273,6 +276,18 @@ fn load_ticket_info(prefix: &str, server: &str, userid: &Userid) -> Option<(Stri
}
}
fn build_uri(server: &str, port: u16, path: &str, query: Option<String>) -> Result<Uri, Error> {
Uri::builder()
.scheme("https")
.authority(build_authority(server, port)?)
.path_and_query(match query {
Some(query) => format!("/{}?{}", path, query),
None => format!("/{}", path),
})
.build()
.map_err(|err| format_err!("error building uri - {}", err))
}
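A hedged example (not in the diff; host and path are invented, and assuming build_authority() simply joins host and port for plain hostnames) of what the new build_uri() helper yields:

let uri = build_uri("backup.example.com", 8007, "api2/json/version", Some("limit=1".to_string()))?;
assert_eq!(uri.to_string(), "https://backup.example.com:8007/api2/json/version?limit=1");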
impl HttpClient {
pub fn new(
server: &str,
@ -283,13 +298,13 @@ impl HttpClient {
let verified_fingerprint = Arc::new(Mutex::new(None));
let mut fingerprint = options.fingerprint.take();
let mut expected_fingerprint = options.fingerprint.take();
if fingerprint.is_some() {
if expected_fingerprint.is_some() {
// do not store fingerprints passed via options in cache
options.fingerprint_cache = false;
} else if options.fingerprint_cache && options.prefix.is_some() {
fingerprint = load_fingerprint(options.prefix.as_ref().unwrap(), server);
expected_fingerprint = load_fingerprint(options.prefix.as_ref().unwrap(), server);
}
let mut ssl_connector_builder = SslConnector::builder(SslMethod::tls()).unwrap();
@ -301,9 +316,9 @@ impl HttpClient {
let fingerprint_cache = options.fingerprint_cache;
let prefix = options.prefix.clone();
ssl_connector_builder.set_verify_callback(openssl::ssl::SslVerifyMode::PEER, move |valid, ctx| {
let (valid, fingerprint) = Self::verify_callback(valid, ctx, fingerprint.clone(), interactive);
if valid {
if let Some(fingerprint) = fingerprint {
match Self::verify_callback(valid, ctx, expected_fingerprint.as_ref(), interactive) {
Ok(None) => true,
Ok(Some(fingerprint)) => {
if fingerprint_cache && prefix.is_some() {
if let Err(err) = store_fingerprint(
prefix.as_ref().unwrap(), &server, &fingerprint) {
@ -311,9 +326,13 @@ impl HttpClient {
}
}
*verified_fingerprint.lock().unwrap() = Some(fingerprint);
}
true
},
Err(err) => {
eprintln!("certificate validation failed - {}", err);
false
},
}
valid
});
} else {
ssl_connector_builder.set_verify(openssl::ssl::SslVerifyMode::NONE);
@ -324,7 +343,7 @@ impl HttpClient {
httpc.enforce_http(false); // we want https...
httpc.set_connect_timeout(Some(std::time::Duration::new(10, 0)));
let https = HttpsConnector::with_connector(httpc, ssl_connector_builder.build());
let https = HttpsConnector::with_connector(httpc, ssl_connector_builder.build(), PROXMOX_BACKUP_TCP_KEEPALIVE_TIME);
let client = Client::builder()
//.http2_initial_stream_window_size( (1 << 31) - 2)
@ -459,42 +478,47 @@ impl HttpClient {
}
fn verify_callback(
valid: bool, ctx:
&mut X509StoreContextRef,
expected_fingerprint: Option<String>,
openssl_valid: bool,
ctx: &mut X509StoreContextRef,
expected_fingerprint: Option<&String>,
interactive: bool,
) -> (bool, Option<String>) {
if valid { return (true, None); }
) -> Result<Option<String>, Error> {
if openssl_valid {
return Ok(None);
}
let cert = match ctx.current_cert() {
Some(cert) => cert,
None => return (false, None),
None => bail!("context lacks current certificate."),
};
let depth = ctx.error_depth();
if depth != 0 { return (false, None); }
if depth != 0 { bail!("context depth != 0") }
let fp = match cert.digest(openssl::hash::MessageDigest::sha256()) {
Ok(fp) => fp,
Err(_) => return (false, None), // should not happen
Err(err) => bail!("failed to calculate certificate FP - {}", err), // should not happen
};
let fp_string = proxmox::tools::digest_to_hex(&fp);
let fp_string = fp_string.as_bytes().chunks(2).map(|v| std::str::from_utf8(v).unwrap())
.collect::<Vec<&str>>().join(":");
if let Some(expected_fingerprint) = expected_fingerprint {
if expected_fingerprint.to_lowercase() == fp_string {
return (true, Some(fp_string));
let expected_fingerprint = expected_fingerprint.to_lowercase();
if expected_fingerprint == fp_string {
return Ok(Some(fp_string));
} else {
return (false, None);
eprintln!("WARNING: certificate fingerprint does not match expected fingerprint!");
eprintln!("expected: {}", expected_fingerprint);
}
}
// If we're on a TTY, query the user
if interactive && tty::stdin_isatty() {
println!("fingerprint: {}", fp_string);
eprintln!("fingerprint: {}", fp_string);
loop {
print!("Are you sure you want to continue connecting? (y/n): ");
eprint!("Are you sure you want to continue connecting? (y/n): ");
let _ = std::io::stderr().flush();
use std::io::{BufRead, BufReader};
let mut line = String::new();
@ -502,18 +526,19 @@ impl HttpClient {
Ok(_) => {
let trimmed = line.trim();
if trimmed == "y" || trimmed == "Y" {
return (true, Some(fp_string));
return Ok(Some(fp_string));
} else if trimmed == "n" || trimmed == "N" {
return (false, None);
bail!("Certificate fingerprint was not confirmed.");
} else {
continue;
}
}
Err(_) => return (false, None),
Err(err) => bail!("Certificate fingerprint was not confirmed - {}.", err),
}
}
}
(false, None)
bail!("Certificate fingerprint was not confirmed.");
}
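As a side note, a minimal sketch (not in the diff) of the fingerprint string the callback above compares against, i.e. the SHA-256 digest rendered as lowercase hex byte pairs joined by colons:

let digest: [u8; 4] = [0xde, 0xad, 0xbe, 0xef]; // made-up prefix; real certificate digests are 32 bytes
let fp_string = proxmox::tools::digest_to_hex(&digest)
    .as_bytes()
    .chunks(2)
    .map(|v| std::str::from_utf8(v).unwrap())
    .collect::<Vec<&str>>()
    .join(":");
assert_eq!(fp_string, "de:ad:be:ef"); // expected fingerprints are lowercased before the comparison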
pub async fn request(&self, mut req: Request<Body>) -> Result<Value, Error> {
@ -614,16 +639,11 @@ impl HttpClient {
data: Option<Value>,
) -> Result<Value, Error> {
let path = path.trim_matches('/');
let mut url = format!("https://{}:{}/{}", &self.server, self.port, path);
if let Some(data) = data {
let query = tools::json_object_to_query(data).unwrap();
url.push('?');
url.push_str(&query);
}
let url: Uri = url.parse().unwrap();
let query = match data {
Some(data) => Some(json_object_to_query(data)?),
None => None,
};
let url = build_uri(&self.server, self.port, path, query)?;
let req = Request::builder()
.method("POST")
@ -757,39 +777,38 @@ impl HttpClient {
}
pub fn request_builder(server: &str, port: u16, method: &str, path: &str, data: Option<Value>) -> Result<Request<Body>, Error> {
let path = path.trim_matches('/');
let url: Uri = format!("https://{}:{}/{}", server, port, path).parse()?;
if let Some(data) = data {
if method == "POST" {
let url = build_uri(server, port, path, None)?;
let request = Request::builder()
.method(method)
.uri(url)
.header("User-Agent", "proxmox-backup-client/1.0")
.header(hyper::header::CONTENT_TYPE, "application/json")
.body(Body::from(data.to_string()))?;
return Ok(request);
Ok(request)
} else {
let query = tools::json_object_to_query(data)?;
let url: Uri = format!("https://{}:{}/{}?{}", server, port, path, query).parse()?;
let query = json_object_to_query(data)?;
let url = build_uri(server, port, path, Some(query))?;
let request = Request::builder()
.method(method)
.uri(url)
.header("User-Agent", "proxmox-backup-client/1.0")
.header(hyper::header::CONTENT_TYPE, "application/x-www-form-urlencoded")
.body(Body::empty())?;
return Ok(request);
Ok(request)
}
} else {
let url = build_uri(server, port, path, None)?;
let request = Request::builder()
.method(method)
.uri(url)
.header("User-Agent", "proxmox-backup-client/1.0")
.header(hyper::header::CONTENT_TYPE, "application/x-www-form-urlencoded")
.body(Body::empty())?;
Ok(request)
}
let request = Request::builder()
.method(method)
.uri(url)
.header("User-Agent", "proxmox-backup-client/1.0")
.header(hyper::header::CONTENT_TYPE, "application/x-www-form-urlencoded")
.body(Body::empty())?;
Ok(request)
}
}
@ -970,29 +989,25 @@ impl H2Client {
let path = path.trim_matches('/');
let content_type = content_type.unwrap_or("application/x-www-form-urlencoded");
let query = match param {
Some(param) => {
let query = json_object_to_query(param)?;
// We detected a problem with hyper at around 6000 characters - so we try to stay on the safe side
if query.len() > 4096 {
bail!("h2 query data too large ({} bytes) - please encode data inside body", query.len());
}
Some(query)
}
None => None,
};
if let Some(param) = param {
let query = tools::json_object_to_query(param)?;
// We detected problem with hyper around 6000 characters - seo we try to keep on the safe side
if query.len() > 4096 { bail!("h2 query data too large ({} bytes) - please encode data inside body", query.len()); }
let url: Uri = format!("https://{}:8007/{}?{}", server, path, query).parse()?;
let request = Request::builder()
.method(method)
.uri(url)
.header("User-Agent", "proxmox-backup-client/1.0")
.header(hyper::header::CONTENT_TYPE, content_type)
.body(())?;
Ok(request)
} else {
let url: Uri = format!("https://{}:8007/{}", server, path).parse()?;
let request = Request::builder()
.method(method)
.uri(url)
.header("User-Agent", "proxmox-backup-client/1.0")
.header(hyper::header::CONTENT_TYPE, content_type)
.body(())?;
Ok(request)
}
let url = build_uri(server, 8007, path, query)?;
let request = Request::builder()
.method(method)
.uri(url)
.header("User-Agent", "proxmox-backup-client/1.0")
.header(hyper::header::CONTENT_TYPE, content_type)
.body(())?;
Ok(request)
}
}

View File

@ -5,13 +5,15 @@
use anyhow::Error;
use crate::{
api2::types::{Userid, Authid},
tools::ticket::Ticket,
auth_helpers::private_auth_key,
};
use pbs_api_types::{Authid, Userid};
use pbs_tools::ticket::Ticket;
use pbs_tools::cert::CertInfo;
use pbs_tools::auth::private_auth_key;
pub mod catalog_shell;
pub mod dynamic_index;
pub mod pxar;
pub mod tools;
mod merge_known_chunks;
pub mod pipe_to_stream;
@ -43,7 +45,10 @@ pub use backup_repo::*;
mod backup_specification;
pub use backup_specification::*;
pub mod pull;
mod chunk_stream;
pub use chunk_stream::{ChunkStream, FixedChunkStream};
pub const PROXMOX_BACKUP_TCP_KEEPALIVE_TIME: u32 = 120;
/// Connect to localhost:8007 as root@pam
///
@ -55,7 +60,7 @@ pub fn connect_to_localhost() -> Result<HttpClient, Error> {
let client = if uid.is_root() {
let ticket = Ticket::new("PBS", Userid::root_userid())?
.sign(private_auth_key(), None)?;
let fingerprint = crate::tools::cert::CertInfo::new()?.fingerprint()?;
let fingerprint = CertInfo::new()?.fingerprint()?;
let options = HttpClientOptions::new_non_interactive(ticket, Some(fingerprint));
HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?

View File

@ -1,11 +1,11 @@
use std::pin::Pin;
use std::task::{Context, Poll};
use anyhow::{Error};
use futures::*;
use pin_project::pin_project;
use anyhow::Error;
use futures::{ready, Stream};
use pin_project_lite::pin_project;
use crate::backup::ChunkInfo;
use pbs_datastore::data_blob::ChunkInfo;
pub enum MergedChunkInfo {
Known(Vec<(u64, [u8; 32])>),
@ -16,11 +16,12 @@ pub trait MergeKnownChunks: Sized {
fn merge_known_chunks(self) -> MergeKnownChunksQueue<Self>;
}
#[pin_project]
pub struct MergeKnownChunksQueue<S> {
#[pin]
input: S,
buffer: Option<MergedChunkInfo>,
pin_project! {
pub struct MergeKnownChunksQueue<S> {
#[pin]
input: S,
buffer: Option<MergedChunkInfo>,
}
}
impl<S> MergeKnownChunks for S

View File

@ -5,8 +5,8 @@
use std::pin::Pin;
use std::task::{Context, Poll};
use bytes::Bytes;
use anyhow::{format_err, Error};
use bytes::Bytes;
use futures::{ready, Future};
use h2::SendStream;

View File

@ -23,12 +23,15 @@ use proxmox::c_str;
use proxmox::sys::error::SysError;
use proxmox::tools::fd::RawFdNum;
use proxmox::tools::vec;
use proxmox::tools::fd::Fd;
use pbs_datastore::catalog::BackupCatalogWriter;
use pbs_tools::{acl, fs, xattr};
use pbs_tools::str::strip_ascii_whitespace;
use crate::pxar::catalog::BackupCatalogWriter;
use crate::pxar::metadata::errno_is_unsupported;
use crate::pxar::Flags;
use crate::pxar::tools::assert_single_path_component;
use crate::tools::{acl, fs, xattr, Fd};
/// Pxar options for creating a pxar archive/stream
#[derive(Default, Clone)]
@ -169,7 +172,7 @@ where
bail!("refusing to backup a virtual file system");
}
let fs_feature_flags = Flags::from_magic(fs_magic);
let mut fs_feature_flags = Flags::from_magic(fs_magic);
let stat = nix::sys::stat::fstat(source_dir.as_raw_fd())?;
let metadata = get_metadata(
@ -177,6 +180,7 @@ where
&stat,
feature_flags & fs_feature_flags,
fs_magic,
&mut fs_feature_flags,
)
.map_err(|err| format_err!("failed to get metadata for source directory: {}", err))?;
@ -359,7 +363,7 @@ impl Archiver {
}
};
let line = crate::tools::strip_ascii_whitespace(&line);
let line = strip_ascii_whitespace(&line);
if line.is_empty() || line[0] == b'#' {
continue;
@ -533,7 +537,7 @@ impl Archiver {
None => return Ok(()),
};
let metadata = get_metadata(fd.as_raw_fd(), &stat, self.flags(), self.fs_magic)?;
let metadata = get_metadata(fd.as_raw_fd(), &stat, self.flags(), self.fs_magic, &mut self.fs_feature_flags)?;
if self
.patterns
@ -742,7 +746,7 @@ impl Archiver {
}
}
fn get_metadata(fd: RawFd, stat: &FileStat, flags: Flags, fs_magic: i64) -> Result<Metadata, Error> {
fn get_metadata(fd: RawFd, stat: &FileStat, flags: Flags, fs_magic: i64, fs_feature_flags: &mut Flags) -> Result<Metadata, Error> {
// required for some of these
let proc_path = Path::new("/proc/self/fd/").join(fd.to_string());
@ -757,14 +761,14 @@ fn get_metadata(fd: RawFd, stat: &FileStat, flags: Flags, fs_magic: i64) -> Resu
..Default::default()
};
get_xattr_fcaps_acl(&mut meta, fd, &proc_path, flags)?;
get_xattr_fcaps_acl(&mut meta, fd, &proc_path, flags, fs_feature_flags)?;
get_chattr(&mut meta, fd)?;
get_fat_attr(&mut meta, fd, fs_magic)?;
get_quota_project_id(&mut meta, fd, flags, fs_magic)?;
Ok(meta)
}
fn get_fcaps(meta: &mut Metadata, fd: RawFd, flags: Flags) -> Result<(), Error> {
fn get_fcaps(meta: &mut Metadata, fd: RawFd, flags: Flags, fs_feature_flags: &mut Flags) -> Result<(), Error> {
if !flags.contains(Flags::WITH_FCAPS) {
return Ok(());
}
@ -775,7 +779,10 @@ fn get_fcaps(meta: &mut Metadata, fd: RawFd, flags: Flags) -> Result<(), Error>
Ok(())
}
Err(Errno::ENODATA) => Ok(()),
Err(Errno::EOPNOTSUPP) => Ok(()),
Err(Errno::EOPNOTSUPP) => {
fs_feature_flags.remove(Flags::WITH_FCAPS);
Ok(())
}
Err(Errno::EBADF) => Ok(()), // symlinks
Err(err) => bail!("failed to read file capabilities: {}", err),
}
@ -786,6 +793,7 @@ fn get_xattr_fcaps_acl(
fd: RawFd,
proc_path: &Path,
flags: Flags,
fs_feature_flags: &mut Flags,
) -> Result<(), Error> {
if !flags.contains(Flags::WITH_XATTRS) {
return Ok(());
@ -793,19 +801,22 @@ fn get_xattr_fcaps_acl(
let xattrs = match xattr::flistxattr(fd) {
Ok(names) => names,
Err(Errno::EOPNOTSUPP) => return Ok(()),
Err(Errno::EOPNOTSUPP) => {
fs_feature_flags.remove(Flags::WITH_XATTRS);
return Ok(());
},
Err(Errno::EBADF) => return Ok(()), // symlinks
Err(err) => bail!("failed to read xattrs: {}", err),
};
for attr in &xattrs {
if xattr::is_security_capability(&attr) {
get_fcaps(meta, fd, flags)?;
get_fcaps(meta, fd, flags, fs_feature_flags)?;
continue;
}
if xattr::is_acl(&attr) {
get_acl(meta, proc_path, flags)?;
get_acl(meta, proc_path, flags, fs_feature_flags)?;
continue;
}
@ -910,7 +921,7 @@ fn get_quota_project_id(
Ok(())
}
fn get_acl(metadata: &mut Metadata, proc_path: &Path, flags: Flags) -> Result<(), Error> {
fn get_acl(metadata: &mut Metadata, proc_path: &Path, flags: Flags, fs_feature_flags: &mut Flags) -> Result<(), Error> {
if !flags.contains(Flags::WITH_ACL) {
return Ok(());
}
@ -919,10 +930,10 @@ fn get_acl(metadata: &mut Metadata, proc_path: &Path, flags: Flags) -> Result<()
return Ok(());
}
get_acl_do(metadata, proc_path, acl::ACL_TYPE_ACCESS)?;
get_acl_do(metadata, proc_path, acl::ACL_TYPE_ACCESS, fs_feature_flags)?;
if metadata.is_dir() {
get_acl_do(metadata, proc_path, acl::ACL_TYPE_DEFAULT)?;
get_acl_do(metadata, proc_path, acl::ACL_TYPE_DEFAULT, fs_feature_flags)?;
}
Ok(())
@ -932,6 +943,7 @@ fn get_acl_do(
metadata: &mut Metadata,
proc_path: &Path,
acl_type: acl::ACLType,
fs_feature_flags: &mut Flags,
) -> Result<(), Error> {
// In order to be able to get ACLs with type ACL_TYPE_DEFAULT, we have
// to create a path for acl_get_file(). acl_get_fd() only allows to get
@ -939,7 +951,10 @@ fn get_acl_do(
let acl = match acl::ACL::get_file(&proc_path, acl_type) {
Ok(acl) => acl,
// Don't bail if underlying endpoint does not support acls
Err(Errno::EOPNOTSUPP) => return Ok(()),
Err(Errno::EOPNOTSUPP) => {
fs_feature_flags.remove(Flags::WITH_ACL);
return Ok(());
}
// Don't bail if the endpoint cannot carry acls
Err(Errno::EBADF) => return Ok(()),
// Don't bail if there is no data

View File

@ -27,12 +27,12 @@ use proxmox::tools::{
io::{sparse_copy, sparse_copy_async},
};
use pbs_tools::zip::{ZipEncoder, ZipEntry};
use crate::pxar::dir_stack::PxarDirStack;
use crate::pxar::metadata;
use crate::pxar::Flags;
use crate::tools::zip::{ZipEncoder, ZipEntry};
pub struct PxarExtractOptions<'a> {
pub match_list: &'a[MatchEntry],
pub extract_match_default: bool,
@ -215,7 +215,7 @@ where
}
/// Common state for file extraction.
pub(crate) struct Extractor {
pub struct Extractor {
feature_flags: Flags,
allow_existing_dirs: bool,
dir_stack: PxarDirStack,

View File

@ -368,7 +368,10 @@ impl Flags {
Flags::WITH_SYMLINKS |
Flags::WITH_DEVICE_NODES |
Flags::WITH_FIFOS |
Flags::WITH_SOCKETS
Flags::WITH_SOCKETS |
Flags::WITH_XATTRS |
Flags::WITH_ACL |
Flags::WITH_FCAPS
},
}
}

View File

@ -26,7 +26,7 @@ use pxar::accessor::{self, EntryRangeInfo, ReadAt};
use proxmox_fuse::requests::{self, FuseRequest};
use proxmox_fuse::{EntryParam, Fuse, ReplyBufState, Request, ROOT_ID};
use crate::tools::xattr;
use pbs_tools::xattr;
/// We mark inodes for regular files this way so we know how to access them.
const NON_DIRECTORY_INODE: u64 = 1u64 << 63;

View File

@ -13,9 +13,10 @@ use proxmox::c_result;
use proxmox::sys::error::SysError;
use proxmox::tools::fd::RawFdNum;
use pbs_tools::{acl, fs, xattr};
use crate::pxar::tools::perms_from_metadata;
use crate::pxar::Flags;
use crate::tools::{acl, fs, xattr};
//
// utility functions

View File

@ -47,12 +47,11 @@
//! (user, group, acl, ...) because this is already defined by the
//! linked `ENTRY`.
pub mod catalog;
pub(crate) mod create;
pub(crate) mod dir_stack;
pub(crate) mod extract;
pub(crate) mod metadata;
pub mod fuse;
pub(crate) mod metadata;
pub(crate) mod tools;
mod flags;

View File

@ -12,11 +12,9 @@ use nix::dir::Dir;
use nix::fcntl::OFlag;
use nix::sys::stat::Mode;
use crate::backup::CatalogWriter;
use crate::tools::{
StdChannelWriter,
TokioWriterAdapter,
};
use pbs_datastore::catalog::CatalogWriter;
use pbs_tools::sync::StdChannelWriter;
use pbs_tools::tokio::TokioWriterAdapter;
/// Stream implementation to encode and upload .pxar archives.
///
@ -113,7 +111,7 @@ impl Stream for PxarBackupStream {
}
}
match crate::tools::runtime::block_in_place(|| self.rx.as_ref().unwrap().recv()) {
match pbs_runtime::block_in_place(|| self.rx.as_ref().unwrap().recv()) {
Ok(data) => Poll::Ready(Some(data)),
Err(_) => {
let error = self.error.lock().unwrap();

View File

@ -5,9 +5,14 @@ use std::sync::{Arc, Mutex};
use anyhow::{bail, Error};
use pbs_tools::crypt_config::CryptConfig;
use pbs_api_types::CryptMode;
use pbs_datastore::data_blob::DataBlob;
use pbs_datastore::read_chunk::ReadChunk;
use pbs_datastore::read_chunk::AsyncReadChunk;
use pbs_runtime::block_on;
use super::BackupReader;
use crate::backup::{AsyncReadChunk, CryptConfig, CryptMode, DataBlob, ReadChunk};
use crate::tools::runtime::block_on;
/// Read chunks from remote host using ``BackupReader``
#[derive(Clone)]

View File

@ -7,15 +7,9 @@ use futures::*;
use proxmox::api::cli::format_and_print_result;
use super::HttpClient;
use crate::{
server::{
worker_is_active_local,
UPID,
},
tools,
};
use pbs_tools::percent_encoding::percent_encode_component;
use super::HttpClient;
/// Display task log on console
///
@ -54,13 +48,13 @@ pub async fn display_task_log(
let abort = abort_count.load(Ordering::Relaxed);
if abort > 0 {
let path = format!("api2/json/nodes/localhost/tasks/{}", tools::percent_encode_component(upid_str));
let path = format!("api2/json/nodes/localhost/tasks/{}", percent_encode_component(upid_str));
let _ = client.delete(&path, None).await?;
}
let param = json!({ "start": start, "limit": limit, "test-status": true });
let path = format!("api2/json/nodes/localhost/tasks/{}/log", tools::percent_encode_component(upid_str));
let path = format!("api2/json/nodes/localhost/tasks/{}/log", percent_encode_component(upid_str));
let result = client.get(&path, Some(param)).await?;
let active = result["active"].as_bool().unwrap();
@ -121,23 +115,3 @@ pub async fn view_task_result(
Ok(())
}
/// Wait for a locally spanned worker task
///
/// Note: local workers should print logs to stdout, so there is no
/// need to fetch/display logs. We just wait for the worker to finish.
pub async fn wait_for_local_worker(upid_str: &str) -> Result<(), Error> {
let upid: UPID = upid_str.parse()?;
let sleep_duration = core::time::Duration::new(0, 100_000_000);
loop {
if worker_is_active_local(&upid) {
tokio::time::sleep(sleep_duration).await;
} else {
break;
}
}
Ok(())
}

View File

@ -10,7 +10,7 @@ use proxmox::api::schema::*;
use proxmox::sys::linux::tty;
use proxmox::tools::fs::file_get_contents;
use proxmox_backup::backup::CryptMode;
use pbs_api_types::CryptMode;
pub const DEFAULT_ENCRYPTION_KEY_FILE_NAME: &str = "encryption-key.json";
pub const DEFAULT_MASTER_PUBKEY_FILE_NAME: &str = "master-public.pem";
@ -86,6 +86,14 @@ pub struct CryptoParams {
}
pub fn crypto_parameters(param: &Value) -> Result<CryptoParams, Error> {
do_crypto_parameters(param, false)
}
pub fn crypto_parameters_keep_fd(param: &Value) -> Result<CryptoParams, Error> {
do_crypto_parameters(param, true)
}
fn do_crypto_parameters(param: &Value, keep_keyfd_open: bool) -> Result<CryptoParams, Error> {
let keyfile = match param.get("keyfile") {
Some(Value::String(keyfile)) => Some(keyfile),
Some(_) => bail!("bad --keyfile parameter type"),
@ -135,11 +143,16 @@ pub fn crypto_parameters(param: &Value) -> Result<CryptoParams, Error> {
file_get_contents(keyfile)?,
)),
(None, Some(fd)) => {
let input = unsafe { std::fs::File::from_raw_fd(fd) };
let mut input = unsafe { std::fs::File::from_raw_fd(fd) };
let mut data = Vec::new();
let _len: usize = { input }.read_to_end(&mut data).map_err(|err| {
let _len: usize = input.read_to_end(&mut data).map_err(|err| {
format_err!("error reading encryption key from fd {}: {}", fd, err)
})?;
if keep_keyfd_open {
// don't close fd if requested, and try to reset seek position
std::mem::forget(input);
unsafe { libc::lseek(fd, 0, libc::SEEK_SET); }
}
Some(KeyWithSource::from_fd(data))
}
};
@ -330,13 +343,8 @@ pub(crate) unsafe fn set_test_default_master_pubkey(value: Result<Option<Vec<u8>
pub fn get_encryption_key_password() -> Result<Vec<u8>, Error> {
// fixme: implement other input methods
use std::env::VarError::*;
match std::env::var("PBS_ENCRYPTION_PASSWORD") {
Ok(p) => return Ok(p.as_bytes().to_vec()),
Err(NotUnicode(_)) => bail!("PBS_ENCRYPTION_PASSWORD contains bad characters"),
Err(NotPresent) => {
// Try another method
}
if let Some(password) = super::get_secret_from_env("PBS_ENCRYPTION_PASSWORD")? {
return Ok(password.as_bytes().to_vec());
}
// If we're on a TTY, query the user for a password
@ -347,6 +355,20 @@ pub fn get_encryption_key_password() -> Result<Vec<u8>, Error> {
bail!("no password input mechanism available");
}
#[cfg(test)]
fn create_testdir(name: &str) -> Result<String, Error> {
// FIXME:
//let mut testdir: PathBuf = format!("{}/testout", env!("CARGO_TARGET_TMPDIR")).into();
let mut testdir: PathBuf = "./target/testout".to_string().into();
testdir.push(std::module_path!());
testdir.push(name);
let _ = std::fs::remove_dir_all(&testdir);
let _ = std::fs::create_dir_all(&testdir);
Ok(testdir.to_str().unwrap().to_string())
}
#[test]
// WARNING: there must only be one test for crypto_parameters as the default key handling is not
// safe w.r.t. concurrency
@ -360,9 +382,11 @@ fn test_crypto_parameters_handling() -> Result<(), Error> {
let some_master_key = vec![3;1];
let default_master_key = vec![4;1];
let keypath = "./target/testout/keyfile.test";
let master_keypath = "./target/testout/masterkeyfile.test";
let invalid_keypath = "./target/testout/invalid_keyfile.test";
let testdir = create_testdir("key_source")?;
let keypath = format!("{}/keyfile.test", testdir);
let master_keypath = format!("{}/masterkeyfile.test", testdir);
let invalid_keypath = format!("{}/invalid_keyfile.test", testdir);
let no_key_res = CryptoParams {
enc_key: None,

View File

@ -1,5 +1,10 @@
//! Shared tools useful for common CLI clients.
use std::collections::HashMap;
use std::fs::File;
use std::os::unix::io::FromRawFd;
use std::env::VarError::{NotUnicode, NotPresent};
use std::io::{BufReader, BufRead};
use std::process::Command;
use anyhow::{bail, format_err, Context, Error};
use serde_json::{json, Value};
@ -7,15 +12,15 @@ use xdg::BaseDirectories;
use proxmox::{
api::schema::*,
api::cli::shellword_split,
tools::fs::file_get_json,
};
use proxmox_backup::api2::access::user::UserWithTokens;
use proxmox_backup::api2::types::*;
use proxmox_backup::backup::BackupDir;
use proxmox_backup::buildcfg;
use proxmox_backup::client::*;
use proxmox_backup::tools;
use pbs_api_types::{BACKUP_REPO_URL, Authid, UserWithTokens};
use pbs_datastore::BackupDir;
use pbs_tools::json::json_object_to_query;
use crate::{BackupRepository, HttpClient, HttpClientOptions};
pub mod key_source;
@ -33,6 +38,80 @@ pub const CHUNK_SIZE_SCHEMA: Schema = IntegerSchema::new("Chunk size in KB. Must
.default(4096)
.schema();
/// Helper to read a secret through an environment variable (ENV).
///
/// Tries the following variable names in order and returns the value
/// resolved from the first one that is defined:
///
/// BASE_NAME => use the value of ENV(BASE_NAME) directly as the secret
/// BASE_NAME_FD => read the secret from the specified file descriptor
/// BASE_NAME_FILE => read the secret from the specified file name
/// BASE_NAME_CMD => run the specified command and read the secret from the first line of its stdout
///
/// Only the first line of data is returned (without CR/LF).
pub fn get_secret_from_env(base_name: &str) -> Result<Option<String>, Error> {
let firstline = |data: String| -> String {
match data.lines().next() {
Some(line) => line.to_string(),
None => String::new(),
}
};
let firstline_file = |file: &mut File| -> Result<String, Error> {
let reader = BufReader::new(file);
match reader.lines().next() {
Some(Ok(line)) => Ok(line),
Some(Err(err)) => Err(err.into()),
None => Ok(String::new()),
}
};
match std::env::var(base_name) {
Ok(p) => return Ok(Some(firstline(p))),
Err(NotUnicode(_)) => bail!(format!("{} contains bad characters", base_name)),
Err(NotPresent) => {},
};
let env_name = format!("{}_FD", base_name);
match std::env::var(&env_name) {
Ok(fd_str) => {
let fd: i32 = fd_str.parse()
.map_err(|err| format_err!("unable to parse file descriptor in ENV({}): {}", env_name, err))?;
let mut file = unsafe { File::from_raw_fd(fd) };
return Ok(Some(firstline_file(&mut file)?));
}
Err(NotUnicode(_)) => bail!(format!("{} contains bad characters", env_name)),
Err(NotPresent) => {},
}
let env_name = format!("{}_FILE", base_name);
match std::env::var(&env_name) {
Ok(filename) => {
let mut file = std::fs::File::open(filename)
.map_err(|err| format_err!("unable to open file in ENV({}): {}", env_name, err))?;
return Ok(Some(firstline_file(&mut file)?));
}
Err(NotUnicode(_)) => bail!(format!("{} contains bad characters", env_name)),
Err(NotPresent) => {},
}
let env_name = format!("{}_CMD", base_name);
match std::env::var(&env_name) {
Ok(ref command) => {
let args = shellword_split(command)?;
let mut command = Command::new(&args[0]);
command.args(&args[1..]);
let output = pbs_tools::run_command(command, None)?;
return Ok(Some(firstline(output)));
}
Err(NotUnicode(_)) => bail!(format!("{} contains bad characters", env_name)),
Err(NotPresent) => {},
}
Ok(None)
}
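A hypothetical usage sketch (variable names follow the doc comment above; the command itself is invented): a caller exporting the _CMD variant gets back the first line of that command's stdout:

// e.g. started as: PBS_ENCRYPTION_PASSWORD_CMD="pass show pbs/encryption" proxmox-backup-client backup ...
if let Some(secret) = get_secret_from_env("PBS_ENCRYPTION_PASSWORD")? {
    // only the first output line is returned, without the trailing newline
    let _key_password = secret.into_bytes();
}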
pub fn get_default_repository() -> Option<String> {
std::env::var("PBS_REPOSITORY").ok()
}
@ -65,13 +144,7 @@ pub fn connect(repo: &BackupRepository) -> Result<HttpClient, Error> {
fn connect_do(server: &str, port: u16, auth_id: &Authid) -> Result<HttpClient, Error> {
let fingerprint = std::env::var(ENV_VAR_PBS_FINGERPRINT).ok();
use std::env::VarError::*;
let password = match std::env::var(ENV_VAR_PBS_PASSWORD) {
Ok(p) => Some(p),
Err(NotUnicode(_)) => bail!(format!("{} contains bad characters", ENV_VAR_PBS_PASSWORD)),
Err(NotPresent) => None,
};
let password = get_secret_from_env(ENV_VAR_PBS_PASSWORD)?;
let options = HttpClientOptions::new_interactive(password, fingerprint);
HttpClient::new(server, port, auth_id, options)
@ -81,7 +154,7 @@ fn connect_do(server: &str, port: u16, auth_id: &Authid) -> Result<HttpClient, E
pub async fn try_get(repo: &BackupRepository, url: &str) -> Value {
let fingerprint = std::env::var(ENV_VAR_PBS_FINGERPRINT).ok();
let password = std::env::var(ENV_VAR_PBS_PASSWORD).ok();
let password = get_secret_from_env(ENV_VAR_PBS_PASSWORD).unwrap_or(None);
// ticket cache, but no questions asked
let options = HttpClientOptions::new_interactive(password, fingerprint)
@ -106,7 +179,7 @@ pub async fn try_get(repo: &BackupRepository, url: &str) -> Value {
}
pub fn complete_backup_group(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
proxmox_backup::tools::runtime::main(async { complete_backup_group_do(param).await })
pbs_runtime::main(async { complete_backup_group_do(param).await })
}
pub async fn complete_backup_group_do(param: &HashMap<String, String>) -> Vec<String> {
@ -136,7 +209,7 @@ pub async fn complete_backup_group_do(param: &HashMap<String, String>) -> Vec<St
}
pub fn complete_group_or_snapshot(arg: &str, param: &HashMap<String, String>) -> Vec<String> {
proxmox_backup::tools::runtime::main(async { complete_group_or_snapshot_do(arg, param).await })
pbs_runtime::main(async { complete_group_or_snapshot_do(arg, param).await })
}
pub async fn complete_group_or_snapshot_do(arg: &str, param: &HashMap<String, String>) -> Vec<String> {
@ -155,7 +228,7 @@ pub async fn complete_group_or_snapshot_do(arg: &str, param: &HashMap<String, St
}
pub fn complete_backup_snapshot(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
proxmox_backup::tools::runtime::main(async { complete_backup_snapshot_do(param).await })
pbs_runtime::main(async { complete_backup_snapshot_do(param).await })
}
pub async fn complete_backup_snapshot_do(param: &HashMap<String, String>) -> Vec<String> {
@ -187,7 +260,7 @@ pub async fn complete_backup_snapshot_do(param: &HashMap<String, String>) -> Vec
}
pub fn complete_server_file_name(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
proxmox_backup::tools::runtime::main(async { complete_server_file_name_do(param).await })
pbs_runtime::main(async { complete_server_file_name_do(param).await })
}
pub async fn complete_server_file_name_do(param: &HashMap<String, String>) -> Vec<String> {
@ -209,7 +282,7 @@ pub async fn complete_server_file_name_do(param: &HashMap<String, String>) -> Ve
_ => return result,
};
let query = tools::json_object_to_query(json!({
let query = json_object_to_query(json!({
"backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time(),
@ -233,7 +306,7 @@ pub async fn complete_server_file_name_do(param: &HashMap<String, String>) -> Ve
pub fn complete_archive_name(arg: &str, param: &HashMap<String, String>) -> Vec<String> {
complete_server_file_name(arg, param)
.iter()
.map(|v| tools::format::strip_server_file_extension(&v))
.map(|v| pbs_tools::format::strip_server_file_extension(&v))
.collect()
}
@ -242,7 +315,7 @@ pub fn complete_pxar_archive_name(arg: &str, param: &HashMap<String, String>) ->
.iter()
.filter_map(|name| {
if name.ends_with(".pxar.didx") {
Some(tools::format::strip_server_file_extension(name))
Some(pbs_tools::format::strip_server_file_extension(name))
} else {
None
}
@ -255,7 +328,7 @@ pub fn complete_img_archive_name(arg: &str, param: &HashMap<String, String>) ->
.iter()
.filter_map(|name| {
if name.ends_with(".img.fidx") {
Some(tools::format::strip_server_file_extension(name))
Some(pbs_tools::format::strip_server_file_extension(name))
} else {
None
}
@ -278,7 +351,7 @@ pub fn complete_chunk_size(_arg: &str, _param: &HashMap<String, String>) -> Vec<
}
pub fn complete_auth_id(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
proxmox_backup::tools::runtime::main(async { complete_auth_id_do(param).await })
pbs_runtime::main(async { complete_auth_id_do(param).await })
}
pub async fn complete_auth_id_do(param: &HashMap<String, String>) -> Vec<String> {
@ -340,7 +413,7 @@ pub fn complete_backup_source(arg: &str, param: &HashMap<String, String>) -> Vec
return result;
}
let files = tools::complete_file_name(data[1], param);
let files = pbs_tools::fs::complete_file_name(data[1], param);
for file in files {
result.push(format!("{}:{}", data[0], file));
@ -373,15 +446,3 @@ pub fn place_xdg_file(
.and_then(|base| base.place_config_file(file_name).map_err(Error::from))
.with_context(|| format!("failed to place {} in xdg home", description))
}
/// Returns a runtime dir owned by the current user.
/// Note that XDG_RUNTIME_DIR is not always available, especially for non-login users like
/// "www-data", so we use a custom one in /run/proxmox-backup/<uid> instead.
pub fn get_user_run_dir() -> Result<std::path::PathBuf, Error> {
let uid = nix::unistd::Uid::current();
let mut path: std::path::PathBuf = buildcfg::PROXMOX_BACKUP_RUN_DIR.into();
path.push(uid.to_string());
tools::create_run_dir()?;
std::fs::create_dir_all(&path)?;
Ok(path)
}


@ -1,21 +1,18 @@
use std::pin::Pin;
use std::task::{Context, Poll};
use anyhow::{bail, format_err, Error};
use futures::*;
use core::task::Context;
use std::pin::Pin;
use std::task::Poll;
use http::Uri;
use http::{Request, Response};
use hyper::client::connect::{Connected, Connection};
use hyper::client::Client;
use hyper::Body;
use pin_project::pin_project;
use pin_project_lite::pin_project;
use serde_json::Value;
use tokio::io::{AsyncRead, AsyncWrite, AsyncWriteExt, ReadBuf};
use tokio::net::UnixStream;
use crate::tools;
use proxmox::api::error::HttpError;
pub const DEFAULT_VSOCK_PORT: u16 = 807;
@ -23,11 +20,12 @@ pub const DEFAULT_VSOCK_PORT: u16 = 807;
#[derive(Clone)]
struct VsockConnector;
#[pin_project]
/// Wrapper around UnixStream so we can implement hyper::client::connect::Connection
struct UnixConnection {
#[pin]
stream: UnixStream,
pin_project! {
/// Wrapper around UnixStream so we can implement hyper::client::connect::Connection
struct UnixConnection {
#[pin]
stream: UnixStream,
}
}
impl tower_service::Service<Uri> for VsockConnector {
@ -242,7 +240,7 @@ impl VsockClient {
let request = builder.body(Body::from(data.to_string()))?;
return Ok(request);
} else {
let query = tools::json_object_to_query(data)?;
let query = pbs_tools::json::json_object_to_query(data)?;
let url: Uri =
format!("vsock://{}:{}/{}?{}", self.cid, self.port, path, query).parse()?;
let builder = make_builder("application/x-www-form-urlencoded", &url);

pbs-config/Cargo.toml Normal file

@ -0,0 +1,23 @@
[package]
name = "pbs-config"
version = "0.1.0"
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2018"
description = "Configuration file management for PBS"
[dependencies]
libc = "0.2"
anyhow = "1.0"
lazy_static = "1.4"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
openssl = "0.10"
nix = "0.19.1"
regex = "1.2"
once_cell = "1.3.1"
proxmox = { version = "0.13.3", default-features = false, features = [ "cli" ] }
pbs-api-types = { path = "../pbs-api-types" }
pbs-buildcfg = { path = "../pbs-buildcfg" }
pbs-tools = { path = "../pbs-tools" }


@ -8,228 +8,11 @@ use anyhow::{bail, Error};
use lazy_static::lazy_static;
use ::serde::{Deserialize, Serialize};
use serde::de::{value, IntoDeserializer};
use proxmox::api::schema::{Schema, StringSchema, ApiStringFormat, ApiType};
use proxmox::api::{api, schema::*};
use proxmox::constnamedbitmap;
use proxmox::tools::{fs::replace_file, fs::CreateOptions};
use pbs_api_types::{Authid, Userid, Role, ROLE_NAME_NO_ACCESS};
use crate::api2::types::{Authid, Userid};
// define Privilege bitfield
constnamedbitmap! {
/// Contains a list of privilege name to privilege value mappings.
///
/// The names are used when displaying/persisting privileges anywhere, the values are used to
/// allow easy matching of privileges as bitflags.
PRIVILEGES: u64 => {
/// Sys.Audit allows knowing about the system and its status
PRIV_SYS_AUDIT("Sys.Audit");
/// Sys.Modify allows modifying system-level configuration
PRIV_SYS_MODIFY("Sys.Modify");
/// Sys.PowerManagement allows powering off/rebooting/... the system
PRIV_SYS_POWER_MANAGEMENT("Sys.PowerManagement");
/// Datastore.Audit allows knowing about a datastore,
/// including reading the configuration entry and listing its contents
PRIV_DATASTORE_AUDIT("Datastore.Audit");
/// Datastore.Allocate allows creating or deleting datastores
PRIV_DATASTORE_ALLOCATE("Datastore.Allocate");
/// Datastore.Modify allows modifying a datastore and its contents
PRIV_DATASTORE_MODIFY("Datastore.Modify");
/// Datastore.Read allows reading arbitrary backup contents
PRIV_DATASTORE_READ("Datastore.Read");
/// Allows verifying a datastore
PRIV_DATASTORE_VERIFY("Datastore.Verify");
/// Datastore.Backup allows Datastore.Read|Verify and creating new snapshots,
/// but also requires backup ownership
PRIV_DATASTORE_BACKUP("Datastore.Backup");
/// Datastore.Prune allows deleting snapshots,
/// but also requires backup ownership
PRIV_DATASTORE_PRUNE("Datastore.Prune");
/// Permissions.Modify allows modifying ACLs
PRIV_PERMISSIONS_MODIFY("Permissions.Modify");
/// Remote.Audit allows reading remote.cfg and sync.cfg entries
PRIV_REMOTE_AUDIT("Remote.Audit");
/// Remote.Modify allows modifying remote.cfg
PRIV_REMOTE_MODIFY("Remote.Modify");
/// Remote.Read allows reading data from a configured `Remote`
PRIV_REMOTE_READ("Remote.Read");
/// Sys.Console allows access to the system's console
PRIV_SYS_CONSOLE("Sys.Console");
/// Tape.Audit allows reading tape backup configuration and status
PRIV_TAPE_AUDIT("Tape.Audit");
/// Tape.Modify allows modifying tape backup configuration
PRIV_TAPE_MODIFY("Tape.Modify");
/// Tape.Write allows writing tape media
PRIV_TAPE_WRITE("Tape.Write");
/// Tape.Read allows reading tape backup configuration and media contents
PRIV_TAPE_READ("Tape.Read");
}
}
/// Admin always has all privileges. It can do everything except a few actions
/// which are limited to the `root@pam` superuser
pub const ROLE_ADMIN: u64 = std::u64::MAX;
/// NoAccess can be used to remove privileges from specific (sub-)paths
pub const ROLE_NO_ACCESS: u64 = 0;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Audit can view configuration and status information, but not modify it.
pub const ROLE_AUDIT: u64 = 0
| PRIV_SYS_AUDIT
| PRIV_DATASTORE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Admin can do anything on the datastore.
pub const ROLE_DATASTORE_ADMIN: u64 = 0
| PRIV_DATASTORE_AUDIT
| PRIV_DATASTORE_MODIFY
| PRIV_DATASTORE_READ
| PRIV_DATASTORE_VERIFY
| PRIV_DATASTORE_BACKUP
| PRIV_DATASTORE_PRUNE;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Reader can read/verify datastore content and do restore
pub const ROLE_DATASTORE_READER: u64 = 0
| PRIV_DATASTORE_AUDIT
| PRIV_DATASTORE_VERIFY
| PRIV_DATASTORE_READ;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Backup can do backup and restore, but no prune.
pub const ROLE_DATASTORE_BACKUP: u64 = 0
| PRIV_DATASTORE_BACKUP;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.PowerUser can do backup, restore, and prune.
pub const ROLE_DATASTORE_POWERUSER: u64 = 0
| PRIV_DATASTORE_PRUNE
| PRIV_DATASTORE_BACKUP;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Audit can audit the datastore.
pub const ROLE_DATASTORE_AUDIT: u64 = 0
| PRIV_DATASTORE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Remote.Audit can audit the remote
pub const ROLE_REMOTE_AUDIT: u64 = 0
| PRIV_REMOTE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Remote.Admin can do anything on the remote.
pub const ROLE_REMOTE_ADMIN: u64 = 0
| PRIV_REMOTE_AUDIT
| PRIV_REMOTE_MODIFY
| PRIV_REMOTE_READ;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Remote.SyncOperator can read and prune on the remote.
pub const ROLE_REMOTE_SYNC_OPERATOR: u64 = 0
| PRIV_REMOTE_AUDIT
| PRIV_REMOTE_READ;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Audit can audit the tape backup configuration and media content
pub const ROLE_TAPE_AUDIT: u64 = 0
| PRIV_TAPE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Admin can do anything on the tape backup
pub const ROLE_TAPE_ADMIN: u64 = 0
| PRIV_TAPE_AUDIT
| PRIV_TAPE_MODIFY
| PRIV_TAPE_READ
| PRIV_TAPE_WRITE;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Operator can do tape backup and restore (but no configuration changes)
pub const ROLE_TAPE_OPERATOR: u64 = 0
| PRIV_TAPE_AUDIT
| PRIV_TAPE_READ
| PRIV_TAPE_WRITE;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Reader can read and inspect tape content
pub const ROLE_TAPE_READER: u64 = 0
| PRIV_TAPE_AUDIT
| PRIV_TAPE_READ;
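Because each role constant above is just an OR of privilege bits, a permission check reduces to a bit test. A minimal sketch using the constants from this file (the `role_grants` helper is illustrative, not part of the patch):
#[test]
fn audit_role_bits() {
    // A role grants a privilege iff all of that privilege's bits are set in the role mask.
    fn role_grants(role_privs: u64, required: u64) -> bool {
        (role_privs & required) == required
    }
    assert!(role_grants(ROLE_AUDIT, PRIV_DATASTORE_AUDIT));
    assert!(!role_grants(ROLE_AUDIT, PRIV_DATASTORE_MODIFY));
}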
/// NoAccess can be used to remove privileges from specific (sub-)paths
pub const ROLE_NAME_NO_ACCESS: &str = "NoAccess";
#[api(
type_text: "<role>",
)]
#[repr(u64)]
#[derive(Serialize, Deserialize)]
/// Enum representing roles via their [PRIVILEGES] combination.
///
/// Since privileges are implemented as bitflags, each unique combination of privileges maps to a
/// single, unique `u64` value that is used in this enum definition.
pub enum Role {
/// Administrator
Admin = ROLE_ADMIN,
/// Auditor
Audit = ROLE_AUDIT,
/// Disable Access
NoAccess = ROLE_NO_ACCESS,
/// Datastore Administrator
DatastoreAdmin = ROLE_DATASTORE_ADMIN,
/// Datastore Reader (inspect datastore content and do restores)
DatastoreReader = ROLE_DATASTORE_READER,
/// Datastore Backup (backup and restore owned backups)
DatastoreBackup = ROLE_DATASTORE_BACKUP,
/// Datastore PowerUser (backup, restore and prune owned backups)
DatastorePowerUser = ROLE_DATASTORE_POWERUSER,
/// Datastore Auditor
DatastoreAudit = ROLE_DATASTORE_AUDIT,
/// Remote Auditor
RemoteAudit = ROLE_REMOTE_AUDIT,
/// Remote Administrator
RemoteAdmin = ROLE_REMOTE_ADMIN,
/// Synchronisation Operator
RemoteSyncOperator = ROLE_REMOTE_SYNC_OPERATOR,
/// Tape Auditor
TapeAudit = ROLE_TAPE_AUDIT,
/// Tape Administrator
TapeAdmin = ROLE_TAPE_ADMIN,
/// Tape Operator
TapeOperator = ROLE_TAPE_OPERATOR,
/// Tape Reader
TapeReader = ROLE_TAPE_READER,
}
impl FromStr for Role {
type Err = value::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Self::deserialize(s.into_deserializer())
}
}
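A small sketch of how the string form maps back to the bitmask, assuming the serialized role names match the variant names (e.g. "DatastoreBackup"):
#[test]
fn role_name_maps_to_bitmask() {
    use std::str::FromStr;
    // Parsing a role name yields the variant whose u64 discriminant is its privilege set.
    let role = Role::from_str("DatastoreBackup").unwrap();
    assert_eq!(role as u64, ROLE_DATASTORE_BACKUP);
}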
use crate::{open_backup_lockfile, replace_backup_config, BackupLockGuard};
lazy_static! {
/// Map of pre-defined [Roles](Role) to their associated [privileges](PRIVILEGES) combination and
@ -251,7 +34,7 @@ lazy_static! {
};
}
pub(crate) fn split_acl_path(path: &str) -> Vec<&str> {
pub fn split_acl_path(path: &str) -> Vec<&str> {
let items = path.split('/');
let mut components = vec![];
@ -283,11 +66,17 @@ pub fn check_acl_path(path: &str) -> Result<(), Error> {
return Ok(());
}
match components[1] {
"acl" | "users" => {
"acl" | "users" | "domains" => {
if components_len == 2 {
return Ok(());
}
}
// /access/openid/{endpoint}
"openid" => {
if components_len <= 3 {
return Ok(());
}
}
_ => {}
}
}
@ -308,7 +97,7 @@ pub fn check_acl_path(path: &str) -> Result<(), Error> {
return Ok(());
}
match components[1] {
"disks" | "log" | "status" | "tasks" | "time" => {
"certificates" | "disks" | "log" | "status" | "tasks" | "time" => {
if components_len == 2 {
return Ok(());
}
@ -386,8 +175,8 @@ pub struct AclTree {
/// Node representing ACLs for a certain ACL path.
#[derive(Default)]
pub struct AclTreeNode {
/// [User](crate::config::user::User) or
/// [Token](crate::config::user::ApiToken) ACLs for this node.
/// [User](pbs_api_types::User) or
/// [Token](pbs_api_types::ApiToken) ACLs for this node.
pub users: HashMap<Authid, HashMap<String, bool>>,
/// `Group` ACLs for this node (not yet implemented)
pub groups: HashMap<String, HashMap<String, bool>>,
@ -406,9 +195,9 @@ impl AclTreeNode {
}
/// Returns the applicable [Role]s and their propagation status for a given
/// [Authid](crate::api2::types::Authid).
/// [Authid](pbs_api_types::Authid).
///
/// If the `Authid` is a [User](crate::config::user::User) that has no specific `Roles` configured on this node,
/// If the `Authid` is a [User](pbs_api_types::User) that has no specific `Roles` configured on this node,
/// applicable `Group` roles will be returned instead.
///
/// If `leaf` is `false`, only those roles where the propagate flag in the ACL is set to `true`
@ -783,8 +572,8 @@ impl AclTree {
Ok((tree, digest))
}
#[cfg(test)]
pub(crate) fn from_raw(raw: &str) -> Result<Self, Error> {
/// This is used for testing
pub fn from_raw(raw: &str) -> Result<Self, Error> {
let mut tree = Self::new();
for (linenr, line) in raw.lines().enumerate() {
let line = line.trim();
@ -837,6 +626,11 @@ pub const ACL_CFG_FILENAME: &str = "/etc/proxmox-backup/acl.cfg";
/// Path used to lock the [AclTree] when modifying.
pub const ACL_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.acl.lck";
/// Get exclusive lock
pub fn lock_config() -> Result<BackupLockGuard, Error> {
open_backup_lockfile(ACL_CFG_LOCKFILE, None, true)
}
/// Reads the [AclTree] from the [default path](ACL_CFG_FILENAME).
pub fn config() -> Result<(AclTree, [u8; 32]), Error> {
let path = PathBuf::from(ACL_CFG_FILENAME);
@ -903,18 +697,7 @@ pub fn save_config(acl: &AclTree) -> Result<(), Error> {
acl.write_config(&mut raw)?;
let backup_user = crate::backup::backup_user()?;
let mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
// set the correct owner/group/permissions while saving file
// owner(rw) = root, group(r)= backup
let options = CreateOptions::new()
.perm(mode)
.owner(nix::unistd::ROOT)
.group(backup_user.gid);
replace_file(ACL_CFG_FILENAME, &raw, options)?;
Ok(())
replace_backup_config(ACL_CFG_FILENAME, &raw)
}
#[cfg(test)]
@ -922,7 +705,7 @@ mod test {
use super::AclTree;
use anyhow::Error;
use crate::api2::types::Authid;
use pbs_api_types::Authid;
fn check_roles(tree: &AclTree, auth_id: &Authid, path: &str, expected_roles: &str) {
let path_vec = super::split_acl_path(path);


@ -7,10 +7,12 @@ use anyhow::{Error, bail};
use proxmox::api::section_config::SectionConfigData;
use lazy_static::lazy_static;
use proxmox::api::UserInformation;
use proxmox::tools::time::epoch_i64;
use super::acl::{AclTree, ROLE_NAMES, ROLE_ADMIN};
use super::user::{ApiToken, User};
use crate::api2::types::{Authid, Userid};
use pbs_api_types::{Authid, Userid, User, ApiToken, ROLE_ADMIN};
use crate::acl::{AclTree, ROLE_NAMES};
use crate::memcom::Memcom;
/// Cache User/Group/Token/Acl configuration data for fast permission tests
pub struct CachedUserInfo {
@ -18,16 +20,15 @@ pub struct CachedUserInfo {
acl_tree: Arc<AclTree>,
}
fn now() -> i64 { unsafe { libc::time(std::ptr::null_mut()) } }
struct ConfigCache {
data: Option<Arc<CachedUserInfo>>,
last_update: i64,
last_user_cache_generation: usize,
}
lazy_static! {
static ref CACHED_CONFIG: RwLock<ConfigCache> = RwLock::new(
ConfigCache { data: None, last_update: 0 }
ConfigCache { data: None, last_update: 0, last_user_cache_generation: 0 }
);
}
@ -35,10 +36,16 @@ impl CachedUserInfo {
/// Returns a cached instance (up to 5 seconds old).
pub fn new() -> Result<Arc<Self>, Error> {
let now = now();
let now = epoch_i64();
let memcom = Memcom::new()?;
let user_cache_generation = memcom.user_cache_generation();
{ // limit scope
let cache = CACHED_CONFIG.read().unwrap();
if (now - cache.last_update) < 5 {
if (user_cache_generation == cache.last_user_cache_generation) &&
((now - cache.last_update) < 5)
{
if let Some(ref config) = cache.data {
return Ok(config.clone());
}
@ -46,53 +53,47 @@ impl CachedUserInfo {
}
let config = Arc::new(CachedUserInfo {
user_cfg: super::user::cached_config()?,
acl_tree: super::acl::cached_config()?,
user_cfg: crate::user::cached_config()?,
acl_tree: crate::acl::cached_config()?,
});
let mut cache = CACHED_CONFIG.write().unwrap();
cache.last_update = now;
cache.last_user_cache_generation = user_cache_generation;
cache.data = Some(config.clone());
Ok(config)
}
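Callers only ever see the cached snapshot; a brief sketch of typical use (function name illustrative):
// Sketch: permission-sensitive code fetches a (possibly cached) user info snapshot.
fn auth_is_active(auth_id: &Authid) -> Result<bool, anyhow::Error> {
    let user_info = CachedUserInfo::new()?;
    Ok(user_info.is_active_auth_id(auth_id))
}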
#[cfg(test)]
pub(crate) fn test_new(user_cfg: SectionConfigData, acl_tree: AclTree) -> Self {
/// Only exposed for testing
#[doc(hidden)]
pub fn test_new(user_cfg: SectionConfigData, acl_tree: AclTree) -> Self {
Self {
user_cfg: Arc::new(user_cfg),
acl_tree: Arc::new(acl_tree),
}
}
/// Test if a user_id is enabled and not expired
pub fn is_active_user_id(&self, userid: &Userid) -> bool {
if let Ok(info) = self.user_cfg.lookup::<User>("user", userid.as_str()) {
info.is_active()
} else {
false
}
}
/// Test if an authentication id is enabled and not expired
pub fn is_active_auth_id(&self, auth_id: &Authid) -> bool {
let userid = auth_id.user();
if let Ok(info) = self.user_cfg.lookup::<User>("user", userid.as_str()) {
if !info.enable.unwrap_or(true) {
return false;
}
if let Some(expire) = info.expire {
if expire > 0 && expire <= now() {
return false;
}
}
} else {
if !self.is_active_user_id(userid) {
return false;
}
if auth_id.is_token() {
if let Ok(info) = self.user_cfg.lookup::<ApiToken>("token", &auth_id.to_string()) {
if !info.enable.unwrap_or(true) {
return false;
}
if let Some(expire) = info.expire {
if expire > 0 && expire <= now() {
return false;
}
}
return true;
return info.is_active();
} else {
return false;
}

pbs-config/src/datastore.rs Normal file

@ -0,0 +1,100 @@
use anyhow::{Error};
use lazy_static::lazy_static;
use std::collections::HashMap;
use proxmox::api::{
schema::{ApiType, Schema},
section_config::{
SectionConfig,
SectionConfigData,
SectionConfigPlugin,
}
};
use pbs_api_types::{DataStoreConfig, DATASTORE_SCHEMA};
use crate::{open_backup_lockfile, replace_backup_config, BackupLockGuard};
lazy_static! {
pub static ref CONFIG: SectionConfig = init();
}
fn init() -> SectionConfig {
let obj_schema = match DataStoreConfig::API_SCHEMA {
Schema::Object(ref obj_schema) => obj_schema,
_ => unreachable!(),
};
let plugin = SectionConfigPlugin::new("datastore".to_string(), Some(String::from("name")), obj_schema);
let mut config = SectionConfig::new(&DATASTORE_SCHEMA);
config.register_plugin(plugin);
config
}
pub const DATASTORE_CFG_FILENAME: &str = "/etc/proxmox-backup/datastore.cfg";
pub const DATASTORE_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.datastore.lck";
/// Get exclusive lock
pub fn lock_config() -> Result<BackupLockGuard, Error> {
open_backup_lockfile(DATASTORE_CFG_LOCKFILE, None, true)
}
pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
let content = proxmox::tools::fs::file_read_optional_string(DATASTORE_CFG_FILENAME)?
.unwrap_or_else(|| "".to_string());
let digest = openssl::sha::sha256(content.as_bytes());
let data = CONFIG.parse(DATASTORE_CFG_FILENAME, &content)?;
Ok((data, digest))
}
pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
let raw = CONFIG.write(DATASTORE_CFG_FILENAME, &config)?;
replace_backup_config(DATASTORE_CFG_FILENAME, raw.as_bytes())
}
// shell completion helper
pub fn complete_datastore_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter().map(|(id, _)| id.to_string()).collect(),
Err(_) => return vec![],
}
}
pub fn complete_acl_path(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
let mut list = Vec::new();
list.push(String::from("/"));
list.push(String::from("/datastore"));
list.push(String::from("/datastore/"));
if let Ok((data, _digest)) = config() {
for id in data.sections.keys() {
list.push(format!("/datastore/{}", id));
}
}
list.push(String::from("/remote"));
list.push(String::from("/remote/"));
list.push(String::from("/tape"));
list.push(String::from("/tape/"));
list.push(String::from("/tape/drive"));
list.push(String::from("/tape/drive/"));
list.push(String::from("/tape/changer"));
list.push(String::from("/tape/changer/"));
list.push(String::from("/tape/pool"));
list.push(String::from("/tape/pool/"));
list.push(String::from("/tape/job"));
list.push(String::from("/tape/job/"));
list
}
pub fn complete_calendar_event(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
// just give some hints about possible values
["minutely", "hourly", "daily", "mon..fri", "0:0"]
.iter().map(|s| String::from(*s)).collect()
}
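As a usage sketch, listing the configured datastore names from the parsed section config (the function name is illustrative):
// Sketch: read datastore.cfg and return the section identifiers (datastore names).
fn list_datastore_names() -> Result<Vec<String>, anyhow::Error> {
    let (config, _digest) = config()?;
    Ok(config.sections.keys().cloned().collect())
}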

pbs-config/src/domains.rs Normal file

@ -0,0 +1,136 @@
use anyhow::{Error};
use lazy_static::lazy_static;
use std::collections::HashMap;
use serde::{Serialize, Deserialize};
use proxmox::api::{
api,
schema::*,
section_config::{
SectionConfig,
SectionConfigData,
SectionConfigPlugin,
}
};
use pbs_api_types::{REALM_ID_SCHEMA, SINGLE_LINE_COMMENT_SCHEMA};
use crate::{open_backup_lockfile, replace_backup_config, BackupLockGuard};
lazy_static! {
pub static ref CONFIG: SectionConfig = init();
}
#[api()]
#[derive(Eq, PartialEq, Debug, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Use the value of this attribute/claim as the unique user name. It is
/// up to the identity provider to guarantee its uniqueness. The
/// OpenID specification only guarantees that the Subject ('sub') claim is
/// unique. Also make sure that users cannot change this attribute
/// themselves!
pub enum OpenIdUserAttribute {
/// Subject (OpenId 'sub' claim)
Subject,
/// Username (OpenId 'preferred_username' claim)
Username,
/// Email (OpenId 'email' claim)
Email,
}
#[api(
properties: {
realm: {
schema: REALM_ID_SCHEMA,
},
"client-key": {
optional: true,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
autocreate: {
optional: true,
default: false,
},
"username-claim": {
type: OpenIdUserAttribute,
optional: true,
},
},
)]
#[derive(Serialize,Deserialize,Updater)]
#[serde(rename_all="kebab-case")]
/// OpenID configuration properties.
pub struct OpenIdRealmConfig {
#[updater(skip)]
pub realm: String,
/// OpenID Issuer Url
pub issuer_url: String,
/// OpenID Client ID
pub client_id: String,
/// OpenID Client Key
#[serde(skip_serializing_if="Option::is_none")]
pub client_key: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
/// Automatically create users if they do not exist.
#[serde(skip_serializing_if="Option::is_none")]
pub autocreate: Option<bool>,
#[updater(skip)]
#[serde(skip_serializing_if="Option::is_none")]
pub username_claim: Option<OpenIdUserAttribute>,
}
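For orientation, a hedged sketch of constructing one realm entry programmatically; all values are made up:
fn example_realm() -> OpenIdRealmConfig {
    // Illustrative values only; real entries come from domains.cfg / the API.
    OpenIdRealmConfig {
        realm: "company-sso".to_string(),
        issuer_url: "https://sso.example.com/realms/main".to_string(),
        client_id: "pbs".to_string(),
        client_key: None,
        comment: Some("Example company SSO".to_string()),
        autocreate: Some(true),
        username_claim: Some(OpenIdUserAttribute::Username),
    }
}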
fn init() -> SectionConfig {
let obj_schema = match OpenIdRealmConfig::API_SCHEMA {
Schema::Object(ref obj_schema) => obj_schema,
_ => unreachable!(),
};
let plugin = SectionConfigPlugin::new("openid".to_string(), Some(String::from("realm")), obj_schema);
let mut config = SectionConfig::new(&REALM_ID_SCHEMA);
config.register_plugin(plugin);
config
}
pub const DOMAINS_CFG_FILENAME: &str = "/etc/proxmox-backup/domains.cfg";
pub const DOMAINS_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.domains.lck";
/// Get exclusive lock
pub fn lock_config() -> Result<BackupLockGuard, Error> {
open_backup_lockfile(DOMAINS_CFG_LOCKFILE, None, true)
}
pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
let content = proxmox::tools::fs::file_read_optional_string(DOMAINS_CFG_FILENAME)?
.unwrap_or_else(|| "".to_string());
let digest = openssl::sha::sha256(content.as_bytes());
let data = CONFIG.parse(DOMAINS_CFG_FILENAME, &content)?;
Ok((data, digest))
}
pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
let raw = CONFIG.write(DOMAINS_CFG_FILENAME, &config)?;
replace_backup_config(DOMAINS_CFG_FILENAME, raw.as_bytes())
}
// shell completion helper
pub fn complete_realm_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter().map(|(id, _)| id.to_string()).collect(),
Err(_) => return vec![],
}
}
pub fn complete_openid_realm_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter()
.filter_map(|(id, (t, _))| if t == "openid" { Some(id.to_string()) } else { None })
.collect(),
Err(_) => return vec![],
}
}


@ -25,22 +25,15 @@ use proxmox::{
SectionConfigPlugin,
},
},
tools::fs::{
open_file_locked,
replace_file,
CreateOptions,
},
};
use crate::{
api2::types::{
DRIVE_NAME_SCHEMA,
VirtualTapeDrive,
LtoTapeDrive,
ScsiTapeChanger,
},
use crate::{open_backup_lockfile, replace_backup_config, BackupLockGuard};
use pbs_api_types::{
DRIVE_NAME_SCHEMA, VirtualTapeDrive, LtoTapeDrive, ScsiTapeChanger,
};
lazy_static! {
/// Static [`SectionConfig`] to access parser/writer functions.
pub static ref CONFIG: SectionConfig = init();
@ -79,8 +72,8 @@ pub const DRIVE_CFG_FILENAME: &str = "/etc/proxmox-backup/tape.cfg";
pub const DRIVE_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.tape.lck";
/// Get exclusive lock
pub fn lock() -> Result<std::fs::File, Error> {
open_file_locked(DRIVE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)
pub fn lock() -> Result<BackupLockGuard, Error> {
open_backup_lockfile(DRIVE_CFG_LOCKFILE, None, true)
}
/// Read and parse the configuration file
@ -97,19 +90,7 @@ pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
/// Save the configuration file
pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
let raw = CONFIG.write(DRIVE_CFG_FILENAME, &config)?;
let backup_user = crate::backup::backup_user()?;
let mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
// set the correct owner/group/permissions while saving file
// owner(rw) = root, group(r)= backup
let options = CreateOptions::new()
.perm(mode)
.owner(nix::unistd::ROOT)
.group(backup_user.gid);
replace_file(DRIVE_CFG_FILENAME, raw.as_bytes(), options)?;
Ok(())
replace_backup_config(DRIVE_CFG_FILENAME, raw.as_bytes())
}
/// Check if the specified drive name exists in the config.


@ -1,15 +1,15 @@
use anyhow::{bail, format_err, Context, Error};
use serde::{Deserialize, Serialize};
use crate::backup::{CryptConfig, Fingerprint};
use std::io::Write;
use std::path::Path;
use anyhow::{bail, format_err, Context, Error};
use serde::{Deserialize, Serialize};
use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
use proxmox::try_block;
use crate::api2::types::{KeyInfo, Kdf};
use pbs_api_types::{Kdf, KeyInfo, Fingerprint};
use pbs_tools::crypt_config::CryptConfig;
/// Key derivation function configuration
#[derive(Deserialize, Serialize, Clone, Debug)]
@ -100,7 +100,7 @@ impl From<&KeyConfig> for KeyInfo {
fingerprint: key_config
.fingerprint
.as_ref()
.map(|fp| crate::tools::format::as_fingerprint(fp.bytes())),
.map(|fp| pbs_tools::format::as_fingerprint(fp.bytes())),
hint: key_config.hint.clone(),
}
}
@ -120,7 +120,7 @@ impl KeyConfig {
pub fn without_password(raw_key: [u8; 32]) -> Result<Self, Error> {
// always compute fingerprint
let crypt_config = CryptConfig::new(raw_key.clone())?;
let fingerprint = Some(crypt_config.fingerprint());
let fingerprint = Some(Fingerprint::new(crypt_config.fingerprint()));
let created = proxmox::tools::time::epoch_i64();
Ok(Self {
@ -187,7 +187,7 @@ impl KeyConfig {
// always compute fingerprint
let crypt_config = CryptConfig::new(raw_key.clone())?;
let fingerprint = Some(crypt_config.fingerprint());
let fingerprint = Some(Fingerprint::new(crypt_config.fingerprint()));
Ok(Self {
kdf: Some(kdf),
@ -258,7 +258,7 @@ impl KeyConfig {
result.copy_from_slice(&key);
let crypt_config = CryptConfig::new(result.clone())?;
let fingerprint = crypt_config.fingerprint();
let fingerprint = Fingerprint::new(crypt_config.fingerprint());
if let Some(ref stored_fingerprint) = self.fingerprint {
if &fingerprint != stored_fingerprint {
bail!(

pbs-config/src/lib.rs Normal file

@ -0,0 +1,106 @@
pub mod acl;
mod cached_user_info;
pub use cached_user_info::CachedUserInfo;
pub mod datastore;
pub mod domains;
pub mod drive;
pub mod key_config;
pub mod media_pool;
pub mod network;
pub mod remote;
pub mod sync;
pub mod tape_encryption_keys;
pub mod tape_job;
pub mod token_shadow;
pub mod user;
pub mod verify;
pub(crate) mod memcom;
use anyhow::{format_err, Error};
pub use pbs_buildcfg::{BACKUP_USER_NAME, BACKUP_GROUP_NAME};
/// Return User info for the 'backup' user (``getpwnam_r(3)``)
pub fn backup_user() -> Result<nix::unistd::User, Error> {
pbs_tools::sys::query_user(BACKUP_USER_NAME)?
.ok_or_else(|| format_err!("Unable to lookup '{}' user.", BACKUP_USER_NAME))
}
/// Return Group info for the 'backup' group (``getgrnam(3)``)
pub fn backup_group() -> Result<nix::unistd::Group, Error> {
pbs_tools::sys::query_group(BACKUP_GROUP_NAME)?
.ok_or_else(|| format_err!("Unable to lookup '{}' group.", BACKUP_GROUP_NAME))
}
pub struct BackupLockGuard(Option<std::fs::File>);
#[doc(hidden)]
/// Note: do not use for production code, this is only intended for tests
pub unsafe fn create_mocked_lock() -> BackupLockGuard {
BackupLockGuard(None)
}
/// Open or create a lock file owned by user "backup" and lock it.
///
/// Owner/Group of the file is set to backup/backup.
/// File mode is 0660.
/// Default timeout is 10 seconds.
///
/// Note: This method needs to be called by user "root" or "backup".
pub fn open_backup_lockfile<P: AsRef<std::path::Path>>(
path: P,
timeout: Option<std::time::Duration>,
exclusive: bool,
) -> Result<BackupLockGuard, Error> {
let user = backup_user()?;
let options = proxmox::tools::fs::CreateOptions::new()
.perm(nix::sys::stat::Mode::from_bits_truncate(0o660))
.owner(user.uid)
.group(user.gid);
let timeout = timeout.unwrap_or(std::time::Duration::new(10, 0));
let file = proxmox::tools::fs::open_file_locked(&path, timeout, exclusive, options)?;
Ok(BackupLockGuard(Some(file)))
}
/// Atomically write data to file owned by "root:backup" with permission "0640"
///
/// Only the superuser can write those files, but group 'backup' can read them.
pub fn replace_backup_config<P: AsRef<std::path::Path>>(
path: P,
data: &[u8],
) -> Result<(), Error> {
let backup_user = backup_user()?;
let mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
// set the correct owner/group/permissions while saving file
// owner(rw) = root, group(r)= backup
let options = proxmox::tools::fs::CreateOptions::new()
.perm(mode)
.owner(nix::unistd::ROOT)
.group(backup_user.gid);
proxmox::tools::fs::replace_file(path, data, options)?;
Ok(())
}
/// Atomically write data to file owned by "root:root" with permission "0600"
///
/// Only the superuser can read and write those files.
pub fn replace_secret_config<P: AsRef<std::path::Path>>(
path: P,
data: &[u8],
) -> Result<(), Error> {
let mode = nix::sys::stat::Mode::from_bits_truncate(0o0600);
// set the correct owner/group/permissions while saving file
// owner(rw) = root, group = root (no access for group or others)
let options = proxmox::tools::fs::CreateOptions::new()
.perm(mode)
.owner(nix::unistd::ROOT)
.group(nix::unistd::Gid::from_raw(0));
proxmox::tools::fs::replace_file(path, data, options)?;
Ok(())
}
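Taken together, these helpers give every config module the same update pattern, as datastore.rs above already shows; a rough sketch with the mutation elided and the function name illustrative:
// Sketch: hold the exclusive lock while reading, mutating and rewriting a config file.
fn update_datastore_config() -> Result<(), anyhow::Error> {
    let _lock = crate::datastore::lock_config()?;   // BackupLockGuard, released on drop
    let (mut config, _digest) = crate::datastore::config()?;
    // ... mutate `config` here ...
    crate::datastore::save_config(&config)          // replace_backup_config under the hood
}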


@ -20,19 +20,11 @@ use proxmox::{
SectionConfigPlugin,
}
},
tools::fs::{
open_file_locked,
replace_file,
CreateOptions,
},
};
use crate::{
api2::types::{
MEDIA_POOL_NAME_SCHEMA,
MediaPoolConfig,
},
};
use pbs_api_types::{MEDIA_POOL_NAME_SCHEMA, MediaPoolConfig};
use crate::{open_backup_lockfile, replace_backup_config, BackupLockGuard};
lazy_static! {
/// Static [`SectionConfig`] to access parser/writer functions.
@ -57,10 +49,9 @@ pub const MEDIA_POOL_CFG_FILENAME: &str = "/etc/proxmox-backup/media-pool.cfg";
/// Lock file name (used to prevent concurrent access)
pub const MEDIA_POOL_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.media-pool.lck";
/// Get exclusive lock
pub fn lock() -> Result<std::fs::File, Error> {
open_file_locked(MEDIA_POOL_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)
pub fn lock() -> Result<BackupLockGuard, Error> {
open_backup_lockfile(MEDIA_POOL_CFG_LOCKFILE, None, true)
}
/// Read and parse the configuration file
@ -77,19 +68,7 @@ pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
/// Save the configuration file
pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
let raw = CONFIG.write(MEDIA_POOL_CFG_FILENAME, &config)?;
let backup_user = crate::backup::backup_user()?;
let mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
// set the correct owner/group/permissions while saving file
// owner(rw) = root, group(r)= backup
let options = CreateOptions::new()
.perm(mode)
.owner(nix::unistd::ROOT)
.group(backup_user.gid);
replace_file(MEDIA_POOL_CFG_FILENAME, raw.as_bytes(), options)?;
Ok(())
replace_backup_config(MEDIA_POOL_CFG_FILENAME, raw.as_bytes())
}
// shell completion helper

pbs-config/src/memcom.rs Normal file

@ -0,0 +1,81 @@
//! Memory-based communication channel between proxy & daemon for things such as cache
//! invalidation.
use std::os::unix::io::AsRawFd;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use anyhow::Error;
use nix::fcntl::OFlag;
use nix::sys::mman::{MapFlags, ProtFlags};
use nix::sys::stat::Mode;
use once_cell::sync::OnceCell;
use proxmox::tools::fs::CreateOptions;
use proxmox::tools::mmap::Mmap;
/// In-memory communication channel.
pub struct Memcom {
mmap: Mmap<u8>,
}
#[repr(C)]
struct Head {
// User (user.cfg) cache generation/version.
user_cache_generation: AtomicUsize,
}
static INSTANCE: OnceCell<Arc<Memcom>> = OnceCell::new();
const MEMCOM_FILE_PATH: &str = pbs_buildcfg::rundir!("/proxmox-backup-memcom");
const EMPTY_PAGE: [u8; 4096] = [0u8; 4096];
impl Memcom {
/// Open the memory based communication channel singleton.
pub fn new() -> Result<Arc<Self>, Error> {
INSTANCE.get_or_try_init(Self::open).map(Arc::clone)
}
// Actual work of `new`:
fn open() -> Result<Arc<Self>, Error> {
let user = crate::backup_user()?;
let options = CreateOptions::new()
.perm(Mode::from_bits_truncate(0o660))
.owner(user.uid)
.group(user.gid);
let file = proxmox::tools::fs::atomic_open_or_create_file(
MEMCOM_FILE_PATH,
OFlag::O_RDWR | OFlag::O_CLOEXEC,
&EMPTY_PAGE, options)?;
let mmap = unsafe {
Mmap::<u8>::map_fd(
file.as_raw_fd(),
0,
4096,
ProtFlags::PROT_READ | ProtFlags::PROT_WRITE,
MapFlags::MAP_SHARED | MapFlags::MAP_NORESERVE | MapFlags::MAP_POPULATE,
)?
};
Ok(Arc::new(Self { mmap }))
}
// Shortcut to view the mapped page as a `Head`.
fn head(&self) -> &Head {
unsafe { &*(self.mmap.as_ptr() as *const u8 as *const Head) }
}
/// Returns the user cache generation number.
pub fn user_cache_generation(&self) -> usize {
self.head().user_cache_generation.load(Ordering::Acquire)
}
/// Increase the user cache generation number.
pub fn increase_user_cache_generation(&self) {
self.head()
.user_cache_generation
.fetch_add(1, Ordering::AcqRel);
}
}
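A short sketch of the intended use: whichever process modifies user.cfg bumps the shared counter, so other processes drop their cached CachedUserInfo within the five-second window (function name illustrative):
// Sketch: signal a user.cfg change to every process sharing the memcom page.
fn notify_user_config_changed() -> Result<(), anyhow::Error> {
    let memcom = Memcom::new()?;
    memcom.increase_user_cache_generation();
    Ok(())
}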

Some files were not shown because too many files have changed in this diff.