Compare commits

46 Commits

Author SHA1 Message Date
a67874b6ae bump version to 1.1.14-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-06-02 18:08:02 +02:00
9402e9f357 cargo: update proxmox-acme-rs to 0.3
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-06-02 18:05:19 +02:00
b75bb5434e d/control.in: update
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-06-02 18:05:06 +02:00
ec44c3113b backport "datastore: lookup: reuse ChunkStore on stale datastore re-open"
Backport of commit 0bd9c87010

When re-opening a datastore because the cached entry is stale (which
only happens on a verify-new config change), the chunk store was also
re-opened. That in turn creates a new ProcessLocker, losing any
existing shared lock, which can cause conflicts between long-running
(24h+) backups and GC.

To fix this, reuse the existing ChunkStore, and thus its
ProcessLocker, when creating an up-to-date datastore instance on
lookup, since only the datastore config should be reloaded. This is
fine, as the ChunkStore path is not updatable over our API.

Note that this is a precautionary backport; the underlying issue it
fixes is relatively unlikely to cause trouble in the 1.x branch, since
the datastore is not re-opened that often there.

Originally-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-06-02 18:00:01 +02:00
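A minimal sketch of the reuse pattern described above, with assumed
types and a simplified name-keyed cache (not the actual PBS code): the
cached ChunkStore stays behind an Arc, so a stale lookup rebuilds only
the config-derived fields while the ProcessLocker inside the chunk
store survives the re-open.

    use std::collections::HashMap;
    use std::sync::{Arc, Mutex};

    struct ChunkStore; // owns the ProcessLocker in the real code

    struct DataStore {
        chunk_store: Arc<ChunkStore>,
        verify_new: bool, // config-derived state
        generation: u64,  // bumped when datastore.cfg changes
    }

    // hypothetical process-wide cache, keyed by datastore name
    static STORES: Mutex<Option<HashMap<String, Arc<DataStore>>>> = Mutex::new(None);

    fn lookup(name: &str, generation: u64, verify_new: bool) -> Arc<DataStore> {
        let mut guard = STORES.lock().unwrap();
        let map = guard.get_or_insert_with(HashMap::new);

        let chunk_store = match map.get(name) {
            Some(cached) if cached.generation == generation => return Arc::clone(cached),
            // stale entry: reuse the ChunkStore (and thus its ProcessLocker);
            // only the config-derived fields get rebuilt
            Some(cached) => Arc::clone(&cached.chunk_store),
            // first lookup: the real code opens the chunk store from disk here
            None => Arc::new(ChunkStore),
        };

        let fresh = Arc::new(DataStore { chunk_store, verify_new, generation });
        map.insert(name.to_string(), Arc::clone(&fresh));
        fresh
    }

    fn main() {
        let a = lookup("store1", 1, false);
        let b = lookup("store1", 2, true); // config changed -> cached entry stale
        // the chunk store, and with it any held process locks, is shared
        assert!(Arc::ptr_eq(&a.chunk_store, &b.chunk_store));
    }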
cb21bf7454 ui: add notice for nearing PBS 1.1 End-of-Life
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-06-02 17:35:03 +02:00
a1cffef503 pbs-tools: LruCache: implement Drop
this fixes the memory leaked by the cache: we only freed the pointers
held in the map/list, not the underlying chunks

moves the 'clear' implementation out of the trait-bounded impl block so
that Drop can reuse it

this is used e.g. for file download from a pxar

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
(cherry picked from commit 98983a9dab)
2022-01-20 15:46:35 +01:00
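The trick mentioned above, sketched with assumed types: clear() lives
in an impl block without trait bounds, because a Drop impl cannot add
bounds of its own; clear() then frees the boxed nodes themselves, not
just the pointer entries.

    use std::collections::HashMap;
    use std::ptr::NonNull;

    struct Node<V> {
        value: V,
        // prev/next pointers of the intrusive LRU list omitted in this sketch
    }

    struct LruCache<K, V> {
        // values are raw nodes created with Box::into_raw on insert
        map: HashMap<K, NonNull<Node<V>>>,
    }

    // note: no `K: Hash + Eq` bounds on this impl block, so the bound-less
    // Drop impl below is allowed to call clear()
    impl<K, V> LruCache<K, V> {
        fn clear(&mut self) {
            for (_key, node) in self.map.drain() {
                // take back ownership of the heap allocation, so the node and
                // the value it owns (e.g. a chunk) are actually freed -- not
                // just the pointer entry in the map
                unsafe { drop(Box::from_raw(node.as_ptr())) };
            }
        }
    }

    impl<K, V> Drop for LruCache<K, V> {
        fn drop(&mut self) {
            self.clear();
        }
    }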
9b00099ead drop RawWaker usage
this was also leaking a refcount before; that is fixed now

See-also: proxmox/proxmox-async:
  * d0a3e38006fe ("drop RawWaker usage")
  * ff132e93c6fd ("rustfmt")

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-01-20 15:41:00 +01:00
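For reference, a safe std-only alternative to a hand-rolled RawWaker:
the std::task::Wake trait builds the vtable from an Arc, whose
clone/drop do the reference counting automatically. This is a sketch,
not the proxmox-async implementation the commit refers to.

    use std::future::Future;
    use std::pin::Pin;
    use std::sync::Arc;
    use std::task::{Context, Poll, Wake, Waker};
    use std::thread::{self, Thread};

    // Arc's clone/drop do the reference counting that a hand-written
    // RawWakerVTable has to get right manually
    struct ThreadWaker(Thread);

    impl Wake for ThreadWaker {
        fn wake(self: Arc<Self>) {
            self.0.unpark();
        }
    }

    // toy block_on, just to show the Waker in action
    fn block_on<F: Future>(mut fut: F) -> F::Output {
        let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
        let mut cx = Context::from_waker(&waker);
        // safe here because `fut` never moves again before being dropped
        let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
        loop {
            match fut.as_mut().poll(&mut cx) {
                Poll::Ready(out) => return out,
                Poll::Pending => thread::park(),
            }
        }
    }

    fn main() {
        assert_eq!(block_on(async { 21 * 2 }), 42);
    }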
d2351f1a81 bump version to 1.1.13-3
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-19 10:21:23 +02:00
869e4601b4 api daemons: fix sending log-reopen command
send_command serializes everything, so it cannot be used to send an
already serialized, raw command. Normally that means we get an error
like
> 'unable to parse parameters (expected json object)'
when used that way.

Switch over to send_raw_command, which does not re-serialize the
command.

Fixes: 45b8a032 ("refactor send_command")
Originally-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-11 14:56:30 +02:00
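An illustrative sketch of the distinction (the signatures are
assumptions and use serde_json; the real helpers live in the proxmox
server code and differ):

    use std::io::Write;
    use std::os::unix::net::UnixStream;

    fn send_command(sock: &mut UnixStream, command: &str, args: &serde_json::Value) -> std::io::Result<()> {
        // serializes its arguments into the expected JSON-object envelope
        let line = serde_json::json!({ "command": command, "args": args });
        writeln!(sock, "{}", line)
    }

    fn send_raw_command(sock: &mut UnixStream, raw: &str) -> std::io::Result<()> {
        // passes a pre-formatted command line through unchanged; needed for
        // commands that are already complete JSON objects, like log-reopen
        writeln!(sock, "{}", raw.trim_end())
    }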
238e5b573e buildsys: prune-sim is not generated, do not cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-26 16:41:34 +02:00
996680a336 bump version to 1.1.13-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-26 16:40:37 +02:00
94f6127711 Revert "auth: 'crypt' is not thread safe"
With this I'm getting coredumps on every login:

> Process 20957 (proxmox-backup-) of user 34 dumped core.
>
> Stack trace of thread 20987:
> #0  0x0000563dec9ac37f _ZN3std3sys4unix14stack_overflow3imp14signal_handler17ha95ed06a038ca319E.llvm.11547235952357801165 (proxmox-backup-proxy)
> #1  0x00007f2638de9840 __restore_rt (libc.so.6)
> #2  0x00007f2638e51dac __stpncpy_sse2_unaligned (libc.so.6)
> #3  0x00007f26393b1340 __sha256_crypt_r (libcrypt.so.1)
> #4  0x00007f26393b0553 __crypt_r (libcrypt.so.1)
> #5  0x0000563dec6e44df _ZN14proxmox_backup4auth5crypt17hd5165f960093dfe7E (proxmox-backup-proxy)

This reverts commit acefa2bb6e.
2021-07-26 16:38:16 +02:00
3841301ee9 d/control: update generated build-deps
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 12:36:36 +02:00
f406202825 bump version to 1.1.13-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 12:35:06 +02:00
ba50f57e93 file-restore: increase lock timeout on QEMU map
This lock is held during VM startup, so that multiple calls will not
start VMs twice. But this means that the timeout needs to incorporate
the time it might take a VM to boot, so increase it quite a bit.

This could previously lead to "interrupted system call" errors when
accessing backups with many disks.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
(cherry picked from commit 66501529a2)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 12:30:09 +02:00
61a758f67d build.rs: tell cargo to only rerun build.rs step if .git/HEAD changes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 12:43:19 +02:00
847c27fbee build.rs: factor out getting git command output into helper fn
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 12:43:19 +02:00
7d79f3d5f7 file restore daemon: log about basic steps
to make the log more useful..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 9a06eb1618)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:08:15 +02:00
fa3fdea590 file restore daemon: reword warning about manual execution
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 309e14ebb7)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:08:10 +02:00
aa2cd76c58 restore daemon: use millisecond log resolution
During startup most things happen within milliseconds (or less), so
the timestamp granularity of seconds made it hard to tell whether the
previous command required 990ms or 1ms, which is quite the difference
in the restore daemon context.

Microseconds would not add much useful information, and a millisecond
is already an acceptable lower resolution for logging, so switch only
to millis for now.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit ecd66ecaf6)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:08:00 +02:00
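A std-only sketch of a millisecond-resolution timestamp of the kind
described; the exact format the daemon uses is not shown here:

    use std::time::{SystemTime, UNIX_EPOCH};

    fn log_timestamp() -> String {
        let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
        // epoch seconds plus a 3-digit millisecond fraction, e.g. "1626768000.123"
        format!("{}.{:03}", now.as_secs(), now.subsec_millis())
    }

    fn main() {
        println!("[{}] restore daemon: starting up", log_timestamp());
    }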
e2d82c7d4d restore daemon: create /run/proxmox-backup on startup
fixes file restore again.

The new Memcom tracking file lives in `/run/proxmox-backup` and is
always created on REST interaction, as CachedUserInfo uses it to
efficiently track config changes, and such a cache is used in each
REST handle_request.

Further, the Memcom infra expects the base PBS run dir to exist
already, which is an OK assumption in general, but the file-restore
daemon runs in a significantly more minimal environment where the run
dir was simply never required before; even /run isn't a tmpfs there yet.

Fixes fda19dcc6f ("fix CachedUserInfo by using a shared memory version counter")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 33d7292f29)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:08:00 +02:00
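The fix itself boils down to creating the runtime directory before the
REST stack first touches the Memcom file; a sketch:

    fn main() -> std::io::Result<()> {
        // the Memcom tracking file lives below this directory; in the minimal
        // restore VM nothing else creates it, so do so explicitly on startup
        std::fs::create_dir_all("/run/proxmox-backup")?;
        // ... continue with the normal daemon setup ...
        Ok(())
    }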
e9c2a34def REST: set error message extension for bad-request response log
We already send it to the user via the response body, but
log_response does not have (nor wants to have, FWIW) access to the
async body stream, so pass the message through the
ErrorMessageExtension mechanism as we do elsewhere.

Note that this is not only useful for the PBS API proxy/daemon but
also for the REST server of the file-restore daemon running inside the
restore VM, and it really is *very* helpful to debug things there..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit f4d371d2d2)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:07:41 +02:00
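A sketch of the mechanism using the http crate's typed extensions map,
which is how such an ErrorMessageExtension can travel from the handler
to the logger without touching the body; the type and field names here
are assumptions:

    use http::{Response, StatusCode};

    // assumed shape; the real type lives in the proxmox REST server code
    #[derive(Clone)]
    struct ErrorMessageExtension(String);

    fn bad_request(msg: &str) -> Response<String> {
        let mut resp = Response::builder()
            .status(StatusCode::BAD_REQUEST)
            .body(msg.to_string())
            .unwrap();
        // stash the message where the access logger can reach it without
        // touching the (possibly async, streaming) response body
        resp.extensions_mut()
            .insert(ErrorMessageExtension(msg.to_string()));
        resp
    }

    fn log_response(resp: &Response<String>) {
        if let Some(ErrorMessageExtension(msg)) = resp.extensions().get::<ErrorMessageExtension>() {
            eprintln!("{} -- {}", resp.status(), msg);
        }
    }

    fn main() {
        log_response(&bad_request("no such datastore 'foo'"));
    }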
0fad95f032 REST: rust fmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 2d48533378)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:07:41 +02:00
683595940b fix #3496: acme: plugin: add sleep for dns propagation
the dns plugin config allows for a specified amount of time to wait for
the TXT record to be set and propagated through DNS.

This patch adds a sleep for this amount of time.
The log message was taken from the perl implementation in proxmox-acme
for consistency.

Tested with the powerdns plugin in my test setup.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
(cherry picked from commit 3f84541412)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:06:29 +02:00
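The core of the fix is just an awaited sleep for the configured delay;
a hedged sketch (assumes tokio; the log wording here is illustrative,
the real one was taken from proxmox-acme's perl code):

    use std::time::Duration;

    // `validation_delay` comes from the dns plugin config (made accessible
    // by the follow-up commit below)
    async fn wait_for_dns_propagation(validation_delay: u64) {
        if validation_delay > 0 {
            println!("Sleeping {validation_delay} seconds to wait for TXT record propagation");
            tokio::time::sleep(Duration::from_secs(validation_delay)).await;
        }
    }

    #[tokio::main]
    async fn main() {
        wait_for_dns_propagation(30).await;
    }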
40060c1fed config: acme: make validation_delay crate public
we need the setting in acme::plugin.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
(cherry picked from commit 4d8bd03668)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:06:29 +02:00
2abee30fdd acme: plugin: fix error message
extract_challenge is used by both dns-01 and http-01 challenges.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
(cherry picked from commit f9bd5e1691)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:06:29 +02:00
7cdc53bbf7 buildsys: docs: clean: also clean generated JS files
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 13a2445744)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:06:04 +02:00
dac877252b api: disk list: sort by name
So callers get more stable results. Most noticeably, the disk list in
the web UI doesn't jump around upon reloading. While sorting could be
done directly there, doing it here means other callers benefit too.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
(cherry picked from commit bbff317aa7)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:04:57 +02:00
dd749b0e47 disks: also check for file systems with lsblk
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
(cherry picked from commit 20429238e0)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:04:57 +02:00
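A sketch of shelling out to lsblk with JSON output (--json and -o are
standard lsblk options; which columns the real code requests and how it
parses the result is not shown here):

    use std::process::Command;

    fn lsblk_filesystems() -> std::io::Result<String> {
        // the real code parses the JSON and merges it with partition-type info
        let output = Command::new("lsblk")
            .args(["--json", "-o", "NAME,FSTYPE"])
            .output()?;
        Ok(String::from_utf8_lossy(&output.stdout).into_owned())
    }

    fn main() -> std::io::Result<()> {
        println!("{}", lsblk_filesystems()?);
        Ok(())
    }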
f98c02cbc6 disks: refactor partition type handling
in preparation to also get the file system type from lsblk.

Co-developed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
(cherry picked from commit 364299740f)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:04:57 +02:00
218d7e3ec6 rest: log response: avoid unnecessary mut on variable
a match expresses the fallback slightly more cleanly and needs no mut,
which is always nice to avoid.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 6b5013edb3)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:02:47 +02:00
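The shape of the cleanup, with illustrative names:

    // before: needs a mutable binding plus a second statement
    fn message_before(extension: Option<String>, default_msg: &str) -> String {
        let mut message = default_msg.to_string();
        if let Some(ext) = extension {
            message = ext;
        }
        message
    }

    // after: a single match expression, no mut
    fn message_after(extension: Option<String>, default_msg: &str) -> String {
        match extension {
            Some(ext) => ext,
            None => default_msg.to_string(),
        }
    }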
acefa2bb6e auth: 'crypt' is not thread safe
According to crypt(3):
"crypt places its result in a static storage area, which will be
overwritten by subsequent calls to crypt. It is not safe to call crypt
from multiple threads simultaneously."

This means that multiple login calls as a PBS-realm user can collide and
produce intermittent authentication failures. A visible case is for
file-restore, where VMs with many disks lead to just as many auth-calls
at the same time, as the GUI tries to expand each tree element on load.

Instead, use the thread-safe variant 'crypt_r', which places the result
into a pre-allocated buffer of type 'crypt_data'. The C struct is laid
out according to 'lib/crypt.h.in' and the man page mentioned above.

Use the opportunity and make both arguments to the rust 'crypt' function
take a &[u8].

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
(cherry picked from commit c4c4b5a3ef)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:02:01 +02:00
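A heavily hedged FFI sketch of the crypt_r approach: the struct layout
below is reduced to the one field that matters (the initialized flag,
which must be zeroed) plus opaque scratch space, and the 32 KiB size is
an assumption to verify against the installed crypt.h -- the revert
above shows how a mismatch ends. The buffer is heap-allocated, as it is
far too large to sit comfortably on the stack.

    use std::ffi::{CStr, CString};
    use std::os::raw::{c_char, c_int};

    // reduced layout: `struct crypt_data` starts with an `initialized` flag
    // that must be zeroed, followed by (here opaque) scratch space; the size
    // is an assumption -- check crypt.h before use
    #[repr(C)]
    struct CryptData {
        initialized: c_int,
        _internal: [u8; 32768],
    }

    #[link(name = "crypt")]
    extern "C" {
        // thread-safe variant of crypt(3) from libcrypt
        fn crypt_r(phrase: *const c_char, setting: *const c_char, data: *mut CryptData) -> *mut c_char;
    }

    fn crypt(password: &[u8], setting: &[u8]) -> Result<String, Box<dyn std::error::Error>> {
        let phrase = CString::new(password)?;
        let setting = CString::new(setting)?;
        // heap-allocate: far too large to be comfortable on the stack
        let mut data = Box::new(CryptData { initialized: 0, _internal: [0u8; 32768] });
        let res = unsafe { crypt_r(phrase.as_ptr(), setting.as_ptr(), &mut *data) };
        if res.is_null() {
            return Err("crypt_r failed".into());
        }
        Ok(unsafe { CStr::from_ptr(res) }.to_str()?.to_string())
    }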
36551172f3 depend on proxmox 0.11.6 (changed make_tmp_file() return type)
(cherry picked from commit bfd357c5a1)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 09:45:28 +02:00
c26f4ef385 buildsys: Prepare new way for path dependencies
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
(cherry picked from commit 9f5b57a348)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 09:39:12 +02:00
60816a8a82 Cargo.toml: regroup imports
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
(cherry picked from commit aceae32baa)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 09:34:18 +02:00
d7d09712ef bump version to 1.1.12-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 12:58:14 +02:00
825f019226 buildsys: call dpkg-buildpackage directly in deb-all
else we may double-build

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit a2c73c78dd)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 12:58:14 +02:00
ca5e5bb67f ui: datastore/OptionView: only navigate up when we removed the datastore
and not on window close

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
(cherry picked from commit 82cae19d19)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 12:54:50 +02:00
8191ff150e ui: dashboard/DataStoreStatistics: fix closing <i> tag
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
(cherry picked from commit 4a489ae3de)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 12:49:42 +02:00
f2aeb13c68 subscription: set higher-level error to message instead of bailing
While the PVE one "bails" too, it has an eval around those and moves
the error to the message property, so lets do so too to ensure a user
can force an update on a too old subscription

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit b81818b6ad)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 12:48:03 +02:00
ce76b4b3c2 bump version to 1.1.11-1
2021-06-30 11:25:11 +02:00
44b9d6f162 tape/drive: fix logging when requesting media
we try to load the correct media in a loop until we find the correct
tape. when encountering an error or a wrong tape, we want to log that
(and send an email, if one is set) to request the correct tape.

while trying to avoid printing the same errors more than once in a
row, we had at least one case (starting with an empty tape in the
drive) which would not print/send any tape request.

rework that code to use a custom 'TapeRequest' enum, which contains
the state + error message, and a helper that prints and sends an email
when the state changes

this reduces the change check/log to a single variable, instead of 4
(tried, last_media_uuid, last_error, failure_reason)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-06-30 11:22:04 +02:00
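The shape of the rework, with hypothetical variant and helper names:
one enum value replaces the four tracked variables, and the notifier
only speaks up on state changes.

    #[derive(Clone, PartialEq)]
    enum TapeRequest {
        None,                         // correct tape loaded, nothing to ask for
        EmptyDrive(String),           // label we want loaded
        WrongTape { loaded: String, wanted: String },
        Error(String),
    }

    struct RequestNotifier {
        last: TapeRequest,
    }

    impl RequestNotifier {
        // print/send the request only when the state actually changes,
        // instead of tracking 4 separate variables (tried, last_media_uuid,
        // last_error, failure_reason)
        fn update(&mut self, state: TapeRequest) {
            if state != self.last {
                match &state {
                    TapeRequest::None => {}
                    TapeRequest::EmptyDrive(label) => println!("please insert media '{}'", label),
                    TapeRequest::WrongTape { loaded, wanted } => {
                        println!("wrong media '{}' loaded, please insert '{}'", loaded, wanted)
                    }
                    TapeRequest::Error(err) => println!("error while requesting media: {}", err),
                }
                // real code would also send the notification email here, if configured
                self.last = state;
            }
        }
    }

    fn main() {
        let mut notifier = RequestNotifier { last: TapeRequest::None };
        notifier.update(TapeRequest::EmptyDrive("vault-007".into())); // prints once
        notifier.update(TapeRequest::EmptyDrive("vault-007".into())); // silent: no change
    }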
53e80e8aa2 tape: fix LTO locate_file for HP drives
Add test code to the first locate_file command, compute locate_offset.
Subsequent locate_file commands use that offset.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-06-30 11:22:04 +02:00
f94aa5ceb1 fix #3393 (again): pxar/create: try to read xattrs/fcaps/acls by default
we have a static list of filesystems and their capabilities regarding
file attributes and fs features (e.g. sockets/fifos/etc.), which also
includes xattrs, acls and fcaps

if we did not know a filesystem by its magic number (for example
cephfs), we did not even attempt to read xattrs, etc.

this patch adds those flags by default for unknown filesystems, and
removes them when we encounter EOPNOTSUPP (to reduce the number of
syscalls)

with this, we should be able to catch xattrs/acls/fcaps on all
(unknown) fs types that support them

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-30 11:22:04 +02:00
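A simplified sketch of the flag handling (plain bit constants instead
of the real pxar Flags type; uses the libc crate for EOPNOTSUPP):
unknown magic numbers start with everything enabled, and a flag is
dropped the first time the kernel reports EOPNOTSUPP so the syscall is
not retried for every entry.

    use std::io;

    // simplified stand-in for the pxar feature flags
    const FLAG_XATTRS: u32 = 0b001;
    const FLAG_ACLS: u32 = 0b010;
    const FLAG_FCAPS: u32 = 0b100;
    const FLAG_ALL: u32 = FLAG_XATTRS | FLAG_ACLS | FLAG_FCAPS;

    fn flags_for_magic(magic: i64, known: &[(i64, u32)]) -> u32 {
        known
            .iter()
            .find(|(m, _)| *m == magic)
            .map(|(_, f)| *f)
            // unknown filesystem (e.g. cephfs): optimistically try everything
            .unwrap_or(FLAG_ALL)
    }

    fn read_xattrs(_path: &str) -> io::Result<Vec<u8>> {
        // stand-in: pretend the filesystem does not support xattrs
        Err(io::Error::from_raw_os_error(libc::EOPNOTSUPP))
    }

    fn archive_file(path: &str, flags: &mut u32) {
        if *flags & FLAG_XATTRS != 0 {
            if let Err(err) = read_xattrs(path) {
                if err.raw_os_error() == Some(libc::EOPNOTSUPP) {
                    // drop the flag so later entries skip the syscall entirely
                    *flags &= !FLAG_XATTRS;
                }
            }
        }
    }

    fn main() {
        let mut flags = flags_for_magic(0x00c36400, &[]); // not in the known list
        archive_file("/mnt/cephfs/file", &mut flags);
        assert_eq!(flags & FLAG_XATTRS, 0); // dropped after the first EOPNOTSUPP
    }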
3e4b9868a0 proxmox-backup-manager: show task log on datastore create
since the output:
Result: "<UPID>"
is not really interesting, show the task log instead while the
datastore is being created, since creation now runs in a worker
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-30 11:22:04 +02:00
4d86df04a0 bump version to 1.1.10-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-16 09:55:47 +02:00
457 changed files with 12704 additions and 16518 deletions

Cargo.toml

@ -1,6 +1,6 @@
[package]
name = "proxmox-backup"
version = "2.0.10"
version = "1.1.14"
authors = [
"Dietmar Maurer <dietmar@proxmox.com>",
"Dominik Csapak <d.csapak@proxmox.com>",
@ -15,29 +15,10 @@ edition = "2018"
license = "AGPL-3"
description = "Proxmox Backup"
homepage = "https://www.proxmox.com"
build = "build.rs"
exclude = [ "build", "debian", "tests/catar_data/test_symlink/symlink1"]
[workspace]
members = [
"pbs-buildcfg",
"pbs-client",
"pbs-config",
"pbs-datastore",
"pbs-fuse-loop",
"pbs-runtime",
"proxmox-rest-server",
"proxmox-systemd",
"pbs-tape",
"pbs-tools",
"proxmox-backup-banner",
"proxmox-backup-client",
"proxmox-file-restore",
"proxmox-restore-daemon",
"pxar-bin",
]
[lib]
name = "proxmox_backup"
path = "src/lib.rs"
@ -52,6 +33,7 @@ endian_trait = { version = "0.6", features = ["arrays"] }
env_logger = "0.7"
flate2 = "1.0"
anyhow = "1.0"
foreign-types = "0.3"
thiserror = "1.0"
futures = "0.3"
h2 = { version = "0.3", features = [ "stream" ] }
@ -68,6 +50,8 @@ openssl = "0.10"
pam = "0.7"
pam-sys = "0.5"
percent-encoding = "2.1"
pin-utils = "0.1.0"
pin-project = "1.0"
regex = "1.2"
rustyline = "7"
serde = { version = "1.0", features = ["derive"] }
@ -85,38 +69,24 @@ url = "2.1"
walkdir = "2"
webauthn-rs = "0.2.5"
xdg = "2.2"
zstd = { version = "0.4", features = [ "bindgen" ] }
nom = "5.1"
crossbeam-channel = "0.5"
# Used only by examples currently:
zstd = { version = "0.6", features = [ "bindgen" ] }
pathpatterns = "0.1.2"
pxar = { version = "0.10.1", features = [ "tokio-io" ] }
proxmox = { version = "0.13.3", features = [ "sortable-macro", "api-macro", "cli", "router", "tfa" ] }
proxmox-acme-rs = "0.2.1"
proxmox-apt = "0.7.0"
proxmox-http = { version = "0.4.0", features = [ "client", "http-helpers", "websocket" ] }
proxmox-openid = "0.7.0"
pbs-api-types = { path = "pbs-api-types" }
pbs-buildcfg = { path = "pbs-buildcfg" }
pbs-client = { path = "pbs-client" }
pbs-config = { path = "pbs-config" }
pbs-datastore = { path = "pbs-datastore" }
pbs-runtime = { path = "pbs-runtime" }
proxmox-rest-server = { path = "proxmox-rest-server" }
proxmox-systemd = { path = "proxmox-systemd" }
pbs-tools = { path = "pbs-tools" }
pbs-tape = { path = "pbs-tape" }
proxmox = { version = "0.11.6", features = [ "sortable-macro", "api-macro", "cli", "router", "tfa" ] }
proxmox-acme-rs = "0.3"
proxmox-fuse = "0.1.1"
proxmox-http = { version = "0.2.1", features = [ "client", "http-helpers", "websocket" ] }
# Local path overrides
# NOTE: You must run `cargo update` after changing this for it to take effect!
[patch.crates-io]
#proxmox = { path = "../proxmox/proxmox" }
#proxmox-http = { path = "../proxmox/proxmox-http" }
#pxar = { path = "../pxar" }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "cli", "router", "tfa" ] }
#proxmox-http = { path = "../proxmox/proxmox-http", features = [ "client", "http-helpers", "websocket" ] }
#pxar = { path = "../pxar", features = [ "tokio-io" ] }
[features]
default = []

Makefile (115 changed lines)

@ -17,8 +17,7 @@ USR_BIN := \
# Binaries usable by admins
USR_SBIN := \
proxmox-backup-manager \
proxmox-backup-debug \
proxmox-backup-manager
# Binaries for services:
SERVICE_BIN := \
@ -31,24 +30,6 @@ SERVICE_BIN := \
RESTORE_BIN := \
proxmox-restore-daemon
SUBCRATES := \
pbs-api-types \
pbs-buildcfg \
pbs-client \
pbs-config \
pbs-datastore \
pbs-fuse-loop \
pbs-runtime \
proxmox-rest-server \
proxmox-systemd \
pbs-tape \
pbs-tools \
proxmox-backup-banner \
proxmox-backup-client \
proxmox-file-restore \
proxmox-restore-daemon \
pxar-bin
ifeq ($(BUILD_MODE), release)
CARGO_BUILD_ARGS += --release
COMPILEDIR := target/release
@ -76,15 +57,13 @@ RESTORE_DBG_DEB=proxmox-backup-file-restore-dbgsym_${DEB_VERSION}_${ARCH}.deb
DOC_DEB=${PACKAGE}-docs_${DEB_VERSION}_all.deb
DEBS=${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB} \
${RESTORE_DEB} ${RESTORE_DBG_DEB} ${DEBUG_DEB} ${DEBUG_DBG_DEB}
${RESTORE_DEB} ${RESTORE_DBG_DEB}
DSC = rust-${PACKAGE}_${DEB_VERSION}.dsc
DESTDIR=
tests ?= --workspace
all: $(SUBDIRS)
all: cargo-build $(SUBDIRS)
.PHONY: $(SUBDIRS)
$(SUBDIRS):
@ -96,23 +75,25 @@ test:
$(CARGO) test $(tests) $(CARGO_BUILD_ARGS)
doc:
$(CARGO) doc --workspace --no-deps $(CARGO_BUILD_ARGS)
$(CARGO) doc --no-deps $(CARGO_BUILD_ARGS)
# always re-create this dir
.PHONY: build
build:
@echo "Setting pkg-buildcfg version to: $(DEB_VERSION_UPSTREAM)"
sed -i -e 's/^version =.*$$/version = "$(DEB_VERSION_UPSTREAM)"/' \
pbs-buildcfg/Cargo.toml
rm -rf build
mkdir build
cp -a debian \
Cargo.toml src \
$(SUBCRATES) \
docs etc examples tests www zsh-completions \
defines.mk Makefile \
./build/
rm -f build/Cargo.lock
rm -f debian/control
debcargo package \
--config debian/debcargo.toml \
--changelog-ready \
--no-overlay-write-back \
--directory build \
proxmox-backup \
$(shell dpkg-parsechangelog -l debian/changelog -SVersion | sed -e 's/-.*//')
sed -e '1,/^$$/ ! d' build/debian/control > build/debian/control.src
cat build/debian/control.src build/debian/control.in > build/debian/control
rm build/debian/control.in build/debian/control.src
cp build/debian/control debian/control
rm build/Cargo.lock
find build/debian -name "*.hint" -delete
$(foreach i,$(SUBDIRS), \
$(MAKE) -C build/$(i) clean ;)
@ -142,61 +123,27 @@ $(DSC): build
cd build; dpkg-buildpackage -S -us -uc -d -nc
lintian $(DSC)
.PHONY: clean distclean deb clean
distclean: clean
clean: clean-deb
clean:
$(foreach i,$(SUBDIRS), \
$(MAKE) -C $(i) clean ;)
$(CARGO) clean
rm -f .do-cargo-build
rm -rf *.deb *.dsc *.tar.gz *.buildinfo *.changes build
find . -name '*~' -exec rm {} ';'
# allows one to avoid running cargo clean when one just wants to tidy up after a package build
clean-deb:
rm -rf *.deb *.dsc *.tar.gz *.buildinfo *.changes build/
.PHONY: dinstall
dinstall: ${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB} \
${DEBUG_DEB} ${DEBUG_DBG_DEB}
dinstall: ${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB}
dpkg -i $^
# make sure we build binaries before docs
docs: $(COMPILEDIR)/dump-catalog-shell-cli $(COMPILEDIR)/docgen
docs: cargo-build
.PHONY: cargo-build
cargo-build:
rm -f .do-cargo-build
$(MAKE) $(COMPILED_BINS)
$(COMPILED_BINS) $(COMPILEDIR)/dump-catalog-shell-cli $(COMPILEDIR)/docgen: .do-cargo-build
.do-cargo-build:
$(CARGO) build $(CARGO_BUILD_ARGS) \
--bin proxmox-backup-api \
--bin proxmox-backup-proxy \
--bin proxmox-backup-manager \
--bin docgen \
--package proxmox-backup-banner \
--bin proxmox-backup-banner \
--package proxmox-backup-client \
--bin proxmox-backup-client \
--bin proxmox-backup-debug \
--package proxmox-file-restore \
--bin proxmox-file-restore \
--package pxar-bin \
--bin pxar \
--package pbs-tape \
--bin pmt \
--bin pmtx \
--package proxmox-restore-daemon \
--bin proxmox-restore-daemon \
--package proxmox-backup \
--bin dump-catalog-shell-cli \
--bin proxmox-daily-update \
--bin proxmox-file-restore \
--bin proxmox-tape \
--bin sg-tape-cmd
touch "$@"
$(CARGO) build $(CARGO_BUILD_ARGS)
$(COMPILED_BINS): cargo-build
.PHONY: lint
lint:
@ -222,16 +169,12 @@ install: $(COMPILED_BINS)
install -m755 $(COMPILEDIR)/$(i) $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/ ;)
$(MAKE) -C www install
$(MAKE) -C docs install
ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS)))
$(MAKE) test # HACK, only test now to avoid clobbering build files with wrong config
endif
.PHONY: upload
upload: ${SERVER_DEB} ${CLIENT_DEB} ${RESTORE_DEB} ${DOC_DEB} ${DEBUG_DEB}
upload: ${SERVER_DEB} ${CLIENT_DEB} ${RESTORE_DEB} ${DOC_DEB}
# check if working directory is clean
git diff --exit-code --stat && git diff --exit-code --stat --staged
tar cf - ${SERVER_DEB} ${SERVER_DBG_DEB} ${DOC_DEB} ${CLIENT_DEB} \
${CLIENT_DBG_DEB} ${DEBUG_DEB} ${DEBUG_DBG_DEB} \
| ssh -X repoman@repo.proxmox.com upload --product pbs --dist bullseye
tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pve,pmg,pbs-client" --dist bullseye
tar cf - ${RESTORE_DEB} ${RESTORE_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pve" --dist bullseye
tar cf - ${SERVER_DEB} ${SERVER_DBG_DEB} ${DOC_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB} | \
ssh -X repoman@repo.proxmox.com upload --product pbs --dist buster
tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pve,pmg,pbs-client" --dist buster
tar cf - ${RESTORE_DEB} ${RESTORE_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pve" --dist buster

build.rs (new file, 23 lines)

@ -0,0 +1,23 @@
// build.rs
use std::env;
use std::process::Command;

fn git_command(args: &[&str]) -> String {
    match Command::new("git").args(args).output() {
        Ok(output) => String::from_utf8(output.stdout).unwrap().trim_end().to_string(),
        Err(err) => {
            panic!("git {:?} failed: {}", args, err);
        }
    }
}

fn main() {
    let repo_path = git_command(&["rev-parse", "--show-toplevel"]);
    let repoid = match env::var("REPOID") {
        Ok(repoid) => repoid,
        Err(_) => git_command(&["rev-parse", "HEAD"]),
    };

    println!("cargo:rustc-env=REPOID={}", repoid);
    println!("cargo:rerun-if-changed={}/.git/HEAD", repo_path);
}

debian/changelog (224 changed lines)

@ -1,184 +1,39 @@
rust-proxmox-backup (2.0.10-1) UNRELEASED; urgency=medium
rust-proxmox-backup (1.1.14-1) buster; urgency=medium
* ui: fix order of prune keep reasons
* drop RawWaker usage to avoid leaking a refcount
* server: add proxmox-backup-debug binary with chunk/file inspection, an API
shell with completion support
* pbs-tools: LruCache: implement Drop to fix a memory leak for the cache
* restructured code base to reduce linkage and library ABI version
constraints for all non-server binaries (client, pxar, file-restore)
* ui: add notice for nearing PBS 1.1 End-of-Life
* zsh: fix passing parameters in auto-completion scripts
* backport "datastore: lookup: reuse ChunkStore on stale datastore re-open"
* tape: also add 'force-media-set' to available CLI options
-- Proxmox Support Team <support@proxmox.com> Thu, 02 Jun 2022 18:07:54 +0200
* api: nodes: add missing node list (index) api endpoint
rust-proxmox-backup (1.1.13-3) buster; urgency=medium
* docs: proxmox-backup-debug: add info about the new 'api' subcommand
* fix sending log-rotation command to API daemons
* docs/technical-overview: add troubleshooting section
-- Proxmox Support Team <support@proxmox.com> Tue, 19 Oct 2021 10:21:18 +0200
-- Proxmox Support Team <support@proxmox.com> Tue, 21 Sep 2021 14:00:48 +0200
rust-proxmox-backup (1.1.13-2) buster; urgency=medium
rust-proxmox-backup (2.0.9-2) bullseye; urgency=medium
* revert "auth: improve thread safety of 'crypt' C-library", not safe for
Debian buster based releases.
* tape backup: mention groups that were empty
-- Proxmox Support Team <support@proxmox.com> Mon, 26 Jul 2021 16:40:07 +0200
* tape: compute next-media-label for each tape backup job
* tape: lto: increase default timeout to 10 minutes
* ui: display next-media-label for tape backup jobs
* cli: proxmox-tape backup-job list: use status api and display next-run
and next-media-label
-- Proxmox Support Team <support@proxmox.com> Tue, 24 Aug 2021 14:44:12 +0200
rust-proxmox-backup (2.0.8-1) bullseye; urgency=medium
* use proxmox-apt to 0.6
* api: apt: adapt to proxmox-apt back-end changes
* api/ui: allow zstd compression for new zpools
* tape: media_catalog: add snapshot list cache for catalog
* api2: tape: media: use MediaCatalog::snapshot_list for content listing
* tape: lock media_catalog file to to get a consistent view with load_catalog
* tape: changer: handle libraries that send a wrong amount of data
* tape: changer: remove unnecessary inquiry parameter
* api2: tape/restore: commit temporary catalog at the end
* docs: tape: add instructions on how to restore the catalog
* ui: tape/ChangerStatus: improve layout for large libraries
* tape: changer: handle invalid descriptor data from library in status page
* datastore config: cleanup code (use flatten attribute)
-- Proxmox Support Team <support@proxmox.com> Mon, 02 Aug 2021 10:34:55 +0200
rust-proxmox-backup (2.0.7-1) bullseye; urgency=medium
* tape changer: better cope with models that are not following spec
proposals when returning the status page
* tape changer: make DVCID information optional, not all devices return it
* restore daemon: setup the 'backup' system user and group in the minimal
restore environment, as we like to ensure that all state files are owned
by them.
-- Proxmox Support Team <support@proxmox.com> Fri, 23 Jul 2021 08:43:51 +0200
rust-proxmox-backup (2.0.6-1) bullseye; urgency=medium
* increase maximum drives per changer to 255
* allow one to pass a secret not only directly through the environment value,
but also indirectly through a file path, an open file descriptor or a
command that can write the secret to standard out.
* pull in new proxmox library version to improve the file system
compatibility on creation of atomic files, e.g., lock files.
-- Proxmox Support Team <support@proxmox.com> Thu, 22 Jul 2021 10:22:19 +0200
rust-proxmox-backup (2.0.5-2) bullseye; urgency=medium
* ui: tape: backup overview: increase timeout for media-set content
* tape: changer: always retry until timeout
* file-restore: increase lock timeout on QEMU map
* fix #3515: file-restore-daemon: allow LVs/PVs with dash in name
* fix #3526: correctly filter tasks with 'since' and 'until'
* tape: changer: make scsi request for DVCID a separate one, as some
libraries cannot handle requesting that combined with volume tags in one
go
* api, ui: datastore: add new 'prune-datastore' api call and expose it with
a 'Prune All' button
* make creating log files more robust so that they are always owned by the
less privileged `backup` user
-- Proxmox Support Team <support@proxmox.com> Wed, 21 Jul 2021 09:12:39 +0200
rust-proxmox-backup (2.0.4-1) bullseye; urgency=medium
* change tape drive lock path to avoid issues with sticky bit on tmpfs
mountpoint
* tape: changer: query transport-element types separately
rust-proxmox-backup (1.1.13-1) buster; urgency=medium
* auth: improve thread safety of 'crypt' C-library
-- Proxmox Support Team <support@proxmox.com> Mon, 12 Jul 2021 18:51:21 +0200
rust-proxmox-backup (2.0.3-1) bullseye; urgency=medium
* api: apt: add repositories info and update calls
* ui: administration: add APT repositories status and update panel
* api: access domains: add get/create/update/delete endpoints for realms
* ui: access control: add 'Realm' tab for adding and editing OpenID Connect
identity provider
* fix #3447: ui: Dashboard: disallow selection of datastore statistics row
* ui: tapeRestore: make window non-resizable
* ui: dashboard: rework resource-load panel to a more detailed status panel,
showing, among other things, uptime, Kernel version, CPU info and
repository status.
* ui: administration/dashboard: auto-scale column count and add
browser-local setting to override that to a fixed value of columns.
* fix #3212: api, ui: add support for notes on backup groups
-- Proxmox Support Team <support@proxmox.com> Mon, 12 Jul 2021 08:07:41 +0200
rust-proxmox-backup (2.0.2-1) bullseye; urgency=medium
* ui: use task list component from widget toolkit
* api: add keep-job-configs flag to datastore remove endpoint
* api: config: delete datastore: also remove tape backup jobs
* ui: tape restore: mark datastore selector as 'not a form field' to fix
compatibility with ExtJS 7.0
* ui: datastore removal: only navigate away when the user actually confirmed
the removal of that datastore
-- Proxmox Support Team <support@proxmox.com> Thu, 08 Jul 2021 14:44:12 +0200
rust-proxmox-backup (2.0.1-2) bullseye; urgency=medium
* file-restore: increase lock timeout on QEMU map
* file restore daemon: log basic startup steps
* REST-API: set error message extension for bad-request response log to
ensure the actual error is logged in any (access) log, making debugging
such issues easier
* restore daemon: create /run/proxmox-backup on startup as there's now some
runtime state saved there, which failed all API requests to the restore
daemon otherwise
such issues easier.
* restore daemon: use millisecond log resolution
@ -186,19 +41,33 @@ rust-proxmox-backup (2.0.1-2) bullseye; urgency=medium
ensuring DNS propagation of that record. This makes it catch up with the
docs/web-interface, where the option was already available.
* docs: initial update to repositories for bullseye
-- Proxmox Support Team <support@proxmox.com> Fri, 23 Jul 2021 12:34:29 +0200
-- Proxmox Support Team <support@proxmox.com> Sat, 03 Jul 2021 23:14:49 +0200
rust-proxmox-backup (1.1.12-1) buster; urgency=medium
rust-proxmox-backup (2.0.0-2) bullseye; urgency=medium
* subscription: set higher-level error to message instead of bailing out, to
ensure a force-check gets through
* file-restore-daemon/disk: add LVM (thin) support
* ui: dashboard: datastore stats: fix closing <i> tag
-- Proxmox Support Team <support@proxmox.com> Sat, 03 Jul 2021 02:15:16 +0200
* ui: datastore: option view: only navigate up when we actually removed the
datastore
rust-proxmox-backup (2.0.0-1) bullseye; urgency=medium
-- Proxmox Support Team <support@proxmox.com> Fri, 09 Jul 2021 12:56:35 +0200
* initial bump for Debian 11 Bullseye / Proxmox Backup Server 2.0
rust-proxmox-backup (1.1.11-1) buster; urgency=medium
* tape/drive: fix logging when requesting media
* tape: fix LTO locate_file for HP drives
* fix #3393 (again): pxar/create: try to read xattrs/fcaps/acls by default
* proxmox-backup-manager: show task log on datastore create
-- Proxmox Support Team <support@proxmox.com> Wed, 30 Jun 2021 11:24:20 +0200
rust-proxmox-backup (1.1.10-1) buster; urgency=medium
* ui: datastore list summary: catch and show errors per datastore
@ -215,7 +84,7 @@ rust-proxmox-backup (2.0.0-1) bullseye; urgency=medium
* ui: datastore options: add remove button to drop a datastore from the
configuration, without removing any actual data
* ui: tape: drive selector: do not auto select the drive
* ui: tape: drive selector: do not autoselect the drive
* ui: tape: backup job: use correct default value for pbsUserSelector
@ -224,22 +93,7 @@ rust-proxmox-backup (2.0.0-1) bullseye; urgency=medium
* backup: add helpers for async last recently used (LRU) caches for chunk
and index reading of backup snapshot
* fix #3459: manager: add --ignore-verified and --outdated-after parameters
* proxmox-backup-manager: show task log on datastore create
* tape: snapshot reader: read chunks sorted by inode (per index) to improve
sequential reads when backing up data from slow spinning disks to tape.
* file-restore: support ZFS pools
* improve fix for #3393: pxar create: try to read xattrs/fcaps/acls by default
* fix compatibility with ExtJS 7.0
* docs: build api-viewer from widget-toolkit-dev
-- Proxmox Support Team <support@proxmox.com> Mon, 28 Jun 2021 19:35:40 +0200
-- Proxmox Support Team <support@proxmox.com> Wed, 16 Jun 2021 09:46:15 +0200
rust-proxmox-backup (1.1.9-1) stable; urgency=medium

debian/control (47 changed lines)

@ -1,8 +1,8 @@
Source: rust-proxmox-backup
Section: admin
Priority: optional
Build-Depends: debhelper (>= 12),
dh-cargo (>= 24),
Build-Depends: debhelper (>= 11),
dh-cargo (>= 18),
cargo:native,
rustc:native,
libstd-rust-dev,
@ -37,21 +37,20 @@ Build-Depends: debhelper (>= 12),
librust-pam-sys-0.5+default-dev,
librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
librust-percent-encoding-2+default-dev (>= 2.1-~~),
librust-pin-project-lite-0.2+default-dev,
librust-proxmox-0.13+api-macro-dev,
librust-proxmox-0.13+cli-dev,
librust-proxmox-0.13+default-dev,
librust-proxmox-0.13+router-dev,
librust-proxmox-0.13+sortable-macro-dev,
librust-proxmox-0.13+tfa-dev,
librust-proxmox-acme-rs-0.2+default-dev (>= 0.2.1-~~),
librust-proxmox-apt-0.7+default-dev,
librust-pin-project-1+default-dev,
librust-pin-utils-0.1+default-dev,
librust-proxmox-0.11+api-macro-dev (>= 0.11.6-~~),
librust-proxmox-0.11+cli-dev (>= 0.11.6-~~),
librust-proxmox-0.11+default-dev (>= 0.11.6-~~),
librust-proxmox-0.11+router-dev (>= 0.11.6-~~),
librust-proxmox-0.11+sortable-macro-dev (>= 0.11.6-~~),
librust-proxmox-0.11+tfa-dev (>= 0.11.6-~~),
librust-proxmox-acme-rs-0.3+default-dev,
librust-proxmox-fuse-0.1+default-dev (>= 0.1.1-~~),
librust-proxmox-http-0.4+client-dev,
librust-proxmox-http-0.4+default-dev ,
librust-proxmox-http-0.4+http-helpers-dev,
librust-proxmox-http-0.4+websocket-dev,
librust-proxmox-openid-0.7+default-dev,
librust-proxmox-http-0.2+client-dev (>= 0.2.1-~~),
librust-proxmox-http-0.2+default-dev (>= 0.2.1-~~),
librust-proxmox-http-0.2+http-helpers-dev (>= 0.2.1-~~),
librust-proxmox-http-0.2+websocket-dev (>= 0.2.1-~~),
librust-pxar-0.10+default-dev (>= 0.10.1-~~),
librust-pxar-0.10+tokio-io-dev (>= 0.10.1-~~),
librust-regex-1+default-dev (>= 1.2-~~),
@ -85,8 +84,8 @@ Build-Depends: debhelper (>= 12),
librust-walkdir-2+default-dev,
librust-webauthn-rs-0.2+default-dev (>= 0.2.5-~~),
librust-xdg-2+default-dev (>= 2.2-~~),
librust-zstd-0.6+bindgen-dev,
librust-zstd-0.6+default-dev,
librust-zstd-0.4+bindgen-dev,
librust-zstd-0.4+default-dev,
libacl1-dev,
libfuse3-dev,
libsystemd-dev,
@ -100,7 +99,6 @@ Build-Depends: debhelper (>= 12),
graphviz <!nodoc>,
latexmk <!nodoc>,
patchelf,
proxmox-widget-toolkit-dev <!nodoc>,
pve-eslint (>= 7.18.0-1),
python3-docutils,
python3-pygments,
@ -111,16 +109,15 @@ Build-Depends: debhelper (>= 12),
texlive-xetex <!nodoc>,
xindy <!nodoc>
Maintainer: Proxmox Support Team <support@proxmox.com>
Standards-Version: 4.5.1
Standards-Version: 4.4.1
Vcs-Git: git://git.proxmox.com/git/proxmox-backup.git
Vcs-Browser: https://git.proxmox.com/?p=proxmox-backup.git;a=summary
Homepage: https://www.proxmox.com
Rules-Requires-Root: binary-targets
Package: proxmox-backup-server
Architecture: any
Depends: fonts-font-awesome,
libjs-extjs (>= 7~),
libjs-extjs (>= 6.0.1),
libjs-qrcodejs (>= 1.20201119),
libproxmox-acme-plugins,
libsgutils2-2,
@ -131,7 +128,7 @@ Depends: fonts-font-awesome,
postfix | mail-transport-agent,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 3.3-2),
proxmox-widget-toolkit (>= 2.6-2),
pve-xtermjs (>= 4.7.0-1),
sg3-utils,
smartmontools,
@ -155,8 +152,7 @@ Description: Proxmox Backup Client tools
Package: proxmox-backup-docs
Build-Profiles: <!nodoc>
Section: doc
Depends: fonts-font-awesome,
libjs-extjs,
Depends: libjs-extjs,
libjs-mathjax,
${misc:Depends},
Architecture: all
@ -169,7 +165,6 @@ Depends: ${misc:Depends},
${shlibs:Depends},
Recommends: pve-qemu-kvm (>= 5.0.0-9),
proxmox-backup-restore-image,
Breaks: proxmox-backup-restore-image (<< 0.3.1)
Description: Proxmox Backup single file restore tools for pxar and block device backups
This package contains the Proxmox Backup single file restore client for
restoring individual files and folders from both host/container and VM/block

debian/control.in (new file, 55 lines)

@ -0,0 +1,55 @@
Package: proxmox-backup-server
Architecture: any
Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1),
libjs-qrcodejs (>= 1.20201119),
libproxmox-acme-plugins,
libsgutils2-2,
libzstd1 (>= 1.3.8),
lvm2,
openssh-server,
pbs-i18n,
postfix | mail-transport-agent,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.6-2),
pve-xtermjs (>= 4.7.0-1),
sg3-utils,
smartmontools,
${misc:Depends},
${shlibs:Depends},
Recommends: zfsutils-linux,
ifupdown2,
Description: Proxmox Backup Server daemon with tools and GUI
This package contains the Proxmox Backup Server daemons and related
tools. This includes a web-based graphical user interface.
Package: proxmox-backup-client
Architecture: any
Depends: qrencode,
${misc:Depends},
${shlibs:Depends},
Description: Proxmox Backup Client tools
This package contains the Proxmox Backup client, which provides a
simple command line tool to create and restore backups.
Package: proxmox-backup-docs
Build-Profiles: <!nodoc>
Section: doc
Depends: libjs-extjs,
libjs-mathjax,
${misc:Depends},
Architecture: all
Description: Proxmox Backup Documentation
This package contains the Proxmox Backup Documentation files.
Package: proxmox-backup-file-restore
Architecture: any
Depends: ${misc:Depends},
${shlibs:Depends},
Recommends: pve-qemu-kvm (>= 5.0.0-9),
proxmox-backup-restore-image,
Description: Proxmox Backup single file restore tools for pxar and block device backups
This package contains the Proxmox Backup single file restore client for
restoring individual files and folders from both host/container and VM/block
device backups. It includes a block device restore driver using QEMU.

debian/debcargo.toml (new file, 42 lines)

@ -0,0 +1,42 @@
overlay = "."
crate_src_path = ".."
whitelist = ["tests/*.c"]
maintainer = "Proxmox Support Team <support@proxmox.com>"
[source]
vcs_git = "git://git.proxmox.com/git/proxmox-backup.git"
vcs_browser = "https://git.proxmox.com/?p=proxmox-backup.git;a=summary"
section = "admin"
build_depends = [
"bash-completion",
"debhelper (>= 12~)",
"fonts-dejavu-core <!nodoc>",
"fonts-lato <!nodoc>",
"fonts-open-sans <!nodoc>",
"graphviz <!nodoc>",
"latexmk <!nodoc>",
"patchelf",
"pve-eslint (>= 7.18.0-1)",
"python3-docutils",
"python3-pygments",
"python3-sphinx <!nodoc>",
"rsync",
"texlive-fonts-extra <!nodoc>",
"texlive-fonts-recommended <!nodoc>",
"texlive-xetex <!nodoc>",
"xindy <!nodoc>",
]
build_depends_excludes = [
"debhelper (>=11)",
]
[packages.lib]
depends = [
"libacl1-dev",
"libfuse3-dev",
"libsystemd-dev",
"uuid-dev",
"libsgutils2-dev",
]

debian/postinst (36 changed lines)

@ -26,7 +26,43 @@ case "$1" in
fi
deb-systemd-invoke $_dh_action proxmox-backup.service proxmox-backup-proxy.service >/dev/null || true
# FIXME: Remove with 1.1
if test -n "$2"; then
if dpkg --compare-versions "$2" 'lt' '0.9.4-1'; then
if grep -s -q -P -e '^\s+verify-schedule ' /etc/proxmox-backup/datastore.cfg; then
echo "NOTE: drop all verify schedules from datastore config."
echo "You can now add more flexible verify jobs"
flock -w 30 /etc/proxmox-backup/.datastore.lck \
sed -i '/^\s\+verify-schedule /d' /etc/proxmox-backup/datastore.cfg || true
fi
fi
if dpkg --compare-versions "$2" 'le' '0.9.5-1'; then
chown --quiet backup:backup /var/log/proxmox-backup/api/auth.log || true
fi
if dpkg --compare-versions "$2" 'le' '0.9.7-1'; then
if [ -e /etc/proxmox-backup/remote.cfg ]; then
echo "NOTE: Switching over remote.cfg to new field names.."
flock -w 30 /etc/proxmox-backup/.remote.lck \
sed -i \
-e 's/^\s\+userid /\tauth-id /g' \
/etc/proxmox-backup/remote.cfg || true
fi
fi
if dpkg --compare-versions "$2" 'le' '1.0.14-1'; then
# FIXME: Remove with 2.0
if grep -s -q -P -e '^linux:' /etc/proxmox-backup/tape.cfg; then
echo "========="
echo "= NOTE: You have now unsupported 'linux' tape drives configured."
echo "= * Execute 'udevadm control --reload-rules && udevadm trigger' to update /dev"
echo "= * Edit '/etc/proxmox-backup/tape.cfg', remove 'linux' entries and re-add over CLI/GUI"
echo "========="
fi
fi
# FIXME: remove with 2.0
if [ -d "/var/lib/proxmox-backup/tape" ] &&
[ "$(stat --printf '%a' '/var/lib/proxmox-backup/tape')" != "750" ]; then
chmod 0750 /var/lib/proxmox-backup/tape || true
fi
# FIXME: Remove in future version once we're sure no broken entries remain in anyone's files
if grep -q -e ':termproxy::[^@]\+: ' /var/log/proxmox-backup/tasks/active; then
echo "Fixing up termproxy user id in task log..."

@ -1,8 +0,0 @@
# proxmox-backup-debug bash completion
# see http://tiswww.case.edu/php/chet/bash/FAQ
# and __ltrim_colon_completions() in /usr/share/bash-completion/bash_completion
# this modifies global var, but I found no better way
COMP_WORDBREAKS=${COMP_WORDBREAKS//:}
complete -C 'proxmox-backup-debug bashcomplete' proxmox-backup-debug

@ -1,6 +1,5 @@
/usr/share/doc/proxmox-backup/proxmox-backup.pdf /usr/share/doc/proxmox-backup/html/proxmox-backup.pdf
/usr/share/javascript/extjs /usr/share/doc/proxmox-backup/html/prune-simulator/extjs
/usr/share/javascript/extjs /usr/share/doc/proxmox-backup/html/lto-barcode/extjs
/usr/share/fonts-font-awesome/ /usr/share/doc/proxmox-backup/html/lto-barcode/font-awesome
/usr/share/javascript/extjs /usr/share/doc/proxmox-backup/html/api-viewer/extjs
/usr/share/javascript/mathjax /usr/share/doc/proxmox-backup/html/_static/mathjax

@ -1,5 +1,4 @@
debian/proxmox-backup-manager.bc proxmox-backup-manager
debian/proxmox-backup-debug.bc proxmox-backup-debug
debian/proxmox-tape.bc proxmox-tape
debian/pmtx.bc pmtx
debian/pmt.bc pmt

@ -9,7 +9,6 @@ usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-proxy
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-banner
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-daily-update
usr/lib/x86_64-linux-gnu/proxmox-backup/sg-tape-cmd
usr/sbin/proxmox-backup-debug
usr/sbin/proxmox-backup-manager
usr/bin/pmtx
usr/bin/pmt
@ -18,7 +17,6 @@ usr/share/javascript/proxmox-backup/index.hbs
usr/share/javascript/proxmox-backup/css/ext6-pbs.css
usr/share/javascript/proxmox-backup/images
usr/share/javascript/proxmox-backup/js/proxmox-backup-gui.js
usr/share/man/man1/proxmox-backup-debug.1
usr/share/man/man1/proxmox-backup-manager.1
usr/share/man/man1/proxmox-backup-proxy.1
usr/share/man/man1/proxmox-tape.1
@ -33,7 +31,6 @@ usr/share/man/man5/verification.cfg.5
usr/share/man/man5/media-pool.cfg.5
usr/share/man/man5/tape.cfg.5
usr/share/man/man5/tape-job.cfg.5
usr/share/zsh/vendor-completions/_proxmox-backup-debug
usr/share/zsh/vendor-completions/_proxmox-backup-manager
usr/share/zsh/vendor-completions/_proxmox-tape
usr/share/zsh/vendor-completions/_pmtx

debian/rules (8 changed lines)

@ -32,9 +32,6 @@ override_dh_auto_build:
override_dh_missing:
dh_missing --fail-missing
override_dh_auto_test:
# ignore here to avoid rebuilding the binaries with the wrong target
override_dh_auto_install:
dh_auto_install -- \
PROXY_USER=backup \
@ -48,6 +45,11 @@ override_dh_installsystemd:
override_dh_fixperms:
dh_fixperms --exclude sg-tape-cmd
# workaround https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=933541
# TODO: remove once available (Debian 11 ?)
override_dh_dwz:
dh_dwz --no-dwz-multifile
override_dh_strip:
dh_strip
for exe in $$(find \

docs/Makefile

@ -5,7 +5,6 @@ GENERATED_SYNOPSIS := \
proxmox-backup-client/synopsis.rst \
proxmox-backup-client/catalog-shell-synopsis.rst \
proxmox-backup-manager/synopsis.rst \
proxmox-backup-debug/synopsis.rst \
proxmox-file-restore/synopsis.rst \
pxar/synopsis.rst \
pmtx/synopsis.rst \
@ -28,8 +27,7 @@ MAN1_PAGES := \
proxmox-backup-proxy.1 \
proxmox-backup-client.1 \
proxmox-backup-manager.1 \
proxmox-file-restore.1 \
proxmox-backup-debug.1
proxmox-file-restore.1
MAN5_PAGES := \
media-pool.cfg.5 \
@ -48,12 +46,8 @@ PRUNE_SIMULATOR_FILES := \
prune-simulator/clear-trigger.png \
prune-simulator/prune-simulator.js
PRUNE_SIMULATOR_JS_SOURCE := \
/usr/share/javascript/proxmox-widget-toolkit-dev/Toolkit.js \
prune-simulator/prune-simulator_source.js
LTO_BARCODE_JS_SOURCE := \
/usr/share/javascript/proxmox-widget-toolkit-dev/Toolkit.js \
LTO_BARCODE_FILES := \
lto-barcode/index.html \
lto-barcode/code39.js \
lto-barcode/prefix-field.js \
lto-barcode/label-style.js \
@ -65,18 +59,10 @@ LTO_BARCODE_JS_SOURCE := \
lto-barcode/label-setup.js \
lto-barcode/lto-barcode.js
LTO_BARCODE_FILES := \
lto-barcode/index.html \
lto-barcode/lto-barcode-generator.js
API_VIEWER_SOURCES= \
api-viewer/index.html \
api-viewer/apidoc.js
API_VIEWER_FILES := \
api-viewer/apidata.js \
/usr/share/javascript/proxmox-widget-toolkit-dev/APIViewer.js \
# Sphinx documentation setup
SPHINXOPTS =
SPHINXBUILD = sphinx-build
@ -201,12 +187,6 @@ proxmox-file-restore/synopsis.rst: ${COMPILEDIR}/proxmox-file-restore
proxmox-file-restore.1: proxmox-file-restore/man1.rst proxmox-file-restore/description.rst proxmox-file-restore/synopsis.rst
rst2man $< >$@
proxmox-backup-debug/synopsis.rst: ${COMPILEDIR}/proxmox-backup-debug
${COMPILEDIR}/proxmox-backup-debug printdoc > proxmox-backup-debug/synopsis.rst
proxmox-backup-debug.1: proxmox-backup-debug/man1.rst proxmox-backup-debug/description.rst proxmox-backup-debug/synopsis.rst
rst2man $< >$@
.PHONY: onlinehelpinfo
onlinehelpinfo:
@echo "Generating OnlineHelpInfo.js..."
@ -216,17 +196,8 @@ onlinehelpinfo:
api-viewer/apidata.js: ${COMPILEDIR}/docgen
${COMPILEDIR}/docgen apidata.js >$@
api-viewer/apidoc.js: ${API_VIEWER_FILES}
cat ${API_VIEWER_FILES} >$@.tmp
mv $@.tmp $@
prune-simulator/prune-simulator.js: ${PRUNE_SIMULATOR_JS_SOURCE}
cat ${PRUNE_SIMULATOR_JS_SOURCE} >$@.tmp
mv $@.tmp $@
lto-barcode/lto-barcode-generator.js: ${LTO_BARCODE_JS_SOURCE}
cat ${LTO_BARCODE_JS_SOURCE} >$@.tmp
mv $@.tmp $@
api-viewer/apidoc.js: api-viewer/apidata.js api-viewer/PBSAPI.js
cat api-viewer/apidata.js api-viewer/PBSAPI.js >$@
.PHONY: html
html: ${GENERATED_SYNOPSIS} images/proxmox-logo.svg custom.css conf.py ${PRUNE_SIMULATOR_FILES} ${LTO_BARCODE_FILES} ${API_VIEWER_SOURCES}
@ -257,7 +228,7 @@ epub3: ${GENERATED_SYNOPSIS}
clean:
rm -r -f *~ *.1 ${BUILDDIR} ${GENERATED_SYNOPSIS} api-viewer/apidata.js
rm -f api-viewer/apidoc.js lto-barcode/lto-barcode-generator.js prune-simulator/prune-simulator.js
rm -f api-viewer/apidoc.js lto-barcode/lto-barcode-generator.js
install_manual_pages: ${MAN1_PAGES} ${MAN5_PAGES}

docs/api-viewer/PBSAPI.js (new file, 526 lines)

@ -0,0 +1,526 @@
// avoid errors when running without development tools
if (!Ext.isDefined(Ext.global.console)) {
var console = {
dir: function() {},
log: function() {}
};
}
Ext.onReady(function() {
Ext.define('pve-param-schema', {
extend: 'Ext.data.Model',
fields: [
'name', 'type', 'typetext', 'description', 'verbose_description',
'enum', 'minimum', 'maximum', 'minLength', 'maxLength',
'pattern', 'title', 'requires', 'format', 'default',
'disallow', 'extends', 'links',
{
name: 'optional',
type: 'boolean'
}
]
});
var store = Ext.define('pve-updated-treestore', {
extend: 'Ext.data.TreeStore',
model: Ext.define('pve-api-doc', {
extend: 'Ext.data.Model',
fields: [
'path', 'info', 'text',
]
}),
proxy: {
type: 'memory',
data: pbsapi
},
sorters: [{
property: 'leaf',
direction: 'ASC'
}, {
property: 'text',
direction: 'ASC'
}],
filterer: 'bottomup',
doFilter: function(node) {
this.filterNodes(node, this.getFilters().getFilterFn(), true);
},
filterNodes: function(node, filterFn, parentVisible) {
var me = this,
bottomUpFiltering = me.filterer === 'bottomup',
match = filterFn(node) && parentVisible || (node.isRoot() && !me.getRootVisible()),
childNodes = node.childNodes,
len = childNodes && childNodes.length, i, matchingChildren;
if (len) {
for (i = 0; i < len; ++i) {
matchingChildren = me.filterNodes(childNodes[i], filterFn, match || bottomUpFiltering) || matchingChildren;
}
if (bottomUpFiltering) {
match = matchingChildren || match;
}
}
node.set("visible", match, me._silentOptions);
return match;
},
}).create();
var render_description = function(value, metaData, record) {
var pdef = record.data;
value = pdef.verbose_description || value;
// TODO: try to render asciidoc correctly
metaData.style = 'white-space:pre-wrap;'
return Ext.htmlEncode(value);
};
var render_type = function(value, metaData, record) {
var pdef = record.data;
return pdef['enum'] ? 'enum' : (pdef.type || 'string');
};
let render_simple_format = function(pdef, type_fallback) {
if (pdef.typetext)
return pdef.typetext;
if (pdef['enum'])
return pdef['enum'].join(' | ');
if (pdef.format)
return pdef.format;
if (pdef.pattern)
return pdef.pattern;
if (pdef.type === 'boolean')
return `<true|false>`;
if (type_fallback && pdef.type)
return `<${pdef.type}>`;
return;
};
let render_format = function(value, metaData, record) {
let pdef = record.data;
metaData.style = 'white-space:normal;'
if (pdef.type === 'array' && pdef.items) {
let format = render_simple_format(pdef.items, true);
return `[${Ext.htmlEncode(format)}, ...]`;
}
return Ext.htmlEncode(render_simple_format(pdef) || '');
};
var real_path = function(path) {
return path.replace(/^.*\/_upgrade_(\/)?/, "/");
};
var permission_text = function(permission) {
let permhtml = "";
if (permission.user) {
if (!permission.description) {
if (permission.user === 'world') {
permhtml += "Accessible without any authentication.";
} else if (permission.user === 'all') {
permhtml += "Accessible by all authenticated users.";
} else {
permhtml += 'Only accessible by user "' +
permission.user + '"';
}
}
} else if (permission.check) {
permhtml += "<pre>Check: " +
Ext.htmlEncode(Ext.JSON.encode(permission.check)) + "</pre>";
} else if (permission.userParam) {
permhtml += `<div>Check if user matches parameter '${permission.userParam}'`;
} else if (permission.or) {
permhtml += "<div>Or<div style='padding-left: 10px;'>";
Ext.Array.each(permission.or, function(sub_permission) {
permhtml += permission_text(sub_permission);
})
permhtml += "</div></div>";
} else if (permission.and) {
permhtml += "<div>And<div style='padding-left: 10px;'>";
Ext.Array.each(permission.and, function(sub_permission) {
permhtml += permission_text(sub_permission);
})
permhtml += "</div></div>";
} else {
//console.log(permission);
permhtml += "Unknown syntax!";
}
return permhtml;
};
var render_docu = function(data) {
var md = data.info;
// console.dir(data);
var items = [];
var clicmdhash = {
GET: 'get',
POST: 'create',
PUT: 'set',
DELETE: 'delete'
};
Ext.Array.each(['GET', 'POST', 'PUT', 'DELETE'], function(method) {
var info = md[method];
if (info) {
var usage = "";
usage += "<table><tr><td>HTTP:&nbsp;&nbsp;&nbsp;</td><td>"
+ method + " " + real_path("/api2/json" + data.path) + "</td></tr>";
var sections = [
{
title: 'Description',
html: Ext.htmlEncode(info.description),
bodyPadding: 10
},
{
title: 'Usage',
html: usage,
bodyPadding: 10
}
];
if (info.parameters && info.parameters.properties) {
var pstore = Ext.create('Ext.data.Store', {
model: 'pve-param-schema',
proxy: {
type: 'memory'
},
groupField: 'optional',
sorters: [
{
property: 'name',
direction: 'ASC'
}
]
});
Ext.Object.each(info.parameters.properties, function(name, pdef) {
pdef.name = name;
pstore.add(pdef);
});
pstore.sort();
var groupingFeature = Ext.create('Ext.grid.feature.Grouping',{
enableGroupingMenu: false,
groupHeaderTpl: '<tpl if="groupValue">Optional</tpl><tpl if="!groupValue">Required</tpl>'
});
sections.push({
xtype: 'gridpanel',
title: 'Parameters',
features: [groupingFeature],
store: pstore,
viewConfig: {
trackOver: false,
stripeRows: true
},
columns: [
{
header: 'Name',
dataIndex: 'name',
flex: 1
},
{
header: 'Type',
dataIndex: 'type',
renderer: render_type,
flex: 1
},
{
header: 'Default',
dataIndex: 'default',
flex: 1
},
{
header: 'Format',
dataIndex: 'type',
renderer: render_format,
flex: 2
},
{
header: 'Description',
dataIndex: 'description',
renderer: render_description,
flex: 6
}
]
});
}
if (info.returns) {
var retinf = info.returns;
var rtype = retinf.type;
if (!rtype && retinf.items)
rtype = 'array';
if (!rtype)
rtype = 'object';
var rpstore = Ext.create('Ext.data.Store', {
model: 'pve-param-schema',
proxy: {
type: 'memory'
},
groupField: 'optional',
sorters: [
{
property: 'name',
direction: 'ASC'
}
]
});
var properties;
if (rtype === 'array' && retinf.items.properties) {
properties = retinf.items.properties;
}
if (rtype === 'object' && retinf.properties) {
properties = retinf.properties;
}
Ext.Object.each(properties, function(name, pdef) {
pdef.name = name;
rpstore.add(pdef);
});
rpstore.sort();
var groupingFeature = Ext.create('Ext.grid.feature.Grouping',{
enableGroupingMenu: false,
groupHeaderTpl: '<tpl if="groupValue">Optional</tpl><tpl if="!groupValue">Obligatory</tpl>'
});
var returnhtml;
if (retinf.items) {
returnhtml = '<pre>items: ' + Ext.htmlEncode(JSON.stringify(retinf.items, null, 4)) + '</pre>';
}
if (retinf.properties) {
returnhtml = returnhtml || '';
returnhtml += '<pre>properties:' + Ext.htmlEncode(JSON.stringify(retinf.properties, null, 4)) + '</pre>';
}
var rawSection = Ext.create('Ext.panel.Panel', {
bodyPadding: '0px 10px 10px 10px',
html: returnhtml,
hidden: true
});
sections.push({
xtype: 'gridpanel',
title: 'Returns: ' + rtype,
features: [groupingFeature],
store: rpstore,
viewConfig: {
trackOver: false,
stripeRows: true
},
columns: [
{
header: 'Name',
dataIndex: 'name',
flex: 1
},
{
header: 'Type',
dataIndex: 'type',
renderer: render_type,
flex: 1
},
{
header: 'Default',
dataIndex: 'default',
flex: 1
},
{
header: 'Format',
dataIndex: 'type',
renderer: render_format,
flex: 2
},
{
header: 'Description',
dataIndex: 'description',
renderer: render_description,
flex: 6
}
],
bbar: [
{
xtype: 'button',
text: 'Show RAW',
handler: function(btn) {
rawSection.setVisible(!rawSection.isVisible());
btn.setText(rawSection.isVisible() ? 'Hide RAW' : 'Show RAW');
}}
]
});
sections.push(rawSection);
}
if (!data.path.match(/\/_upgrade_/)) {
var permhtml = '';
if (!info.permissions) {
permhtml = "Root only.";
} else {
if (info.permissions.description) {
permhtml += "<div style='white-space:pre-wrap;padding-bottom:10px;'>" +
Ext.htmlEncode(info.permissions.description) + "</div>";
}
permhtml += permission_text(info.permissions);
}
// we do not have this information for PBS api
//if (!info.allowtoken) {
// permhtml += "<br />This API endpoint is not available for API tokens."
//}
sections.push({
title: 'Required permissions',
bodyPadding: 10,
html: permhtml
});
}
items.push({
title: method,
autoScroll: true,
defaults: {
border: false
},
items: sections
});
}
});
var ct = Ext.getCmp('docview');
ct.setTitle("Path: " + real_path(data.path));
ct.removeAll(true);
ct.add(items);
ct.setActiveTab(0);
};
Ext.define('Ext.form.SearchField', {
extend: 'Ext.form.field.Text',
alias: 'widget.searchfield',
emptyText: 'Search...',
flex: 1,
inputType: 'search',
listeners: {
'change': function(){
var value = this.getValue();
if (!Ext.isEmpty(value)) {
store.filter({
property: 'path',
value: value,
anyMatch: true
});
} else {
store.clearFilter();
}
}
}
});
var tree = Ext.create('Ext.tree.Panel', {
title: 'Resource Tree',
tbar: [
{
xtype: 'searchfield',
}
],
tools: [
{
type: 'expand',
tooltip: 'Expand all',
tooltipType: 'title',
callback: (tree) => tree.expandAll(),
},
{
type: 'collapse',
tooltip: 'Collapse all',
tooltipType: 'title',
callback: (tree) => tree.collapseAll(),
},
],
store: store,
width: 200,
region: 'west',
split: true,
margins: '5 0 5 5',
rootVisible: false,
listeners: {
selectionchange: function(v, selections) {
if (!selections[0])
return;
var rec = selections[0];
render_docu(rec.data);
location.hash = '#' + rec.data.path;
}
}
});
Ext.create('Ext.container.Viewport', {
layout: 'border',
renderTo: Ext.getBody(),
items: [
tree,
{
xtype: 'tabpanel',
title: 'Documentation',
id: 'docview',
region: 'center',
margins: '5 5 5 0',
layout: 'fit',
items: []
}
]
});
var deepLink = function() {
var path = window.location.hash.substring(1).replace(/\/\s*$/, '');
var endpoint = store.findNode('path', path);
if (endpoint) {
tree.getSelectionModel().select(endpoint);
tree.expandPath(endpoint.getPath());
render_docu(endpoint.data);
}
};
window.onhashchange = deepLink;
deepLink();
});


@ -49,31 +49,15 @@ Environment Variables
When set, this value is used for the password required for the backup server.
You can also set this to an API token secret.
``PBS_PASSWORD_FD``, ``PBS_PASSWORD_FILE``, ``PBS_PASSWORD_CMD``
Like ``PBS_PASSWORD``, but read data from an open file descriptor, a file
name or from the `stdout` of a command, respectively. The first defined
environment variable from the order above is preferred.
``PBS_ENCRYPTION_PASSWORD``
When set, this value is used to access the secret encryption key (if
protected by password).
``PBS_ENCRYPTION_PASSWORD_FD``, ``PBS_ENCRYPTION_PASSWORD_FILE``, ``PBS_ENCRYPTION_PASSWORD_CMD``
Like ``PBS_ENCRYPTION_PASSWORD``, but read data from an open file descriptor,
a file name or from the `stdout` of a command, respectively. The first
defined environment variable from the order above is preferred.
``PBS_FINGERPRINT``
When set, this value is used to verify the server
certificate (only used if the system CA certificates cannot validate the
certificate).
.. Note:: Passwords must be valid UTF-8 and may not contain
newlines. For your convenience, we use only the first line as the
password, so you can add arbitrary comments after the first
newline.
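The lookup order can be sketched in a few lines (illustrative only; the helper
name ``lookup_secret`` and the exact precedence of the plain variable over its
``_FD``/``_FILE``/``_CMD`` variants are assumptions, not the client's actual
implementation):

.. code-block:: rust

    use std::fs::File;
    use std::io::{BufRead, BufReader};
    use std::os::unix::io::FromRawFd;
    use std::process::Command;

    // Hypothetical helper mirroring the documented lookup order; only the
    // first line of whatever gets read is used as the secret.
    fn lookup_secret(base: &str) -> Option<String> {
        if let Ok(v) = std::env::var(base) {
            return v.lines().next().map(String::from);
        }
        if let Ok(fd) = std::env::var(format!("{}_FD", base)) {
            // Safety: assumes the caller really left this descriptor open for us.
            let file = unsafe { File::from_raw_fd(fd.parse().ok()?) };
            return first_line(file);
        }
        if let Ok(path) = std::env::var(format!("{}_FILE", base)) {
            return first_line(File::open(path).ok()?);
        }
        if let Ok(cmd) = std::env::var(format!("{}_CMD", base)) {
            let out = Command::new("sh").arg("-c").arg(cmd).output().ok()?;
            return String::from_utf8(out.stdout).ok()?.lines().next().map(String::from);
        }
        None
    }

    fn first_line(file: File) -> Option<String> {
        BufReader::new(file).lines().next()?.ok()
    }

    // e.g. lookup_secret("PBS_PASSWORD") or lookup_secret("PBS_ENCRYPTION_PASSWORD")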
Output Format
-------------


@ -21,7 +21,3 @@ Command Line Tools
.. include:: pxar/description.rst
``proxmox-backup-debug``
~~~~~~~~~~~~~~~~~~~~~~~~
.. include:: proxmox-backup-debug/description.rst


@ -24,13 +24,11 @@ future plans to support 32-bit processors.
How long will my Proxmox Backup Server version be supported?
------------------------------------------------------------
+-----------------------+----------------------+---------------+------------+--------------------+
+-----------------------+--------------------+---------------+------------+--------------------+
|Proxmox Backup Version | Debian Version | First Release | Debian EOL | Proxmox Backup EOL |
+=======================+======================+===============+============+====================+
|Proxmox Backup 2.x | Debian 11 (Bullseye) | 2021-07 | tba | tba |
+-----------------------+----------------------+---------------+------------+--------------------+
|Proxmox Backup 1.x | Debian 10 (Buster) | 2020-11 | ~Q2/2022 | Q2-Q3/2022 |
+-----------------------+----------------------+---------------+------------+--------------------+
+=======================+====================+===============+============+====================+
|Proxmox Backup 1.x | Debian 10 (Buster) | 2020-11 | tba | tba |
+-----------------------+--------------------+---------------+------------+--------------------+
Can I copy or synchronize my datastore to another location?


@ -51,7 +51,7 @@ data:
* - ``MAGIC: [u8; 8]``
* - ``CRC32: [u8; 4]``
* - ``IV: [u8; 16]``
* - ``ÌV: [u8; 16]``
* - ``TAG: [u8; 16]``
* - ``Data: (max 16MiB)``


@ -19,7 +19,7 @@ for various management tasks such as disk management.
`Proxmox Backup`_ without the server part.
The disk image (ISO file) provided by Proxmox includes a complete Debian system
as well as all necessary packages for the `Proxmox Backup`_ server.
("buster" for version 1.x) as well as all necessary packages for the `Proxmox Backup`_ server.
The installer will guide you through the setup process and allow
you to partition the local disk(s), apply basic system configurations


@ -34,7 +34,17 @@
</style>
<link rel="stylesheet" type="text/css" href="font-awesome/css/font-awesome.css"/>
<script type="text/javascript" src="extjs/ext-all.js"></script>
<script type="text/javascript" src="lto-barcode-generator.js"></script>
<script type="text/javascript" src="code39.js"></script>
<script type="text/javascript" src="prefix-field.js"></script>
<script type="text/javascript" src="label-style.js"></script>
<script type="text/javascript" src="tape-type.js"></script>
<script type="text/javascript" src="paper-size.js"></script>
<script type="text/javascript" src="page-layout.js"></script>
<script type="text/javascript" src="page-calibration.js"></script>
<script type="text/javascript" src="label-list.js"></script>
<script type="text/javascript" src="label-setup.js"></script>
<script type="text/javascript" src="lto-barcode.js"></script>
</head>
<body>
</body>


@ -1,5 +1,7 @@
// for toolkit.js
function gettext(val) { return val; };
// FIXME: HACK! Makes scrolling in number spinner work again. fixed in ExtJS >= 6.1
if (Ext.isFirefox) {
Ext.$eventNameMap.DOMMouseScroll = 'DOMMouseScroll';
}
function draw_labels(target_id, label_list, page_layout, calibration) {
let max_labels = compute_max_labels(page_layout);


@ -17,13 +17,15 @@ update``.
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list``
deb http://ftp.debian.org/debian bullseye main contrib
deb http://ftp.debian.org/debian bullseye-updates main contrib
deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib
# security updates
deb http://security.debian.org/debian-security bullseye-security main contrib
deb http://security.debian.org/debian-security buster/updates main contrib
.. FIXME for 7.0: change security update suite to bullseye-security
In addition, you need a package repository from Proxmox to get Proxmox Backup
updates.
@ -43,21 +45,31 @@ key with the following commands:
.. code-block:: console
# wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
# wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
Verify the SHA512 checksum afterwards with the expected output below:
Verify the SHA512 checksum afterwards with:
.. code-block:: console
# sha512sum /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
7fb03ec8a1675723d2853b84aa4fdb49a46a3bb72b9951361488bfd19b29aab0a789a4f8c7406e71a69aabbc727c936d3549731c4659ffa1a08f44db8fdcebfa /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
# sha512sum /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
and the md5sum, with the expected output below:
The output should be:
.. code-block:: console
# md5sum /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
bcc35c7173e0845c0d6ad6470b70f50e /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
acca6f416917e8e11490a08a1e2842d500b3a5d9f322c6319db0927b2901c3eae23cfb5cd5df6facf2b57399d3cfa52ad7769ebdd75d9b204549ca147da52626 /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
and the md5sum:
.. code-block:: console
# md5sum /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
Here, the output should be:
.. code-block:: console
f3f6c5a3a67baf38ad178e5ff1ee270c /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
.. _sysadmin_package_repos_enterprise:
@ -72,7 +84,7 @@ enabled by default:
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list.d/pbs-enterprise.list``
deb https://enterprise.proxmox.com/debian/pbs bullseye pbs-enterprise
deb https://enterprise.proxmox.com/debian/pbs buster pbs-enterprise
To never miss important security fixes, the superuser (``root@pam`` user) is
@ -102,15 +114,15 @@ We recommend to configure this repository in ``/etc/apt/sources.list``.
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list``
deb http://ftp.debian.org/debian bullseye main contrib
deb http://ftp.debian.org/debian bullseye-updates main contrib
deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib
# PBS pbs-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pbs bullseye pbs-no-subscription
deb http://download.proxmox.com/debian/pbs buster pbs-no-subscription
# security updates
deb http://security.debian.org/debian-security bullseye-security main contrib
deb http://security.debian.org/debian-security buster/updates main contrib
`Proxmox Backup`_ Test Repository
@ -128,7 +140,7 @@ You can access this repository by adding the following line to
.. code-block:: sources.list
:caption: sources.list entry for ``pbstest``
deb http://download.proxmox.com/debian/pbs bullseye pbstest
deb http://download.proxmox.com/debian/pbs buster pbstest
.. _package_repositories_client_only:
@ -149,26 +161,6 @@ APT-based Proxmox Backup Client Repository
For modern Linux distributions using `apt` as their package manager, like all
Debian and Ubuntu derivatives, you may be able to use the APT-based repository.
In order to configure this repository you need to first :ref:`setup the Proxmox
release key <package_repos_secure_apt>`. After that, add the repository URL to
the APT sources lists.
**Repositories for Debian 11 (Bullseye) based releases**
This repository is tested with:
- Debian Bullseye
Edit the file ``/etc/apt/sources.list.d/pbs-client.list`` and add the following
snippet:
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list``
deb http://download.proxmox.com/debian/pbs-client bullseye main
**Repositories for Debian 10 (Buster) based releases**
This repository is tested with:
- Debian Buster
@ -176,6 +168,9 @@ This repository is tested with:
It may work with older released versions, and should work with more recent ones.
In order to configure this repository you need to first :ref:`setup the Proxmox
release key <package_repos_secure_apt>`. After that, add the repository URL to
the APT sources lists.
Edit the file ``/etc/apt/sources.list.d/pbs-client.list`` and add the following
snippet:


@ -1,14 +0,0 @@
Implements debugging functionality to inspect Proxmox Backup datastore
files and verify the integrity of chunks.
Also contains an 'api' subcommand through which arbitrary API paths can be
called (get/create/set/delete), and which can display their parameters
(usage) and their child-links (ls).
By default, it connects to the proxmox-backup-proxy on localhost via https,
but by setting the environment variable `PROXMOX_DEBUG_API_CODE` to `1` the
tool directly calls the corresponding code.
.. WARNING:: Using `PROXMOX_DEBUG_API_CODE` can be dangerous and is only intended
for debugging purposes. It is not intended for use on a production system.


@ -1,33 +0,0 @@
==========================
proxmox-backup-debug
==========================
.. include:: ../epilog.rst
-------------------------------------------------------------
Debugging command line tool for Backup and Restore
-------------------------------------------------------------
:Author: |AUTHOR|
:Version: Version |VERSION|
:Manual section: 1
Synopsis
==========
.. include:: synopsis.rst
Common Options
==============
.. include:: ../output-format.rst
Description
============
.. include:: description.rst
.. include:: ../pbs-copyright.rst


@ -1,5 +1,7 @@
// for Toolkit.js
function gettext(val) { return val; };
// FIXME: HACK! Makes scrolling in number spinner work again. fixed in ExtJS >= 6.1
if (Ext.isFirefox) {
Ext.$eventNameMap.DOMMouseScroll = 'DOMMouseScroll';
}
Ext.onReady(function() {
const NOW = new Date();
@ -35,6 +37,7 @@ Ext.onReady(function() {
editable: true,
displayField: 'text',
valueField: 'value',
queryMode: 'local',


@ -3,6 +3,9 @@
Tape Backup
===========
.. CAUTION:: Tape Backup is a technical preview feature, not meant for
production use.
.. image:: images/screenshots/pbs-gui-tape-changer-overview.png
:align: right
:alt: Tape Backup: Tape changer overview
@ -845,17 +848,6 @@ Update Inventory
Restore Catalog
~~~~~~~~~~~~~~~
To restore a catalog from an existing tape, just insert the tape into the drive
and execute:
.. code-block:: console
# proxmox-tape catalog
You can restore from a tape even without an existing catalog, but only the
whole media set. If you do this, the catalog will be automatically created.
Encryption Key Management
~~~~~~~~~~~~~~~~~~~~~~~~~


@ -164,66 +164,3 @@ Verification of encrypted chunks
For encrypted chunks, only the checksum of the original (plaintext) data is
available, making it impossible for the server (without the encryption key) to
verify its content against it. Instead, only the CRC-32 checksum gets checked.
Troubleshooting
---------------
Index files (.fidx, .didx) contain information about how to rebuild a file.
More precisely, they contain an ordered list of references to the chunks that
the original file was split up into. If there is something wrong with a
snapshot, it might be useful to find out which chunks are referenced in this
specific snapshot, and to check whether all of them are present and intact.
The command for getting the list of referenced chunks could look something
like this:
.. code-block:: console
# proxmox-backup-debug inspect file drive-scsi0.img.fidx
The same command can be used to inspect .blob files; without ``--decode``,
just the size and the encryption type, if any, are printed. If ``--decode``
is set, the blob file is decoded into the specified file ('-' will decode it
directly to stdout).
.. code-block:: console
# proxmox-backup-debug inspect file qemu-server.conf.blob --decode -
would print the decoded contents of `qemu-server.conf.blob`. If the file you're
trying to inspect is encrypted, a path to the keyfile has to be provided using
``--keyfile``.
Checking in which index files a specific chunk file is referenced can be done
with:
.. code-block:: console
# proxmox-backup-debug inspect chunk b531d3ffc9bd7c65748a61198c060678326a431db7eded874c327b7986e595e0 --reference-filter /path/in/a/datastore/directory
Here, ``--reference-filter`` specifies where index files should be searched;
this can be an arbitrary path. If, for some reason, the filename of the chunk
was changed, you can explicitly specify the digest using ``--digest``; by
default, the chunk filename is used as the digest to look for. Specifying no
``--reference-filter`` will just print the CRC and encryption status of the
chunk. You can also decode chunks; to do so, ``--decode`` has to be set. If
the chunk is encrypted, a ``--keyfile`` has to be provided for decoding.
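To check for a chunk's presence by hand, its expected path can be derived from
the digest. A minimal sketch, assuming the usual
``.chunks/<first four hex digits>/<digest>`` layout of a datastore directory:

.. code-block:: rust

    use std::path::{Path, PathBuf};

    // Build the expected on-disk path of a chunk from its hex digest,
    // assuming the `.chunks/<first 4 hex digits>/<digest>` layout.
    fn chunk_path(datastore: &Path, digest_hex: &str) -> PathBuf {
        datastore.join(".chunks").join(&digest_hex[..4]).join(digest_hex)
    }

    fn main() {
        let store = Path::new("/path/in/a/datastore/directory");
        let digest = "b531d3ffc9bd7c65748a61198c060678326a431db7eded874c327b7986e595e0";
        let path = chunk_path(store, digest);
        println!("{} present: {}", path.display(), path.exists());
    }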
Restore without a running PBS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is possible to restore specific files from snapshots without a running PBS
instance, using the `recover` sub-command, provided you have access to the
intact index and chunk files. Note that you also need the corresponding key
file if the backup was encrypted.
.. code-block:: console
# proxmox-backup-debug recover index drive-scsi0.img.fidx /path/to/.chunks
In the above example, the `/path/to/.chunks` argument is the path to the
directory that contains the chunks, and `drive-scsi0.img.fidx` is the index
file of the file you'd like to restore. Both paths can be absolute or
relative. With ``--skip-crc``, it is possible to disable the CRC checks of
the chunks; this will speed up the process slightly and allows for trying to
restore (partially) corrupt chunks. It's recommended to always try without
the skip-CRC option first.
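Conceptually, the recover step concatenates the referenced chunks in index
order. A simplified sketch, assuming plaintext (uncompressed, unencrypted)
chunks and a hypothetical ``index_digests`` parser; the real tool additionally
handles compression, encryption and the CRC checks mentioned above:

.. code-block:: rust

    use std::fs::File;
    use std::io::{self, Read, Write};
    use std::path::Path;

    // Hypothetical: yields the ordered chunk digests referenced by a .fidx file.
    fn index_digests(_fidx: &Path) -> io::Result<Vec<String>> {
        unimplemented!("fixed-index parsing is omitted in this sketch")
    }

    fn recover(fidx: &Path, chunk_dir: &Path, out: &mut impl Write) -> io::Result<()> {
        for digest in index_digests(fidx)? {
            let path = chunk_dir.join(&digest[..4]).join(&digest);
            let mut data = Vec::new();
            File::open(path)?.read_to_end(&mut data)?;
            out.write_all(&data)?; // --skip-crc would bypass a CRC check here
        }
        Ok(())
    }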


@ -1 +1 @@
deb https://enterprise.proxmox.com/debian/pbs bullseye pbs-enterprise
deb https://enterprise.proxmox.com/debian/pbs buster pbs-enterprise


@ -2,8 +2,8 @@ use std::io::Write;
use anyhow::{Error};
use pbs_api_types::Authid;
use pbs_client::{HttpClient, HttpClientOptions, BackupReader};
use proxmox_backup::api2::types::Authid;
use proxmox_backup::client::{HttpClient, HttpClientOptions, BackupReader};
pub struct DummyWriter {
bytes: usize,
@ -59,7 +59,7 @@ async fn run() -> Result<(), Error> {
}
fn main() {
if let Err(err) = pbs_runtime::main(run()) {
if let Err(err) = proxmox_backup::tools::runtime::main(run()) {
eprintln!("ERROR: {}", err);
}
println!("DONE");


@ -69,7 +69,7 @@ fn send_request(
}
fn main() -> Result<(), Error> {
pbs_runtime::main(run())
proxmox_backup::tools::runtime::main(run())
}
async fn run() -> Result<(), Error> {


@ -69,7 +69,7 @@ fn send_request(
}
fn main() -> Result<(), Error> {
pbs_runtime::main(run())
proxmox_backup::tools::runtime::main(run())
}
async fn run() -> Result<(), Error> {


@ -6,10 +6,10 @@ use hyper::{Body, Request, Response};
use openssl::ssl::{SslAcceptor, SslFiletype, SslMethod};
use tokio::net::{TcpListener, TcpStream};
use pbs_buildcfg::configdir;
use proxmox_backup::configdir;
fn main() -> Result<(), Error> {
pbs_runtime::main(run())
proxmox_backup::tools::runtime::main(run())
}
async fn run() -> Result<(), Error> {


@ -5,7 +5,7 @@ use hyper::{Body, Request, Response};
use tokio::net::{TcpListener, TcpStream};
fn main() -> Result<(), Error> {
pbs_runtime::main(run())
proxmox_backup::tools::runtime::main(run())
}
async fn run() -> Result<(), Error> {


@ -5,7 +5,7 @@ extern crate proxmox_backup;
use anyhow::{Error};
use std::io::{Read, Write};
use pbs_datastore::Chunker;
use proxmox_backup::backup::*;
struct ChunkWriter {
chunker: Chunker,


@ -1,6 +1,7 @@
extern crate proxmox_backup;
use pbs_datastore::Chunker;
//use proxmox_backup::backup::chunker::*;
use proxmox_backup::backup::*;
fn main() {


@ -3,7 +3,7 @@ use futures::*;
extern crate proxmox_backup;
use pbs_client::ChunkStream;
use proxmox_backup::backup::*;
// Test Chunker with real data read from a file.
//
@ -13,7 +13,7 @@ use pbs_client::ChunkStream;
// Note: I can currently get about 830MB/s
fn main() {
if let Err(err) = pbs_runtime::main(run()) {
if let Err(err) = proxmox_backup::tools::runtime::main(run()) {
panic!("ERROR: {}", err);
}
}


@ -1,7 +1,7 @@
use anyhow::{Error};
use pbs_client::{HttpClient, HttpClientOptions, BackupWriter};
use pbs_api_types::Authid;
use proxmox_backup::api2::types::Authid;
use proxmox_backup::client::*;
async fn upload_speed() -> Result<f64, Error> {
@ -27,7 +27,7 @@ async fn upload_speed() -> Result<f64, Error> {
}
fn main() {
match pbs_runtime::main(upload_speed()) {
match proxmox_backup::tools::runtime::main(upload_speed()) {
Ok(mbs) => {
println!("average upload speed: {} MB/s", mbs);
}


@ -1,20 +0,0 @@
[package]
name = "pbs-api-types"
version = "0.1.0"
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2018"
description = "general API type helpers for PBS"
[dependencies]
anyhow = "1.0"
lazy_static = "1.4"
libc = "0.2"
nix = "0.19.1"
openssl = "0.10"
regex = "1.2"
serde = { version = "1.0", features = ["derive"] }
proxmox = { version = "0.13.3", default-features = false, features = [ "api-macro" ] }
proxmox-systemd = { path = "../proxmox-systemd" }
pbs-tools = { path = "../pbs-tools" }


@ -1,284 +0,0 @@
use std::str::FromStr;
use serde::{Deserialize, Serialize};
use serde::de::{value, IntoDeserializer};
use proxmox::api::api;
use proxmox::api::schema::{
ApiStringFormat, BooleanSchema, EnumEntry, Schema, StringSchema,
};
use proxmox::{constnamedbitmap, const_regex};
const_regex! {
pub ACL_PATH_REGEX = concat!(r"^(?:/|", r"(?:/", PROXMOX_SAFE_ID_REGEX_STR!(), ")+", r")$");
}
// define Privilege bitfield
constnamedbitmap! {
/// Contains a list of privilege name to privilege value mappings.
///
/// The names are used when displaying/persisting privileges anywhere; the values are used to
/// allow easy matching of privileges as bitflags.
PRIVILEGES: u64 => {
/// Sys.Audit allows knowing about the system and its status
PRIV_SYS_AUDIT("Sys.Audit");
/// Sys.Modify allows modifying system-level configuration
PRIV_SYS_MODIFY("Sys.Modify");
/// Sys.PowerManagement allows powering off/rebooting/... the system
PRIV_SYS_POWER_MANAGEMENT("Sys.PowerManagement");
/// Datastore.Audit allows knowing about a datastore,
/// including reading the configuration entry and listing its contents
PRIV_DATASTORE_AUDIT("Datastore.Audit");
/// Datastore.Allocate allows creating or deleting datastores
PRIV_DATASTORE_ALLOCATE("Datastore.Allocate");
/// Datastore.Modify allows modifying a datastore and its contents
PRIV_DATASTORE_MODIFY("Datastore.Modify");
/// Datastore.Read allows reading arbitrary backup contents
PRIV_DATASTORE_READ("Datastore.Read");
/// Allows verifying a datastore
PRIV_DATASTORE_VERIFY("Datastore.Verify");
/// Datastore.Backup allows Datastore.Read|Verify and creating new snapshots,
/// but also requires backup ownership
PRIV_DATASTORE_BACKUP("Datastore.Backup");
/// Datastore.Prune allows deleting snapshots,
/// but also requires backup ownership
PRIV_DATASTORE_PRUNE("Datastore.Prune");
/// Permissions.Modify allows modifying ACLs
PRIV_PERMISSIONS_MODIFY("Permissions.Modify");
/// Remote.Audit allows reading remote.cfg and sync.cfg entries
PRIV_REMOTE_AUDIT("Remote.Audit");
/// Remote.Modify allows modifying remote.cfg
PRIV_REMOTE_MODIFY("Remote.Modify");
/// Remote.Read allows reading data from a configured `Remote`
PRIV_REMOTE_READ("Remote.Read");
/// Sys.Console allows access to the system's console
PRIV_SYS_CONSOLE("Sys.Console");
/// Tape.Audit allows reading tape backup configuration and status
PRIV_TAPE_AUDIT("Tape.Audit");
/// Tape.Modify allows modifying tape backup configuration
PRIV_TAPE_MODIFY("Tape.Modify");
/// Tape.Write allows writing tape media
PRIV_TAPE_WRITE("Tape.Write");
/// Tape.Read allows reading tape backup configuration and media contents
PRIV_TAPE_READ("Tape.Read");
/// Realm.Allocate allows viewing, creating, modifying and deleting realms
PRIV_REALM_ALLOCATE("Realm.Allocate");
}
}
/// Admin always has all privileges. It can do everything except a few actions
/// which are limited to the `root@pam` superuser.
pub const ROLE_ADMIN: u64 = std::u64::MAX;
/// NoAccess can be used to remove privileges from specific (sub-)paths
pub const ROLE_NO_ACCESS: u64 = 0;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Audit can view configuration and status information, but not modify it.
pub const ROLE_AUDIT: u64 = 0
| PRIV_SYS_AUDIT
| PRIV_DATASTORE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Admin can do anything on the datastore.
pub const ROLE_DATASTORE_ADMIN: u64 = 0
| PRIV_DATASTORE_AUDIT
| PRIV_DATASTORE_MODIFY
| PRIV_DATASTORE_READ
| PRIV_DATASTORE_VERIFY
| PRIV_DATASTORE_BACKUP
| PRIV_DATASTORE_PRUNE;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Reader can read/verify datastore content and do restore
pub const ROLE_DATASTORE_READER: u64 = 0
| PRIV_DATASTORE_AUDIT
| PRIV_DATASTORE_VERIFY
| PRIV_DATASTORE_READ;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Backup can do backup and restore, but no prune.
pub const ROLE_DATASTORE_BACKUP: u64 = 0
| PRIV_DATASTORE_BACKUP;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.PowerUser can do backup, restore, and prune.
pub const ROLE_DATASTORE_POWERUSER: u64 = 0
| PRIV_DATASTORE_PRUNE
| PRIV_DATASTORE_BACKUP;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Audit can audit the datastore.
pub const ROLE_DATASTORE_AUDIT: u64 = 0
| PRIV_DATASTORE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Remote.Audit can audit the remote
pub const ROLE_REMOTE_AUDIT: u64 = 0
| PRIV_REMOTE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Remote.Admin can do anything on the remote.
pub const ROLE_REMOTE_ADMIN: u64 = 0
| PRIV_REMOTE_AUDIT
| PRIV_REMOTE_MODIFY
| PRIV_REMOTE_READ;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Remote.SyncOperator can do read and prune on the remote.
pub const ROLE_REMOTE_SYNC_OPERATOR: u64 = 0
| PRIV_REMOTE_AUDIT
| PRIV_REMOTE_READ;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Audit can audit the tape backup configuration and media content
pub const ROLE_TAPE_AUDIT: u64 = 0
| PRIV_TAPE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Admin can do anything on the tape backup
pub const ROLE_TAPE_ADMIN: u64 = 0
| PRIV_TAPE_AUDIT
| PRIV_TAPE_MODIFY
| PRIV_TAPE_READ
| PRIV_TAPE_WRITE;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Operator can do tape backup and restore (but no configuration changes)
pub const ROLE_TAPE_OPERATOR: u64 = 0
| PRIV_TAPE_AUDIT
| PRIV_TAPE_READ
| PRIV_TAPE_WRITE;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Reader can do read and inspect tape content
pub const ROLE_TAPE_READER: u64 = 0
| PRIV_TAPE_AUDIT
| PRIV_TAPE_READ;
/// NoAccess can be used to remove privileges from specific (sub-)paths
pub const ROLE_NAME_NO_ACCESS: &str = "NoAccess";
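// Since the roles above are plain `u64` bitmasks OR-ed together from
// PRIVILEGES, a permission check boils down to a bitwise AND. Illustrative
// sketch only; `role_has_priv` is not part of this module:

/// Returns true if `role` contains every bit of `required`.
fn role_has_priv(role: u64, required: u64) -> bool {
    role & required == required
}

// e.g. role_has_priv(ROLE_DATASTORE_READER, PRIV_DATASTORE_READ) is true,
// while role_has_priv(ROLE_DATASTORE_BACKUP, PRIV_DATASTORE_PRUNE) is false.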
#[api(
type_text: "<role>",
)]
#[repr(u64)]
#[derive(Serialize, Deserialize)]
/// Enum representing roles via their [PRIVILEGES] combination.
///
/// Since privileges are implemented as bitflags, each unique combination of privileges maps to a
/// single, unique `u64` value that is used in this enum definition.
pub enum Role {
/// Administrator
Admin = ROLE_ADMIN,
/// Auditor
Audit = ROLE_AUDIT,
/// Disable Access
NoAccess = ROLE_NO_ACCESS,
/// Datastore Administrator
DatastoreAdmin = ROLE_DATASTORE_ADMIN,
/// Datastore Reader (inspect datastore content and do restores)
DatastoreReader = ROLE_DATASTORE_READER,
/// Datastore Backup (backup and restore owned backups)
DatastoreBackup = ROLE_DATASTORE_BACKUP,
/// Datastore PowerUser (backup, restore and prune owned backup)
DatastorePowerUser = ROLE_DATASTORE_POWERUSER,
/// Datastore Auditor
DatastoreAudit = ROLE_DATASTORE_AUDIT,
/// Remote Auditor
RemoteAudit = ROLE_REMOTE_AUDIT,
/// Remote Administrator
RemoteAdmin = ROLE_REMOTE_ADMIN,
/// Synchronisation Operator
RemoteSyncOperator = ROLE_REMOTE_SYNC_OPERATOR,
/// Tape Auditor
TapeAudit = ROLE_TAPE_AUDIT,
/// Tape Administrator
TapeAdmin = ROLE_TAPE_ADMIN,
/// Tape Operator
TapeOperator = ROLE_TAPE_OPERATOR,
/// Tape Reader
TapeReader = ROLE_TAPE_READER,
}
impl FromStr for Role {
type Err = value::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Self::deserialize(s.into_deserializer())
}
}
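// Usage sketch for the FromStr impl above (illustrative, not part of the
// original module): the serde string deserializer accepts the variant names,
// and `repr(u64)` makes the privilege bits directly available.
//
//     let role: Role = "DatastoreBackup".parse()?;
//     assert_eq!(role as u64, ROLE_DATASTORE_BACKUP);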
pub const ACL_PATH_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&ACL_PATH_REGEX);
pub const ACL_PATH_SCHEMA: Schema = StringSchema::new(
"Access control path.")
.format(&ACL_PATH_FORMAT)
.min_length(1)
.max_length(128)
.schema();
pub const ACL_PROPAGATE_SCHEMA: Schema = BooleanSchema::new(
"Allow to propagate (inherit) permissions.")
.default(true)
.schema();
pub const ACL_UGID_TYPE_SCHEMA: Schema = StringSchema::new(
"Type of 'ugid' property.")
.format(&ApiStringFormat::Enum(&[
EnumEntry::new("user", "User"),
EnumEntry::new("group", "Group")]))
.schema();
#[api(
properties: {
propagate: {
schema: ACL_PROPAGATE_SCHEMA,
},
path: {
schema: ACL_PATH_SCHEMA,
},
ugid_type: {
schema: ACL_UGID_TYPE_SCHEMA,
},
ugid: {
type: String,
description: "User or Group ID.",
},
roleid: {
type: Role,
}
}
)]
#[derive(Serialize, Deserialize)]
/// ACL list entry.
pub struct AclListItem {
pub path: String,
pub ugid: String,
pub ugid_type: String,
pub propagate: bool,
pub roleid: String,
}


@ -1,57 +0,0 @@
use std::fmt::{self, Display};
use anyhow::Error;
use serde::{Deserialize, Serialize};
use proxmox::api::api;
use pbs_tools::format::{as_fingerprint, bytes_as_fingerprint};
#[api(default: "encrypt")]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
/// Defines whether data is encrypted (using an AEAD cipher), only signed, or neither.
pub enum CryptMode {
/// Don't encrypt.
None,
/// Encrypt.
Encrypt,
/// Only sign.
SignOnly,
}
#[derive(Debug, Eq, PartialEq, Hash, Clone, Deserialize, Serialize)]
#[serde(transparent)]
/// 32-byte fingerprint, usually calculated with SHA256.
pub struct Fingerprint {
#[serde(with = "bytes_as_fingerprint")]
bytes: [u8; 32],
}
impl Fingerprint {
pub fn new(bytes: [u8; 32]) -> Self {
Self { bytes }
}
pub fn bytes(&self) -> &[u8; 32] {
&self.bytes
}
}
/// Display as short key ID
impl Display for Fingerprint {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", as_fingerprint(&self.bytes[0..8]))
}
}
impl std::str::FromStr for Fingerprint {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Error> {
let mut tmp = s.to_string();
tmp.retain(|c| c != ':');
let bytes = proxmox::tools::hex_to_digest(&tmp)?;
Ok(Fingerprint::new(bytes))
}
}
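// Round-trip sketch (illustrative, not part of the original module): FromStr
// strips ':' separators and hex-decodes the digest, while Display prints only
// the short key ID (the first 8 bytes).
fn _fingerprint_example() -> Result<(), Error> {
    let hex = "aa".repeat(32); // 32 bytes -> 64 hex characters
    let fp: Fingerprint = hex.parse()?;
    println!("short key ID: {}", fp); // "aa:aa:aa:aa:aa:aa:aa:aa"
    Ok(())
}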


@ -1,622 +0,0 @@
use serde::{Deserialize, Serialize};
use proxmox::api::api;
use proxmox::api::schema::{
ApiStringFormat, ApiType, ArraySchema, EnumEntry, IntegerSchema, ReturnType, Schema,
StringSchema, Updater,
};
use proxmox::const_regex;
use crate::{
PROXMOX_SAFE_ID_FORMAT, SHA256_HEX_REGEX, SINGLE_LINE_COMMENT_SCHEMA, CryptMode, UPID,
Fingerprint, Userid, Authid,
GC_SCHEDULE_SCHEMA, DATASTORE_NOTIFY_STRING_SCHEMA, PRUNE_SCHEDULE_SCHEMA,
};
const_regex!{
pub BACKUP_TYPE_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), r")$");
pub BACKUP_ID_REGEX = concat!(r"^", BACKUP_ID_RE!(), r"$");
pub BACKUP_DATE_REGEX = concat!(r"^", BACKUP_TIME_RE!() ,r"$");
pub GROUP_PATH_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), r")$");
pub BACKUP_FILE_REGEX = r"^.*\.([fd]idx|blob)$";
pub SNAPSHOT_PATH_REGEX = concat!(r"^", SNAPSHOT_PATH_REGEX_STR!(), r"$");
pub DATASTORE_MAP_REGEX = concat!(r"(?:", PROXMOX_SAFE_ID_REGEX_STR!(), r"=)?", PROXMOX_SAFE_ID_REGEX_STR!());
}
pub const CHUNK_DIGEST_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&SHA256_HEX_REGEX);
pub const DIR_NAME_SCHEMA: Schema = StringSchema::new("Directory name")
.min_length(1)
.max_length(4096)
.schema();
pub const BACKUP_ARCHIVE_NAME_SCHEMA: Schema = StringSchema::new("Backup archive name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.schema();
pub const BACKUP_ID_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&BACKUP_ID_REGEX);
pub const BACKUP_ID_SCHEMA: Schema = StringSchema::new("Backup ID.")
.format(&BACKUP_ID_FORMAT)
.schema();
pub const BACKUP_TYPE_SCHEMA: Schema = StringSchema::new("Backup type.")
.format(&ApiStringFormat::Enum(&[
EnumEntry::new("vm", "Virtual Machine Backup"),
EnumEntry::new("ct", "Container Backup"),
EnumEntry::new("host", "Host Backup"),
]))
.schema();
pub const BACKUP_TIME_SCHEMA: Schema = IntegerSchema::new("Backup time (Unix epoch).")
.minimum(1_547_797_308)
.schema();
pub const DATASTORE_SCHEMA: Schema = StringSchema::new("Datastore name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const CHUNK_DIGEST_SCHEMA: Schema = StringSchema::new("Chunk digest (SHA256).")
.format(&CHUNK_DIGEST_FORMAT)
.schema();
pub const DATASTORE_MAP_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&DATASTORE_MAP_REGEX);
pub const DATASTORE_MAP_SCHEMA: Schema = StringSchema::new("Datastore mapping.")
.format(&DATASTORE_MAP_FORMAT)
.min_length(3)
.max_length(65)
.type_text("(<source>=)?<target>")
.schema();
pub const DATASTORE_MAP_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Datastore mapping list.", &DATASTORE_MAP_SCHEMA)
.schema();
pub const DATASTORE_MAP_LIST_SCHEMA: Schema = StringSchema::new(
"A list of Datastore mappings (or single datastore), comma separated. \
For example 'a=b,e' maps the source datastore 'a' to target 'b' and \
all other sources to the default 'e'. If no default is given, only the \
specified sources are mapped.")
.format(&ApiStringFormat::PropertyString(&DATASTORE_MAP_ARRAY_SCHEMA))
.schema();
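// Sketch of how such a mapping list can be interpreted (illustrative only;
// the schema above merely validates the format):
fn _parse_datastore_map(s: &str) -> (std::collections::HashMap<&str, &str>, Option<&str>) {
    let mut map = std::collections::HashMap::new();
    let mut default = None;
    for part in s.split(',') {
        match part.split_once('=') {
            Some((source, target)) => { map.insert(source, target); }
            None => default = Some(part), // a bare name is the default target
        }
    }
    (map, default)
}
// _parse_datastore_map("a=b,e") yields {"a" -> "b"} and default Some("e").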
pub const PRUNE_SCHEMA_KEEP_DAILY: Schema = IntegerSchema::new("Number of daily backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_HOURLY: Schema =
IntegerSchema::new("Number of hourly backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_LAST: Schema = IntegerSchema::new("Number of backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_MONTHLY: Schema =
IntegerSchema::new("Number of monthly backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_WEEKLY: Schema =
IntegerSchema::new("Number of weekly backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_YEARLY: Schema =
IntegerSchema::new("Number of yearly backups to keep.")
.minimum(1)
.schema();
#[api(
properties: {
"keep-last": {
schema: PRUNE_SCHEMA_KEEP_LAST,
optional: true,
},
"keep-hourly": {
schema: PRUNE_SCHEMA_KEEP_HOURLY,
optional: true,
},
"keep-daily": {
schema: PRUNE_SCHEMA_KEEP_DAILY,
optional: true,
},
"keep-weekly": {
schema: PRUNE_SCHEMA_KEEP_WEEKLY,
optional: true,
},
"keep-monthly": {
schema: PRUNE_SCHEMA_KEEP_MONTHLY,
optional: true,
},
"keep-yearly": {
schema: PRUNE_SCHEMA_KEEP_YEARLY,
optional: true,
},
}
)]
#[derive(Serialize, Deserialize, Default)]
#[serde(rename_all = "kebab-case")]
/// Common pruning options
pub struct PruneOptions {
#[serde(skip_serializing_if="Option::is_none")]
pub keep_last: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_hourly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_daily: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_weekly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_monthly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_yearly: Option<u64>,
}
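// Illustrative sketch of the `keep-last` semantics only (the actual prune
// logic combines all keep-* options and is not reproduced here):
fn _apply_keep_last(snapshot_times: &mut Vec<i64>, keep_last: u64) {
    // sort newest first, then keep only the `keep_last` most recent snapshots
    snapshot_times.sort_unstable_by(|a, b| b.cmp(a));
    snapshot_times.truncate(keep_last as usize);
}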
#[api(
properties: {
name: {
schema: DATASTORE_SCHEMA,
},
path: {
schema: DIR_NAME_SCHEMA,
},
"notify-user": {
optional: true,
type: Userid,
},
"notify": {
optional: true,
schema: DATASTORE_NOTIFY_STRING_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
"gc-schedule": {
optional: true,
schema: GC_SCHEDULE_SCHEMA,
},
"prune-schedule": {
optional: true,
schema: PRUNE_SCHEDULE_SCHEMA,
},
"keep-last": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_LAST,
},
"keep-hourly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_HOURLY,
},
"keep-daily": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_DAILY,
},
"keep-weekly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_WEEKLY,
},
"keep-monthly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_MONTHLY,
},
"keep-yearly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_YEARLY,
},
"verify-new": {
description: "If enabled, all new backups will be verified right after completion.",
optional: true,
type: bool,
},
}
)]
#[derive(Serialize,Deserialize,Updater)]
#[serde(rename_all="kebab-case")]
/// Datastore configuration properties.
pub struct DataStoreConfig {
#[updater(skip)]
pub name: String,
#[updater(skip)]
pub path: String,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub gc_schedule: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub prune_schedule: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_last: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_hourly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_daily: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_weekly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_monthly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_yearly: Option<u64>,
/// If enabled, all new backups will be verified right after completion.
#[serde(skip_serializing_if="Option::is_none")]
pub verify_new: Option<bool>,
/// Send job email notification to this user
#[serde(skip_serializing_if="Option::is_none")]
pub notify_user: Option<Userid>,
/// Send notification only for job errors
#[serde(skip_serializing_if="Option::is_none")]
pub notify: Option<String>,
}
#[api(
properties: {
store: {
schema: DATASTORE_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Basic information about a datastore.
pub struct DataStoreListItem {
pub store: String,
pub comment: Option<String>,
}
#[api(
properties: {
"filename": {
schema: BACKUP_ARCHIVE_NAME_SCHEMA,
},
"crypt-mode": {
type: CryptMode,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Basic information about archive files inside a backup snapshot.
pub struct BackupContent {
pub filename: String,
/// Info if file is encrypted, signed, or neither.
#[serde(skip_serializing_if = "Option::is_none")]
pub crypt_mode: Option<CryptMode>,
/// Archive size (from backup manifest).
#[serde(skip_serializing_if = "Option::is_none")]
pub size: Option<u64>,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Result of a verify operation.
pub enum VerifyState {
/// Verification was successful
Ok,
/// Verification reported one or more errors
Failed,
}
#[api(
properties: {
upid: {
type: UPID,
},
state: {
type: VerifyState,
},
},
)]
#[derive(Serialize, Deserialize)]
/// Task properties.
pub struct SnapshotVerifyState {
/// UPID of the verify task
pub upid: UPID,
/// State of the verification run.
pub state: VerifyState,
}
#[api(
properties: {
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
},
"backup-time": {
schema: BACKUP_TIME_SCHEMA,
},
comment: {
schema: SINGLE_LINE_COMMENT_SCHEMA,
optional: true,
},
verification: {
type: SnapshotVerifyState,
optional: true,
},
fingerprint: {
type: String,
optional: true,
},
files: {
items: {
schema: BACKUP_ARCHIVE_NAME_SCHEMA
},
},
owner: {
type: Authid,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Basic information about backup snapshot.
pub struct SnapshotListItem {
pub backup_type: String, // enum
pub backup_id: String,
pub backup_time: i64,
/// The first line from manifest "notes"
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
/// The result of the last run verify task
#[serde(skip_serializing_if = "Option::is_none")]
pub verification: Option<SnapshotVerifyState>,
/// Fingerprint of encryption key
#[serde(skip_serializing_if = "Option::is_none")]
pub fingerprint: Option<Fingerprint>,
/// List of contained archive files.
pub files: Vec<BackupContent>,
/// Overall snapshot size (sum of all archive sizes).
#[serde(skip_serializing_if = "Option::is_none")]
pub size: Option<u64>,
/// The owner of the snapshots group
#[serde(skip_serializing_if = "Option::is_none")]
pub owner: Option<Authid>,
}
#[api(
properties: {
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
},
"last-backup": {
schema: BACKUP_TIME_SCHEMA,
},
"backup-count": {
type: Integer,
},
files: {
items: {
schema: BACKUP_ARCHIVE_NAME_SCHEMA
},
},
owner: {
type: Authid,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Basic information about a backup group.
pub struct GroupListItem {
pub backup_type: String, // enum
pub backup_id: String,
pub last_backup: i64,
/// Number of contained snapshots
pub backup_count: u64,
/// List of contained archive files.
pub files: Vec<String>,
/// The owner of group
#[serde(skip_serializing_if = "Option::is_none")]
pub owner: Option<Authid>,
/// The first line from group "notes"
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
}
#[api(
properties: {
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
},
"backup-time": {
schema: BACKUP_TIME_SCHEMA,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Prune result.
pub struct PruneListItem {
pub backup_type: String, // enum
pub backup_id: String,
pub backup_time: i64,
/// Keep snapshot
pub keep: bool,
}
#[api(
properties: {
ct: {
type: TypeCounts,
optional: true,
},
host: {
type: TypeCounts,
optional: true,
},
vm: {
type: TypeCounts,
optional: true,
},
other: {
type: TypeCounts,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize, Default)]
/// Counts of groups/snapshots per BackupType.
pub struct Counts {
/// The counts for CT backups
pub ct: Option<TypeCounts>,
/// The counts for Host backups
pub host: Option<TypeCounts>,
/// The counts for VM backups
pub vm: Option<TypeCounts>,
/// The counts for other backup types
pub other: Option<TypeCounts>,
}
#[api()]
#[derive(Serialize, Deserialize, Default)]
/// Backup Type group/snapshot counts.
pub struct TypeCounts {
/// The number of groups of the type.
pub groups: u64,
/// The number of snapshots of the type.
pub snapshots: u64,
}
#[api(
properties: {
"upid": {
optional: true,
type: UPID,
},
},
)]
#[derive(Clone, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Garbage collection status.
pub struct GarbageCollectionStatus {
pub upid: Option<String>,
/// Number of processed index files.
pub index_file_count: usize,
/// Sum of bytes referred by index files.
pub index_data_bytes: u64,
/// Bytes used on disk.
pub disk_bytes: u64,
/// Chunks used on disk.
pub disk_chunks: usize,
/// Sum of removed bytes.
pub removed_bytes: u64,
/// Number of removed chunks.
pub removed_chunks: usize,
/// Sum of pending bytes (pending removal - kept for safety).
pub pending_bytes: u64,
/// Number of pending chunks (pending removal - kept for safety).
pub pending_chunks: usize,
/// Number of chunks marked as .bad by verify that have been removed by GC.
pub removed_bad: usize,
/// Number of chunks still marked as .bad after garbage collection.
pub still_bad: usize,
}
impl Default for GarbageCollectionStatus {
fn default() -> Self {
GarbageCollectionStatus {
upid: None,
index_file_count: 0,
index_data_bytes: 0,
disk_bytes: 0,
disk_chunks: 0,
removed_bytes: 0,
removed_chunks: 0,
pending_bytes: 0,
pending_chunks: 0,
removed_bad: 0,
still_bad: 0,
}
}
}
#[api(
properties: {
"gc-status": {
type: GarbageCollectionStatus,
optional: true,
},
counts: {
type: Counts,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// Overall Datastore status and useful information.
pub struct DataStoreStatus {
/// Total space (bytes).
pub total: u64,
/// Used space (bytes).
pub used: u64,
/// Available space (bytes).
pub avail: u64,
/// Status of last GC
#[serde(skip_serializing_if="Option::is_none")]
pub gc_status: Option<GarbageCollectionStatus>,
/// Group/Snapshot counts
#[serde(skip_serializing_if="Option::is_none")]
pub counts: Option<Counts>,
}
pub const ADMIN_DATASTORE_LIST_SNAPSHOTS_RETURN_TYPE: ReturnType = ReturnType {
optional: false,
schema: &ArraySchema::new(
"Returns the list of snapshots.",
&SnapshotListItem::API_SCHEMA,
).schema(),
};
pub const ADMIN_DATASTORE_LIST_SNAPSHOT_FILES_RETURN_TYPE: ReturnType = ReturnType {
optional: false,
schema: &ArraySchema::new(
"Returns the list of archive files inside a backup snapshot.",
&BackupContent::API_SCHEMA,
).schema(),
};
pub const ADMIN_DATASTORE_LIST_GROUPS_RETURN_TYPE: ReturnType = ReturnType {
optional: false,
schema: &ArraySchema::new(
"Returns the list of backup groups.",
&GroupListItem::API_SCHEMA,
).schema(),
};
pub const ADMIN_DATASTORE_PRUNE_RETURN_TYPE: ReturnType = ReturnType {
optional: false,
schema: &ArraySchema::new(
"Returns the list of snapshots and a flag indicating if they are kept or removed.",
&PruneListItem::API_SCHEMA,
).schema(),
};


@ -1,392 +0,0 @@
use serde::{Deserialize, Serialize};
use proxmox::const_regex;
use proxmox::api::{api, schema::*};
use crate::{
Userid, Authid, REMOTE_ID_SCHEMA, DRIVE_NAME_SCHEMA, MEDIA_POOL_NAME_SCHEMA,
SINGLE_LINE_COMMENT_SCHEMA, PROXMOX_SAFE_ID_FORMAT, DATASTORE_SCHEMA,
};
const_regex!{
/// Regex for verification jobs 'DATASTORE:ACTUAL_JOB_ID'
pub VERIFICATION_JOB_WORKER_ID_REGEX = concat!(r"^(", PROXMOX_SAFE_ID_REGEX_STR!(), r"):");
/// Regex for sync jobs 'REMOTE:REMOTE_DATASTORE:LOCAL_DATASTORE:ACTUAL_JOB_ID'
pub SYNC_JOB_WORKER_ID_REGEX = concat!(r"^(", PROXMOX_SAFE_ID_REGEX_STR!(), r"):(", PROXMOX_SAFE_ID_REGEX_STR!(), r"):(", PROXMOX_SAFE_ID_REGEX_STR!(), r"):");
}
pub const JOB_ID_SCHEMA: Schema = StringSchema::new("Job ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const SYNC_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run sync job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(proxmox_systemd::time::verify_calendar_event))
.type_text("<calendar-event>")
.schema();
pub const GC_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run garbage collection job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(proxmox_systemd::time::verify_calendar_event))
.type_text("<calendar-event>")
.schema();
pub const PRUNE_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run prune job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(proxmox_systemd::time::verify_calendar_event))
.type_text("<calendar-event>")
.schema();
pub const VERIFICATION_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run verify job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(proxmox_systemd::time::verify_calendar_event))
.type_text("<calendar-event>")
.schema();
pub const REMOVE_VANISHED_BACKUPS_SCHEMA: Schema = BooleanSchema::new(
"Delete vanished backups. This removes the local copy if the remote backup was deleted.")
.default(true)
.schema();
#[api(
properties: {
"next-run": {
description: "Estimated time of the next run (UNIX epoch).",
optional: true,
type: Integer,
},
"last-run-state": {
description: "Result of the last run.",
optional: true,
type: String,
},
"last-run-upid": {
description: "Task UPID of the last run.",
optional: true,
type: String,
},
"last-run-endtime": {
description: "Endtime of the last run.",
optional: true,
type: Integer,
},
}
)]
#[derive(Serialize,Deserialize,Default)]
#[serde(rename_all="kebab-case")]
/// Job Scheduling Status
pub struct JobScheduleStatus {
#[serde(skip_serializing_if="Option::is_none")]
pub next_run: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
pub last_run_state: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub last_run_upid: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub last_run_endtime: Option<i64>,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// When to send notifications
pub enum Notify {
/// Never send notification
Never,
/// Send notifications for failed and successful jobs
Always,
/// Send notifications for failed jobs only
Error,
}
#[api(
properties: {
gc: {
type: Notify,
optional: true,
},
verify: {
type: Notify,
optional: true,
},
sync: {
type: Notify,
optional: true,
},
},
)]
#[derive(Debug, Serialize, Deserialize)]
/// Datastore notify settings
pub struct DatastoreNotify {
/// Garbage collection settings
pub gc: Option<Notify>,
/// Verify job setting
pub verify: Option<Notify>,
/// Sync job setting
pub sync: Option<Notify>,
}
pub const DATASTORE_NOTIFY_STRING_SCHEMA: Schema = StringSchema::new(
"Datastore notification setting")
.format(&ApiStringFormat::PropertyString(&DatastoreNotify::API_SCHEMA))
.schema();
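// Example of a property string accepted by the schema above (lowercase
// values from the Notify enum; illustrative): "gc=always,verify=error,sync=never"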
pub const IGNORE_VERIFIED_BACKUPS_SCHEMA: Schema = BooleanSchema::new(
"Do not verify backups that are already verified if their verification is not outdated.")
.default(true)
.schema();
pub const VERIFICATION_OUTDATED_AFTER_SCHEMA: Schema = IntegerSchema::new(
"Days after which a verification becomes outdated")
.minimum(1)
.schema();
#[api(
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
store: {
schema: DATASTORE_SCHEMA,
},
"ignore-verified": {
optional: true,
schema: IGNORE_VERIFIED_BACKUPS_SCHEMA,
},
"outdated-after": {
optional: true,
schema: VERIFICATION_OUTDATED_AFTER_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
schedule: {
optional: true,
schema: VERIFICATION_SCHEDULE_SCHEMA,
},
}
)]
#[derive(Serialize,Deserialize,Updater)]
#[serde(rename_all="kebab-case")]
/// Verification Job
pub struct VerificationJobConfig {
/// unique ID to address this job
#[updater(skip)]
pub id: String,
/// the datastore ID this verification job affects
pub store: String,
#[serde(skip_serializing_if="Option::is_none")]
/// If not set to false, check the age of the last snapshot verification to filter
/// out recent ones, depending on the 'outdated_after' configuration.
pub ignore_verified: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
/// Reverify snapshots after X days, never if 0. Ignored if 'ignore_verified' is false.
pub outdated_after: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// when to schedule this job in calendar event notation
pub schedule: Option<String>,
}
#[api(
properties: {
config: {
type: VerificationJobConfig,
},
status: {
type: JobScheduleStatus,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all="kebab-case")]
/// Status of Verification Job
pub struct VerificationJobStatus {
#[serde(flatten)]
pub config: VerificationJobConfig,
#[serde(flatten)]
pub status: JobScheduleStatus,
}
#[api(
properties: {
store: {
schema: DATASTORE_SCHEMA,
},
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
drive: {
schema: DRIVE_NAME_SCHEMA,
},
"eject-media": {
description: "Eject media upon job completion.",
type: bool,
optional: true,
},
"export-media-set": {
description: "Export media set upon job completion.",
type: bool,
optional: true,
},
"latest-only": {
description: "Backup latest snapshots only.",
type: bool,
optional: true,
},
"notify-user": {
optional: true,
type: Userid,
},
}
)]
#[derive(Serialize,Deserialize,Clone,Updater)]
#[serde(rename_all="kebab-case")]
/// Tape Backup Job Setup
pub struct TapeBackupJobSetup {
pub store: String,
pub pool: String,
pub drive: String,
#[serde(skip_serializing_if="Option::is_none")]
pub eject_media: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub export_media_set: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub latest_only: Option<bool>,
/// Send job email notification to this user
#[serde(skip_serializing_if="Option::is_none")]
pub notify_user: Option<Userid>,
}
#[api(
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
setup: {
type: TapeBackupJobSetup,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
schedule: {
optional: true,
schema: SYNC_SCHEDULE_SCHEMA,
},
}
)]
#[derive(Serialize,Deserialize,Clone,Updater)]
#[serde(rename_all="kebab-case")]
/// Tape Backup Job
pub struct TapeBackupJobConfig {
#[updater(skip)]
pub id: String,
#[serde(flatten)]
pub setup: TapeBackupJobSetup,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub schedule: Option<String>,
}
#[api(
properties: {
config: {
type: TapeBackupJobConfig,
},
status: {
type: JobScheduleStatus,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all="kebab-case")]
/// Status of Tape Backup Job
pub struct TapeBackupJobStatus {
#[serde(flatten)]
pub config: TapeBackupJobConfig,
#[serde(flatten)]
pub status: JobScheduleStatus,
/// Next tape used (best guess)
#[serde(skip_serializing_if="Option::is_none")]
pub next_media_label: Option<String>,
}
#[api(
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
store: {
schema: DATASTORE_SCHEMA,
},
"owner": {
type: Authid,
optional: true,
},
remote: {
schema: REMOTE_ID_SCHEMA,
},
"remote-store": {
schema: DATASTORE_SCHEMA,
},
"remove-vanished": {
schema: REMOVE_VANISHED_BACKUPS_SCHEMA,
optional: true,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
schedule: {
optional: true,
schema: SYNC_SCHEDULE_SCHEMA,
},
}
)]
#[derive(Serialize,Deserialize,Clone,Updater)]
#[serde(rename_all="kebab-case")]
/// Sync Job
pub struct SyncJobConfig {
#[updater(skip)]
pub id: String,
pub store: String,
#[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<Authid>,
pub remote: String,
pub remote_store: String,
#[serde(skip_serializing_if="Option::is_none")]
pub remove_vanished: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub schedule: Option<String>,
}
#[api(
properties: {
config: {
type: SyncJobConfig,
},
status: {
type: JobScheduleStatus,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all="kebab-case")]
/// Status of Sync Job
pub struct SyncJobStatus {
#[serde(flatten)]
pub config: SyncJobConfig,
#[serde(flatten)]
pub status: JobScheduleStatus,
}


@ -1,56 +0,0 @@
use serde::{Deserialize, Serialize};
use proxmox::api::api;
use crate::CERT_FINGERPRINT_SHA256_SCHEMA;
#[api(default: "scrypt")]
#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
#[serde(rename_all = "lowercase")]
/// Key derivation function for password protected encryption keys.
pub enum Kdf {
/// Do not encrypt the key.
None,
/// Encrypt the key with a password using SCrypt.
Scrypt,
/// Encrypt the key with a password using PBKDF2.
PBKDF2,
}
impl Default for Kdf {
#[inline]
fn default() -> Self {
Kdf::Scrypt
}
}
#[api(
properties: {
kdf: {
type: Kdf,
},
fingerprint: {
schema: CERT_FINGERPRINT_SHA256_SCHEMA,
optional: true,
},
},
)]
#[derive(Deserialize, Serialize)]
/// Encryption Key Information
pub struct KeyInfo {
/// Path to key (if stored in a file)
#[serde(skip_serializing_if="Option::is_none")]
pub path: Option<String>,
pub kdf: Kdf,
/// Key creation time
pub created: i64,
/// Key modification time
pub modified: i64,
/// Key fingerprint
#[serde(skip_serializing_if="Option::is_none")]
pub fingerprint: Option<String>,
/// Password hint
#[serde(skip_serializing_if="Option::is_none")]
pub hint: Option<String>,
}


@ -1,399 +0,0 @@
//! Basic API types used by most of the PBS code.
use serde::{Deserialize, Serialize};
use anyhow::bail;
use proxmox::api::api;
use proxmox::api::schema::{ApiStringFormat, ArraySchema, Schema, StringSchema};
use proxmox::const_regex;
use proxmox::{IPRE, IPRE_BRACKET, IPV4OCTET, IPV4RE, IPV6H16, IPV6LS32, IPV6RE};
#[rustfmt::skip]
#[macro_export]
macro_rules! PROXMOX_SAFE_ID_REGEX_STR { () => { r"(?:[A-Za-z0-9_][A-Za-z0-9._\-]*)" }; }
#[rustfmt::skip]
#[macro_export]
macro_rules! BACKUP_ID_RE { () => (r"[A-Za-z0-9_][A-Za-z0-9._\-]*") }
#[rustfmt::skip]
#[macro_export]
macro_rules! BACKUP_TYPE_RE { () => (r"(?:host|vm|ct)") }
#[rustfmt::skip]
#[macro_export]
macro_rules! BACKUP_TIME_RE { () => (r"[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z") }
#[rustfmt::skip]
#[macro_export]
macro_rules! SNAPSHOT_PATH_REGEX_STR {
() => (
concat!(r"(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), ")/(", BACKUP_TIME_RE!(), r")")
);
}
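// e.g. a matching snapshot path, per the macros above:
// "vm/100/2020-11-05T10:00:00Z" (<backup-type>/<backup-id>/<backup-time>)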
mod acl;
pub use acl::*;
mod datastore;
pub use datastore::*;
mod jobs;
pub use jobs::*;
mod key_derivation;
pub use key_derivation::{Kdf, KeyInfo};
mod network;
pub use network::*;
#[macro_use]
mod userid;
pub use userid::Authid;
pub use userid::Userid;
pub use userid::{Realm, RealmRef};
pub use userid::{Tokenname, TokennameRef};
pub use userid::{Username, UsernameRef};
pub use userid::{PROXMOX_GROUP_ID_SCHEMA, PROXMOX_TOKEN_ID_SCHEMA, PROXMOX_TOKEN_NAME_SCHEMA};
#[macro_use]
mod user;
pub use user::*;
pub mod upid;
pub use upid::*;
mod crypto;
pub use crypto::{CryptMode, Fingerprint};
pub mod file_restore;
mod remote;
pub use remote::*;
mod tape;
pub use tape::*;
mod zfs;
pub use zfs::*;
#[rustfmt::skip]
#[macro_use]
mod local_macros {
macro_rules! DNS_LABEL { () => (r"(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)") }
macro_rules! DNS_NAME { () => (concat!(r"(?:(?:", DNS_LABEL!() , r"\.)*", DNS_LABEL!(), ")")) }
macro_rules! CIDR_V4_REGEX_STR { () => (concat!(r"(?:", IPV4RE!(), r"/\d{1,2})$")) }
macro_rules! CIDR_V6_REGEX_STR { () => (concat!(r"(?:", IPV6RE!(), r"/\d{1,3})$")) }
macro_rules! DNS_ALIAS_LABEL { () => (r"(?:[a-zA-Z0-9_](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)") }
macro_rules! DNS_ALIAS_NAME {
() => (concat!(r"(?:(?:", DNS_ALIAS_LABEL!() , r"\.)*", DNS_ALIAS_LABEL!(), ")"))
}
}
const_regex! {
pub IP_V4_REGEX = concat!(r"^", IPV4RE!(), r"$");
pub IP_V6_REGEX = concat!(r"^", IPV6RE!(), r"$");
pub IP_REGEX = concat!(r"^", IPRE!(), r"$");
pub CIDR_V4_REGEX = concat!(r"^", CIDR_V4_REGEX_STR!(), r"$");
pub CIDR_V6_REGEX = concat!(r"^", CIDR_V6_REGEX_STR!(), r"$");
pub CIDR_REGEX = concat!(r"^(?:", CIDR_V4_REGEX_STR!(), "|", CIDR_V6_REGEX_STR!(), r")$");
pub HOSTNAME_REGEX = r"^(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)$";
pub DNS_NAME_REGEX = concat!(r"^", DNS_NAME!(), r"$");
pub DNS_ALIAS_REGEX = concat!(r"^", DNS_ALIAS_NAME!(), r"$");
pub DNS_NAME_OR_IP_REGEX = concat!(r"^(?:", DNS_NAME!(), "|", IPRE!(), r")$");
pub SHA256_HEX_REGEX = r"^[a-f0-9]{64}$"; // fixme: define in common_regex ?
pub PASSWORD_REGEX = r"^[[:^cntrl:]]*$"; // everything but control characters
pub UUID_REGEX = r"^[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}$";
pub SYSTEMD_DATETIME_REGEX = r"^\d{4}-\d{2}-\d{2}( \d{2}:\d{2}(:\d{2})?)?$"; // fixme: define in common_regex ?
pub FINGERPRINT_SHA256_REGEX = r"^(?:[0-9a-fA-F][0-9a-fA-F])(?::[0-9a-fA-F][0-9a-fA-F]){31}$";
/// Regex for safe identifiers.
///
/// This
/// [article](https://dwheeler.com/essays/fixing-unix-linux-filenames.html)
/// contains further information on why it is reasonable to restrict
/// names this way. This is not only useful for filenames, but for
/// any identifier that command line tools work with.
pub PROXMOX_SAFE_ID_REGEX = concat!(r"^", PROXMOX_SAFE_ID_REGEX_STR!(), r"$");
pub SINGLE_LINE_COMMENT_REGEX = r"^[[:^cntrl:]]*$";
pub BACKUP_REPO_URL_REGEX = concat!(
r"^^(?:(?:(",
USER_ID_REGEX_STR!(), "|", APITOKEN_ID_REGEX_STR!(),
")@)?(",
DNS_NAME!(), "|", IPRE_BRACKET!(),
"):)?(?:([0-9]{1,5}):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$"
);
pub BLOCKDEVICE_NAME_REGEX = r"^(?:(?:h|s|x?v)d[a-z]+|nvme\d+n\d+)$";
pub SUBSCRIPTION_KEY_REGEX = concat!(r"^pbs(?:[cbsp])-[0-9a-f]{10}$");
}
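
As an editor-added illustration (assuming the `const_regex!` statics dereference to `regex::Regex`, as in the proxmox crate), these anchored patterns behave like this:

```rust
// Sample values chosen by the editor, not taken from the original tests.
assert!(PROXMOX_SAFE_ID_REGEX.is_match("backup-store_01"));
assert!(!PROXMOX_SAFE_ID_REGEX.is_match("-leading-dash")); // first char must be [A-Za-z0-9_]
assert!(UUID_REGEX.is_match("123e4567-e89b-12d3-a456-426614174000"));
assert!(!UUID_REGEX.is_match("123E4567-E89B-12D3-A456-426614174000")); // uppercase rejected
```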
pub const IP_V4_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&IP_V4_REGEX);
pub const IP_V6_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&IP_V6_REGEX);
pub const IP_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&IP_REGEX);
pub const CIDR_V4_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&CIDR_V4_REGEX);
pub const CIDR_V6_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&CIDR_V6_REGEX);
pub const CIDR_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&CIDR_REGEX);
pub const PVE_CONFIG_DIGEST_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&SHA256_HEX_REGEX);
pub const PASSWORD_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&PASSWORD_REGEX);
pub const UUID_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&UUID_REGEX);
pub const BLOCKDEVICE_NAME_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&BLOCKDEVICE_NAME_REGEX);
pub const SUBSCRIPTION_KEY_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&SUBSCRIPTION_KEY_REGEX);
pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&SYSTEMD_DATETIME_REGEX);
pub const HOSTNAME_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&HOSTNAME_REGEX);
pub const DNS_ALIAS_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DNS_ALIAS_REGEX);
pub const SEARCH_DOMAIN_SCHEMA: Schema =
StringSchema::new("Search domain for host-name lookup.").schema();
pub const FIRST_DNS_SERVER_SCHEMA: Schema =
StringSchema::new("First name server IP address.")
.format(&IP_FORMAT)
.schema();
pub const SECOND_DNS_SERVER_SCHEMA: Schema =
StringSchema::new("Second name server IP address.")
.format(&IP_FORMAT)
.schema();
pub const THIRD_DNS_SERVER_SCHEMA: Schema =
StringSchema::new("Third name server IP address.")
.format(&IP_FORMAT)
.schema();
pub const HOSTNAME_SCHEMA: Schema = StringSchema::new("Hostname (as defined in RFC1123).")
.format(&HOSTNAME_FORMAT)
.schema();
pub const DNS_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DNS_NAME_REGEX);
pub const DNS_NAME_OR_IP_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DNS_NAME_OR_IP_REGEX);
pub const DNS_NAME_OR_IP_SCHEMA: Schema = StringSchema::new("DNS name or IP address.")
.format(&DNS_NAME_OR_IP_FORMAT)
.schema();
pub const NODE_SCHEMA: Schema = StringSchema::new("Node name (or 'localhost')")
.format(&ApiStringFormat::VerifyFn(|node| {
if node == "localhost" || node == proxmox::tools::nodename() {
Ok(())
} else {
bail!("no such node '{}'", node);
}
}))
.schema();
pub const TIME_ZONE_SCHEMA: Schema = StringSchema::new(
"Time zone. The file '/usr/share/zoneinfo/zone.tab' contains the list of valid names.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
pub const BLOCKDEVICE_NAME_SCHEMA: Schema = StringSchema::new("Block device name (/sys/block/<name>).")
.format(&BLOCKDEVICE_NAME_FORMAT)
.min_length(3)
.max_length(64)
.schema();
pub const DISK_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Disk name list.", &BLOCKDEVICE_NAME_SCHEMA)
.schema();
pub const DISK_LIST_SCHEMA: Schema = StringSchema::new(
"A list of disk names, comma separated.")
.format(&ApiStringFormat::PropertyString(&DISK_ARRAY_SCHEMA))
.schema();
pub const PASSWORD_SCHEMA: Schema = StringSchema::new("Password.")
.format(&PASSWORD_FORMAT)
.min_length(1)
.max_length(1024)
.schema();
pub const PBS_PASSWORD_SCHEMA: Schema = StringSchema::new("User Password.")
.format(&PASSWORD_FORMAT)
.min_length(5)
.max_length(64)
.schema();
pub const REALM_ID_SCHEMA: Schema = StringSchema::new("Realm name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(2)
.max_length(32)
.schema();
pub const FINGERPRINT_SHA256_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&FINGERPRINT_SHA256_REGEX);
pub const CERT_FINGERPRINT_SHA256_SCHEMA: Schema =
StringSchema::new("X509 certificate fingerprint (sha256).")
.format(&FINGERPRINT_SHA256_FORMAT)
.schema();
pub const PROXMOX_SAFE_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_SAFE_ID_REGEX);
pub const SINGLE_LINE_COMMENT_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SINGLE_LINE_COMMENT_REGEX);
pub const SINGLE_LINE_COMMENT_SCHEMA: Schema = StringSchema::new("Comment (single line).")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.schema();
pub const SUBSCRIPTION_KEY_SCHEMA: Schema = StringSchema::new("Proxmox Backup Server subscription key.")
.format(&SUBSCRIPTION_KEY_FORMAT)
.min_length(15)
.max_length(16)
.schema();
pub const SERVICE_ID_SCHEMA: Schema = StringSchema::new("Service ID.")
.max_length(256)
.schema();
pub const PROXMOX_CONFIG_DIGEST_SCHEMA: Schema = StringSchema::new(
"Prevent changes if current configuration file has different \
SHA256 digest. This can be used to prevent concurrent \
modifications.",
)
.format(&PVE_CONFIG_DIGEST_FORMAT)
.schema();
/// API schema format definition for repository URLs
pub const BACKUP_REPO_URL: ApiStringFormat = ApiStringFormat::Pattern(&BACKUP_REPO_URL_REGEX);
// Complex type definitions
#[api()]
#[derive(Default, Serialize, Deserialize)]
/// Storage space usage information.
pub struct StorageStatus {
/// Total space (bytes).
pub total: u64,
/// Used space (bytes).
pub used: u64,
/// Available space (bytes).
pub avail: u64,
}
pub const PASSWORD_HINT_SCHEMA: Schema = StringSchema::new("Password hint.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(1)
.max_length(64)
.schema();
#[api]
#[derive(Deserialize, Serialize)]
/// RSA public key information
pub struct RsaPubKeyInfo {
/// Path to key (if stored in a file)
#[serde(skip_serializing_if="Option::is_none")]
pub path: Option<String>,
/// RSA exponent
pub exponent: String,
/// Hex-encoded RSA modulus
pub modulus: String,
/// Key (modulus) length in bits
pub length: usize,
}
impl std::convert::TryFrom<openssl::rsa::Rsa<openssl::pkey::Public>> for RsaPubKeyInfo {
type Error = anyhow::Error;
fn try_from(value: openssl::rsa::Rsa<openssl::pkey::Public>) -> Result<Self, Self::Error> {
let modulus = value.n().to_hex_str()?.to_string();
let exponent = value.e().to_dec_str()?.to_string();
let length = value.size() as usize * 8;
Ok(Self {
path: None,
exponent,
modulus,
length,
})
}
}
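
An editor-added sketch of the conversion in use (inside a function returning `Result<(), anyhow::Error>`; the key pair is generated on the fly purely for illustration):

```rust
use std::convert::TryFrom;

// Generate a throwaway key pair and extract its public half.
let private = openssl::rsa::Rsa::generate(2048)?;
let public = openssl::rsa::Rsa::public_key_from_pem(&private.public_key_to_pem()?)?;

let info = RsaPubKeyInfo::try_from(public)?;
assert_eq!(info.length, 2048); // size() returns bytes, hence the `* 8` above
```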
#[api()]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "PascalCase")]
/// Describes a package for which an update is available.
pub struct APTUpdateInfo {
/// Package name
pub package: String,
/// Package title
pub title: String,
/// Package architecture
pub arch: String,
/// Human readable package description
pub description: String,
/// New version to be updated to
pub version: String,
/// Old version currently installed
pub old_version: String,
/// Package origin
pub origin: String,
/// Package priority in human-readable form
pub priority: String,
/// Package section
pub section: String,
/// URL under which the package's changelog can be retrieved
pub change_log_url: String,
/// Custom extra field for additional package information
#[serde(skip_serializing_if="Option::is_none")]
pub extra_info: Option<String>,
}
#[api()]
#[derive(Copy, Clone, Serialize, Deserialize)]
#[serde(rename_all = "UPPERCASE")]
pub enum RRDMode {
/// Maximum
Max,
/// Average
Average,
}
#[api()]
#[repr(u64)]
#[derive(Copy, Clone, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum RRDTimeFrameResolution {
/// 1 min => last 70 minutes
Hour = 60,
/// 30 min => last 35 hours
Day = 60*30,
/// 3 hours => about 8 days
Week = 60*180,
/// 12 hours => last 35 days
Month = 60*720,
/// 1 week => last 490 days
Year = 60*10080,
}
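
The windows in the doc comments follow from one assumption the editor infers: each archive stores 70 data points, so the covered span is `resolution * 70` seconds. A quick editor-added check:

```rust
// Covered time span per archive, assuming 70 stored points (editor's inference).
fn window_secs(res: RRDTimeFrameResolution) -> u64 {
    res as u64 * 70
}

assert_eq!(window_secs(RRDTimeFrameResolution::Hour), 70 * 60);     // last 70 minutes
assert_eq!(window_secs(RRDTimeFrameResolution::Day), 35 * 3600);    // last 35 hours
assert_eq!(window_secs(RRDTimeFrameResolution::Year), 490 * 86400); // last 490 days
```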
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Node Power command type.
pub enum NodePowerCommand {
/// Restart the server
Reboot,
/// Shutdown the server
Shutdown,
}

View File

@ -1,308 +0,0 @@
use serde::{Deserialize, Serialize};
use proxmox::api::{api, schema::*};
use crate::{
PROXMOX_SAFE_ID_REGEX,
IP_V4_FORMAT, IP_V6_FORMAT, IP_FORMAT,
CIDR_V4_FORMAT, CIDR_V6_FORMAT, CIDR_FORMAT,
};
pub const NETWORK_INTERFACE_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_SAFE_ID_REGEX);
pub const IP_V4_SCHEMA: Schema =
StringSchema::new("IPv4 address.")
.format(&IP_V4_FORMAT)
.max_length(15)
.schema();
pub const IP_V6_SCHEMA: Schema =
StringSchema::new("IPv6 address.")
.format(&IP_V6_FORMAT)
.max_length(39)
.schema();
pub const IP_SCHEMA: Schema =
StringSchema::new("IP (IPv4 or IPv6) address.")
.format(&IP_FORMAT)
.max_length(39)
.schema();
pub const CIDR_V4_SCHEMA: Schema =
StringSchema::new("IPv4 address with netmask (CIDR notation).")
.format(&CIDR_V4_FORMAT)
.max_length(18)
.schema();
pub const CIDR_V6_SCHEMA: Schema =
StringSchema::new("IPv6 address with netmask (CIDR notation).")
.format(&CIDR_V6_FORMAT)
.max_length(43)
.schema();
pub const CIDR_SCHEMA: Schema =
StringSchema::new("IP address (IPv4 or IPv6) with netmask (CIDR notation).")
.format(&CIDR_FORMAT)
.max_length(43)
.schema();
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Interface configuration method
pub enum NetworkConfigMethod {
/// Configuration is done manually using other tools
Manual,
/// Define interfaces with statically allocated addresses.
Static,
/// Obtain an address via DHCP
DHCP,
/// Define the loopback interface.
Loopback,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
#[allow(non_camel_case_types)]
#[repr(u8)]
/// Linux Bond Mode
pub enum LinuxBondMode {
/// Round-robin policy
balance_rr = 0,
/// Active-backup policy
active_backup = 1,
/// XOR policy
balance_xor = 2,
/// Broadcast policy
broadcast = 3,
/// IEEE 802.3ad Dynamic link aggregation
#[serde(rename = "802.3ad")]
ieee802_3ad = 4,
/// Adaptive transmit load balancing
balance_tlb = 5,
/// Adaptive load balancing
balance_alb = 6,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
#[allow(non_camel_case_types)]
#[repr(u8)]
/// Bond Transmit Hash Policy for LACP (802.3ad)
pub enum BondXmitHashPolicy {
/// Layer 2
layer2 = 0,
/// Layer 2+3
#[serde(rename = "layer2+3")]
layer2_3 = 1,
/// Layer 3+4
#[serde(rename = "layer3+4")]
layer3_4 = 2,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Network interface type
pub enum NetworkInterfaceType {
/// Loopback
Loopback,
/// Physical Ethernet device
Eth,
/// Linux Bridge
Bridge,
/// Linux Bond
Bond,
/// Linux VLAN (eth.10)
Vlan,
/// Interface Alias (eth:1)
Alias,
/// Unknown interface type
Unknown,
}
pub const NETWORK_INTERFACE_NAME_SCHEMA: Schema = StringSchema::new("Network interface name.")
.format(&NETWORK_INTERFACE_FORMAT)
.min_length(1)
.max_length(libc::IFNAMSIZ-1)
.schema();
pub const NETWORK_INTERFACE_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Network interface list.", &NETWORK_INTERFACE_NAME_SCHEMA)
.schema();
pub const NETWORK_INTERFACE_LIST_SCHEMA: Schema = StringSchema::new(
"A list of network devices, comma separated.")
.format(&ApiStringFormat::PropertyString(&NETWORK_INTERFACE_ARRAY_SCHEMA))
.schema();
#[api(
properties: {
name: {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
},
"type": {
type: NetworkInterfaceType,
},
method: {
type: NetworkConfigMethod,
optional: true,
},
method6: {
type: NetworkConfigMethod,
optional: true,
},
cidr: {
schema: CIDR_V4_SCHEMA,
optional: true,
},
cidr6: {
schema: CIDR_V6_SCHEMA,
optional: true,
},
gateway: {
schema: IP_V4_SCHEMA,
optional: true,
},
gateway6: {
schema: IP_V6_SCHEMA,
optional: true,
},
options: {
description: "Option list (inet)",
type: Array,
items: {
description: "Optional attribute line.",
type: String,
},
},
options6: {
description: "Option list (inet6)",
type: Array,
items: {
description: "Optional attribute line.",
type: String,
},
},
comments: {
description: "Comments (inet, may span multiple lines)",
type: String,
optional: true,
},
comments6: {
description: "Comments (inet6, may span multiple lines)",
type: String,
optional: true,
},
bridge_ports: {
schema: NETWORK_INTERFACE_ARRAY_SCHEMA,
optional: true,
},
slaves: {
schema: NETWORK_INTERFACE_ARRAY_SCHEMA,
optional: true,
},
bond_mode: {
type: LinuxBondMode,
optional: true,
},
"bond-primary": {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
optional: true,
},
bond_xmit_hash_policy: {
type: BondXmitHashPolicy,
optional: true,
},
}
)]
#[derive(Debug, Serialize, Deserialize)]
/// Network Interface configuration
pub struct Interface {
/// Autostart interface
#[serde(rename = "autostart")]
pub autostart: bool,
/// Interface is active (UP)
pub active: bool,
/// Interface name
pub name: String,
/// Interface type
#[serde(rename = "type")]
pub interface_type: NetworkInterfaceType,
#[serde(skip_serializing_if="Option::is_none")]
pub method: Option<NetworkConfigMethod>,
#[serde(skip_serializing_if="Option::is_none")]
pub method6: Option<NetworkConfigMethod>,
#[serde(skip_serializing_if="Option::is_none")]
/// IPv4 address with netmask
pub cidr: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// IPv4 gateway
pub gateway: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// IPv6 address with netmask
pub cidr6: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// IPv6 gateway
pub gateway6: Option<String>,
#[serde(skip_serializing_if="Vec::is_empty")]
pub options: Vec<String>,
#[serde(skip_serializing_if="Vec::is_empty")]
pub options6: Vec<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub comments: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub comments6: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// Maximum Transmission Unit
pub mtu: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub bridge_ports: Option<Vec<String>>,
/// Enable bridge vlan support.
#[serde(skip_serializing_if="Option::is_none")]
pub bridge_vlan_aware: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub slaves: Option<Vec<String>>,
#[serde(skip_serializing_if="Option::is_none")]
pub bond_mode: Option<LinuxBondMode>,
#[serde(skip_serializing_if="Option::is_none")]
#[serde(rename = "bond-primary")]
pub bond_primary: Option<String>,
pub bond_xmit_hash_policy: Option<BondXmitHashPolicy>,
}
impl Interface {
pub fn new(name: String) -> Self {
Self {
name,
interface_type: NetworkInterfaceType::Unknown,
autostart: false,
active: false,
method: None,
method6: None,
cidr: None,
gateway: None,
cidr6: None,
gateway6: None,
options: Vec::new(),
options6: Vec::new(),
comments: None,
comments6: None,
mtu: None,
bridge_ports: None,
bridge_vlan_aware: None,
slaves: None,
bond_mode: None,
bond_primary: None,
bond_xmit_hash_policy: None,
}
}
}
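
An editor-added sketch of building a bridge entry with `Interface::new` (all values hypothetical):

```rust
// Start from the Unknown/inactive defaults and fill in what is known.
let mut iface = Interface::new("vmbr0".to_string());
iface.interface_type = NetworkInterfaceType::Bridge;
iface.autostart = true;
iface.method = Some(NetworkConfigMethod::Static);
iface.cidr = Some("192.0.2.10/24".to_string());
iface.gateway = Some("192.0.2.1".to_string());
iface.bridge_ports = Some(vec!["eth0".to_string()]);
```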

View File

@ -1,86 +0,0 @@
use serde::{Deserialize, Serialize};
use super::*;
use proxmox::api::{api, schema::*};
pub const REMOTE_PASSWORD_SCHEMA: Schema = StringSchema::new("Password or auth token for remote host.")
.format(&PASSWORD_FORMAT)
.min_length(1)
.max_length(1024)
.schema();
pub const REMOTE_PASSWORD_BASE64_SCHEMA: Schema = StringSchema::new("Password or auth token for remote host (stored as base64 string).")
.format(&PASSWORD_FORMAT)
.min_length(1)
.max_length(1024)
.schema();
pub const REMOTE_ID_SCHEMA: Schema = StringSchema::new("Remote ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
#[api(
properties: {
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
host: {
schema: DNS_NAME_OR_IP_SCHEMA,
},
port: {
optional: true,
description: "The (optional) port",
type: u16,
},
"auth-id": {
type: Authid,
},
fingerprint: {
optional: true,
schema: CERT_FINGERPRINT_SHA256_SCHEMA,
},
},
)]
#[derive(Serialize,Deserialize,Updater)]
#[serde(rename_all = "kebab-case")]
/// Remote configuration properties.
pub struct RemoteConfig {
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
pub host: String,
#[serde(skip_serializing_if="Option::is_none")]
pub port: Option<u16>,
pub auth_id: Authid,
#[serde(skip_serializing_if="Option::is_none")]
pub fingerprint: Option<String>,
}
#[api(
properties: {
name: {
schema: REMOTE_ID_SCHEMA,
},
config: {
type: RemoteConfig,
},
password: {
schema: REMOTE_PASSWORD_BASE64_SCHEMA,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Remote properties.
pub struct Remote {
pub name: String,
// Note: The stored password is base64 encoded
#[serde(skip_serializing_if="String::is_empty")]
#[serde(with = "proxmox::tools::serde::string_as_base64")]
pub password: String,
#[serde(flatten)]
pub config: RemoteConfig,
}

View File

@ -1,94 +0,0 @@
//! Types for tape backup API
mod device;
pub use device::*;
mod changer;
pub use changer::*;
mod drive;
pub use drive::*;
mod media_pool;
pub use media_pool::*;
mod media_status;
pub use media_status::*;
mod media_location;
pub use media_location::*;
mod media;
pub use media::*;
use ::serde::{Deserialize, Serialize};
use proxmox::api::api;
use proxmox::api::schema::{Schema, StringSchema, ApiStringFormat};
use proxmox::tools::Uuid;
use proxmox::const_regex;
use crate::{
FINGERPRINT_SHA256_FORMAT, BACKUP_ID_SCHEMA, BACKUP_TYPE_SCHEMA,
};
const_regex!{
pub TAPE_RESTORE_SNAPSHOT_REGEX = concat!(r"^", PROXMOX_SAFE_ID_REGEX_STR!(), r":", SNAPSHOT_PATH_REGEX_STR!(), r"$");
}
pub const TAPE_RESTORE_SNAPSHOT_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&TAPE_RESTORE_SNAPSHOT_REGEX);
pub const TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA: Schema = StringSchema::new(
"Tape encryption key fingerprint (sha256)."
)
.format(&FINGERPRINT_SHA256_FORMAT)
.schema();
pub const TAPE_RESTORE_SNAPSHOT_SCHEMA: Schema = StringSchema::new(
"A snapshot in the format: 'store:type/id/time")
.format(&TAPE_RESTORE_SNAPSHOT_FORMAT)
.type_text("store:type/id/time")
.schema();
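
For reference, an editor-chosen example of a string this schema accepts, combining a datastore ID with a snapshot path:

```text
store1:host/elsa/2021-01-01T00:00:00Z
```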
#[api(
properties: {
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
optional: true,
},
"label-text": {
schema: MEDIA_LABEL_SCHEMA,
optional: true,
},
"media": {
schema: MEDIA_UUID_SCHEMA,
optional: true,
},
"media-set": {
schema: MEDIA_SET_UUID_SCHEMA,
optional: true,
},
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
optional: true,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
optional: true,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all="kebab-case")]
/// Content list filter parameters
pub struct MediaContentListFilter {
pub pool: Option<String>,
pub label_text: Option<String>,
pub media: Option<Uuid>,
pub media_set: Option<Uuid>,
pub backup_type: Option<String>,
pub backup_id: Option<String>,
}

View File

@ -1,203 +0,0 @@
use std::sync::atomic::{AtomicUsize, Ordering};
use anyhow::{bail, Error};
use serde::{Deserialize, Serialize};
use proxmox::api::api;
use proxmox::api::schema::{ApiStringFormat, ApiType, Schema, StringSchema, ArraySchema, ReturnType};
use proxmox::const_regex;
use proxmox::sys::linux::procfs;
use crate::Authid;
/// Unique Process/Task Identifier
///
/// We use this to uniquely identify worker tasks. UPIDs have a short
/// string representation, which gives additional information about the
/// type of the task, for example:
/// ```text
/// UPID:{node}:{pid}:{pstart}:{task_id}:{starttime}:{worker_type}:{worker_id}:{userid}:
/// UPID:elsa:00004F37:0039E469:00000000:5CA78B83:garbage_collection::root@pam:
/// ```
/// Please note that we use tokio, so a single thread can run multiple
/// tasks.
// #[api] - manually implemented API type
#[derive(Debug, Clone)]
pub struct UPID {
/// The Unix PID
pub pid: libc::pid_t,
/// The Unix process start time from `/proc/pid/stat`
pub pstart: u64,
/// The task start time (Epoch)
pub starttime: i64,
/// The task ID (inside the process/thread)
pub task_id: usize,
/// Worker type (arbitrary ASCII string)
pub worker_type: String,
/// Worker ID (arbitrary ASCII string)
pub worker_id: Option<String>,
/// The authenticated entity who started the task
pub auth_id: Authid,
/// The node name.
pub node: String,
}
proxmox::forward_serialize_to_display!(UPID);
proxmox::forward_deserialize_to_from_str!(UPID);
const_regex! {
pub PROXMOX_UPID_REGEX = concat!(
r"^UPID:(?P<node>[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?):(?P<pid>[0-9A-Fa-f]{8}):",
r"(?P<pstart>[0-9A-Fa-f]{8,9}):(?P<task_id>[0-9A-Fa-f]{8,16}):(?P<starttime>[0-9A-Fa-f]{8}):",
r"(?P<wtype>[^:\s]+):(?P<wid>[^:\s]*):(?P<authid>[^:\s]+):$"
);
}
pub const PROXMOX_UPID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_UPID_REGEX);
pub const UPID_SCHEMA: Schema = StringSchema::new("Unique Process/Task Identifier")
.min_length("UPID:N:12345678:12345678:12345678:::".len())
.max_length(128) // arbitrary
.format(&PROXMOX_UPID_FORMAT)
.schema();
impl ApiType for UPID {
const API_SCHEMA: Schema = UPID_SCHEMA;
}
impl UPID {
/// Create a new UPID
pub fn new(
worker_type: &str,
worker_id: Option<String>,
auth_id: Authid,
) -> Result<Self, Error> {
let pid = unsafe { libc::getpid() };
let bad: &[_] = &['/', ':', ' '];
if worker_type.contains(bad) {
bail!("illegal characters in worker type '{}'", worker_type);
}
static WORKER_TASK_NEXT_ID: AtomicUsize = AtomicUsize::new(0);
let task_id = WORKER_TASK_NEXT_ID.fetch_add(1, Ordering::SeqCst);
Ok(UPID {
pid,
pstart: procfs::PidStat::read_from_pid(nix::unistd::Pid::from_raw(pid))?.starttime,
starttime: proxmox::tools::time::epoch_i64(),
task_id,
worker_type: worker_type.to_owned(),
worker_id,
auth_id,
node: proxmox::tools::nodename().to_owned(),
})
}
}
impl std::str::FromStr for UPID {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
if let Some(cap) = PROXMOX_UPID_REGEX.captures(s) {
let worker_id = if cap["wid"].is_empty() {
None
} else {
let wid = proxmox_systemd::unescape_unit(&cap["wid"])?;
Some(wid)
};
Ok(UPID {
pid: i32::from_str_radix(&cap["pid"], 16).unwrap(),
pstart: u64::from_str_radix(&cap["pstart"], 16).unwrap(),
starttime: i64::from_str_radix(&cap["starttime"], 16).unwrap(),
task_id: usize::from_str_radix(&cap["task_id"], 16).unwrap(),
worker_type: cap["wtype"].to_string(),
worker_id,
auth_id: cap["authid"].parse()?,
node: cap["node"].to_string(),
})
} else {
bail!("unable to parse UPID '{}'", s);
}
}
}
impl std::fmt::Display for UPID {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
let wid = if let Some(ref id) = self.worker_id {
proxmox_systemd::escape_unit(id, false)
} else {
String::new()
};
// Note: pstart can be > 32bit if uptime > 497 days, so this can result in
// more than 8 characters for pstart
write!(f, "UPID:{}:{:08X}:{:08X}:{:08X}:{:08X}:{}:{}:{}:",
self.node, self.pid, self.pstart, self.task_id, self.starttime, self.worker_type, wid, self.auth_id)
}
}
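
Since `Display` mirrors the parser, UPID strings round-trip; an editor-added sketch (inside a function returning `Result<(), anyhow::Error>`) using the example from the doc comment above:

```rust
use std::str::FromStr;

let s = "UPID:elsa:00004F37:0039E469:00000000:5CA78B83:garbage_collection::root@pam:";
let upid = UPID::from_str(s)?;
assert_eq!(upid.worker_type, "garbage_collection");
assert_eq!(upid.worker_id, None); // the empty "wid" field parses to None
assert_eq!(upid.to_string(), s);  // Display re-encodes the same string
```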
#[api()]
#[derive(Eq, PartialEq, Debug, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum TaskStateType {
/// Ok
OK,
/// Warning
Warning,
/// Error
Error,
/// Unknown
Unknown,
}
#[api(
properties: {
upid: { schema: UPID::API_SCHEMA },
},
)]
#[derive(Serialize, Deserialize)]
/// Task properties.
pub struct TaskListItem {
pub upid: String,
/// The node name where the task is running on.
pub node: String,
/// The Unix PID
pub pid: i64,
/// The Unix process start time (from `/proc/pid/stat`)
pub pstart: u64,
/// The task start time (Epoch)
pub starttime: i64,
/// Worker type (arbitrary ASCII string)
pub worker_type: String,
/// Worker ID (arbitrary ASCII string)
pub worker_id: Option<String>,
/// The authenticated entity who started the task
pub user: Authid,
/// The task end time (Epoch)
#[serde(skip_serializing_if="Option::is_none")]
pub endtime: Option<i64>,
/// Task end status
#[serde(skip_serializing_if="Option::is_none")]
pub status: Option<String>,
}
pub const NODE_TASKS_LIST_TASKS_RETURN_TYPE: ReturnType = ReturnType {
optional: false,
schema: &ArraySchema::new(
"A list of tasks.",
&TaskListItem::API_SCHEMA,
).schema(),
};

View File

@ -1,208 +0,0 @@
use serde::{Deserialize, Serialize};
use proxmox::api::api;
use proxmox::api::schema::{
BooleanSchema, IntegerSchema, Schema, StringSchema, Updater,
};
use super::{SINGLE_LINE_COMMENT_FORMAT, SINGLE_LINE_COMMENT_SCHEMA};
use super::userid::{Authid, Userid, PROXMOX_TOKEN_ID_SCHEMA};
pub const ENABLE_USER_SCHEMA: Schema = BooleanSchema::new(
"Enable the account (default). You can set this to '0' to disable the account.")
.default(true)
.schema();
pub const EXPIRE_USER_SCHEMA: Schema = IntegerSchema::new(
"Account expiration date (seconds since epoch). '0' means no expiration date.")
.default(0)
.minimum(0)
.schema();
pub const FIRST_NAME_SCHEMA: Schema = StringSchema::new("First name.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
pub const LAST_NAME_SCHEMA: Schema = StringSchema::new("Last name.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
pub const EMAIL_SCHEMA: Schema = StringSchema::new("E-Mail Address.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
#[api(
properties: {
userid: {
type: Userid,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
optional: true,
schema: ENABLE_USER_SCHEMA,
},
expire: {
optional: true,
schema: EXPIRE_USER_SCHEMA,
},
firstname: {
optional: true,
schema: FIRST_NAME_SCHEMA,
},
lastname: {
schema: LAST_NAME_SCHEMA,
optional: true,
},
email: {
schema: EMAIL_SCHEMA,
optional: true,
},
tokens: {
type: Array,
optional: true,
description: "List of user's API tokens.",
items: {
type: ApiToken
},
},
}
)]
#[derive(Serialize,Deserialize)]
/// User properties with added list of ApiTokens
pub struct UserWithTokens {
pub userid: Userid,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub enable: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub expire: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
pub firstname: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub lastname: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub email: Option<String>,
#[serde(skip_serializing_if="Vec::is_empty", default)]
pub tokens: Vec<ApiToken>,
}
#[api(
properties: {
tokenid: {
schema: PROXMOX_TOKEN_ID_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
optional: true,
schema: ENABLE_USER_SCHEMA,
},
expire: {
optional: true,
schema: EXPIRE_USER_SCHEMA,
},
}
)]
#[derive(Serialize,Deserialize)]
/// ApiToken properties.
pub struct ApiToken {
pub tokenid: Authid,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub enable: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub expire: Option<i64>,
}
impl ApiToken {
pub fn is_active(&self) -> bool {
if !self.enable.unwrap_or(true) {
return false;
}
if let Some(expire) = self.expire {
let now = proxmox::tools::time::epoch_i64();
if expire > 0 && expire <= now {
return false;
}
}
true
}
}
#[api(
properties: {
userid: {
type: Userid,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
optional: true,
schema: ENABLE_USER_SCHEMA,
},
expire: {
optional: true,
schema: EXPIRE_USER_SCHEMA,
},
firstname: {
optional: true,
schema: FIRST_NAME_SCHEMA,
},
lastname: {
schema: LAST_NAME_SCHEMA,
optional: true,
},
email: {
schema: EMAIL_SCHEMA,
optional: true,
},
}
)]
#[derive(Serialize,Deserialize,Updater)]
/// User properties.
pub struct User {
#[updater(skip)]
pub userid: Userid,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub enable: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub expire: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
pub firstname: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub lastname: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub email: Option<String>,
}
impl User {
pub fn is_active(&self) -> bool {
if !self.enable.unwrap_or(true) {
return false;
}
if let Some(expire) = self.expire {
let now = proxmox::tools::time::epoch_i64();
if expire > 0 && expire <= now {
return false;
}
}
true
}
}
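
An editor-added sketch of the shared expiry logic (values hypothetical; `?` assumes an `anyhow::Result` context):

```rust
// enable is unset, so it defaults to active; the expiry lies in the past.
let token = ApiToken {
    tokenid: "root@pam!build".parse()?, // hypothetical token ID
    comment: None,
    enable: None,
    expire: Some(1), // epoch second 1
};
assert!(!token.is_active()); // expired, despite enable defaulting to true
```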

View File

@ -1,81 +0,0 @@
use serde::{Deserialize, Serialize};
use proxmox::api::{api, schema::*};
use proxmox::const_regex;
const_regex! {
pub ZPOOL_NAME_REGEX = r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$";
}
pub const ZFS_ASHIFT_SCHEMA: Schema = IntegerSchema::new(
"Pool sector size exponent.")
.minimum(9)
.maximum(16)
.default(12)
.schema();
pub const ZPOOL_NAME_SCHEMA: Schema = StringSchema::new("ZFS Pool Name")
.format(&ApiStringFormat::Pattern(&ZPOOL_NAME_REGEX))
.schema();
#[api(default: "On")]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// The ZFS compression algorithm to use.
pub enum ZfsCompressionType {
/// Gnu Zip
Gzip,
/// LZ4
Lz4,
/// LZJB
Lzjb,
/// ZLE
Zle,
/// ZStd
ZStd,
/// Enable compression using the default algorithm.
On,
/// Disable compression.
Off,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// The ZFS RAID level to use.
pub enum ZfsRaidLevel {
/// Single Disk
Single,
/// Mirror
Mirror,
/// Raid10
Raid10,
/// RaidZ
RaidZ,
/// RaidZ2
RaidZ2,
/// RaidZ3
RaidZ3,
}
#[api()]
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// zpool list item
pub struct ZpoolListItem {
/// zpool name
pub name: String,
/// Health
pub health: String,
/// Total size
pub size: u64,
/// Used size
pub alloc: u64,
/// Free space
pub free: u64,
/// ZFS fragmentation level
pub frag: u64,
/// ZFS deduplication ratio
pub dedup: f64,
}

View File

@ -1,9 +0,0 @@
[package]
name = "pbs-buildcfg"
version = "2.0.10"
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2018"
description = "macros used for pbs related paths such as configdir and rundir"
build = "build.rs"
[dependencies]

View File

@ -1,24 +0,0 @@
// build.rs
use std::env;
use std::process::Command;
fn main() {
let repoid = match env::var("REPOID") {
Ok(repoid) => repoid,
Err(_) => {
match Command::new("git")
.args(&["rev-parse", "HEAD"])
.output()
{
Ok(output) => {
String::from_utf8(output.stdout).unwrap()
}
Err(err) => {
panic!("git rev-parse failed: {}", err);
}
}
}
};
println!("cargo:rustc-env=REPOID={}", repoid);
}
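
The variable exported here is presumably consumed at compile time elsewhere in the crate, along the lines of this editor-added sketch:

```rust
// Read the REPOID that build.rs exported via cargo:rustc-env.
pub const REPOID: &str = env!("REPOID");
```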

View File

@ -1,40 +0,0 @@
[package]
name = "pbs-client"
version = "0.1.0"
authors = ["Wolfgang Bumiller <w.bumiller@proxmox.com>"]
edition = "2018"
description = "The main proxmox backup client crate"
[dependencies]
anyhow = "1.0"
bitflags = "1.2.1"
bytes = "1.0"
futures = "0.3"
h2 = { version = "0.3", features = [ "stream" ] }
http = "0.2"
hyper = { version = "0.14", features = [ "full" ] }
lazy_static = "1.4"
libc = "0.2"
nix = "0.19.1"
openssl = "0.10"
percent-encoding = "2.1"
pin-project-lite = "0.2"
regex = "1.2"
rustyline = "7"
serde_json = "1.0"
tokio = { version = "1.6", features = [ "fs", "signal" ] }
tokio-stream = "0.1.0"
tower-service = "0.3.0"
xdg = "2.2"
pathpatterns = "0.1.2"
proxmox = { version = "0.13.3", default-features = false, features = [ "cli" ] }
proxmox-fuse = "0.1.1"
proxmox-http = { version = "0.4.0", features = [ "client", "http-helpers", "websocket" ] }
pxar = { version = "0.10.1", features = [ "tokio-io" ] }
pbs-api-types = { path = "../pbs-api-types" }
pbs-buildcfg = { path = "../pbs-buildcfg" }
pbs-datastore = { path = "../pbs-datastore" }
pbs-runtime = { path = "../pbs-runtime" }
pbs-tools = { path = "../pbs-tools" }

View File

@ -1,230 +0,0 @@
use std::io::{self, Seek, SeekFrom};
use std::ops::Range;
use std::sync::{Arc, Mutex};
use std::task::Context;
use std::pin::Pin;
use anyhow::{bail, format_err, Error};
use pxar::accessor::{MaybeReady, ReadAt, ReadAtOperation};
use pbs_datastore::dynamic_index::DynamicIndexReader;
use pbs_datastore::read_chunk::ReadChunk;
use pbs_datastore::index::IndexFile;
use pbs_tools::lru_cache::LruCache;
struct CachedChunk {
range: Range<u64>,
data: Vec<u8>,
}
impl CachedChunk {
/// Perform sanity checks on the range and data size.
pub fn new(range: Range<u64>, data: Vec<u8>) -> Result<Self, Error> {
if data.len() as u64 != range.end - range.start {
bail!(
"read chunk with wrong size ({} != {})",
data.len(),
range.end - range.start,
);
}
Ok(Self { range, data })
}
}
pub struct BufferedDynamicReader<S> {
store: S,
index: DynamicIndexReader,
archive_size: u64,
read_buffer: Vec<u8>,
buffered_chunk_idx: usize,
buffered_chunk_start: u64,
read_offset: u64,
lru_cache: LruCache<usize, CachedChunk>,
}
struct ChunkCacher<'a, S> {
store: &'a mut S,
index: &'a DynamicIndexReader,
}
impl<'a, S: ReadChunk> pbs_tools::lru_cache::Cacher<usize, CachedChunk> for ChunkCacher<'a, S> {
fn fetch(&mut self, index: usize) -> Result<Option<CachedChunk>, Error> {
let info = match self.index.chunk_info(index) {
Some(info) => info,
None => bail!("chunk index out of range"),
};
let range = info.range;
let data = self.store.read_chunk(&info.digest)?;
CachedChunk::new(range, data).map(Some)
}
}
impl<S: ReadChunk> BufferedDynamicReader<S> {
pub fn new(index: DynamicIndexReader, store: S) -> Self {
let archive_size = index.index_bytes();
Self {
store,
index,
archive_size,
read_buffer: Vec::with_capacity(1024 * 1024),
buffered_chunk_idx: 0,
buffered_chunk_start: 0,
read_offset: 0,
lru_cache: LruCache::new(32),
}
}
pub fn archive_size(&self) -> u64 {
self.archive_size
}
fn buffer_chunk(&mut self, idx: usize) -> Result<(), Error> {
//let (start, end, data) = self.lru_cache.access(
let cached_chunk = self.lru_cache.access(
idx,
&mut ChunkCacher {
store: &mut self.store,
index: &self.index,
},
)?.ok_or_else(|| format_err!("chunk not found by cacher"))?;
// fixme: avoid copy
self.read_buffer.clear();
self.read_buffer.extend_from_slice(&cached_chunk.data);
self.buffered_chunk_idx = idx;
self.buffered_chunk_start = cached_chunk.range.start;
//println!("BUFFER {} {}", self.buffered_chunk_start, end);
Ok(())
}
}
impl<S: ReadChunk> pbs_tools::io::BufferedRead for BufferedDynamicReader<S> {
fn buffered_read(&mut self, offset: u64) -> Result<&[u8], Error> {
if offset == self.archive_size {
return Ok(&self.read_buffer[0..0]);
}
let buffer_len = self.read_buffer.len();
let index = &self.index;
// optimization for sequential read
if buffer_len > 0
&& ((self.buffered_chunk_idx + 1) < index.index().len())
&& (offset >= (self.buffered_chunk_start + (self.read_buffer.len() as u64)))
{
let next_idx = self.buffered_chunk_idx + 1;
let next_end = index.chunk_end(next_idx);
if offset < next_end {
self.buffer_chunk(next_idx)?;
let buffer_offset = (offset - self.buffered_chunk_start) as usize;
return Ok(&self.read_buffer[buffer_offset..]);
}
}
if (buffer_len == 0)
|| (offset < self.buffered_chunk_start)
|| (offset >= (self.buffered_chunk_start + (self.read_buffer.len() as u64)))
{
let end_idx = index.index().len() - 1;
let end = index.chunk_end(end_idx);
let idx = index.binary_search(0, 0, end_idx, end, offset)?;
self.buffer_chunk(idx)?;
}
let buffer_offset = (offset - self.buffered_chunk_start) as usize;
Ok(&self.read_buffer[buffer_offset..])
}
}
impl<S: ReadChunk> std::io::Read for BufferedDynamicReader<S> {
fn read(&mut self, buf: &mut [u8]) -> Result<usize, std::io::Error> {
use pbs_tools::io::BufferedRead;
use std::io::{Error, ErrorKind};
let data = match self.buffered_read(self.read_offset) {
Ok(v) => v,
Err(err) => return Err(Error::new(ErrorKind::Other, err.to_string())),
};
let n = if data.len() > buf.len() {
buf.len()
} else {
data.len()
};
buf[0..n].copy_from_slice(&data[0..n]);
self.read_offset += n as u64;
Ok(n)
}
}
impl<S: ReadChunk> std::io::Seek for BufferedDynamicReader<S> {
fn seek(&mut self, pos: SeekFrom) -> Result<u64, std::io::Error> {
let new_offset = match pos {
SeekFrom::Start(start_offset) => start_offset as i64,
SeekFrom::End(end_offset) => (self.archive_size as i64) + end_offset,
SeekFrom::Current(offset) => (self.read_offset as i64) + offset,
};
use std::io::{Error, ErrorKind};
if (new_offset < 0) || (new_offset > (self.archive_size as i64)) {
return Err(Error::new(
ErrorKind::Other,
format!(
"seek is out of range {} ([0..{}])",
new_offset, self.archive_size
),
));
}
self.read_offset = new_offset as u64;
Ok(self.read_offset)
}
}
/// This is a workaround until we have cleaned up the chunk/reader/... infrastructure for better
/// async use!
///
/// Ideally BufferedDynamicReader gets replaced so the LruCache maps to `BroadcastFuture<Chunk>`,
/// so that we can properly access it from multiple threads simultaneously while not issuing
/// duplicate simultaneous reads over http.
#[derive(Clone)]
pub struct LocalDynamicReadAt<R: ReadChunk> {
inner: Arc<Mutex<BufferedDynamicReader<R>>>,
}
impl<R: ReadChunk> LocalDynamicReadAt<R> {
pub fn new(inner: BufferedDynamicReader<R>) -> Self {
Self {
inner: Arc::new(Mutex::new(inner)),
}
}
}
impl<R: ReadChunk> ReadAt for LocalDynamicReadAt<R> {
fn start_read_at<'a>(
self: Pin<&'a Self>,
_cx: &mut Context,
buf: &'a mut [u8],
offset: u64,
) -> MaybeReady<io::Result<usize>, ReadAtOperation<'a>> {
use std::io::Read;
MaybeReady::Ready(tokio::task::block_in_place(move || {
let mut reader = self.inner.lock().unwrap();
reader.seek(SeekFrom::Start(offset))?;
Ok(reader.read(buf)?)
}))
}
fn poll_complete<'a>(
self: Pin<&'a Self>,
_op: ReadAtOperation<'a>,
) -> MaybeReady<io::Result<usize>, ReadAtOperation<'a>> {
panic!("LocalDynamicReadAt::start_read_at returned Pending");
}
}
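
An editor-added usage sketch of the buffered reader as a plain `Read + Seek` source (`index` and `store` are assumed to be prepared by the surrounding client code; `?` assumes an `anyhow::Result` context):

```rust
use std::io::{Read, Seek, SeekFrom};

// `index: DynamicIndexReader` and `store: impl ReadChunk` come from the caller.
let mut reader = BufferedDynamicReader::new(index, store);
reader.seek(SeekFrom::Start(0))?;
let mut buf = vec![0u8; 4096];
let n = reader.read(&mut buf)?;
println!("read {} of {} archive bytes", n, reader.archive_size());
```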

View File

@ -1,23 +0,0 @@
[package]
name = "pbs-config"
version = "0.1.0"
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2018"
description = "Configuration file management for PBS"
[dependencies]
libc = "0.2"
anyhow = "1.0"
lazy_static = "1.4"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
openssl = "0.10"
nix = "0.19.1"
regex = "1.2"
once_cell = "1.3.1"
proxmox = { version = "0.13.3", default-features = false, features = [ "cli" ] }
pbs-api-types = { path = "../pbs-api-types" }
pbs-buildcfg = { path = "../pbs-buildcfg" }
pbs-tools = { path = "../pbs-tools" }

View File

@ -1,100 +0,0 @@
use anyhow::{Error};
use lazy_static::lazy_static;
use std::collections::HashMap;
use proxmox::api::{
schema::{ApiType, Schema},
section_config::{
SectionConfig,
SectionConfigData,
SectionConfigPlugin,
}
};
use pbs_api_types::{DataStoreConfig, DATASTORE_SCHEMA};
use crate::{open_backup_lockfile, replace_backup_config, BackupLockGuard};
lazy_static! {
pub static ref CONFIG: SectionConfig = init();
}
fn init() -> SectionConfig {
let obj_schema = match DataStoreConfig::API_SCHEMA {
Schema::Object(ref obj_schema) => obj_schema,
_ => unreachable!(),
};
let plugin = SectionConfigPlugin::new("datastore".to_string(), Some(String::from("name")), obj_schema);
let mut config = SectionConfig::new(&DATASTORE_SCHEMA);
config.register_plugin(plugin);
config
}
pub const DATASTORE_CFG_FILENAME: &str = "/etc/proxmox-backup/datastore.cfg";
pub const DATASTORE_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.datastore.lck";
/// Get exclusive lock
pub fn lock_config() -> Result<BackupLockGuard, Error> {
open_backup_lockfile(DATASTORE_CFG_LOCKFILE, None, true)
}
pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
let content = proxmox::tools::fs::file_read_optional_string(DATASTORE_CFG_FILENAME)?
.unwrap_or_else(|| "".to_string());
let digest = openssl::sha::sha256(content.as_bytes());
let data = CONFIG.parse(DATASTORE_CFG_FILENAME, &content)?;
Ok((data, digest))
}
pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
let raw = CONFIG.write(DATASTORE_CFG_FILENAME, &config)?;
replace_backup_config(DATASTORE_CFG_FILENAME, raw.as_bytes())
}
// shell completion helper
pub fn complete_datastore_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter().map(|(id, _)| id.to_string()).collect(),
Err(_) => return vec![],
}
}
pub fn complete_acl_path(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
let mut list = Vec::new();
list.push(String::from("/"));
list.push(String::from("/datastore"));
list.push(String::from("/datastore/"));
if let Ok((data, _digest)) = config() {
for id in data.sections.keys() {
list.push(format!("/datastore/{}", id));
}
}
list.push(String::from("/remote"));
list.push(String::from("/remote/"));
list.push(String::from("/tape"));
list.push(String::from("/tape/"));
list.push(String::from("/tape/drive"));
list.push(String::from("/tape/drive/"));
list.push(String::from("/tape/changer"));
list.push(String::from("/tape/changer/"));
list.push(String::from("/tape/pool"));
list.push(String::from("/tape/pool/"));
list.push(String::from("/tape/job"));
list.push(String::from("/tape/job/"));
list
}
pub fn complete_calendar_event(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
// just give some hints about possible values
["minutely", "hourly", "daily", "mon..fri", "0:0"]
.iter().map(|s| String::from(*s)).collect()
}
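
An editor-added sketch of the read-modify-write cycle this module is built for; the returned digest can be compared against a client-supplied one to detect concurrent edits:

```rust
// Take the exclusive lock, read config + digest, mutate, write back.
let _lock = lock_config()?;
let (mut data, _digest) = config()?; // _digest = sha256 of the raw file content
// ... compare `_digest` with a caller-provided digest, then mutate `data` ...
save_config(&data)?;
```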

View File

@ -1,136 +0,0 @@
use anyhow::{Error};
use lazy_static::lazy_static;
use std::collections::HashMap;
use serde::{Serialize, Deserialize};
use proxmox::api::{
api,
schema::*,
section_config::{
SectionConfig,
SectionConfigData,
SectionConfigPlugin,
}
};
use pbs_api_types::{REALM_ID_SCHEMA, SINGLE_LINE_COMMENT_SCHEMA};
use crate::{open_backup_lockfile, replace_backup_config, BackupLockGuard};
lazy_static! {
pub static ref CONFIG: SectionConfig = init();
}
#[api()]
#[derive(Eq, PartialEq, Debug, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Use the value of this attribute/claim as unique user name. It is
/// up to the identity provider to guarantee the uniqueness. The
/// OpenID specification only guarantees that Subject ('sub') is unique. Also
/// make sure that the user is not allowed to change that attribute
/// themselves!
pub enum OpenIdUserAttribute {
/// Subject (OpenId 'sub' claim)
Subject,
/// Username (OpenId 'preferred_username' claim)
Username,
/// Email (OpenId 'email' claim)
Email,
}
#[api(
properties: {
realm: {
schema: REALM_ID_SCHEMA,
},
"client-key": {
optional: true,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
autocreate: {
optional: true,
default: false,
},
"username-claim": {
type: OpenIdUserAttribute,
optional: true,
},
},
)]
#[derive(Serialize,Deserialize,Updater)]
#[serde(rename_all="kebab-case")]
/// OpenID configuration properties.
pub struct OpenIdRealmConfig {
#[updater(skip)]
pub realm: String,
/// OpenID Issuer Url
pub issuer_url: String,
/// OpenID Client ID
pub client_id: String,
/// OpenID Client Key
#[serde(skip_serializing_if="Option::is_none")]
pub client_key: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
/// Automatically create users if they do not exist.
#[serde(skip_serializing_if="Option::is_none")]
pub autocreate: Option<bool>,
#[updater(skip)]
#[serde(skip_serializing_if="Option::is_none")]
pub username_claim: Option<OpenIdUserAttribute>,
}
fn init() -> SectionConfig {
let obj_schema = match OpenIdRealmConfig::API_SCHEMA {
Schema::Object(ref obj_schema) => obj_schema,
_ => unreachable!(),
};
let plugin = SectionConfigPlugin::new("openid".to_string(), Some(String::from("realm")), obj_schema);
let mut config = SectionConfig::new(&REALM_ID_SCHEMA);
config.register_plugin(plugin);
config
}
pub const DOMAINS_CFG_FILENAME: &str = "/etc/proxmox-backup/domains.cfg";
pub const DOMAINS_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.domains.lck";
/// Get exclusive lock
pub fn lock_config() -> Result<BackupLockGuard, Error> {
open_backup_lockfile(DOMAINS_CFG_LOCKFILE, None, true)
}
pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
let content = proxmox::tools::fs::file_read_optional_string(DOMAINS_CFG_FILENAME)?
.unwrap_or_else(|| "".to_string());
let digest = openssl::sha::sha256(content.as_bytes());
let data = CONFIG.parse(DOMAINS_CFG_FILENAME, &content)?;
Ok((data, digest))
}
pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
let raw = CONFIG.write(DOMAINS_CFG_FILENAME, &config)?;
replace_backup_config(DOMAINS_CFG_FILENAME, raw.as_bytes())
}
// shell completion helper
pub fn complete_realm_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter().map(|(id, _)| id.to_string()).collect(),
Err(_) => return vec![],
}
}
pub fn complete_openid_realm_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter()
.filter_map(|(id, (t, _))| if t == "openid" { Some(id.to_string()) } else { None })
.collect(),
Err(_) => return vec![],
}
}

View File

@ -1,106 +0,0 @@
pub mod acl;
mod cached_user_info;
pub use cached_user_info::CachedUserInfo;
pub mod datastore;
pub mod domains;
pub mod drive;
pub mod key_config;
pub mod media_pool;
pub mod network;
pub mod remote;
pub mod sync;
pub mod tape_encryption_keys;
pub mod tape_job;
pub mod token_shadow;
pub mod user;
pub mod verify;
pub(crate) mod memcom;
use anyhow::{format_err, Error};
pub use pbs_buildcfg::{BACKUP_USER_NAME, BACKUP_GROUP_NAME};
/// Return User info for the 'backup' user (``getpwnam_r(3)``)
pub fn backup_user() -> Result<nix::unistd::User, Error> {
pbs_tools::sys::query_user(BACKUP_USER_NAME)?
.ok_or_else(|| format_err!("Unable to lookup '{}' user.", BACKUP_USER_NAME))
}
/// Return Group info for the 'backup' group (``getgrnam(3)``)
pub fn backup_group() -> Result<nix::unistd::Group, Error> {
pbs_tools::sys::query_group(BACKUP_GROUP_NAME)?
.ok_or_else(|| format_err!("Unable to lookup '{}' group.", BACKUP_GROUP_NAME))
}
pub struct BackupLockGuard(Option<std::fs::File>);
#[doc(hidden)]
/// Note: do not use in production code; this is only intended for tests
pub unsafe fn create_mocked_lock() -> BackupLockGuard {
BackupLockGuard(None)
}
/// Open or create a lock file owned by user "backup" and lock it.
///
/// Owner/Group of the file is set to backup/backup.
/// File mode is 0660.
/// Default timeout is 10 seconds.
///
/// Note: This method needs to be called by user "root" or "backup".
pub fn open_backup_lockfile<P: AsRef<std::path::Path>>(
path: P,
timeout: Option<std::time::Duration>,
exclusive: bool,
) -> Result<BackupLockGuard, Error> {
let user = backup_user()?;
let options = proxmox::tools::fs::CreateOptions::new()
.perm(nix::sys::stat::Mode::from_bits_truncate(0o660))
.owner(user.uid)
.group(user.gid);
let timeout = timeout.unwrap_or(std::time::Duration::new(10, 0));
let file = proxmox::tools::fs::open_file_locked(&path, timeout, exclusive, options)?;
Ok(BackupLockGuard(Some(file)))
}
/// Atomically write data to file owned by "root:backup" with permission "0640"
///
/// Only the superuser can write those files, but group 'backup' can read them.
pub fn replace_backup_config<P: AsRef<std::path::Path>>(
path: P,
data: &[u8],
) -> Result<(), Error> {
let backup_user = backup_user()?;
let mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
// set the correct owner/group/permissions while saving file
// owner(rw) = root, group(r)= backup
let options = proxmox::tools::fs::CreateOptions::new()
.perm(mode)
.owner(nix::unistd::ROOT)
.group(backup_user.gid);
proxmox::tools::fs::replace_file(path, data, options)?;
Ok(())
}
/// Atomically write data to file owned by "root:root" with permission "0600"
///
/// Only the superuser can read and write those files.
pub fn replace_secret_config<P: AsRef<std::path::Path>>(
path: P,
data: &[u8],
) -> Result<(), Error> {
let mode = nix::sys::stat::Mode::from_bits_truncate(0o0600);
// set the correct owner/group/permissions while saving file
// owner(rw) = root, group(r)= root
let options = proxmox::tools::fs::CreateOptions::new()
.perm(mode)
.owner(nix::unistd::ROOT)
.group(nix::unistd::Gid::from_raw(0));
proxmox::tools::fs::replace_file(path, data, options)?;
Ok(())
}
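
Combined, a typical update under lock might look like this editor-added sketch (paths hypothetical; `?` assumes an `anyhow::Result` context):

```rust
// Serialize an updated config file while holding the backup-owned lock.
let _guard = open_backup_lockfile("/etc/proxmox-backup/.example.lck", None, true)?;
replace_backup_config("/etc/proxmox-backup/example.cfg", b"key: value\n")?;
```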

View File

@ -1,81 +0,0 @@
//! Memory-based communication channel between proxy & daemon for things such as cache
//! invalidation.
use std::os::unix::io::AsRawFd;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use anyhow::Error;
use nix::fcntl::OFlag;
use nix::sys::mman::{MapFlags, ProtFlags};
use nix::sys::stat::Mode;
use once_cell::sync::OnceCell;
use proxmox::tools::fs::CreateOptions;
use proxmox::tools::mmap::Mmap;
/// In-memory communication channel.
pub struct Memcom {
mmap: Mmap<u8>,
}
#[repr(C)]
struct Head {
// User (user.cfg) cache generation/version.
user_cache_generation: AtomicUsize,
}
static INSTANCE: OnceCell<Arc<Memcom>> = OnceCell::new();
const MEMCOM_FILE_PATH: &str = pbs_buildcfg::rundir!("/proxmox-backup-memcom");
const EMPTY_PAGE: [u8; 4096] = [0u8; 4096];
impl Memcom {
/// Open the memory based communication channel singleton.
pub fn new() -> Result<Arc<Self>, Error> {
INSTANCE.get_or_try_init(Self::open).map(Arc::clone)
}
// Actual work of `new`:
fn open() -> Result<Arc<Self>, Error> {
let user = crate::backup_user()?;
let options = CreateOptions::new()
.perm(Mode::from_bits_truncate(0o660))
.owner(user.uid)
.group(user.gid);
let file = proxmox::tools::fs::atomic_open_or_create_file(
MEMCOM_FILE_PATH,
OFlag::O_RDWR | OFlag::O_CLOEXEC,
&EMPTY_PAGE, options)?;
let mmap = unsafe {
Mmap::<u8>::map_fd(
file.as_raw_fd(),
0,
4096,
ProtFlags::PROT_READ | ProtFlags::PROT_WRITE,
MapFlags::MAP_SHARED | MapFlags::MAP_NORESERVE | MapFlags::MAP_POPULATE,
)?
};
Ok(Arc::new(Self { mmap }))
}
// Shortcut to get the mapped `Head` as a `Head`.
fn head(&self) -> &Head {
unsafe { &*(self.mmap.as_ptr() as *const u8 as *const Head) }
}
/// Returns the user cache generation number.
pub fn user_cache_generation(&self) -> usize {
self.head().user_cache_generation.load(Ordering::Acquire)
}
/// Increase the user cache generation number.
pub fn increase_user_cache_generation(&self) {
self.head()
.user_cache_generation
.fetch_add(1, Ordering::AcqRel);
}
}
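
An editor-added sketch of the intended proxy/daemon interplay; both processes map the same page, so the counter is shared:

```rust
// Daemon side: bump the generation after changing user.cfg.
let memcom = Memcom::new()?;
memcom.increase_user_cache_generation();

// Proxy side: compare against the last generation it saw and
// reload its user cache when the value has changed.
let current = memcom.user_cache_generation();
```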

View File

@ -1,64 +0,0 @@
use anyhow::{Error};
use lazy_static::lazy_static;
use std::collections::HashMap;
use proxmox::api::{
schema::*,
section_config::{
SectionConfig,
SectionConfigData,
SectionConfigPlugin,
}
};
use pbs_api_types::{Remote, REMOTE_ID_SCHEMA};
use crate::{open_backup_lockfile, BackupLockGuard};
lazy_static! {
pub static ref CONFIG: SectionConfig = init();
}
fn init() -> SectionConfig {
let obj_schema = match Remote::API_SCHEMA {
Schema::AllOf(ref allof_schema) => allof_schema,
_ => unreachable!(),
};
let plugin = SectionConfigPlugin::new("remote".to_string(), Some("name".to_string()), obj_schema);
let mut config = SectionConfig::new(&REMOTE_ID_SCHEMA);
config.register_plugin(plugin);
config
}
pub const REMOTE_CFG_FILENAME: &str = "/etc/proxmox-backup/remote.cfg";
pub const REMOTE_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.remote.lck";
/// Get exclusive lock
pub fn lock_config() -> Result<BackupLockGuard, Error> {
open_backup_lockfile(REMOTE_CFG_LOCKFILE, None, true)
}
pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
let content = proxmox::tools::fs::file_read_optional_string(REMOTE_CFG_FILENAME)?
.unwrap_or_else(|| "".to_string());
let digest = openssl::sha::sha256(content.as_bytes());
let data = CONFIG.parse(REMOTE_CFG_FILENAME, &content)?;
Ok((data, digest))
}
pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
let raw = CONFIG.write(REMOTE_CFG_FILENAME, &config)?;
crate::replace_backup_config(REMOTE_CFG_FILENAME, raw.as_bytes())
}
// shell completion helper
pub fn complete_remote_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter().map(|(id, _)| id.to_string()).collect(),
Err(_) => return vec![],
}
}

View File

@ -1,65 +0,0 @@
use anyhow::{Error};
use lazy_static::lazy_static;
use std::collections::HashMap;
use proxmox::api::{
schema::*,
section_config::{
SectionConfig,
SectionConfigData,
SectionConfigPlugin,
}
};
use pbs_api_types::{JOB_ID_SCHEMA, SyncJobConfig};
use crate::{open_backup_lockfile, replace_backup_config, BackupLockGuard};
lazy_static! {
pub static ref CONFIG: SectionConfig = init();
}
fn init() -> SectionConfig {
let obj_schema = match SyncJobConfig::API_SCHEMA {
Schema::Object(ref obj_schema) => obj_schema,
_ => unreachable!(),
};
let plugin = SectionConfigPlugin::new("sync".to_string(), Some(String::from("id")), obj_schema);
let mut config = SectionConfig::new(&JOB_ID_SCHEMA);
config.register_plugin(plugin);
config
}
pub const SYNC_CFG_FILENAME: &str = "/etc/proxmox-backup/sync.cfg";
pub const SYNC_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.sync.lck";
/// Get exclusive lock
pub fn lock_config() -> Result<BackupLockGuard, Error> {
open_backup_lockfile(SYNC_CFG_LOCKFILE, None, true)
}
pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
let content = proxmox::tools::fs::file_read_optional_string(SYNC_CFG_FILENAME)?
.unwrap_or_else(|| "".to_string());
let digest = openssl::sha::sha256(content.as_bytes());
let data = CONFIG.parse(SYNC_CFG_FILENAME, &content)?;
Ok((data, digest))
}
pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
let raw = CONFIG.write(SYNC_CFG_FILENAME, &config)?;
replace_backup_config(SYNC_CFG_FILENAME, raw.as_bytes())
}
// shell completion helper
pub fn complete_sync_job_id(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter().map(|(id, _)| id.to_string()).collect(),
Err(_) => return vec![],
}
}

View File

@ -1,66 +0,0 @@
use anyhow::{Error};
use lazy_static::lazy_static;
use std::collections::HashMap;
use proxmox::api::{
schema::{Schema, ApiType},
section_config::{
SectionConfig,
SectionConfigData,
SectionConfigPlugin,
}
};
use pbs_api_types::{TapeBackupJobConfig, JOB_ID_SCHEMA};
use crate::{open_backup_lockfile, replace_backup_config, BackupLockGuard};
lazy_static! {
pub static ref CONFIG: SectionConfig = init();
}
fn init() -> SectionConfig {
let obj_schema = match TapeBackupJobConfig::API_SCHEMA {
Schema::AllOf(ref allof_schema) => allof_schema,
_ => unreachable!(),
};
let plugin = SectionConfigPlugin::new("backup".to_string(), Some(String::from("id")), obj_schema);
let mut config = SectionConfig::new(&JOB_ID_SCHEMA);
config.register_plugin(plugin);
config
}
pub const TAPE_JOB_CFG_FILENAME: &str = "/etc/proxmox-backup/tape-job.cfg";
pub const TAPE_JOB_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.tape-job.lck";
/// Get exclusive lock
pub fn lock() -> Result<BackupLockGuard, Error> {
open_backup_lockfile( TAPE_JOB_CFG_LOCKFILE, None, true)
}
pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
let content = proxmox::tools::fs::file_read_optional_string(TAPE_JOB_CFG_FILENAME)?
.unwrap_or_else(|| "".to_string());
let digest = openssl::sha::sha256(content.as_bytes());
let data = CONFIG.parse(TAPE_JOB_CFG_FILENAME, &content)?;
Ok((data, digest))
}
pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
let raw = CONFIG.write(TAPE_JOB_CFG_FILENAME, &config)?;
replace_backup_config(TAPE_JOB_CFG_FILENAME, raw.as_bytes())
}
// shell completion helper
/// List all tape job IDs
pub fn complete_tape_job_id(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter().map(|(id, _)| id.to_string()).collect(),
Err(_) => return vec![],
}
}

View File

@ -1,64 +0,0 @@
use anyhow::{Error};
use lazy_static::lazy_static;
use std::collections::HashMap;
use proxmox::api::{
schema::*,
section_config::{
SectionConfig,
SectionConfigData,
SectionConfigPlugin,
}
};
use pbs_api_types::{JOB_ID_SCHEMA, VerificationJobConfig};
use crate::{open_backup_lockfile, replace_backup_config, BackupLockGuard};
lazy_static! {
pub static ref CONFIG: SectionConfig = init();
}
fn init() -> SectionConfig {
let obj_schema = match VerificationJobConfig::API_SCHEMA {
Schema::Object(ref obj_schema) => obj_schema,
_ => unreachable!(),
};
let plugin = SectionConfigPlugin::new("verification".to_string(), Some(String::from("id")), obj_schema);
let mut config = SectionConfig::new(&JOB_ID_SCHEMA);
config.register_plugin(plugin);
config
}
pub const VERIFICATION_CFG_FILENAME: &str = "/etc/proxmox-backup/verification.cfg";
pub const VERIFICATION_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.verification.lck";
/// Get exclusive lock
pub fn lock_config() -> Result<BackupLockGuard, Error> {
open_backup_lockfile(VERIFICATION_CFG_LOCKFILE, None, true)
}
pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
let content = proxmox::tools::fs::file_read_optional_string(VERIFICATION_CFG_FILENAME)?;
let content = content.unwrap_or_else(String::new);
let digest = openssl::sha::sha256(content.as_bytes());
let data = CONFIG.parse(VERIFICATION_CFG_FILENAME, &content)?;
Ok((data, digest))
}
pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
let raw = CONFIG.write(VERIFICATION_CFG_FILENAME, &config)?;
replace_backup_config(VERIFICATION_CFG_FILENAME, raw.as_bytes())
}
// shell completion helper
pub fn complete_verification_job_id(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter().map(|(id, _)| id.to_string()).collect(),
Err(_) => Vec::new(),
}
}


@@ -1,30 +0,0 @@
[package]
name = "pbs-datastore"
version = "0.1.0"
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2018"
description = "low level pbs data storage access"
[dependencies]
anyhow = "1.0"
base64 = "0.12"
crc32fast = "1"
endian_trait = { version = "0.6", features = [ "arrays" ] }
futures = "0.3"
libc = "0.2"
log = "0.4"
nix = "0.19.1"
openssl = "0.10"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
tokio = { version = "1.6", features = [] }
zstd = { version = "0.6", features = [ "bindgen" ] }
pathpatterns = "0.1.2"
pxar = "0.10.1"
proxmox = { version = "0.13.3", default-features = false, features = [ "api-macro" ] }
pbs-api-types = { path = "../pbs-api-types" }
pbs-tools = { path = "../pbs-tools" }
pbs-config = { path = "../pbs-config" }


@@ -1,178 +0,0 @@
use std::io::{BufReader, Read};
use std::sync::Arc;
use anyhow::{bail, format_err, Error};
use proxmox::tools::io::ReadExt;
use pbs_tools::crypt_config::CryptConfig;
use crate::checksum_reader::ChecksumReader;
use crate::crypt_reader::CryptReader;
use crate::file_formats::{self, DataBlobHeader};
enum BlobReaderState<'reader, R: Read> {
Uncompressed {
expected_crc: u32,
csum_reader: ChecksumReader<R>,
},
Compressed {
expected_crc: u32,
decompr: zstd::stream::read::Decoder<'reader, BufReader<ChecksumReader<R>>>,
},
Encrypted {
expected_crc: u32,
decrypt_reader: CryptReader<BufReader<ChecksumReader<R>>>,
},
EncryptedCompressed {
expected_crc: u32,
decompr: zstd::stream::read::Decoder<
'reader,
BufReader<CryptReader<BufReader<ChecksumReader<R>>>>,
>,
},
}
/// Read data blobs
pub struct DataBlobReader<'reader, R: Read> {
state: BlobReaderState<'reader, R>,
}
// zstd_safe::DCtx is not sync but we are, since
// the only public interface is on mutable reference
unsafe impl<R: Read> Sync for DataBlobReader<'_, R> {}
impl<R: Read> DataBlobReader<'_, R> {
pub fn new(mut reader: R, config: Option<Arc<CryptConfig>>) -> Result<Self, Error> {
let head: DataBlobHeader = unsafe { reader.read_le_value()? };
match head.magic {
file_formats::UNCOMPRESSED_BLOB_MAGIC_1_0 => {
let expected_crc = u32::from_le_bytes(head.crc);
let csum_reader = ChecksumReader::new(reader, None);
Ok(Self {
state: BlobReaderState::Uncompressed {
expected_crc,
csum_reader,
},
})
}
file_formats::COMPRESSED_BLOB_MAGIC_1_0 => {
let expected_crc = u32::from_le_bytes(head.crc);
let csum_reader = ChecksumReader::new(reader, None);
let decompr = zstd::stream::read::Decoder::new(csum_reader)?;
Ok(Self {
state: BlobReaderState::Compressed {
expected_crc,
decompr,
},
})
}
file_formats::ENCRYPTED_BLOB_MAGIC_1_0 => {
let config = config
.ok_or_else(|| format_err!("unable to read encrypted blob without key"))?;
let expected_crc = u32::from_le_bytes(head.crc);
let mut iv = [0u8; 16];
let mut expected_tag = [0u8; 16];
reader.read_exact(&mut iv)?;
reader.read_exact(&mut expected_tag)?;
let csum_reader = ChecksumReader::new(reader, None);
let decrypt_reader = CryptReader::new(
BufReader::with_capacity(64 * 1024, csum_reader),
iv,
expected_tag,
config,
)?;
Ok(Self {
state: BlobReaderState::Encrypted {
expected_crc,
decrypt_reader,
},
})
}
file_formats::ENCR_COMPR_BLOB_MAGIC_1_0 => {
let config = config
.ok_or_else(|| format_err!("unable to read encrypted blob without key"))?;
let expected_crc = u32::from_le_bytes(head.crc);
let mut iv = [0u8; 16];
let mut expected_tag = [0u8; 16];
reader.read_exact(&mut iv)?;
reader.read_exact(&mut expected_tag)?;
let csum_reader = ChecksumReader::new(reader, None);
let decrypt_reader = CryptReader::new(
BufReader::with_capacity(64 * 1024, csum_reader),
iv,
expected_tag,
config,
)?;
let decompr = zstd::stream::read::Decoder::new(decrypt_reader)?;
Ok(Self {
state: BlobReaderState::EncryptedCompressed {
expected_crc,
decompr,
},
})
}
_ => bail!("got wrong magic number {:?}", head.magic),
}
}
pub fn finish(self) -> Result<R, Error> {
match self.state {
BlobReaderState::Uncompressed {
csum_reader,
expected_crc,
} => {
let (reader, crc, _) = csum_reader.finish()?;
if crc != expected_crc {
bail!("blob crc check failed");
}
Ok(reader)
}
BlobReaderState::Compressed {
expected_crc,
decompr,
} => {
let csum_reader = decompr.finish().into_inner();
let (reader, crc, _) = csum_reader.finish()?;
if crc != expected_crc {
bail!("blob crc check failed");
}
Ok(reader)
}
BlobReaderState::Encrypted {
expected_crc,
decrypt_reader,
} => {
let csum_reader = decrypt_reader.finish()?.into_inner();
let (reader, crc, _) = csum_reader.finish()?;
if crc != expected_crc {
bail!("blob crc check failed");
}
Ok(reader)
}
BlobReaderState::EncryptedCompressed {
expected_crc,
decompr,
} => {
let decrypt_reader = decompr.finish().into_inner();
let csum_reader = decrypt_reader.finish()?.into_inner();
let (reader, crc, _) = csum_reader.finish()?;
if crc != expected_crc {
bail!("blob crc check failed");
}
Ok(reader)
}
}
}
}
impl<R: Read> Read for DataBlobReader<'_, R> {
fn read(&mut self, buf: &mut [u8]) -> Result<usize, std::io::Error> {
match &mut self.state {
BlobReaderState::Uncompressed { csum_reader, .. } => csum_reader.read(buf),
BlobReaderState::Compressed { decompr, .. } => decompr.read(buf),
BlobReaderState::Encrypted { decrypt_reader, .. } => decrypt_reader.read(buf),
BlobReaderState::EncryptedCompressed { decompr, .. } => decompr.read(buf),
}
}
}


@@ -1,210 +0,0 @@
use anyhow::Error;
use proxmox::tools::io::WriteExt;
use std::io::{Seek, SeekFrom, Write};
use std::sync::Arc;
use pbs_tools::crypt_config::CryptConfig;
use crate::checksum_writer::ChecksumWriter;
use crate::crypt_writer::CryptWriter;
use crate::file_formats::{self, DataBlobHeader, EncryptedDataBlobHeader};
enum BlobWriterState<'writer, W: Write> {
Uncompressed {
csum_writer: ChecksumWriter<W>,
},
Compressed {
compr: zstd::stream::write::Encoder<'writer, ChecksumWriter<W>>,
},
Encrypted {
crypt_writer: CryptWriter<ChecksumWriter<W>>,
},
EncryptedCompressed {
compr: zstd::stream::write::Encoder<'writer, CryptWriter<ChecksumWriter<W>>>,
},
}
/// Data blob writer
pub struct DataBlobWriter<'writer, W: Write> {
state: BlobWriterState<'writer, W>,
}
impl<W: Write + Seek> DataBlobWriter<'_, W> {
pub fn new_uncompressed(mut writer: W) -> Result<Self, Error> {
writer.seek(SeekFrom::Start(0))?;
let head = DataBlobHeader {
magic: file_formats::UNCOMPRESSED_BLOB_MAGIC_1_0,
crc: [0; 4],
};
unsafe {
writer.write_le_value(head)?;
}
let csum_writer = ChecksumWriter::new(writer, None);
Ok(Self {
state: BlobWriterState::Uncompressed { csum_writer },
})
}
pub fn new_compressed(mut writer: W) -> Result<Self, Error> {
writer.seek(SeekFrom::Start(0))?;
let head = DataBlobHeader {
magic: file_formats::COMPRESSED_BLOB_MAGIC_1_0,
crc: [0; 4],
};
unsafe {
writer.write_le_value(head)?;
}
let csum_writer = ChecksumWriter::new(writer, None);
let compr = zstd::stream::write::Encoder::new(csum_writer, 1)?;
Ok(Self {
state: BlobWriterState::Compressed { compr },
})
}
pub fn new_encrypted(mut writer: W, config: Arc<CryptConfig>) -> Result<Self, Error> {
writer.seek(SeekFrom::Start(0))?;
let head = EncryptedDataBlobHeader {
head: DataBlobHeader {
magic: file_formats::ENCRYPTED_BLOB_MAGIC_1_0,
crc: [0; 4],
},
iv: [0u8; 16],
tag: [0u8; 16],
};
unsafe {
writer.write_le_value(head)?;
}
let csum_writer = ChecksumWriter::new(writer, None);
let crypt_writer = CryptWriter::new(csum_writer, config)?;
Ok(Self {
state: BlobWriterState::Encrypted { crypt_writer },
})
}
pub fn new_encrypted_compressed(
mut writer: W,
config: Arc<CryptConfig>,
) -> Result<Self, Error> {
writer.seek(SeekFrom::Start(0))?;
let head = EncryptedDataBlobHeader {
head: DataBlobHeader {
magic: file_formats::ENCR_COMPR_BLOB_MAGIC_1_0,
crc: [0; 4],
},
iv: [0u8; 16],
tag: [0u8; 16],
};
unsafe {
writer.write_le_value(head)?;
}
let csum_writer = ChecksumWriter::new(writer, None);
let crypt_writer = CryptWriter::new(csum_writer, config)?;
let compr = zstd::stream::write::Encoder::new(crypt_writer, 1)?;
Ok(Self {
state: BlobWriterState::EncryptedCompressed { compr },
})
}
pub fn finish(self) -> Result<W, Error> {
match self.state {
BlobWriterState::Uncompressed { csum_writer } => {
// write CRC
let (mut writer, crc, _) = csum_writer.finish()?;
let head = DataBlobHeader {
magic: file_formats::UNCOMPRESSED_BLOB_MAGIC_1_0,
crc: crc.to_le_bytes(),
};
writer.seek(SeekFrom::Start(0))?;
unsafe {
writer.write_le_value(head)?;
}
Ok(writer)
}
BlobWriterState::Compressed { compr } => {
let csum_writer = compr.finish()?;
let (mut writer, crc, _) = csum_writer.finish()?;
let head = DataBlobHeader {
magic: file_formats::COMPRESSED_BLOB_MAGIC_1_0,
crc: crc.to_le_bytes(),
};
writer.seek(SeekFrom::Start(0))?;
unsafe {
writer.write_le_value(head)?;
}
Ok(writer)
}
BlobWriterState::Encrypted { crypt_writer } => {
let (csum_writer, iv, tag) = crypt_writer.finish()?;
let (mut writer, crc, _) = csum_writer.finish()?;
let head = EncryptedDataBlobHeader {
head: DataBlobHeader {
magic: file_formats::ENCRYPTED_BLOB_MAGIC_1_0,
crc: crc.to_le_bytes(),
},
iv,
tag,
};
writer.seek(SeekFrom::Start(0))?;
unsafe {
writer.write_le_value(head)?;
}
Ok(writer)
}
BlobWriterState::EncryptedCompressed { compr } => {
let crypt_writer = compr.finish()?;
let (csum_writer, iv, tag) = crypt_writer.finish()?;
let (mut writer, crc, _) = csum_writer.finish()?;
let head = EncryptedDataBlobHeader {
head: DataBlobHeader {
magic: file_formats::ENCR_COMPR_BLOB_MAGIC_1_0,
crc: crc.to_le_bytes(),
},
iv,
tag,
};
writer.seek(SeekFrom::Start(0))?;
unsafe {
writer.write_le_value(head)?;
}
Ok(writer)
}
}
}
}
impl<W: Write + Seek> Write for DataBlobWriter<'_, W> {
fn write(&mut self, buf: &[u8]) -> Result<usize, std::io::Error> {
match self.state {
BlobWriterState::Uncompressed {
ref mut csum_writer,
} => csum_writer.write(buf),
BlobWriterState::Compressed { ref mut compr } => compr.write(buf),
BlobWriterState::Encrypted {
ref mut crypt_writer,
} => crypt_writer.write(buf),
BlobWriterState::EncryptedCompressed { ref mut compr } => compr.write(buf),
}
}
fn flush(&mut self) -> Result<(), std::io::Error> {
match self.state {
BlobWriterState::Uncompressed {
ref mut csum_writer,
} => csum_writer.flush(),
BlobWriterState::Compressed { ref mut compr } => compr.flush(),
BlobWriterState::Encrypted {
ref mut crypt_writer,
} => crypt_writer.flush(),
BlobWriterState::EncryptedCompressed { ref mut compr } => compr.flush(),
}
}
}
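This writer is the counterpart to the `DataBlobReader` above; a minimal round-trip sketch over an in-memory buffer, with module paths assumed from this crate's layout:

use std::io::{Cursor, Read, Write};
use anyhow::Error;
use pbs_datastore::data_blob_reader::DataBlobReader;
use pbs_datastore::data_blob_writer::DataBlobWriter;

fn roundtrip(payload: &[u8]) -> Result<Vec<u8>, Error> {
    // write the blob; finish() patches the CRC into the header
    let mut blob_writer = DataBlobWriter::new_uncompressed(Cursor::new(Vec::new()))?;
    blob_writer.write_all(payload)?;
    let mut cursor = blob_writer.finish()?;
    cursor.set_position(0);

    // read it back; finish() verifies the CRC
    let mut blob_reader = DataBlobReader::new(cursor, None)?; // None: no encryption key
    let mut data = Vec::new();
    blob_reader.read_to_end(&mut data)?;
    blob_reader.finish()?;
    Ok(data)
}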


@@ -1,29 +0,0 @@
use std::future::Future;
use std::pin::Pin;
use anyhow::Error;
use crate::data_blob::DataBlob;
/// The ReadChunk trait allows reading backup data chunks (local or remote)
pub trait ReadChunk {
/// Returns the encoded chunk data
fn read_raw_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error>;
/// Returns the decoded chunk data
fn read_chunk(&self, digest: &[u8; 32]) -> Result<Vec<u8>, Error>;
}
pub trait AsyncReadChunk: Send {
/// Returns the encoded chunk data
fn read_raw_chunk<'a>(
&'a self,
digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<DataBlob, Error>> + Send + 'a>>;
/// Returns the decoded chunk data
fn read_chunk<'a>(
&'a self,
digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<Vec<u8>, Error>> + Send + 'a>>;
}


@@ -1,20 +0,0 @@
[package]
name = "pbs-fuse-loop"
version = "0.1.0"
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2018"
description = "fuse and loop device helpers"
[dependencies]
anyhow = "1.0"
futures = "0.3"
lazy_static = "1.4"
libc = "0.2"
nix = "0.19.1"
regex = "1.2"
tokio = { version = "1.6", features = [] }
proxmox = "0.13.3"
proxmox-fuse = "0.1.1"
pbs-tools = { path = "../pbs-tools" }


@@ -1,5 +0,0 @@
pub mod loopdev;
mod fuse_loop;
pub use fuse_loop::*;


@@ -1,11 +0,0 @@
[package]
name = "pbs-runtime"
version = "0.1.0"
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2018"
description = "tokio runtime related helpers required for binaries"
[dependencies]
lazy_static = "1.4"
pin-utils = "0.1.0"
tokio = { version = "1.6", features = [ "rt", "rt-multi-thread" ] }


@@ -1,25 +0,0 @@
[package]
name = "pbs-tape"
version = "0.1.0"
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2018"
description = "LTO tage support"
[dependencies]
lazy_static = "1.4"
libc = "0.2"
anyhow = "1.0"
thiserror = "1.0"
endian_trait = { version = "0.6", features = ["arrays"] }
nix = "0.19.1"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
bitflags = "1.2.1"
regex = "1.2"
udev = ">= 0.3, <0.5"
proxmox = { version = "0.13.3", default-features = false, features = [] }
pbs-api-types = { path = "../pbs-api-types" }
pbs-tools = { path = "../pbs-tools" }
pbs-config = { path = "../pbs-config" }


@@ -1,334 +0,0 @@
use std::collections::HashSet;
use anyhow::{bail, Error};
use bitflags::bitflags;
use endian_trait::Endian;
use serde::{Serialize, Deserialize};
use serde_json::Value;
use proxmox::tools::Uuid;
use proxmox::api::schema::parse_property_string;
use pbs_api_types::{ScsiTapeChanger, SLOT_ARRAY_SCHEMA};
pub mod linux_list_drives;
pub mod sgutils2;
mod blocked_reader;
pub use blocked_reader::BlockedReader;
mod blocked_writer;
pub use blocked_writer::BlockedWriter;
mod tape_write;
pub use tape_write::*;
mod tape_read;
pub use tape_read::*;
mod emulate_tape_reader;
pub use emulate_tape_reader::EmulateTapeReader;
mod emulate_tape_writer;
pub use emulate_tape_writer::EmulateTapeWriter;
pub mod sg_tape;
pub mod sg_pt_changer;
/// We use 256KB blocksize (always)
pub const PROXMOX_TAPE_BLOCK_SIZE: usize = 256*1024;
// openssl::sha::sha256(b"Proxmox Tape Block Header v1.0")[0..8]
pub const PROXMOX_TAPE_BLOCK_HEADER_MAGIC_1_0: [u8; 8] = [220, 189, 175, 202, 235, 160, 165, 40];
// openssl::sha::sha256(b"Proxmox Backup Content Header v1.0")[0..8];
pub const PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0: [u8; 8] = [99, 238, 20, 159, 205, 242, 155, 12];
// openssl::sha::sha256(b"Proxmox Backup Tape Label v1.0")[0..8];
pub const PROXMOX_BACKUP_MEDIA_LABEL_MAGIC_1_0: [u8; 8] = [42, 5, 191, 60, 176, 48, 170, 57];
// openssl::sha::sha256(b"Proxmox Backup MediaSet Label v1.0")
pub const PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0: [u8; 8] = [8, 96, 99, 249, 47, 151, 83, 216];
/// Tape Block Header with data payload
///
/// All tape files are written as sequence of blocks.
///
/// Note: this struct is large, so never put it on the stack; we use
/// an unsized type to avoid that.
///
/// Tape data blocks are always read/written with a fixed size
/// (`PROXMOX_TAPE_BLOCK_SIZE`). But they may contain less data, so the
/// header has an additional size field. For streams of blocks, there
/// is a sequence number (`seq_nr`) which may be used for additional
/// error checking.
#[repr(C,packed)]
pub struct BlockHeader {
/// fixed value `PROXMOX_TAPE_BLOCK_HEADER_MAGIC_1_0`
pub magic: [u8; 8],
pub flags: BlockHeaderFlags,
/// size as 3 bytes unsigned, little endian
pub size: [u8; 3],
/// block sequence number
pub seq_nr: u32,
pub payload: [u8],
}
bitflags! {
/// Header flags (e.g. `END_OF_STREAM` or `INCOMPLETE`)
pub struct BlockHeaderFlags: u8 {
/// Marks the last block in a stream.
const END_OF_STREAM = 0b00000001;
/// Mark multivolume streams (when set in the last block)
const INCOMPLETE = 0b00000010;
}
}
#[derive(Endian, Copy, Clone, Debug)]
#[repr(C,packed)]
/// Media Content Header
///
/// All tape files start with this header. The header may contain some
/// informational data indicated by `size`.
///
/// `| MediaContentHeader | header data (size) | stream data |`
///
/// Note: The stream data following may be of any size.
pub struct MediaContentHeader {
/// fixed value `PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0`
pub magic: [u8; 8],
/// magic number for the content following
pub content_magic: [u8; 8],
/// unique ID to identify this data stream
pub uuid: [u8; 16],
/// stream creation time
pub ctime: i64,
/// Size of header data
pub size: u32,
/// Part number for multipart archives.
pub part_number: u8,
/// Reserved for future use
pub reserved_0: u8,
/// Reserved for future use
pub reserved_1: u8,
/// Reserved for future use
pub reserved_2: u8,
}
impl MediaContentHeader {
/// Create a new instance with autogenerated Uuid
pub fn new(content_magic: [u8; 8], size: u32) -> Self {
let uuid = *proxmox::tools::uuid::Uuid::generate()
.into_inner();
Self {
magic: PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0,
content_magic,
uuid,
ctime: proxmox::tools::time::epoch_i64(),
size,
part_number: 0,
reserved_0: 0,
reserved_1: 0,
reserved_2: 0,
}
}
/// Helper to check magic numbers and size constraints
pub fn check(&self, content_magic: [u8; 8], min_size: u32, max_size: u32) -> Result<(), Error> {
if self.magic != PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0 {
bail!("MediaContentHeader: wrong magic");
}
if self.content_magic != content_magic {
bail!("MediaContentHeader: wrong content magic");
}
if self.size < min_size || self.size > max_size {
bail!("MediaContentHeader: got unexpected size");
}
Ok(())
}
/// Returns the content Uuid
pub fn content_uuid(&self) -> Uuid {
Uuid::from(self.uuid)
}
}
impl BlockHeader {
pub const SIZE: usize = PROXMOX_TAPE_BLOCK_SIZE;
/// Allocates a new instance on the heap
pub fn new() -> Box<Self> {
use std::alloc::{alloc_zeroed, Layout};
// align to PAGESIZE, so that we can use it with SG_IO
let page_size = unsafe { libc::sysconf(libc::_SC_PAGESIZE) } as usize;
let mut buffer = unsafe {
let ptr = alloc_zeroed(
Layout::from_size_align(Self::SIZE, page_size)
.unwrap(),
);
Box::from_raw(
std::slice::from_raw_parts_mut(ptr, Self::SIZE - 16)
as *mut [u8] as *mut Self
)
};
buffer.magic = PROXMOX_TAPE_BLOCK_HEADER_MAGIC_1_0;
buffer
}
/// Set the `size` field
pub fn set_size(&mut self, size: usize) {
let size = size.to_le_bytes();
self.size.copy_from_slice(&size[..3]);
}
/// Returns the `size` field
pub fn size(&self) -> usize {
(self.size[0] as usize) + ((self.size[1] as usize)<<8) + ((self.size[2] as usize)<<16)
}
/// Set the `seq_nr` field
pub fn set_seq_nr(&mut self, seq_nr: u32) {
self.seq_nr = seq_nr.to_le();
}
/// Returns the `seq_nr` field
pub fn seq_nr(&self) -> u32 {
u32::from_le(self.seq_nr)
}
}
/// Changer element status.
///
/// Drives and slots may be `Empty`, or contain some media, either
/// with known volume tag `VolumeTag(String)`, or without (`Full`).
#[derive(Serialize, Deserialize, Debug)]
pub enum ElementStatus {
Empty,
Full,
VolumeTag(String),
}
/// Changer drive status.
#[derive(Serialize, Deserialize)]
pub struct DriveStatus {
/// The slot the element was loaded from (if known).
pub loaded_slot: Option<u64>,
/// The status.
pub status: ElementStatus,
/// Drive Identifier (Serial number)
pub drive_serial_number: Option<String>,
/// Drive Vendor
pub vendor: Option<String>,
/// Drive Model
pub model: Option<String>,
/// Element Address
pub element_address: u16,
}
/// Storage element status.
#[derive(Serialize, Deserialize)]
pub struct StorageElementStatus {
/// Flag for Import/Export slots
pub import_export: bool,
/// The status.
pub status: ElementStatus,
/// Element Address
pub element_address: u16,
}
/// Transport element status.
#[derive(Serialize, Deserialize)]
pub struct TransportElementStatus {
/// The status.
pub status: ElementStatus,
/// Element Address
pub element_address: u16,
}
/// Changer status - show drive/slot usage
#[derive(Serialize, Deserialize)]
pub struct MtxStatus {
/// List of known drives
pub drives: Vec<DriveStatus>,
/// List of known storage slots
pub slots: Vec<StorageElementStatus>,
/// Transport elements
///
/// Note: Some libraries do not report transport elements.
pub transports: Vec<TransportElementStatus>,
}
impl MtxStatus {
pub fn slot_address(&self, slot: u64) -> Result<u16, Error> {
if slot == 0 {
bail!("invalid slot number '{}' (slots numbers starts at 1)", slot);
}
if slot > (self.slots.len() as u64) {
bail!("invalid slot number '{}' (max {} slots)", slot, self.slots.len());
}
Ok(self.slots[(slot -1) as usize].element_address)
}
pub fn drive_address(&self, drivenum: u64) -> Result<u16, Error> {
if drivenum >= (self.drives.len() as u64) {
bail!("invalid drive number '{}'", drivenum);
}
Ok(self.drives[drivenum as usize].element_address)
}
pub fn transport_address(&self) -> u16 {
// simply use first transport
// (are there changers exposing more than one?)
// defaults to 0 for changers that do not report transports
self
.transports
.get(0)
.map(|t| t.element_address)
.unwrap_or(0u16)
}
pub fn find_free_slot(&self, import_export: bool) -> Option<u64> {
let mut free_slot = None;
for (i, slot_info) in self.slots.iter().enumerate() {
if slot_info.import_export != import_export {
continue; // skip slots of wrong type
}
if let ElementStatus::Empty = slot_info.status {
free_slot = Some((i+1) as u64);
break;
}
}
free_slot
}
pub fn mark_import_export_slots(&mut self, config: &ScsiTapeChanger) -> Result<(), Error>{
let mut export_slots: HashSet<u64> = HashSet::new();
if let Some(slots) = &config.export_slots {
let slots: Value = parse_property_string(&slots, &SLOT_ARRAY_SCHEMA)?;
export_slots = slots
.as_array()
.unwrap()
.iter()
.filter_map(|v| v.as_u64())
.collect();
}
for (i, entry) in self.slots.iter_mut().enumerate() {
let slot = i as u64 + 1;
if export_slots.contains(&slot) {
entry.import_export = true; // mark as IMPORT/EXPORT
}
}
Ok(())
}
}
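The 3-byte `size` field caps the encodable payload size below 2^24 bytes; a small sketch of the little-endian round trip through `set_size`/`size` (values arbitrary, crate path assumed from the Cargo.toml above):

use pbs_tape::BlockHeader;

fn example() {
    let mut header = BlockHeader::new(); // heap allocated, page-aligned for SG_IO
    header.set_size(0x01_2345);
    let stored = header.size; // copy the packed field before comparing
    assert_eq!(stored, [0x45, 0x23, 0x01]); // little endian, low byte first
    assert_eq!(header.size(), 0x01_2345);
    header.set_seq_nr(7);
    assert_eq!(header.seq_nr(), 7);
}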


@@ -1,39 +0,0 @@
[package]
name = "pbs-tools"
version = "0.1.0"
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2018"
description = "common tools used throughout pbs"
# This must not depend on any subcrates more closely related to pbs itself.
[dependencies]
anyhow = "1.0"
base64 = "0.12"
bytes = "1.0"
crc32fast = "1"
endian_trait = { version = "0.6", features = ["arrays"] }
flate2 = "1.0"
foreign-types = "0.3"
futures = "0.3"
lazy_static = "1.4"
libc = "0.2"
nix = "0.19.1"
nom = "5.1"
openssl = "0.10"
percent-encoding = "2.1"
regex = "1.2"
serde = "1.0"
serde_json = "1.0"
# rt-multi-thread is required for block_in_place
tokio = { version = "1.6", features = [ "fs", "io-util", "rt", "rt-multi-thread", "sync" ] }
url = "2.1"
walkdir = "2"
zstd = { version = "0.6", features = [ "bindgen" ] }
proxmox = { version = "0.13.3", default-features = false, features = [ "tokio" ] }
pbs-buildcfg = { path = "../pbs-buildcfg" }
pbs-runtime = { path = "../pbs-runtime" }
[dev-dependencies]
tokio = { version = "1.6", features = [ "macros" ] }


@@ -1,26 +0,0 @@
//! Helpers for authentication used by both client and server.
use anyhow::Error;
use lazy_static::lazy_static;
use openssl::pkey::{PKey, Private};
use openssl::rsa::Rsa;
use proxmox::tools::fs::file_get_contents;
use pbs_buildcfg::configdir;
fn load_private_auth_key() -> Result<PKey<Private>, Error> {
let pem = file_get_contents(configdir!("/authkey.key"))?;
let rsa = Rsa::private_key_from_pem(&pem)?;
let key = PKey::from_rsa(rsa)?;
Ok(key)
}
pub fn private_auth_key() -> &'static PKey<Private> {
lazy_static! {
static ref KEY: PKey<Private> = load_private_auth_key().unwrap();
}
&KEY
}


@@ -1,13 +0,0 @@
use std::fs::File;
use std::io::{self, stdout, Write};
use std::path::Path;
/// Returns either a new file, if a path is given, or stdout, if no path is given.
pub fn outfile_or_stdout<P: AsRef<Path>>(path: Option<P>) -> io::Result<Box<dyn Write>> {
if let Some(path) = path {
let f = File::create(path)?;
Ok(Box::new(f) as Box<dyn Write>)
} else {
Ok(Box::new(stdout()) as Box<dyn Write>)
}
}


@@ -1,64 +0,0 @@
use anyhow::{bail, format_err, Error};
/// Helper to check result from std::process::Command output
///
/// The exit_code_check() function should return true if the exit code
/// is considered successful.
pub fn command_output(
output: std::process::Output,
exit_code_check: Option<fn(i32) -> bool>,
) -> Result<Vec<u8>, Error> {
if !output.status.success() {
match output.status.code() {
Some(code) => {
let is_ok = match exit_code_check {
Some(check_fn) => check_fn(code),
None => code == 0,
};
if !is_ok {
let msg = String::from_utf8(output.stderr)
.map(|m| {
if m.is_empty() {
String::from("no error message")
} else {
m
}
})
.unwrap_or_else(|_| String::from("non utf8 error message (suppressed)"));
bail!("status code: {} - {}", code, msg);
}
}
None => bail!("terminated by signal"),
}
}
Ok(output.stdout)
}
/// Helper to check result from std::process::Command output, returns String.
///
/// The exit_code_check() function should return true if the exit code
/// is considered successful.
pub fn command_output_as_string(
output: std::process::Output,
exit_code_check: Option<fn(i32) -> bool>,
) -> Result<String, Error> {
let output = command_output(output, exit_code_check)?;
let output = String::from_utf8(output)?;
Ok(output)
}
pub fn run_command(
mut command: std::process::Command,
exit_code_check: Option<fn(i32) -> bool>,
) -> Result<String, Error> {
let output = command
.output()
.map_err(|err| format_err!("failed to execute {:?} - {}", command, err))?;
let output = command_output_as_string(output, exit_code_check)
.map_err(|err| format_err!("command {:?} failed - {}", command, err))?;
Ok(output)
}
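As a sketch, running a command where exit code 1 also counts as success (command and arguments are hypothetical; `run_command` is re-exported at the crate root, see the lib.rs below):

use anyhow::Error;
use pbs_tools::run_command;

fn exit_ok(code: i32) -> bool {
    code == 0 || code == 1 // grep-style tools use 1 for "no match"
}

fn example() -> Result<String, Error> {
    let mut command = std::process::Command::new("grep");
    command.args(&["-c", "needle", "/etc/hostname"]);
    run_command(command, Some(exit_ok))
}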


@@ -1,68 +0,0 @@
use std::ffi::CStr;
use anyhow::{bail, Error};
// from libcrypt1, 'lib/crypt.h.in'
const CRYPT_OUTPUT_SIZE: usize = 384;
const CRYPT_MAX_PASSPHRASE_SIZE: usize = 512;
const CRYPT_DATA_RESERVED_SIZE: usize = 767;
const CRYPT_DATA_INTERNAL_SIZE: usize = 30720;
#[repr(C)]
struct crypt_data {
output: [libc::c_char; CRYPT_OUTPUT_SIZE],
setting: [libc::c_char; CRYPT_OUTPUT_SIZE],
input: [libc::c_char; CRYPT_MAX_PASSPHRASE_SIZE],
reserved: [libc::c_char; CRYPT_DATA_RESERVED_SIZE],
initialized: libc::c_char,
internal: [libc::c_char; CRYPT_DATA_INTERNAL_SIZE],
}
pub fn crypt(password: &[u8], salt: &[u8]) -> Result<String, Error> {
#[link(name = "crypt")]
extern "C" {
#[link_name = "crypt_r"]
fn __crypt_r(
key: *const libc::c_char,
salt: *const libc::c_char,
data: *mut crypt_data,
) -> *mut libc::c_char;
}
let mut data: crypt_data = unsafe { std::mem::zeroed() };
for (i, c) in salt.iter().take(data.setting.len() - 1).enumerate() {
data.setting[i] = *c as libc::c_char;
}
for (i, c) in password.iter().take(data.input.len() - 1).enumerate() {
data.input[i] = *c as libc::c_char;
}
let res = unsafe {
let status = __crypt_r(
&data.input as *const _,
&data.setting as *const _,
&mut data as *mut _,
);
if status.is_null() {
bail!("internal error: crypt_r returned null pointer");
}
CStr::from_ptr(&data.output as *const _)
};
Ok(String::from(res.to_str()?))
}
pub fn encrypt_pw(password: &str) -> Result<String, Error> {
let salt = proxmox::sys::linux::random_data(8)?;
let salt = format!("$5${}$", base64::encode_config(&salt, base64::CRYPT));
crypt(password.as_bytes(), salt.as_bytes())
}
pub fn verify_crypt_pw(password: &str, enc_password: &str) -> Result<(), Error> {
let verify = crypt(password.as_bytes(), enc_password.as_bytes())?;
if verify != enc_password {
bail!("invalid credentials");
}
Ok(())
}
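Usage is a simple round trip; a sketch (the hash string depends on the random salt, so only verification is asserted):

use anyhow::Error;
use pbs_tools::crypt::{encrypt_pw, verify_crypt_pw};

fn example() -> Result<(), Error> {
    // produces something like "$5$<salt>$<hash>" (scheme id 5 = sha256crypt)
    let enc = encrypt_pw("correct horse")?;
    verify_crypt_pw("correct horse", &enc)?;
    assert!(verify_crypt_pw("wrong password", &enc).is_err());
    Ok(())
}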


@@ -1,115 +0,0 @@
//! Wrappers for OpenSSL crypto functions
//!
//! We use this to encrypt and decrypt data chunks. Cipher is
//! AES_256_GCM, which is fast and provides authenticated encryption.
//!
//! See the Wikipedia article on [Authenticated
//! encryption](https://en.wikipedia.org/wiki/Authenticated_encryption)
//! for a short introduction.
use anyhow::Error;
use openssl::hash::MessageDigest;
use openssl::pkcs5::pbkdf2_hmac;
use openssl::symm::{Cipher, Crypter, Mode};
// openssl::sha::sha256(b"Proxmox Backup Encryption Key Fingerprint")
/// This constant is used to compute fingerprints.
const FINGERPRINT_INPUT: [u8; 32] = [
110, 208, 239, 119, 71, 31, 255, 77,
85, 199, 168, 254, 74, 157, 182, 33,
97, 64, 127, 19, 76, 114, 93, 223,
48, 153, 45, 37, 236, 69, 237, 38,
];
/// Encryption Configuration with secret key
///
/// This structure stores the secret key and provides helpers for
/// authenticated encryption.
pub struct CryptConfig {
// the Cipher
cipher: Cipher,
// A secret key used to provide the chunk digest name space.
id_key: [u8; 32],
// Openssl hmac PKey of id_key
id_pkey: openssl::pkey::PKey<openssl::pkey::Private>,
// The private key used by the cipher.
enc_key: [u8; 32],
}
impl CryptConfig {
/// Create a new instance.
///
/// We compute a derived 32 byte key using pbkdf2_hmac. This second
/// key is used in compute_digest.
pub fn new(enc_key: [u8; 32]) -> Result<Self, Error> {
let mut id_key = [0u8; 32];
pbkdf2_hmac(
&enc_key,
b"_id_key",
10,
MessageDigest::sha256(),
&mut id_key)?;
let id_pkey = openssl::pkey::PKey::hmac(&id_key).unwrap();
Ok(Self { id_key, id_pkey, enc_key, cipher: Cipher::aes_256_gcm() })
}
/// Expose Cipher (AES_256_GCM)
pub fn cipher(&self) -> &Cipher {
&self.cipher
}
/// Expose encryption key
pub fn enc_key(&self) -> &[u8; 32] {
&self.enc_key
}
/// Compute a chunk digest using a secret name space.
///
/// Computes an SHA256 checksum over some secret data (derived
/// from the secret key) and the provided data. This ensures that
/// chunk digest values do not clash with values computed for
/// other secret keys.
pub fn compute_digest(&self, data: &[u8]) -> [u8; 32] {
let mut hasher = openssl::sha::Sha256::new();
hasher.update(data);
hasher.update(&self.id_key); // at the end, to avoid length extensions attacks
hasher.finish()
}
/// Returns an openssl Signer using SHA256
pub fn data_signer(&self) -> openssl::sign::Signer {
openssl::sign::Signer::new(MessageDigest::sha256(), &self.id_pkey).unwrap()
}
/// Compute authentication tag (hmac/sha256)
///
/// Computes an SHA256 HMAC using some secret data (derived
/// from the secret key) and the provided data.
pub fn compute_auth_tag(&self, data: &[u8]) -> [u8; 32] {
let mut signer = self.data_signer();
signer.update(data).unwrap();
let mut tag = [0u8; 32];
signer.sign(&mut tag).unwrap();
tag
}
/// Computes a fingerprint for the secret key.
///
/// This computes a digest using the derived key (id_key) in order
/// to hinder brute force attacks.
pub fn fingerprint(&self) -> [u8; 32] {
self.compute_digest(&FINGERPRINT_INPUT)
}
/// Returns an openssl Crypter using AES_256_GCM.
pub fn data_crypter(&self, iv: &[u8; 16], mode: Mode) -> Result<Crypter, Error> {
let mut crypter = openssl::symm::Crypter::new(self.cipher, mode, &self.enc_key, Some(iv))?;
crypter.aad_update(b"")?; // no additional authenticated data
Ok(crypter)
}
}
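A short sketch of the digest name-spacing this provides (fixed keys for illustration only; real keys come from the key file):

use anyhow::Error;
use pbs_tools::crypt_config::CryptConfig;

fn example(data: &[u8]) -> Result<(), Error> {
    let config_a = CryptConfig::new([0u8; 32])?;
    let config_b = CryptConfig::new([1u8; 32])?;
    // the same data hashes to different digests under different keys, so
    // chunk digests computed for differently keyed backups cannot clash
    assert_ne!(config_a.compute_digest(data), config_b.compute_digest(data));
    // the fingerprint identifies a key without revealing it
    let _fp: [u8; 32] = config_a.fingerprint();
    Ok(())
}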


@@ -1,14 +0,0 @@
//! Raw file descriptor related utilities.
use std::os::unix::io::RawFd;
use anyhow::Error;
use nix::fcntl::{fcntl, FdFlag, F_GETFD, F_SETFD};
/// Change the `O_CLOEXEC` flag of an existing file descriptor.
pub fn fd_change_cloexec(fd: RawFd, on: bool) -> Result<(), Error> {
let mut flags = unsafe { FdFlag::from_bits_unchecked(fcntl(fd, F_GETFD)?) };
flags.set(FdFlag::FD_CLOEXEC, on);
fcntl(fd, F_SETFD(flags))?;
Ok(())
}


@@ -1,22 +0,0 @@
//! I/O utilities.
use anyhow::Error;
use proxmox::tools::fd::Fd;
/// The `BufferedRead` trait provides a single function
/// `buffered_read`. It returns a reference to an internal buffer. The
/// purpose of this trait is to avoid unnecessary data copies.
pub trait BufferedRead {
/// This function tries to fill the internal buffer, then
/// returns a reference to the available data. It returns an empty
/// buffer if `offset` points to the end of the file.
fn buffered_read(&mut self, offset: u64) -> Result<&[u8], Error>;
}
/// Safe wrapper for `nix::unistd::pipe2` defaulting to `O_CLOEXEC` and guarding the file
/// descriptors.
pub fn pipe() -> Result<(Fd, Fd), Error> {
let (pin, pout) = nix::unistd::pipe2(nix::fcntl::OFlag::O_CLOEXEC)?;
Ok((Fd(pin), Fd(pout)))
}


@@ -1,134 +0,0 @@
use anyhow::{bail, format_err, Error};
use serde_json::Value;
// Generate canonical json
pub fn to_canonical_json(value: &Value) -> Result<Vec<u8>, Error> {
let mut data = Vec::new();
write_canonical_json(value, &mut data)?;
Ok(data)
}
pub fn write_canonical_json(value: &Value, output: &mut Vec<u8>) -> Result<(), Error> {
match value {
Value::Null => bail!("got unexpected null value"),
Value::String(_) | Value::Number(_) | Value::Bool(_) => {
serde_json::to_writer(output, &value)?;
}
Value::Array(list) => {
output.push(b'[');
let mut iter = list.iter();
if let Some(item) = iter.next() {
write_canonical_json(item, output)?;
for item in iter {
output.push(b',');
write_canonical_json(item, output)?;
}
}
output.push(b']');
}
Value::Object(map) => {
output.push(b'{');
let mut keys: Vec<&str> = map.keys().map(String::as_str).collect();
keys.sort_unstable();
let mut iter = keys.into_iter();
if let Some(key) = iter.next() {
serde_json::to_writer(&mut *output, &key)?;
output.push(b':');
write_canonical_json(&map[key], output)?;
for key in iter {
output.push(b',');
serde_json::to_writer(&mut *output, &key)?;
output.push(b':');
write_canonical_json(&map[key], output)?;
}
}
output.push(b'}');
}
}
Ok(())
}
pub fn json_object_to_query(data: Value) -> Result<String, Error> {
let mut query = url::form_urlencoded::Serializer::new(String::new());
let object = data.as_object().ok_or_else(|| {
format_err!("json_object_to_query: got wrong data type (expected object).")
})?;
for (key, value) in object {
match value {
Value::Bool(b) => {
query.append_pair(key, &b.to_string());
}
Value::Number(n) => {
query.append_pair(key, &n.to_string());
}
Value::String(s) => {
query.append_pair(key, &s);
}
Value::Array(arr) => {
for element in arr {
match element {
Value::Bool(b) => {
query.append_pair(key, &b.to_string());
}
Value::Number(n) => {
query.append_pair(key, &n.to_string());
}
Value::String(s) => {
query.append_pair(key, &s);
}
_ => bail!(
"json_object_to_query: unable to handle complex array data types."
),
}
}
}
_ => bail!("json_object_to_query: unable to handle complex data types."),
}
}
Ok(query.finish())
}
pub fn required_string_param<'a>(param: &'a Value, name: &str) -> Result<&'a str, Error> {
match param[name].as_str() {
Some(s) => Ok(s),
None => bail!("missing parameter '{}'", name),
}
}
pub fn required_string_property<'a>(param: &'a Value, name: &str) -> Result<&'a str, Error> {
match param[name].as_str() {
Some(s) => Ok(s),
None => bail!("missing property '{}'", name),
}
}
pub fn required_integer_param(param: &Value, name: &str) -> Result<i64, Error> {
match param[name].as_i64() {
Some(s) => Ok(s),
None => bail!("missing parameter '{}'", name),
}
}
pub fn required_integer_property(param: &Value, name: &str) -> Result<i64, Error> {
match param[name].as_i64() {
Some(s) => Ok(s),
None => bail!("missing property '{}'", name),
}
}
pub fn required_array_param<'a>(param: &'a Value, name: &str) -> Result<&'a [Value], Error> {
match param[name].as_array() {
Some(s) => Ok(&s),
None => bail!("missing parameter '{}'", name),
}
}
pub fn required_array_property<'a>(param: &'a Value, name: &str) -> Result<&'a [Value], Error> {
match param[name].as_array() {
Some(s) => Ok(&s),
None => bail!("missing property '{}'", name),
}
}
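A sketch of the ordering guarantee provided by `to_canonical_json` above, which is what makes the output stable enough to sign:

use anyhow::Error;
use pbs_tools::json::to_canonical_json;
use serde_json::json;

fn example() -> Result<(), Error> {
    let value = json!({"b": 1, "a": [true, "x"]});
    // object keys come out sorted, with no insignificant whitespace
    assert_eq!(to_canonical_json(&value)?, br#"{"a":[true,"x"],"b":1}"#.to_vec());
    Ok(())
}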


@@ -1,35 +0,0 @@
pub mod acl;
pub mod auth;
pub mod blocking;
pub mod borrow;
pub mod broadcast_future;
pub mod cert;
pub mod cli;
pub mod compression;
pub mod crypt;
pub mod crypt_config;
pub mod format;
pub mod fd;
pub mod fs;
pub mod io;
pub mod json;
pub mod logrotate;
pub mod lru_cache;
pub mod nom;
pub mod ops;
pub mod percent_encoding;
pub mod process_locker;
pub mod sha;
pub mod str;
pub mod stream;
pub mod sync;
pub mod sys;
pub mod ticket;
pub mod tokio;
pub mod xattr;
pub mod zip;
pub mod async_lru_cache;
mod command;
pub use command::{command_output, command_output_as_string, run_command};


@@ -1,12 +0,0 @@
//! std::ops extensions
/// Modeled after the nightly `std::ops::ControlFlow`.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum ControlFlow<B, C = ()> {
Continue(C),
Break(B),
}
impl<B> ControlFlow<B> {
pub const CONTINUE: ControlFlow<B, ()> = ControlFlow::Continue(());
}


@@ -1,22 +0,0 @@
use percent_encoding::{utf8_percent_encode, AsciiSet};
/// This used to be: `SIMPLE_ENCODE_SET` plus space, `"`, `#`, `<`, `>`, backtick, `?`, `{`, `}`
pub const DEFAULT_ENCODE_SET: &AsciiSet = &percent_encoding::CONTROLS // 0x00..0x1f and 0x7f
// The SIMPLE_ENCODE_SET adds space and anything above 0x7e (0x7f is already included above)
.add(0x20)
.add(0x7f)
// the DEFAULT_ENCODE_SET added:
.add(b' ')
.add(b'"')
.add(b'#')
.add(b'<')
.add(b'>')
.add(b'`')
.add(b'?')
.add(b'{')
.add(b'}');
/// percent encode a url component
pub fn percent_encode_component(comp: &str) -> String {
utf8_percent_encode(comp, percent_encoding::NON_ALPHANUMERIC).to_string()
}


@@ -1,29 +0,0 @@
//! SHA helpers.
use std::io::Read;
use anyhow::Error;
/// Calculate the sha256sum from a readable object.
pub fn sha256(file: &mut dyn Read) -> Result<([u8; 32], u64), Error> {
let mut hasher = openssl::sha::Sha256::new();
let mut buffer = proxmox::tools::vec::undefined(256 * 1024);
let mut size: u64 = 0;
loop {
let count = match file.read(&mut buffer) {
Ok(0) => break,
Ok(count) => count,
Err(ref err) if err.kind() == std::io::ErrorKind::Interrupted => {
continue;
}
Err(err) => return Err(err.into()),
};
size += count as u64;
hasher.update(&buffer[..count]);
}
let csum = hasher.finish();
Ok((csum, size))
}
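For example, hashing a file while collecting its size in one pass (path hypothetical; `digest_to_hex` assumed from the proxmox crate used throughout):

use anyhow::Error;
use pbs_tools::sha::sha256;

fn example() -> Result<(), Error> {
    let mut file = std::fs::File::open("/etc/hostname")?;
    let (digest, size) = sha256(&mut file)?;
    println!("{} bytes, sha256 {}", size, proxmox::tools::digest_to_hex(&digest));
    Ok(())
}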


@@ -1,27 +0,0 @@
//! String related utilities.
use std::borrow::Borrow;
pub fn join<S: Borrow<str>>(data: &[S], sep: char) -> String {
let mut list = String::new();
for item in data {
if !list.is_empty() {
list.push(sep);
}
list.push_str(item.borrow());
}
list
}
pub fn strip_ascii_whitespace(line: &[u8]) -> &[u8] {
let line = match line.iter().position(|&b| !b.is_ascii_whitespace()) {
Some(n) => &line[n..],
None => return &[],
};
match line.iter().rev().position(|&b| !b.is_ascii_whitespace()) {
Some(n) => &line[..(line.len() - n)],
None => &[],
}
}
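Both helpers are straightforward to use; for instance:

use pbs_tools::str::{join, strip_ascii_whitespace};

fn example() {
    assert_eq!(join(&["a", "b", "c"], ','), "a,b,c");
    assert_eq!(strip_ascii_whitespace(b"  hello \n"), &b"hello"[..]);
}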


@@ -1,229 +0,0 @@
//! Wrappers between async readers and streams.
use std::io::{self, Read};
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
use anyhow::{Error, Result};
use tokio::io::{AsyncRead, AsyncWrite, ReadBuf};
use tokio::sync::mpsc::Sender;
use futures::ready;
use futures::future::FutureExt;
use futures::stream::Stream;
use proxmox::io_format_err;
use proxmox::tools::byte_buffer::ByteBuffer;
use proxmox::sys::error::io_err_other;
use pbs_runtime::block_in_place;
/// Wrapper struct to convert a Reader into a Stream
pub struct WrappedReaderStream<R: Read + Unpin> {
reader: R,
buffer: Vec<u8>,
}
impl <R: Read + Unpin> WrappedReaderStream<R> {
pub fn new(reader: R) -> Self {
let mut buffer = Vec::with_capacity(64*1024);
unsafe { buffer.set_len(buffer.capacity()); }
Self { reader, buffer }
}
}
impl<R: Read + Unpin> Stream for WrappedReaderStream<R> {
type Item = Result<Vec<u8>, io::Error>;
fn poll_next(self: Pin<&mut Self>, _cx: &mut Context) -> Poll<Option<Self::Item>> {
let this = self.get_mut();
match block_in_place(|| this.reader.read(&mut this.buffer)) {
Ok(n) => {
if n == 0 {
// EOF
Poll::Ready(None)
} else {
Poll::Ready(Some(Ok(this.buffer[..n].to_vec())))
}
}
Err(err) => Poll::Ready(Some(Err(err))),
}
}
}
/// Wrapper struct to convert an AsyncReader into a Stream
pub struct AsyncReaderStream<R: AsyncRead + Unpin> {
reader: R,
buffer: Vec<u8>,
}
impl <R: AsyncRead + Unpin> AsyncReaderStream<R> {
pub fn new(reader: R) -> Self {
let mut buffer = Vec::with_capacity(64*1024);
unsafe { buffer.set_len(buffer.capacity()); }
Self { reader, buffer }
}
pub fn with_buffer_size(reader: R, buffer_size: usize) -> Self {
let mut buffer = Vec::with_capacity(buffer_size);
unsafe { buffer.set_len(buffer.capacity()); }
Self { reader, buffer }
}
}
impl<R: AsyncRead + Unpin> Stream for AsyncReaderStream<R> {
type Item = Result<Vec<u8>, io::Error>;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {
let this = self.get_mut();
let mut read_buf = ReadBuf::new(&mut this.buffer);
match ready!(Pin::new(&mut this.reader).poll_read(cx, &mut read_buf)) {
Ok(()) => {
let n = read_buf.filled().len();
if n == 0 {
// EOF
Poll::Ready(None)
} else {
Poll::Ready(Some(Ok(this.buffer[..n].to_vec())))
}
}
Err(err) => Poll::Ready(Some(Err(err))),
}
}
}
#[cfg(test)]
mod test {
use std::io;
use anyhow::Error;
use futures::stream::TryStreamExt;
#[test]
fn test_wrapped_stream_reader() -> Result<(), Error> {
pbs_runtime::main(async {
run_wrapped_stream_reader_test().await
})
}
struct DummyReader(usize);
impl io::Read for DummyReader {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
self.0 += 1;
if self.0 >= 10 {
return Ok(0);
}
unsafe {
std::ptr::write_bytes(buf.as_mut_ptr(), 0, buf.len());
}
Ok(buf.len())
}
}
async fn run_wrapped_stream_reader_test() -> Result<(), Error> {
let mut reader = super::WrappedReaderStream::new(DummyReader(0));
while let Some(_data) = reader.try_next().await? {
// just waiting
}
Ok(())
}
}
/// Wrapper around tokio::sync::mpsc::Sender, which implements AsyncWrite
pub struct AsyncChannelWriter {
sender: Option<Sender<Result<Vec<u8>, Error>>>,
buf: ByteBuffer,
state: WriterState,
}
type SendResult = io::Result<Sender<Result<Vec<u8>>>>;
enum WriterState {
Ready,
Sending(Pin<Box<dyn Future<Output = SendResult> + Send + 'static>>),
}
impl AsyncChannelWriter {
pub fn new(sender: Sender<Result<Vec<u8>, Error>>, buf_size: usize) -> Self {
Self {
sender: Some(sender),
buf: ByteBuffer::with_capacity(buf_size),
state: WriterState::Ready,
}
}
fn poll_write_impl(
&mut self,
cx: &mut Context,
buf: &[u8],
flush: bool,
) -> Poll<io::Result<usize>> {
loop {
match &mut self.state {
WriterState::Ready => {
if flush {
if self.buf.is_empty() {
return Poll::Ready(Ok(0));
}
} else {
let free_size = self.buf.free_size();
if free_size > buf.len() || self.buf.is_empty() {
let count = free_size.min(buf.len());
self.buf.get_free_mut_slice()[..count].copy_from_slice(&buf[..count]);
self.buf.add_size(count);
return Poll::Ready(Ok(count));
}
}
let sender = match self.sender.take() {
Some(sender) => sender,
None => return Poll::Ready(Err(io_err_other("no sender"))),
};
let data = self.buf.remove_data(self.buf.len()).to_vec();
let future = async move {
sender
.send(Ok(data))
.await
.map(move |_| sender)
.map_err(|err| io_format_err!("could not send: {}", err))
};
self.state = WriterState::Sending(future.boxed());
}
WriterState::Sending(ref mut future) => match ready!(future.as_mut().poll(cx)) {
Ok(sender) => {
self.sender = Some(sender);
self.state = WriterState::Ready;
}
Err(err) => return Poll::Ready(Err(err)),
},
}
}
}
}
impl AsyncWrite for AsyncChannelWriter {
fn poll_write(self: Pin<&mut Self>, cx: &mut Context, buf: &[u8]) -> Poll<io::Result<usize>> {
let this = self.get_mut();
this.poll_write_impl(cx, buf, false)
}
fn poll_flush(self: Pin<&mut Self>, cx: &mut Context) -> Poll<io::Result<()>> {
let this = self.get_mut();
match ready!(this.poll_write_impl(cx, &[], true)) {
Ok(_) => Poll::Ready(Ok(())),
Err(err) => Poll::Ready(Err(err)),
}
}
fn poll_shutdown(self: Pin<&mut Self>, cx: &mut Context) -> Poll<io::Result<()>> {
self.poll_flush(cx)
}
}
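A sketch of pushing data through the channel from the `AsyncWrite` side to an async consumer (buffer size arbitrary; must run inside a tokio runtime):

use anyhow::Error;
use tokio::io::AsyncWriteExt;
use pbs_tools::stream::AsyncChannelWriter;

async fn example() -> Result<(), Error> {
    let (sender, mut receiver) = tokio::sync::mpsc::channel(16);
    let mut writer = AsyncChannelWriter::new(sender, 64 * 1024);

    writer.write_all(b"some payload").await?;
    writer.shutdown().await?; // flushes the internal buffer through the channel
    drop(writer); // drops the Sender, so recv() yields None once drained

    while let Some(chunk) = receiver.recv().await {
        println!("received {} bytes", chunk?.len());
    }
    Ok(())
}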


@@ -1,2 +0,0 @@
mod std_channel_writer;
pub use std_channel_writer::StdChannelWriter;


@@ -1,31 +0,0 @@
//! System level helpers.
use nix::unistd::{Gid, Group, Uid, User};
/// Query a user by name, except when built with `#[cfg(test)]`.
///
/// This is to avoid having regression tests query the users of development machines which may
/// not be compatible with PBS or privileged enough.
pub fn query_user(user_name: &str) -> Result<Option<User>, nix::Error> {
if cfg!(test) {
Ok(Some(
User::from_uid(Uid::current())?.expect("current user does not exist"),
))
} else {
User::from_name(user_name)
}
}
/// Query a group by name, except when built with `#[cfg(test)]`.
///
/// This is to avoid having regression tests query the groups of development machines which may
/// not be compatible with PBS or privileged enough.
pub fn query_group(group_name: &str) -> Result<Option<Group>, nix::Error> {
if cfg!(test) {
Ok(Some(
Group::from_gid(Gid::current())?.expect("current group does not exist"),
))
} else {
Group::from_name(group_name)
}
}


@@ -1,332 +0,0 @@
//! Generate and verify Authentication tickets
use std::borrow::Cow;
use std::io;
use std::marker::PhantomData;
use anyhow::{bail, format_err, Error};
use openssl::hash::MessageDigest;
use openssl::pkey::{HasPublic, PKey, Private};
use openssl::sign::{Signer, Verifier};
use percent_encoding::{percent_decode_str, percent_encode, AsciiSet};
pub const TICKET_LIFETIME: i64 = 3600 * 2; // 2 hours
pub const TERM_PREFIX: &str = "PBSTERM";
/// Stringified ticket data must not contain colons...
const TICKET_ASCIISET: &AsciiSet = &percent_encoding::CONTROLS.add(b':');
/// An empty type implementing [`ToString`] and [`FromStr`](std::str::FromStr), used for tickets
/// with no data.
pub struct Empty;
impl ToString for Empty {
fn to_string(&self) -> String {
String::new()
}
}
impl std::str::FromStr for Empty {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Error> {
if !s.is_empty() {
bail!("unexpected ticket data, should be empty");
}
Ok(Empty)
}
}
/// An API ticket consists of a ticket type (prefix), type-dependent data, optional additional
/// authentication data, a timestamp and a signature. We store these values in the form
/// `<prefix>:<stringified data>:<timestamp>::<signature>`, with the timestamp hex-encoded, so a
/// `PBSTERM` ticket with empty data looks like `PBSTERM::61D4F309::<base64 signature>`.
///
/// The signature is made over the string consisting of prefix, data, timestamp and aad joined
/// together by colons. If there is no additional authentication data, it is skipped together
/// with the colon separating it from the timestamp.
pub struct Ticket<T>
where
T: ToString + std::str::FromStr,
{
prefix: Cow<'static, str>,
data: String,
time: i64,
signature: Option<Vec<u8>>,
_type_marker: PhantomData<T>,
}
impl<T> Ticket<T>
where
T: ToString + std::str::FromStr,
<T as std::str::FromStr>::Err: std::fmt::Debug,
{
/// Prepare a new ticket for signing.
pub fn new(prefix: &'static str, data: &T) -> Result<Self, Error> {
Ok(Self {
prefix: Cow::Borrowed(prefix),
data: data.to_string(),
time: proxmox::tools::time::epoch_i64(),
signature: None,
_type_marker: PhantomData,
})
}
/// Get the ticket prefix.
pub fn prefix(&self) -> &str {
&self.prefix
}
/// Get the ticket's time stamp in seconds since the unix epoch.
pub fn time(&self) -> i64 {
self.time
}
/// Get the raw string data contained in the ticket. The `verify` method will parse this
/// in the end, so using this method directly is discouraged as it does not verify the
/// signature.
pub fn raw_data(&self) -> &str {
&self.data
}
/// Serialize the ticket into a writer.
///
/// This only writes a string. We use `io::Write` instead of `fmt::Write` so we can reuse the
/// same function for openssl's `Verifier`, which only implements `io::Write`.
fn write_data(&self, f: &mut dyn io::Write) -> Result<(), Error> {
write!(
f,
"{}:{}:{:08X}",
percent_encode(self.prefix.as_bytes(), &TICKET_ASCIISET),
percent_encode(self.data.as_bytes(), &TICKET_ASCIISET),
self.time,
)
.map_err(Error::from)
}
/// Write additional authentication data to the verifier.
fn write_aad(f: &mut dyn io::Write, aad: Option<&str>) -> Result<(), Error> {
if let Some(aad) = aad {
write!(f, ":{}", percent_encode(aad.as_bytes(), &TICKET_ASCIISET))?;
}
Ok(())
}
/// Change the ticket's time, used mostly for testing.
#[cfg(test)]
fn change_time(&mut self, time: i64) -> &mut Self {
self.time = time;
self
}
/// Sign the ticket.
pub fn sign(&mut self, keypair: &PKey<Private>, aad: Option<&str>) -> Result<String, Error> {
let mut output = Vec::<u8>::new();
let mut signer = Signer::new(MessageDigest::sha256(), &keypair)
.map_err(|err| format_err!("openssl error creating signer for ticket: {}", err))?;
self.write_data(&mut output)
.map_err(|err| format_err!("error creating ticket: {}", err))?;
signer
.update(&output)
.map_err(Error::from)
.and_then(|()| Self::write_aad(&mut signer, aad))
.map_err(|err| format_err!("error signing ticket: {}", err))?;
// See `Self::write_data` for why this is safe
let mut output = unsafe { String::from_utf8_unchecked(output) };
let signature = signer
.sign_to_vec()
.map_err(|err| format_err!("error finishing ticket signature: {}", err))?;
use std::fmt::Write;
write!(
&mut output,
"::{}",
base64::encode_config(&signature, base64::STANDARD_NO_PAD),
)?;
self.signature = Some(signature);
Ok(output)
}
/// `verify` with an additional time frame parameter, not usually required since we always use
/// the same time frame.
pub fn verify_with_time_frame<P: HasPublic>(
&self,
keypair: &PKey<P>,
prefix: &str,
aad: Option<&str>,
time_frame: std::ops::Range<i64>,
) -> Result<T, Error> {
if self.prefix != prefix {
bail!("ticket with invalid prefix");
}
let signature = match self.signature.as_ref() {
Some(sig) => sig,
None => bail!("invalid ticket without signature"),
};
let age = proxmox::tools::time::epoch_i64() - self.time;
if age < time_frame.start {
bail!("invalid ticket - timestamp newer than expected");
}
if age > time_frame.end {
bail!("invalid ticket - expired");
}
let mut verifier = Verifier::new(MessageDigest::sha256(), &keypair)?;
self.write_data(&mut verifier)
.and_then(|()| Self::write_aad(&mut verifier, aad))
.map_err(|err| format_err!("error verifying ticket: {}", err))?;
let is_valid: bool = verifier
.verify(&signature)
.map_err(|err| format_err!("openssl error verifying ticket: {}", err))?;
if !is_valid {
bail!("ticket with invalid signature");
}
self.data
.parse()
.map_err(|err| format_err!("failed to parse contained ticket data: {:?}", err))
}
/// Verify the ticket with the provided key pair. The additional authentication data needs to
/// match the one used when generating the ticket, and the ticket's age must fall into the time
/// frame.
pub fn verify<P: HasPublic>(
&self,
keypair: &PKey<P>,
prefix: &str,
aad: Option<&str>,
) -> Result<T, Error> {
self.verify_with_time_frame(keypair, prefix, aad, -300..TICKET_LIFETIME)
}
/// Parse a ticket string.
pub fn parse(ticket: &str) -> Result<Self, Error> {
let mut parts = ticket.splitn(4, ':');
let prefix = percent_decode_str(
parts
.next()
.ok_or_else(|| format_err!("ticket without prefix"))?,
)
.decode_utf8()
.map_err(|err| format_err!("invalid ticket, error decoding prefix: {}", err))?;
let data = percent_decode_str(
parts
.next()
.ok_or_else(|| format_err!("ticket without data"))?,
)
.decode_utf8()
.map_err(|err| format_err!("invalid ticket, error decoding data: {}", err))?;
let time = i64::from_str_radix(
parts
.next()
.ok_or_else(|| format_err!("ticket without timestamp"))?,
16,
)
.map_err(|err| format_err!("ticket with bad timestamp: {}", err))?;
let remainder = parts
.next()
.ok_or_else(|| format_err!("ticket without signature"))?;
// <prefix>:<data>:<time>::signature - the 4th `.next()` swallows the first colon in the
// double-colon!
if !remainder.starts_with(':') {
bail!("ticket without signature separator");
}
let signature = base64::decode_config(&remainder[1..], base64::STANDARD_NO_PAD)
.map_err(|err| format_err!("ticket with bad signature: {}", err))?;
Ok(Self {
prefix: Cow::Owned(prefix.into_owned()),
data: data.into_owned(),
time,
signature: Some(signature),
_type_marker: PhantomData,
})
}
}
#[cfg(test)]
mod test {
use std::convert::Infallible;
use std::fmt;
use openssl::pkey::{PKey, Private};
use super::Ticket;
#[derive(Debug, Eq, PartialEq)]
struct Testid(String);
impl std::str::FromStr for Testid {
type Err = Infallible;
fn from_str(s: &str) -> Result<Self, Infallible> {
Ok(Self(s.to_string()))
}
}
impl fmt::Display for Testid {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{}", self.0)
}
}
fn simple_test<F>(key: &PKey<Private>, aad: Option<&str>, modify: F)
where
F: FnOnce(&mut Ticket<Testid>) -> bool,
{
let userid = Testid("root".to_string());
let mut ticket = Ticket::new("PREFIX", &userid).expect("failed to create Ticket struct");
let should_work = modify(&mut ticket);
let ticket = ticket.sign(key, aad).expect("failed to sign test ticket");
let parsed =
Ticket::<Testid>::parse(&ticket).expect("failed to parse generated test ticket");
if should_work {
let check: Testid = parsed
.verify(key, "PREFIX", aad)
.expect("failed to verify test ticket");
assert_eq!(userid, check);
} else {
parsed
.verify(key, "PREFIX", aad)
.expect_err("failed to verify test ticket");
}
}
#[test]
fn test_tickets() {
// first we need keys, for testing we use small keys for speed...
let rsa =
openssl::rsa::Rsa::generate(1024).expect("failed to generate RSA key for testing");
let key = openssl::pkey::PKey::<openssl::pkey::Private>::from_rsa(rsa)
.expect("failed to create PKey for RSA key");
simple_test(&key, Some("secret aad data"), |_| true);
simple_test(&key, None, |_| true);
simple_test(&key, None, |t| {
t.change_time(0);
false
});
simple_test(&key, None, |t| {
t.change_time(proxmox::tools::time::epoch_i64() + 0x1000_0000);
false
});
}
}


@@ -1,2 +0,0 @@
pub mod tokio_writer_adapter;
pub use tokio_writer_adapter::TokioWriterAdapter;


@@ -1,8 +0,0 @@
[package]
name = "proxmox-backup-banner"
version = "0.1.0"
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2018"
[dependencies]
nix = "0.19.1"


@@ -1,35 +0,0 @@
[package]
name = "proxmox-backup-client"
version = "0.1.0"
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2018"
[dependencies]
anyhow = "1.0"
futures = "0.3"
hyper = { version = "0.14", features = [ "full" ] }
libc = "0.2"
nix = "0.19.1"
openssl = "0.10"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
tokio = { version = "1.6", features = [ "rt", "rt-multi-thread" ] }
tokio-stream = "0.1.0"
tokio-util = { version = "0.6", features = [ "codec", "io" ] }
xdg = "2.2"
zstd = { version = "0.6", features = [ "bindgen" ] }
pathpatterns = "0.1.2"
pxar = { version = "0.10.1", features = [ "tokio-io" ] }
proxmox = { version = "0.13.3", features = [ "sortable-macro", "api-macro", "cli", "router" ] }
pbs-api-types = { path = "../pbs-api-types" }
pbs-buildcfg = { path = "../pbs-buildcfg" }
pbs-config = { path = "../pbs-config" }
pbs-client = { path = "../pbs-client" }
pbs-datastore = { path = "../pbs-datastore" }
pbs-fuse-loop = { path = "../pbs-fuse-loop" }
pbs-runtime = { path = "../pbs-runtime" }
proxmox-systemd = { path = "../proxmox-systemd" }
pbs-tools = { path = "../pbs-tools" }


@@ -1,28 +0,0 @@
[package]
name = "proxmox-file-restore"
version = "0.1.0"
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2018"
[dependencies]
anyhow = "1.0"
base64 = "0.12"
futures = "0.3"
libc = "0.2"
nix = "0.19.1"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
tokio = { version = "1.6", features = [ "io-std", "rt", "rt-multi-thread", "time" ] }
pxar = { version = "0.10.1", features = [ "tokio-io" ] }
proxmox = { version = "0.13.3", features = [ "api-macro", "cli" ] }
pbs-api-types = { path = "../pbs-api-types" }
pbs-buildcfg = { path = "../pbs-buildcfg" }
pbs-config = { path = "../pbs-config" }
pbs-client = { path = "../pbs-client" }
pbs-datastore = { path = "../pbs-datastore" }
pbs-runtime = { path = "../pbs-runtime" }
proxmox-systemd = { path = "../proxmox-systemd" }
pbs-tools = { path = "../pbs-tools" }


@@ -1,30 +0,0 @@
[package]
name = "proxmox-rest-server"
version = "0.1.0"
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2018"
description = "REST server implementation"

[dependencies]
anyhow = "1.0"
futures = "0.3"
handlebars = "3.0"
http = "0.2"
hyper = { version = "0.14", features = [ "full" ] }
lazy_static = "1.4"
libc = "0.2"
log = "0.4"
nix = "0.19.1"
percent-encoding = "2.1"
regex = "1.2"
serde = { version = "1.0", features = [] }
serde_json = "1.0"
tokio = { version = "1.6", features = ["signal", "process"] }
tokio-openssl = "0.6.1"
tower-service = "0.3.0"
url = "2.1"

proxmox = { version = "0.13.3", features = [ "router"] }

# fixme: remove this dependency (pbs_tools::broadcast_future)
pbs-tools = { path = "../pbs-tools" }


@@ -1,39 +0,0 @@
use anyhow::{bail, Error};
use hyper::header;

/// Possible compression methods, order determines preference (later is preferred)
#[derive(Eq, Ord, PartialEq, PartialOrd, Debug)]
pub enum CompressionMethod {
    Deflate,
    // Gzip,
    // Brotli,
}

impl CompressionMethod {
    pub fn content_encoding(&self) -> header::HeaderValue {
        header::HeaderValue::from_static(self.extension())
    }

    pub fn extension(&self) -> &'static str {
        match *self {
            // CompressionMethod::Brotli => "br",
            // CompressionMethod::Gzip => "gzip",
            CompressionMethod::Deflate => "deflate",
        }
    }
}

impl std::str::FromStr for CompressionMethod {
    type Err = Error;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            // "br" => Ok(CompressionMethod::Brotli),
            // "gzip" => Ok(CompressionMethod::Gzip),
            "deflate" => Ok(CompressionMethod::Deflate),
            // HTTP Accept-Encoding allows giving weights with ';q='
            other if other.starts_with("deflate;q=") => Ok(CompressionMethod::Deflate),
            _ => bail!("unknown compression format"),
        }
    }
}
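
A hedged usage sketch (not part of the diff) of how this `FromStr` implementation resolves typical Accept-Encoding tokens; the `pick_encoding` helper name is invented for illustration.

use std::str::FromStr;

// Plain and q-weighted deflate tokens both parse; anything else errors out.
fn pick_encoding(token: &str) -> Option<CompressionMethod> {
    CompressionMethod::from_str(token.trim()).ok()
}

#[test]
fn accept_encoding_tokens() {
    assert!(pick_encoding("deflate").is_some());
    assert!(pick_encoding("deflate;q=0.5").is_some());
    assert!(pick_encoding("br").is_none()); // Brotli support is commented out above
}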


@@ -1,94 +0,0 @@
use std::sync::Arc;
use std::net::SocketAddr;

use serde_json::{json, Value};

use proxmox::api::{RpcEnvironment, RpcEnvironmentType};

use crate::ApiConfig;

/// Encapsulates information about the runtime environment
pub struct RestEnvironment {
    env_type: RpcEnvironmentType,
    result_attributes: Value,
    auth_id: Option<String>,
    client_ip: Option<SocketAddr>,
    api: Arc<ApiConfig>,
}

impl RestEnvironment {
    pub fn new(env_type: RpcEnvironmentType, api: Arc<ApiConfig>) -> Self {
        Self {
            result_attributes: json!({}),
            auth_id: None,
            client_ip: None,
            env_type,
            api,
        }
    }

    pub fn api_config(&self) -> &ApiConfig {
        &self.api
    }

    pub fn log_auth(&self, auth_id: &str) {
        let msg = format!("successful auth for user '{}'", auth_id);
        log::info!("{}", msg);
        if let Some(auth_logger) = self.api.get_auth_log() {
            auth_logger.lock().unwrap().log(&msg);
        }
    }

    pub fn log_failed_auth(&self, failed_auth_id: Option<String>, msg: &str) {
        let msg = match (self.client_ip, failed_auth_id) {
            (Some(peer), Some(user)) => {
                format!("authentication failure; rhost={} user={} msg={}", peer, user, msg)
            }
            (Some(peer), None) => {
                format!("authentication failure; rhost={} msg={}", peer, msg)
            }
            (None, Some(user)) => {
                format!("authentication failure; rhost=unknown user={} msg={}", user, msg)
            }
            (None, None) => {
                format!("authentication failure; rhost=unknown msg={}", msg)
            }
        };
        log::error!("{}", msg);
        if let Some(auth_logger) = self.api.get_auth_log() {
            auth_logger.lock().unwrap().log(&msg);
        }
    }
}

impl RpcEnvironment for RestEnvironment {
    fn result_attrib_mut(&mut self) -> &mut Value {
        &mut self.result_attributes
    }

    fn result_attrib(&self) -> &Value {
        &self.result_attributes
    }

    fn env_type(&self) -> RpcEnvironmentType {
        self.env_type
    }

    fn set_auth_id(&mut self, auth_id: Option<String>) {
        self.auth_id = auth_id;
    }

    fn get_auth_id(&self) -> Option<String> {
        self.auth_id.clone()
    }

    fn set_client_ip(&mut self, client_ip: Option<SocketAddr>) {
        self.client_ip = client_ip;
    }

    fn get_client_ip(&self) -> Option<SocketAddr> {
        self.client_ip
    }
}
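
For reference, a hedged sketch (not part of the diff) of the audit lines `log_failed_auth` emits, following the format strings above; the peer, user, and message values here are invented examples:

    authentication failure; rhost=192.0.2.10:49812 user=john@pbs msg=invalid ticket
    authentication failure; rhost=unknown msg=no ticket provided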


@@ -1,141 +0,0 @@
use std::os::unix::io::RawFd;

use anyhow::{bail, format_err, Error};

use proxmox::tools::fd::Fd;
use proxmox::api::UserInformation;

mod compression;
pub use compression::*;

pub mod daemon;
pub mod formatter;

mod environment;
pub use environment::*;

mod state;
pub use state::*;

mod command_socket;
pub use command_socket::*;

mod file_logger;
pub use file_logger::{FileLogger, FileLogOptions};

mod api_config;
pub use api_config::ApiConfig;

mod rest;
pub use rest::{RestServer, handle_api_request};

pub enum AuthError {
    Generic(Error),
    NoData,
}

impl From<Error> for AuthError {
    fn from(err: Error) -> Self {
        AuthError::Generic(err)
    }
}

pub trait ApiAuth {
    fn check_auth(
        &self,
        headers: &http::HeaderMap,
        method: &hyper::Method,
    ) -> Result<(String, Box<dyn UserInformation + Sync + Send>), AuthError>;
}

static mut SHUTDOWN_REQUESTED: bool = false;

pub fn request_shutdown() {
    unsafe {
        SHUTDOWN_REQUESTED = true;
    }
    crate::server_shutdown();
}

#[inline(always)]
pub fn shutdown_requested() -> bool {
    unsafe { SHUTDOWN_REQUESTED }
}

pub fn fail_on_shutdown() -> Result<(), Error> {
    if shutdown_requested() {
        bail!("Server shutdown requested - aborting task");
    }
    Ok(())
}

/// Helper to set/clear the FD_CLOEXEC flag on file descriptors
pub fn fd_change_cloexec(fd: RawFd, on: bool) -> Result<(), Error> {
    use nix::fcntl::{fcntl, FdFlag, F_GETFD, F_SETFD};

    let mut flags = FdFlag::from_bits(fcntl(fd, F_GETFD)?)
        .ok_or_else(|| format_err!("unhandled file flags"))?; // nix crate is stupid this way...
    flags.set(FdFlag::FD_CLOEXEC, on);
    fcntl(fd, F_SETFD(flags))?;
    Ok(())
}

/// safe wrapper for `nix::sys::socket::socketpair` defaulting to `O_CLOEXEC` and guarding the file
/// descriptors.
pub fn socketpair() -> Result<(Fd, Fd), Error> {
    use nix::sys::socket;
    let (pa, pb) = socket::socketpair(
        socket::AddressFamily::Unix,
        socket::SockType::Stream,
        None,
        socket::SockFlag::SOCK_CLOEXEC,
    )?;
    Ok((Fd(pa), Fd(pb)))
}
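
// Hedged usage sketch (not part of the diff): the returned pair is a
// connected stream socket, so bytes written to one end are readable from
// the other; `socketpair_demo` is an invented name, and the `Fd` guards
// close both descriptors when dropped.
fn socketpair_demo() -> Result<(), Error> {
    let (a, b) = socketpair()?;
    nix::unistd::write(a.0, b"ping")?;
    let mut buf = [0u8; 4];
    nix::unistd::read(b.0, &mut buf)?;
    assert_eq!(&buf, b"ping");
    Ok(())
}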
/// Extract a specific cookie from cookie header.
/// We assume cookie_name is already url encoded.
pub fn extract_cookie(cookie: &str, cookie_name: &str) -> Option<String> {
    for pair in cookie.split(';') {
        let (name, value) = match pair.find('=') {
            Some(i) => (pair[..i].trim(), pair[(i + 1)..].trim()),
            None => return None, // Cookie format error
        };

        if name == cookie_name {
            use percent_encoding::percent_decode;
            if let Ok(value) = percent_decode(value.as_bytes()).decode_utf8() {
                return Some(value.into());
            } else {
                return None; // Cookie format error
            }
        }
    }

    None
}

/// normalize uri path
///
/// Do not allow ".", "..", or hidden files ".XXXX"
/// Also remove empty path components
pub fn normalize_uri_path(path: &str) -> Result<(String, Vec<&str>), Error> {
    let items = path.split('/');

    let mut path = String::new();
    let mut components = vec![];

    for name in items {
        if name.is_empty() {
            continue;
        }
        if name.starts_with('.') {
            bail!("Path contains illegal components.");
        }
        path.push('/');
        path.push_str(name);
        components.push(name);
    }

    Ok((path, components))
}
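
A hedged sketch (not part of the diff) of how the two helpers above behave on typical input; the cookie name and paths are invented examples.

#[test]
fn helper_behavior() {
    // The named cookie's value is returned percent-decoded.
    let auth = extract_cookie("PBSAuthCookie=abc%20def; other=1", "PBSAuthCookie");
    assert_eq!(auth.as_deref(), Some("abc def"));

    // Empty components are dropped and the path is rebuilt with single slashes.
    let (path, components) = normalize_uri_path("/api2//json/version").unwrap();
    assert_eq!(path, "/api2/json/version");
    assert_eq!(components, vec!["api2", "json", "version"]);

    // Any component starting with '.' (including "..") is rejected.
    assert!(normalize_uri_path("/api2/../etc").is_err());
}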


@@ -1,36 +0,0 @@
[package]
name = "proxmox-restore-daemon"
version = "0.1.0"
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2018"
description = "Proxmox Restore Daemon"

[dependencies]
anyhow = "1.0"
base64 = "0.12"
env_logger = "0.7"
futures = "0.3"
http = "0.2"
hyper = { version = "0.14", features = [ "full" ] }
lazy_static = "1.4"
libc = "0.2"
log = "0.4"
nix = "0.19.1"
regex = "1.2"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
tokio = { version = "1.6", features = ["parking_lot", "sync"] }
tokio-stream = "0.1.0"
tokio-util = { version = "0.6", features = [ "codec", "io" ] }

pathpatterns = "0.1.2"
pxar = { version = "0.10.1", features = [ "tokio-io" ] }

proxmox = { version = "0.13.3", features = [ "router", "sortable-macro" ] }

pbs-api-types = { path = "../pbs-api-types" }
pbs-runtime = { path = "../pbs-runtime" }
pbs-tools = { path = "../pbs-tools" }
pbs-datastore = { path = "../pbs-datastore" }
proxmox-rest-server = { path = "../proxmox-rest-server" }
pbs-client = { path = "../pbs-client" }

Some files were not shown because too many files have changed in this diff.