Compare commits

...

46 Commits

Author SHA1 Message Date
Thomas Lamprecht a67874b6ae bump version to 1.1.14-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-06-02 18:08:02 +02:00
Thomas Lamprecht 9402e9f357 cargo: update proxmox-acme-rs to 0.3
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-06-02 18:05:19 +02:00
Thomas Lamprecht b75bb5434e d/control.in: update
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-06-02 18:05:06 +02:00
Thomas Lamprecht ec44c3113b backport "datastore: lookup: reuse ChunkStore on stale datastore re-open"
Backport of commit 0bd9c87010

When re-opening a datastore because the cached entry is stale (only
on verify-new config changes), the chunk store was also re-opened.
That in turn creates a new ProcessLocker, losing any existing shared
lock, which can cause conflicts between long-running (24h+) backups
and GC.

To fix this, reuse the existing ChunkStore, and thus its
ProcessLocker, when creating an up-to-date datastore instance on
lookup, since only the datastore config should be reloaded. This is
fine as the ChunkStore path is not updatable over our API.

Note that this is a precautionary backport; the underlying issue is
relatively unlikely to cause trouble in the 1.x branch, as the
datastore is not re-opened that often there.

Originally-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-06-02 18:00:01 +02:00
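For illustration, a compact sketch of the lookup pattern this backport describes; names and fields are simplified and hypothetical (the real code keeps a global DATASTORE_MAP and a full DataStoreConfig):

    // Simplified sketch: keep the cached Arc<ChunkStore> (and thus its
    // ProcessLocker) alive across a config-triggered datastore re-open.
    use std::collections::HashMap;
    use std::sync::{Arc, Mutex};

    struct ChunkStore;                // owns the ProcessLocker in the real code
    struct DataStore { chunk_store: Arc<ChunkStore>, verify_new: bool }

    static MAP: Mutex<Option<HashMap<String, Arc<DataStore>>>> = Mutex::new(None);

    fn lookup_datastore(name: &str, verify_new: bool) -> Arc<DataStore> {
        let mut guard = MAP.lock().unwrap();
        let map = guard.get_or_insert_with(HashMap::new);

        let chunk_store = match map.get(name) {
            // config unchanged: plain cache hit
            Some(ds) if ds.verify_new == verify_new => return Arc::clone(ds),
            // stale entry: reuse the ChunkStore, reload only the config part
            Some(ds) => Arc::clone(&ds.chunk_store),
            // first open of this datastore
            None => Arc::new(ChunkStore),
        };

        let ds = Arc::new(DataStore { chunk_store, verify_new });
        map.insert(name.to_string(), Arc::clone(&ds));
        ds
    }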
Thomas Lamprecht cb21bf7454 ui: add notice for nearing PBS 1.1 End-of-Life
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-06-02 17:35:03 +02:00
Dominik Csapak a1cffef503 pbs-tools: LruCache: implement Drop
this fixes the memory leaked by the cache: we only freed the pointers
held in the map/list, not the underlying chunks they point to

moves the 'clear' implementation out of the trait bounds so that
Drop can reuse it

this is used e.g. for file download from a pxar

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
(cherry picked from commit 98983a9dab)
2022-01-20 15:46:35 +01:00
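A minimal sketch of what such a fix looks like (the real pbs-tools LruCache also keeps an intrusive list; this only shows the map of raw node pointers):

    // A cache that tracks heap nodes via raw pointers must free them itself;
    // keeping the cleanup in an inherent `clear()` (outside any trait bounds)
    // lets Drop reuse it.
    use std::collections::HashMap;

    struct LruCache<K, V> {
        map: HashMap<K, *mut (K, V)>,
    }

    impl<K, V> LruCache<K, V> {
        fn clear(&mut self) {
            for (_key, node) in self.map.drain() {
                // take back ownership of the allocation so it actually gets freed
                unsafe { drop(Box::from_raw(node)) };
            }
        }
    }

    impl<K, V> Drop for LruCache<K, V> {
        fn drop(&mut self) {
            // previously only the map of pointers was dropped here,
            // leaking the nodes (and thus the cached chunks) behind them
            self.clear();
        }
    }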
Wolfgang Bumiller 9b00099ead drop RawWaker usage
this was also leaking a refcount before; that is fixed now

See-also: proxmox/proxmox-async:
  * d0a3e38006fe ("drop RawWaker usage")
  * ff132e93c6fd ("rustfmt")

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-01-20 15:41:00 +01:00
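As a hedged aside (not necessarily how proxmox-async implemented it), the standard `Wake` trait is the usual way to avoid a hand-written RawWaker vtable and its easy-to-miss refcount handling:

    use std::sync::Arc;
    use std::task::{Wake, Waker};

    // Hypothetical waker; a real one would notify an executor queue.
    struct QueueWaker;

    impl Wake for QueueWaker {
        fn wake(self: Arc<Self>) {
            // the Arc is consumed and dropped here, so the reference
            // count stays balanced without any manual vtable code
        }
    }

    fn make_waker() -> Waker {
        Waker::from(Arc::new(QueueWaker))
    }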
Thomas Lamprecht d2351f1a81 bump version to 1.1.13-3
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-19 10:21:23 +02:00
Thomas Lamprecht 869e4601b4 api daemons: fix sending log-reopen command
send_command serializes everything so it cannot be used to send a
raw, optimized command. Normally that means we get an error like
> 'unable to parse parameters (expected json object)'
when used that way.

Switch over to send_raw_command, which does not re-serialize the
command.

Fixes: 45b8a032 ("refactor send_command")
Originally-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-11 14:56:30 +02:00
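A small illustration (assuming the serde_json crate; the function names above come from the commit itself) of why re-serializing an already-serialized command line breaks parsing:

    fn main() {
        let raw = "{\"command\":\"api-access-log-reopen\"}\n";
        // send_command-style serialization wraps the string in quotes again,
        // so the daemon receives a JSON string instead of a JSON object
        let reserialized = serde_json::to_string(raw).unwrap();
        println!("{}", reserialized); // "{\"command\":\"api-access-log-reopen\"}\n"
        // send_raw_command-style sending would transmit `raw` unchanged
    }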
Thomas Lamprecht 238e5b573e buildsys: prune-sim is not generated, do not cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-26 16:41:34 +02:00
Thomas Lamprecht 996680a336 bump version to 1.1.13-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-26 16:40:37 +02:00
Thomas Lamprecht 94f6127711 Revert "auth: 'crypt' is not thread safe"
With this I'm getting coredumps on every login:

> Process 20957 (proxmox-backup-) of user 34 dumped core.
>
> Stack trace of thread 20987:
> #0  0x0000563dec9ac37f _ZN3std3sys4unix14stack_overflow3imp14signal_handler17ha95ed06a038ca319E.llvm.11547235952357801165 (proxmox-backup-proxy)
> #1  0x00007f2638de9840 __restore_rt (libc.so.6)
> #2  0x00007f2638e51dac __stpncpy_sse2_unaligned (libc.so.6)
> #3  0x00007f26393b1340 __sha256_crypt_r (libcrypt.so.1)
> #4  0x00007f26393b0553 __crypt_r (libcrypt.so.1)
> #5  0x0000563dec6e44df _ZN14proxmox_backup4auth5crypt17hd5165f960093dfe7E (proxmox-backup-proxy)

This reverts commit acefa2bb6e.
2021-07-26 16:38:16 +02:00
Thomas Lamprecht 3841301ee9 d/control: update generated build-deps
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 12:36:36 +02:00
Thomas Lamprecht f406202825 bump version to 1.1.13-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 12:35:06 +02:00
Stefan Reiter ba50f57e93 file-restore: increase lock timeout on QEMU map
This lock is held during VM startup, so that multiple calls will not
start VMs twice. But this means that the timeout needs to incorporate
the time it might take a VM to boot, so increase it quite a bit.

This could previously lead to "interrupted system call" errors when
accessing backups with many disks.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
(cherry picked from commit 66501529a2)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 12:30:09 +02:00
Thomas Lamprecht 61a758f67d build.rs: tell cargo to only rerun build.rs step if .git/HEAD changes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 12:43:19 +02:00
Thomas Lamprecht 847c27fbee build.rs: factor out getting git command output into helper fn
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 12:43:19 +02:00
Thomas Lamprecht 7d79f3d5f7 file restore daemon: log about basic steps
to make the log more useful.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 9a06eb1618)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:08:15 +02:00
Thomas Lamprecht fa3fdea590 file restore daemon: reword warning about manual execution
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 309e14ebb7)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:08:10 +02:00
Thomas Lamprecht aa2cd76c58 restore daemon: use millisecond log resolution
During startup most of the stuff is happening in milliseconds (or
less), so the timestamp granularity of seconds made it hard to tell
if the previous command required 990ms or 1ms, which is quite the
difference in the restore daemon context.

Microseconds would not add much extra information; a millisecond is
already an acceptable lower bound for log resolution, so switch only
to millis for now.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit ecd66ecaf6)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:08:00 +02:00
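For reference, a sketch of such a logger setup with millisecond timestamps (env_logger API; newer releases expose the constructor on Builder, while the actual code in the diff below uses the older free function):

    fn init_restore_logger() {
        env_logger::Builder::from_env(
            env_logger::Env::default().default_filter_or("info"),
        )
        .write_style(env_logger::WriteStyle::Never) // plain output for the serial console
        .format_timestamp_millis()                  // ms instead of whole-second timestamps
        .init();
    }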
Thomas Lamprecht e2d82c7d4d restore daemon: create /run/proxmox-backup on startup
fixes file restore again.

The new Memcom tracking file lives in `/run/proxmox-backup` and is
always created on REST interaction, as CachedUserInfo uses it to
efficiently track config changes, and such a cache is used in each
REST handle_request.

Further, the Memcom infra expects the base PBS run dir to exist
already, which is an OK assumption to have, but in the file-restore
daemon we have a significantly more minimal environment, and the run
dir simply was not required there; even /run isn't a tmpfs yet.

Fixes fda19dcc6f ("fix CachedUserInfo by using a shared memory version counter")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 33d7292f29)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:08:00 +02:00
Thomas Lamprecht e9c2a34def REST: set error message extension for bad-request response log
We already send it to the user via the response body, but
log_response does not have (nor wants to have, FWIW) access to the
async body stream, so pass it through the ErrorMessageExtension
mechanism like we do elsewhere.

Note that this is not only useful for the PBS API proxy/daemon but
also for the REST server of the file-restore daemon running inside
the restore VM, where it really is *very* helpful for debugging.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit f4d371d2d2)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:07:41 +02:00
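A minimal sketch of that mechanism using the http crate types (the helper names here are illustrative; only ErrorMessageExtension comes from the commit):

    use http::Response;

    #[derive(Clone)]
    struct ErrorMessageExtension(String);

    // handler side: attach the error text to the response
    fn bad_request(msg: &str) -> http::Result<Response<String>> {
        Response::builder()
            .status(400)
            .extension(ErrorMessageExtension(msg.to_owned()))
            .body(msg.to_owned())
    }

    // access-log side: read it back without touching the body stream
    fn message_for_log(resp: &Response<String>) -> &str {
        match resp.extensions().get::<ErrorMessageExtension>() {
            Some(ext) => ext.0.as_str(),
            None => "request failed",
        }
    }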
Thomas Lamprecht 0fad95f032 REST: rust fmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 2d48533378)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:07:41 +02:00
Stoiko Ivanov 683595940b fix #3496: acme: plugin: add sleep for dns propagation
the dns plugin config allows specifying an amount of time to wait for
the TXT record to be set and propagated through DNS.

This patch adds a sleep for that amount of time.
The log message was taken from the Perl implementation in proxmox-acme
for consistency.

Tested with the powerdns plugin in my test setup.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
(cherry picked from commit 3f84541412)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:06:29 +02:00
Stoiko Ivanov 40060c1fed config: acme: make validation_delay crate public
we need the setting in acme::plugin.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
(cherry picked from commit 4d8bd03668)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:06:29 +02:00
Stoiko Ivanov 2abee30fdd acme: plugin: fix error message
extract_challenge is used by both dns-01 and http-01 challenges.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
(cherry picked from commit f9bd5e1691)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:06:29 +02:00
Thomas Lamprecht 7cdc53bbf7 buildsys: docs: clean: also clean generated JS files
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 13a2445744)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:06:04 +02:00
Fabian Ebner dac877252b api: disk list: sort by name
So callers get more stable results. Most noticeably, the disk list in
the web UI doesn't jump around upon reloading, and while sorting could
be done directly there, this way other callers get the benefit too.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
(cherry picked from commit bbff317aa7)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:04:57 +02:00
Fabian Ebner dd749b0e47 disks: also check for file systems with lsblk
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
(cherry picked from commit 20429238e0)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:04:57 +02:00
Fabian Ebner f98c02cbc6 disks: refactor partition type handling
in preparation for also getting the file system type from lsblk.

Co-developed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
(cherry picked from commit 364299740f)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:04:57 +02:00
Thomas Lamprecht 218d7e3ec6 rest: log response: avoid unnecessary mut on variable
a match expresses the fallback slightly more nicely and needs no mut,
which is always nice to avoid.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 6b5013edb3)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:02:47 +02:00
Stefan Reiter acefa2bb6e auth: 'crypt' is not thread safe
According to crypt(3):
"crypt places its result in a static storage area, which will be
overwritten by subsequent calls to crypt. It is not safe to call crypt
from multiple threads simultaneously."

This means that multiple login calls as a PBS-realm user can collide and
produce intermittent authentication failures. A visible case is for
file-restore, where VMs with many disks lead to just as many auth-calls
at the same time, as the GUI tries to expand each tree element on load.

Instead, use the thread-safe variant 'crypt_r', which places the result
into a pre-allocated buffer of type 'crypt_data'. The C struct is laid
out according to 'lib/crypt.h.in' and the man page mentioned above.

Use the opportunity and make both arguments to the rust 'crypt' function
take a &[u8].

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
(cherry picked from commit c4c4b5a3ef)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 10:02:01 +02:00
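For context, and explicitly not what this commit does (it switches to crypt_r with a caller-owned crypt_data buffer): the generic fallback for a non-reentrant C function is to serialize all calls behind a process-wide lock, roughly like this sketch:

    use std::sync::Mutex;

    static CRYPT_LOCK: Mutex<()> = Mutex::new(());

    fn crypt_serialized(password: &[u8], salt: &[u8]) -> String {
        let _guard = CRYPT_LOCK.lock().unwrap();
        // the real code would perform the unsafe FFI call into libcrypt's
        // crypt() here; its static result buffer is protected by the lock
        format!("hashed {} bytes with a {} byte salt", password.len(), salt.len())
    }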
Dietmar Maurer 36551172f3 depend on proxmox 0.11.6 (changed make_tmp_file() return type)
(cherry picked from commit bfd357c5a1)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 09:45:28 +02:00
Wolfgang Bumiller c26f4ef385 buildsys: Prepare new way for path dependencies
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
(cherry picked from commit 9f5b57a348)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 09:39:12 +02:00
Wolfgang Bumiller 60816a8a82 Cargo.toml: regroup imports
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
(cherry picked from commit aceae32baa)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 09:34:18 +02:00
Thomas Lamprecht d7d09712ef bump version to 1.1.12-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 12:58:14 +02:00
Thomas Lamprecht 825f019226 buildsys: call dpkg-buildpackage directly in deb-all
else we may double-build

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit a2c73c78dd)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 12:58:14 +02:00
Dominik Csapak ca5e5bb67f ui: datastore/OptionView: only navigate up when we removed the datastore
and not on window close

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
(cherry picked from commit 82cae19d19)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 12:54:50 +02:00
Dominik Csapak 8191ff150e ui: dashboard/DataStoreStatistics: fix closing <i> tag
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
(cherry picked from commit 4a489ae3de)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 12:49:42 +02:00
Thomas Lamprecht f2aeb13c68 subscription: set higher-level error to message instead of bailing
While the PVE one "bails" too, it has an eval around those calls and
moves the error to the message property, so let's do the same to
ensure a user can force an update on a too-old subscription.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit b81818b6ad)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 12:48:03 +02:00
Dietmar Maurer ce76b4b3c2 bump version to 1.1.11-1 2021-06-30 11:25:11 +02:00
Dominik Csapak 44b9d6f162 tape/drive: fix logging when requesting media
we try to load the correct media in a loop until we find the correct
tape. When encountering an error or the wrong tape, we want to log
that (and send an email, if one is set) requesting the correct tape.

while trying to avoid printing the same errors more than once in a
row, we had at least one case (starting with an empty tape in the
drive) which would not print/send any tape request at all.

rework that code to use a custom 'TapeRequest' enum, which contains
the state + error message, and a helper that prints and sends an
email when the state changes.

this reduces the change check/log to a single variable, instead of
four (tried, last_media_uuid, last_error, failure_reason)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-06-30 11:22:04 +02:00
Dietmar Maurer 53e80e8aa2 tape: fix LTO locate_file for HP drives
Add test code to the first locate_file command to compute
locate_offset. Subsequent locate_file commands use that offset.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-06-30 11:22:04 +02:00
Dominik Csapak f94aa5ceb1 fix #3393 (again): pxar/create: try to read xattrs/fcaps/acls by default
we have a static list of filesystems and their capabilities regarding
file attributes and fs features (e.g. sockets/fifos/etc.), which also
includes xattrs, acls and fcaps

if we did not know a filesystem by its magic number (for example
cephfs), we did not even attempt to read xattrs, etc.

this patch adds those flags by default to unknown filesystems, and
removes them when we encounter EOPNOTSUPP (to reduce the number
of syscalls)

with this, we should be able to catch xattrs/acls/fcaps on all
(unknown) fs types that support them

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-30 11:22:04 +02:00
Dominik Csapak 3e4b9868a0 proxmox-backup-manager: show task log on datastore create
since the output:
Result: "<UPID>"
is not really interesting, instead show the task log while the
datastore is being created, since that now runs in a worker

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-30 11:22:04 +02:00
Thomas Lamprecht 4d86df04a0 bump version to 1.1.10-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-16 09:55:47 +02:00
30 changed files with 598 additions and 253 deletions

View File

@@ -1,6 +1,6 @@
 [package]
 name = "proxmox-backup"
-version = "1.1.9"
+version = "1.1.14"
 authors = [
     "Dietmar Maurer <dietmar@proxmox.com>",
     "Dominik Csapak <d.csapak@proxmox.com>",
@@ -52,15 +52,6 @@ pam-sys = "0.5"
 percent-encoding = "2.1"
 pin-utils = "0.1.0"
 pin-project = "1.0"
-pathpatterns = "0.1.2"
-proxmox = { version = "0.11.5", features = [ "sortable-macro", "api-macro" ] }
-#proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
-#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro" ] }
-proxmox-fuse = "0.1.1"
-proxmox-http = { version = "0.2.1", features = [ "client", "http-helpers", "websocket" ] }
-#proxmox-http = { version = "0.2.0", path = "../proxmox/proxmox-http", features = [ "client", "http-helpers", "websocket" ] }
-pxar = { version = "0.10.1", features = [ "tokio-io" ] }
-#pxar = { path = "../pxar", features = [ "tokio-io" ] }
 regex = "1.2"
 rustyline = "7"
 serde = { version = "1.0", features = ["derive"] }
@@ -82,7 +73,20 @@ zstd = { version = "0.4", features = [ "bindgen" ] }
 nom = "5.1"
 crossbeam-channel = "0.5"
-proxmox-acme-rs = "0.2.1"
+pathpatterns = "0.1.2"
+pxar = { version = "0.10.1", features = [ "tokio-io" ] }
+proxmox = { version = "0.11.6", features = [ "sortable-macro", "api-macro", "cli", "router", "tfa" ] }
+proxmox-acme-rs = "0.3"
+proxmox-fuse = "0.1.1"
+proxmox-http = { version = "0.2.1", features = [ "client", "http-helpers", "websocket" ] }
+
+# Local path overrides
+# NOTE: You must run `cargo update` after changing this for it to take effect!
+[patch.crates-io]
+#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "cli", "router", "tfa" ] }
+#proxmox-http = { path = "../proxmox/proxmox-http", features = [ "client", "http-helpers", "websocket" ] }
+#pxar = { path = "../pxar", features = [ "tokio-io" ] }
 
 [features]
 default = []

View File

@@ -113,7 +113,9 @@ deb: build
 	lintian $(DEBS)
 .PHONY: deb-all
-deb-all: $(DOC_DEB) $(DEBS)
+deb-all: build
+	cd build; dpkg-buildpackage -b -us -uc --no-pre-clean
+	lintian $(DEBS) $(DOC_DEB)
 .PHONY: dsc
 dsc: $(DSC)

View File

@@ -2,23 +2,22 @@
 use std::env;
 use std::process::Command;
 
+fn git_command(args: &[&str]) -> String {
+    match Command::new("git").args(args).output() {
+        Ok(output) => String::from_utf8(output.stdout).unwrap().trim_end().to_string(),
+        Err(err) => {
+            panic!("git {:?} failed: {}", args, err);
+        }
+    }
+}
+
 fn main() {
+    let repo_path = git_command(&["rev-parse", "--show-toplevel"]);
     let repoid = match env::var("REPOID") {
         Ok(repoid) => repoid,
-        Err(_) => {
-            match Command::new("git")
-                .args(&["rev-parse", "HEAD"])
-                .output()
-            {
-                Ok(output) => {
-                    String::from_utf8(output.stdout).unwrap()
-                }
-                Err(err) => {
-                    panic!("git rev-parse failed: {}", err);
-                }
-            }
-        }
+        Err(_) => git_command(&["rev-parse", "HEAD"]),
     };
 
     println!("cargo:rustc-env=REPOID={}", repoid);
+    println!("cargo:rerun-if-changed={}/.git/HEAD", repo_path);
 }

debian/changelog
View File

@@ -1,4 +1,101 @@
-rust-proxmox-backup (1.1.9-1) unstable; urgency=medium
+rust-proxmox-backup (1.1.14-1) buster; urgency=medium
+
+  * drop RawWaker usage to avoid a leaking a refcount
+
+  * pbs-tools: LruCache: implement Drop to fix a memory leak for the cache
+
+  * ui: add notice for nearing PBS 1.1 End-of-Life
+
+  * backport "datastore: lookup: reuse ChunkStore on stale datastore re-open"
+
+ -- Proxmox Support Team <support@proxmox.com>  Thu, 02 Jun 2022 18:07:54 +0200
+
+rust-proxmox-backup (1.1.13-3) buster; urgency=medium
+
+  * fix sending log-rotation command to API daemons
+
+ -- Proxmox Support Team <support@proxmox.com>  Tue, 19 Oct 2021 10:21:18 +0200
+
+rust-proxmox-backup (1.1.13-2) buster; urgency=medium
+
+  * revert "auth: improve thread safety of 'crypt' C-library", not safe for
+    Debian buster based releases.
+
+ -- Proxmox Support Team <support@proxmox.com>  Mon, 26 Jul 2021 16:40:07 +0200
+
+rust-proxmox-backup (1.1.13-1) buster; urgency=medium
+
+  * auth: improve thread safety of 'crypt' C-library
+
+  * file-restore: increase lock timeout on QEMU map
+
+  * file restore daemon: log basic startup steps
+
+  * REST-API: set error message extension for bad-request response log to
+    ensure the actual error is logged in any (access) log, making debugging
+    such issues easier.
+
+  * restore daemon: use millisecond log resolution
+
+  * fix #3496: acme: plugin: actually sleep after setting the TXT record,
+    ensuring DNS propagation of that record. This makes it catch up with the
+    docs/web-interface, where the option was already available.
+
+ -- Proxmox Support Team <support@proxmox.com>  Fri, 23 Jul 2021 12:34:29 +0200
+
+rust-proxmox-backup (1.1.12-1) buster; urgency=medium
+
+  * subscription: set higher-level error to message instead of bailing out, to
+    ensure a force-check gets through
+
+  * ui: dashboard: datastore stats: fix closing <i> tag
+
+  * ui: datastore: option view: only navigate up when we actually removed the
+    datastore
+
+ -- Proxmox Support Team <support@proxmox.com>  Fri, 09 Jul 2021 12:56:35 +0200
+
+rust-proxmox-backup (1.1.11-1) buster; urgency=medium
+
+  * tape/drive: fix logging when requesting media
+
+  * tape: fix LTO locate_file for HP drives
+
+  * fix #3393 (again): pxar/create: try to read xattrs/fcaps/acls by default
+
+  * proxmox-backup-manager: show task log on datastore create
+
+ -- Proxmox Support Team <support@proxmox.com>  Wed, 30 Jun 2021 11:24:20 +0200
+
+rust-proxmox-backup (1.1.10-1) buster; urgency=medium
+
+  * ui: datastore list summary: catch and show errors per datastore
+
+  * ui: dashboard: task summary: add a 'close' tool to the header
+
+  * ensure that backups which are currently being restored or backed up to a
+    tape won't get pruned
+
+  * improve error handling when locking a tape drive for a backup job
+
+  * client/pull: log snapshots that are skipped because of creation time being
+    older than last sync time
+
+  * ui: datastore options: add remove button to drop a datastore from the
+    configuration, without removing any actual data
+
+  * ui: tape: drive selector: do not autoselect the drive
+
+  * ui: tape: backup job: use correct default value for pbsUserSelector
+
+  * fix #3433: disks: port over Proxmox VE's S.M.A.R.T wearout logic
+
+  * backup: add helpers for async last recently used (LRU) caches for chunk
+    and index reading of backup snapshot
+
+ -- Proxmox Support Team <support@proxmox.com>  Wed, 16 Jun 2021 09:46:15 +0200
+
+rust-proxmox-backup (1.1.9-1) stable; urgency=medium
 
   * lto/sg_tape/encryption: remove non lto-4 supported byte

debian/control
View File

@@ -39,10 +39,13 @@ Build-Depends: debhelper (>= 11),
  librust-percent-encoding-2+default-dev (>= 2.1-~~),
  librust-pin-project-1+default-dev,
  librust-pin-utils-0.1+default-dev,
- librust-proxmox-0.11+api-macro-dev (>= 0.11.5-~~),
- librust-proxmox-0.11+default-dev (>= 0.11.5-~~),
- librust-proxmox-0.11+sortable-macro-dev (>= 0.11.5-~~),
- librust-proxmox-acme-rs-0.2+default-dev (>= 0.2.1-~~),
+ librust-proxmox-0.11+api-macro-dev (>= 0.11.6-~~),
+ librust-proxmox-0.11+cli-dev (>= 0.11.6-~~),
+ librust-proxmox-0.11+default-dev (>= 0.11.6-~~),
+ librust-proxmox-0.11+router-dev (>= 0.11.6-~~),
+ librust-proxmox-0.11+sortable-macro-dev (>= 0.11.6-~~),
+ librust-proxmox-0.11+tfa-dev (>= 0.11.6-~~),
+ librust-proxmox-acme-rs-0.3+default-dev,
  librust-proxmox-fuse-0.1+default-dev (>= 0.1.1-~~),
  librust-proxmox-http-0.2+client-dev (>= 0.2.1-~~),
  librust-proxmox-http-0.2+default-dev (>= 0.2.1-~~),
@@ -125,7 +128,7 @@ Depends: fonts-font-awesome,
  postfix | mail-transport-agent,
  proxmox-backup-docs,
  proxmox-mini-journalreader,
- proxmox-widget-toolkit (>= 2.5-6),
+ proxmox-widget-toolkit (>= 2.6-2),
  pve-xtermjs (>= 4.7.0-1),
  sg3-utils,
  smartmontools,

debian/control.in
View File

@@ -12,7 +12,7 @@ Depends: fonts-font-awesome,
  postfix | mail-transport-agent,
  proxmox-backup-docs,
  proxmox-mini-journalreader,
- proxmox-widget-toolkit (>= 2.5-6),
+ proxmox-widget-toolkit (>= 2.6-2),
  pve-xtermjs (>= 4.7.0-1),
  sg3-utils,
  smartmontools,

View File

@@ -228,6 +228,7 @@ epub3: ${GENERATED_SYNOPSIS}
 clean:
 	rm -r -f *~ *.1 ${BUILDDIR} ${GENERATED_SYNOPSIS} api-viewer/apidata.js
+	rm -f api-viewer/apidoc.js lto-barcode/lto-barcode-generator.js
 
 install_manual_pages: ${MAN1_PAGES} ${MAN5_PAGES}

View File

@@ -2,6 +2,7 @@ use std::future::Future;
 use std::pin::Pin;
 use std::process::Stdio;
 use std::sync::Arc;
+use std::time::Duration;
 
 use anyhow::{bail, format_err, Error};
 use hyper::{Body, Request, Response};
@@ -68,7 +69,7 @@ fn extract_challenge<'a>(
         .challenges
         .iter()
         .find(|ch| ch.ty == ty)
-        .ok_or_else(|| format_err!("no supported challenge type (dns-01) found"))
+        .ok_or_else(|| format_err!("no supported challenge type ({}) found", ty))
 }
 
 async fn pipe_to_tasklog<T: AsyncRead + Unpin>(
@@ -180,7 +181,21 @@ impl AcmePlugin for DnsPlugin {
         domain: &'d AcmeDomain,
         task: Arc<WorkerTask>,
     ) -> Pin<Box<dyn Future<Output = Result<&'c str, Error>> + Send + 'fut>> {
-        Box::pin(self.action(client, authorization, domain, task, "setup"))
+        Box::pin(async move {
+            let result = self
+                .action(client, authorization, domain, task.clone(), "setup")
+                .await;
+            let validation_delay = self.core.validation_delay.unwrap_or(30) as u64;
+            if validation_delay > 0 {
+                task.log(format!(
+                    "Sleeping {} seconds to wait for TXT record propagation",
+                    validation_delay
+                ));
+                tokio::time::sleep(Duration::from_secs(validation_delay)).await;
+            }
+            result
+        })
     }
 
     fn teardown<'fut, 'a: 'fut, 'b: 'fut, 'c: 'fut, 'd: 'fut>(

View File

@@ -66,6 +66,8 @@ pub fn list_disks(
         }
     }
 
+    list.sort_by(|a, b| a.name.cmp(&b.name));
+
     Ok(list)
 }

View File

@@ -52,16 +52,20 @@ impl DataStore {
         let mut map = DATASTORE_MAP.lock().unwrap();
 
-        if let Some(datastore) = map.get(name) {
+        // reuse chunk store so that we keep using the same process locker instance!
+        let chunk_store = if let Some(datastore) = map.get(name) {
             // Compare Config - if changed, create new Datastore object!
             if datastore.chunk_store.base == path &&
                 datastore.verify_new == config.verify_new.unwrap_or(false)
             {
                 return Ok(datastore.clone());
             }
-        }
+            Arc::clone(&datastore.chunk_store)
+        } else {
+            Arc::new(ChunkStore::open(name, &config.path)?)
+        };
 
-        let datastore = DataStore::open_with_path(name, &path, config)?;
+        let datastore = DataStore::open_with_path(chunk_store, config)?;
 
         let datastore = Arc::new(datastore);
         map.insert(name.to_string(), datastore.clone());
@@ -81,9 +85,7 @@ impl DataStore {
         Ok(())
     }
 
-    fn open_with_path(store_name: &str, path: &Path, config: DataStoreConfig) -> Result<Self, Error> {
-        let chunk_store = ChunkStore::open(store_name, path)?;
-
+    fn open_with_path(chunk_store: Arc<ChunkStore>, config: DataStoreConfig) -> Result<Self, Error> {
         let mut gc_status_path = chunk_store.base_path();
         gc_status_path.push(".gc-status");
@@ -100,7 +102,7 @@ impl DataStore {
         };
 
         Ok(Self {
-            chunk_store: Arc::new(chunk_store),
+            chunk_store,
             gc_mutex: Mutex::new(()),
             last_gc_status: Mutex::new(gc_status),
             verify_new: config.verify_new.unwrap_or(false),

View File

@@ -737,11 +737,11 @@ async fn command_reopen_logfiles() -> Result<(), Error> {
     // only care about the most recent daemon instance for each, proxy & api, as other older ones
     // should not respond to new requests anyway, but only finish their current one and then exit.
     let sock = server::our_ctrl_sock();
-    let f1 = server::send_command(sock, "{\"command\":\"api-access-log-reopen\"}\n");
+    let f1 = server::send_raw_command(sock, "{\"command\":\"api-access-log-reopen\"}\n");
 
     let pid = server::read_pid(buildcfg::PROXMOX_BACKUP_API_PID_FN)?;
     let sock = server::ctrl_sock_from_pid(pid);
-    let f2 = server::send_command(sock, "{\"command\":\"api-access-log-reopen\"}\n");
+    let f2 = server::send_raw_command(sock, "{\"command\":\"api-access-log-reopen\"}\n");
 
     match futures::join!(f1, f2) {
         (Err(e1), Err(e2)) => Err(format_err!("reopen commands failed, proxy: {}; api: {}", e1, e2)),

View File

@@ -1,7 +1,7 @@
 ///! Daemon binary to run inside a micro-VM for secure single file restore of disk images
 use anyhow::{bail, format_err, Error};
-use log::error;
 use lazy_static::lazy_static;
+use log::{info, error};
 
 use std::os::unix::{
     io::{FromRawFd, RawFd},
@@ -37,24 +37,31 @@ lazy_static! {
 /// This is expected to be run by 'proxmox-file-restore' within a mini-VM
 fn main() -> Result<(), Error> {
     if !Path::new(VM_DETECT_FILE).exists() {
-        bail!(concat!(
-            "This binary is not supposed to be run manually. ",
-            "Please use 'proxmox-file-restore' instead."
-        ));
+        bail!(
+            "This binary is not supposed to be run manually, use 'proxmox-file-restore' instead."
+        );
     }
 
     // don't have a real syslog (and no persistance), so use env_logger to print to a log file (via
     // stdout to a serial terminal attached by QEMU)
     env_logger::from_env(env_logger::Env::default().default_filter_or("info"))
         .write_style(env_logger::WriteStyle::Never)
+        .format_timestamp_millis()
         .init();
 
+    // the API may save some stuff there, e.g., the memcon tracking file
+    // we do not care much, but it's way less headache to just create it
+    std::fs::create_dir_all("/run/proxmox-backup")?;
+
     // scan all attached disks now, before starting the API
     // this will panic and stop the VM if anything goes wrong
+    info!("scanning all disks...");
     {
         let _disk_state = DISK_STATE.lock().unwrap();
     }
 
+    info!("disk scan complete, starting main runtime...");
+
     proxmox_backup::tools::runtime::main(run())
 }

View File

@@ -5,6 +5,11 @@ use proxmox::api::{api, cli::*, RpcEnvironment, ApiHandler};
 use proxmox_backup::config;
 use proxmox_backup::api2::{self, types::* };
+use proxmox_backup::client::{
+    connect_to_localhost,
+    view_task_result,
+};
+use proxmox_backup::config::datastore::DIR_NAME_SCHEMA;
 
 #[api(
     input: {
@@ -67,6 +72,81 @@ fn show_datastore(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Value
     Ok(Value::Null)
 }
 
+#[api(
+    protected: true,
+    input: {
+        properties: {
+            name: {
+                schema: DATASTORE_SCHEMA,
+            },
+            path: {
+                schema: DIR_NAME_SCHEMA,
+            },
+            comment: {
+                optional: true,
+                schema: SINGLE_LINE_COMMENT_SCHEMA,
+            },
+            "notify-user": {
+                optional: true,
+                type: Userid,
+            },
+            "notify": {
+                optional: true,
+                schema: DATASTORE_NOTIFY_STRING_SCHEMA,
+            },
+            "gc-schedule": {
+                optional: true,
+                schema: GC_SCHEDULE_SCHEMA,
+            },
+            "prune-schedule": {
+                optional: true,
+                schema: PRUNE_SCHEDULE_SCHEMA,
+            },
+            "keep-last": {
+                optional: true,
+                schema: PRUNE_SCHEMA_KEEP_LAST,
+            },
+            "keep-hourly": {
+                optional: true,
+                schema: PRUNE_SCHEMA_KEEP_HOURLY,
+            },
+            "keep-daily": {
+                optional: true,
+                schema: PRUNE_SCHEMA_KEEP_DAILY,
+            },
+            "keep-weekly": {
+                optional: true,
+                schema: PRUNE_SCHEMA_KEEP_WEEKLY,
+            },
+            "keep-monthly": {
+                optional: true,
+                schema: PRUNE_SCHEMA_KEEP_MONTHLY,
+            },
+            "keep-yearly": {
+                optional: true,
+                schema: PRUNE_SCHEMA_KEEP_YEARLY,
+            },
+            "output-format": {
+                schema: OUTPUT_FORMAT,
+                optional: true,
+            },
+        },
+    },
+)]
+/// Create new datastore config.
+async fn create_datastore(mut param: Value) -> Result<Value, Error> {
+
+    let output_format = extract_output_format(&mut param);
+
+    let mut client = connect_to_localhost()?;
+
+    let result = client.post(&"api2/json/config/datastore", Some(param)).await?;
+
+    view_task_result(&mut client, result, &output_format).await?;
+
+    Ok(Value::Null)
+}
+
 pub fn datastore_commands() -> CommandLineInterface {
 
     let cmd_def = CliCommandMap::new()
@@ -77,7 +157,7 @@ pub fn datastore_commands() -> CommandLineInterface {
                 .completion_cb("name", config::datastore::complete_datastore_name)
         )
         .insert("create",
-                CliCommand::new(&api2::config::datastore::API_METHOD_CREATE_DATASTORE)
+                CliCommand::new(&API_METHOD_CREATE_DATASTORE)
                     .arg_param(&["name", "path"])
         )
         .insert("update",

View File

@@ -50,7 +50,7 @@ impl VMStateMap {
     /// Acquire a lock on the state map and retrieve a deserialized version
     fn load() -> Result<Self, Error> {
         let mut file = Self::open_file_raw(true)?;
-        lock_file(&mut file, true, Some(std::time::Duration::from_secs(5)))?;
+        lock_file(&mut file, true, Some(std::time::Duration::from_secs(120)))?;
         let map = serde_json::from_reader(&file).unwrap_or_default();
         Ok(Self { map, file })
     }

View File

@@ -1,7 +1,7 @@
 //! Helper to start a QEMU VM for single file restore.
 use std::fs::{File, OpenOptions};
 use std::io::prelude::*;
-use std::os::unix::io::{AsRawFd, FromRawFd};
+use std::os::unix::io::AsRawFd;
 use std::path::PathBuf;
 use std::time::Duration;
@@ -11,10 +11,7 @@ use tokio::time;
 use nix::sys::signal::{kill, Signal};
 use nix::unistd::Pid;
 
-use proxmox::tools::{
-    fd::Fd,
-    fs::{create_path, file_read_string, make_tmp_file, CreateOptions},
-};
+use proxmox::tools::fs::{create_path, file_read_string, make_tmp_file, CreateOptions};
 
 use proxmox_backup::backup::backup_user;
 use proxmox_backup::client::{VsockClient, DEFAULT_VSOCK_PORT};
@@ -83,14 +80,14 @@ pub fn try_kill_vm(pid: i32) -> Result<(), Error> {
     Ok(())
 }
 
-async fn create_temp_initramfs(ticket: &str, debug: bool) -> Result<(Fd, String), Error> {
+async fn create_temp_initramfs(ticket: &str, debug: bool) -> Result<(File, String), Error> {
     use std::ffi::CString;
     use tokio::fs::File;
 
-    let (tmp_fd, tmp_path) =
+    let (tmp_file, tmp_path) =
        make_tmp_file("/tmp/file-restore-qemu.initramfs.tmp", CreateOptions::new())?;
     nix::unistd::unlink(&tmp_path)?;
-    tools::fd_change_cloexec(tmp_fd.0, false)?;
+    tools::fd_change_cloexec(tmp_file.as_raw_fd(), false)?;
 
     let initramfs = if debug {
         buildcfg::PROXMOX_BACKUP_INITRAMFS_DBG_FN
@@ -98,7 +95,7 @@ async fn create_temp_initramfs(ticket: &str, debug: bool) -> Result<(Fd, String)
         buildcfg::PROXMOX_BACKUP_INITRAMFS_FN
     };
 
-    let mut f = File::from_std(unsafe { std::fs::File::from_raw_fd(tmp_fd.0) });
+    let mut f = File::from_std(tmp_file);
     let mut base = File::open(initramfs).await?;
 
     tokio::io::copy(&mut base, &mut f).await?;
@@ -118,11 +115,10 @@ async fn create_temp_initramfs(ticket: &str, debug: bool) -> Result<(Fd, String)
         .await?;
     tools::cpio::append_trailer(&mut f).await?;
 
-    // forget the tokio file, we close the file descriptor via the returned Fd
-    std::mem::forget(f);
+    let tmp_file = f.into_std().await;
+    let path = format!("/dev/fd/{}", &tmp_file.as_raw_fd());
 
-    let path = format!("/dev/fd/{}", &tmp_fd.0);
-    Ok((tmp_fd, path))
+    Ok((tmp_file, path))
 }
 
 pub async fn start_vm(
@@ -145,9 +141,9 @@ pub async fn start_vm(
     validate_img_existance(debug)?;
 
     let pid;
-    let (pid_fd, pid_path) = make_tmp_file("/tmp/file-restore-qemu.pid.tmp", CreateOptions::new())?;
+    let (mut pid_file, pid_path) = make_tmp_file("/tmp/file-restore-qemu.pid.tmp", CreateOptions::new())?;
     nix::unistd::unlink(&pid_path)?;
-    tools::fd_change_cloexec(pid_fd.0, false)?;
+    tools::fd_change_cloexec(pid_file.as_raw_fd(), false)?;
 
     let (_ramfs_pid, ramfs_path) = create_temp_initramfs(ticket, debug).await?;
@@ -195,7 +191,7 @@ pub async fn start_vm(
         },
         "-daemonize",
         "-pidfile",
-        &format!("/dev/fd/{}", pid_fd.as_raw_fd()),
+        &format!("/dev/fd/{}", pid_file.as_raw_fd()),
         "-name",
         PBS_VM_NAME,
     ];
@@ -282,8 +278,6 @@ pub async fn start_vm(
     // at this point QEMU is already daemonized and running, so if anything fails we
     // technically leave behind a zombie-VM... this shouldn't matter, as it will stop
    // itself soon enough (timer), and the following operations are unlikely to fail
-    let mut pid_file = unsafe { File::from_raw_fd(pid_fd.as_raw_fd()) };
-    std::mem::forget(pid_fd); // FD ownership is now in pid_fd/File
     let mut pidstr = String::new();
     pid_file.read_to_string(&mut pidstr)?;
     pid = pidstr.trim_end().parse().map_err(|err| {

View File

@@ -72,7 +72,7 @@ pub struct DnsPluginCore {
     ///
     /// Allows to cope with long TTL of DNS records.
     #[serde(skip_serializing_if = "Option::is_none", default)]
-    validation_delay: Option<u32>,
+    pub(crate) validation_delay: Option<u32>,
 
     /// Flag to disable the config.
     #[serde(skip_serializing_if = "Option::is_none", default)]

View File

@@ -169,7 +169,10 @@ where
         bail!("refusing to backup a virtual file system");
     }
 
-    let fs_feature_flags = Flags::from_magic(fs_magic);
+    let mut fs_feature_flags = Flags::from_magic(fs_magic);
 
     let stat = nix::sys::stat::fstat(source_dir.as_raw_fd())?;
     let metadata = get_metadata(
@@ -177,6 +177,7 @@ where
         &stat,
         feature_flags & fs_feature_flags,
         fs_magic,
+        &mut fs_feature_flags,
     )
     .map_err(|err| format_err!("failed to get metadata for source directory: {}", err))?;
@@ -533,7 +534,7 @@ impl Archiver {
             None => return Ok(()),
         };
 
-        let metadata = get_metadata(fd.as_raw_fd(), &stat, self.flags(), self.fs_magic)?;
+        let metadata = get_metadata(fd.as_raw_fd(), &stat, self.flags(), self.fs_magic, &mut self.fs_feature_flags)?;
 
         if self
             .patterns
@@ -742,7 +743,7 @@ impl Archiver {
     }
 }
 
-fn get_metadata(fd: RawFd, stat: &FileStat, flags: Flags, fs_magic: i64) -> Result<Metadata, Error> {
+fn get_metadata(fd: RawFd, stat: &FileStat, flags: Flags, fs_magic: i64, fs_feature_flags: &mut Flags) -> Result<Metadata, Error> {
     // required for some of these
     let proc_path = Path::new("/proc/self/fd/").join(fd.to_string());
@@ -757,14 +758,14 @@ fn get_metadata(fd: RawFd, stat: &FileStat, flags: Flags, fs_magic: i64) -> Resu
         ..Default::default()
     };
 
-    get_xattr_fcaps_acl(&mut meta, fd, &proc_path, flags)?;
+    get_xattr_fcaps_acl(&mut meta, fd, &proc_path, flags, fs_feature_flags)?;
     get_chattr(&mut meta, fd)?;
     get_fat_attr(&mut meta, fd, fs_magic)?;
     get_quota_project_id(&mut meta, fd, flags, fs_magic)?;
     Ok(meta)
 }
 
-fn get_fcaps(meta: &mut Metadata, fd: RawFd, flags: Flags) -> Result<(), Error> {
+fn get_fcaps(meta: &mut Metadata, fd: RawFd, flags: Flags, fs_feature_flags: &mut Flags) -> Result<(), Error> {
     if !flags.contains(Flags::WITH_FCAPS) {
         return Ok(());
     }
@@ -775,7 +776,10 @@ fn get_fcaps(meta: &mut Metadata, fd: RawFd, flags: Flags) -> Result<(), Error>
             Ok(())
         }
         Err(Errno::ENODATA) => Ok(()),
-        Err(Errno::EOPNOTSUPP) => Ok(()),
+        Err(Errno::EOPNOTSUPP) => {
+            fs_feature_flags.remove(Flags::WITH_FCAPS);
+            Ok(())
+        }
        Err(Errno::EBADF) => Ok(()), // symlinks
         Err(err) => bail!("failed to read file capabilities: {}", err),
     }
@@ -786,6 +790,7 @@ fn get_xattr_fcaps_acl(
     fd: RawFd,
     proc_path: &Path,
     flags: Flags,
+    fs_feature_flags: &mut Flags,
 ) -> Result<(), Error> {
     if !flags.contains(Flags::WITH_XATTRS) {
         return Ok(());
@@ -793,19 +798,22 @@ fn get_xattr_fcaps_acl(
 
     let xattrs = match xattr::flistxattr(fd) {
         Ok(names) => names,
-        Err(Errno::EOPNOTSUPP) => return Ok(()),
+        Err(Errno::EOPNOTSUPP) => {
+            fs_feature_flags.remove(Flags::WITH_XATTRS);
+            return Ok(());
+        },
         Err(Errno::EBADF) => return Ok(()), // symlinks
         Err(err) => bail!("failed to read xattrs: {}", err),
     };
 
     for attr in &xattrs {
         if xattr::is_security_capability(&attr) {
-            get_fcaps(meta, fd, flags)?;
+            get_fcaps(meta, fd, flags, fs_feature_flags)?;
             continue;
         }
 
         if xattr::is_acl(&attr) {
-            get_acl(meta, proc_path, flags)?;
+            get_acl(meta, proc_path, flags, fs_feature_flags)?;
             continue;
         }
@@ -910,7 +918,7 @@ fn get_quota_project_id(
     Ok(())
 }
 
-fn get_acl(metadata: &mut Metadata, proc_path: &Path, flags: Flags) -> Result<(), Error> {
+fn get_acl(metadata: &mut Metadata, proc_path: &Path, flags: Flags, fs_feature_flags: &mut Flags) -> Result<(), Error> {
     if !flags.contains(Flags::WITH_ACL) {
         return Ok(());
     }
@@ -919,10 +927,10 @@ fn get_acl(metadata: &mut Metadata, proc_path: &Path, flags: Flags) -> Result<()
         return Ok(());
     }
 
-    get_acl_do(metadata, proc_path, acl::ACL_TYPE_ACCESS)?;
+    get_acl_do(metadata, proc_path, acl::ACL_TYPE_ACCESS, fs_feature_flags)?;
 
     if metadata.is_dir() {
-        get_acl_do(metadata, proc_path, acl::ACL_TYPE_DEFAULT)?;
+        get_acl_do(metadata, proc_path, acl::ACL_TYPE_DEFAULT, fs_feature_flags)?;
     }
 
     Ok(())
@@ -932,6 +940,7 @@ fn get_acl_do(
     metadata: &mut Metadata,
     proc_path: &Path,
     acl_type: acl::ACLType,
+    fs_feature_flags: &mut Flags,
 ) -> Result<(), Error> {
     // In order to be able to get ACLs with type ACL_TYPE_DEFAULT, we have
     // to create a path for acl_get_file(). acl_get_fd() only allows to get
@@ -939,7 +948,10 @@ fn get_acl_do(
     let acl = match acl::ACL::get_file(&proc_path, acl_type) {
         Ok(acl) => acl,
         // Don't bail if underlying endpoint does not support acls
-        Err(Errno::EOPNOTSUPP) => return Ok(()),
+        Err(Errno::EOPNOTSUPP) => {
+            fs_feature_flags.remove(Flags::WITH_ACL);
+            return Ok(());
+        }
         // Don't bail if the endpoint cannot carry acls
         Err(Errno::EBADF) => return Ok(()),
         // Don't bail if there is no data

View File

@@ -368,7 +368,10 @@ impl Flags {
                 Flags::WITH_SYMLINKS |
                 Flags::WITH_DEVICE_NODES |
                 Flags::WITH_FIFOS |
-                Flags::WITH_SOCKETS
+                Flags::WITH_SOCKETS |
+                Flags::WITH_XATTRS |
+                Flags::WITH_ACL |
+                Flags::WITH_FCAPS
             },
         }
     }

View File

@@ -152,14 +152,13 @@ fn log_response(
     let path = &path_query[..MAX_URI_QUERY_LENGTH.min(path_query.len())];
 
     let status = resp.status();
     if !(status.is_success() || status.is_informational()) {
         let reason = status.canonical_reason().unwrap_or("unknown reason");
 
-        let mut message = "request failed";
-        if let Some(data) = resp.extensions().get::<ErrorMessageExtension>() {
-            message = &data.0;
-        }
+        let message = match resp.extensions().get::<ErrorMessageExtension>() {
+            Some(data) => &data.0,
+            None => "request failed",
+        };
 
         log::error!(
             "{} {}: {} {}: [client {}] {}",
@@ -254,7 +253,10 @@ impl tower_service::Service<Request<Body>> for ApiService {
                     Some(apierr) => (apierr.message.clone(), apierr.code),
                     _ => (err.to_string(), StatusCode::BAD_REQUEST),
                 };
-                Response::builder().status(code).body(err.into())?
+                Response::builder()
+                    .status(code)
+                    .extension(ErrorMessageExtension(err.to_string()))
+                    .body(err.into())?
             }
         };
         let logger = config.get_file_log();
@@ -561,7 +563,8 @@ async fn simple_static_file_download(
     let mut response = match compression {
         Some(CompressionMethod::Deflate) => {
             let mut enc = DeflateEncoder::with_quality(data, Level::Default);
-            enc.compress_vec(&mut file, CHUNK_SIZE_LIMIT as usize).await?;
+            enc.compress_vec(&mut file, CHUNK_SIZE_LIMIT as usize)
+                .await?;
             let mut response = Response::new(enc.into_inner().into());
             response.headers_mut().insert(
                 header::CONTENT_ENCODING,

View File

@@ -3,6 +3,7 @@ use std::fs::{File, OpenOptions};
 use std::os::unix::fs::OpenOptionsExt;
 use std::os::unix::io::AsRawFd;
 use std::path::Path;
+use std::convert::TryFrom;
 
 use anyhow::{bail, format_err, Error};
 use endian_trait::Endian;
@@ -122,6 +123,7 @@ pub struct LtoTapeStatus {
 
 pub struct SgTape {
     file: File,
+    locate_offset: Option<i64>,
     info: InquiryInfo,
     encryption_key_loaded: bool,
 }
@@ -145,6 +147,7 @@ impl SgTape {
             file,
             info,
             encryption_key_loaded: false,
+            locate_offset: None,
         })
     }
@@ -300,26 +303,76 @@ impl SgTape {
             return self.rewind();
         }
 
-        let position = position -1;
+        const SPACE_ONE_FILEMARK: &[u8] = &[0x11, 0x01, 0, 0, 1, 0];
+
+        // Special case for position 1, because LOCATE 0 does not work
+        if position == 1 {
+            self.rewind()?;
+            let mut sg_raw = SgRaw::new(&mut self.file, 16)?;
+            sg_raw.set_timeout(Self::SCSI_TAPE_DEFAULT_TIMEOUT);
+            sg_raw.do_command(SPACE_ONE_FILEMARK)
+                .map_err(|err| format_err!("locate file {} (space) failed - {}", position, err))?;
+            return Ok(());
+        }
 
         let mut sg_raw = SgRaw::new(&mut self.file, 16)?;
         sg_raw.set_timeout(Self::SCSI_TAPE_DEFAULT_TIMEOUT);
-        let mut cmd = Vec::new();
 
         // Note: LOCATE(16) works for LTO4 or newer
+        //
+        // It seems the LOCATE command behaves slightly different across vendors
+        // e.g. for IBM drives, LOCATE 1 moves to File #2, but
+        // for HP drives, LOCATE 1 move to File #1
+
+        let fixed_position = if let Some(locate_offset) = self.locate_offset {
+            if locate_offset < 0 {
+                position.saturating_sub((-locate_offset) as u64)
+            } else {
+                position.saturating_add(locate_offset as u64)
+            }
+        } else {
+            position
+        };
+        // always sub(1), so that it works for IBM drives without locate_offset
+        let fixed_position = fixed_position.saturating_sub(1);
+
+        let mut cmd = Vec::new();
         cmd.extend(&[0x92, 0b000_01_000, 0, 0]); // LOCATE(16) filemarks
-        cmd.extend(&position.to_be_bytes());
+        cmd.extend(&fixed_position.to_be_bytes());
         cmd.extend(&[0, 0, 0, 0]);
 
         sg_raw.do_command(&cmd)
             .map_err(|err| format_err!("locate file {} failed - {}", position, err))?;
 
-        // move to other side of filemark
-        cmd.truncate(0);
-        cmd.extend(&[0x11, 0x01, 0, 0, 1, 0]); // SPACE(6) one filemarks
-        sg_raw.do_command(&cmd)
+        // LOCATE always position at the BOT side of the filemark, so
+        // we need to move to other side of filemark
+        sg_raw.do_command(SPACE_ONE_FILEMARK)
            .map_err(|err| format_err!("locate file {} (space) failed - {}", position, err))?;
 
+        if self.locate_offset.is_none() {
+            // check if we landed at correct position
+            let current_file = self.current_file_number()?;
+            if current_file != position {
+                let offset: i64 =
+                    i64::try_from((position as i128) - (current_file as i128)).map_err(|err| {
+                        format_err!(
+                            "locate_file: offset between {} and {} invalid: {}",
+                            position,
+                            current_file,
+                            err
+                        )
+                    })?;
+                self.locate_offset = Some(offset);
+                self.locate_file(position)?;
+                let current_file = self.current_file_number()?;
+                if current_file != position {
+                    bail!("locate_file: compensating offset did not work, aborting...");
+                }
+            } else {
+                self.locate_offset = Some(0);
+            }
+        }
+
         Ok(())
     }

View File

@ -321,6 +321,37 @@ pub fn open_drive(
} }
} }
#[derive(PartialEq, Eq)]
enum TapeRequestError {
None,
EmptyTape,
OpenFailed(String),
WrongLabel(String),
ReadFailed(String),
}
impl std::fmt::Display for TapeRequestError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
TapeRequestError::None => {
write!(f, "no error")
},
TapeRequestError::OpenFailed(reason) => {
write!(f, "tape open failed - {}", reason)
}
TapeRequestError::WrongLabel(label) => {
write!(f, "wrong media label {}", label)
}
TapeRequestError::EmptyTape => {
write!(f, "found empty media without label (please label all tapes first)")
}
TapeRequestError::ReadFailed(reason) => {
write!(f, "tape read failed - {}", reason)
}
}
}
}
/// Requests a specific 'media' to be inserted into 'drive'. Within a /// Requests a specific 'media' to be inserted into 'drive'. Within a
/// loop, this then tries to read the media label and waits until it /// loop, this then tries to read the media label and waits until it
/// finds the requested media. /// finds the requested media.
@@ -388,84 +419,87 @@ pub fn request_and_load_media(
                 return Ok((handle, media_id));
             }
 
-            let mut last_media_uuid = None;
-            let mut last_error = None;
+            let mut last_error = TapeRequestError::None;
 
-            let mut tried = false;
-            let mut failure_reason = None;
+            let update_and_log_request_error =
+                |old: &mut TapeRequestError, new: TapeRequestError| -> Result<(), Error>
+            {
+                if new != *old {
+                    task_log!(worker, "{}", new);
+                    task_log!(
+                        worker,
+                        "Please insert media '{}' into drive '{}'",
+                        label_text,
+                        drive
+                    );
+                    if let Some(to) = notify_email {
+                        send_load_media_email(
+                            drive,
+                            &label_text,
+                            to,
+                            Some(new.to_string()),
+                        )?;
+                    }
+                    *old = new;
+                }
+                Ok(())
+            };
 
             loop {
                 worker.check_abort()?;
 
-                if tried {
-                    if let Some(reason) = failure_reason {
-                        task_log!(worker, "Please insert media '{}' into drive '{}'", label_text, drive);
-                        if let Some(to) = notify_email {
-                            send_load_media_email(drive, &label_text, to, Some(reason))?;
-                        }
-                    }
-
-                    failure_reason = None;
-
+                if last_error != TapeRequestError::None {
                     for _ in 0..50 { // delay 5 seconds
                         worker.check_abort()?;
                         std::thread::sleep(std::time::Duration::from_millis(100));
                     }
-                } else {
-                    task_log!(
-                        worker,
-                        "Checking for media '{}' in drive '{}'",
-                        label_text,
-                        drive
-                    );
                 }
 
-                tried = true;
-
                 let mut handle = match drive_config.open() {
                     Ok(handle) => handle,
                     Err(err) => {
-                        let err = err.to_string();
-                        if Some(err.clone()) != last_error {
-                            task_log!(worker, "tape open failed - {}", err);
-                            last_error = Some(err);
-                            failure_reason = last_error.clone();
-                        }
+                        update_and_log_request_error(
+                            &mut last_error,
+                            TapeRequestError::OpenFailed(err.to_string()),
+                        )?;
                         continue;
                     }
                 };
 
-                match handle.read_label() {
+                let request_error = match handle.read_label() {
+                    Ok((Some(media_id), _)) if media_id.label.uuid == label.uuid => {
+                        task_log!(
+                            worker,
+                            "found media label {} ({})",
+                            media_id.label.label_text,
+                            media_id.label.uuid.to_string(),
+                        );
+                        return Ok((Box::new(handle), media_id));
+                    }
                     Ok((Some(media_id), _)) => {
-                        if media_id.label.uuid == label.uuid {
-                            task_log!(
-                                worker,
-                                "found media label {} ({})",
-                                media_id.label.label_text,
-                                media_id.label.uuid.to_string(),
-                            );
-                            return Ok((Box::new(handle), media_id));
-                        } else if Some(media_id.label.uuid.clone()) != last_media_uuid {
-                            let err = format!(
-                                "wrong media label {} ({})",
-                                media_id.label.label_text,
-                                media_id.label.uuid.to_string(),
-                            );
-                            task_log!(worker, "{}", err);
-                            last_media_uuid = Some(media_id.label.uuid);
-                            failure_reason = Some(err);
-                        }
+                        let label_string = format!(
+                            "{} ({})",
+                            media_id.label.label_text,
+                            media_id.label.uuid.to_string(),
+                        );
+                        TapeRequestError::WrongLabel(label_string)
                     }
                     Ok((None, _)) => {
-                        if last_media_uuid.is_some() {
-                            let err = "found empty media without label (please label all tapes first)";
-                            task_log!(worker, "{}", err);
-                            last_media_uuid = None;
-                            failure_reason = Some(err.to_string());
-                        }
+                        TapeRequestError::EmptyTape
                     }
                     Err(err) => {
-                        let err = err.to_string();
-                        if Some(err.clone()) != last_error {
-                            task_log!(worker, "tape open failed - {}", err);
-                            last_error = Some(err);
-                            failure_reason = last_error.clone();
-                        }
+                        TapeRequestError::ReadFailed(err.to_string())
                     }
-                }
+                };
+
+                update_and_log_request_error(&mut last_error, request_error)?;
             }
         }
         _ => bail!("drive type '{}' not implemented!"),
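A minimal sketch of the interruptible wait used in the retry loop above: sleeping in 100 ms slices so an abort is noticed quickly instead of blocking a full 5 seconds. The AtomicBool stands in for the worker's abort flag; wait_interruptible is a made-up name.

use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::{thread, time::Duration};

fn wait_interruptible(abort: &AtomicBool) -> Result<(), &'static str> {
    for _ in 0..50 {
        // check roughly every 100 ms instead of sleeping 5 s in one go
        if abort.load(Ordering::Relaxed) {
            return Err("aborted");
        }
        thread::sleep(Duration::from_millis(100));
    }
    Ok(())
}

fn main() {
    let abort = Arc::new(AtomicBool::new(false));
    let flag = Arc::clone(&abort);
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(300));
        flag.store(true, Ordering::Relaxed); // simulate a task abort
    });
    println!("{:?}", wait_interruptible(&abort)); // prints Err("aborted")
}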


@@ -46,6 +46,19 @@ pub struct DiskManage {
     mounted_devices: OnceCell<HashSet<dev_t>>,
 }
 
+/// Information for a device as returned by lsblk.
+#[derive(Deserialize)]
+pub struct LsblkInfo {
+    /// Path to the device.
+    path: String,
+    /// Partition type GUID.
+    #[serde(rename = "parttype")]
+    partition_type: Option<String>,
+    /// File system label.
+    #[serde(rename = "fstype")]
+    file_system_type: Option<String>,
+}
+
 impl DiskManage {
     /// Create a new disk management context.
     pub fn new() -> Arc<Self> {
@@ -555,32 +568,36 @@ pub struct BlockDevStat {
     pub io_ticks: u64, // milliseconds
 }
 
-/// Use lsblk to read partition type uuids.
-pub fn get_partition_type_info() -> Result<HashMap<String, Vec<String>>, Error> {
+/// Use lsblk to read partition type uuids and file system types.
+pub fn get_lsblk_info() -> Result<Vec<LsblkInfo>, Error> {
 
     let mut command = std::process::Command::new("lsblk");
-    command.args(&["--json", "-o", "path,parttype"]);
+    command.args(&["--json", "-o", "path,parttype,fstype"]);
 
     let output = crate::tools::run_command(command, None)?;
 
-    let mut res: HashMap<String, Vec<String>> = HashMap::new();
-
-    let output: serde_json::Value = output.parse()?;
-    if let Some(list) = output["blockdevices"].as_array() {
-        for info in list {
-            let path = match info["path"].as_str() {
-                Some(p) => p,
-                None => continue,
-            };
-            let partition_type = match info["parttype"].as_str() {
-                Some(t) => t.to_owned(),
-                None => continue,
-            };
-            let devices = res.entry(partition_type).or_insert(Vec::new());
-            devices.push(path.to_string());
+    let mut output: serde_json::Value = output.parse()?;
+
+    Ok(serde_json::from_value(output["blockdevices"].take())?)
+}
+
+/// Get set of devices with a file system label.
+///
+/// The set is indexed by using the unix raw device number (dev_t is u64)
+fn get_file_system_devices(
+    lsblk_info: &[LsblkInfo],
+) -> Result<HashSet<u64>, Error> {
+
+    let mut device_set: HashSet<u64> = HashSet::new();
+
+    for info in lsblk_info.iter() {
+        if info.file_system_type.is_some() {
+            let meta = std::fs::metadata(&info.path)?;
+            device_set.insert(meta.rdev());
         }
     }
-    Ok(res)
+
+    Ok(device_set)
 }
 
 #[api()]
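A minimal sketch of the serde-based parsing that get_lsblk_info relies on, using a hard-coded JSON string instead of spawning lsblk; the sample device paths and GUID are illustrative only.

use serde::Deserialize;

#[derive(Deserialize)]
struct LsblkInfo {
    path: String,
    #[serde(rename = "parttype")]
    partition_type: Option<String>,
    #[serde(rename = "fstype")]
    file_system_type: Option<String>,
}

fn main() -> Result<(), serde_json::Error> {
    // stands in for the output of `lsblk --json -o path,parttype,fstype`
    let raw = r#"{"blockdevices": [
        {"path": "/dev/sda1", "parttype": "0fc63daf-8483-4772-8e79-3d69d8477de4", "fstype": "ext4"},
        {"path": "/dev/sdb",  "parttype": null, "fstype": null}
    ]}"#;
    let mut value: serde_json::Value = serde_json::from_str(raw)?;
    // same shape as get_lsblk_info: take "blockdevices" and deserialize it as a Vec
    let devices: Vec<LsblkInfo> = serde_json::from_value(value["blockdevices"].take())?;
    for dev in &devices {
        println!("{} parttype={:?} fstype={:?}", dev.path, dev.partition_type, dev.file_system_type);
    }
    Ok(())
}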
@@ -599,6 +616,8 @@ pub enum DiskUsageType {
     DeviceMapper,
     /// Disk has partitions
     Partitions,
+    /// Disk contains a file system label
+    FileSystem,
 }
 
 #[api(
@ -736,14 +755,16 @@ pub fn get_disks(
let disk_manager = DiskManage::new(); let disk_manager = DiskManage::new();
let partition_type_map = get_partition_type_info()?; let lsblk_info = get_lsblk_info()?;
let zfs_devices = zfs_devices(&partition_type_map, None).or_else(|err| -> Result<HashSet<u64>, Error> { let zfs_devices = zfs_devices(&lsblk_info, None).or_else(|err| -> Result<HashSet<u64>, Error> {
eprintln!("error getting zfs devices: {}", err); eprintln!("error getting zfs devices: {}", err);
Ok(HashSet::new()) Ok(HashSet::new())
})?; })?;
let lvm_devices = get_lvm_devices(&partition_type_map)?; let lvm_devices = get_lvm_devices(&lsblk_info)?;
let file_system_devices = get_file_system_devices(&lsblk_info)?;
// fixme: ceph journals/volumes // fixme: ceph journals/volumes
@@ -820,6 +841,10 @@ pub fn get_disks(
             };
         }
 
+        if usage == DiskUsageType::Unused && file_system_devices.contains(&devnum) {
+            usage = DiskUsageType::FileSystem;
+        }
+
         if usage == DiskUsageType::Unused && disk.has_holders()? {
             usage = DiskUsageType::DeviceMapper;
         }


@@ -1,10 +1,12 @@
-use std::collections::{HashSet, HashMap};
+use std::collections::HashSet;
 use std::os::unix::fs::MetadataExt;
 
 use anyhow::{Error};
 use serde_json::Value;
 use lazy_static::lazy_static;
 
+use super::LsblkInfo;
+
 lazy_static!{
     static ref LVM_UUIDS: HashSet<&'static str> = {
         let mut set = HashSet::new();
@@ -17,7 +19,7 @@ lazy_static!{
 ///
 /// The set is indexed by using the unix raw device number (dev_t is u64)
 pub fn get_lvm_devices(
-    partition_type_map: &HashMap<String, Vec<String>>,
+    lsblk_info: &[LsblkInfo],
 ) -> Result<HashSet<u64>, Error> {
 
     const PVS_BIN_PATH: &str = "pvs";
@@ -29,12 +31,12 @@ pub fn get_lvm_devices(
     let mut device_set: HashSet<u64> = HashSet::new();
 
-    for device_list in partition_type_map.iter()
-        .filter_map(|(uuid, list)| if LVM_UUIDS.contains(uuid.as_str()) { Some(list) } else { None })
-    {
-        for device in device_list {
-            let meta = std::fs::metadata(device)?;
-            device_set.insert(meta.rdev());
+    for info in lsblk_info.iter() {
+        if let Some(partition_type) = &info.partition_type {
+            if LVM_UUIDS.contains(partition_type.as_str()) {
+                let meta = std::fs::metadata(&info.path)?;
+                device_set.insert(meta.rdev());
+            }
         }
     }
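A minimal sketch of the GUID lookup pattern shared by get_lvm_devices and zfs_devices: a lazy_static set of well-known partition-type GUIDs matched against lsblk records. The GUID values shown are the commonly documented Linux LVM and Linux-filesystem type GUIDs and are illustrative, not taken from the patch.

use lazy_static::lazy_static;
use std::collections::HashSet;

lazy_static! {
    static ref LVM_UUIDS: HashSet<&'static str> = {
        let mut set = HashSet::new();
        // commonly documented "Linux LVM" partition type GUID (assumption for this sketch)
        set.insert("e6d6d379-f507-44c2-a23c-238f2a3df928");
        set
    };
}

fn main() {
    // (path, parttype) pairs as lsblk might report them
    let candidates = [
        ("/dev/sda2", Some("e6d6d379-f507-44c2-a23c-238f2a3df928")),
        ("/dev/sdb1", Some("0fc63daf-8483-4772-8e79-3d69d8477de4")),
        ("/dev/sdc", None),
    ];
    for (path, parttype) in candidates {
        if let Some(pt) = parttype {
            if LVM_UUIDS.contains(pt) {
                println!("{} looks like an LVM physical volume", path);
            }
        }
    }
}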


@@ -1,5 +1,5 @@
 use std::path::PathBuf;
-use std::collections::{HashMap, HashSet};
+use std::collections::HashSet;
 use std::os::unix::fs::MetadataExt;
 
 use anyhow::{bail, Error};
@@ -67,12 +67,11 @@ pub fn zfs_pool_stats(pool: &OsStr) -> Result<Option<BlockDevStat>, Error> {
     Ok(Some(stat))
 }
 
 /// Get set of devices used by zfs (or a specific zfs pool)
 ///
 /// The set is indexed by using the unix raw device number (dev_t is u64)
 pub fn zfs_devices(
-    partition_type_map: &HashMap<String, Vec<String>>,
+    lsblk_info: &[LsblkInfo],
     pool: Option<String>,
 ) -> Result<HashSet<u64>, Error> {
@@ -86,12 +85,12 @@ pub fn zfs_devices(
         }
     }
 
-    for device_list in partition_type_map.iter()
-        .filter_map(|(uuid, list)| if ZFS_UUIDS.contains(uuid.as_str()) { Some(list) } else { None })
-    {
-        for device in device_list {
-            let meta = std::fs::metadata(device)?;
-            device_set.insert(meta.rdev());
+    for info in lsblk_info.iter() {
+        if let Some(partition_type) = &info.partition_type {
+            if ZFS_UUIDS.contains(partition_type.as_str()) {
+                let meta = std::fs::metadata(&info.path)?;
+                device_set.insert(meta.rdev());
+            }
         }
     }
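A minimal sketch of the dev_t indexing both helpers use: std::fs::metadata plus MetadataExt::rdev collects raw device numbers into a HashSet. /dev/null and /dev/zero are just convenient device nodes that exist on any Linux system.

use std::collections::HashSet;
use std::os::unix::fs::MetadataExt;

fn main() -> std::io::Result<()> {
    let mut device_set: HashSet<u64> = HashSet::new();
    for path in ["/dev/null", "/dev/zero"] {
        let meta = std::fs::metadata(path)?;
        device_set.insert(meta.rdev()); // raw device number used as the set key
    }
    println!("{} distinct device numbers", device_set.len());
    Ok(())
}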


@@ -100,9 +100,25 @@ pub struct LruCache<K, V> {
     _marker: PhantomData<Box<CacheNode<K, V>>>,
 }
 
+impl<K, V> Drop for LruCache<K, V> {
+    fn drop (&mut self) {
+        self.clear();
+    }
+}
+
 // trivial: if our contents are Send, the whole cache is Send
 unsafe impl<K: Send, V: Send> Send for LruCache<K, V> {}
 
+impl<K, V> LruCache<K, V> {
+    /// Clear all the entries from the cache.
+    pub fn clear(&mut self) {
+        // This frees only the HashMap with the node pointers.
+        self.map.clear();
+        // This frees the actual nodes and resets the list head and tail.
+        self.list.clear();
+    }
+}
+
 impl<K: std::cmp::Eq + std::hash::Hash + Copy, V> LruCache<K, V> {
     /// Create LRU cache instance which holds up to `capacity` nodes at once.
     pub fn new(capacity: usize) -> Self {
@@ -115,14 +131,6 @@ impl<K: std::cmp::Eq + std::hash::Hash + Copy, V> LruCache<K, V> {
         }
     }
 
-    /// Clear all the entries from the cache.
-    pub fn clear(&mut self) {
-        // This frees only the HashMap with the node pointers.
-        self.map.clear();
-        // This frees the actual nodes and resets the list head and tail.
-        self.list.clear();
-    }
-
     /// Insert or update an entry identified by `key` with the given `value`.
     /// This entry is placed as the most recently used node at the head.
     pub fn insert(&mut self, key: K, value: V) {
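A minimal sketch (simplified types, not the real LruCache) of the Drop-reuses-clear pattern above: clear lives in an impl without the Eq + Hash + Copy bounds, so the unbounded Drop impl can call it and the owned entries are actually freed.

use std::collections::HashMap;

struct Cache<K, V> {
    map: HashMap<u64, usize>, // stand-in for the pointer map
    list: Vec<(K, V)>,        // stand-in for the list that owns the nodes
}

impl<K, V> Cache<K, V> {
    // no trait bounds here, so the Drop impl below can call it for any K, V
    fn clear(&mut self) {
        self.map.clear();
        self.list.clear(); // this is what actually frees the owned entries
    }
}

impl<K, V> Drop for Cache<K, V> {
    fn drop(&mut self) {
        self.clear();
    }
}

fn main() {
    let mut c: Cache<String, Vec<u8>> = Cache { map: HashMap::new(), list: Vec::new() };
    c.list.push(("chunk".into(), vec![0u8; 16]));
    drop(c); // Drop::drop runs clear(), releasing the owned entries
}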


@@ -2,8 +2,8 @@
 use std::cell::RefCell;
 use std::future::Future;
-use std::sync::{Arc, Weak, Mutex};
-use std::task::{Context, Poll, RawWaker, Waker};
+use std::sync::{Arc, Mutex, Weak};
+use std::task::{Context, Poll, Waker};
 use std::thread::{self, Thread};
 
 use lazy_static::lazy_static;
@@ -15,8 +15,7 @@ thread_local! {
 }
 
 fn is_in_tokio() -> bool {
-    tokio::runtime::Handle::try_current()
-        .is_ok()
+    tokio::runtime::Handle::try_current().is_ok()
 }
 
 fn is_blocking() -> bool {
@@ -49,7 +48,8 @@ lazy_static! {
     static ref RUNTIME: Mutex<Weak<Runtime>> = Mutex::new(Weak::new());
 }
 
-extern {
+#[link(name = "crypto")]
+extern "C" {
     fn OPENSSL_thread_stop();
 }
@@ -58,16 +58,19 @@ extern {
 /// This makes sure that tokio's worker threads are marked for us so that we know whether we
 /// can/need to use `block_in_place` in our `block_on` helper.
 pub fn get_runtime_with_builder<F: Fn() -> runtime::Builder>(get_builder: F) -> Arc<Runtime> {
     let mut guard = RUNTIME.lock().unwrap();
 
-    if let Some(rt) = guard.upgrade() { return rt; }
+    if let Some(rt) = guard.upgrade() {
+        return rt;
+    }
 
     let mut builder = get_builder();
     builder.on_thread_stop(|| {
         // avoid openssl bug: https://github.com/openssl/openssl/issues/6214
         // call OPENSSL_thread_stop to avoid race with openssl cleanup handlers
-        unsafe { OPENSSL_thread_stop(); }
+        unsafe {
+            OPENSSL_thread_stop();
+        }
     });
 
     let runtime = builder.build().expect("failed to spawn tokio runtime");
@@ -82,7 +85,6 @@ pub fn get_runtime_with_builder<F: Fn() -> runtime::Builder>(get_builder: F) ->
 ///
 /// This calls get_runtime_with_builder() using the tokio default threaded scheduler
 pub fn get_runtime() -> Arc<Runtime> {
-
     get_runtime_with_builder(|| {
         let mut builder = runtime::Builder::new_multi_thread();
         builder.enable_all();
@@ -90,7 +92,6 @@ pub fn get_runtime() -> Arc<Runtime> {
     })
 }
 
-
 /// Block on a synchronous piece of code.
 pub fn block_in_place<R>(fut: impl FnOnce() -> R) -> R {
     // don't double-exit the context (tokio doesn't like that)
@@ -155,12 +156,22 @@ pub fn main<F: Future>(fut: F) -> F::Output {
     block_on(fut)
 }
 
+struct ThreadWaker(Thread);
+
+impl std::task::Wake for ThreadWaker {
+    fn wake(self: Arc<Self>) {
+        self.0.unpark();
+    }
+
+    fn wake_by_ref(self: &Arc<Self>) {
+        self.0.unpark();
+    }
+}
+
 fn block_on_local_future<F: Future>(fut: F) -> F::Output {
     pin_mut!(fut);
 
-    let waker = Arc::new(thread::current());
-    let waker = thread_waker_clone(Arc::into_raw(waker) as *const ());
-    let waker = unsafe { Waker::from_raw(waker) };
+    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
     let mut context = Context::from_waker(&waker);
     loop {
         match fut.as_mut().poll(&mut context) {
@@ -169,34 +180,3 @@ fn block_on_local_future<F: Future>(fut: F) -> F::Output {
         }
     }
 }
-
-const THREAD_WAKER_VTABLE: std::task::RawWakerVTable = std::task::RawWakerVTable::new(
-    thread_waker_clone,
-    thread_waker_wake,
-    thread_waker_wake_by_ref,
-    thread_waker_drop,
-);
-
-fn thread_waker_clone(this: *const ()) -> RawWaker {
-    let this = unsafe { Arc::from_raw(this as *const Thread) };
-    let cloned = Arc::clone(&this);
-    let _ = Arc::into_raw(this);
-
-    RawWaker::new(Arc::into_raw(cloned) as *const (), &THREAD_WAKER_VTABLE)
-}
-
-fn thread_waker_wake(this: *const ()) {
-    let this = unsafe { Arc::from_raw(this as *const Thread) };
-    this.unpark();
-}
-
-fn thread_waker_wake_by_ref(this: *const ()) {
-    let this = unsafe { Arc::from_raw(this as *const Thread) };
-    this.unpark();
-    let _ = Arc::into_raw(this);
-}
-
-fn thread_waker_drop(this: *const ()) {
-    let this = unsafe { Arc::from_raw(this as *const Thread) };
-    drop(this);
-}
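A minimal self-contained sketch of the safe replacement for the removed RawWaker plumbing: std::task::Wake on a struct wrapping the current Thread, turned into a Waker via Waker::from, with park/unpark driving a tiny block_on. It uses std's pin! instead of the crate's pin_mut!, and the block_on name here is only for illustration.

use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    // no unsafe vtable juggling: Waker::from handles the refcounting
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(), // woken again by ThreadWaker::wake
        }
    }
}

fn main() {
    println!("{}", block_on(async { 21 * 2 }));
}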


@@ -258,15 +258,27 @@ pub fn read_subscription() -> Result<Option<SubscriptionInfo>, Error> {
     let new_checksum = base64::encode(tools::md5sum(new_checksum.as_bytes())?);
 
     if checksum != new_checksum {
-        bail!("stored checksum doesn't matches computed one '{}' != '{}'", checksum, new_checksum);
+        return Ok(Some( SubscriptionInfo {
+            status: SubscriptionStatus::INVALID,
+            message: Some("checksum mismatch".to_string()),
+            ..info
+        }));
     }
 
     let age = proxmox::tools::time::epoch_i64() - info.checktime.unwrap_or(0);
     if age < -5400 { // allow some delta for DST changes or time syncs, 1.5h
-        bail!("Last check time to far in the future.");
+        return Ok(Some( SubscriptionInfo {
+            status: SubscriptionStatus::INVALID,
+            message: Some("last check date too far in the future".to_string()),
+            ..info
+        }));
     } else if age > MAX_LOCAL_KEY_AGE + MAX_KEY_CHECK_FAILURE_AGE {
         if let SubscriptionStatus::ACTIVE = info.status {
-            bail!("subscription information too old");
+            return Ok(Some( SubscriptionInfo {
+                status: SubscriptionStatus::INVALID,
+                message: Some("subscription information too old".to_string()),
+                ..info
+            }));
         }
     }
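A minimal sketch of the pattern introduced above: instead of bailing, return the parsed info downgraded to an invalid status via struct update syntax (..info). The Status/Info types here are simplified stand-ins for SubscriptionStatus/SubscriptionInfo.

#[derive(Debug, Clone, PartialEq)]
enum Status { Active, Invalid }

#[derive(Debug, Clone)]
struct Info {
    status: Status,
    message: Option<String>,
    key: String,
}

fn check(info: Info, checksum_ok: bool) -> Option<Info> {
    if !checksum_ok {
        // keep all other fields, only downgrade the status and attach a reason
        return Some(Info {
            status: Status::Invalid,
            message: Some("checksum mismatch".to_string()),
            ..info
        });
    }
    Some(info)
}

fn main() {
    let info = Info { status: Status::Active, message: None, key: "pbs-test".into() };
    println!("{:?}", check(info, false)); // stays readable in the UI instead of erroring out
}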


@@ -218,6 +218,16 @@ Ext.define('PBS.MainView', {
                     flex: 1,
                     baseCls: 'x-plain',
                 },
+                {
+                    xtype: 'proxmoxEOLNotice',
+                    product: 'Proxmox Backup Server',
+                    version: '1.1',
+                    eolDate: '2022-07-31',
+                    href: 'pbs.proxmox.com/docs/faq.html#how-long-will-my-proxmox-backup-server-version-be-supported',
+                },
+                {
+                    flex: 1,
+                },
                 {
                     xtype: 'button',
                     baseCls: 'x-btn',


@@ -70,7 +70,7 @@ Ext.define('PBS.DatastoreStatistics', {
                 if (err) {
                     metaData.tdAttr = `data-qtip="${Ext.htmlEncode(err)}"`;
                     metaData.tdCls = 'proxmox-invalid-row';
-                    return `${value || ''} <i class="fa fa-fw critical fa-exclamation-circle"><i>`;
+                    return `${value || ''} <i class="fa fa-fw critical fa-exclamation-circle"></i>`;
                 }
                 return value;
             },


@@ -33,13 +33,11 @@ Ext.define('PBS.Datastore.Options', {
             note: gettext('Configuration change only, no data will be deleted.'),
             autoShow: true,
             taskName: 'delete-datastore',
-            listeners: {
-                destroy: () => {
-                    let navtree = Ext.ComponentQuery.query('navigationtree')[0];
-                    navtree.rstore.load();
-                    let mainview = me.getView().up('mainview');
-                    mainview.getController().redirectTo('pbsDataStores');
-                },
+            apiCallDone: (success) => {
+                let navtree = Ext.ComponentQuery.query('navigationtree')[0];
+                navtree.rstore.load();
+                let mainview = me.getView().up('mainview');
+                mainview.getController().redirectTo('pbsDataStores');
             },
         });
     },