Compare commits

...

196 Commits

Author SHA1 Message Date
beaa683a52 bump version to 0.8.9-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-24 11:24:56 +02:00
33a88dafb9 server/state: add spawn_internal_task and use it for websockets
spawn_internal_task is a helper to spawn an internal tokio task without it
showing up in the task list

it is still tracked for reload and notifies the last_worker_listeners

this enables the console to survive a reload of proxmox-backup-proxy

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-24 11:17:33 +02:00
224c65f8de termproxy: let users stop the termproxy task
for that we have to do a select on the worker's abort_future

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-24 11:17:33 +02:00
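A minimal sketch of that pattern (illustrative names and the current tokio API, not the actual proxmox-backup code): race the termproxy child process against the worker's abort future, so stopping the worker also terminates the spawned process.

```rust
use std::io;
use tokio::process::Command;

// Hedged sketch: run termproxy as a child process and stop it when the
// worker's abort future fires (e.g. the user pressed "Stop").
async fn run_termproxy(abort_future: impl std::future::Future<Output = ()>) -> io::Result<()> {
    let mut child = Command::new("termproxy").spawn()?;

    let aborted = tokio::select! {
        status = child.wait() => {
            let _ = status?; // termproxy exited on its own
            false
        }
        _ = abort_future => true, // worker task was aborted
    };

    if aborted {
        child.kill().await?; // terminate termproxy together with the worker
    }
    Ok(())
}
```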
f2b4b4b9fe fix 2885: bail on duplicate backup target
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-07-24 11:08:56 +02:00
ea9e559fc4 client: log archive upload duration more accurately, fix grammar
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-24 10:15:28 +02:00
0cf14984cc client: avoid division by zero in avg speed calculation, be more accurate
using micros instead of as_secs_f64 allows the speed to be calculated in
whole (usize) bytes, which is easier to handle; this was also how it was
done when the calculation still lived in upload_chunk_info_stream

Co-authored-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-24 10:14:40 +02:00
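A rough sketch of that kind of calculation (illustrative only, not the exact client code): working in elapsed microseconds keeps everything in integer arithmetic and sidesteps dividing by a zero-second duration.

```rust
use std::time::Instant;

// Hedged sketch: average upload speed in bytes/s computed from elapsed microseconds.
fn average_speed(uploaded_bytes: usize, start: Instant) -> usize {
    let micros = start.elapsed().as_micros().max(1); // clamp to avoid division by zero
    ((uploaded_bytes as u128 * 1_000_000) / micros) as usize
}
```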
7d07b73def bump version to 0.8.8-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 13:12:18 +02:00
3d3670d786 termproxy: cmd: support upgrade
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 13:12:18 +02:00
14291179ce d/control: add dependency for pve-xtermjs
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 12:57:11 +02:00
e744de0eb0 api: termproxy: fix ACL as /nodes is /system
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 12:57:11 +02:00
98b1733760 api: apt: use schema default const for quiet param
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 12:25:28 +02:00
fdac28fcec update proxmox crate to get latest websocket implementation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 12:15:49 +02:00
653e2031d2 ui: add Console Button
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-23 12:06:38 +02:00
01ca99da2d server/rest: add console to index
register the console template and render it when the 'console' parameter
is given

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-23 12:06:38 +02:00
1c2f842a98 api2/nodes: add termproxy and vncwebsocket api calls
Even though it has nothing to do with vnc, we keep the name of the api
call for compatibility with our xtermjs client.

termproxy:
verifies that the user is allowed to open a console and starts
termproxy with the correct parameters

starts a TcpListener on "localhost:0" so that the kernel decides the
port (instead of trying to reserve one like in PVE). It then leaves the
fd open for termproxy, passes the fd number as the port, and tells it
via '--port-as-fd' that it should interpret this value as an open fd

the vncwebsocket api call checks the 'vncticket' (name for compatibility)
and connects the remote side (after an Upgrade) with a local TcpStream
connecting to the port given via WebSocket from the proxmox crate

to make sure that only the client that called termproxy can connect, and
that no one can connect to an arbitrary port on the host, we have to
include the port in the ticket data

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-23 12:06:38 +02:00
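The port-allocation part of that description boils down to something like the following sketch (simplified and hypothetical, not the actual implementation):

```rust
use std::os::unix::io::{AsRawFd, RawFd};
use tokio::net::TcpListener;

// Hedged sketch: let the kernel pick a free port, remember it for the ticket,
// and keep the open listener fd around to hand to termproxy via --port-as-fd.
async fn prepare_listener() -> std::io::Result<(RawFd, u16)> {
    let listener = TcpListener::bind("localhost:0").await?;
    let port = listener.local_addr()?.port(); // kernel-chosen port, included in the ticket data
    let fd = listener.as_raw_fd();            // passed to termproxy as the "port" value
    std::mem::forget(listener);               // keep the fd alive for the child process (sketch only)
    Ok((fd, port))
}
```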
a4d1675513 api2/access: implement term ticket
modeled after PVE's/PMG's vncticket (with 'vnc' substituted by 'term')
by putting the path and username as secret data in the ticket

when sending the ticket to /access/ticket it only verifies it,
checks the privs on the path and does not generate a new ticket

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-23 11:55:00 +02:00
2ab5acac5a server/config: add mechanism to update template
instead of exposing handlebars itself, offer our own register_template
and render_template functions.

render_template checks if the template file was modified since
the last render and reloads it when necessary

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-23 11:55:00 +02:00
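A minimal sketch of such a wrapper around the handlebars crate (names and error handling here are illustrative, not the real function signatures): remember each template's path and mtime, and re-register it before rendering if the file changed on disk.

```rust
use std::collections::HashMap;
use std::path::PathBuf;
use std::time::SystemTime;

use handlebars::Handlebars;
use serde::Serialize;

// Hedged sketch of a template cache that reloads modified template files.
struct TemplateCache {
    registry: Handlebars<'static>,
    files: HashMap<String, (PathBuf, SystemTime)>,
}

impl TemplateCache {
    fn render_template<T: Serialize>(
        &mut self,
        name: &str,
        data: &T,
    ) -> Result<String, Box<dyn std::error::Error>> {
        if let Some((path, last_mtime)) = self.files.get_mut(name) {
            let mtime = std::fs::metadata(&*path)?.modified()?;
            if mtime > *last_mtime {
                self.registry.register_template_file(name, &*path)?; // reload changed template
                *last_mtime = mtime;
            }
        }
        Ok(self.registry.render(name, data)?)
    }
}
```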
27fde64794 api: apt update must run protected
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 11:45:52 +02:00
fa3f0584bb api: apt: support refreshing package index
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 11:21:54 +02:00
d12720c796 docs: epilog: point "Proxmox Backup" hyperlink to pbs wiki
This changes the "Proxmox Backup" hyperlink, which is referred to throughout the
Proxmox Backup Server documentation. Following this patch, it now points to the
pbs wiki page, rather than the unpublished product page.

*Note: This change is only a temporary measure, while the product page
(https://www.proxmox.com/proxmox-backup) is in development.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-07-23 10:43:17 +02:00
a4e86972a4 add .../apt/update API call
Depends on patched apt-pkg-native-rs. Changelog-URL detection is
inspired by PVE perl code for now, though marked with fixme to use 'apt
changelog' later on, if/when our repos have APT-compatible changelogs
set up.

list_installed_apt_packages iterates all packages and creates an
APTUpdateInfo with detailed information for every package matched by the
given filter Fn.

Sadly, libapt-pkg has some questionable design choices regarding their
use of 'iterators', which means quite a bit of nesting...

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-23 10:41:14 +02:00
3a3af6e2b6 backup manifest: make lookup_file_info public
useful in libproxmox-backup-qemu to get info such as whether the
previous snapshot was encrypted

Requested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 10:39:21 +02:00
482409641f docs: remove duplicate feature
Signed-off-by: Moayad Almalat <m.almalat@proxmox.com>
2020-07-23 10:29:08 +02:00
9688f6de0f client: log index.json upload only when verbose
The user expects that we know which archives, fidx or didx, are in a
backup, so this is internal info and should not be logged by
default

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 10:28:38 +02:00
5b32820e93 client: don't use debug format for printing BackupRepository
It implements the fmt::Display trait after all

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 10:28:38 +02:00
f40b4fb05a client writer: do not output chunklist for now on verbose true
Verbosity needs to be a non-binary level, as this is now just
debug/development info, which is normally too much for end users.

We want to have it available, but with a much higher verbosity level.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 10:28:38 +02:00
6e1deb158a client: rework logging upload size, bandwidth, ... info
Track reused size and chunk counts.
Log reused size and use pretty print for all sizes and bandwidth
metrics.
Calculate speed over the actually uploaded size, as otherwise it can be
skewed really badly (showing something like terabytes per second)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 10:28:38 +02:00
50ec1a8712 tools/format: add struct to pretty print bytes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 09:36:02 +02:00
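The idea behind such a struct is roughly the following (a sketch, not the actual tools/format code): wrap a byte count and render it with a binary unit prefix via fmt::Display.

```rust
use std::fmt;

// Hedged sketch: pretty-print a byte count with binary unit prefixes.
struct HumanByte(usize);

impl fmt::Display for HumanByte {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        let mut v = self.0 as f64;
        for unit in &["B", "KiB", "MiB", "GiB", "TiB"] {
            if v < 1024.0 {
                return write!(f, "{:.2} {}", v, unit);
            }
            v /= 1024.0;
        }
        write!(f, "{:.2} PiB", v)
    }
}
```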
a74b026baa systemd/time: document CalendarEvent struct and add TODOs
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-23 07:55:42 +02:00
7e42ccdaf2 fixed index: chunk_from_offset: avoid slow modulo operation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-22 17:46:07 +02:00
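For a fixed index the mapping from a byte offset to a chunk can be done with a single division, roughly like this (sketch with illustrative names):

```rust
// Hedged sketch: locate the chunk containing `offset` in a fixed-chunk-size index
// without a separate modulo operation.
fn chunk_from_offset(chunk_size: u64, offset: u64) -> (usize, u64) {
    let chunk_index = offset / chunk_size;
    let offset_in_chunk = offset - chunk_index * chunk_size; // reuses the division result instead of `%`
    (chunk_index as usize, offset_in_chunk)
}
```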
e713ee5c56 remove BufferedFixedReader interface
replaced by AsyncIndexReader

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-22 17:28:49 +02:00
ec5f9d3525 implement AsyncSeek for AsyncIndexReader
Requires updating the AsyncRead implementation to cope with byte-wise
seeks to intra-chunk positions.

Uses chunk_from_offset to get locations within chunks, but tries to
avoid it for sequential read to not reduce performance from before.

AsyncSeek needs to use the temporary seek_to_pos to avoid changing the
position in case an invalid seek is given and it needs to error in
poll_complete.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-22 17:28:49 +02:00
d0463b67ca add and implement chunk_from_offset for IndexFile
Necessary for byte-wise seeking through chunks in an index.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-22 17:28:49 +02:00
2ff4c2cd5f datastore/chunker: fix comment typos
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-22 16:12:49 +02:00
c3b090ac8a backup: list images: handle walkdir error, catch "lost+found"
We support using an ext4 mountpoint directly as datastore and even do
so ourself when creating one through the disk manage code.

Such ext4 mountpoints have a lost+found directory which only root can
traverse into. As the GC's list images runs as the backup:backup user,
walkdir gets an error there.

We cannot ignore all permission errors, as they could lead to
missing some backup indexes and thus possibly sweeping more chunks
than desired. While *normally* that should not happen through our
stack, we already had user reports of rsyncs used to move a
datastore from an old to a new server where the permissions ended up wrong.

So for now remain very strict: only allow a "lost+found" directory
as an immediate child of the datastore base directory, nothing else.

If deemed safe, this can always be made less strict. Possibly by
filtering the known backup-types on the highest level first.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-22 16:01:55 +02:00
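A rough sketch of that filtering, using the walkdir crate with illustrative names (not the actual GC code): skip a top-level "lost+found" entry before descending into it, and keep failing on every other error.

```rust
use std::path::{Path, PathBuf};
use walkdir::WalkDir;

// Hedged sketch: collect index files, ignoring only the datastore's own
// "lost+found" directory; any other walkdir error still aborts the listing.
fn list_index_files(base: &Path) -> Result<Vec<PathBuf>, walkdir::Error> {
    let mut list = Vec::new();
    let walker = WalkDir::new(base)
        .into_iter()
        .filter_entry(|e| e.file_name() != "lost+found");
    for entry in walker {
        let entry = entry?; // permission error elsewhere -> bail
        let path = entry.path();
        if path.extension().map_or(false, |ext| ext == "fidx" || ext == "didx") {
            list.push(path.to_owned());
        }
    }
    Ok(list)
}
```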
c47e294ea7 datastore: fix typo
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-22 15:04:14 +02:00
25455bd06d fix #2871: close FDs when scanning backup group
otherwise we leak those descriptors and run into EMFILE when a backup
group contains many snapshots.

fcntl::openat and Dir::openat are not the same ;)

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-22 09:19:29 +02:00
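The `fcntl::openat` vs `Dir::openat` distinction mentioned above matters because one returns a raw descriptor the caller must close and the other an owning handle; a hedged sketch with the nix crate:

```rust
use nix::dir::Dir;
use nix::fcntl::OFlag;
use nix::sys::stat::Mode;
use std::os::unix::io::RawFd;

// Hedged sketch: nix::fcntl::openat hands back a RawFd that must be closed
// explicitly (otherwise the fd leaks and EMFILE follows), while Dir::openat
// returns an owning Dir that closes the descriptor on drop.
fn open_group_dir(parent_fd: RawFd, name: &str) -> nix::Result<Dir> {
    // leaks unless nix::unistd::close() is called on the returned fd:
    // let raw_fd = nix::fcntl::openat(parent_fd, name, OFlag::O_DIRECTORY, Mode::empty())?;
    Dir::openat(parent_fd, name, OFlag::O_RDONLY | OFlag::O_DIRECTORY, Mode::empty())
}
```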
c1c4a18f48 fix #2865: detect and skip vanished snapshots
also when they have been removed/forgotten since we retrieved the
snapshot list for the currently syncing backup group.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-22 09:19:29 +02:00
91f5594c08 api: translate ENOTFOUND to 404 for downloads
and percolate the HttpError back up on the client side

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-22 09:19:29 +02:00
86f6f74114 fix #2860: skip in-progress snapshots when syncing
they don't have a final manifest yet and are not done, so they can't be
synced either.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-22 09:19:29 +02:00
13d9fe3a6c .gitignore: add build directory
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-22 09:19:29 +02:00
41e4388005 ui: add calendar event selector
modelled after the PVE one, but we are not 1:1 compatible and need
deleteEmpty support. For now let's just have some duplicate code, but
we should try to move this to widget toolkit ASAP.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-21 19:33:52 +02:00
06a94edcf6 ui: sync job: default to false for "remove-vanished"
can be enabled later on easily, and restoring deleted snapshots
isn't easy.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-21 19:33:52 +02:00
ef496e2c20 ui: sync job: group remote fields and use "Source" in labels
Using "Source" helps to understand that this is a "pull from remote"
sync, not a "push to remote" one.

https://forum.proxmox.com/threads/suggestions-regarding-configurations-terminology.73272/

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-21 19:33:52 +02:00
113c9b5981 move subscription API path to /nodes
This aligns it with PVE and allows the widget toolkit's update window
"refresh" to work without modifications once POST /apt/update is
implemented.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-07-21 19:33:52 +02:00
956295cefe parse_calendar_event: support the weekly special expression
While we do not yet support the date specs for CalendarEvent, the left
out "weekly" special expression[0] does not require that support.
It is specified to be equivalent to `Mon *-*-* 00:00:00` [0] and
this can be implemented with the weekday and time support we already
have.

[0]: https://www.freedesktop.org/software/systemd/man/systemd.time.html#Calendar%20Events

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-21 13:24:51 +02:00
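Given existing weekday and time support, the special expression can be handled by expanding it to its documented equivalent before parsing; a hedged sketch (the 'daily'/'hourly' entries are shown only for illustration):

```rust
// Hedged sketch: expand systemd.time(7) special expressions to their documented
// equivalents before running the regular calendar event parser.
fn expand_special(event: &str) -> &str {
    match event.trim() {
        "weekly" => "Mon *-*-* 00:00:00", // per systemd.time(7)
        "daily" => "*-*-* 00:00:00",
        "hourly" => "*-*-* *:00:00",
        other => other,
    }
}
```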
a26c27c8e6 api2/status: fix estimation bug
when a datastore has enough data to calculate the estimated full date,
but always has exactly the same usage, the factor b of the regression
is '0'

return 0 for that case so that the gui can show 'never' instead of
'not enough data'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-21 13:02:08 +02:00
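The estimate comes from a simple linear regression of used bytes over time; a sketch of the guard described above (illustrative, not the actual api2/status code):

```rust
// Hedged sketch: least-squares fit usage = a + b * time, then report the time
// at which the datastore would be full. A slope b of exactly 0 means usage
// never grows; return 0 so the GUI can show "never" instead of "not enough data".
fn estimate_full_epoch(times: &[f64], usages: &[f64], total: f64) -> Option<f64> {
    let n = times.len() as f64;
    if n < 2.0 {
        return None; // not enough data points
    }
    let mean_t = times.iter().sum::<f64>() / n;
    let mean_u = usages.iter().sum::<f64>() / n;
    let cov: f64 = times.iter().zip(usages).map(|(t, u)| (t - mean_t) * (u - mean_u)).sum();
    let var: f64 = times.iter().map(|t| (t - mean_t) * (t - mean_t)).sum();
    let b = cov / var; // growth per second
    let a = mean_u - b * mean_t;
    if b == 0.0 {
        return Some(0.0); // constant usage -> "never"
    }
    Some((total - a) / b)
}
```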
0c1c492d48 docs: fix some typos
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-21 13:01:21 +02:00
255ed62166 docs: GC followup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-21 12:58:47 +02:00
b96b11cdb7 chunk_store: Fix typo in bail message
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-07-21 12:51:41 +02:00
faa8e6948a backup: Fix typos and grammar
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-07-21 12:51:41 +02:00
8314ca9c10 docs: fix #2851 Add note about GC grace period
Adding a note about the garbage collection's grace period due to the
default atime behavior should help to avoid confusion as to why space is
not freed immediately.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-07-21 12:51:41 +02:00
538c2b6dcf followup: fixup the directory number, refactor
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-20 14:39:02 +02:00
e9b44bec01 docs: add note on supported filesystems
certain filesystems cannot be used as chunkstores, because they don't
support 2^16 subdirectories (e.g. ext4 with certain features disabled
or ext3 - see ext4(5))

reported via our community forum:
https://forum.proxmox.com/threads/emlink-too-many-links.73108/

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-07-20 14:10:39 +02:00
65418a0763 docs: introduction: rewording and fixing of minor errors
Reworded one sentence for improved readability.
Fixed some minor language errors.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-07-20 14:10:39 +02:00
aef4976801 docs: admin guide: fix grammatical errors and improve English
Mostly fixed typos and grammatical errors.
Improved wording in some sections to make instructions/advice clearer.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-07-20 14:10:39 +02:00
295d4f4116 bump udev build-dependency
0.4 contains a fix for C chars on non-x86 architectures.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-20 12:11:54 +02:00
c47a900ceb build: run tests on build (again)
now that all examples and tests are fixed again.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-20 11:37:53 +02:00
1b1110581a manifest: revert canonicalization to old behaviour
JSON keys MUST be quoted. this is a one-time break in signature
validation for backups created with the broken canonicalization code.
QEMU backups are not affected, as libproxmox-backup-qemu never linked
the broken versions.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-07-20 11:37:53 +02:00
eb13d9151a examples/upload-speed: adapt to change
commit 323b2f3dd6
changed the signature of upload_speedtest
adapt the example

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-20 10:22:42 +02:00
449e4a66fe tools/xattr: a char from C is not universally a rust i8
Make it actually do the correct cast by using `libc::c_char`.

Fixes issues when building on other platforms, e.g., the aarch64
client-only build on Arch Linux ARM that I tested in my free time.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-19 19:46:27 +02:00
217c22c754 server: add path value to NOT_FOUND http error
Especially helpful for requests not coming from browsers (where the
URL is normally easy to find out).

Makes it easier to detect if one triggered a request with an old
client, or similar.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-16 12:46:51 +02:00
ba5b8a3e76 bump pxar dependency to 0.2.1
Contains a fix for the check for the maximum allowed size of
acl group object entries.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-16 11:48:22 +02:00
ac5e9e770b catalog_shell: add exit command
it is nice to have a command to exit from the shell instead of
only allowing ctrl+d or ctrl+c

the api method is just for documentation/help purposes and does nothing
by itself, the real logic is directly in the read loop

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-15 12:19:57 +02:00
b25deec0be pxar: .pxarexclude: absolute paths and byte based paths
Change the .pxarexclude parser to byte based parsing with
`.split(b'\n')` instead of `.lines()`, to not panic on
non-utf8 paths.

Specially deal with absolute paths by prefixing them with
the current directory.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-15 11:55:48 +02:00
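In sketch form, the byte-based parsing looks roughly like this (hedged, with an inline trim standing in for the real helper and a hypothetical current_dir prefix argument):

```rust
// Hedged sketch: parse .pxarexclude content byte-wise so non-UTF-8 patterns
// don't cause a panic, and anchor absolute patterns to the current directory.
fn parse_exclude_lines(content: &[u8], current_dir: &[u8]) -> Vec<Vec<u8>> {
    let mut patterns = Vec::new();
    for line in content.split(|&b| b == b'\n') {
        // trim ASCII whitespace on the byte level
        let start = match line.iter().position(|b| !b.is_ascii_whitespace()) {
            Some(start) => start,
            None => continue, // blank line
        };
        let end = line.iter().rposition(|b| !b.is_ascii_whitespace()).unwrap() + 1;
        let line = &line[start..end];

        if line.starts_with(b"#") {
            continue; // comment line
        }
        if line.starts_with(b"/") {
            // absolute pattern: anchor it by prefixing the current directory
            let mut full = current_dir.to_vec();
            full.extend_from_slice(line);
            patterns.push(full);
        } else {
            patterns.push(line.to_vec());
        }
    }
    patterns
}
```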
cdf1da2872 tools: add strip_ascii_whitespace for byte slices
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-15 11:55:48 +02:00
3cfc56f5c2 cached user info: check_privs: print privilege path in error message
Otherwise this is really user unfriendly, and not printing it has no
advantage. If one doesn't want to leak resource existence, one just
needs to *always* check permissions before checking if the requested
resource exists; if that's not done, information can be leaked even
without the path being returned (as the system will print either
"resource doesn't exist" or "no permissions" respectively)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-15 08:55:58 +02:00
37e53b4c07 buildsys: fix targets to not run dpkg-buildpackage 4 times
and add a deb-all target

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-14 12:31:20 +02:00
77d634710e bump version to 0.8.7-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-14 12:05:34 +02:00
5c5181a252 d/lintian-overrides: ignore systemd-service-file-refers-to-unusual-wantedby-target
proxmox-backup-banner.service needs getty.target

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-14 11:08:36 +02:00
67042466e8 ui: datastore edit: avoid an extra indentation level
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-14 10:56:36 +02:00
757d0ccc76 warning fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-14 10:37:14 +02:00
4a55fa87d5 bump version to 0.8.7-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-14 10:25:53 +02:00
032cd1b862 pxar: restore file attributes, improve errors
and use the correct integer types for these operations

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-14 10:25:45 +02:00
ec2434fe3c ui: buildsys: add lint target
not yet automatically called on build, as it still fails.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-14 07:43:01 +02:00
34389132d9 docs: installation: add note where to find the webinterface
As the 8007 vs 8006 port is new and could confuse people, especially
if they did not use the PBS installer.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-14 07:35:59 +02:00
78ee20d72d docs: fix typo s/PBS_REPOSTOR/PBS_REPOSITOR/
Reported-by: Piotr Paszkowski aka patefoniQ
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-13 19:23:50 +02:00
601e42ac35 ui: running tasks: update limit to 100
otherwise we'll never see the 99+ tasks...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-11 12:53:32 +02:00
e1897b363b docs: add secure-apt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 14:12:51 +02:00
cf063c1973 bump version to 0.8.6-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 11:35:04 +02:00
f58233a73a src/backup/data_blob_reader.rs: avoid unwrap() - return error instead 2020-07-10 11:28:19 +02:00
d257c2ecbd ui: fingerprint: add icon to copy button
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 11:17:20 +02:00
e4ee7b7ac8 ui: fingerprint: add copy button
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 11:13:54 +02:00
1f0d23f792 ui: add show fingerprint button to dashboard
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-10 11:08:53 +02:00
bfcef26a99 api2/node/status: add fingerprint
and rename get_usage to get_status (since it's not usage only anymore)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-10 11:08:53 +02:00
ec01eeadc6 refactor CertInfo to tools
we want to reuse some of the functionality elsewhere

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-10 11:08:53 +02:00
660a34892d update proxmox crate to 0.2.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-10 11:08:27 +02:00
d86034afec src/bin/proxmox_backup_client/catalog.rs: fix keyfile handling 2020-07-10 10:36:45 +02:00
62593aba1e src/backup/manifest.rs: fix signature (exclude 'signature' property) 2020-07-10 10:36:45 +02:00
0eaef8eb84 client: show key path when creating/changing default key
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-10 09:58:24 +02:00
e39974afbf client: add simple version command
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 09:34:07 +02:00
dde18bbb85 proxmox-backup-client benchmark: improve output format 2020-07-10 09:13:52 +02:00
a40e1b0e8b src/server/rest.rs: avoid compiler warning 2020-07-10 09:13:52 +02:00
a0eb0cd372 ui: running task: increase active limit we show in badge to 99
Two digits fit nicely, and the extra plus for the >99 case doesn't
take that much space either. That, and the fact that 9 is just
really low, makes me bump this to 99 as the cut-off value.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 21:56:46 +02:00
41067870c6 ui: tune badge styling a bit
the idea is to blend in when no task is running, thus no
background-color there. When tasks are running, use the dark grey
from the Proxmox branding guideline; it isn't used as often, so it
should catch the eye when it changes, but it has some use, so it
doesn't seem out of place.

Reduce the border radius by a lot, so that it looks similar to the
one our ExtJS theme uses for the surrounding buttons - the original
border radius seems to come from the time when this was intended to
be a floating badge, where it would make sense, but as an integrated
button this fits the style much better.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 21:51:25 +02:00
33a87bc39a docs: reference PDF variant in HTML output
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 21:31:38 +02:00
bed3e15f16 debian/proxmox-backup-docs.links: fix name and target
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 21:23:41 +02:00
c687da9e8e datastore: chown base dir on creation
When creating a new datastore the basedir is only owned by the backup
user if it did not exist beforehand (create_path chowns the directory
only if it creates it, and returns false if it did not create it).

This improves the experience when adding a new datastore on a fresh
disk or existing directory (not owned by backup) - backups/pulls can
be run instead of terminating with EPERM.

Tested on my local test install with a new disk and an existing directory.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-07-09 18:20:16 +02:00
be30e7d269 ui: dashboard/TaskSummary: fade icons if count is zero
so that users can see the relevant counts faster

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 17:10:47 +02:00
106603c58f ui: fix crypt mode calculation
also include 'mixed' in the calculation of the overall mode of a
snapshot and group

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 17:09:56 +02:00
7ba2c1c386 docs: add initial basic software stack definition
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 17:09:05 +02:00
4327a8462a proxmox-backup-client benchmark: add more speed tests 2020-07-09 17:07:22 +02:00
e193544b8e src/server/rest.rs: disable debug logs 2020-07-09 16:18:14 +02:00
323b2f3dd6 proxmox-backup-client benchmark: add --verbose flag 2020-07-09 16:16:39 +02:00
7884e7ef4f bump version to 0.8.5-1 2020-07-09 15:35:07 +02:00
fae11693f0 fix cross process task listing
it does not make sense to check if the worker is running if we already
have an endtime and state

our 'worker_is_active_local' heuristic returns true for non
process-local tasks, so we got 'running' for all tasks that were not
started by 'our' pid and were still running

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 15:30:52 +02:00
22231524e2 docs: expand datastore documentation
document retention settings and schedules per datastore with
some minimal examples.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-07-09 15:04:26 +02:00
9634ca07db docs: add remotes and sync-jobs and schedules
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-07-09 15:04:26 +02:00
62f6a7e3d9 bump pathpatterns to 0.1.2
Fixes `**/foo` not matching "foo" without slashes.
(`**/lost+found` now matches the `lost+found` dir at the
root of our tree properly).

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-09 14:34:10 +02:00
86443141b5 ui: align version and user-menu spacing with pve/pmg
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 14:31:21 +02:00
f6e964b96e ui: make username a menu-button
like we did in PVE and PMG

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 14:31:21 +02:00
c8bed1b4d7 bump version to 0.8.4-1 2020-07-09 14:28:44 +02:00
a3970d6c1e ui: add TaskButton in header
opens a grid with the running tasks and a shortcut to the node tasks

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 14:26:57 +02:00
cc83c13660 ui: add RunningTasksStore
so that we have a global store for running tasks

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 14:26:57 +02:00
bf7e2a4648 simpler lost+found pattern
the **/ is not required and currently also mistakenly
doesn't match /lost+found which is probably buggy on the
pathpatterns crate side and needs fixing there

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-09 14:06:42 +02:00
e284073e4a bump version to 0.8.3-1 2020-07-09 13:55:15 +02:00
3ec99affc8 get_disks: don't fail on zfs_devices
zfs does not have to be installed, so simply log an error and
continue, users still get an error when clicking directly on
ZFS

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:47:31 +02:00
a9649ddc44 disks/zpool_status: add test for pool with special character
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:37:31 +02:00
4f9096a211 disks/zpool_list: allow some more characters for pool list
not exhaustive of what zfs allows (space is missing), but this
can be done easily without problems

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:37:31 +02:00
c3a4b5e2e1 zpool_list: add tests for special pool names
those names are allowed for zpools

these will fail for now, but it will be fixed in the next commit

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:37:31 +02:00
7957fabff2 api: add ZPOOL_NAME_SCHEMA and regex
pool names can contain spaces and some other special characters

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:37:31 +02:00
20a4e4e252 minor optimization to 'to_canonical_json'
* don't clone hash keys, just use references
* we don't need a String, stick to Vec<u8> and use
  serde_json::to_writer to avoid temporary strings
  altogether

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-09 13:32:11 +02:00
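A sketch of that approach (hedged, not the exact manifest code): sort object keys as borrowed &str and write everything into one Vec&lt;u8&gt; with serde_json::to_writer.

```rust
use serde_json::Value;

// Hedged sketch: canonical JSON with sorted object keys written into a byte
// buffer; keys stay borrowed instead of being cloned, and scalars go straight
// through serde_json::to_writer, avoiding temporary Strings.
fn write_canonical_json(value: &Value, output: &mut Vec<u8>) -> Result<(), serde_json::Error> {
    match value {
        Value::Object(map) => {
            let mut keys: Vec<&str> = map.keys().map(String::as_str).collect(); // references only
            keys.sort_unstable();
            output.push(b'{');
            for (i, key) in keys.iter().enumerate() {
                if i > 0 { output.push(b','); }
                serde_json::to_writer(&mut *output, key)?; // writes the quoted key
                output.push(b':');
                write_canonical_json(&map[*key], output)?;
            }
            output.push(b'}');
        }
        Value::Array(list) => {
            output.push(b'[');
            for (i, item) in list.iter().enumerate() {
                if i > 0 { output.push(b','); }
                write_canonical_json(item, output)?;
            }
            output.push(b']');
        }
        _ => serde_json::to_writer(&mut *output, value)?, // null, bool, number, string
    }
    Ok(())
}
```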
2774566b03 ui: adapt for new sign-only crypt mode
we can now show 'none', 'encrypted', 'signed' or 'mixed' for
the crypt mode

also adds a different icon for signed files, and adds a hint that
signatures cannot be verified on the server

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:28:55 +02:00
4459ffe30e src/backup/manifest.rs: add default to make it compatible with older backups 2020-07-09 13:25:38 +02:00
d16ed66c88 bump version to 0.8.2-1 2020-07-09 11:59:10 +02:00
3ec6e249b3 buildsys: also upload debug packages
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 11:39:10 +02:00
dfa517ad6c src/backup/manifest.rs: rename into_string -> to_string
And do not consume self.
2020-07-09 11:28:05 +02:00
8b2ad84a25 bump version to 0.8.1-1 2020-07-09 10:01:31 +02:00
3dacedce71 src/backup/manifest.rs: use serde_json::from_value() to deserialize data
Also modified from_data to compute the signature directly from JSON.
2020-07-09 09:50:28 +02:00
512d50a455 typos
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-09 09:34:58 +02:00
b53f637914 src/backup/manifest.rs: cleanup signature generation 2020-07-09 09:20:49 +02:00
152a926149 tests/blob_writer.rs: make it work again 2020-07-09 09:15:15 +02:00
7f388acea8 ship pbstest repo as sources.list.d file for beta
NOTE: the repo url is not yet working at time of commit, this is a
preparatory step.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 19:09:31 +02:00
b2bfb46835 docs: package repos: drop non-tests for now
they won't work and thus just confuse people, re-add them once we're
releasing final.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 18:17:55 +02:00
24406ebc0c docs: move host sysadmin out to own chapter, fix ZFS one
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 18:15:33 +02:00
1f24d9114c docs: add missing todos
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 18:14:17 +02:00
859fe9c1fb add local-zfs.rst
content is > 90% same as local-zfs.adoc in pve-docs.

adapted the format for .rst

fixed some typos and wrote some parts slightly differently (wording).

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-07-08 16:49:40 +02:00
2107a5aebc src/backup/manifest.rs: include signature inside the manifest
This is more flexible, because we can choose which fields we want to sign.
2020-07-08 16:23:26 +02:00
3638341aa4 src/backup/file_formats.rs: remove signed chunks
We can include signature in the manifest instead (patch will follow).
2020-07-08 16:23:26 +02:00
067fe514e6 docs: fix repo paths
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 15:41:09 +02:00
8c6e5ce23c improve administration guide
fixing some typos and grammar errors.

added example file layout for datastores.

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-07-08 14:20:49 +02:00
0351f23ba4 client: introduce --keyfd parameter
This is a more convenient way to pass along the key when
creating encrypted backups of unprivileged containers in PVE
where the unprivileged user namespace cannot access
`/etc/pve/priv`.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 13:56:38 +02:00
c1ff544eff src/backup/crypt_config.rs - compute_digest: make it more secure 2020-07-08 12:53:04 +02:00
69e5d71961 ui: ds/content: disable some button for in-progress backup
We cannot verify, download, file-browse backups which are currently
in progress.

'Forget' could work but is probably not desirable?

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 12:22:00 +02:00
48e22a8900 ui: ds/content: do not count in-progress backups for last made one
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 12:11:27 +02:00
a7a5f56daa ui: ds/content: show spinner for backups in progress
use the fact that they do not have a size property at all

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-08 12:09:21 +02:00
05389a0109 more xdg cleanup and encryption parameter improvements
Have a single common function to get the BaseDirectories
instance and a wrapper for `find()` and `place()` which
wrap the error with some context.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 10:57:28 +02:00
b65390ebc9 client: xdg usage: place() vs find()
place() is used when creating a file, as it will create
intermediate directories; only use it when actually placing
a new file.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 10:57:28 +02:00
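With the xdg crate the distinction looks roughly like this (hedged sketch, the file name is illustrative):

```rust
use std::path::PathBuf;
use xdg::BaseDirectories;

// Hedged sketch: find() only looks up an existing file, while place() also
// creates missing intermediate directories, so it belongs in write paths only.
fn key_paths() -> Result<(Option<PathBuf>, PathBuf), Box<dyn std::error::Error>> {
    let base = BaseDirectories::with_prefix("proxmox-backup")?;

    // reading: do not create any directories as a side effect
    let existing = base.find_config_file("encryption-key.json");

    // writing a new file: create ~/.config/proxmox-backup/ if needed
    let target = base.place_config_file("encryption-key.json")?;

    Ok((existing, target))
}
```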
3bad3e6e52 src/client/backup_writer.rs - upload_stream: add crypt_mode 2020-07-08 10:43:28 +02:00
24be37e3f6 client: fix schema to include --crypt-mode parameter
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 10:09:15 +02:00
1008a69a13 pxar: less confusing logic
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 09:58:29 +02:00
521a0acb2e DataStore::load_manifest: also return CryptMode
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 09:19:53 +02:00
3b66040de6 add DataBlob::crypt_mode
and move use statements up

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 09:19:53 +02:00
af3a0ae7b1 remove CryptMode::sign_only special method
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-08 09:19:53 +02:00
4e36f78438 src/backup/manifest.rs: support old encrypted property
Just to avoid confusion.
2020-07-08 08:52:27 +02:00
f28d9088ed introduce a CryptMode enum
This also replaces the recently introduced --encryption
parameter on the client with a --crypt-mode parameter.

This can be "none", "encrypt" or "sign-only".

Note that this introduces various changes in the API types
which previously did not take the above distinction into
account properly:

Both `BackupContent` and the manifest's `FileInfo`:
    lose `encryption: Option<bool>`
    gain `crypt_mode: Option<CryptMode>`

Within the backup manifest itself, the "crypt-mode" property
will always be set.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-07 15:24:19 +02:00
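The enum itself is small; roughly like the following sketch (serde attributes assumed to match the listed string values):

```rust
use serde::{Deserialize, Serialize};

// Hedged sketch: the three crypt modes serialized as "none", "encrypt" and
// "sign-only", replacing the old boolean `encryption` flag.
#[derive(Clone, Copy, Debug, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub enum CryptMode {
    None,
    Encrypt,
    SignOnly,
}
```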
56b814e378 docs: add getting help section
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-07 13:24:39 +02:00
0c136efe30 docs: features: minor wording
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-07 13:23:17 +02:00
cdead6cd12 docs: drop initial out of context sentence
the footer mentions sphinx and this feels weird to read as a user
(who doesn't really care what language/format the source of the
docs is in)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-07 13:22:03 +02:00
c950826e46 bump version to 0.8.0-1 2020-07-07 10:15:44 +02:00
f91d58e157 src/tools/runtime.rs: implement get_runtime_with_builder 2020-07-07 10:11:04 +02:00
1ff840ffad bump version to 0.7.0-1 2020-07-07 07:40:22 +02:00
7443a6e092 src/client/remote_chunk_reader.rs: implement clone for RemoteChunkReader 2020-07-07 07:34:58 +02:00
3a9988638b docs: move todolist to own document, don't link in release build
It is always built for HTML, but not linked if the devbuild tag isn't
set. This tag is set in the Makefile if the $(BUILD_MODE) variable
isn't "release".

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-06 14:44:53 +02:00
96ee857752 client: add --encryption boolean parameter
This can be used to explicitly disable encryption even if a
default key file exists in ~/.config.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-06 14:36:04 +02:00
887018bb79 client: use default encryption key if it is available
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-06 14:36:04 +02:00
9696f5193b client: move key management into separate module
and use api macro for methods and Kdf type

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-06 14:36:04 +02:00
e13c4f66bb minor style & whitespace fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-06 10:55:25 +02:00
8a25809573 docs: sync up copyright years
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-04 17:57:47 +02:00
d87b193b0b docs: todo: avoid leaking build details, link only
One can just search for them... If really wanted, we could set it to
true for dev builds (i.e., no DEB_VERSION defined)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-04 17:54:00 +02:00
ea5289e869 d/rules: do not compress .pdf files
as otherwise the docs .pdf is a PITA to use for some end users.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-04 17:53:04 +02:00
1f6a4f587a docs: do not hardcode version
use the debian package ones, if not defined we're doing a dev build

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-04 17:51:58 +02:00
705b2293ec d/control: add missing dependencies for lvm, smartmontools and ZFS
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 19:37:43 +02:00
d2c7ef09ba docs: rework and add a bit to introduction
Contributed-by: Daniela Häsler <daniela@proxmox.com>
[ discussed and edited some parts live with me, Thomas ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 18:58:17 +02:00
27f86f997e docs: fix index title
Contributed-by: Daniela Häsler <daniela@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 18:57:04 +02:00
fc93d38076 ui: ZFS create: set name-field minLength to 3 to match backend
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 18:03:51 +02:00
a5a85d41ff ui: ZFS create: use correct typeParameter name for disk selector
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 18:00:12 +02:00
08cb2038bd api: disks: indentation fixup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 17:59:30 +02:00
6f711c1737 ui: ZFS list: fix details top-bar button handler
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 17:20:33 +02:00
42ec9f577f ui: buildsys: actually include PBS.window.ZFSCreate component in source
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 17:19:59 +02:00
9de69cdb1a src/bin/proxmox_backup_client/catalog.rs: split out catalog code 2020-07-03 16:45:47 +02:00
bd260569d3 ui: fix glitch on some zoom steps
if the baseCls is not 'x-plain' the background of the flex
element is white, and on some zoom steps it gets taller
than one pixel and appears as a white line

making it have the plain baseCls, so it does not get any
background color and is always invisible

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-03 13:05:19 +02:00
36cb4b30ef add beta text with link to bugtracker
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-03 13:05:08 +02:00
4e717240bf bump version to 0.6.0-1 2020-07-03 09:46:19 +02:00
e9764238df make ReadChunk not require mutable self.
That way we can reduce lock contention because we lock for much shorter
times.
2020-07-03 07:37:29 +02:00
26f499b17b ui: increase timeout for snapshot listing
the api call can take a very long time (for now), until we can
improve that, increase the timeout from the default of 30s

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-03 06:14:21 +02:00
cc7995ac40 src/bin/proxmox_backup_client/task.rs: split out task command 2020-07-02 18:04:29 +02:00
43abba4b4f src/bin/proxmox_backup_client/mount.rs: split out mount code 2020-07-02 17:49:59 +02:00
58f950c546 ui: consistently spell Datastore without space between words
No hard feelings on 'Datastore' vs. 'Data Store', but consistency
is desired in such names.
Talked briefly with Dominik, who also slightly favored the one
without the space - so just go for that one.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-02 17:20:41 +02:00
c426e65893 ui: disk create: sync and improve 'add-datastore' checkbox label
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-02 17:06:37 +02:00
caea8d611f proxmox-backup-client: add benchmark command
caea8d611f proxmox-backup-client: add benchmark command
This is just a start; we need to add more useful things here...
2020-07-02 14:01:57 +02:00
2020-07-02 14:01:57 +02:00
7d0754a6d2 pxar: fixup 'vanished-file' logic a bit
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-30 14:41:42 +02:00
5afa0755ea pxar: fix missing newlines in warnings
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-30 14:37:20 +02:00
40b63186a6 DataStoreConfig.js: add verify button 2020-06-30 13:28:42 +02:00
8f6088c130 DataStoreContent.js: add verify button 2020-06-30 13:22:02 +02:00
2162e2c15d src/api2/admin/datastore.rs: avoid slash in UPID strings 2020-06-30 13:11:22 +02:00
118 changed files with 5217 additions and 2085 deletions

.gitignore (1 change)

@@ -3,3 +3,4 @@ local.mak
 **/*.rs.bk
 /etc/proxmox-backup.service
 /etc/proxmox-backup-proxy.service
+build/

Cargo.toml

@@ -1,6 +1,6 @@
 [package]
 name = "proxmox-backup"
-version = "0.5.0"
+version = "0.8.9"
 authors = ["Dietmar Maurer <dietmar@proxmox.com>"]
 edition = "2018"
 license = "AGPL-3"
@@ -14,6 +14,7 @@ name = "proxmox_backup"
 path = "src/lib.rs"
 [dependencies]
+apt-pkg-native = "0.3.1" # custom patched version
 base64 = "0.12"
 bitflags = "1.2.1"
 bytes = "0.5"
@@ -37,12 +38,12 @@ pam = "0.7"
 pam-sys = "0.5"
 percent-encoding = "2.1"
 pin-utils = "0.1.0"
-pathpatterns = "0.1.1"
+pathpatterns = "0.1.2"
-proxmox = { version = "0.1.41", features = [ "sortable-macro", "api-macro" ] }
+proxmox = { version = "0.2.1", features = [ "sortable-macro", "api-macro", "websocket" ] }
 #proxmox = { git = "ssh://gitolite3@proxdev.maurer-it.com/rust/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
-#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro" ] }
+#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
 proxmox-fuse = "0.1.0"
-pxar = { version = "0.2.0", features = [ "tokio-io", "futures-io" ] }
+pxar = { version = "0.2.1", features = [ "tokio-io", "futures-io" ] }
 #pxar = { path = "../pxar", features = [ "tokio-io", "futures-io" ] }
 regex = "1.2"
 rustyline = "6"
@@ -50,11 +51,11 @@ serde = { version = "1.0", features = ["derive"] }
 serde_json = "1.0"
 siphasher = "0.3"
 syslog = "4.0"
-tokio = { version = "0.2.9", features = [ "blocking", "fs", "io-util", "macros", "rt-threaded", "signal", "stream", "tcp", "time", "uds" ] }
+tokio = { version = "0.2.9", features = [ "blocking", "fs", "dns", "io-util", "macros", "process", "rt-threaded", "signal", "stream", "tcp", "time", "uds" ] }
 tokio-openssl = "0.4.0"
 tokio-util = { version = "0.3", features = [ "codec" ] }
 tower-service = "0.3.0"
-udev = "0.3"
+udev = ">= 0.3, <0.5"
 url = "2.1"
 #valgrind_request = { git = "https://github.com/edef1c/libvalgrind_request", version = "1.1.0", optional = true }
 walkdir = "2"

Makefile

@@ -37,11 +37,15 @@ CARGO ?= cargo
 COMPILED_BINS := \
 	$(addprefix $(COMPILEDIR)/,$(USR_BIN) $(USR_SBIN) $(SERVICE_BIN))
+export DEB_VERSION DEB_VERSION_UPSTREAM
 SERVER_DEB=${PACKAGE}-server_${DEB_VERSION}_${ARCH}.deb
+SERVER_DBG_DEB=${PACKAGE}-server-dbgsym_${DEB_VERSION}_${ARCH}.deb
 CLIENT_DEB=${PACKAGE}-client_${DEB_VERSION}_${ARCH}.deb
+CLIENT_DBG_DEB=${PACKAGE}-client-dbgsym_${DEB_VERSION}_${ARCH}.deb
 DOC_DEB=${PACKAGE}-docs_${DEB_VERSION}_all.deb
-DEBS=${SERVER_DEB} ${CLIENT_DEB}
+DEBS=${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB}
 DSC = rust-${PACKAGE}_${DEB_VERSION}.dsc
@@ -76,18 +80,21 @@ build:
 .PHONY: proxmox-backup-docs
-proxmox-backup-docs: $(DOC_DEB)
-$(DOC_DEB): build
+$(DOC_DEB) $(DEBS): proxmox-backup-docs
+proxmox-backup-docs: build
 	cd build; dpkg-buildpackage -b -us -uc --no-pre-clean
 	lintian $(DOC_DEB)
 # copy the local target/ dir as a build-cache
 .PHONY: deb
-deb: $(DEBS)
-$(DEBS): build
+$(DEBS): deb
+deb: build
 	cd build; dpkg-buildpackage -b -us -uc --no-pre-clean --build-profiles=nodoc
 	lintian $(DEBS)
+.PHONY: deb-all
+deb-all: $(DOC_DEB) $(DEBS)
 .PHONY: dsc
 dsc: $(DSC)
 $(DSC): build
@@ -140,5 +147,5 @@ install: $(COMPILED_BINS)
 upload: ${SERVER_DEB} ${CLIENT_DEB} ${DOC_DEB}
 # check if working directory is clean
 	git diff --exit-code --stat && git diff --exit-code --stat --staged
-	tar cf - ${SERVER_DEB} ${DOC_DEB} | ssh -X repoman@repo.proxmox.com upload --product pbs --dist buster
+	tar cf - ${SERVER_DEB} ${SERVER_DBG_DEB} ${DOC_DEB} | ssh -X repoman@repo.proxmox.com upload --product pbs --dist buster
-	tar cf - ${CLIENT_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pbs,pve" --dist buster
+	tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pbs,pve" --dist buster

debian/changelog (184 lines changed)

@@ -1,3 +1,187 @@
rust-proxmox-backup (0.8.9-1) unstable; urgency=medium
* improve termproxy (console) behavior on updating proxmox-backup-server and
other daemon restarts
* client: improve upload log output and speed calculation
* fix #2885: client upload: bail on duplicate backup targets
-- Proxmox Support Team <support@proxmox.com> Fri, 24 Jul 2020 11:24:07 +0200
rust-proxmox-backup (0.8.8-1) unstable; urgency=medium
* pxar: .pxarexclude: match behavior from absolute paths to the one described
in the documentation and use byte based paths
* catalog shell: add exit command
* manifest: revert signature canonicalization to old behaviour. Fallout from
encrypted older backups is expected and was ignored due to the beta status
of Proxmox Backup.
* documentation: various improvements and additions
* cached user info: print privilege path in error message
* docs: fix #2851 Add note about GC grace period
* api2/status: fix datastore full estimation bug if there was (almost) no
change for several days
* schedules, calendar event: support the 'weekly' special expression
* ui: sync job: group remote fields and use "Source" in labels
* ui: add calendar event selector
* ui: sync job: change default to false for "remove-vanished" for new jobs
* fix #2860: skip in-progress snapshots when syncing
* fix #2865: detect and skip vanished snapshots
* fix #2871: close FDs when scanning backup group, avoid leaking
* backup: list images: handle walkdir error, catch "lost+found" special
directory
* implement AsyncSeek for AsyncIndexReader
* client: rework logging upload info like size or bandwidth
* client writer: do not output chunklist for now on verbose=true
* add initial API for listing available updates and updating the APT
database
* ui: add xterm.js console implementation
-- Proxmox Support Team <support@proxmox.com> Thu, 23 Jul 2020 12:16:05 +0200
rust-proxmox-backup (0.8.7-2) unstable; urgency=medium
* support restoring file attributes from pxar archives
* docs: additions and fixes
* ui: running tasks: update limit to 100
-- Proxmox Support Team <support@proxmox.com> Tue, 14 Jul 2020 12:05:25 +0200
rust-proxmox-backup (0.8.6-1) unstable; urgency=medium
* ui: add button for easily showing the server fingerprint dashboard
* proxmox-backup-client benchmark: add --verbose flag and improve output
format
* docs: reference PDF variant in HTML output
* proxmox-backup-client: add simple version command
* improve keyfile and signature handling in catalog and manifest
-- Proxmox Support Team <support@proxmox.com> Fri, 10 Jul 2020 11:34:14 +0200
rust-proxmox-backup (0.8.5-1) unstable; urgency=medium
* fix cross process task listing
* docs: expand datastore documentation
* docs: add remotes and sync-jobs and schedules
* bump pathpatterns to 0.1.2
* ui: align version and user-menu spacing with pve/pmg
* ui: make username a menu-button
-- Proxmox Support Team <support@proxmox.com> Thu, 09 Jul 2020 15:32:39 +0200
rust-proxmox-backup (0.8.4-1) unstable; urgency=medium
* add TaskButton in header
* simpler lost+found pattern
-- Proxmox Support Team <support@proxmox.com> Thu, 09 Jul 2020 14:28:24 +0200
rust-proxmox-backup (0.8.3-1) unstable; urgency=medium
* get_disks: don't fail on zfs_devices
* allow some more characters for zpool list
* ui: adapt for new sign-only crypt mode
-- Proxmox Support Team <support@proxmox.com> Thu, 09 Jul 2020 13:55:06 +0200
rust-proxmox-backup (0.8.2-1) unstable; urgency=medium
* buildsys: also upload debug packages
* src/backup/manifest.rs: rename into_string -> to_string
-- Proxmox Support Team <support@proxmox.com> Thu, 09 Jul 2020 11:58:51 +0200
rust-proxmox-backup (0.8.1-1) unstable; urgency=medium
* remove authenticated data blobs (not needed)
* add signature to manifest
* improve docs
* client: introduce --keyfd parameter
* ui improvements
-- Proxmox Support Team <support@proxmox.com> Thu, 09 Jul 2020 10:01:25 +0200
rust-proxmox-backup (0.8.0-1) unstable; urgency=medium
* implement get_runtime_with_builder
-- Proxmox Support Team <support@proxmox.com> Tue, 07 Jul 2020 10:15:26 +0200
rust-proxmox-backup (0.7.0-1) unstable; urgency=medium
* implement clone for RemoteChunkReader
* improve docs
* client: add --encryption boolean parameter
* client: use default encryption key if it is available
* d/rules: do not compress .pdf files
* ui: various fixes
* add beta text with link to bugtracker
-- Proxmox Support Team <support@proxmox.com> Tue, 07 Jul 2020 07:40:05 +0200
rust-proxmox-backup (0.6.0-1) unstable; urgency=medium
* make ReadChunk not require mutable self.
* ui: increase timeout for snapshot listing
* ui: consistently spell Datastore without space between words
* ui: disk create: sync and improve 'add-datastore' checkbox label
* proxmox-backup-client: add benchmark command
* pxar: fixup 'vanished-file' logic a bit
* ui: add verify button
-- Proxmox Support Team <support@proxmox.com> Fri, 03 Jul 2020 09:45:52 +0200
rust-proxmox-backup (0.5.0-1) unstable; urgency=medium
* partially revert commit 1f82f9b7b5d231da22a541432d5617cb303c0000

debian/control.in (4 changes)

@@ -3,11 +3,15 @@ Architecture: any
 Depends: fonts-font-awesome,
         libjs-extjs (>= 6.0.1),
         libzstd1 (>= 1.3.8),
+        lvm2,
         proxmox-backup-docs,
         proxmox-mini-journalreader,
         proxmox-widget-toolkit (>= 2.2-4),
+        pve-xtermjs (>= 4.7.0-1),
+        smartmontools,
         ${misc:Depends},
         ${shlibs:Depends},
+Recommends: zfsutils-linux,
 Description: Proxmox Backup Server daemon with tools and GUI
  This package contains the Proxmox Backup Server daemons and related
  tools. This includes a web-based graphical user interface.

debian/lintian-overrides (new file)

@@ -0,0 +1,2 @@
+proxmox-backup-server: package-installs-apt-sources etc/apt/sources.list.d/pbstest-beta.list
+proxmox-backup-server: systemd-service-file-refers-to-unusual-wantedby-target lib/systemd/system/proxmox-backup-banner.service getty.target

debian/proxmox-backup-docs.links (new file)

@@ -0,0 +1 @@
+/usr/share/doc/proxmox-backup/proxmox-backup.pdf /usr/share/doc/proxmox-backup/html/proxmox-backup.pdf

debian/proxmox-backup-server.install

@@ -1,6 +1,7 @@
 etc/proxmox-backup-proxy.service /lib/systemd/system/
 etc/proxmox-backup.service /lib/systemd/system/
 etc/proxmox-backup-banner.service /lib/systemd/system/
+etc/pbstest-beta.list /etc/apt/sources.list.d/
 usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-api
 usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-proxy
 usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-banner

debian/rules (3 changes)

@@ -45,3 +45,6 @@ override_dh_installsystemd:
 # TODO: remove once available (Debian 11 ?)
 override_dh_dwz:
 	dh_dwz --no-dwz-multifile
+
+override_dh_compress:
+	dh_compress -X.pdf

docs/Makefile

@@ -1,11 +1,5 @@
 include ../defines.mk
-ifeq ($(BUILD_MODE), release)
-COMPILEDIR := ../target/release
-else
-COMPILEDIR := ../target/debug
-endif
 GENERATED_SYNOPSIS := \
 	proxmox-backup-client/synopsis.rst \
 	proxmox-backup-client/catalog-shell-synopsis.rst \
@@ -26,6 +20,15 @@ SPHINXOPTS =
 SPHINXBUILD = sphinx-build
 BUILDDIR = output
+ifeq ($(BUILD_MODE), release)
+COMPILEDIR := ../target/release
+SPHINXOPTS += -t release
+else
+COMPILEDIR := ../target/debug
+SPHINXOPTS += -t devbuild
+endif
 # Sphinx internal variables.
 ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(SPHINXOPTS) .

View File

@ -1,9 +1,8 @@
Administration Guide Backup Management
==================== =================
The administration guide. .. The administration guide.
.. todo:: either add a bit more explanation or remove the previous sentence
.. todo:: either add a bit more explanation or remove the previous sentence
Terminology Terminology
----------- -----------
@ -13,16 +12,16 @@ Backup Content
When doing deduplication, there are different strategies to get When doing deduplication, there are different strategies to get
optimal results in terms of performance and/or deduplication rates. optimal results in terms of performance and/or deduplication rates.
Depending on the type of data, one can split data into *fixed* or *variable* Depending on the type of data, it can be split into *fixed* or *variable*
sized chunks. sized chunks.
Fixed sized chunking needs almost no CPU performance, and is used to Fixed sized chunking requires minimal CPU power, and is used to
backup virtual machine images. backup virtual machine images.
Variable sized chunking needs more CPU power, but is essential to get Variable sized chunking needs more CPU power, but is essential to get
good deduplication rates for file archives. good deduplication rates for file archives.
The backup server supports both strategies. The Proxmox Backup Server supports both strategies.
File Archives: ``<name>.pxar`` File Archives: ``<name>.pxar``
@ -31,7 +30,7 @@ File Archives: ``<name>.pxar``
.. see https://moinakg.wordpress.com/2013/06/22/high-performance-content-defined-chunking/ .. see https://moinakg.wordpress.com/2013/06/22/high-performance-content-defined-chunking/
A file archive stores a full directory tree. Content is stored using A file archive stores a full directory tree. Content is stored using
the :ref:`pxar-format`, split into variable sized chunks. The format the :ref:`pxar-format`, split into variable-sized chunks. The format
is optimized to achieve good deduplication rates. is optimized to achieve good deduplication rates.
@ -39,7 +38,7 @@ Image Archives: ``<name>.img``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is used for virtual machine images and other large binary This is used for virtual machine images and other large binary
data. Content is split into fixed sized chunks. data. Content is split into fixed-sized chunks.
Binary Data (BLOBs) Binary Data (BLOBs)
@ -56,7 +55,7 @@ Catalog File: ``catalog.pcat1``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The catalog file is an index for file archives. It contains The catalog file is an index for file archives. It contains
the list of files and is used to speed-up search operations. the list of files and is used to speed up search operations.
The Manifest: ``index.json`` The Manifest: ``index.json``
@ -74,12 +73,12 @@ The backup server groups backups by *type*, where *type* is one of:
``vm`` ``vm``
This type is used for :term:`virtual machine`\ s. Typically This type is used for :term:`virtual machine`\ s. Typically
contains the virtual machine's configuration and an image archive consists of the virtual machine's configuration file and an image archive
for each disk. for each disk.
``ct`` ``ct``
This type is used for :term:`container`\ s. Contains the container's This type is used for :term:`container`\ s. Consists of the container's
configuration and a single file archive for the container content. configuration and a single file archive for the filesystem content.
``host`` ``host``
This type is used for backups created from within the backed up machine. This type is used for backups created from within the backed up machine.
@ -90,7 +89,7 @@ The backup server groups backups by *type*, where *type* is one of:
Backup ID Backup ID
~~~~~~~~~ ~~~~~~~~~
An unique ID. Usually the virtual machine or container ID. ``host`` A unique ID. Usually the virtual machine or container ID. ``host``
type backups normally use the hostname. type backups normally use the hostname.
@ -122,6 +121,13 @@ uniquely identifies a specific backup within a datastore.
As you can see, the time format is RFC3399_ with Coordinated As you can see, the time format is RFC3399_ with Coordinated
Universal Time (UTC_, identified by the trailing *Z*). Universal Time (UTC_, identified by the trailing *Z*).
Backup Server Management
------------------------
The command line tool to configure and manage the backup server is called
:command:`proxmox-backup-manager`.
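Its functionality is split into subcommands. As a quick, illustrative sketch (the individual subcommands and their output are shown throughout the rest of this section), invocations follow a ``proxmox-backup-manager <object> <command>`` pattern:
.. code-block:: console
# proxmox-backup-manager datastore list
# proxmox-backup-manager user list
# proxmox-backup-manager cert info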
:term:`DataStore` :term:`DataStore`
~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~
@ -133,21 +139,25 @@ or ``zfs``) to store the backup data.
Datastores are identified by a simple *ID*. You can configure it Datastores are identified by a simple *ID*. You can configure it
when setting up the backup server. when setting up the backup server.
.. note:: The `File Layout`_ requires the file system to support at least *65538*
Backup Server Management subdirectories per directory. That number comes from the 2\ :sup:`16`
------------------------ pre-created chunk namespace directories, and the ``.`` and ``..`` default
directory entries. This requirement excludes certain filesystems and
The command line tool to configure and manage the backup server is called filesystem configuration from being supported for a datastore. For example,
:command:`proxmox-backup-manager`. ``ext3`` as a whole or ``ext4`` with the ``dir_nlink`` feature manually disabled.
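As a quick sanity check on a supported filesystem, counting the entries of a datastore's ``.chunks`` directory (using the example path from the `File Layout`_ section below) should report the 65536 (2\ :sup:`16`) chunk namespace directories:
.. code-block:: console
# ls /backup/disk1/store1/.chunks | wc -l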
Datastore Configuration Datastore Configuration
~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~
A :term:`datastore` is a place to store backups. You can configure You can configure multiple datastores. At least one datastore needs to be
multiple datastores. At least one datastore needs to be configured. The datastore is identified by a simple `name` and points to a
configured. The datastore is identified by a simple `name` and points directory on the filesystem. Each datastore also has associated retention
to a directory. settings defining how many backup snapshots to keep for each interval of ``hourly``,
``daily``, ``weekly``, ``monthly``, ``yearly`` as well as a time-independent
number of backups to keep in that store. :ref:`Pruning <pruning>` and
:ref:`garbage collection <garbage-collection>` can also be configured to run
periodically based on a configured :term:`schedule` per datastore.
The following command creates a new datastore called ``store1`` on :file:`/backup/disk1/store1` The following command creates a new datastore called ``store1`` on :file:`/backup/disk1/store1`
@ -166,6 +176,30 @@ To list existing datastores run:
│ store1 │ /backup/disk1/store1 │ This is my default storage. │ │ store1 │ /backup/disk1/store1 │ This is my default storage. │
└────────┴──────────────────────┴─────────────────────────────┘ └────────┴──────────────────────┴─────────────────────────────┘
You can change the settings of a datastore, for example to set a prune and
garbage collection schedule or retention settings, using the ``update``
subcommand, and view a datastore with the ``show`` subcommand:
.. code-block:: console
# proxmox-backup-manager datastore update store1 --keep-last 7 --prune-schedule daily --gc-schedule 'Tue 04:27'
# proxmox-backup-manager datastore show store1
┌────────────────┬─────────────────────────────┐
│ Name │ Value │
╞════════════════╪═════════════════════════════╡
│ name │ store1 │
├────────────────┼─────────────────────────────┤
│ path │ /backup/disk1/store1 │
├────────────────┼─────────────────────────────┤
│ comment │ This is my default storage. │
├────────────────┼─────────────────────────────┤
│ gc-schedule │ Tue 04:27 │
├────────────────┼─────────────────────────────┤
│ keep-last │ 7 │
├────────────────┼─────────────────────────────┤
│ prune-schedule │ daily │
└────────────────┴─────────────────────────────┘
Finally, it is possible to remove the datastore configuration: Finally, it is possible to remove the datastore configuration:
.. code-block:: console .. code-block:: console
@ -179,17 +213,58 @@ Finally, it is possible to remove the datastore configuration:
File Layout File Layout
^^^^^^^^^^^ ^^^^^^^^^^^
.. todo:: Add datastore file layout example After creating a datastore, the following default layout will appear:
.. code-block:: console
# ls -arilh /backup/disk1/store1
276493 -rw-r--r-- 1 backup backup 0 Jul 8 12:35 .lock
276490 drwxr-x--- 1 backup backup 1064960 Jul 8 12:35 .chunks
`.lock` is an empty file used for process locking.
The `.chunks` directory contains folders, named in ascending hexadecimal order from `0000` to `ffff`. These
directories will store the chunked data after a backup operation has been executed.
.. code-block:: console
# ls -arilh /backup/disk1/store1/.chunks
545824 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 ffff
545823 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fffe
415621 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fffd
415620 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fffc
353187 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fffb
344995 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fffa
144079 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fff9
144078 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fff8
144077 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fff7
...
403180 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 000c
403179 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 000b
403177 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 000a
402530 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0009
402513 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0008
402509 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0007
276509 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0006
276508 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0005
276507 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0004
276501 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0003
276499 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0002
276498 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0001
276494 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0000
276489 drwxr-xr-x 3 backup backup 4.0K Jul 8 12:35 ..
276490 drwxr-x--- 1 backup backup 1.1M Jul 8 12:35 .
User Management User Management
~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~
Proxmox Backup support several authentication realms, and you need to Proxmox Backup Server supports several authentication realms, and you need to
choose the realm when you add a new user. Possible realms are: choose the realm when you add a new user. Possible realms are:
:pam: Linux PAM standard authentication. Use this if you want to :pam: Linux PAM standard authentication. Use this if you want to
authenticate as Linux system user (Users needs to exist on the authenticate as a Linux system user (users need to exist on the
system). system).
:pbs: Proxmox Backup Server realm. This type stores hashed passwords in :pbs: Proxmox Backup Server realm. This type stores hashed passwords in
@ -216,8 +291,8 @@ normally want to add other users with less privileges:
# proxmox-backup-manager user create john@pbs --email john@example.com # proxmox-backup-manager user create john@pbs --email john@example.com
The create command lets you specify many option like ``--email`` or The create command lets you specify many options like ``--email`` or
``--password``, but you can update or change any of them using the ``--password``. You can update or change any of them using the
update command later: update command later:
.. code-block:: console .. code-block:: console
@ -225,11 +300,10 @@ update command later:
# proxmox-backup-manager user update john@pbs --firstname John --lastname Smith # proxmox-backup-manager user update john@pbs --firstname John --lastname Smith
# proxmox-backup-manager user update john@pbs --comment "An example user." # proxmox-backup-manager user update john@pbs --comment "An example user."
.. todo:: Mention how to set password without passing plaintext password as cli argument. .. todo:: Mention how to set password without passing plaintext password as cli argument.
The resulting use list looks like this: The resulting user list looks like this:
.. code-block:: console .. code-block:: console
@ -242,16 +316,16 @@ The resulting use list looks like this:
│ root@pam │ 1 │ │ │ │ │ Superuser │ │ root@pam │ 1 │ │ │ │ │ Superuser │
└──────────┴────────┴────────┴───────────┴──────────┴──────────────────┴──────────────────┘ └──────────┴────────┴────────┴───────────┴──────────┴──────────────────┴──────────────────┘
Newly created users do not have an permissions. Please read the next Newly created users do not have any permissions. Please read the next
section to learn how to set access permissions. section to learn how to set access permissions.
If you want to disable an user account, you can do that by setting ``--enable`` to ``0`` If you want to disable a user account, you can do that by setting ``--enable`` to ``0``
.. code-block:: console .. code-block:: console
# proxmox-backup-manager user update john@pbs --enable 0 # proxmox-backup-manager user update john@pbs --enable 0
Or completely remove the users with: Or completely remove the user with:
.. code-block:: console .. code-block:: console
@ -261,20 +335,20 @@ Or completely remove the users with:
Access Control Access Control
~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~
Users do not have any permission by default. Instead you need to By default new users do not have any permission. Instead you need to
specify what is allowed and what not. You can do this by assigning specify what is allowed and what is not. You can do this by assigning
roles to users on specific objects like datastores or remotes. The roles to users on specific objects like datastores or remotes. The
following roles exist: following roles exist:
**NoAccess**
Disable Access - nothing is allowed.
**Admin** **Admin**
The Administrator can do anything. The Administrator can do anything.
**Audit** **Audit**
An Auditor can view things, but is not allowed to change settings. An Auditor can view things, but is not allowed to change settings.
**NoAccess**
Disable Access - nothing is allowed.
**DatastoreAdmin** **DatastoreAdmin**
Can do anything on datastores. Can do anything on datastores.
@ -301,6 +375,63 @@ following roles exist:
Is allowed to read data from a remote. Is allowed to read data from a remote.
:term:`Remote`
~~~~~~~~~~~~~~
A remote refers to a separate Proxmox Backup Server installation and a user on that
installation, from which you can `sync` datastores to a local datastore with a
`Sync Job`.
To add a remote, you need its hostname or IP address, a userid and password on the
remote, and its certificate fingerprint. To get the fingerprint, use the
``proxmox-backup-manager cert info`` command on the remote.
.. code-block:: console
# proxmox-backup-manager cert info |grep Fingerprint
Fingerprint (sha256): 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe
Using the information specified above, add the remote with:
.. code-block:: console
# proxmox-backup-manager remote create pbs2 --host pbs2.mydomain.example --userid sync@pam --password 'SECRET' --fingerprint 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe
Use the ``list``, ``show``, ``update``, ``remove`` subcommands of
``proxmox-backup-manager remote`` to manage your remotes:
.. code-block:: console
# proxmox-backup-manager remote update pbs2 --host pbs2.example
# proxmox-backup-manager remote list
┌──────┬──────────────┬──────────┬───────────────────────────────────────────┬─────────┐
│ name │ host │ userid │ fingerprint │ comment │
╞══════╪══════════════╪══════════╪═══════════════════════════════════════════╪═════════╡
│ pbs2 │ pbs2.example │ sync@pam │64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe │ │
└──────┴──────────────┴──────────┴───────────────────────────────────────────┴─────────┘
# proxmox-backup-manager remote remove pbs2
Sync Jobs
~~~~~~~~~
Sync jobs are configured to pull the contents of a datastore on a `Remote` to a
local datastore. You can either start the sync job manually from the GUI or
provide it with a :term:`schedule` to run regularly. The
``proxmox-backup-manager sync-job`` command is used to manage sync jobs:
.. code-block:: console
# proxmox-backup-manager sync-job create pbs2-local --remote pbs2 --remote-store local --store local --schedule 'Wed 02:30'
# proxmox-backup-manager sync-job update pbs2-local --comment 'offsite'
# proxmox-backup-manager sync-job list
┌────────────┬───────┬────────┬──────────────┬───────────┬─────────┐
│ id │ store │ remote │ remote-store │ schedule │ comment │
╞════════════╪═══════╪════════╪══════════════╪═══════════╪═════════╡
│ pbs2-local │ local │ pbs2 │ local │ Wed 02:30 │ offsite │
└────────────┴───────┴────────┴──────────────┴───────────┴─────────┘
# proxmox-backup-manager sync-job remove pbs2-local
Backup Client usage Backup Client usage
------------------- -------------------
@ -308,16 +439,16 @@ Backup Client usage
The command line client is called :command:`proxmox-backup-client`. The command line client is called :command:`proxmox-backup-client`.
Respository Locations Repository Locations
~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~
The client uses the following notation to specify a datastore repository The client uses the following notation to specify a datastore repository
on the backup server. on the backup server.
[[username@]server:]datastore [[username@]server:]datastore
The default value for ``username`` is ``root``. If no server is specified, the The default value for ``username`` is ``root``. If no server is specified,
default is the local host (``localhost``). the default is the local host (``localhost``).
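For example, with the placeholder host ``backup-server`` and datastore ``store1`` used elsewhere in this guide, all of the following are valid repository specifications:
.. code-block:: console
store1                       (datastore "store1" on localhost, as user "root")
backup-server:store1         (datastore "store1" on host "backup-server", as user "root")
root@backup-server:store1    (the same, with the username given explicitly)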
You can pass the repository with the ``--repository`` command You can pass the repository with the ``--repository`` command
line option, or by setting the ``PBS_REPOSITORY`` environment line option, or by setting the ``PBS_REPOSITORY`` environment
@ -381,7 +512,7 @@ This section explains how to create a backup from within the machine. This can
be a physical host, a virtual machine, or a container. Such backups may contain file be a physical host, a virtual machine, or a container. Such backups may contain file
and image archives. There are no restrictions in this case. and image archives. There are no restrictions in this case.
.. note:: If you want to backup virtual machines or containers on Proxmov VE, see :ref:`pve-integration`. .. note:: If you want to backup virtual machines or containers on Proxmox VE, see :ref:`pve-integration`.
For the following example you need to have a backup server set up, working For the following example you need to have a backup server set up, working
credentials and need to know the repository name. credentials and need to know the repository name.
@ -416,7 +547,7 @@ environment variable ``PBS_REPOSITORY``.
.. code-block:: console .. code-block:: console
# export PBS_REPOSTORY=backup-server:store1 # export PBS_REPOSITORY=backup-server:store1
After this you can execute all commands without specifying the ``--repository`` After this you can execute all commands without specifying the ``--repository``
option. option.
@ -469,17 +600,17 @@ the given patterns. It is only possible to match files in this directory and its
all files ending in ``.tmp`` within the directory or subdirectories with the all files ending in ``.tmp`` within the directory or subdirectories with the
following pattern ``**/*.tmp``. following pattern ``**/*.tmp``.
``[...]`` matches a single character from any of the provided characters within ``[...]`` matches a single character from any of the provided characters within
the brackets. ``[!...]`` does the complementary and matches any singe character the brackets. ``[!...]`` does the complement and matches any single character
not contained within the brackets. It is also possible to specify ranges with two not contained within the brackets. It is also possible to specify ranges with two
characters separated by ``-``. For example, ``[a-z]`` matches any lowercase characters separated by ``-``. For example, ``[a-z]`` matches any lowercase
alphabetic character and ``[0-9]`` matches any one single digit. alphabetic character and ``[0-9]`` matches any one single digit.
The order of the glob match patterns defines if a file is included or The order of the glob match patterns defines whether a file is included or
excluded, later entries win over previous ones. excluded, that is to say later entries override previous ones.
This is also true for match patterns encountered deeper down the directory tree, This is also true for match patterns encountered deeper down the directory tree,
which can override a previous exclusion. which can override a previous exclusion.
Be aware that excluded directories will **not** be read by the backup client. Be aware that excluded directories will **not** be read by the backup client.
A ``.pxarexclude`` file in a subdirectory will have no effect. Thus, a ``.pxarexclude`` file in an excluded subdirectory will have no effect.
``.pxarexclude`` files are treated as regular files and will be included in the ``.pxarexclude`` files are treated as regular files and will be included in the
backup archive. backup archive.
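To illustrate the pattern types described above, a small, purely hypothetical ``.pxarexclude`` file could look like this:
.. code-block:: console
**/*.tmp
cache
data[0-9].bak
The first entry excludes all files ending in ``.tmp`` anywhere below the directory, the second is meant to match an entry named ``cache``, and the third matches names like ``data0.bak`` through ``data9.bak``.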
@ -531,8 +662,8 @@ Restoring this backup will result in:
Encryption Encryption
^^^^^^^^^^ ^^^^^^^^^^
Proxmox backup supports client side encryption with AES-256 in GCM_ Proxmox Backup supports client-side encryption with AES-256 in GCM_
mode. First you need to create an encryption key: mode. To set this up, you first need to create an encryption key:
.. code-block:: console .. code-block:: console
@ -564,13 +695,13 @@ variables ``PBS_PASSWORD`` and ``PBS_ENCRYPTION_PASSWORD``.
Restoring Data Restoring Data
~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~
The regular creation of backups is a necessary step to avoid data The regular creation of backups is a necessary step in avoiding data
loss. More important, however, is the restoration. It is good practice to perform loss. More important, however, is the restoration. It is good practice to perform
periodic recovery tests to ensure that you can access the data in periodic recovery tests to ensure that you can access the data in
case of problems. case of problems.
First, you need to find the snapshot which you want to restore. The snapshot First, you need to find the snapshot which you want to restore. The snapshot
command gives a list of all snapshots on the server: command provides a list of all the snapshots on the server:
.. code-block:: console .. code-block:: console
@ -602,8 +733,8 @@ backup.
# proxmox-backup-client restore host/elsa/2019-12-03T09:35:01Z root.pxar /target/path/ # proxmox-backup-client restore host/elsa/2019-12-03T09:35:01Z root.pxar /target/path/
To get the contents of any archive you can restore the ``index.json`` file in the To get the contents of any archive, you can restore the ``index.json`` file in the
repository and restore it to '-'. This will dump the content to the standard output. repository to the target path '-'. This will dump the contents to the standard output.
.. code-block:: console .. code-block:: console
@ -640,7 +771,7 @@ working directory and list directory contents in the archive.
``pwd`` shows the full path of the current working directory with respect to the ``pwd`` shows the full path of the current working directory with respect to the
archive root. archive root.
Being able to quickly search the contents of the archive is a often needed feature. Being able to quickly search the contents of the archive is a commonly needed feature.
That's where the catalog is most valuable. That's where the catalog is most valuable.
For example: For example:
@ -689,10 +820,10 @@ file archive as a read-only filesystem to a mountpoint on your host.
bin dev home lib32 libx32 media opt root sbin sys usr bin dev home lib32 libx32 media opt root sbin sys usr
boot etc lib lib64 lost+found mnt proc run srv tmp var boot etc lib lib64 lost+found mnt proc run srv tmp var
This allows you to access the full content of the archive in a seamless manner. This allows you to access the full contents of the archive in a seamless manner.
.. note:: As the FUSE connection needs to fetch and decrypt chunks from the .. note:: As the FUSE connection needs to fetch and decrypt chunks from the
backup servers datastore, this can cause some additional network and CPU backup server's datastore, this can cause some additional network and CPU
load on your host, depending on the operations you perform on the mounted load on your host, depending on the operations you perform on the mounted
filesystem. filesystem.
@ -726,6 +857,8 @@ To remove the ticket, issue a logout:
# proxmox-backup-client logout # proxmox-backup-client logout
.. _pruning:
Pruning and Removing Backups Pruning and Removing Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -787,7 +920,7 @@ backup is retained.
You can use the ``--dry-run`` option to test your settings. This only You can use the ``--dry-run`` option to test your settings. This only
shows the list of existing snapshots and which action prune would take. shows the list of existing snapshots and what actions prune would take.
.. code-block:: console .. code-block:: console
@ -829,6 +962,17 @@ unused data blocks are removed.
depending on the number of chunks and the speed of the underlying depending on the number of chunks and the speed of the underlying
disks. disks.
.. note:: The garbage collection will only remove chunks that haven't been used
for at least one day (exactly 24h 5m). This grace period is necessary because
chunks in use are marked by touching the chunk which updates the ``atime``
(access time) property. Filesystems are mounted with the ``relatime`` option
by default. This results in a better performance by only updating the
``atime`` property if the last access has been at least 24 hours ago. The
downside is, that touching a chunk within these 24 hours will not always
update its ``atime`` property.
Chunks in the grace period will be logged at the end of the garbage
collection task as *Pending removals*.
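If you want to check when a particular chunk was last touched, you can, for example, inspect its access time with ``stat`` (the datastore path and chunk file name below are placeholders):
.. code-block:: console
# stat --format='%x' /backup/disk1/store1/.chunks/0000/<chunk-digest>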
.. code-block:: console .. code-block:: console
@ -896,7 +1040,3 @@ After that you should be able to see storage status with:
.. include:: command-line-tools.rst .. include:: command-line-tools.rst
.. include:: services.rst .. include:: services.rst
.. include host system admin at the end
.. include:: sysadmin.rst

View File

@ -17,7 +17,7 @@
# add these directories to sys.path here. If the directory is relative to the # add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here. # documentation root, use os.path.abspath to make it absolute, like shown here.
# #
# import os import os
# import sys # import sys
# sys.path.insert(0, os.path.abspath('.')) # sys.path.insert(0, os.path.abspath('.'))
@ -45,8 +45,11 @@ PygmentsBridge.latex_formatter = CustomLatexFormatter
# Add any Sphinx extension module names here, as strings. They can be # Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones. # ones.
extensions = ["sphinx.ext.graphviz", "sphinx.ext.todo"] extensions = ["sphinx.ext.graphviz", "sphinx.ext.todo"]
todo_link_only = True
# Add any paths that contain templates here, relative to this directory. # Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates'] templates_path = ['_templates']
@ -76,9 +79,11 @@ author = 'Proxmox Support Team'
# built documents. # built documents.
# #
# The short X.Y version. # The short X.Y version.
version = '0.2' vstr = lambda s: '<devbuild>' if s is None else str(s)
version = vstr(os.getenv('DEB_VERSION_UPSTREAM'))
# The full version, including alpha/beta/rc tags. # The full version, including alpha/beta/rc tags.
release = '0.2-1' release = vstr(os.getenv('DEB_VERSION'))
# The language for content autogenerated by Sphinx. Refer to documentation # The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages. # for a list of supported languages.
@ -107,7 +112,7 @@ exclude_patterns = [
'pxar/man1.rst', 'pxar/man1.rst',
'epilog.rst', 'epilog.rst',
'pbs-copyright.rst', 'pbs-copyright.rst',
'sysadmin.rst', 'local-zfs.rst',
'package-repositories.rst', 'package-repositories.rst',
] ]

View File

@ -11,8 +11,10 @@
.. _Container: https://en.wikipedia.org/wiki/Container_(virtualization) .. _Container: https://en.wikipedia.org/wiki/Container_(virtualization)
.. _Zstandard: https://en.wikipedia.org/wiki/Zstandard .. _Zstandard: https://en.wikipedia.org/wiki/Zstandard
.. _Proxmox: https://www.proxmox.com .. _Proxmox: https://www.proxmox.com
.. _Proxmox Community Forum: https://forum.proxmox.com
.. _Proxmox Virtual Environment: https://www.proxmox.com/proxmox-ve .. _Proxmox Virtual Environment: https://www.proxmox.com/proxmox-ve
.. _Proxmox Backup: https://www.proxmox.com/proxmox-backup .. _Proxmox Backup: https://pbs.proxmox.com/wiki/index.php/Main_Page // FIXME
.. _PBS Development List: https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
.. _reStructuredText: https://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html .. _reStructuredText: https://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html
.. _Rust: https://www.rust-lang.org/ .. _Rust: https://www.rust-lang.org/
.. _SHA-256: https://en.wikipedia.org/wiki/SHA-2 .. _SHA-256: https://en.wikipedia.org/wiki/SHA-2

View File

@ -16,7 +16,7 @@ Glossary
Datastore Datastore
A place to store backups. A directory which contains the backup data. A place to store backups. A directory which contains the backup data.
The current implemenation is file-system based. The current implementation is file-system based.
`Rust`_ `Rust`_
@ -46,3 +46,19 @@ Glossary
kernel driver handles filesystem requests and sends them to a kernel driver handles filesystem requests and sends them to a
userspace application. userspace application.
Remote
A remote Proxmox Backup Server installation and credentials for a user on it.
You can pull datastores from a remote to a local datastore in order to
have redundant backups.
Schedule
Certain tasks, for example pruning and garbage collection, need to be
performed on a regular basis. Proxmox Backup Server uses a subset of the
`systemd Time and Date Specification
<https://www.freedesktop.org/software/systemd/man/systemd.time.html#>`_.
The subset currently supports time of day specifications and weekdays, in
addition to the shorthand expressions 'minutely', 'hourly', 'daily'.
There is no support for specifying timezones; the tasks are run in the
timezone configured on the server.
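The schedule values used in the examples of this guide, such as ``daily``, ``Wed 02:30`` or ``Tue 04:27``, follow this format, for instance:
.. code-block:: console
# proxmox-backup-manager datastore update store1 --gc-schedule 'Tue 04:27' --prune-schedule daily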

View File

@ -1,19 +1,20 @@
.. Proxmox Backup documentation master file .. Proxmox Backup documentation master file
Welcome to Proxmox Backup's documentation! Welcome to the Proxmox Backup documentation!
========================================== ============================================
Copyright (C) 2019 Proxmox Server Solutions GmbH Copyright (C) 2019-2020 Proxmox Server Solutions GmbH
Permission is granted to copy, distribute and/or modify this document Permission is granted to copy, distribute and/or modify this document under the
under the terms of the GNU Free Documentation License, Version 1.3 or terms of the GNU Free Documentation License, Version 1.3 or any later version
any later version published by the Free Software Foundation; with no published by the Free Software Foundation; with no Invariant Sections, no
Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included
copy of the license is included in the section entitled "GNU Free in the section entitled "GNU Free Documentation License".
Documentation License".
.. todolist::
.. only:: html
A `PDF` version of the documentation is `also available here <./proxmox-backup.pdf>`_
.. toctree:: .. toctree::
:maxdepth: 3 :maxdepth: 3
@ -22,6 +23,7 @@ Documentation License".
introduction.rst introduction.rst
installation.rst installation.rst
administration-guide.rst administration-guide.rst
sysadmin.rst
.. raw:: latex .. raw:: latex
@ -37,5 +39,14 @@ Documentation License".
glossary.rst glossary.rst
GFDL.rst GFDL.rst
.. only:: html and devbuild
.. toctree::
:maxdepth: 2
:caption: Developer Appendix
todos.rst
* :ref:`genindex` * :ref:`genindex`

View File

@ -83,6 +83,10 @@ In general this is not trivial, especially when LVM_ or ZFS_ is used.
The network configuration is completely up to you as well. The network configuration is completely up to you as well.
.. note:: You can access the webinterface of the Proxmox Backup Server with
your web browser, using HTTPS on port 8007. For example at
``https://<ip-or-dns-name>:8007``
Install Proxmox Backup server on `Proxmox VE`_ Install Proxmox Backup server on `Proxmox VE`_
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -99,6 +103,10 @@ After configuring the
server to store backups. Should the hypervisor server fail, you can server to store backups. Should the hypervisor server fail, you can
still access the backups. still access the backups.
.. note:: You can access the webinterface of the Proxmox Backup Server with
your web browser, using HTTPS on port 8007. For example at
``https://<ip-or-dns-name>:8007``
Client installation Client installation
------------------- -------------------

View File

@ -1,120 +1,169 @@
Introduction Introduction
============ ============
This documentation is written in :term:`reStructuredText` and formatted with :term:`Sphinx`. What is Proxmox Backup Server
-----------------------------
Proxmox Backup Server is an enterprise-class, client-server backup software
package that backs up :term:`virtual machine`\ s, :term:`container`\ s, and
physical hosts. It is specially optimized for the `Proxmox Virtual Environment`_
platform and allows you to back up your data securely, even between remote
sites, providing easy management with a web-based user interface.
What is Proxmox Backup Proxmox Backup Server supports deduplication, compression, and authenticated
---------------------- encryption (AE_). Using :term:`Rust` as the implementation language guarantees high
performance, low resource usage, and a safe, high-quality codebase.
Proxmox Backup is an enterprise class client-server backup software, It features strong client-side encryption. Thus, it's possible to
specially optimized for the `Proxmox Virtual Environment`_ to backup backup data to not fully trusted targets.
:term:`virtual machine`\ s and :term:`container`\ s. It is also
possible to backup physical hosts.
It supports deduplication, compression and authenticated encryption
(AE_). Using :term:`Rust` as implementation language guarantees high
performance, low resource usage, and a safe, high quality code base.
Encryption is done at the client side. This makes backups to not fully
trusted targets possible.
Architecture Architecture
------------ ------------
Proxmox Backup uses a `Client-server model`_. The server is Proxmox Backup Server uses a `client-server model`_. The server stores the
responsible to store the backup data and provides an API to create backup data and provides an API to create backups and restore data. With the
backups and restore data. It is possible to manage disks and API it's also possible to manage disks and other server side resources.
other server side resources using this API.
A backup client uses this API to access the backed up data, The backup client uses this API to access the backed up data. With the command
i.e. ``proxmox-backup-client`` is a command line tool to create line tool ``proxmox-backup-client`` you can create backups and restore data.
backups and restore data. We deliver an integrated client for For QEMU_ with `Proxmox Virtual Environment`_ we deliver an integrated client.
QEMU_ with `Proxmox Virtual Environment`_.
A single backup is allowed to contain several archives. For example, A single backup is allowed to contain several archives. For example, when you
when you backup a :term:`virtual machine`, each disk is stored as a backup a :term:`virtual machine`, each disk is stored as a separate archive
separate archive inside that backup. The VM configuration also gets an inside that backup. The VM configuration itself is stored as an extra file.
extra file. This way, it is easy to access and restore important parts This way, it is easy to access and restore only important parts of the backup
of the backup without having to scan the whole backup. without the need to scan the whole backup.
Main Features Main Features
------------- -------------
:Proxmox VE: The `Proxmox Virtual Environment`_ is fully :Support for Proxmox VE: The `Proxmox Virtual Environment`_ is fully
supported. You can backup :term:`virtual machine`\ s and supported and you can easily backup :term:`virtual machine`\ s and
:term:`container`\ s. :term:`container`\ s.
:GUI: We provide a graphical, web based user interface. :Performance: The whole software stack is written in :term:`Rust`,
to provide high speed and memory efficiency.
:Deduplication: Incremental backups produce large amounts of duplicate :Deduplication: Periodic backups produce large amounts of duplicate
data. The deduplication layer removes that redundancy and makes data. The deduplication layer avoids redundancy and minimizes the used
incremental backups small and space efficient. storage space.
:Data Integrity: The built in `SHA-256`_ checksum algorithm assures the :Incremental backups: Changes between backups are typically low. Reading and
sending only the delta reduces storage and network impact of backups.
:Data Integrity: The built-in `SHA-256`_ checksum algorithm assures the
accuracy and consistency of your backups. accuracy and consistency of your backups.
:Remote Sync: It is possible to efficiently synchronize data to remote :Remote Sync: It is possible to efficiently synchronize data to remote
sites. Only deltas containing new data are transferred. sites. Only deltas containing new data are transferred.
:Performance: The whole software stack is written in :term:`Rust`, :Compression: The ultra fast Zstandard_ compression is able to compress
to provide high speed and memory efficiency.
:Compression: Ultra fast Zstandard_ compression is able to compress
several gigabytes of data per second. several gigabytes of data per second.
:Encryption: Backups can be encrypted client-side using AES-256 in :Encryption: Backups can be encrypted on the client side using AES-256 in
GCM_ mode. This authenticated encryption mode (AE_) provides very Galois/Counter Mode (GCM_). This authenticated encryption (AE_) mode
high performance on modern hardware. provides very high performance on modern hardware.
:Open Source: No secrets. You have access to all the source code. :Web interface: Manage the Proxmox Backup Server with the integrated web-based
user interface.
:Support: Commercial support options are available from `Proxmox`_. :Open Source: No secrets. Proxmox Backup Server is free and open-source
software. The source code is licensed under AGPL, v3.
:Support: Enterprise support will be available from `Proxmox`_ once the beta
phase is over.
Why Backup? Reasons for Data Backup?
----------- ------------------------
The primary purpose of a backup is to protect against data loss. Data The main purpose of a backup is to protect against data loss. Data loss can be
loss can be caused by faulty hardware, but also by human error. caused by faulty hardware but also by human error.
A common mistake is to delete a file or folder which is still A common mistake is to accidentally delete a file or folder which is still
required. Virtualization can amplify this problem. It is now required. Virtualization can even amplify this problem; it easily happens that
easy to delete a whole virtual machine by pressing a single button. a whole virtual machine is deleted by just pressing a single button.
Backups can serve as a toolkit for administrators to temporarily For administrators, backups can serve as a useful toolkit for temporarily
store data. For example, it is common practice to perform full backups storing data. For example, it is common practice to perform full backups before
before installing major software updates. If something goes wrong, you installing major software updates. If something goes wrong, you can easily
can restore the previous state. restore the previous state.
Another reason for backups is legal requirements. Some data must be Another reason for backups is legal requirements. Some data, especially
kept in a safe place for several years by law, so that it can be accessed if business records, must be kept in a safe place for several years by law, so
required. that they can be accessed if required.
Data loss can be very costly as it can severely restrict your In general, data loss is very costly as it can severely damage your business.
business. Therefore, make sure that you perform a backup regularly Therefore, ensure that you perform regular backups and run restore tests.
and run restore tests.
Software Stack Software Stack
-------------- --------------
.. todo:: Eplain why we use Rust (and Flutter) Proxmox Backup Server consists of multiple components:
* server-daemon providing, among other things, a RESTful API, super-fast
asynchronous tasks, lightweight usage statistic collection, scheduling
events, strict separation of privileged and unprivileged execution
environments, ...
* JavaScript management webinterface
* management CLI tool for the server (`proxmox-backup-manager`)
* client CLI tool (`proxmox-backup-client`) to access the server easily from
any `Linux amd64` environment.
Everything outside of the web interface is written in the Rust programming
language.
"The Rust programming language helps you write faster, more reliable software.
High-level ergonomics and low-level control are often at odds in programming
language design; Rust challenges that conflict. Through balancing powerful
technical capacity and a great developer experience, Rust gives you the option
to control low-level details (such as memory usage) without all the hassle
traditionally associated with such control."
-- `The Rust Programming Language <https://doc.rust-lang.org/book/ch00-00-introduction.html>`_
.. todo:: further explain the software stack
Getting Help
------------
Community Support Forum
~~~~~~~~~~~~~~~~~~~~~~~
We always encourage our users to discuss and share their knowledge using the
`Proxmox Community Forum`_. The forum is moderated by the Proxmox support team.
The large user base is spread out all over the world. Needless to say, such
a large forum is a great place to get information.
Mailing Lists
~~~~~~~~~~~~~
Proxmox Backup Server is fully open-source and contributions are welcome! Here
is the primary communication channel for developers:
:Mailing list for developers: `PBS Development List`_
Bug Tracker
~~~~~~~~~~~
Proxmox runs a public bug tracker at `<https://bugzilla.proxmox.com>`_. If an
issue appears, file your report there. An issue can be a bug as well as a
request for a new feature or enhancement. The bug tracker helps to keep track
of the issue and will send a notification once it has been solved.
License License
------- -------
Copyright (C) 2019 Proxmox Server Solutions GmbH Copyright (C) 2019-2020 Proxmox Server Solutions GmbH
This software is written by Proxmox Server Solutions GmbH <support@proxmox.com> This software is written by Proxmox Server Solutions GmbH <support@proxmox.com>
Proxmox Backup is free software: you can redistribute it and/or modify Proxmox Backup Server is free and open source software: you can use it,
it under the terms of the GNU Affero General Public License as redistribute it, and/or modify it under the terms of the GNU Affero General
published by the Free Software Foundation, either version 3 of the Public License as published by the Free Software Foundation, either version 3
License, or (at your option) any later version. of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but This program is distributed in the hope that it will be useful, but
``WITHOUT ANY WARRANTY``; without even the implied warranty of ``WITHOUT ANY WARRANTY``; without even the implied warranty of

400
docs/local-zfs.rst Normal file
View File

@ -0,0 +1,400 @@
ZFS on Linux
------------
ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. There is no need to manually compile ZFS modules - all
packages are included.
By using ZFS, it's possible to achieve enterprise-class features with
low-budget hardware, as well as high-performance systems by leveraging
SSD caching or even SSD-only setups. ZFS can replace expensive hardware
RAID cards at the cost of moderate CPU and memory load, combined with easy
management.
General ZFS advantages
* Easy configuration and management with GUI and CLI.
* Reliable
* Protection against data corruption
* Data compression on file system level
* Snapshots
* Copy-on-write clone
* Various raid levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3
* Can use SSD for cache
* Self healing
* Continuous integrity checking
* Designed for high storage capacities
* Asynchronous replication over network
* Open Source
* Encryption
Hardware
~~~~~~~~~
ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much as you can get for your hardware/budget. To prevent
data corruption, we recommend the use of high quality ECC RAM.
If you use a dedicated cache and/or log disk, you should use an
enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
increase the overall performance significantly.
IMPORTANT: Do not use ZFS on top of a hardware controller which has its
own cache management. ZFS needs to communicate directly with the disks. An
HBA adapter, or something like an LSI controller flashed into ``IT`` mode,
is the way to go.
ZFS Administration
~~~~~~~~~~~~~~~~~~
This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are `zfs` and `zpool`. Both commands come with great
manual pages, which can be read with:
.. code-block:: console
# man zpool
# man zfs
Create a new zpool
^^^^^^^^^^^^^^^^^^
To create a new pool, at least one disk is needed. The `ashift` value
should be chosen so that 2 to the power of `ashift` is at least as large
as the sector size of the underlying disk.
.. code-block:: console
# zpool create -f -o ashift=12 <pool> <device>
Create a new pool with RAID-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Minimum 1 disk
.. code-block:: console
# zpool create -f -o ashift=12 <pool> <device1> <device2>
Create a new pool with RAID-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Minimum 2 disks
.. code-block:: console
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2>
Create a new pool with RAID-10
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Minimum 4 disks
.. code-block:: console
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
Create a new pool with RAIDZ-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Minimum 3 disks
.. code-block:: console
# zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>
Create a new pool with RAIDZ-2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Minimum 4 disks
.. code-block:: console
# zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
Create a new pool with cache (L2ARC)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is possible to use a dedicated cache drive partition to increase
the performance (use SSD).
As `<device>`, it is possible to use multiple devices, as shown in
"Create a new pool with RAID*".
.. code-block:: console
# zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
Create a new pool with log (ZIL)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is possible to use a dedicated cache drive partition to increase
the performance (SSD).
As `<device>`, it is possible to use multiple devices, as shown in
"Create a new pool with RAID*".
.. code-block:: console
# zpool create -f -o ashift=12 <pool> <device> log <log_device>
Add cache and log to an existing pool
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you have a pool without cache and log, first partition the SSD into
two partitions with `parted` or `gdisk`.
.. important:: Always use GPT partition tables.
The maximum size of a log device should be about half the size of
physical memory, so this is usually quite small. The rest of the SSD
can be used as cache.
.. code-block:: console
# zpool add -f <pool> log <device-part1> cache <device-part2>
Changing a failed device
^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console
# zpool replace -f <pool> <old device> <new device>
Changing a failed bootable device
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Depending on how Proxmox Backup was installed, it uses either `grub` or `systemd-boot`
as the bootloader.
The first steps of copying the partition table, reissuing GUIDs and replacing
the ZFS partition are the same. To make the system bootable from the new disk,
different steps are needed which depend on the bootloader in use.
.. code-block:: console
# sgdisk <healthy bootable device> -R <new device>
# sgdisk -G <new device>
# zpool replace -f <pool> <old zfs partition> <new zfs partition>
.. NOTE:: Use the `zpool status -v` command to monitor how far the resilvering process of the new disk has progressed.
With `systemd-boot`:
.. code-block:: console
# pve-efiboot-tool format <new disk's ESP>
# pve-efiboot-tool init <new disk's ESP>
.. NOTE:: `ESP` stands for EFI System Partition, which is set up as partition #2 on
bootable disks set up by the Proxmox VE installer since version 5.4. For details,
see the Proxmox VE documentation on setting up a new partition for use as a synced ESP.
With `grub`:
Usually `grub.cfg` is located in `/boot/grub/grub.cfg`
.. code-block:: console
# grub-install <new disk>
# grub-mkconfig -o /path/to/grub.cfg
Activate E-Mail Notification
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ZFS comes with an event daemon, which monitors events generated by the
ZFS kernel module. The daemon can also send emails on ZFS events like
pool errors. Newer ZFS packages ship the daemon in a separate package,
and you can install it using `apt-get`:
.. code-block:: console
# apt-get install zfs-zed
To activate the daemon it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:
.. code-block:: console
ZED_EMAIL_ADDR="root"
Please note that Proxmox Backup forwards mails addressed to `root` to the email
address configured for the root user.
IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
other settings are optional.
Limit ZFS Memory Usage
^^^^^^^^^^^^^^^^^^^^^^
It is good to use at most 50 percent (which is the default) of the
system memory for ZFS ARC, to prevent performance degradation on the
host.
`/etc/modprobe.d/zfs.conf` and insert:
.. code-block:: console
options zfs zfs_arc_max=8589934592
This example setting limits the usage to 8GB.
.. IMPORTANT:: If your root file system is ZFS you must update your initramfs every time this value changes:
.. code-block:: console
# update-initramfs -u
SWAP on ZFS
^^^^^^^^^^^
Swap space created on a zvol may cause problems, such as blocking the
server or generating a high IO load, often seen when starting a backup
to an external storage.
We strongly recommend using enough memory, so that you normally do not
run into low-memory situations. Should you need or want to add swap, it is
preferred to create a partition on a physical disk and use it as a swap device.
You can leave some space free for this purpose in the advanced options of the
installer. Additionally, you can lower the `swappiness` value.
A good value for servers is 10:
.. code-block:: console
# sysctl -w vm.swappiness=10
To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:
.. code-block:: console
vm.swappiness = 10
.. table:: Linux kernel `swappiness` parameter values
:widths: auto
==================== ===============================================================
Value Strategy
==================== ===============================================================
vm.swappiness = 0 The kernel will swap only to avoid an 'out of memory' condition
vm.swappiness = 1 Minimum amount of swapping without disabling it entirely.
vm.swappiness = 10 Sometimes recommended to improve performance when sufficient memory exists in a system.
vm.swappiness = 60 The default value.
vm.swappiness = 100 The kernel will swap aggressively.
==================== ===============================================================
ZFS Compression
^^^^^^^^^^^^^^^
To activate compression:
.. code-block:: console
# zpool set compression=lz4 <pool>
We recommend using the `lz4` algorithm, since it adds very little CPU overhead.
Other algorithms such as `lzjb` and `gzip-N` (where `N` is an integer `1-9` representing
the compression ratio, 1 is fastest and 9 is best compression) are also available.
Depending on the algorithm and how compressible the data is, having compression enabled can even increase
I/O performance.
You can disable compression at any time with:
.. code-block:: console
# zfs set compression=off <dataset>
Only new blocks will be affected by this change.
ZFS Special Device
^^^^^^^^^^^^^^^^^^
Since version 0.8.0 ZFS supports `special` devices. A `special` device in a
pool is used to store metadata, deduplication tables, and optionally small
file blocks.
A `special` device can improve the speed of a pool consisting of slow spinning
hard disks with a lot of metadata changes. For example, workloads that involve
creating, updating or deleting a large number of files will benefit from the
presence of a `special` device. ZFS datasets can also be configured to store
whole small files on the `special` device which can further improve the
performance. Use fast SSDs for the `special` device.
.. IMPORTANT:: The redundancy of the `special` device should match that of the
pool, since the `special` device is a point of failure for the whole pool.
.. WARNING:: Adding a `special` device to a pool cannot be undone!
Create a pool with `special` device and RAID-1:
.. code-block:: console
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>
Adding a `special` device to an existing pool with RAID-1:
.. code-block:: console
# zpool add <pool> special mirror <device1> <device2>
ZFS datasets expose the `special_small_blocks=<size>` property. `size` can be
`0` to disable storing small file blocks on the `special` device or a power of
two in the range between `512B` and `128K`. After setting the property, new file
blocks smaller than `size` will be allocated on the `special` device.
.. IMPORTANT:: If the value for `special_small_blocks` is greater than or equal to
the `recordsize` (default `128K`) of the dataset, *all* data will be written to
the `special` device, so be careful!
Setting the `special_small_blocks` property on a pool will change the default
value of that property for all child ZFS datasets (for example all containers
in the pool will opt in for small file blocks).
Opt in for all files smaller than 4K-blocks pool-wide:
.. code-block:: console
# zfs set special_small_blocks=4K <pool>
Opt in for small file blocks for a single dataset:
.. code-block:: console
# zfs set special_small_blocks=4K <pool>/<filesystem>
Opt out from small file blocks for a single dataset:
.. code-block:: console
# zfs set special_small_blocks=0 <pool>/<filesystem>
Troubleshooting
^^^^^^^^^^^^^^^
Corrupted cachefile
In case of a corrupted ZFS cachefile, some volumes may not be mounted during
boot until mounted manually later.
For each pool, run:
.. code-block:: console
# zpool set cachefile=/etc/zfs/zpool.cache POOLNAME
and afterwards update the `initramfs` by running:
.. code-block:: console
# update-initramfs -u -k all
and finally reboot your node.
Sometimes the ZFS cachefile can get corrupted, and `zfs-import-cache.service`
doesn't import the pools that aren't present in the cachefile.
Another workaround to this problem is enabling the `zfs-import-scan.service`,
which searches and imports pools via device scanning (usually slower).
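On a systemd-based installation, a possible way to enable that service (a sketch, assuming the unit name mentioned above) is:
.. code-block:: console
# systemctl enable zfs-import-scan.service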

View File

@ -3,100 +3,150 @@
Debian Package Repositories Debian Package Repositories
--------------------------- ---------------------------
All Debian based systems use APT_ as package All Debian based systems use APT_ as package management tool. The list of
management tool. The list of repositories is defined in repositories is defined in ``/etc/apt/sources.list`` and ``.list`` files found
``/etc/apt/sources.list`` and ``.list`` files found in the in the ``/etc/apt/sources.d/`` directory. Updates can be installed directly
``/etc/apt/sources.d/`` directory. Updates can be installed directly with with the ``apt`` command line tool, or via the GUI.
the ``apt`` command line tool, or via the GUI.
APT_ ``sources.list`` files list one package repository per line, with APT_ ``sources.list`` files list one package repository per line, with the most
the most preferred source listed first. Empty lines are ignored and a preferred source listed first. Empty lines are ignored and a ``#`` character
``#`` character anywhere on a line marks the remainder of that line as a anywhere on a line marks the remainder of that line as a comment. The
comment. The information available from the configured sources is information available from the configured sources is acquired by ``apt
acquired by ``apt update``. update``.
.. code-block:: sources.list .. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list`` :caption: File: ``/etc/apt/sources.list``
deb http://ftp.debian.org/debian buster main contrib deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib deb http://ftp.debian.org/debian buster-updates main contrib
# security updates # security updates
deb http://security.debian.org/debian-security buster/updates main contrib deb http://security.debian.org/debian-security buster/updates main contrib
.. FIXME for 7.0: change security update suite to bullseye-security .. FIXME for 7.0: change security update suite to bullseye-security
In addition, Proxmox provides three different package repositories for In addition, you need a package repository from Proxmox to get the backup
the backup server binaries. server updates.
`Proxmox Backup`_ Enterprise Repository During the Proxmox Backup beta phase only one repository (pbstest) will be
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ available. Once released, an Enterprise repository for production use and a
no-subscription repository will be provided.
This is the default, stable, and recommended repository. It is available for SecureApt
all `Proxmox Backup`_ subscription users. It contains the most stable packages, ~~~~~~~~~
and is suitable for production use. The ``pbs-enterprise`` repository is
enabled by default:
.. code-block:: sources.list The `Release` files in the repositories are signed with GnuPG. APT is using
:caption: File: ``/etc/apt/sources.list.d/pbs-enterprise.list`` these signatures to verify that all packages are from a trusted source.
deb https://enterprise.proxmox.com/debian/pbs buster pbs-enterprise If you install Proxmox Backup Server from an official ISO image, the key for
verification is already installed.
If you install Proxmox Backup Server on top of Debian, download and install the
key with the following commands:
.. code-block:: console
# wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
Verify the SHA512 checksum afterwards with:
.. code-block:: console
# sha512sum /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
The output should be:
.. code-block:: console
acca6f416917e8e11490a08a1e2842d500b3a5d9f322c6319db0927b2901c3eae23cfb5cd5df6facf2b57399d3cfa52ad7769ebdd75d9b204549ca147da52626 /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
and the md5sum:
.. code-block:: console
# md5sum /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
Here, the output should be:
.. code-block:: console
f3f6c5a3a67baf38ad178e5ff1ee270c /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
.. comment
`Proxmox Backup`_ Enterprise Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This will be the default, stable, and recommended repository. It is available for
all `Proxmox Backup`_ subscription users. It contains the most stable packages,
and is suitable for production use. The ``pbs-enterprise`` repository is
enabled by default:
.. note:: During the Proxmox Backup beta phase only one repository (pbstest)
will be available.
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list.d/pbs-enterprise.list``
deb https://enterprise.proxmox.com/debian/pbs buster pbs-enterprise
To never miss important security fixes, the superuser (``root@pam`` user) is To never miss important security fixes, the superuser (``root@pam`` user) is
notified via email about new packages as soon as they are available. The notified via email about new packages as soon as they are available. The
change-log and details of each package can be viewed in the GUI (if available). change-log and details of each package can be viewed in the GUI (if available).
Please note that you need a valid subscription key to access this Please note that you need a valid subscription key to access this
repository. More information regarding subscription levels and pricing can be repository. More information regarding subscription levels and pricing can be
found at https://www.proxmox.com/en/proxmox-backup/pricing. found at https://www.proxmox.com/en/proxmox-backup/pricing.
.. note:: You can disable this repository by commenting out the above .. note:: You can disable this repository by commenting out the above
line using a `#` (at the start of the line). This prevents error line using a `#` (at the start of the line). This prevents error
messages if you do not have a subscription key. Please configure the messages if you do not have a subscription key. Please configure the
``pbs-no-subscription`` repository in that case. ``pbs-no-subscription`` repository in that case.
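For illustration (a sketch, not part of the original file), the disabled entry would then look like this:
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list.d/pbs-enterprise.list``
# deb https://enterprise.proxmox.com/debian/pbs buster pbs-enterprise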
`Proxmox Backup`_ No-Subscription Repository `Proxmox Backup`_ No-Subscription Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As the name suggests, you do not need a subscription key to access As the name suggests, you do not need a subscription key to access
this repository. It can be used for testing and non-production this repository. It can be used for testing and non-production
use. It is not recommended to use it on production servers, because these use. It is not recommended to use it on production servers, because these
packages are not always heavily tested and validated. packages are not always heavily tested and validated.
We recommend to configure this repository in ``/etc/apt/sources.list``. We recommend to configure this repository in ``/etc/apt/sources.list``.
.. code-block:: sources.list .. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list`` :caption: File: ``/etc/apt/sources.list``
deb http://ftp.debian.org/debian buster main contrib deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib deb http://ftp.debian.org/debian buster-updates main contrib
# PBS pbs-no-subscription repository provided by proxmox.com, # PBS pbs-no-subscription repository provided by proxmox.com,
# NOT recommended for production use # NOT recommended for production use
deb http://download.proxmox.com/debian/bps buster pbs-no-subscription deb http://download.proxmox.com/debian/pbs buster pbs-no-subscription
# security updates # security updates
deb http://security.debian.org/debian-security buster/updates main contrib deb http://security.debian.org/debian-security buster/updates main contrib
`Proxmox Backup`_ Test Repository `Proxmox Backup`_ Beta Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Finally, there is a repository called ``pbstest``. This one contains the During the public beta, there is a repository called ``pbstest``. This one
latest packages and is heavily used by developers to test new contains the latest packages and is heavily used by developers to test new
features. features.
.. warning:: the ``pbstest`` repository should (as the name implies) .. .. warning:: the ``pbstest`` repository should (as the name implies)
only be used to test new features or bug fixes. only be used to test new features or bug fixes.
You can configure this using ``/etc/apt/sources.list`` by You can configure this using ``/etc/apt/sources.list`` by adding the following
adding the following line: line:
.. code-block:: sources.list .. code-block:: sources.list
:caption: sources.list entry for ``pbstest`` :caption: sources.list entry for ``pbstest``
deb http://download.proxmox.com/debian/bps buster pbstest deb http://download.proxmox.com/debian/pbs buster pbstest
If you installed Proxmox Backup Server from the official beta ISO, you should
have this repository already configured in
``/etc/apt/sources.list.d/pbstest-beta.list``.
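Whenever repositories are added or changed, the package index needs to be refreshed before new packages show up; a minimal example using standard Debian tooling (not part of the original text):
.. code-block:: console
# apt update
# apt full-upgrade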

View File

@ -24,7 +24,7 @@ This daemon is normally started and managed as ``systemd`` service::
systemctl status proxmox-backup-proxy systemctl status proxmox-backup-proxy
For debugging, you can start the daemon in forground using:: For debugging, you can start the daemon in foreground using::
proxmox-backup-proxy proxmox-backup-proxy

View File

@ -1,5 +1,5 @@
Host System Administration Host System Administration
-------------------------- ==========================
`Proxmox Backup`_ is based on the famous Debian_ Linux `Proxmox Backup`_ is based on the famous Debian_ Linux
distribution. That means that you have access to the whole world of distribution. That means that you have access to the whole world of
@ -23,8 +23,4 @@ either explain things which are different on `Proxmox Backup`_, or
tasks which are commonly used on `Proxmox Backup`_. For other topics, tasks which are commonly used on `Proxmox Backup`_. For other topics,
please refer to the standard Debian documentation. please refer to the standard Debian documentation.
ZFS .. include:: local-zfs.rst
~~~
.. todo:: Add local ZFS admin guide (local.zfs.adoc)

docs/todos.rst Normal file
View File

@ -0,0 +1,6 @@
Documentation Todo List
=======================
This is an auto-generated list of the todo references in the documentation.
.. todolist::

View File

@ -7,7 +7,7 @@ DYNAMIC_UNITS := \
proxmox-backup.service \ proxmox-backup.service \
proxmox-backup-proxy.service proxmox-backup-proxy.service
all: $(UNITS) $(DYNAMIC_UNITS) all: $(UNITS) $(DYNAMIC_UNITS) pbstest-beta.list
clean: clean:
rm -f $(DYNAMIC_UNITS) rm -f $(DYNAMIC_UNITS)

etc/pbstest-beta.list Normal file
View File

@ -0,0 +1 @@
deb http://download.proxmox.com/debian/pbs buster pbstest

View File

@ -2,7 +2,7 @@ use anyhow::{Error};
use proxmox_backup::client::*; use proxmox_backup::client::*;
async fn upload_speed() -> Result<usize, Error> { async fn upload_speed() -> Result<f64, Error> {
let host = "localhost"; let host = "localhost";
let datastore = "store2"; let datastore = "store2";
@ -20,7 +20,7 @@ async fn upload_speed() -> Result<usize, Error> {
let client = BackupWriter::start(client, None, datastore, "host", "speedtest", backup_time, false).await?; let client = BackupWriter::start(client, None, datastore, "host", "speedtest", backup_time, false).await?;
println!("start upload speed test"); println!("start upload speed test");
let res = client.upload_speedtest().await?; let res = client.upload_speedtest(true).await?;
Ok(res) Ok(res)
} }

View File

@ -4,7 +4,6 @@ pub mod backup;
pub mod config; pub mod config;
pub mod node; pub mod node;
pub mod reader; pub mod reader;
mod subscription;
pub mod status; pub mod status;
pub mod types; pub mod types;
pub mod version; pub mod version;
@ -26,7 +25,6 @@ pub const SUBDIRS: SubdirMap = &[
("pull", &pull::ROUTER), ("pull", &pull::ROUTER),
("reader", &reader::ROUTER), ("reader", &reader::ROUTER),
("status", &status::ROUTER), ("status", &status::ROUTER),
("subscription", &subscription::ROUTER),
("version", &version::ROUTER), ("version", &version::ROUTER),
]; ];

View File

@ -13,15 +13,22 @@ use crate::auth_helpers::*;
use crate::api2::types::*; use crate::api2::types::*;
use crate::config::cached_user_info::CachedUserInfo; use crate::config::cached_user_info::CachedUserInfo;
use crate::config::acl::PRIV_PERMISSIONS_MODIFY; use crate::config::acl::{PRIVILEGES, PRIV_PERMISSIONS_MODIFY};
pub mod user; pub mod user;
pub mod domain; pub mod domain;
pub mod acl; pub mod acl;
pub mod role; pub mod role;
fn authenticate_user(username: &str, password: &str) -> Result<(), Error> { /// returns Ok(true) if a ticket has to be created
/// and Ok(false) if not
fn authenticate_user(
username: &str,
password: &str,
path: Option<String>,
privs: Option<String>,
port: Option<u16>,
) -> Result<bool, Error> {
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
if !user_info.is_active_user(&username) { if !user_info.is_active_user(&username) {
@ -33,14 +40,43 @@ fn authenticate_user(username: &str, password: &str) -> Result<(), Error> {
if password.starts_with("PBS:") { if password.starts_with("PBS:") {
if let Ok((_age, Some(ticket_username))) = tools::ticket::verify_rsa_ticket(public_auth_key(), "PBS", password, None, -300, ticket_lifetime) { if let Ok((_age, Some(ticket_username))) = tools::ticket::verify_rsa_ticket(public_auth_key(), "PBS", password, None, -300, ticket_lifetime) {
if ticket_username == username { if ticket_username == username {
return Ok(()); return Ok(true);
} else { } else {
bail!("ticket login failed - wrong username"); bail!("ticket login failed - wrong username");
} }
} }
} else if password.starts_with("PBSTERM:") {
if path.is_none() || privs.is_none() || port.is_none() {
bail!("cannot check termnal ticket without path, priv and port");
}
let path = path.unwrap();
let privilege_name = privs.unwrap();
let port = port.unwrap();
if let Ok((_age, _data)) =
tools::ticket::verify_term_ticket(public_auth_key(), &username, &path, port, password)
{
for (name, privilege) in PRIVILEGES {
if *name == privilege_name {
let mut path_vec = Vec::new();
for part in path.split('/') {
if part != "" {
path_vec.push(part);
}
}
user_info.check_privs(username, &path_vec, *privilege, false)?;
return Ok(false);
}
}
bail!("No such privilege");
}
} }
crate::auth::authenticate_user(username, password) let _ = crate::auth::authenticate_user(username, password)?;
Ok(true)
} }
#[api( #[api(
@ -52,6 +88,21 @@ fn authenticate_user(username: &str, password: &str) -> Result<(), Error> {
password: { password: {
schema: PASSWORD_SCHEMA, schema: PASSWORD_SCHEMA,
}, },
path: {
type: String,
description: "Path for verifying terminal tickets.",
optional: true,
},
privs: {
type: String,
description: "Privilege for verifying terminal tickets.",
optional: true,
},
port: {
type: Integer,
description: "Port for verifying terminal tickets.",
optional: true,
},
}, },
}, },
returns: { returns: {
@ -78,11 +129,16 @@ fn authenticate_user(username: &str, password: &str) -> Result<(), Error> {
/// Create or verify authentication ticket. /// Create or verify authentication ticket.
/// ///
/// Returns: An authentication ticket with additional infos. /// Returns: An authentication ticket with additional infos.
fn create_ticket(username: String, password: String) -> Result<Value, Error> { fn create_ticket(
match authenticate_user(&username, &password) { username: String,
Ok(_) => { password: String,
path: Option<String>,
let ticket = assemble_rsa_ticket( private_auth_key(), "PBS", Some(&username), None)?; privs: Option<String>,
port: Option<u16>,
) -> Result<Value, Error> {
match authenticate_user(&username, &password, path, privs, port) {
Ok(true) => {
let ticket = assemble_rsa_ticket(private_auth_key(), "PBS", Some(&username), None)?;
let token = assemble_csrf_prevention_token(csrf_secret(), &username); let token = assemble_csrf_prevention_token(csrf_secret(), &username);
@ -94,6 +150,9 @@ fn create_ticket(username: String, password: String) -> Result<Value, Error> {
"CSRFPreventionToken": token, "CSRFPreventionToken": token,
})) }))
} }
Ok(false) => Ok(json!({
"username": username,
})),
Err(err) => { Err(err) => {
let client_ip = "unknown"; // $rpcenv->get_client_ip() || ''; let client_ip = "unknown"; // $rpcenv->get_client_ip() || '';
log::error!("authentication failure; rhost={} user={} msg={}", client_ip, username, err.to_string()); log::error!("authentication failure; rhost={} user={} msg={}", client_ip, username, err.to_string());

View File

@ -46,20 +46,20 @@ fn check_backup_owner(store: &DataStore, group: &BackupGroup, userid: &str) -> R
fn read_backup_index(store: &DataStore, backup_dir: &BackupDir) -> Result<Vec<BackupContent>, Error> { fn read_backup_index(store: &DataStore, backup_dir: &BackupDir) -> Result<Vec<BackupContent>, Error> {
let (manifest, index_size) = store.load_manifest(backup_dir)?; let (manifest, manifest_crypt_mode, index_size) = store.load_manifest(backup_dir)?;
let mut result = Vec::new(); let mut result = Vec::new();
for item in manifest.files() { for item in manifest.files() {
result.push(BackupContent { result.push(BackupContent {
filename: item.filename.clone(), filename: item.filename.clone(),
encrypted: item.encrypted, crypt_mode: Some(item.crypt_mode),
size: Some(item.size), size: Some(item.size),
}); });
} }
result.push(BackupContent { result.push(BackupContent {
filename: MANIFEST_BLOB_NAME.to_string(), filename: MANIFEST_BLOB_NAME.to_string(),
encrypted: Some(false), crypt_mode: Some(manifest_crypt_mode),
size: Some(index_size), size: Some(index_size),
}); });
@ -79,7 +79,11 @@ fn get_all_snapshot_files(
for file in &info.files { for file in &info.files {
if file_set.contains(file) { continue; } if file_set.contains(file) { continue; }
files.push(BackupContent { filename: file.to_string(), size: None, encrypted: None }); files.push(BackupContent {
filename: file.to_string(),
size: None,
crypt_mode: None,
});
} }
Ok(files) Ok(files)
@ -350,7 +354,15 @@ pub fn list_snapshots (
}, },
Err(err) => { Err(err) => {
eprintln!("error during snapshot file listing: '{}'", err); eprintln!("error during snapshot file listing: '{}'", err);
info.files.iter().map(|x| BackupContent { filename: x.to_string(), size: None, encrypted: None }).collect() info
.files
.iter()
.map(|x| BackupContent {
filename: x.to_string(),
size: None,
crypt_mode: None,
})
.collect()
}, },
}; };
@ -441,13 +453,13 @@ pub fn verify(
match (backup_type, backup_id, backup_time) { match (backup_type, backup_id, backup_time) {
(Some(backup_type), Some(backup_id), Some(backup_time)) => { (Some(backup_type), Some(backup_id), Some(backup_time)) => {
worker_id = format!("{}_{}_{}_{:08X}", store, backup_type, backup_id, backup_time);
let dir = BackupDir::new(backup_type, backup_id, backup_time); let dir = BackupDir::new(backup_type, backup_id, backup_time);
worker_id = format!("{}_{}", store, dir);
backup_dir = Some(dir); backup_dir = Some(dir);
} }
(Some(backup_type), Some(backup_id), None) => { (Some(backup_type), Some(backup_id), None) => {
worker_id = format!("{}_{}_{}", store, backup_type, backup_id);
let group = BackupGroup::new(backup_type, backup_id); let group = BackupGroup::new(backup_type, backup_id);
worker_id = format!("{}_{}", store, group);
backup_group = Some(group); backup_group = Some(group);
} }
(None, None, None) => { (None, None, None) => {
@ -523,7 +535,7 @@ macro_rules! add_common_prune_prameters {
pub const API_RETURN_SCHEMA_PRUNE: Schema = ArraySchema::new( pub const API_RETURN_SCHEMA_PRUNE: Schema = ArraySchema::new(
"Returns the list of snapshots and a flag indicating if there are kept or removed.", "Returns the list of snapshots and a flag indicating if there are kept or removed.",
PruneListItem::API_SCHEMA &PruneListItem::API_SCHEMA
).schema(); ).schema();
const API_METHOD_PRUNE: ApiMethod = ApiMethod::new( const API_METHOD_PRUNE: ApiMethod = ApiMethod::new(
@ -902,7 +914,7 @@ fn download_file_decoded(
let files = read_backup_index(&datastore, &backup_dir)?; let files = read_backup_index(&datastore, &backup_dir)?;
for file in files { for file in files {
if file.filename == file_name && file.encrypted == Some(true) { if file.filename == file_name && file.crypt_mode == Some(CryptMode::Encrypt) {
bail!("cannot decode '{}' - is encrypted", file_name); bail!("cannot decode '{}' - is encrypted", file_name);
} }
} }

View File

@ -6,7 +6,12 @@ use proxmox::http_err;
pub async fn create_download_response(path: PathBuf) -> Result<Response<Body>, Error> { pub async fn create_download_response(path: PathBuf) -> Result<Response<Body>, Error> {
let file = tokio::fs::File::open(path.clone()) let file = tokio::fs::File::open(path.clone())
.map_err(move |err| http_err!(BAD_REQUEST, format!("open file {:?} failed: {}", path.clone(), err))) .map_err(move |err| {
match err.kind() {
std::io::ErrorKind::NotFound => http_err!(NOT_FOUND, format!("open file {:?} failed - not found", path.clone())),
_ => http_err!(BAD_REQUEST, format!("open file {:?} failed: {}", path.clone(), err)),
}
})
.await?; .await?;
let payload = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new()) let payload = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new())

View File

@ -1,18 +1,289 @@
use proxmox::api::router::{Router, SubdirMap}; use std::net::TcpListener;
use proxmox::list_subdirs_api_method; use std::os::unix::io::AsRawFd;
pub mod tasks; use anyhow::{bail, format_err, Error};
mod time; use futures::{
pub mod network; future::{FutureExt, TryFutureExt},
select,
};
use hyper::body::Body;
use hyper::http::request::Parts;
use hyper::upgrade::Upgraded;
use nix::fcntl::{fcntl, FcntlArg, FdFlag};
use serde_json::{json, Value};
use tokio::io::{AsyncBufReadExt, BufReader};
use proxmox::api::router::{Router, SubdirMap};
use proxmox::api::{
api, schema::*, ApiHandler, ApiMethod, ApiResponseFuture, Permission, RpcEnvironment,
};
use proxmox::list_subdirs_api_method;
use proxmox::tools::websocket::WebSocket;
use proxmox::{identity, sortable};
use crate::api2::types::*;
use crate::config::acl::PRIV_SYS_CONSOLE;
use crate::server::WorkerTask;
use crate::tools;
pub mod disks;
pub mod dns; pub mod dns;
mod syslog;
mod journal; mod journal;
pub mod network;
pub(crate) mod rrd;
mod services; mod services;
mod status; mod status;
pub(crate) mod rrd; mod subscription;
pub mod disks; mod apt;
mod syslog;
pub mod tasks;
mod time;
pub const SHELL_CMD_SCHEMA: Schema = StringSchema::new("The command to run.")
.format(&ApiStringFormat::Enum(&[
EnumEntry::new("login", "Login"),
EnumEntry::new("upgrade", "Upgrade"),
]))
.schema();
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
cmd: {
schema: SHELL_CMD_SCHEMA,
optional: true,
},
},
},
returns: {
type: Object,
description: "Object with the user, ticket, port and upid",
properties: {
user: {
description: "",
type: String,
},
ticket: {
description: "",
type: String,
},
port: {
description: "",
type: String,
},
upid: {
description: "",
type: String,
},
}
},
access: {
description: "Restricted to users on realm 'pam'",
permission: &Permission::Privilege(&["system"], PRIV_SYS_CONSOLE, false),
}
)]
/// Call termproxy and return shell ticket
async fn termproxy(
cmd: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let userid = rpcenv
.get_user()
.ok_or_else(|| format_err!("unknown user"))?;
let (username, realm) = crate::auth::parse_userid(&userid)?;
if realm != "pam" {
bail!("only pam users can use the console");
}
let path = "/system";
// use port 0 and let the kernel decide which port is free
let listener = TcpListener::bind("localhost:0")?;
let port = listener.local_addr()?.port();
let ticket = tools::ticket::assemble_term_ticket(
crate::auth_helpers::private_auth_key(),
&userid,
&path,
port,
)?;
let mut command = Vec::new();
match cmd.as_ref().map(|x| x.as_str()) {
Some("login") | None => {
command.push("login");
if userid == "root@pam" {
command.push("-f");
command.push("root");
}
}
Some("upgrade") => {
if userid != "root@pam" {
bail!("only root@pam can upgrade");
}
// TODO: add nicer/safer wrapper like in PVE instead
command.push("sh");
command.push("-c");
command.push("apt full-upgrade; bash -l");
}
_ => bail!("invalid command"),
};
let upid = WorkerTask::spawn(
"termproxy",
None,
&username,
false,
move |worker| async move {
// move inside the worker so that it survives and does not close the port
// remove CLOEXEC from the listener so that we can reuse it in termproxy
let fd = listener.as_raw_fd();
let mut flags = match fcntl(fd, FcntlArg::F_GETFD) {
Ok(bits) => FdFlag::from_bits_truncate(bits),
Err(err) => bail!("could not get fd: {}", err),
};
flags.remove(FdFlag::FD_CLOEXEC);
if let Err(err) = fcntl(fd, FcntlArg::F_SETFD(flags)) {
bail!("could not set fd: {}", err);
}
let mut arguments: Vec<&str> = Vec::new();
let fd_string = fd.to_string();
arguments.push(&fd_string);
arguments.extend_from_slice(&[
"--path",
&path,
"--perm",
"Sys.Console",
"--authport",
"82",
"--port-as-fd",
"--",
]);
arguments.extend_from_slice(&command);
let mut cmd = tokio::process::Command::new("/usr/bin/termproxy");
cmd.args(&arguments)
.kill_on_drop(true)
.stdout(std::process::Stdio::piped())
.stderr(std::process::Stdio::piped());
let mut child = cmd.spawn().expect("error executing termproxy");
let stdout = child.stdout.take().expect("no child stdout handle");
let stderr = child.stderr.take().expect("no child stderr handle");
let worker_stdout = worker.clone();
let stdout_fut = async move {
let mut reader = BufReader::new(stdout).lines();
while let Some(line) = reader.next_line().await? {
worker_stdout.log(line);
}
Ok::<(), Error>(())
};
let worker_stderr = worker.clone();
let stderr_fut = async move {
let mut reader = BufReader::new(stderr).lines();
while let Some(line) = reader.next_line().await? {
worker_stderr.warn(line);
}
Ok::<(), Error>(())
};
select!{
res = child.fuse() => {
let exit_code = res?;
if !exit_code.success() {
match exit_code.code() {
Some(code) => bail!("termproxy exited with {}", code),
None => bail!("termproxy exited by signal"),
}
}
Ok(())
},
res = stdout_fut.fuse() => res,
res = stderr_fut.fuse() => res,
res = worker.abort_future().fuse() => res.map_err(Error::from),
}
},
)?;
Ok(json!({
"user": username,
"ticket": ticket,
"port": port,
"upid": upid,
}))
}
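As a usage sketch: a client holding a valid ticket and CSRF token from the login call could open a shell session like this; the ``/api2/json/nodes/<node>/termproxy`` path, the ``PBSAuthCookie`` cookie name and port 8007 are assumptions from usual Proxmox conventions, not confirmed by this diff. The response carries the user, ticket, port and upid needed for the follow-up vncwebsocket call.
.. code-block:: console
# curl -k -b "PBSAuthCookie=$TICKET" -H "CSRFPreventionToken: $CSRF" -d 'cmd=login' https://localhost:8007/api2/json/nodes/localhost/termproxy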
#[sortable]
pub const API_METHOD_WEBSOCKET: ApiMethod = ApiMethod::new(
&ApiHandler::AsyncHttp(&upgrade_to_websocket),
&ObjectSchema::new(
"Upgraded to websocket",
&sorted!([
("node", false, &NODE_SCHEMA),
(
"vncticket",
false,
&StringSchema::new("Terminal ticket").schema()
),
("port", false, &IntegerSchema::new("Terminal port").schema()),
]),
),
)
.access(
Some("The user needs Sys.Console on /system."),
&Permission::Privilege(&["system"], PRIV_SYS_CONSOLE, false),
);
fn upgrade_to_websocket(
parts: Parts,
req_body: Body,
param: Value,
_info: &ApiMethod,
rpcenv: Box<dyn RpcEnvironment>,
) -> ApiResponseFuture {
async move {
let username = rpcenv.get_user().unwrap();
let ticket = tools::required_string_param(&param, "vncticket")?.to_owned();
let port: u16 = tools::required_integer_param(&param, "port")? as u16;
// will be checked again by termproxy
tools::ticket::verify_term_ticket(
crate::auth_helpers::public_auth_key(),
&username,
&"/system",
port,
&ticket,
)?;
let (ws, response) = WebSocket::new(parts.headers)?;
crate::server::spawn_internal_task(async move {
let conn: Upgraded = match req_body.on_upgrade().map_err(Error::from).await {
Ok(upgraded) => upgraded,
_ => bail!("error"),
};
let local = tokio::net::TcpStream::connect(format!("localhost:{}", port)).await?;
ws.serve_connection(conn, local).await
});
Ok(response)
}
.boxed()
}
pub const SUBDIRS: SubdirMap = &[ pub const SUBDIRS: SubdirMap = &[
("apt", &apt::ROUTER),
("disks", &disks::ROUTER), ("disks", &disks::ROUTER),
("dns", &dns::ROUTER), ("dns", &dns::ROUTER),
("journal", &journal::ROUTER), ("journal", &journal::ROUTER),
@ -20,9 +291,15 @@ pub const SUBDIRS: SubdirMap = &[
("rrd", &rrd::ROUTER), ("rrd", &rrd::ROUTER),
("services", &services::ROUTER), ("services", &services::ROUTER),
("status", &status::ROUTER), ("status", &status::ROUTER),
("subscription", &subscription::ROUTER),
("syslog", &syslog::ROUTER), ("syslog", &syslog::ROUTER),
("tasks", &tasks::ROUTER), ("tasks", &tasks::ROUTER),
("termproxy", &Router::new().post(&API_METHOD_TERMPROXY)),
("time", &time::ROUTER), ("time", &time::ROUTER),
(
"vncwebsocket",
&Router::new().upgrade(&API_METHOD_WEBSOCKET),
),
]; ];
pub const ROUTER: Router = Router::new() pub const ROUTER: Router = Router::new()

src/api2/node/apt.rs Normal file
View File

@ -0,0 +1,268 @@
use apt_pkg_native::Cache;
use anyhow::{Error, bail};
use serde_json::{json, Value};
use proxmox::{list_subdirs_api_method, const_regex};
use proxmox::api::{api, RpcEnvironment, RpcEnvironmentType, Permission};
use proxmox::api::router::{Router, SubdirMap};
use crate::server::WorkerTask;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::api2::types::{APTUpdateInfo, NODE_SCHEMA, UPID_SCHEMA};
const_regex! {
VERSION_EPOCH_REGEX = r"^\d+:";
FILENAME_EXTRACT_REGEX = r"^.*/.*?_(.*)_Packages$";
}
// FIXME: Replace with call to 'apt changelog <pkg> --print-uris'. Currently
// not possible as our packages do not have a URI set in their Release file
fn get_changelog_url(
package: &str,
filename: &str,
source_pkg: &str,
version: &str,
source_version: &str,
origin: &str,
component: &str,
) -> Result<String, Error> {
if origin == "" {
bail!("no origin available for package {}", package);
}
if origin == "Debian" {
let source_version = (VERSION_EPOCH_REGEX.regex_obj)().replace_all(source_version, "");
let prefix = if source_pkg.starts_with("lib") {
source_pkg.get(0..4)
} else {
source_pkg.get(0..1)
};
let prefix = match prefix {
Some(p) => p,
None => bail!("cannot get starting characters of package name '{}'", package)
};
// note: security updates seem to not always upload a changelog for
// their package version, so this only works *most* of the time
return Ok(format!("https://metadata.ftp-master.debian.org/changelogs/main/{}/{}/{}_{}_changelog",
prefix, source_pkg, source_pkg, source_version));
} else if origin == "Proxmox" {
let version = (VERSION_EPOCH_REGEX.regex_obj)().replace_all(version, "");
let base = match (FILENAME_EXTRACT_REGEX.regex_obj)().captures(filename) {
Some(captures) => {
let base_capture = captures.get(1);
match base_capture {
Some(base_underscore) => base_underscore.as_str().replace("_", "/"),
None => bail!("incompatible filename, cannot find regex group")
}
},
None => bail!("incompatible filename, doesn't match regex")
};
return Ok(format!("http://download.proxmox.com/{}/{}_{}.changelog",
base, package, version));
}
bail!("unknown origin ({}) or component ({})", origin, component)
}
fn list_installed_apt_packages<F: Fn(&str, &str, &str) -> bool>(filter: F)
-> Vec<APTUpdateInfo> {
let mut ret = Vec::new();
// note: this is not an 'apt update', it just re-reads the cache from disk
let mut cache = Cache::get_singleton();
cache.reload();
let mut cache_iter = cache.iter();
loop {
let view = match cache_iter.next() {
Some(view) => view,
None => break
};
let current_version = match view.current_version() {
Some(vers) => vers,
None => continue
};
let candidate_version = match view.candidate_version() {
Some(vers) => vers,
// if there's no candidate (i.e. no update) get info of currently
// installed version instead
None => current_version.clone()
};
let package = view.name();
if filter(&package, &current_version, &candidate_version) {
let mut origin_res = "unknown".to_owned();
let mut section_res = "unknown".to_owned();
let mut priority_res = "unknown".to_owned();
let mut change_log_url = "".to_owned();
let mut short_desc = package.clone();
let mut long_desc = "".to_owned();
// get additional information via nested APT 'iterators'
let mut view_iter = view.versions();
while let Some(ver) = view_iter.next() {
if ver.version() == candidate_version {
if let Some(section) = ver.section() {
section_res = section;
}
if let Some(prio) = ver.priority_type() {
priority_res = prio;
}
// assume every package has only one origin file (not
// origin, but origin *file*, for some reason those seem to
// be different concepts in APT)
let mut origin_iter = ver.origin_iter();
let origin = origin_iter.next();
if let Some(origin) = origin {
if let Some(sd) = origin.short_desc() {
short_desc = sd;
}
if let Some(ld) = origin.long_desc() {
long_desc = ld;
}
// the package files appear in priority order, meaning
// the one for the candidate version is first
let mut pkg_iter = origin.file();
let pkg_file = pkg_iter.next();
if let Some(pkg_file) = pkg_file {
if let Some(origin_name) = pkg_file.origin() {
origin_res = origin_name;
}
let filename = pkg_file.file_name();
let source_pkg = ver.source_package();
let source_ver = ver.source_version();
let component = pkg_file.component();
// build changelog URL from gathered information
// ignore errors, use empty changelog instead
let url = get_changelog_url(&package, &filename, &source_pkg,
&candidate_version, &source_ver, &origin_res, &component);
if let Ok(url) = url {
change_log_url = url;
}
}
}
break;
}
}
let info = APTUpdateInfo {
package,
title: short_desc,
arch: view.arch(),
description: long_desc,
change_log_url,
origin: origin_res,
version: candidate_version,
old_version: current_version,
priority: priority_res,
section: section_res,
};
ret.push(info);
}
}
return ret;
}
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
},
},
returns: {
description: "A list of packages with available updates.",
type: Array,
items: { type: APTUpdateInfo },
},
access: {
permission: &Permission::Privilege(&[], PRIV_SYS_AUDIT, false),
},
)]
/// List available APT updates
fn apt_update_available(_param: Value) -> Result<Value, Error> {
let ret = list_installed_apt_packages(|_pkg, cur_ver, can_ver| cur_ver != can_ver);
Ok(json!(ret))
}
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
quiet: {
description: "Only produces output suitable for logging, omitting progress indicators.",
type: bool,
default: false,
optional: true,
},
},
},
returns: {
schema: UPID_SCHEMA,
},
access: {
permission: &Permission::Privilege(&[], PRIV_SYS_MODIFY, false),
},
)]
/// Update the APT database
pub fn apt_update_database(
quiet: Option<bool>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let username = rpcenv.get_user().unwrap();
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let quiet = quiet.unwrap_or(API_METHOD_APT_UPDATE_DATABASE_PARAM_DEFAULT_QUIET);
let upid_str = WorkerTask::new_thread("aptupdate", None, &username.clone(), to_stdout, move |worker| {
if !quiet { worker.log("starting apt-get update") }
// TODO: set proxy /etc/apt/apt.conf.d/76pbsproxy like PVE
let mut command = std::process::Command::new("apt-get");
command.arg("update");
let output = crate::tools::run_command(command, None)?;
if !quiet { worker.log(output) }
// TODO: add mail notify for new updates like PVE
Ok(())
})?;
Ok(upid_str)
}
const SUBDIRS: SubdirMap = &[
("update", &Router::new()
.get(&API_METHOD_APT_UPDATE_AVAILABLE)
.post(&API_METHOD_APT_UPDATE_DATABASE)
),
];
pub const ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))
.subdirs(SUBDIRS);

View File

@ -26,10 +26,10 @@ pub mod zfs;
schema: NODE_SCHEMA, schema: NODE_SCHEMA,
}, },
skipsmart: { skipsmart: {
description: "Skip smart checks.", description: "Skip smart checks.",
type: bool, type: bool,
optional: true, optional: true,
default: false, default: false,
}, },
"usage-type": { "usage-type": {
type: DiskUsageType, type: DiskUsageType,

View File

@ -41,6 +41,9 @@ pub const ZFS_ASHIFT_SCHEMA: Schema = IntegerSchema::new(
.default(12) .default(12)
.schema(); .schema();
pub const ZPOOL_NAME_SCHEMA: Schema = StringSchema::new("ZFS Pool Name")
.format(&ApiStringFormat::Pattern(&ZPOOL_NAME_REGEX))
.schema();
#[api( #[api(
default: "On", default: "On",
@ -157,7 +160,7 @@ pub fn list_zpools() -> Result<Vec<ZpoolListItem>, Error> {
schema: NODE_SCHEMA, schema: NODE_SCHEMA,
}, },
name: { name: {
schema: DATASTORE_SCHEMA, schema: ZPOOL_NAME_SCHEMA,
}, },
}, },
}, },

View File

@ -10,6 +10,7 @@ use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
use crate::api2::types::*; use crate::api2::types::*;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_POWER_MANAGEMENT}; use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_POWER_MANAGEMENT};
use crate::tools::cert::CertInfo;
#[api( #[api(
input: { input: {
@ -46,14 +47,24 @@ use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_POWER_MANAGEMENT};
description: "Total CPU usage since last query.", description: "Total CPU usage since last query.",
optional: true, optional: true,
}, },
} info: {
type: Object,
description: "contains node information",
properties: {
fingerprint: {
description: "The SSL Fingerprint",
type: String,
},
},
},
},
}, },
access: { access: {
permission: &Permission::Privilege(&["system", "status"], PRIV_SYS_AUDIT, false), permission: &Permission::Privilege(&["system", "status"], PRIV_SYS_AUDIT, false),
}, },
)] )]
/// Read node memory, CPU and (root) disk usage /// Read node memory, CPU and (root) disk usage
fn get_usage( fn get_status(
_param: Value, _param: Value,
_info: &ApiMethod, _info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment, _rpcenv: &mut dyn RpcEnvironment,
@ -63,6 +74,10 @@ fn get_usage(
let kstat: procfs::ProcFsStat = procfs::read_proc_stat()?; let kstat: procfs::ProcFsStat = procfs::read_proc_stat()?;
let disk_usage = crate::tools::disks::disk_usage(Path::new("/"))?; let disk_usage = crate::tools::disks::disk_usage(Path::new("/"))?;
// get fingerprint
let cert = CertInfo::new()?;
let fp = cert.fingerprint()?;
Ok(json!({ Ok(json!({
"memory": { "memory": {
"total": meminfo.memtotal, "total": meminfo.memtotal,
@ -74,7 +89,10 @@ fn get_usage(
"total": disk_usage.total, "total": disk_usage.total,
"used": disk_usage.used, "used": disk_usage.used,
"free": disk_usage.avail, "free": disk_usage.avail,
} },
"info": {
"fingerprint": fp,
},
})) }))
} }
@ -122,5 +140,5 @@ fn reboot_or_shutdown(command: NodePowerCommand) -> Result<(), Error> {
} }
pub const ROUTER: Router = Router::new() pub const ROUTER: Router = Router::new()
.get(&API_METHOD_GET_USAGE) .get(&API_METHOD_GET_STATUS)
.post(&API_METHOD_REBOOT_OR_SHUTDOWN); .post(&API_METHOD_REBOOT_OR_SHUTDOWN);

View File

@ -5,8 +5,16 @@ use proxmox::api::{api, Router, Permission};
use crate::tools; use crate::tools;
use crate::config::acl::PRIV_SYS_AUDIT; use crate::config::acl::PRIV_SYS_AUDIT;
use crate::api2::types::NODE_SCHEMA;
#[api( #[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
},
},
returns: { returns: {
description: "Subscription status.", description: "Subscription status.",
properties: { properties: {

View File

@ -161,6 +161,8 @@ fn datastore_status(
if b != 0.0 { if b != 0.0 {
let estimate = (1.0 - a) / b; let estimate = (1.0 - a) / b;
entry["estimated-full-date"] = Value::from(estimate.floor() as u64); entry["estimated-full-date"] = Value::from(estimate.floor() as u64);
} else {
entry["estimated-full-date"] = Value::from(0);
} }
} }
} }

View File

@ -5,6 +5,8 @@ use proxmox::api::{api, schema::*};
use proxmox::const_regex; use proxmox::const_regex;
use proxmox::{IPRE, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32}; use proxmox::{IPRE, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};
use crate::backup::CryptMode;
// File names: may not contain slashes, may not start with "." // File names: may not contain slashes, may not start with "."
pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| { pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {
if name.starts_with('.') { if name.starts_with('.') {
@ -76,6 +78,8 @@ const_regex!{
pub ACL_PATH_REGEX = concat!(r"^(?:/|", r"(?:/", PROXMOX_SAFE_ID_REGEX_STR!(), ")+", r")$"); pub ACL_PATH_REGEX = concat!(r"^(?:/|", r"(?:/", PROXMOX_SAFE_ID_REGEX_STR!(), ")+", r")$");
pub BLOCKDEVICE_NAME_REGEX = r"^(:?(:?h|s|x?v)d[a-z]+)|(:?nvme\d+n\d+)$"; pub BLOCKDEVICE_NAME_REGEX = r"^(:?(:?h|s|x?v)d[a-z]+)|(:?nvme\d+n\d+)$";
pub ZPOOL_NAME_REGEX = r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$";
} }
pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat = pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat =
@ -496,6 +500,10 @@ pub const PRUNE_SCHEMA_KEEP_YEARLY: Schema = IntegerSchema::new(
"filename": { "filename": {
schema: BACKUP_ARCHIVE_NAME_SCHEMA, schema: BACKUP_ARCHIVE_NAME_SCHEMA,
}, },
"crypt-mode": {
type: CryptMode,
optional: true,
},
}, },
)] )]
#[derive(Serialize, Deserialize)] #[derive(Serialize, Deserialize)]
@ -503,9 +511,9 @@ pub const PRUNE_SCHEMA_KEEP_YEARLY: Schema = IntegerSchema::new(
/// Basic information about archive files inside a backup snapshot. /// Basic information about archive files inside a backup snapshot.
pub struct BackupContent { pub struct BackupContent {
pub filename: String, pub filename: String,
/// Info if file is encrypted (or empty if we do not have that info) /// Info if file is encrypted, signed, or neither.
#[serde(skip_serializing_if="Option::is_none")] #[serde(skip_serializing_if="Option::is_none")]
pub encrypted: Option<bool>, pub crypt_mode: Option<CryptMode>,
/// Archive size (from backup manifest). /// Archive size (from backup manifest).
#[serde(skip_serializing_if="Option::is_none")] #[serde(skip_serializing_if="Option::is_none")]
pub size: Option<u64>, pub size: Option<u64>,
@ -954,3 +962,30 @@ pub enum RRDTimeFrameResolution {
/// 1 week => last 490 days /// 1 week => last 490 days
Year = 60*10080, Year = 60*10080,
} }
#[api()]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "PascalCase")]
/// Describes a package for which an update is available.
pub struct APTUpdateInfo {
/// Package name
pub package: String,
/// Package title
pub title: String,
/// Package architecture
pub arch: String,
/// Human readable package description
pub description: String,
/// New version to be updated to
pub version: String,
/// Old version currently installed
pub old_version: String,
/// Package origin
pub origin: String,
/// Package priority in human-readable form
pub priority: String,
/// Package section
pub section: String,
/// URL under which the package's changelog can be retrieved
pub change_log_url: String,
}

View File

@ -40,21 +40,21 @@
//! //!
//! Acquire shared lock for ChunkStore (process wide). //! Acquire shared lock for ChunkStore (process wide).
//! //!
//! Note: When creating .idx files, we create temporary (.tmp) file, //! Note: When creating .idx files, we create a temporary (.tmp) file,
//! then do an atomic rename ... //! then do an atomic rename ...
//! //!
//! //!
//! * Garbage Collect: //! * Garbage Collect:
//! //!
//! Acquire exclusive lock for ChunkStore (process wide). If we have //! Acquire exclusive lock for ChunkStore (process wide). If we have
//! already an shared lock for ChunkStore, try to updraged that //! already a shared lock for the ChunkStore, try to upgrade that
//! lock. //! lock.
//! //!
//! //!
//! * Server Restart //! * Server Restart
//! //!
//! Try to abort running garbage collection to release exclusive //! Try to abort the running garbage collection to release exclusive
//! ChunkStore lock asap. Start new service with existing listening //! ChunkStore locks ASAP. Start the new service with the existing listening
//! socket. //! socket.
//! //!
//! //!
@ -62,10 +62,10 @@
//! //!
//! Deleting backups is as easy as deleting the corresponding .idx //! Deleting backups is as easy as deleting the corresponding .idx
//! files. Unfortunately, this does not free up any storage, because //! files. Unfortunately, this does not free up any storage, because
//! those files just contains references to chunks. //! those files just contain references to chunks.
//! //!
//! To free up some storage, we run a garbage collection process at //! To free up some storage, we run a garbage collection process at
//! regular intervals. The collector uses an mark and sweep //! regular intervals. The collector uses a mark and sweep
//! approach. In the first phase, it scans all .idx files to mark used //! approach. In the first phase, it scans all .idx files to mark used
//! chunks. The second phase then removes all unmarked chunks from the //! chunks. The second phase then removes all unmarked chunks from the
//! store. //! store.
@ -90,12 +90,12 @@
//! amount of time ago (by default 24h). So we may only delete chunks //! amount of time ago (by default 24h). So we may only delete chunks
//! with `atime` older than 24 hours. //! with `atime` older than 24 hours.
//! //!
//! Another problem arise from running backups. The mark phase does //! Another problem arises from running backups. The mark phase does
//! not find any chunks from those backups, because there is no .idx //! not find any chunks from those backups, because there is no .idx
//! file for them (created after the backup). Chunks created or //! file for them (created after the backup). Chunks created or
//! touched by those backups may have an `atime` as old as the start //! touched by those backups may have an `atime` as old as the start
//! time of those backup. Please not that the backup start time may //! time of those backups. Please note that the backup start time may
//! predate the GC start time. Se we may only delete chunk older than //! predate the GC start time. So we may only delete chunks older than
//! the start time of those running backup jobs. //! the start time of those running backup jobs.
//! //!
//! //!

View File

@ -1,30 +1,35 @@
use std::future::Future; use std::future::Future;
use std::task::{Poll, Context}; use std::task::{Poll, Context};
use std::pin::Pin; use std::pin::Pin;
use std::io::SeekFrom;
use anyhow::Error; use anyhow::Error;
use futures::future::FutureExt; use futures::future::FutureExt;
use futures::ready; use futures::ready;
use tokio::io::AsyncRead; use tokio::io::{AsyncRead, AsyncSeek};
use proxmox::sys::error::io_err_other; use proxmox::sys::error::io_err_other;
use proxmox::io_format_err; use proxmox::io_format_err;
use super::IndexFile; use super::IndexFile;
use super::read_chunk::AsyncReadChunk; use super::read_chunk::AsyncReadChunk;
use super::index::ChunkReadInfo;
enum AsyncIndexReaderState<S> { enum AsyncIndexReaderState<S> {
NoData, NoData,
WaitForData(Pin<Box<dyn Future<Output = Result<(S, Vec<u8>), Error>> + Send + 'static>>), WaitForData(Pin<Box<dyn Future<Output = Result<(S, Vec<u8>), Error>> + Send + 'static>>),
HaveData(usize), HaveData,
} }
pub struct AsyncIndexReader<S, I: IndexFile> { pub struct AsyncIndexReader<S, I: IndexFile> {
store: Option<S>, store: Option<S>,
index: I, index: I,
read_buffer: Vec<u8>, read_buffer: Vec<u8>,
current_chunk_offset: u64,
current_chunk_idx: usize, current_chunk_idx: usize,
current_chunk_digest: [u8; 32], current_chunk_info: Option<ChunkReadInfo>,
position: u64,
seek_to_pos: i64,
state: AsyncIndexReaderState<S>, state: AsyncIndexReaderState<S>,
} }
@ -36,17 +41,21 @@ impl<S: AsyncReadChunk, I: IndexFile> AsyncIndexReader<S, I> {
Self { Self {
store: Some(store), store: Some(store),
index, index,
read_buffer: Vec::with_capacity(1024*1024), read_buffer: Vec::with_capacity(1024 * 1024),
current_chunk_offset: 0,
current_chunk_idx: 0, current_chunk_idx: 0,
current_chunk_digest: [0u8; 32], current_chunk_info: None,
position: 0,
seek_to_pos: 0,
state: AsyncIndexReaderState::NoData, state: AsyncIndexReaderState::NoData,
} }
} }
} }
impl<S, I> AsyncRead for AsyncIndexReader<S, I> where impl<S, I> AsyncRead for AsyncIndexReader<S, I>
S: AsyncReadChunk + Unpin + 'static, where
I: IndexFile + Unpin S: AsyncReadChunk + Unpin + Sync + 'static,
I: IndexFile + Unpin,
{ {
fn poll_read( fn poll_read(
self: Pin<&mut Self>, self: Pin<&mut Self>,
@ -57,53 +66,71 @@ I: IndexFile + Unpin
loop { loop {
match &mut this.state { match &mut this.state {
AsyncIndexReaderState::NoData => { AsyncIndexReaderState::NoData => {
if this.current_chunk_idx >= this.index.index_count() { let (idx, offset) = if this.current_chunk_info.is_some() &&
this.position == this.current_chunk_info.as_ref().unwrap().range.end
{
// optimization for sequential chunk read
let next_idx = this.current_chunk_idx + 1;
(next_idx, 0)
} else {
match this.index.chunk_from_offset(this.position) {
Some(res) => res,
None => return Poll::Ready(Ok(0))
}
};
if idx >= this.index.index_count() {
return Poll::Ready(Ok(0)); return Poll::Ready(Ok(0));
} }
let digest = this let info = this
.index .index
.index_digest(this.current_chunk_idx) .chunk_info(idx)
.ok_or(io_format_err!("could not get digest"))? .ok_or(io_format_err!("could not get digest"))?;
.clone();
if digest == this.current_chunk_digest { this.current_chunk_offset = offset;
this.state = AsyncIndexReaderState::HaveData(0); this.current_chunk_idx = idx;
continue; let old_info = this.current_chunk_info.replace(info.clone());
if let Some(old_info) = old_info {
if old_info.digest == info.digest {
// hit, chunk is currently in cache
this.state = AsyncIndexReaderState::HaveData;
continue;
}
} }
this.current_chunk_digest = digest; // miss, need to download new chunk
let store = match this.store.take() {
let mut store = match this.store.take() {
Some(store) => store, Some(store) => store,
None => { None => {
return Poll::Ready(Err(io_format_err!("could not find store"))); return Poll::Ready(Err(io_format_err!("could not find store")));
}, }
}; };
let future = async move { let future = async move {
store.read_chunk(&digest) store.read_chunk(&info.digest)
.await .await
.map(move |x| (store, x)) .map(move |x| (store, x))
}; };
this.state = AsyncIndexReaderState::WaitForData(future.boxed()); this.state = AsyncIndexReaderState::WaitForData(future.boxed());
}, }
AsyncIndexReaderState::WaitForData(ref mut future) => { AsyncIndexReaderState::WaitForData(ref mut future) => {
match ready!(future.as_mut().poll(cx)) { match ready!(future.as_mut().poll(cx)) {
Ok((store, mut chunk_data)) => { Ok((store, mut chunk_data)) => {
this.read_buffer.clear(); this.read_buffer.clear();
this.read_buffer.append(&mut chunk_data); this.read_buffer.append(&mut chunk_data);
this.state = AsyncIndexReaderState::HaveData(0); this.state = AsyncIndexReaderState::HaveData;
this.store = Some(store); this.store = Some(store);
}, }
Err(err) => { Err(err) => {
return Poll::Ready(Err(io_err_other(err))); return Poll::Ready(Err(io_err_other(err)));
}, }
}; };
}, }
AsyncIndexReaderState::HaveData(offset) => { AsyncIndexReaderState::HaveData => {
let offset = *offset; let offset = this.current_chunk_offset as usize;
let len = this.read_buffer.len(); let len = this.read_buffer.len();
let n = if len - offset < buf.len() { let n = if len - offset < buf.len() {
len - offset len - offset
@ -111,17 +138,67 @@ I: IndexFile + Unpin
buf.len() buf.len()
}; };
buf[0..n].copy_from_slice(&this.read_buffer[offset..offset+n]); buf[0..n].copy_from_slice(&this.read_buffer[offset..(offset + n)]);
this.position += n as u64;
if offset + n == len { if offset + n == len {
this.state = AsyncIndexReaderState::NoData; this.state = AsyncIndexReaderState::NoData;
this.current_chunk_idx += 1;
} else { } else {
this.state = AsyncIndexReaderState::HaveData(offset + n); this.current_chunk_offset += n as u64;
this.state = AsyncIndexReaderState::HaveData;
} }
return Poll::Ready(Ok(n)); return Poll::Ready(Ok(n));
}, }
} }
} }
} }
} }
impl<S, I> AsyncSeek for AsyncIndexReader<S, I>
where
S: AsyncReadChunk + Unpin + Sync + 'static,
I: IndexFile + Unpin,
{
fn start_seek(
self: Pin<&mut Self>,
_cx: &mut Context<'_>,
pos: SeekFrom,
) -> Poll<tokio::io::Result<()>> {
let this = Pin::get_mut(self);
this.seek_to_pos = match pos {
SeekFrom::Start(offset) => {
offset as i64
},
SeekFrom::End(offset) => {
this.index.index_bytes() as i64 + offset
},
SeekFrom::Current(offset) => {
this.position as i64 + offset
}
};
Poll::Ready(Ok(()))
}
fn poll_complete(
self: Pin<&mut Self>,
_cx: &mut Context<'_>,
) -> Poll<tokio::io::Result<u64>> {
let this = Pin::get_mut(self);
let index_bytes = this.index.index_bytes();
if this.seek_to_pos < 0 {
return Poll::Ready(Err(io_format_err!("cannot seek to negative values")));
} else if this.seek_to_pos > index_bytes as i64 {
this.position = index_bytes;
} else {
this.position = this.seek_to_pos as u64;
}
// even if seeking within one chunk, we need to go to NoData to
// recalculate the current_chunk_offset (data is cached anyway)
this.state = AsyncIndexReaderState::NoData;
Poll::Ready(Ok(this.position))
}
}

View File

@ -106,7 +106,11 @@ impl BackupGroup {
use nix::fcntl::{openat, OFlag}; use nix::fcntl::{openat, OFlag};
match openat(l2_fd, &manifest_path, OFlag::O_RDONLY, nix::sys::stat::Mode::empty()) { match openat(l2_fd, &manifest_path, OFlag::O_RDONLY, nix::sys::stat::Mode::empty()) {
Ok(_) => { /* manifest exists --> assume backup was successful */ }, Ok(rawfd) => {
/* manifest exists --> assume backup was successful */
/* close else this leaks! */
nix::unistd::close(rawfd)?;
},
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => { return Ok(()); } Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => { return Ok(()); }
Err(err) => { Err(err) => {
bail!("last_successful_backup: unexpected error - {}", err); bail!("last_successful_backup: unexpected error - {}", err);

View File

@ -89,6 +89,10 @@ pub fn catalog_shell_cli() -> CommandLineInterface {
"find", "find",
CliCommand::new(&API_METHOD_FIND_COMMAND).arg_param(&["pattern"]), CliCommand::new(&API_METHOD_FIND_COMMAND).arg_param(&["pattern"]),
) )
.insert(
"exit",
CliCommand::new(&API_METHOD_EXIT),
)
.insert_help(), .insert_help(),
) )
} }
@ -104,6 +108,14 @@ fn complete_path(complete_me: &str, _map: &HashMap<String, String>) -> Vec<Strin
} }
} }
// just an empty wrapper so that it is displayed in help/docs; we check
// for 'exit' again in the read loop and break there
#[api(input: { properties: {} })]
/// Exit the shell
async fn exit() -> Result<(), Error> {
Ok(())
}
#[api(input: { properties: {} })] #[api(input: { properties: {} })]
/// List the current working directory. /// List the current working directory.
async fn pwd_command() -> Result<(), Error> { async fn pwd_command() -> Result<(), Error> {
@ -439,6 +451,9 @@ impl Shell {
SHELL = Some(this as *mut Shell as usize); SHELL = Some(this as *mut Shell as usize);
} }
while let Ok(line) = this.rl.readline(&this.prompt) { while let Ok(line) = this.rl.readline(&this.prompt) {
if line == "exit" {
break;
}
let helper = this.rl.helper().unwrap(); let helper = this.rl.helper().unwrap();
let args = match cli::shellword_split(&line) { let args = match cli::shellword_split(&line) {
Ok(args) => args, Ok(args) => args,

View File

@ -80,8 +80,9 @@ impl ChunkStore {
let default_options = CreateOptions::new(); let default_options = CreateOptions::new();
if let Err(err) = create_path(&base, Some(default_options.clone()), Some(options.clone())) { match create_path(&base, Some(default_options.clone()), Some(options.clone())) {
bail!("unable to create chunk store '{}' at {:?} - {}", name, base, err); Err(err) => bail!("unable to create chunk store '{}' at {:?} - {}", name, base, err),
Ok(res) => if ! res { nix::unistd::chown(&base, Some(uid), Some(gid))? },
} }
if let Err(err) = create_dir(&chunk_dir, options.clone()) { if let Err(err) = create_dir(&chunk_dir, options.clone()) {
@ -177,7 +178,7 @@ impl ChunkStore {
return Ok(false); return Ok(false);
} }
bail!("updata atime failed for chunk {:?} - {}", chunk_path, err); bail!("update atime failed for chunk {:?} - {}", chunk_path, err);
} }
Ok(true) Ok(true)

View File

@ -5,15 +5,15 @@
/// use hash value 0 to detect a boundary. /// use hash value 0 to detect a boundary.
const CA_CHUNKER_WINDOW_SIZE: usize = 64; const CA_CHUNKER_WINDOW_SIZE: usize = 64;
/// Slinding window chunker (Buzhash) /// Sliding window chunker (Buzhash)
/// ///
/// This is a rewrite of *casync* chunker (cachunker.h) in rust. /// This is a rewrite of *casync* chunker (cachunker.h) in rust.
/// ///
/// Hashing by cyclic polynomial (also called Buzhash) has the benefit /// Hashing by cyclic polynomial (also called Buzhash) has the benefit
/// of avoiding multiplications, using barrel shifts instead. For more /// of avoiding multiplications, using barrel shifts instead. For more
/// information please take a look at the [Rolling /// information please take a look at the [Rolling
/// Hash](https://en.wikipedia.org/wiki/Rolling_hash) artikel from /// Hash](https://en.wikipedia.org/wiki/Rolling_hash) article from
/// wikipedia. /// Wikipedia.
pub struct Chunker { pub struct Chunker {
h: u32, h: u32,

View File

@ -6,12 +6,30 @@
//! See the Wikipedia article for [Authenticated //! See the Wikipedia article for [Authenticated
//! encryption](https://en.wikipedia.org/wiki/Authenticated_encryption) //! encryption](https://en.wikipedia.org/wiki/Authenticated_encryption)
//! for a short introduction. //! for a short introduction.
use anyhow::{bail, Error};
use openssl::pkcs5::pbkdf2_hmac;
use openssl::hash::MessageDigest;
use openssl::symm::{decrypt_aead, Cipher, Crypter, Mode};
use std::io::Write; use std::io::Write;
use anyhow::{bail, Error};
use chrono::{Local, TimeZone, DateTime}; use chrono::{Local, TimeZone, DateTime};
use openssl::hash::MessageDigest;
use openssl::pkcs5::pbkdf2_hmac;
use openssl::symm::{decrypt_aead, Cipher, Crypter, Mode};
use serde::{Deserialize, Serialize};
use proxmox::api::api;
#[api(default: "encrypt")]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
/// Defines whether data is encrypted (using an AEAD cipher), only signed, or neither.
pub enum CryptMode {
/// Don't encrypt.
None,
/// Encrypt.
Encrypt,
/// Only sign.
SignOnly,
}
/// Encryption Configuration with secret key /// Encryption Configuration with secret key
/// ///
@ -26,7 +44,6 @@ pub struct CryptConfig {
id_pkey: openssl::pkey::PKey<openssl::pkey::Private>, id_pkey: openssl::pkey::PKey<openssl::pkey::Private>,
// The private key used by the cipher. // The private key used by the cipher.
enc_key: [u8; 32], enc_key: [u8; 32],
} }
impl CryptConfig { impl CryptConfig {
@ -63,10 +80,9 @@ impl CryptConfig {
/// chunk digest values do not clash with values computed for /// chunk digest values do not clash with values computed for
/// other secret keys. /// other secret keys.
pub fn compute_digest(&self, data: &[u8]) -> [u8; 32] { pub fn compute_digest(&self, data: &[u8]) -> [u8; 32] {
// FIXME: use HMAC-SHA256 instead??
let mut hasher = openssl::sha::Sha256::new(); let mut hasher = openssl::sha::Sha256::new();
hasher.update(&self.id_key);
hasher.update(data); hasher.update(data);
hasher.update(&self.id_key); // at the end, to avoid length extensions attacks
hasher.finish() hasher.finish()
} }
@ -203,7 +219,7 @@ impl CryptConfig {
created: DateTime<Local>, created: DateTime<Local>,
) -> Result<Vec<u8>, Error> { ) -> Result<Vec<u8>, Error> {
let modified = Local.timestamp(Local::now().timestamp(), 0); let modified = Local.timestamp(Local::now().timestamp(), 0);
let key_config = super::KeyConfig { kdf: None, created, modified, data: self.enc_key.to_vec() }; let key_config = super::KeyConfig { kdf: None, created, modified, data: self.enc_key.to_vec() };
let data = serde_json::to_string(&key_config)?.as_bytes().to_vec(); let data = serde_json::to_string(&key_config)?.as_bytes().to_vec();

View File

@ -3,10 +3,10 @@ use std::convert::TryInto;
use proxmox::tools::io::{ReadExt, WriteExt}; use proxmox::tools::io::{ReadExt, WriteExt};
const MAX_BLOB_SIZE: usize = 128*1024*1024;
use super::file_formats::*; use super::file_formats::*;
use super::CryptConfig; use super::{CryptConfig, CryptMode};
const MAX_BLOB_SIZE: usize = 128*1024*1024;
/// Encoded data chunk with digest and positional information /// Encoded data chunk with digest and positional information
pub struct ChunkInfo { pub struct ChunkInfo {
@ -166,6 +166,19 @@ impl DataBlob {
Ok(blob) Ok(blob)
} }
/// Get the encryption mode for this blob.
pub fn crypt_mode(&self) -> Result<CryptMode, Error> {
let magic = self.magic();
Ok(if magic == &UNCOMPRESSED_BLOB_MAGIC_1_0 || magic == &COMPRESSED_BLOB_MAGIC_1_0 {
CryptMode::None
} else if magic == &ENCR_COMPR_BLOB_MAGIC_1_0 || magic == &ENCRYPTED_BLOB_MAGIC_1_0 {
CryptMode::Encrypt
} else {
bail!("Invalid blob magic number.");
})
}
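A caller can use the new accessor to decide up front whether a key is needed before decoding; the following sketch assumes the crate's DataBlob, CryptMode and CryptConfig types are in scope and only shows the dispatch (an encrypted blob without a key becomes an explicit error, just as decode() would report):

use anyhow::{bail, Error};

// Sketch only; DataBlob, CryptMode and CryptConfig as used in the hunks above.
fn decode_checked(blob: &DataBlob, config: Option<&CryptConfig>) -> Result<Vec<u8>, Error> {
    match blob.crypt_mode()? {
        CryptMode::None => blob.decode(None),
        CryptMode::Encrypt => match config {
            Some(config) => blob.decode(Some(config)),
            None => bail!("blob is encrypted, but no key was supplied"),
        },
        // crypt_mode() never returns SignOnly for on-disk blobs after this change set,
        // but the match still has to be exhaustive.
        CryptMode::SignOnly => bail!("unexpected sign-only blob"),
    }
}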
/// Decode blob data /// Decode blob data
pub fn decode(&self, config: Option<&CryptConfig>) -> Result<Vec<u8>, Error> { pub fn decode(&self, config: Option<&CryptConfig>) -> Result<Vec<u8>, Error> {
@ -194,75 +207,11 @@ impl DataBlob {
} else { } else {
bail!("unable to decrypt blob - missing CryptConfig"); bail!("unable to decrypt blob - missing CryptConfig");
} }
} else if magic == &AUTH_COMPR_BLOB_MAGIC_1_0 || magic == &AUTHENTICATED_BLOB_MAGIC_1_0 {
let header_len = std::mem::size_of::<AuthenticatedDataBlobHeader>();
let head = unsafe {
(&self.raw_data[..header_len]).read_le_value::<AuthenticatedDataBlobHeader>()?
};
let data_start = std::mem::size_of::<AuthenticatedDataBlobHeader>();
// Note: only verify if we have a crypt config
if let Some(config) = config {
let signature = config.compute_auth_tag(&self.raw_data[data_start..]);
if signature != head.tag {
bail!("verifying blob signature failed");
}
}
if magic == &AUTH_COMPR_BLOB_MAGIC_1_0 {
let data = zstd::block::decompress(&self.raw_data[data_start..], 16*1024*1024)?;
Ok(data)
} else {
Ok(self.raw_data[data_start..].to_vec())
}
} else { } else {
bail!("Invalid blob magic number."); bail!("Invalid blob magic number.");
} }
} }
/// Create a signed DataBlob, optionally compressed
pub fn create_signed(
data: &[u8],
config: &CryptConfig,
compress: bool,
) -> Result<Self, Error> {
if data.len() > MAX_BLOB_SIZE {
bail!("data blob too large ({} bytes).", data.len());
}
let compr_data;
let (_compress, data, magic) = if compress {
compr_data = zstd::block::compress(data, 1)?;
// Note: We only use compression if result is shorter
if compr_data.len() < data.len() {
(true, &compr_data[..], AUTH_COMPR_BLOB_MAGIC_1_0)
} else {
(false, data, AUTHENTICATED_BLOB_MAGIC_1_0)
}
} else {
(false, data, AUTHENTICATED_BLOB_MAGIC_1_0)
};
let header_len = std::mem::size_of::<AuthenticatedDataBlobHeader>();
let mut raw_data = Vec::with_capacity(data.len() + header_len);
let head = AuthenticatedDataBlobHeader {
head: DataBlobHeader { magic, crc: [0; 4] },
tag: config.compute_auth_tag(data),
};
unsafe {
raw_data.write_le_value(head)?;
}
raw_data.extend_from_slice(data);
let mut blob = DataBlob { raw_data };
blob.set_crc(blob.compute_crc());
Ok(blob)
}
/// Load blob from ``reader`` /// Load blob from ``reader``
pub fn load(reader: &mut dyn std::io::Read) -> Result<Self, Error> { pub fn load(reader: &mut dyn std::io::Read) -> Result<Self, Error> {
@ -294,14 +243,6 @@ impl DataBlob {
let blob = DataBlob { raw_data: data }; let blob = DataBlob { raw_data: data };
Ok(blob)
} else if magic == AUTH_COMPR_BLOB_MAGIC_1_0 || magic == AUTHENTICATED_BLOB_MAGIC_1_0 {
if data.len() < std::mem::size_of::<AuthenticatedDataBlobHeader>() {
bail!("authenticated blob too small ({} bytes).", data.len());
}
let blob = DataBlob { raw_data: data };
Ok(blob) Ok(blob)
} else { } else {
bail!("unable to parse raw blob - wrong magic"); bail!("unable to parse raw blob - wrong magic");
@ -376,7 +317,7 @@ impl <'a, 'b> DataChunkBuilder<'a, 'b> {
/// Set encryption Configuration /// Set encryption Configuration
/// ///
/// If set, chunks are encrypted. /// If set, chunks are encrypted
pub fn crypt_config(mut self, value: &'b CryptConfig) -> Self { pub fn crypt_config(mut self, value: &'b CryptConfig) -> Self {
if self.digest_computed { if self.digest_computed {
panic!("unable to set crypt_config after compute_digest()."); panic!("unable to set crypt_config after compute_digest().");
@ -415,12 +356,7 @@ impl <'a, 'b> DataChunkBuilder<'a, 'b> {
self.compute_digest(); self.compute_digest();
} }
let chunk = DataBlob::encode( let chunk = DataBlob::encode(self.orig_data, self.config, self.compress)?;
self.orig_data,
self.config,
self.compress,
)?;
Ok((chunk, self.digest)) Ok((chunk, self.digest))
} }

View File

@ -1,4 +1,4 @@
use anyhow::{bail, Error}; use anyhow::{bail, format_err, Error};
use std::sync::Arc; use std::sync::Arc;
use std::io::{Read, BufReader}; use std::io::{Read, BufReader};
use proxmox::tools::io::ReadExt; use proxmox::tools::io::ReadExt;
@ -8,8 +8,6 @@ use super::*;
enum BlobReaderState<R: Read> { enum BlobReaderState<R: Read> {
Uncompressed { expected_crc: u32, csum_reader: ChecksumReader<R> }, Uncompressed { expected_crc: u32, csum_reader: ChecksumReader<R> },
Compressed { expected_crc: u32, decompr: zstd::stream::read::Decoder<BufReader<ChecksumReader<R>>> }, Compressed { expected_crc: u32, decompr: zstd::stream::read::Decoder<BufReader<ChecksumReader<R>>> },
Signed { expected_crc: u32, expected_hmac: [u8; 32], csum_reader: ChecksumReader<R> },
SignedCompressed { expected_crc: u32, expected_hmac: [u8; 32], decompr: zstd::stream::read::Decoder<BufReader<ChecksumReader<R>>> },
Encrypted { expected_crc: u32, decrypt_reader: CryptReader<BufReader<ChecksumReader<R>>> }, Encrypted { expected_crc: u32, decrypt_reader: CryptReader<BufReader<ChecksumReader<R>>> },
EncryptedCompressed { expected_crc: u32, decompr: zstd::stream::read::Decoder<BufReader<CryptReader<BufReader<ChecksumReader<R>>>>> }, EncryptedCompressed { expected_crc: u32, decompr: zstd::stream::read::Decoder<BufReader<CryptReader<BufReader<ChecksumReader<R>>>>> },
} }
@ -41,40 +39,26 @@ impl <R: Read> DataBlobReader<R> {
let decompr = zstd::stream::read::Decoder::new(csum_reader)?; let decompr = zstd::stream::read::Decoder::new(csum_reader)?;
Ok(Self { state: BlobReaderState::Compressed { expected_crc, decompr }}) Ok(Self { state: BlobReaderState::Compressed { expected_crc, decompr }})
} }
AUTHENTICATED_BLOB_MAGIC_1_0 => {
let expected_crc = u32::from_le_bytes(head.crc);
let mut expected_hmac = [0u8; 32];
reader.read_exact(&mut expected_hmac)?;
let csum_reader = ChecksumReader::new(reader, config);
Ok(Self { state: BlobReaderState::Signed { expected_crc, expected_hmac, csum_reader }})
}
AUTH_COMPR_BLOB_MAGIC_1_0 => {
let expected_crc = u32::from_le_bytes(head.crc);
let mut expected_hmac = [0u8; 32];
reader.read_exact(&mut expected_hmac)?;
let csum_reader = ChecksumReader::new(reader, config);
let decompr = zstd::stream::read::Decoder::new(csum_reader)?;
Ok(Self { state: BlobReaderState::SignedCompressed { expected_crc, expected_hmac, decompr }})
}
ENCRYPTED_BLOB_MAGIC_1_0 => { ENCRYPTED_BLOB_MAGIC_1_0 => {
let config = config.ok_or_else(|| format_err!("unable to read encrypted blob without key"))?;
let expected_crc = u32::from_le_bytes(head.crc); let expected_crc = u32::from_le_bytes(head.crc);
let mut iv = [0u8; 16]; let mut iv = [0u8; 16];
let mut expected_tag = [0u8; 16]; let mut expected_tag = [0u8; 16];
reader.read_exact(&mut iv)?; reader.read_exact(&mut iv)?;
reader.read_exact(&mut expected_tag)?; reader.read_exact(&mut expected_tag)?;
let csum_reader = ChecksumReader::new(reader, None); let csum_reader = ChecksumReader::new(reader, None);
let decrypt_reader = CryptReader::new(BufReader::with_capacity(64*1024, csum_reader), iv, expected_tag, config.unwrap())?; let decrypt_reader = CryptReader::new(BufReader::with_capacity(64*1024, csum_reader), iv, expected_tag, config)?;
Ok(Self { state: BlobReaderState::Encrypted { expected_crc, decrypt_reader }}) Ok(Self { state: BlobReaderState::Encrypted { expected_crc, decrypt_reader }})
} }
ENCR_COMPR_BLOB_MAGIC_1_0 => { ENCR_COMPR_BLOB_MAGIC_1_0 => {
let config = config.ok_or_else(|| format_err!("unable to read encrypted blob without key"))?;
let expected_crc = u32::from_le_bytes(head.crc); let expected_crc = u32::from_le_bytes(head.crc);
let mut iv = [0u8; 16]; let mut iv = [0u8; 16];
let mut expected_tag = [0u8; 16]; let mut expected_tag = [0u8; 16];
reader.read_exact(&mut iv)?; reader.read_exact(&mut iv)?;
reader.read_exact(&mut expected_tag)?; reader.read_exact(&mut expected_tag)?;
let csum_reader = ChecksumReader::new(reader, None); let csum_reader = ChecksumReader::new(reader, None);
let decrypt_reader = CryptReader::new(BufReader::with_capacity(64*1024, csum_reader), iv, expected_tag, config.unwrap())?; let decrypt_reader = CryptReader::new(BufReader::with_capacity(64*1024, csum_reader), iv, expected_tag, config)?;
let decompr = zstd::stream::read::Decoder::new(decrypt_reader)?; let decompr = zstd::stream::read::Decoder::new(decrypt_reader)?;
Ok(Self { state: BlobReaderState::EncryptedCompressed { expected_crc, decompr }}) Ok(Self { state: BlobReaderState::EncryptedCompressed { expected_crc, decompr }})
} }
@ -99,31 +83,6 @@ impl <R: Read> DataBlobReader<R> {
} }
Ok(reader) Ok(reader)
} }
BlobReaderState::Signed { csum_reader, expected_crc, expected_hmac } => {
let (reader, crc, hmac) = csum_reader.finish()?;
if crc != expected_crc {
bail!("blob crc check failed");
}
if let Some(hmac) = hmac {
if hmac != expected_hmac {
bail!("blob signature check failed");
}
}
Ok(reader)
}
BlobReaderState::SignedCompressed { expected_crc, expected_hmac, decompr } => {
let csum_reader = decompr.finish().into_inner();
let (reader, crc, hmac) = csum_reader.finish()?;
if crc != expected_crc {
bail!("blob crc check failed");
}
if let Some(hmac) = hmac {
if hmac != expected_hmac {
bail!("blob signature check failed");
}
}
Ok(reader)
}
BlobReaderState::Encrypted { expected_crc, decrypt_reader } => { BlobReaderState::Encrypted { expected_crc, decrypt_reader } => {
let csum_reader = decrypt_reader.finish()?.into_inner(); let csum_reader = decrypt_reader.finish()?.into_inner();
let (reader, crc, _) = csum_reader.finish()?; let (reader, crc, _) = csum_reader.finish()?;
@ -155,12 +114,6 @@ impl <R: Read> Read for DataBlobReader<R> {
BlobReaderState::Compressed { decompr, .. } => { BlobReaderState::Compressed { decompr, .. } => {
decompr.read(buf) decompr.read(buf)
} }
BlobReaderState::Signed { csum_reader, .. } => {
csum_reader.read(buf)
}
BlobReaderState::SignedCompressed { decompr, .. } => {
decompr.read(buf)
}
BlobReaderState::Encrypted { decrypt_reader, .. } => { BlobReaderState::Encrypted { decrypt_reader, .. } => {
decrypt_reader.read(buf) decrypt_reader.read(buf)
} }

View File

@ -8,8 +8,6 @@ use super::*;
enum BlobWriterState<W: Write> { enum BlobWriterState<W: Write> {
Uncompressed { csum_writer: ChecksumWriter<W> }, Uncompressed { csum_writer: ChecksumWriter<W> },
Compressed { compr: zstd::stream::write::Encoder<ChecksumWriter<W>> }, Compressed { compr: zstd::stream::write::Encoder<ChecksumWriter<W>> },
Signed { csum_writer: ChecksumWriter<W> },
SignedCompressed { compr: zstd::stream::write::Encoder<ChecksumWriter<W>> },
Encrypted { crypt_writer: CryptWriter<ChecksumWriter<W>> }, Encrypted { crypt_writer: CryptWriter<ChecksumWriter<W>> },
EncryptedCompressed { compr: zstd::stream::write::Encoder<CryptWriter<ChecksumWriter<W>>> }, EncryptedCompressed { compr: zstd::stream::write::Encoder<CryptWriter<ChecksumWriter<W>>> },
} }
@ -42,33 +40,6 @@ impl <W: Write + Seek> DataBlobWriter<W> {
Ok(Self { state: BlobWriterState::Compressed { compr }}) Ok(Self { state: BlobWriterState::Compressed { compr }})
} }
pub fn new_signed(mut writer: W, config: Arc<CryptConfig>) -> Result<Self, Error> {
writer.seek(SeekFrom::Start(0))?;
let head = AuthenticatedDataBlobHeader {
head: DataBlobHeader { magic: AUTHENTICATED_BLOB_MAGIC_1_0, crc: [0; 4] },
tag: [0u8; 32],
};
unsafe {
writer.write_le_value(head)?;
}
let csum_writer = ChecksumWriter::new(writer, Some(config));
Ok(Self { state: BlobWriterState::Signed { csum_writer }})
}
pub fn new_signed_compressed(mut writer: W, config: Arc<CryptConfig>) -> Result<Self, Error> {
writer.seek(SeekFrom::Start(0))?;
let head = AuthenticatedDataBlobHeader {
head: DataBlobHeader { magic: AUTH_COMPR_BLOB_MAGIC_1_0, crc: [0; 4] },
tag: [0u8; 32],
};
unsafe {
writer.write_le_value(head)?;
}
let csum_writer = ChecksumWriter::new(writer, Some(config));
let compr = zstd::stream::write::Encoder::new(csum_writer, 1)?;
Ok(Self { state: BlobWriterState::SignedCompressed { compr }})
}
pub fn new_encrypted(mut writer: W, config: Arc<CryptConfig>) -> Result<Self, Error> { pub fn new_encrypted(mut writer: W, config: Arc<CryptConfig>) -> Result<Self, Error> {
writer.seek(SeekFrom::Start(0))?; writer.seek(SeekFrom::Start(0))?;
let head = EncryptedDataBlobHeader { let head = EncryptedDataBlobHeader {
@ -129,37 +100,6 @@ impl <W: Write + Seek> DataBlobWriter<W> {
Ok(writer) Ok(writer)
} }
BlobWriterState::Signed { csum_writer } => {
let (mut writer, crc, tag) = csum_writer.finish()?;
let head = AuthenticatedDataBlobHeader {
head: DataBlobHeader { magic: AUTHENTICATED_BLOB_MAGIC_1_0, crc: crc.to_le_bytes() },
tag: tag.unwrap(),
};
writer.seek(SeekFrom::Start(0))?;
unsafe {
writer.write_le_value(head)?;
}
Ok(writer)
}
BlobWriterState::SignedCompressed { compr } => {
let csum_writer = compr.finish()?;
let (mut writer, crc, tag) = csum_writer.finish()?;
let head = AuthenticatedDataBlobHeader {
head: DataBlobHeader { magic: AUTH_COMPR_BLOB_MAGIC_1_0, crc: crc.to_le_bytes() },
tag: tag.unwrap(),
};
writer.seek(SeekFrom::Start(0))?;
unsafe {
writer.write_le_value(head)?;
}
Ok(writer)
}
BlobWriterState::Encrypted { crypt_writer } => { BlobWriterState::Encrypted { crypt_writer } => {
let (csum_writer, iv, tag) = crypt_writer.finish()?; let (csum_writer, iv, tag) = crypt_writer.finish()?;
let (mut writer, crc, _) = csum_writer.finish()?; let (mut writer, crc, _) = csum_writer.finish()?;
@ -203,12 +143,6 @@ impl <W: Write + Seek> Write for DataBlobWriter<W> {
BlobWriterState::Compressed { ref mut compr } => { BlobWriterState::Compressed { ref mut compr } => {
compr.write(buf) compr.write(buf)
} }
BlobWriterState::Signed { ref mut csum_writer } => {
csum_writer.write(buf)
}
BlobWriterState::SignedCompressed { ref mut compr } => {
compr.write(buf)
}
BlobWriterState::Encrypted { ref mut crypt_writer } => { BlobWriterState::Encrypted { ref mut crypt_writer } => {
crypt_writer.write(buf) crypt_writer.write(buf)
} }
@ -226,13 +160,7 @@ impl <W: Write + Seek> Write for DataBlobWriter<W> {
BlobWriterState::Compressed { ref mut compr } => { BlobWriterState::Compressed { ref mut compr } => {
compr.flush() compr.flush()
} }
BlobWriterState::Signed { ref mut csum_writer } => { BlobWriterState::Encrypted { ref mut crypt_writer } => {
csum_writer.flush()
}
BlobWriterState::SignedCompressed { ref mut compr } => {
compr.flush()
}
BlobWriterState::Encrypted { ref mut crypt_writer } => {
crypt_writer.flush() crypt_writer.flush()
} }
BlobWriterState::EncryptedCompressed { ref mut compr } => { BlobWriterState::EncryptedCompressed { ref mut compr } => {

View File

@ -15,6 +15,7 @@ use super::fixed_index::{FixedIndexReader, FixedIndexWriter};
use super::manifest::{MANIFEST_BLOB_NAME, CLIENT_LOG_BLOB_NAME, BackupManifest}; use super::manifest::{MANIFEST_BLOB_NAME, CLIENT_LOG_BLOB_NAME, BackupManifest};
use super::index::*; use super::index::*;
use super::{DataBlob, ArchiveType, archive_type}; use super::{DataBlob, ArchiveType, archive_type};
use crate::backup::CryptMode;
use crate::config::datastore; use crate::config::datastore;
use crate::server::WorkerTask; use crate::server::WorkerTask;
use crate::tools; use crate::tools;
@ -143,7 +144,7 @@ impl DataStore {
self.chunk_store.base_path() self.chunk_store.base_path()
} }
/// Clenaup a backup directory /// Cleanup a backup directory
/// ///
/// Removes all files not mentioned in the manifest. /// Removes all files not mentioned in the manifest.
pub fn cleanup_backup_dir(&self, backup_dir: &BackupDir, manifest: &BackupManifest pub fn cleanup_backup_dir(&self, backup_dir: &BackupDir, manifest: &BackupManifest
@ -339,9 +340,30 @@ impl DataStore {
.map(|s| s.starts_with(".")) .map(|s| s.starts_with("."))
.unwrap_or(false) .unwrap_or(false)
} }
let handle_entry_err = |err: walkdir::Error| {
if let Some(inner) = err.io_error() {
let path = err.path().unwrap_or(Path::new(""));
match inner.kind() {
io::ErrorKind::PermissionDenied => {
// only allow skipping the ext4 fsck directory; abort GC if, for example,
// a user got file permissions wrong when rsyncing the datastore to a new server
if err.depth() > 1 || !path.ends_with("lost+found") {
bail!("cannot continue garbage-collection safely, permission denied on: {}", path.display())
}
},
_ => bail!("unexpected error on datastore traversal: {} - {}", inner, path.display()),
}
}
Ok(())
};
for entry in walker.filter_entry(|e| !is_hidden(e)) { for entry in walker.filter_entry(|e| !is_hidden(e)) {
let path = entry?.into_path(); let path = match entry {
Ok(entry) => entry.into_path(),
Err(err) => {
handle_entry_err(err)?;
continue
},
};
if let Ok(archive_type) = archive_type(&path) { if let Ok(archive_type) = archive_type(&path) {
if archive_type == ArchiveType::FixedIndex || archive_type == ArchiveType::DynamicIndex { if archive_type == ArchiveType::FixedIndex || archive_type == ArchiveType::DynamicIndex {
list.push(path); list.push(path);
@ -494,9 +516,13 @@ impl DataStore {
Ok((blob, raw_size)) Ok((blob, raw_size))
} }
pub fn load_manifest(&self, backup_dir: &BackupDir) -> Result<(BackupManifest, u64), Error> { pub fn load_manifest(
&self,
backup_dir: &BackupDir,
) -> Result<(BackupManifest, CryptMode, u64), Error> {
let (blob, raw_size) = self.load_blob(backup_dir, MANIFEST_BLOB_NAME)?; let (blob, raw_size) = self.load_blob(backup_dir, MANIFEST_BLOB_NAME)?;
let crypt_mode = blob.crypt_mode()?;
let manifest = BackupManifest::try_from(blob)?; let manifest = BackupManifest::try_from(blob)?;
Ok((manifest, raw_size)) Ok((manifest, crypt_mode, raw_size))
} }
} }

View File

@ -216,6 +216,24 @@ impl IndexFile for DynamicIndexReader {
digest: self.index[pos].digest.clone(), digest: self.index[pos].digest.clone(),
}) })
} }
fn chunk_from_offset(&self, offset: u64) -> Option<(usize, u64)> {
let end_idx = self.index.len() - 1;
let end = self.chunk_end(end_idx);
let found_idx = self.binary_search(0, 0, end_idx, end, offset);
let found_idx = match found_idx {
Ok(i) => i,
Err(_) => return None
};
let found_start = if found_idx == 0 {
0
} else {
self.chunk_end(found_idx - 1)
};
Some((found_idx, offset - found_start))
}
} }
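chunk_from_offset binary-searches the cumulative chunk end offsets and returns the chunk index plus the byte offset inside that chunk. The same lookup over a plain slice of end offsets, as a self-contained sketch (the sample boundaries below are made up; the real reader derives them from chunk_end()):

/// Sketch: (chunk index, offset within chunk) from a sorted list of cumulative chunk ends.
fn chunk_from_offset(chunk_ends: &[u64], offset: u64) -> Option<(usize, u64)> {
    if chunk_ends.is_empty() || offset >= *chunk_ends.last().unwrap() {
        return None;
    }
    // The first chunk whose end is strictly greater than `offset` contains it.
    let idx = chunk_ends.partition_point(|&end| end <= offset);
    let start = if idx == 0 { 0 } else { chunk_ends[idx - 1] };
    Some((idx, offset - start))
}

fn main() {
    let ends = [100u64, 250, 400]; // chunks cover [0,100), [100,250), [250,400)
    assert_eq!(chunk_from_offset(&ends, 0), Some((0, 0)));
    assert_eq!(chunk_from_offset(&ends, 120), Some((1, 20)));
    assert_eq!(chunk_from_offset(&ends, 399), Some((2, 149)));
    assert_eq!(chunk_from_offset(&ends, 400), None);
}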
struct CachedChunk { struct CachedChunk {

View File

@ -17,12 +17,6 @@ pub const ENCRYPTED_BLOB_MAGIC_1_0: [u8; 8] = [123, 103, 133, 190, 34, 45, 76, 2
// openssl::sha::sha256(b"Proxmox Backup zstd compressed encrypted blob v1.0")[0..8] // openssl::sha::sha256(b"Proxmox Backup zstd compressed encrypted blob v1.0")[0..8]
pub const ENCR_COMPR_BLOB_MAGIC_1_0: [u8; 8] = [230, 89, 27, 191, 11, 191, 216, 11]; pub const ENCR_COMPR_BLOB_MAGIC_1_0: [u8; 8] = [230, 89, 27, 191, 11, 191, 216, 11];
//openssl::sha::sha256(b"Proxmox Backup authenticated blob v1.0")[0..8]
pub const AUTHENTICATED_BLOB_MAGIC_1_0: [u8; 8] = [31, 135, 238, 226, 145, 206, 5, 2];
//openssl::sha::sha256(b"Proxmox Backup zstd compressed authenticated blob v1.0")[0..8]
pub const AUTH_COMPR_BLOB_MAGIC_1_0: [u8; 8] = [126, 166, 15, 190, 145, 31, 169, 96];
// openssl::sha::sha256(b"Proxmox Backup fixed sized chunk index v1.0")[0..8] // openssl::sha::sha256(b"Proxmox Backup fixed sized chunk index v1.0")[0..8]
pub const FIXED_SIZED_CHUNK_INDEX_1_0: [u8; 8] = [47, 127, 65, 237, 145, 253, 15, 205]; pub const FIXED_SIZED_CHUNK_INDEX_1_0: [u8; 8] = [47, 127, 65, 237, 145, 253, 15, 205];
@ -50,19 +44,6 @@ pub struct DataBlobHeader {
pub crc: [u8; 4], pub crc: [u8; 4],
} }
/// Authenticated data blob binary storage format
///
/// The ``DataBlobHeader`` for authenticated blobs additionally contains
/// a 16 byte HMAC tag, followed by the data:
///
/// (MAGIC || CRC32 || TAG || Data).
#[derive(Endian)]
#[repr(C,packed)]
pub struct AuthenticatedDataBlobHeader {
pub head: DataBlobHeader,
pub tag: [u8; 32],
}
/// Encrypted data blob binary storage format /// Encrypted data blob binary storage format
/// ///
/// The ``DataBlobHeader`` for encrypted blobs additionally contains /// The ``DataBlobHeader`` for encrypted blobs additionally contains
@ -87,8 +68,6 @@ pub fn header_size(magic: &[u8; 8]) -> usize {
&COMPRESSED_BLOB_MAGIC_1_0 => std::mem::size_of::<DataBlobHeader>(), &COMPRESSED_BLOB_MAGIC_1_0 => std::mem::size_of::<DataBlobHeader>(),
&ENCRYPTED_BLOB_MAGIC_1_0 => std::mem::size_of::<EncryptedDataBlobHeader>(), &ENCRYPTED_BLOB_MAGIC_1_0 => std::mem::size_of::<EncryptedDataBlobHeader>(),
&ENCR_COMPR_BLOB_MAGIC_1_0 => std::mem::size_of::<EncryptedDataBlobHeader>(), &ENCR_COMPR_BLOB_MAGIC_1_0 => std::mem::size_of::<EncryptedDataBlobHeader>(),
&AUTHENTICATED_BLOB_MAGIC_1_0 => std::mem::size_of::<AuthenticatedDataBlobHeader>(),
&AUTH_COMPR_BLOB_MAGIC_1_0 => std::mem::size_of::<AuthenticatedDataBlobHeader>(),
_ => panic!("unknown blob magic"), _ => panic!("unknown blob magic"),
} }
} }

View File

@ -13,7 +13,6 @@ use std::os::unix::io::AsRawFd;
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use std::sync::Arc; use std::sync::Arc;
use super::read_chunk::*;
use super::ChunkInfo; use super::ChunkInfo;
use proxmox::tools::io::ReadExt; use proxmox::tools::io::ReadExt;
@ -146,20 +145,6 @@ impl FixedIndexReader {
Ok(()) Ok(())
} }
#[inline]
fn chunk_end(&self, pos: usize) -> u64 {
if pos >= self.index_length {
panic!("chunk index out of range");
}
let end = ((pos + 1) * self.chunk_size) as u64;
if end > self.size {
self.size
} else {
end
}
}
pub fn print_info(&self) { pub fn print_info(&self) {
println!("Size: {}", self.size); println!("Size: {}", self.size);
println!("ChunkSize: {}", self.chunk_size); println!("ChunkSize: {}", self.chunk_size);
@ -219,6 +204,17 @@ impl IndexFile for FixedIndexReader {
(csum, chunk_end) (csum, chunk_end)
} }
fn chunk_from_offset(&self, offset: u64) -> Option<(usize, u64)> {
if offset >= self.size {
return None;
}
Some((
(offset / self.chunk_size as u64) as usize,
offset & (self.chunk_size - 1) as u64 // fast modulo, valid for 2^x chunk_size
))
}
} }
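For the fixed index the chunk size is a power of two, so offset & (chunk_size - 1) equals offset % chunk_size and spares the division; a quick sketch of that equivalence (the chunk size below is an arbitrary example value):

fn main() {
    let chunk_size: u64 = 4 * 1024 * 1024; // must be a power of two for the bitmask trick
    assert!(chunk_size.is_power_of_two());

    for &offset in [0u64, 1, 4_194_303, 4_194_304, 10_000_000].iter() {
        let idx = offset / chunk_size;
        let within = offset & (chunk_size - 1); // same as offset % chunk_size here
        assert_eq!(within, offset % chunk_size);
        println!("offset {}: chunk {}, offset within chunk {}", offset, idx, within);
    }
}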
pub struct FixedIndexWriter { pub struct FixedIndexWriter {
@ -465,142 +461,3 @@ impl FixedIndexWriter {
Ok(()) Ok(())
} }
} }
pub struct BufferedFixedReader<S> {
store: S,
index: FixedIndexReader,
archive_size: u64,
read_buffer: Vec<u8>,
buffered_chunk_idx: usize,
buffered_chunk_start: u64,
read_offset: u64,
}
impl<S: ReadChunk> BufferedFixedReader<S> {
pub fn new(index: FixedIndexReader, store: S) -> Self {
let archive_size = index.size;
Self {
store,
index,
archive_size,
read_buffer: Vec::with_capacity(1024 * 1024),
buffered_chunk_idx: 0,
buffered_chunk_start: 0,
read_offset: 0,
}
}
pub fn archive_size(&self) -> u64 {
self.archive_size
}
fn buffer_chunk(&mut self, idx: usize) -> Result<(), Error> {
let index = &self.index;
let info = match index.chunk_info(idx) {
Some(info) => info,
None => bail!("chunk index out of range"),
};
// fixme: avoid copy
let data = self.store.read_chunk(&info.digest)?;
let size = info.range.end - info.range.start;
if size != data.len() as u64 {
bail!("read chunk with wrong size ({} != {})", size, data.len());
}
self.read_buffer.clear();
self.read_buffer.extend_from_slice(&data);
self.buffered_chunk_idx = idx;
self.buffered_chunk_start = info.range.start as u64;
Ok(())
}
}
impl<S: ReadChunk> crate::tools::BufferedRead for BufferedFixedReader<S> {
fn buffered_read(&mut self, offset: u64) -> Result<&[u8], Error> {
if offset == self.archive_size {
return Ok(&self.read_buffer[0..0]);
}
let buffer_len = self.read_buffer.len();
let index = &self.index;
// optimization for sequential read
if buffer_len > 0
&& ((self.buffered_chunk_idx + 1) < index.index_length)
&& (offset >= (self.buffered_chunk_start + (self.read_buffer.len() as u64)))
{
let next_idx = self.buffered_chunk_idx + 1;
let next_end = index.chunk_end(next_idx);
if offset < next_end {
self.buffer_chunk(next_idx)?;
let buffer_offset = (offset - self.buffered_chunk_start) as usize;
return Ok(&self.read_buffer[buffer_offset..]);
}
}
if (buffer_len == 0)
|| (offset < self.buffered_chunk_start)
|| (offset >= (self.buffered_chunk_start + (self.read_buffer.len() as u64)))
{
let idx = (offset / index.chunk_size as u64) as usize;
self.buffer_chunk(idx)?;
}
let buffer_offset = (offset - self.buffered_chunk_start) as usize;
Ok(&self.read_buffer[buffer_offset..])
}
}
impl<S: ReadChunk> std::io::Read for BufferedFixedReader<S> {
fn read(&mut self, buf: &mut [u8]) -> Result<usize, std::io::Error> {
use crate::tools::BufferedRead;
use std::io::{Error, ErrorKind};
let data = match self.buffered_read(self.read_offset) {
Ok(v) => v,
Err(err) => return Err(Error::new(ErrorKind::Other, err.to_string())),
};
let n = if data.len() > buf.len() {
buf.len()
} else {
data.len()
};
unsafe {
std::ptr::copy_nonoverlapping(data.as_ptr(), buf.as_mut_ptr(), n);
}
self.read_offset += n as u64;
Ok(n)
}
}
impl<S: ReadChunk> Seek for BufferedFixedReader<S> {
fn seek(&mut self, pos: SeekFrom) -> Result<u64, std::io::Error> {
let new_offset = match pos {
SeekFrom::Start(start_offset) => start_offset as i64,
SeekFrom::End(end_offset) => (self.archive_size as i64) + end_offset,
SeekFrom::Current(offset) => (self.read_offset as i64) + offset,
};
use std::io::{Error, ErrorKind};
if (new_offset < 0) || (new_offset > (self.archive_size as i64)) {
return Err(Error::new(
ErrorKind::Other,
format!(
"seek is out of range {} ([0..{}])",
new_offset, self.archive_size
),
));
}
self.read_offset = new_offset as u64;
Ok(self.read_offset)
}
}

View File

@ -1,6 +1,7 @@
use std::collections::HashMap; use std::collections::HashMap;
use std::ops::Range; use std::ops::Range;
#[derive(Clone)]
pub struct ChunkReadInfo { pub struct ChunkReadInfo {
pub range: Range<u64>, pub range: Range<u64>,
pub digest: [u8; 32], pub digest: [u8; 32],
@ -22,6 +23,9 @@ pub trait IndexFile {
fn index_bytes(&self) -> u64; fn index_bytes(&self) -> u64;
fn chunk_info(&self, pos: usize) -> Option<ChunkReadInfo>; fn chunk_info(&self, pos: usize) -> Option<ChunkReadInfo>;
/// Get the chunk index and the relative offset within it for a byte offset
fn chunk_from_offset(&self, offset: u64) -> Option<(usize, u64)>;
/// Compute index checksum and size /// Compute index checksum and size
fn compute_csum(&self) -> ([u8; 32], u64); fn compute_csum(&self) -> ([u8; 32], u64);

View File

@ -1,4 +1,4 @@
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Context, Error};
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use chrono::{Local, TimeZone, DateTime}; use chrono::{Local, TimeZone, DateTime};
@ -146,12 +146,26 @@ pub fn encrypt_key_with_passphrase(
}) })
} }
pub fn load_and_decrypt_key(path: &std::path::Path, passphrase: &dyn Fn() -> Result<Vec<u8>, Error>) -> Result<([u8;32], DateTime<Local>), Error> { pub fn load_and_decrypt_key(
path: &std::path::Path,
passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
) -> Result<([u8;32], DateTime<Local>), Error> {
do_load_and_decrypt_key(path, passphrase)
.with_context(|| format!("failed to load decryption key from {:?}", path))
}
let raw = file_get_contents(&path)?; fn do_load_and_decrypt_key(
let data = String::from_utf8(raw)?; path: &std::path::Path,
passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
) -> Result<([u8;32], DateTime<Local>), Error> {
decrypt_key(&file_get_contents(&path)?, passphrase)
}
let key_config: KeyConfig = serde_json::from_str(&data)?; pub fn decrypt_key(
mut keydata: &[u8],
passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
) -> Result<([u8;32], DateTime<Local>), Error> {
let key_config: KeyConfig = serde_json::from_reader(&mut keydata)?;
let raw_data = key_config.data; let raw_data = key_config.data;
let created = key_config.created; let created = key_config.created;

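With this refactor, decrypt_key operates on an in-memory key blob instead of a file path, so callers can hand over key material that came from a file descriptor as well. A minimal usage sketch, assuming the proxmox_backup crate is available; the closure is only invoked for password-protected keys, and the literal password is of course a placeholder:

use anyhow::Error;
use proxmox_backup::backup::decrypt_key;

fn load_key(keydata: &[u8]) -> Result<[u8; 32], Error> {
    // Passphrase callback; only called if the key is actually password protected.
    let passphrase = || -> Result<Vec<u8>, Error> { Ok(b"placeholder password".to_vec()) };
    let (key, _created) = decrypt_key(keydata, &passphrase)?;
    Ok(key)
}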
View File

@ -3,22 +3,61 @@ use std::convert::TryFrom;
use std::path::Path; use std::path::Path;
use serde_json::{json, Value}; use serde_json::{json, Value};
use ::serde::{Deserialize, Serialize};
use crate::backup::BackupDir; use crate::backup::{BackupDir, CryptMode, CryptConfig};
pub const MANIFEST_BLOB_NAME: &str = "index.json.blob"; pub const MANIFEST_BLOB_NAME: &str = "index.json.blob";
pub const CLIENT_LOG_BLOB_NAME: &str = "client.log.blob"; pub const CLIENT_LOG_BLOB_NAME: &str = "client.log.blob";
mod hex_csum {
use serde::{self, Deserialize, Serializer, Deserializer};
pub fn serialize<S>(
csum: &[u8; 32],
serializer: S,
) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let s = proxmox::tools::digest_to_hex(csum);
serializer.serialize_str(&s)
}
pub fn deserialize<'de, D>(
deserializer: D,
) -> Result<[u8; 32], D::Error>
where
D: Deserializer<'de>,
{
let s = String::deserialize(deserializer)?;
proxmox::tools::hex_to_digest(&s).map_err(serde::de::Error::custom)
}
}
fn crypt_mode_none() -> CryptMode { CryptMode::None }
fn empty_value() -> Value { json!({}) }
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
pub struct FileInfo { pub struct FileInfo {
pub filename: String, pub filename: String,
pub encrypted: Option<bool>, #[serde(default="crypt_mode_none")] // to be compatible with < 0.8.0 backups
pub crypt_mode: CryptMode,
pub size: u64, pub size: u64,
#[serde(with = "hex_csum")]
pub csum: [u8; 32], pub csum: [u8; 32],
} }
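The serde default on crypt-mode is what keeps manifests written before 0.8.0 readable: an entry without that field simply deserializes to CryptMode::None. A small self-contained sketch of that behaviour with equivalent stand-in types (the csum field and its hex codec are left out here for brevity):

use serde::Deserialize;

#[derive(Debug, PartialEq, Deserialize)]
#[serde(rename_all = "kebab-case")]
enum CryptMode { None, Encrypt, SignOnly }

fn crypt_mode_none() -> CryptMode { CryptMode::None }

#[derive(Debug, Deserialize)]
#[serde(rename_all = "kebab-case")]
struct FileInfo {
    filename: String,
    #[serde(default = "crypt_mode_none")] // pre-0.8.0 entries lack this field
    crypt_mode: CryptMode,
    size: u64,
}

fn main() -> Result<(), serde_json::Error> {
    // Old-style entry without "crypt-mode" falls back to the default.
    let old: FileInfo = serde_json::from_str(r#"{"filename":"a.blob","size":123}"#)?;
    assert_eq!(old.crypt_mode, CryptMode::None);

    // New-style entry with an explicit mode.
    let new: FileInfo =
        serde_json::from_str(r#"{"filename":"b.img.fidx","crypt-mode":"encrypt","size":456}"#)?;
    assert_eq!(new.crypt_mode, CryptMode::Encrypt);
    Ok(())
}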
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
pub struct BackupManifest { pub struct BackupManifest {
snapshot: BackupDir, backup_type: String,
backup_id: String,
backup_time: i64,
files: Vec<FileInfo>, files: Vec<FileInfo>,
#[serde(default="empty_value")] // to be compatible with < 0.8.0 backups
pub unprotected: Value,
} }
#[derive(PartialEq)] #[derive(PartialEq)]
@ -46,12 +85,18 @@ pub fn archive_type<P: AsRef<Path>>(
impl BackupManifest { impl BackupManifest {
pub fn new(snapshot: BackupDir) -> Self { pub fn new(snapshot: BackupDir) -> Self {
Self { files: Vec::new(), snapshot } Self {
backup_type: snapshot.group().backup_type().into(),
backup_id: snapshot.group().backup_id().into(),
backup_time: snapshot.backup_time().timestamp(),
files: Vec::new(),
unprotected: json!({}),
}
} }
pub fn add_file(&mut self, filename: String, size: u64, csum: [u8; 32], encrypted: Option<bool>) -> Result<(), Error> { pub fn add_file(&mut self, filename: String, size: u64, csum: [u8; 32], crypt_mode: CryptMode) -> Result<(), Error> {
let _archive_type = archive_type(&filename)?; // check type let _archive_type = archive_type(&filename)?; // check type
self.files.push(FileInfo { filename, size, csum, encrypted }); self.files.push(FileInfo { filename, size, csum, crypt_mode });
Ok(()) Ok(())
} }
@ -59,7 +104,7 @@ impl BackupManifest {
&self.files[..] &self.files[..]
} }
fn lookup_file_info(&self, name: &str) -> Result<&FileInfo, Error> { pub fn lookup_file_info(&self, name: &str) -> Result<&FileInfo, Error> {
let info = self.files.iter().find(|item| item.filename == name); let info = self.files.iter().find(|item| item.filename == name);
@ -84,31 +129,111 @@ impl BackupManifest {
Ok(()) Ok(())
} }
pub fn into_json(self) -> Value { // Generate canonical json
json!({ fn to_canonical_json(value: &Value) -> Result<Vec<u8>, Error> {
"backup-type": self.snapshot.group().backup_type(), let mut data = Vec::new();
"backup-id": self.snapshot.group().backup_id(), Self::write_canonical_json(value, &mut data)?;
"backup-time": self.snapshot.backup_time().timestamp(), Ok(data)
"files": self.files.iter()
.fold(Vec::new(), |mut acc, info| {
let mut value = json!({
"filename": info.filename,
"encrypted": info.encrypted,
"size": info.size,
"csum": proxmox::tools::digest_to_hex(&info.csum),
});
if let Some(encrypted) = info.encrypted {
value["encrypted"] = encrypted.into();
}
acc.push(value);
acc
})
})
} }
fn write_canonical_json(value: &Value, output: &mut Vec<u8>) -> Result<(), Error> {
match value {
Value::Null => bail!("got unexpected null value"),
Value::String(_) | Value::Number(_) | Value::Bool(_) => {
serde_json::to_writer(output, &value)?;
}
Value::Array(list) => {
output.push(b'[');
let mut iter = list.iter();
if let Some(item) = iter.next() {
Self::write_canonical_json(item, output)?;
for item in iter {
output.push(b',');
Self::write_canonical_json(item, output)?;
}
}
output.push(b']');
}
Value::Object(map) => {
output.push(b'{');
let mut keys: Vec<&str> = map.keys().map(String::as_str).collect();
keys.sort();
let mut iter = keys.into_iter();
if let Some(key) = iter.next() {
Self::write_canonical_json(&key.into(), output)?;
output.push(b':');
Self::write_canonical_json(&map[key], output)?;
for key in iter {
output.push(b',');
Self::write_canonical_json(&key.into(), output)?;
output.push(b':');
Self::write_canonical_json(&map[key], output)?;
}
}
output.push(b'}');
}
}
Ok(())
}
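What the signer relies on is that two textually different but logically identical JSON documents canonicalize to the same bytes, so the HMAC below stays stable. A simplified, self-contained sketch of that property (serde_json's default BTreeMap-backed objects already sort keys; the real writer above additionally avoids re-serialization quirks and rejects null values):

use serde_json::Value;

// Simplified canonical form: parse, then emit compactly with sorted keys.
fn canonical(text: &str) -> Result<String, serde_json::Error> {
    let value: Value = serde_json::from_str(text)?;
    serde_json::to_string(&value)
}

fn main() -> Result<(), serde_json::Error> {
    let a = r#"{ "files": [ { "filename": "a.blob", "size": 1 } ], "backup-id": "elsa" }"#;
    let b = r#"{"backup-id":"elsa","files":[{"filename":"a.blob","size":1}]}"#;
    assert_eq!(canonical(a)?, canonical(b)?); // key order and whitespace do not matter
    println!("{}", canonical(a)?);
    Ok(())
}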
/// Compute manifest signature
///
/// By generating an HMAC-SHA256 over the canonical JSON
/// representation. The 'unprotected' property is excluded.
pub fn signature(&self, crypt_config: &CryptConfig) -> Result<[u8; 32], Error> {
Self::json_signature(&serde_json::to_value(&self)?, crypt_config)
}
fn json_signature(data: &Value, crypt_config: &CryptConfig) -> Result<[u8; 32], Error> {
let mut signed_data = data.clone();
signed_data.as_object_mut().unwrap().remove("unprotected"); // exclude
signed_data.as_object_mut().unwrap().remove("signature"); // exclude
let canonical = Self::to_canonical_json(&signed_data)?;
let sig = crypt_config.compute_auth_tag(&canonical);
Ok(sig)
}
/// Convert the manifest into a JSON string, adding a signature if a crypt_config is given.
pub fn to_string(&self, crypt_config: Option<&CryptConfig>) -> Result<String, Error> {
let mut manifest = serde_json::to_value(&self)?;
if let Some(crypt_config) = crypt_config {
let sig = self.signature(crypt_config)?;
manifest["signature"] = proxmox::tools::digest_to_hex(&sig).into();
}
let manifest = serde_json::to_string_pretty(&manifest).unwrap().into();
Ok(manifest)
}
/// Try to read the manifest. This verifies the signature if there is a crypt_config.
pub fn from_data(data: &[u8], crypt_config: Option<&CryptConfig>) -> Result<BackupManifest, Error> {
let json: Value = serde_json::from_slice(data)?;
let signature = json["signature"].as_str().map(String::from);
if let Some(ref crypt_config) = crypt_config {
if let Some(signature) = signature {
let expected_signature = proxmox::tools::digest_to_hex(&Self::json_signature(&json, crypt_config)?);
if signature != expected_signature {
bail!("wrong signature in manifest");
}
} else {
// not signed: warn/fail?
}
}
let manifest: BackupManifest = serde_json::from_value(json)?;
Ok(manifest)
}
} }
impl TryFrom<super::DataBlob> for BackupManifest { impl TryFrom<super::DataBlob> for BackupManifest {
type Error = Error; type Error = Error;
@ -117,41 +242,50 @@ impl TryFrom<super::DataBlob> for BackupManifest {
.map_err(|err| format_err!("decode backup manifest blob failed - {}", err))?; .map_err(|err| format_err!("decode backup manifest blob failed - {}", err))?;
let json: Value = serde_json::from_slice(&data[..]) let json: Value = serde_json::from_slice(&data[..])
.map_err(|err| format_err!("unable to parse backup manifest json - {}", err))?; .map_err(|err| format_err!("unable to parse backup manifest json - {}", err))?;
BackupManifest::try_from(json) let manifest: BackupManifest = serde_json::from_value(json)?;
Ok(manifest)
} }
} }
impl TryFrom<Value> for BackupManifest {
type Error = Error;
fn try_from(data: Value) -> Result<Self, Error> { #[test]
fn test_manifest_signature() -> Result<(), Error> {
use crate::tools::{required_string_property, required_integer_property, required_array_property}; use crate::backup::{KeyDerivationConfig};
proxmox::try_block!({ let pw = b"test";
let backup_type = required_string_property(&data, "backup-type")?;
let backup_id = required_string_property(&data, "backup-id")?;
let backup_time = required_integer_property(&data, "backup-time")?;
let snapshot = BackupDir::new(backup_type, backup_id, backup_time); let kdf = KeyDerivationConfig::Scrypt {
n: 65536,
r: 8,
p: 1,
salt: Vec::new(),
};
let mut manifest = BackupManifest::new(snapshot); let testkey = kdf.derive_key(pw)?;
for item in required_array_property(&data, "files")?.iter() { let crypt_config = CryptConfig::new(testkey)?;
let filename = required_string_property(item, "filename")?.to_owned();
let csum = required_string_property(item, "csum")?;
let csum = proxmox::tools::hex_to_digest(csum)?;
let size = required_integer_property(item, "size")? as u64;
let encrypted = item["encrypted"].as_bool();
manifest.add_file(filename, size, csum, encrypted)?;
}
if manifest.files().is_empty() { let snapshot: BackupDir = "host/elsa/2020-06-26T13:56:05Z".parse()?;
bail!("manifest does not list any files.");
}
Ok(manifest) let mut manifest = BackupManifest::new(snapshot);
}).map_err(|err: Error| format_err!("unable to parse backup manifest - {}", err))
} manifest.add_file("test1.img.fidx".into(), 200, [1u8; 32], CryptMode::Encrypt)?;
manifest.add_file("abc.blob".into(), 200, [2u8; 32], CryptMode::None)?;
manifest.unprotected["note"] = "This is not protected by the signature.".into();
let text = manifest.to_string(Some(&crypt_config))?;
let manifest: Value = serde_json::from_str(&text)?;
let signature = manifest["signature"].as_str().unwrap().to_string();
assert_eq!(signature, "d7b446fb7db081662081d4b40fedd858a1d6307a5aff4ecff7d5bf4fd35679e9");
let manifest: BackupManifest = serde_json::from_value(manifest)?;
let expected_signature = proxmox::tools::digest_to_hex(&manifest.signature(&crypt_config)?);
assert_eq!(signature, expected_signature);
Ok(())
} }

View File

@ -11,10 +11,10 @@ use super::datastore::DataStore;
/// The ReadChunk trait allows reading backup data chunks (local or remote) /// The ReadChunk trait allows reading backup data chunks (local or remote)
pub trait ReadChunk { pub trait ReadChunk {
/// Returns the encoded chunk data /// Returns the encoded chunk data
fn read_raw_chunk(&mut self, digest: &[u8; 32]) -> Result<DataBlob, Error>; fn read_raw_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error>;
/// Returns the decoded chunk data /// Returns the decoded chunk data
fn read_chunk(&mut self, digest: &[u8; 32]) -> Result<Vec<u8>, Error>; fn read_chunk(&self, digest: &[u8; 32]) -> Result<Vec<u8>, Error>;
} }
#[derive(Clone)] #[derive(Clone)]
@ -33,7 +33,7 @@ impl LocalChunkReader {
} }
impl ReadChunk for LocalChunkReader { impl ReadChunk for LocalChunkReader {
fn read_raw_chunk(&mut self, digest: &[u8; 32]) -> Result<DataBlob, Error> { fn read_raw_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
let (path, _) = self.store.chunk_path(digest); let (path, _) = self.store.chunk_path(digest);
let raw_data = proxmox::tools::fs::file_get_contents(&path)?; let raw_data = proxmox::tools::fs::file_get_contents(&path)?;
let chunk = DataBlob::from_raw(raw_data)?; let chunk = DataBlob::from_raw(raw_data)?;
@ -42,7 +42,7 @@ impl ReadChunk for LocalChunkReader {
Ok(chunk) Ok(chunk)
} }
fn read_chunk(&mut self, digest: &[u8; 32]) -> Result<Vec<u8>, Error> { fn read_chunk(&self, digest: &[u8; 32]) -> Result<Vec<u8>, Error> {
let chunk = ReadChunk::read_raw_chunk(self, digest)?; let chunk = ReadChunk::read_raw_chunk(self, digest)?;
let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref))?; let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref))?;
@ -56,20 +56,20 @@ impl ReadChunk for LocalChunkReader {
pub trait AsyncReadChunk: Send { pub trait AsyncReadChunk: Send {
/// Returns the encoded chunk data /// Returns the encoded chunk data
fn read_raw_chunk<'a>( fn read_raw_chunk<'a>(
&'a mut self, &'a self,
digest: &'a [u8; 32], digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<DataBlob, Error>> + Send + 'a>>; ) -> Pin<Box<dyn Future<Output = Result<DataBlob, Error>> + Send + 'a>>;
/// Returns the decoded chunk data /// Returns the decoded chunk data
fn read_chunk<'a>( fn read_chunk<'a>(
&'a mut self, &'a self,
digest: &'a [u8; 32], digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<Vec<u8>, Error>> + Send + 'a>>; ) -> Pin<Box<dyn Future<Output = Result<Vec<u8>, Error>> + Send + 'a>>;
} }
impl AsyncReadChunk for LocalChunkReader { impl AsyncReadChunk for LocalChunkReader {
fn read_raw_chunk<'a>( fn read_raw_chunk<'a>(
&'a mut self, &'a self,
digest: &'a [u8; 32], digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<DataBlob, Error>> + Send + 'a>> { ) -> Pin<Box<dyn Future<Output = Result<DataBlob, Error>> + Send + 'a>> {
Box::pin(async move{ Box::pin(async move{
@ -84,7 +84,7 @@ impl AsyncReadChunk for LocalChunkReader {
} }
fn read_chunk<'a>( fn read_chunk<'a>(
&'a mut self, &'a self,
digest: &'a [u8; 32], digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<Vec<u8>, Error>> + Send + 'a>> { ) -> Pin<Box<dyn Future<Output = Result<Vec<u8>, Error>> + Send + 'a>> {
Box::pin(async move { Box::pin(async move {

View File

@ -101,7 +101,7 @@ fn verify_dynamic_index(datastore: &DataStore, backup_dir: &BackupDir, info: &Fi
pub fn verify_backup_dir(datastore: &DataStore, backup_dir: &BackupDir, worker: &WorkerTask) -> Result<bool, Error> { pub fn verify_backup_dir(datastore: &DataStore, backup_dir: &BackupDir, worker: &WorkerTask) -> Result<bool, Error> {
let manifest = match datastore.load_manifest(&backup_dir) { let manifest = match datastore.load_manifest(&backup_dir) {
Ok((manifest, _)) => manifest, Ok((manifest, _crypt_mode, _)) => manifest,
Err(err) => { Err(err) => {
worker.log(format!("verify {}:{} - manifest load error: {}", datastore.name(), backup_dir, err)); worker.log(format!("verify {}:{} - manifest load error: {}", datastore.name(), backup_dir, err));
return Ok(false); return Ok(false);

File diff suppressed because it is too large

View File

@ -127,7 +127,7 @@ async fn garbage_collection_status(param: Value) -> Result<Value, Error> {
let mut result = client.get(&path, None).await?; let mut result = client.get(&path, None).await?;
let mut data = result["data"].take(); let mut data = result["data"].take();
let schema = api2::admin::datastore::API_RETURN_SCHEMA_GARBAGE_COLLECTION_STATUS; let schema = &api2::admin::datastore::API_RETURN_SCHEMA_GARBAGE_COLLECTION_STATUS;
let options = default_table_format_options(); let options = default_table_format_options();
@ -193,7 +193,7 @@ async fn task_list(param: Value) -> Result<Value, Error> {
let mut result = client.get("api2/json/nodes/localhost/tasks", Some(args)).await?; let mut result = client.get("api2/json/nodes/localhost/tasks", Some(args)).await?;
let mut data = result["data"].take(); let mut data = result["data"].take();
let schema = api2::node::tasks::API_RETURN_SCHEMA_LIST_TASKS; let schema = &api2::node::tasks::API_RETURN_SCHEMA_LIST_TASKS;
let options = default_table_format_options() let options = default_table_format_options()
.column(ColumnConfig::new("starttime").right_align(false).renderer(tools::format::render_epoch)) .column(ColumnConfig::new("starttime").right_align(false).renderer(tools::format::render_epoch))

View File

@ -1,5 +1,5 @@
use std::sync::Arc; use std::sync::Arc;
use std::path::Path; use std::path::{Path, PathBuf};
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use futures::*; use futures::*;
@ -53,6 +53,11 @@ async fn run() -> Result<(), Error> {
config.add_alias("css", "/usr/share/javascript/proxmox-backup/css"); config.add_alias("css", "/usr/share/javascript/proxmox-backup/css");
config.add_alias("docs", "/usr/share/doc/proxmox-backup/html"); config.add_alias("docs", "/usr/share/doc/proxmox-backup/html");
let mut indexpath = PathBuf::from(buildcfg::JS_DIR);
indexpath.push("index.hbs");
config.register_template("index", &indexpath)?;
config.register_template("console", "/usr/share/pve-xtermjs/index.html.hbs")?;
let rest_server = RestServer::new(config); let rest_server = RestServer::new(config);
//openssl req -x509 -newkey rsa:4096 -keyout /etc/proxmox-backup/proxy.key -out /etc/proxmox-backup/proxy.pem -nodes //openssl req -x509 -newkey rsa:4096 -keyout /etc/proxmox-backup/proxy.key -out /etc/proxmox-backup/proxy.pem -nodes

View File

@ -0,0 +1,326 @@
use std::path::PathBuf;
use std::sync::Arc;
use anyhow::{Error};
use serde_json::Value;
use chrono::{TimeZone, Utc};
use serde::Serialize;
use proxmox::api::{ApiMethod, RpcEnvironment};
use proxmox::api::{
api,
cli::{
OUTPUT_FORMAT,
ColumnConfig,
get_output_format,
format_and_print_result_full,
default_table_format_options,
},
};
use proxmox_backup::backup::{
load_and_decrypt_key,
CryptConfig,
KeyDerivationConfig,
};
use proxmox_backup::client::*;
use crate::{
KEYFILE_SCHEMA, REPO_URL_SCHEMA,
extract_repository_from_value,
record_repository,
connect,
};
#[api()]
#[derive(Copy, Clone, Serialize)]
/// Speed test result
struct Speed {
/// The measured speed in bytes/second
#[serde(skip_serializing_if="Option::is_none")]
speed: Option<f64>,
/// Top result we want to compare with
top: f64,
}
#[api(
properties: {
"tls": {
type: Speed,
},
"sha256": {
type: Speed,
},
"compress": {
type: Speed,
},
"decompress": {
type: Speed,
},
"aes256_gcm": {
type: Speed,
},
},
)]
#[derive(Copy, Clone, Serialize)]
/// Benchmark Results
struct BenchmarkResult {
/// TLS upload speed
tls: Speed,
/// SHA256 checksum computation speed
sha256: Speed,
/// ZStd level 1 compression speed
compress: Speed,
/// ZStd level 1 decompression speed
decompress: Speed,
/// AES256 GCM encryption speed
aes256_gcm: Speed,
}
static BENCHMARK_RESULT_2020_TOP: BenchmarkResult = BenchmarkResult {
tls: Speed {
speed: None,
top: 1_000_000.0 * 590.0, // TLS to localhost, AMD Ryzen 7 2700X
},
sha256: Speed {
speed: None,
top: 1_000_000.0 * 2120.0, // AMD Ryzen 7 2700X
},
compress: Speed {
speed: None,
top: 1_000_000.0 * 2158.0, // AMD Ryzen 7 2700X
},
decompress: Speed {
speed: None,
top: 1_000_000.0 * 8062.0, // AMD Ryzen 7 2700X
},
aes256_gcm: Speed {
speed: None,
top: 1_000_000.0 * 3803.0, // AMD Ryzen 7 2700X
},
};
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
verbose: {
description: "Verbose output.",
type: bool,
optional: true,
},
keyfile: {
schema: KEYFILE_SCHEMA,
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// Run benchmark tests
pub async fn benchmark(
param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let repo = extract_repository_from_value(&param).ok();
let keyfile = param["keyfile"].as_str().map(PathBuf::from);
let verbose = param["verbose"].as_bool().unwrap_or(false);
let output_format = get_output_format(&param);
let crypt_config = match keyfile {
None => None,
Some(path) => {
let (key, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
};
let mut benchmark_result = BENCHMARK_RESULT_2020_TOP;
// do repo tests first, because this may prompt for a password
if let Some(repo) = repo {
test_upload_speed(&mut benchmark_result, repo, crypt_config.clone(), verbose).await?;
}
test_crypt_speed(&mut benchmark_result, verbose)?;
render_result(&output_format, &benchmark_result)?;
Ok(())
}
// print comparison table
fn render_result(
output_format: &str,
benchmark_result: &BenchmarkResult,
) -> Result<(), Error> {
let mut data = serde_json::to_value(benchmark_result)?;
let schema = &BenchmarkResult::API_SCHEMA;
let render_speed = |value: &Value, _record: &Value| -> Result<String, Error> {
match value["speed"].as_f64() {
None => Ok(String::from("not tested")),
Some(speed) => {
let top = value["top"].as_f64().unwrap();
Ok(format!("{:.2} MB/s ({:.0}%)", speed/1_000_000.0, (speed*100.0)/top))
}
}
};
let options = default_table_format_options()
.column(ColumnConfig::new("tls")
.header("TLS (maximal backup upload speed)")
.right_align(false).renderer(render_speed))
.column(ColumnConfig::new("sha256")
.header("SHA256 checksum computation speed")
.right_align(false).renderer(render_speed))
.column(ColumnConfig::new("compress")
.header("ZStd level 1 compression speed")
.right_align(false).renderer(render_speed))
.column(ColumnConfig::new("decompress")
.header("ZStd level 1 decompression speed")
.right_align(false).renderer(render_speed))
.column(ColumnConfig::new("aes256_gcm")
.header("AES256 GCM encryption speed")
.right_align(false).renderer(render_speed));
format_and_print_result_full(&mut data, schema, output_format, &options);
Ok(())
}
async fn test_upload_speed(
benchmark_result: &mut BenchmarkResult,
repo: BackupRepository,
crypt_config: Option<Arc<CryptConfig>>,
verbose: bool,
) -> Result<(), Error> {
let backup_time = Utc.timestamp(Utc::now().timestamp(), 0);
let client = connect(repo.host(), repo.user())?;
record_repository(&repo);
if verbose { eprintln!("Connecting to backup server"); }
let client = BackupWriter::start(
client,
crypt_config.clone(),
repo.store(),
"host",
"benchmark",
backup_time,
false,
).await?;
if verbose { eprintln!("Start TLS speed test"); }
let speed = client.upload_speedtest(verbose).await?;
eprintln!("TLS speed: {:.2} MB/s", speed/1_000_000.0);
benchmark_result.tls.speed = Some(speed);
Ok(())
}
// test hash/crypt/compress speed
fn test_crypt_speed(
benchmark_result: &mut BenchmarkResult,
_verbose: bool,
) -> Result<(), Error> {
let pw = b"test";
let kdf = KeyDerivationConfig::Scrypt {
n: 65536,
r: 8,
p: 1,
salt: Vec::new(),
};
let testkey = kdf.derive_key(pw)?;
let crypt_config = CryptConfig::new(testkey)?;
let random_data = proxmox::sys::linux::random_data(1024*1024)?;
let start_time = std::time::Instant::now();
let mut bytes = 0;
loop {
openssl::sha::sha256(&random_data);
bytes += random_data.len();
if start_time.elapsed().as_micros() > 1_000_000 { break; }
}
let speed = (bytes as f64)/start_time.elapsed().as_secs_f64();
benchmark_result.sha256.speed = Some(speed);
eprintln!("SHA256 speed: {:.2} MB/s", speed/1_000_000.0);
let start_time = std::time::Instant::now();
let mut bytes = 0;
loop {
let mut reader = &random_data[..];
zstd::stream::encode_all(&mut reader, 1)?;
bytes += random_data.len();
if start_time.elapsed().as_micros() > 3_000_000 { break; }
}
let speed = (bytes as f64)/start_time.elapsed().as_secs_f64();
benchmark_result.compress.speed = Some(speed);
eprintln!("Compression speed: {:.2} MB/s", speed/1_000_000.0);
let start_time = std::time::Instant::now();
let compressed_data = {
let mut reader = &random_data[..];
zstd::stream::encode_all(&mut reader, 1)?
};
let mut bytes = 0;
loop {
let mut reader = &compressed_data[..];
let data = zstd::stream::decode_all(&mut reader)?;
bytes += data.len();
if start_time.elapsed().as_micros() > 1_000_000 { break; }
}
let speed = (bytes as f64)/start_time.elapsed().as_secs_f64();
benchmark_result.decompress.speed = Some(speed);
eprintln!("Decompress speed: {:.2} MB/s", speed/1_000_000.0);
let start_time = std::time::Instant::now();
let mut bytes = 0;
loop {
let mut out = Vec::new();
crypt_config.encrypt_to(&random_data, &mut out)?;
bytes += random_data.len();
if start_time.elapsed().as_micros() > 1_000_000 { break; }
}
let speed = (bytes as f64)/start_time.elapsed().as_secs_f64();
benchmark_result.aes256_gcm.speed = Some(speed);
eprintln!("AES256/GCM speed: {:.2} MB/s", speed/1_000_000.0);
Ok(())
}
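Each of the loops above follows the same pattern: run the operation repeatedly for roughly a second and divide the bytes processed by the elapsed wall-clock time. A generic helper expressing that pattern as a sketch (the helper name and the one-second budget are mine, not part of the tool):

use std::time::{Duration, Instant};

/// Run `op` on `data` until `budget` elapses and return the throughput in bytes/second.
fn measure_throughput<F>(data: &[u8], budget: Duration, mut op: F) -> f64
where
    F: FnMut(&[u8]) -> usize, // returns the number of bytes it processed
{
    let start = Instant::now();
    let mut bytes = 0usize;
    while start.elapsed() < budget {
        bytes += op(data);
    }
    bytes as f64 / start.elapsed().as_secs_f64()
}

fn main() {
    let data = vec![0u8; 1024 * 1024];
    let speed = measure_throughput(&data, Duration::from_secs(1), |buf| {
        openssl::sha::sha256(buf); // same primitive the SHA256 loop above times
        buf.len()
    });
    eprintln!("SHA256 speed: {:.2} MB/s", speed / 1_000_000.0);
}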

View File

@ -0,0 +1,261 @@
use std::os::unix::fs::OpenOptionsExt;
use std::io::{Seek, SeekFrom};
use std::sync::Arc;
use anyhow::{bail, format_err, Error};
use serde_json::Value;
use proxmox::api::{api, cli::*};
use proxmox_backup::tools;
use proxmox_backup::client::*;
use crate::{
REPO_URL_SCHEMA,
KEYFD_SCHEMA,
extract_repository_from_value,
record_repository,
keyfile_parameters,
key::get_encryption_key_password,
decrypt_key,
api_datastore_latest_snapshot,
complete_repository,
complete_backup_snapshot,
complete_group_or_snapshot,
complete_pxar_archive_name,
connect,
BackupDir,
BackupGroup,
BufferedDynamicReader,
BufferedDynamicReadAt,
CatalogReader,
CATALOG_NAME,
CryptConfig,
DynamicIndexReader,
IndexFile,
Shell,
};
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
"keyfile": {
optional: true,
type: String,
description: "Path to encryption key.",
},
"keyfd": {
schema: KEYFD_SCHEMA,
optional: true,
},
}
}
)]
/// Dump catalog.
async fn dump_catalog(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let (keydata, _) = keyfile_parameters(&param)?;
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _created) = decrypt_key(&key, &get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
};
let client = connect(repo.host(), repo.user())?;
let client = BackupReader::start(
client,
crypt_config.clone(),
repo.store(),
&snapshot.group().backup_type(),
&snapshot.group().backup_id(),
snapshot.backup_time(),
true,
).await?;
let (manifest, _) = client.download_manifest().await?;
let index = client.download_dynamic_index(&manifest, CATALOG_NAME).await?;
let most_used = index.find_most_used_chunks(8);
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, most_used);
let mut reader = BufferedDynamicReader::new(index, chunk_reader);
let mut catalogfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
.custom_flags(libc::O_TMPFILE)
.open("/tmp")?;
std::io::copy(&mut reader, &mut catalogfile)
.map_err(|err| format_err!("unable to download catalog - {}", err))?;
catalogfile.seek(SeekFrom::Start(0))?;
let mut catalog_reader = CatalogReader::new(catalogfile);
catalog_reader.dump()?;
record_repository(&repo);
Ok(Value::Null)
}
#[api(
input: {
properties: {
"snapshot": {
type: String,
description: "Group/Snapshot path.",
},
"archive-name": {
type: String,
description: "Backup archive name.",
},
"repository": {
optional: true,
schema: REPO_URL_SCHEMA,
},
"keyfile": {
optional: true,
type: String,
description: "Path to encryption key.",
},
"keyfd": {
schema: KEYFD_SCHEMA,
optional: true,
},
},
},
)]
/// Shell to interactively inspect and restore snapshots.
async fn catalog_shell(param: Value) -> Result<(), Error> {
let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.user())?;
let path = tools::required_string_param(&param, "snapshot")?;
let archive_name = tools::required_string_param(&param, "archive-name")?;
let (backup_type, backup_id, backup_time) = if path.matches('/').count() == 1 {
let group: BackupGroup = path.parse()?;
api_datastore_latest_snapshot(&client, repo.store(), group).await?
} else {
let snapshot: BackupDir = path.parse()?;
(snapshot.group().backup_type().to_owned(), snapshot.group().backup_id().to_owned(), snapshot.backup_time())
};
let (keydata, _) = keyfile_parameters(&param)?;
let crypt_config = match keydata {
None => None,
Some(key) => {
let (key, _created) = decrypt_key(&key, &get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
};
let server_archive_name = if archive_name.ends_with(".pxar") {
format!("{}.didx", archive_name)
} else {
bail!("Can only mount pxar archives.");
};
let client = BackupReader::start(
client,
crypt_config.clone(),
repo.store(),
&backup_type,
&backup_id,
backup_time,
true,
).await?;
let mut tmpfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
.custom_flags(libc::O_TMPFILE)
.open("/tmp")?;
let (manifest, _) = client.download_manifest().await?;
let index = client.download_dynamic_index(&manifest, &server_archive_name).await?;
let most_used = index.find_most_used_chunks(8);
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config.clone(), most_used);
let reader = BufferedDynamicReader::new(index, chunk_reader);
let archive_size = reader.archive_size();
let reader: proxmox_backup::pxar::fuse::Reader =
Arc::new(BufferedDynamicReadAt::new(reader));
let decoder = proxmox_backup::pxar::fuse::Accessor::new(reader, archive_size).await?;
client.download(CATALOG_NAME, &mut tmpfile).await?;
let index = DynamicIndexReader::new(tmpfile)
.map_err(|err| format_err!("unable to read catalog index - {}", err))?;
// Note: do not use values stored in index (not trusted) - instead, compute them again
let (csum, size) = index.compute_csum();
manifest.verify_file(CATALOG_NAME, &csum, size)?;
let most_used = index.find_most_used_chunks(8);
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, most_used);
let mut reader = BufferedDynamicReader::new(index, chunk_reader);
let mut catalogfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
.custom_flags(libc::O_TMPFILE)
.open("/tmp")?;
std::io::copy(&mut reader, &mut catalogfile)
.map_err(|err| format_err!("unable to download catalog - {}", err))?;
catalogfile.seek(SeekFrom::Start(0))?;
let catalog_reader = CatalogReader::new(catalogfile);
let state = Shell::new(
catalog_reader,
&server_archive_name,
decoder,
).await?;
println!("Starting interactive shell");
state.shell().await?;
record_repository(&repo);
Ok(())
}
pub fn catalog_mgmt_cli() -> CliCommandMap {
let catalog_shell_cmd_def = CliCommand::new(&API_METHOD_CATALOG_SHELL)
.arg_param(&["snapshot", "archive-name"])
.completion_cb("repository", complete_repository)
.completion_cb("archive-name", complete_pxar_archive_name)
.completion_cb("snapshot", complete_group_or_snapshot);
let catalog_dump_cmd_def = CliCommand::new(&API_METHOD_DUMP_CATALOG)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot);
CliCommandMap::new()
.insert("dump", catalog_dump_cmd_def)
.insert("shell", catalog_shell_cmd_def)
}
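
Both catalog subcommands accept either a backup group ("type/id") or a full snapshot path; a minimal standalone sketch of the one-slash-vs-two distinction used above (the helper name is made up for illustration and is not part of this diff):

// Sketch of the group-vs-snapshot check used by dump_catalog and catalog_shell:
// a single '/' means a backup group (type/id), otherwise a full snapshot path.
fn is_group_path(path: &str) -> bool {
    path.matches('/').count() == 1
}

fn main() {
    assert!(is_group_path("vm/100"));
    assert!(!is_group_path("vm/100/2020-07-24T10:00:00Z"));
}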

View File

@ -0,0 +1,284 @@
use std::path::PathBuf;
use anyhow::{bail, format_err, Error};
use chrono::{Local, TimeZone};
use serde::{Deserialize, Serialize};
use proxmox::api::api;
use proxmox::api::cli::{CliCommand, CliCommandMap};
use proxmox::sys::linux::tty;
use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
use proxmox_backup::backup::{
encrypt_key_with_passphrase, load_and_decrypt_key, store_key_config, KeyConfig,
};
use proxmox_backup::tools;
pub const DEFAULT_ENCRYPTION_KEY_FILE_NAME: &str = "encryption-key.json";
pub const MASTER_PUBKEY_FILE_NAME: &str = "master-public.pem";
pub fn find_master_pubkey() -> Result<Option<PathBuf>, Error> {
super::find_xdg_file(MASTER_PUBKEY_FILE_NAME, "master public key file")
}
pub fn place_master_pubkey() -> Result<PathBuf, Error> {
super::place_xdg_file(MASTER_PUBKEY_FILE_NAME, "master public key file")
}
pub fn find_default_encryption_key() -> Result<Option<PathBuf>, Error> {
super::find_xdg_file(DEFAULT_ENCRYPTION_KEY_FILE_NAME, "default encryption key file")
}
pub fn place_default_encryption_key() -> Result<PathBuf, Error> {
super::place_xdg_file(DEFAULT_ENCRYPTION_KEY_FILE_NAME, "default encryption key file")
}
pub fn read_optional_default_encryption_key() -> Result<Option<Vec<u8>>, Error> {
find_default_encryption_key()?
.map(file_get_contents)
.transpose()
}
pub fn get_encryption_key_password() -> Result<Vec<u8>, Error> {
// fixme: implement other input methods
use std::env::VarError::*;
match std::env::var("PBS_ENCRYPTION_PASSWORD") {
Ok(p) => return Ok(p.as_bytes().to_vec()),
Err(NotUnicode(_)) => bail!("PBS_ENCRYPTION_PASSWORD contains bad characters"),
Err(NotPresent) => {
// Try another method
}
}
// If we're on a TTY, query the user for a password
if tty::stdin_isatty() {
return Ok(tty::read_password("Encryption Key Password: ")?);
}
bail!("no password input mechanism available");
}
#[api(
default: "scrypt",
)]
#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
/// Key derivation function for password protected encryption keys.
pub enum Kdf {
/// Do not encrypt the key.
None,
/// Encrypt the key with a password using SCrypt.
Scrypt,
}
impl Default for Kdf {
#[inline]
fn default() -> Self {
Kdf::Scrypt
}
}
#[api(
input: {
properties: {
kdf: {
type: Kdf,
optional: true,
},
path: {
description:
"Output file. Without this the key will become the new default encryption key.",
optional: true,
}
},
},
)]
/// Create a new encryption key.
fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
let path = match path {
Some(path) => PathBuf::from(path),
None => {
let path = place_default_encryption_key()?;
println!("creating default key at: {:?}", path);
path
}
};
let kdf = kdf.unwrap_or_default();
let key = proxmox::sys::linux::random_data(32)?;
match kdf {
Kdf::None => {
let created = Local.timestamp(Local::now().timestamp(), 0);
store_key_config(
&path,
false,
KeyConfig {
kdf: None,
created,
modified: created,
data: key,
},
)?;
}
Kdf::Scrypt => {
// always read passphrase from tty
if !tty::stdin_isatty() {
bail!("unable to read passphrase - no tty");
}
let password = tty::read_and_verify_password("Encryption Key Password: ")?;
let key_config = encrypt_key_with_passphrase(&key, &password)?;
store_key_config(&path, false, key_config)?;
}
}
Ok(())
}
#[api(
input: {
properties: {
kdf: {
type: Kdf,
optional: true,
},
path: {
description: "Key file. Without this the default key's password will be changed.",
optional: true,
}
},
},
)]
/// Change the encryption key's password.
fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
let path = match path {
Some(path) => PathBuf::from(path),
None => {
let path = find_default_encryption_key()?
.ok_or_else(|| {
format_err!("no encryption file provided and no default file found")
})?;
println!("updating default key at: {:?}", path);
path
}
};
let kdf = kdf.unwrap_or_default();
if !tty::stdin_isatty() {
bail!("unable to change passphrase - no tty");
}
let (key, created) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
match kdf {
Kdf::None => {
let modified = Local.timestamp(Local::now().timestamp(), 0);
store_key_config(
&path,
true,
KeyConfig {
kdf: None,
created, // keep original value
modified,
data: key.to_vec(),
},
)?;
}
Kdf::Scrypt => {
let password = tty::read_and_verify_password("New Password: ")?;
let mut new_key_config = encrypt_key_with_passphrase(&key, &password)?;
new_key_config.created = created; // keep original value
store_key_config(&path, true, new_key_config)?;
}
}
Ok(())
}
#[api(
input: {
properties: {
path: {
description: "Path to the PEM formatted RSA public key.",
},
},
},
)]
/// Import an RSA public key used to put an encrypted version of the symmetric backup encryption
/// key onto the backup server along with each backup.
fn import_master_pubkey(path: String) -> Result<(), Error> {
let pem_data = file_get_contents(&path)?;
if let Err(err) = openssl::pkey::PKey::public_key_from_pem(&pem_data) {
bail!("Unable to decode PEM data - {}", err);
}
let target_path = place_master_pubkey()?;
replace_file(&target_path, &pem_data, CreateOptions::new())?;
println!("Imported public master key to {:?}", target_path);
Ok(())
}
#[api]
/// Create an RSA public/private key pair used to put an encrypted version of the symmetric backup
/// encryption key onto the backup server along with each backup.
fn create_master_key() -> Result<(), Error> {
// we need a TTY to query the new password
if !tty::stdin_isatty() {
bail!("unable to create master key - no tty");
}
let rsa = openssl::rsa::Rsa::generate(4096)?;
let pkey = openssl::pkey::PKey::from_rsa(rsa)?;
let password = String::from_utf8(tty::read_and_verify_password("Master Key Password: ")?)?;
let pub_key: Vec<u8> = pkey.public_key_to_pem()?;
let filename_pub = "master-public.pem";
println!("Writing public master key to {}", filename_pub);
replace_file(filename_pub, pub_key.as_slice(), CreateOptions::new())?;
let cipher = openssl::symm::Cipher::aes_256_cbc();
let priv_key: Vec<u8> = pkey.private_key_to_pem_pkcs8_passphrase(cipher, password.as_bytes())?;
let filename_priv = "master-private.pem";
println!("Writing private master key to {}", filename_priv);
replace_file(filename_priv, priv_key.as_slice(), CreateOptions::new())?;
Ok(())
}
pub fn cli() -> CliCommandMap {
let key_create_cmd_def = CliCommand::new(&API_METHOD_CREATE)
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
let key_change_passphrase_cmd_def = CliCommand::new(&API_METHOD_CHANGE_PASSPHRASE)
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
let key_create_master_key_cmd_def = CliCommand::new(&API_METHOD_CREATE_MASTER_KEY);
let key_import_master_pubkey_cmd_def = CliCommand::new(&API_METHOD_IMPORT_MASTER_PUBKEY)
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
CliCommandMap::new()
.insert("create", key_create_cmd_def)
.insert("create-master-key", key_create_master_key_cmd_def)
.insert("import-master-pubkey", key_import_master_pubkey_cmd_def)
.insert("change-passphrase", key_change_passphrase_cmd_def)
}
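
The kebab-case serde attribute above is what makes the CLI accept "none" and "scrypt" as kdf values; a small round-trip sketch, assuming the serde and serde_json crates and a local copy of the enum rather than the one defined in this file:

use serde::{Deserialize, Serialize};

#[derive(Clone, Copy, Debug, Deserialize, Serialize, PartialEq)]
#[serde(rename_all = "kebab-case")]
enum Kdf { None, Scrypt }

fn main() -> Result<(), serde_json::Error> {
    // variant names map to lowercase kebab-case strings
    assert_eq!(serde_json::to_string(&Kdf::Scrypt)?, "\"scrypt\"");
    assert_eq!(serde_json::from_str::<Kdf>("\"none\"")?, Kdf::None);
    Ok(())
}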

View File

@ -0,0 +1,39 @@
use anyhow::{Context, Error};
mod benchmark;
pub use benchmark::*;
mod mount;
pub use mount::*;
mod task;
pub use task::*;
mod catalog;
pub use catalog::*;
pub mod key;
pub fn base_directories() -> Result<xdg::BaseDirectories, Error> {
xdg::BaseDirectories::with_prefix("proxmox-backup").map_err(Error::from)
}
/// Convenience helper for better error messages:
pub fn find_xdg_file(
file_name: impl AsRef<std::path::Path>,
description: &'static str,
) -> Result<Option<std::path::PathBuf>, Error> {
let file_name = file_name.as_ref();
base_directories()
.map(|base| base.find_config_file(file_name))
.with_context(|| format!("error searching for {}", description))
}
pub fn place_xdg_file(
file_name: impl AsRef<std::path::Path>,
description: &'static str,
) -> Result<std::path::PathBuf, Error> {
let file_name = file_name.as_ref();
base_directories()
.and_then(|base| {
base.place_config_file(file_name).map_err(Error::from)
})
.with_context(|| format!("failed to place {} in xdg home", description))
}
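
For illustration, a minimal sketch of what the XDG helpers above resolve to, assuming only the xdg crate; on a default setup the key would land under ~/.config/proxmox-backup/:

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // mirrors base_directories() + place_default_encryption_key()
    let base = xdg::BaseDirectories::with_prefix("proxmox-backup")?;
    let path = base.place_config_file("encryption-key.json")?;
    println!("default encryption key would be placed at {:?}", path);
    Ok(())
}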

View File

@ -0,0 +1,195 @@
use std::path::PathBuf;
use std::sync::Arc;
use std::os::unix::io::RawFd;
use std::path::Path;
use std::ffi::OsStr;
use anyhow::{bail, format_err, Error};
use serde_json::Value;
use tokio::signal::unix::{signal, SignalKind};
use nix::unistd::{fork, ForkResult, pipe};
use futures::select;
use futures::future::FutureExt;
use proxmox::{sortable, identity};
use proxmox::api::{ApiHandler, ApiMethod, RpcEnvironment, schema::*, cli::*};
use proxmox_backup::tools;
use proxmox_backup::backup::{
load_and_decrypt_key,
CryptConfig,
IndexFile,
BackupDir,
BackupGroup,
BufferedDynamicReader,
};
use proxmox_backup::client::*;
use crate::{
REPO_URL_SCHEMA,
extract_repository_from_value,
complete_pxar_archive_name,
complete_group_or_snapshot,
complete_repository,
record_repository,
connect,
api_datastore_latest_snapshot,
BufferedDynamicReadAt,
};
#[sortable]
const API_METHOD_MOUNT: ApiMethod = ApiMethod::new(
&ApiHandler::Sync(&mount),
&ObjectSchema::new(
"Mount pxar archive.",
&sorted!([
("snapshot", false, &StringSchema::new("Group/Snapshot path.").schema()),
("archive-name", false, &StringSchema::new("Backup archive name.").schema()),
("target", false, &StringSchema::new("Target directory path.").schema()),
("repository", true, &REPO_URL_SCHEMA),
("keyfile", true, &StringSchema::new("Path to encryption key.").schema()),
("verbose", true, &BooleanSchema::new("Verbose output.").default(false).schema()),
]),
)
);
pub fn mount_cmd_def() -> CliCommand {
CliCommand::new(&API_METHOD_MOUNT)
.arg_param(&["snapshot", "archive-name", "target"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_group_or_snapshot)
.completion_cb("archive-name", complete_pxar_archive_name)
.completion_cb("target", tools::complete_file_name)
}
fn mount(
param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let verbose = param["verbose"].as_bool().unwrap_or(false);
if verbose {
// This will stay in foreground with debug output enabled as None is
// passed for the RawFd.
return proxmox_backup::tools::runtime::main(mount_do(param, None));
}
// Process should be daemonized.
// Make sure to fork before the async runtime is instantiated to avoid troubles.
let pipe = pipe()?;
match fork() {
Ok(ForkResult::Parent { .. }) => {
nix::unistd::close(pipe.1).unwrap();
// Blocks the parent process until we are ready to go in the child
let _res = nix::unistd::read(pipe.0, &mut [0]).unwrap();
Ok(Value::Null)
}
Ok(ForkResult::Child) => {
nix::unistd::close(pipe.0).unwrap();
nix::unistd::setsid().unwrap();
proxmox_backup::tools::runtime::main(mount_do(param, Some(pipe.1)))
}
Err(_) => bail!("failed to daemonize process"),
}
}
async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let archive_name = tools::required_string_param(&param, "archive-name")?;
let target = tools::required_string_param(&param, "target")?;
let client = connect(repo.host(), repo.user())?;
record_repository(&repo);
let path = tools::required_string_param(&param, "snapshot")?;
let (backup_type, backup_id, backup_time) = if path.matches('/').count() == 1 {
let group: BackupGroup = path.parse()?;
api_datastore_latest_snapshot(&client, repo.store(), group).await?
} else {
let snapshot: BackupDir = path.parse()?;
(snapshot.group().backup_type().to_owned(), snapshot.group().backup_id().to_owned(), snapshot.backup_time())
};
let keyfile = param["keyfile"].as_str().map(PathBuf::from);
let crypt_config = match keyfile {
None => None,
Some(path) => {
let (key, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
Some(Arc::new(CryptConfig::new(key)?))
}
};
let server_archive_name = if archive_name.ends_with(".pxar") {
format!("{}.didx", archive_name)
} else {
bail!("Can only mount pxar archives.");
};
let client = BackupReader::start(
client,
crypt_config.clone(),
repo.store(),
&backup_type,
&backup_id,
backup_time,
true,
).await?;
let (manifest, _) = client.download_manifest().await?;
if server_archive_name.ends_with(".didx") {
let index = client.download_dynamic_index(&manifest, &server_archive_name).await?;
let most_used = index.find_most_used_chunks(8);
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, most_used);
let reader = BufferedDynamicReader::new(index, chunk_reader);
let archive_size = reader.archive_size();
let reader: proxmox_backup::pxar::fuse::Reader =
Arc::new(BufferedDynamicReadAt::new(reader));
let decoder = proxmox_backup::pxar::fuse::Accessor::new(reader, archive_size).await?;
let options = OsStr::new("ro,default_permissions");
let session = proxmox_backup::pxar::fuse::Session::mount(
decoder,
&options,
false,
Path::new(target),
)
.map_err(|err| format_err!("pxar mount failed: {}", err))?;
if let Some(pipe) = pipe {
nix::unistd::chdir(Path::new("/")).unwrap();
// Finish creation of daemon by redirecting file descriptors.
let nullfd = nix::fcntl::open(
"/dev/null",
nix::fcntl::OFlag::O_RDWR,
nix::sys::stat::Mode::empty(),
).unwrap();
nix::unistd::dup2(nullfd, 0).unwrap();
nix::unistd::dup2(nullfd, 1).unwrap();
nix::unistd::dup2(nullfd, 2).unwrap();
if nullfd > 2 {
nix::unistd::close(nullfd).unwrap();
}
// Signal the parent process that we are done with the setup and it can
// terminate.
nix::unistd::write(pipe, &[0u8])?;
nix::unistd::close(pipe).unwrap();
}
let mut interrupt = signal(SignalKind::interrupt())?;
select! {
res = session.fuse() => res?,
_ = interrupt.recv().fuse() => {
// exit on interrupt
}
}
} else {
bail!("unknown archive file extension (expected .pxar)");
}
Ok(Value::Null)
}
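
The mount path only accepts pxar archives and maps the client-visible name to the server-side dynamic index; a standalone sketch of that mapping (hypothetical helper, returning None instead of bailing):

fn server_archive_name(archive_name: &str) -> Option<String> {
    if archive_name.ends_with(".pxar") {
        // "<name>.pxar" is stored as the dynamic index "<name>.pxar.didx"
        Some(format!("{}.didx", archive_name))
    } else {
        None // only pxar archives can be mounted
    }
}

fn main() {
    assert_eq!(server_archive_name("root.pxar").as_deref(), Some("root.pxar.didx"));
    assert!(server_archive_name("drive.img").is_none());
}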

View File

@ -0,0 +1,148 @@
use anyhow::{Error};
use serde_json::{json, Value};
use proxmox::api::{api, cli::*};
use proxmox_backup::tools;
use proxmox_backup::client::*;
use proxmox_backup::api2::types::UPID_SCHEMA;
use crate::{
REPO_URL_SCHEMA,
extract_repository_from_value,
complete_repository,
connect,
};
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
limit: {
description: "The maximal number of tasks to list.",
type: Integer,
optional: true,
minimum: 1,
maximum: 1000,
default: 50,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
all: {
type: Boolean,
description: "Also list stopped tasks.",
optional: true,
},
}
}
)]
/// List running server tasks for this repository user.
async fn task_list(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.user())?;
let limit = param["limit"].as_u64().unwrap_or(50) as usize;
let running = !param["all"].as_bool().unwrap_or(false);
let args = json!({
"running": running,
"start": 0,
"limit": limit,
"userfilter": repo.user(),
"store": repo.store(),
});
let mut result = client.get("api2/json/nodes/localhost/tasks", Some(args)).await?;
let mut data = result["data"].take();
let schema = &proxmox_backup::api2::node::tasks::API_RETURN_SCHEMA_LIST_TASKS;
let options = default_table_format_options()
.column(ColumnConfig::new("starttime").right_align(false).renderer(tools::format::render_epoch))
.column(ColumnConfig::new("endtime").right_align(false).renderer(tools::format::render_epoch))
.column(ColumnConfig::new("upid"))
.column(ColumnConfig::new("status").renderer(tools::format::render_task_status));
format_and_print_result_full(&mut data, schema, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
upid: {
schema: UPID_SCHEMA,
},
}
}
)]
/// Display the task log.
async fn task_log(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let upid = tools::required_string_param(&param, "upid")?;
let client = connect(repo.host(), repo.user())?;
display_task_log(client, upid, true).await?;
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
upid: {
schema: UPID_SCHEMA,
},
}
}
)]
/// Try to stop a specific task.
async fn task_stop(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let upid_str = tools::required_string_param(&param, "upid")?;
let mut client = connect(repo.host(), repo.user())?;
let path = format!("api2/json/nodes/localhost/tasks/{}", upid_str);
let _ = client.delete(&path, None).await?;
Ok(Value::Null)
}
pub fn task_mgmt_cli() -> CliCommandMap {
let task_list_cmd_def = CliCommand::new(&API_METHOD_TASK_LIST)
.completion_cb("repository", complete_repository);
let task_log_cmd_def = CliCommand::new(&API_METHOD_TASK_LOG)
.arg_param(&["upid"]);
let task_stop_cmd_def = CliCommand::new(&API_METHOD_TASK_STOP)
.arg_param(&["upid"]);
CliCommandMap::new()
.insert("log", task_log_cmd_def)
.insert("list", task_list_cmd_def)
.insert("stop", task_stop_cmd_def)
}
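
The task list call above filters server-side via a small JSON parameter object; a sketch of how those parameters are assembled, assuming only serde_json (the helper name is invented for illustration):

fn task_list_args(all: bool, limit: u64, user: &str, store: &str) -> serde_json::Value {
    serde_json::json!({
        "running": !all,      // --all inverts the running-only filter
        "start": 0,
        "limit": limit,       // capped at 1000 by the API schema
        "userfilter": user,
        "store": store,
    })
}

fn main() {
    let args = task_list_args(false, 50, "root@pam", "store1");
    assert_eq!(args["running"], serde_json::Value::Bool(true));
}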

View File

@ -1,32 +1,18 @@
use std::path::PathBuf;
use anyhow::{bail, Error}; use anyhow::{bail, Error};
use proxmox::api::{api, cli::*}; use proxmox::api::{api, cli::*};
use proxmox_backup::config; use proxmox_backup::config;
use proxmox_backup::configdir;
use proxmox_backup::auth_helpers::*; use proxmox_backup::auth_helpers::*;
use proxmox_backup::tools::cert::CertInfo;
fn x509name_to_string(name: &openssl::x509::X509NameRef) -> Result<String, Error> {
let mut parts = Vec::new();
for entry in name.entries() {
parts.push(format!("{} = {}", entry.object().nid().short_name()?, entry.data().as_utf8()?));
}
Ok(parts.join(", "))
}
#[api] #[api]
/// Display node certificate information. /// Display node certificate information.
fn cert_info() -> Result<(), Error> { fn cert_info() -> Result<(), Error> {
let cert_path = PathBuf::from(configdir!("/proxy.pem")); let cert = CertInfo::new()?;
let cert_pem = proxmox::tools::fs::file_get_contents(&cert_path)?; println!("Subject: {}", cert.subject_name()?);
let cert = openssl::x509::X509::from_pem(&cert_pem)?;
println!("Subject: {}", x509name_to_string(cert.subject_name())?);
if let Some(san) = cert.subject_alt_names() { if let Some(san) = cert.subject_alt_names() {
for name in san.iter() { for name in san.iter() {
@ -42,17 +28,12 @@ fn cert_info() -> Result<(), Error> {
} }
} }
println!("Issuer: {}", x509name_to_string(cert.issuer_name())?); println!("Issuer: {}", cert.issuer_name()?);
println!("Validity:"); println!("Validity:");
println!(" Not Before: {}", cert.not_before()); println!(" Not Before: {}", cert.not_before());
println!(" Not After : {}", cert.not_after()); println!(" Not After : {}", cert.not_after());
let fp = cert.digest(openssl::hash::MessageDigest::sha256())?; println!("Fingerprint (sha256): {}", cert.fingerprint()?);
let fp_string = proxmox::tools::digest_to_hex(&fp);
let fp_string = fp_string.as_bytes().chunks(2).map(|v| std::str::from_utf8(v).unwrap())
.collect::<Vec<&str>>().join(":");
println!("Fingerprint (sha256): {}", fp_string);
let pubkey = cert.public_key()?; let pubkey = cert.public_key()?;
println!("Public key type: {}", openssl::nid::Nid::from_raw(pubkey.id().as_raw()).long_name()?); println!("Public key type: {}", openssl::nid::Nid::from_raw(pubkey.id().as_raw()).long_name()?);

View File

@ -123,18 +123,19 @@ impl BackupReader {
} }
/// Download backup manifest (index.json) /// Download backup manifest (index.json)
pub async fn download_manifest(&self) -> Result<BackupManifest, Error> { ///
/// The manifest signature is verified if we have a crypt_config.
use std::convert::TryFrom; pub async fn download_manifest(&self) -> Result<(BackupManifest, Vec<u8>), Error> {
let mut raw_data = Vec::with_capacity(64 * 1024); let mut raw_data = Vec::with_capacity(64 * 1024);
self.download(MANIFEST_BLOB_NAME, &mut raw_data).await?; self.download(MANIFEST_BLOB_NAME, &mut raw_data).await?;
let blob = DataBlob::from_raw(raw_data)?; let blob = DataBlob::from_raw(raw_data)?;
blob.verify_crc()?; blob.verify_crc()?;
let data = blob.decode(self.crypt_config.as_ref().map(Arc::as_ref))?; let data = blob.decode(None)?;
let json: Value = serde_json::from_slice(&data[..])?;
BackupManifest::try_from(json) let manifest = BackupManifest::from_data(&data[..], self.crypt_config.as_ref().map(Arc::as_ref))?;
Ok((manifest, data))
} }
/// Download a .blob file /// Download a .blob file

View File

@ -3,7 +3,7 @@ use std::os::unix::fs::OpenOptionsExt;
use std::sync::atomic::{AtomicUsize, Ordering}; use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex}; use std::sync::{Arc, Mutex};
use anyhow::{format_err, Error}; use anyhow::{bail, format_err, Error};
use chrono::{DateTime, Utc}; use chrono::{DateTime, Utc};
use futures::*; use futures::*;
use futures::stream::Stream; use futures::stream::Stream;
@ -16,6 +16,7 @@ use proxmox::tools::digest_to_hex;
use super::merge_known_chunks::{MergedChunkInfo, MergeKnownChunks}; use super::merge_known_chunks::{MergedChunkInfo, MergeKnownChunks};
use crate::backup::*; use crate::backup::*;
use crate::tools::format::HumanByte;
use super::{HttpClient, H2Client}; use super::{HttpClient, H2Client};
@ -163,21 +164,12 @@ impl BackupWriter {
data: Vec<u8>, data: Vec<u8>,
file_name: &str, file_name: &str,
compress: bool, compress: bool,
crypt_or_sign: Option<bool>, encrypt: bool,
) -> Result<BackupStats, Error> { ) -> Result<BackupStats, Error> {
let blob = match (encrypt, &self.crypt_config) {
let blob = if let Some(ref crypt_config) = self.crypt_config { (false, _) => DataBlob::encode(&data, None, compress)?,
if let Some(encrypt) = crypt_or_sign { (true, None) => bail!("requested encryption without a crypt config"),
if encrypt { (true, Some(crypt_config)) => DataBlob::encode(&data, Some(crypt_config), compress)?,
DataBlob::encode(&data, Some(crypt_config), compress)?
} else {
DataBlob::create_signed(&data, crypt_config, compress)?
}
} else {
DataBlob::encode(&data, None, compress)?
}
} else {
DataBlob::encode(&data, None, compress)?
}; };
let raw_data = blob.into_inner(); let raw_data = blob.into_inner();
@ -194,8 +186,8 @@ impl BackupWriter {
src_path: P, src_path: P,
file_name: &str, file_name: &str,
compress: bool, compress: bool,
crypt_or_sign: Option<bool>, encrypt: bool,
) -> Result<BackupStats, Error> { ) -> Result<BackupStats, Error> {
let src_path = src_path.as_ref(); let src_path = src_path.as_ref();
@ -209,7 +201,7 @@ impl BackupWriter {
.await .await
.map_err(|err| format_err!("unable to read file {:?} - {}", src_path, err))?; .map_err(|err| format_err!("unable to read file {:?} - {}", src_path, err))?;
self.upload_blob_from_data(contents, file_name, compress, crypt_or_sign).await self.upload_blob_from_data(contents, file_name, compress, encrypt).await
} }
pub async fn upload_stream( pub async fn upload_stream(
@ -219,6 +211,8 @@ impl BackupWriter {
stream: impl Stream<Item = Result<bytes::BytesMut, Error>>, stream: impl Stream<Item = Result<bytes::BytesMut, Error>>,
prefix: &str, prefix: &str,
fixed_size: Option<u64>, fixed_size: Option<u64>,
compress: bool,
encrypt: bool,
) -> Result<BackupStats, Error> { ) -> Result<BackupStats, Error> {
let known_chunks = Arc::new(Mutex::new(HashSet::new())); let known_chunks = Arc::new(Mutex::new(HashSet::new()));
@ -227,6 +221,10 @@ impl BackupWriter {
param["size"] = size.into(); param["size"] = size.into();
} }
if encrypt && self.crypt_config.is_none() {
bail!("requested encryption without a crypt config");
}
let index_path = format!("{}_index", prefix); let index_path = format!("{}_index", prefix);
let close_path = format!("{}_close", prefix); let close_path = format!("{}_close", prefix);
@ -245,22 +243,43 @@ impl BackupWriter {
let wid = self.h2.post(&index_path, Some(param)).await?.as_u64().unwrap(); let wid = self.h2.post(&index_path, Some(param)).await?.as_u64().unwrap();
let (chunk_count, size, duration, speed, csum) = let (chunk_count, chunk_reused, size, size_reused, duration, csum) =
Self::upload_chunk_info_stream( Self::upload_chunk_info_stream(
self.h2.clone(), self.h2.clone(),
wid, wid,
stream, stream,
&prefix, &prefix,
known_chunks.clone(), known_chunks.clone(),
self.crypt_config.clone(), if encrypt { self.crypt_config.clone() } else { None },
compress,
self.verbose, self.verbose,
) )
.await?; .await?;
println!("{}: Uploaded {} bytes as {} chunks in {} seconds ({} MB/s).", archive_name, size, chunk_count, duration.as_secs(), speed); let uploaded = size - size_reused;
if chunk_count > 0 { let vsize_h: HumanByte = size.into();
println!("{}: Average chunk size was {} bytes.", archive_name, size/chunk_count); let archive = if self.verbose {
println!("{}: Time per request: {} microseconds.", archive_name, (duration.as_micros())/(chunk_count as u128)); archive_name.to_string()
} else {
crate::tools::format::strip_server_file_expenstion(archive_name.clone())
};
if archive_name != CATALOG_NAME {
let speed: HumanByte = ((uploaded * 1_000_000) / (duration.as_micros() as usize)).into();
let uploaded: HumanByte = uploaded.into();
println!("{}: had to upload {} of {} in {:.2}s, average speed {}/s.", archive, uploaded, vsize_h, duration.as_secs_f64(), speed);
} else {
println!("Uploaded backup catalog ({})", vsize_h);
}
if size_reused > 0 && size > 1024*1024 {
let reused_percent = size_reused as f64 * 100. / size as f64;
let reused: HumanByte = size_reused.into();
println!("{}: backup was done incrementally, reused {} ({:.1}%)", archive, reused, reused_percent);
}
if self.verbose && chunk_count > 0 {
println!("{}: Reused {} from {} chunks.", archive, chunk_reused, chunk_count);
println!("{}: Average chunk size was {}.", archive, HumanByte::from(size/chunk_count));
println!("{}: Average time per request: {} microseconds.", archive, (duration.as_micros())/(chunk_count as u128));
} }
let param = json!({ let param = json!({
@ -276,7 +295,7 @@ impl BackupWriter {
}) })
} }
fn response_queue() -> ( fn response_queue(verbose: bool) -> (
mpsc::Sender<h2::client::ResponseFuture>, mpsc::Sender<h2::client::ResponseFuture>,
oneshot::Receiver<Result<(), Error>> oneshot::Receiver<Result<(), Error>>
) { ) {
@ -300,11 +319,11 @@ impl BackupWriter {
tokio::spawn( tokio::spawn(
verify_queue_rx verify_queue_rx
.map(Ok::<_, Error>) .map(Ok::<_, Error>)
.try_for_each(|response: h2::client::ResponseFuture| { .try_for_each(move |response: h2::client::ResponseFuture| {
response response
.map_err(Error::from) .map_err(Error::from)
.and_then(H2Client::h2api_response) .and_then(H2Client::h2api_response)
.map_ok(|result| println!("RESPONSE: {:?}", result)) .map_ok(move |result| if verbose { println!("RESPONSE: {:?}", result) })
.map_err(|err| format_err!("pipelined request failed: {}", err)) .map_err(|err| format_err!("pipelined request failed: {}", err))
}) })
.map(|result| { .map(|result| {
@ -455,8 +474,6 @@ impl BackupWriter {
/// Download backup manifest (index.json) of last backup /// Download backup manifest (index.json) of last backup
pub async fn download_previous_manifest(&self) -> Result<BackupManifest, Error> { pub async fn download_previous_manifest(&self) -> Result<BackupManifest, Error> {
use std::convert::TryFrom;
let mut raw_data = Vec::with_capacity(64 * 1024); let mut raw_data = Vec::with_capacity(64 * 1024);
let param = json!({ "archive-name": MANIFEST_BLOB_NAME }); let param = json!({ "archive-name": MANIFEST_BLOB_NAME });
@ -465,8 +482,8 @@ impl BackupWriter {
let blob = DataBlob::from_raw(raw_data)?; let blob = DataBlob::from_raw(raw_data)?;
blob.verify_crc()?; blob.verify_crc()?;
let data = blob.decode(self.crypt_config.as_ref().map(Arc::as_ref))?; let data = blob.decode(self.crypt_config.as_ref().map(Arc::as_ref))?;
let json: Value = serde_json::from_slice(&data[..])?;
let manifest = BackupManifest::try_from(json)?; let manifest = BackupManifest::from_data(&data[..], self.crypt_config.as_ref().map(Arc::as_ref))?;
Ok(manifest) Ok(manifest)
} }
@ -478,14 +495,19 @@ impl BackupWriter {
prefix: &str, prefix: &str,
known_chunks: Arc<Mutex<HashSet<[u8;32]>>>, known_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
crypt_config: Option<Arc<CryptConfig>>, crypt_config: Option<Arc<CryptConfig>>,
compress: bool,
verbose: bool, verbose: bool,
) -> impl Future<Output = Result<(usize, usize, std::time::Duration, usize, [u8; 32]), Error>> { ) -> impl Future<Output = Result<(usize, usize, usize, usize, std::time::Duration, [u8; 32]), Error>> {
let repeat = Arc::new(AtomicUsize::new(0)); let total_chunks = Arc::new(AtomicUsize::new(0));
let repeat2 = repeat.clone(); let total_chunks2 = total_chunks.clone();
let known_chunk_count = Arc::new(AtomicUsize::new(0));
let known_chunk_count2 = known_chunk_count.clone();
let stream_len = Arc::new(AtomicUsize::new(0)); let stream_len = Arc::new(AtomicUsize::new(0));
let stream_len2 = stream_len.clone(); let stream_len2 = stream_len.clone();
let reused_len = Arc::new(AtomicUsize::new(0));
let reused_len2 = reused_len.clone();
let append_chunk_path = format!("{}_index", prefix); let append_chunk_path = format!("{}_index", prefix);
let upload_chunk_path = format!("{}_chunk", prefix); let upload_chunk_path = format!("{}_chunk", prefix);
@ -504,11 +526,11 @@ impl BackupWriter {
let chunk_len = data.len(); let chunk_len = data.len();
repeat.fetch_add(1, Ordering::SeqCst); total_chunks.fetch_add(1, Ordering::SeqCst);
let offset = stream_len.fetch_add(chunk_len, Ordering::SeqCst) as u64; let offset = stream_len.fetch_add(chunk_len, Ordering::SeqCst) as u64;
let mut chunk_builder = DataChunkBuilder::new(data.as_ref()) let mut chunk_builder = DataChunkBuilder::new(data.as_ref())
.compress(true); .compress(compress);
if let Some(ref crypt_config) = crypt_config { if let Some(ref crypt_config) = crypt_config {
chunk_builder = chunk_builder.crypt_config(crypt_config); chunk_builder = chunk_builder.crypt_config(crypt_config);
@ -527,6 +549,8 @@ impl BackupWriter {
let chunk_is_known = known_chunks.contains(digest); let chunk_is_known = known_chunks.contains(digest);
if chunk_is_known { if chunk_is_known {
known_chunk_count.fetch_add(1, Ordering::SeqCst);
reused_len.fetch_add(chunk_len, Ordering::SeqCst);
future::ok(MergedChunkInfo::Known(vec![(offset, *digest)])) future::ok(MergedChunkInfo::Known(vec![(offset, *digest)]))
} else { } else {
known_chunks.insert(*digest); known_chunks.insert(*digest);
@ -549,7 +573,7 @@ impl BackupWriter {
let digest = chunk_info.digest; let digest = chunk_info.digest;
let digest_str = digest_to_hex(&digest); let digest_str = digest_to_hex(&digest);
if verbose { if false && verbose { // Too verbose, needs finer verbosity setting granularity
println!("upload new chunk {} ({} bytes, offset {})", digest_str, println!("upload new chunk {} ({} bytes, offset {})", digest_str,
chunk_info.chunk_len, offset); chunk_info.chunk_len, offset);
} }
@ -592,18 +616,21 @@ impl BackupWriter {
upload_result.await?.and(result) upload_result.await?.and(result)
}.boxed()) }.boxed())
.and_then(move |_| { .and_then(move |_| {
let repeat = repeat2.load(Ordering::SeqCst); let duration = start_time.elapsed();
let total_chunks = total_chunks2.load(Ordering::SeqCst);
let known_chunk_count = known_chunk_count2.load(Ordering::SeqCst);
let stream_len = stream_len2.load(Ordering::SeqCst); let stream_len = stream_len2.load(Ordering::SeqCst);
let speed = ((stream_len*1_000_000)/(1024*1024))/(start_time.elapsed().as_micros() as usize); let reused_len = reused_len2.load(Ordering::SeqCst);
let mut guard = index_csum_2.lock().unwrap(); let mut guard = index_csum_2.lock().unwrap();
let csum = guard.take().unwrap().finish(); let csum = guard.take().unwrap().finish();
futures::future::ok((repeat, stream_len, start_time.elapsed(), speed, csum)) futures::future::ok((total_chunks, known_chunk_count, stream_len, reused_len, duration, csum))
}) })
} }
pub async fn upload_speedtest(&self) -> Result<usize, Error> { /// Upload speed test - prints result to stderr
pub async fn upload_speedtest(&self, verbose: bool) -> Result<f64, Error> {
let mut data = vec![]; let mut data = vec![];
// generate pseudo random byte sequence // generate pseudo random byte sequence
@ -618,7 +645,7 @@ impl BackupWriter {
let mut repeat = 0; let mut repeat = 0;
let (upload_queue, upload_result) = Self::response_queue(); let (upload_queue, upload_result) = Self::response_queue(verbose);
let start_time = std::time::Instant::now(); let start_time = std::time::Instant::now();
@ -630,7 +657,7 @@ impl BackupWriter {
let mut upload_queue = upload_queue.clone(); let mut upload_queue = upload_queue.clone();
println!("send test data ({} bytes)", data.len()); if verbose { eprintln!("send test data ({} bytes)", data.len()); }
let request = H2Client::request_builder("localhost", "POST", "speedtest", None, None).unwrap(); let request = H2Client::request_builder("localhost", "POST", "speedtest", None, None).unwrap();
let request_future = self.h2.send_request(request, Some(bytes::Bytes::from(data.clone()))).await?; let request_future = self.h2.send_request(request, Some(bytes::Bytes::from(data.clone()))).await?;
@ -641,9 +668,9 @@ impl BackupWriter {
let _ = upload_result.await?; let _ = upload_result.await?;
println!("Uploaded {} chunks in {} seconds.", repeat, start_time.elapsed().as_secs()); eprintln!("Uploaded {} chunks in {} seconds.", repeat, start_time.elapsed().as_secs());
let speed = ((item_len*1_000_000*(repeat as usize))/(1024*1024))/(start_time.elapsed().as_micros() as usize); let speed = ((item_len*(repeat as usize)) as f64)/start_time.elapsed().as_secs_f64();
println!("Time per request: {} microseconds.", (start_time.elapsed().as_micros())/(repeat as u128)); eprintln!("Time per request: {} microseconds.", (start_time.elapsed().as_micros())/(repeat as u128));
Ok(speed) Ok(speed)
} }
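
The new progress output derives the upload rate from bytes actually uploaded and the elapsed time in microseconds; a standalone sketch of that arithmetic with made-up numbers:

fn bytes_per_second(uploaded: usize, elapsed: std::time::Duration) -> usize {
    // same scaling as above: bytes * 1_000_000 / elapsed microseconds
    (uploaded * 1_000_000) / (elapsed.as_micros() as usize)
}

fn main() {
    let speed = bytes_per_second(100 * 1024 * 1024, std::time::Duration::from_secs(4));
    assert_eq!(speed, 25 * 1024 * 1024); // 100 MiB over 4 s = 25 MiB/s
}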

View File

@ -16,6 +16,7 @@ use percent_encoding::percent_encode;
use xdg::BaseDirectories; use xdg::BaseDirectories;
use proxmox::{ use proxmox::{
api::error::HttpError,
sys::linux::tty, sys::linux::tty,
tools::{ tools::{
fs::{file_get_json, replace_file, CreateOptions}, fs::{file_get_json, replace_file, CreateOptions},
@ -606,7 +607,7 @@ impl HttpClient {
Ok(value) Ok(value)
} }
} else { } else {
bail!("HTTP Error {}: {}", status, text); Err(Error::from(HttpError::new(status, text)))
} }
} }
@ -819,7 +820,7 @@ impl H2Client {
bail!("got result without data property"); bail!("got result without data property");
} }
} else { } else {
bail!("HTTP Error {}: {}", status, text); Err(Error::from(HttpError::new(status, text)))
} }
} }

View File

@ -6,8 +6,8 @@ use std::convert::TryFrom;
use std::sync::Arc; use std::sync::Arc;
use std::collections::HashMap; use std::collections::HashMap;
use std::io::{Seek, SeekFrom}; use std::io::{Seek, SeekFrom};
use chrono::{Utc, TimeZone};
use proxmox::api::error::{StatusCode, HttpError};
use crate::server::{WorkerTask}; use crate::server::{WorkerTask};
use crate::backup::*; use crate::backup::*;
use crate::api2::types::*; use crate::api2::types::*;
@ -152,7 +152,28 @@ async fn pull_snapshot(
let mut tmp_manifest_name = manifest_name.clone(); let mut tmp_manifest_name = manifest_name.clone();
tmp_manifest_name.set_extension("tmp"); tmp_manifest_name.set_extension("tmp");
let mut tmp_manifest_file = download_manifest(&reader, &tmp_manifest_name).await?; let download_res = download_manifest(&reader, &tmp_manifest_name).await;
let mut tmp_manifest_file = match download_res {
Ok(manifest_file) => manifest_file,
Err(err) => {
match err.downcast_ref::<HttpError>() {
Some(HttpError { code, message }) => {
match code {
&StatusCode::NOT_FOUND => {
worker.log(format!("skipping snapshot {} - vanished since start of sync", snapshot));
return Ok(());
},
_ => {
bail!("HTTP error {} - {}", code, message);
},
}
},
None => {
return Err(err);
},
};
},
};
let tmp_manifest_blob = DataBlob::load(&mut tmp_manifest_file)?; let tmp_manifest_blob = DataBlob::load(&mut tmp_manifest_file)?;
tmp_manifest_blob.verify_crc()?; tmp_manifest_blob.verify_crc()?;
@ -302,7 +323,16 @@ pub async fn pull_group(
let mut remote_snapshots = std::collections::HashSet::new(); let mut remote_snapshots = std::collections::HashSet::new();
for item in list { for item in list {
let backup_time = Utc.timestamp(item.backup_time, 0); let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time);
// in-progress backups can't be synced
if let None = item.size {
worker.log(format!("skipping snapshot {} - in-progress backup", snapshot));
continue;
}
let backup_time = snapshot.backup_time();
remote_snapshots.insert(backup_time); remote_snapshots.insert(backup_time);
if let Some(last_sync_time) = last_sync { if let Some(last_sync_time) = last_sync {
@ -319,14 +349,12 @@ pub async fn pull_group(
new_client, new_client,
None, None,
src_repo.store(), src_repo.store(),
&item.backup_type, snapshot.group().backup_type(),
&item.backup_id, snapshot.group().backup_id(),
backup_time, backup_time,
true, true,
).await?; ).await?;
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time);
pull_snapshot_from(worker, reader, tgt_store.clone(), &snapshot).await?; pull_snapshot_from(worker, reader, tgt_store.clone(), &snapshot).await?;
} }
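
The sync code above downcasts the anyhow error to HttpError so a 404 (snapshot vanished) can be skipped instead of failing the whole sync; a self-contained sketch of the same pattern with a stand-in error type (HttpStatus is hypothetical, not the real HttpError):

use anyhow::{anyhow, Error};

#[derive(Debug)]
struct HttpStatus(u16);

impl std::fmt::Display for HttpStatus {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "HTTP {}", self.0)
    }
}

impl std::error::Error for HttpStatus {}

// returns Ok(true) when the error should merely be skipped, otherwise re-raises it
fn should_skip(err: Error) -> Result<bool, Error> {
    match err.downcast_ref::<HttpStatus>() {
        Some(HttpStatus(404)) => Ok(true),
        _ => Err(err),
    }
}

fn main() {
    assert!(should_skip(anyhow!(HttpStatus(404))).unwrap());
    assert!(should_skip(anyhow!("other failure")).is_err());
}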

View File

@ -1,7 +1,7 @@
use std::future::Future; use std::future::Future;
use std::collections::HashMap; use std::collections::HashMap;
use std::pin::Pin; use std::pin::Pin;
use std::sync::Arc; use std::sync::{Arc, Mutex};
use anyhow::Error; use anyhow::Error;
@ -10,11 +10,12 @@ use crate::backup::{AsyncReadChunk, CryptConfig, DataBlob, ReadChunk};
use crate::tools::runtime::block_on; use crate::tools::runtime::block_on;
/// Read chunks from remote host using ``BackupReader`` /// Read chunks from remote host using ``BackupReader``
#[derive(Clone)]
pub struct RemoteChunkReader { pub struct RemoteChunkReader {
client: Arc<BackupReader>, client: Arc<BackupReader>,
crypt_config: Option<Arc<CryptConfig>>, crypt_config: Option<Arc<CryptConfig>>,
cache_hint: HashMap<[u8; 32], usize>, cache_hint: HashMap<[u8; 32], usize>,
cache: HashMap<[u8; 32], Vec<u8>>, cache: Arc<Mutex<HashMap<[u8; 32], Vec<u8>>>>,
} }
impl RemoteChunkReader { impl RemoteChunkReader {
@ -30,11 +31,11 @@ impl RemoteChunkReader {
client, client,
crypt_config, crypt_config,
cache_hint, cache_hint,
cache: HashMap::new(), cache: Arc::new(Mutex::new(HashMap::new())),
} }
} }
pub async fn read_raw_chunk(&mut self, digest: &[u8; 32]) -> Result<DataBlob, Error> { pub async fn read_raw_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
let mut chunk_data = Vec::with_capacity(4 * 1024 * 1024); let mut chunk_data = Vec::with_capacity(4 * 1024 * 1024);
self.client self.client
@ -49,12 +50,12 @@ impl RemoteChunkReader {
} }
impl ReadChunk for RemoteChunkReader { impl ReadChunk for RemoteChunkReader {
fn read_raw_chunk(&mut self, digest: &[u8; 32]) -> Result<DataBlob, Error> { fn read_raw_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
block_on(Self::read_raw_chunk(self, digest)) block_on(Self::read_raw_chunk(self, digest))
} }
fn read_chunk(&mut self, digest: &[u8; 32]) -> Result<Vec<u8>, Error> { fn read_chunk(&self, digest: &[u8; 32]) -> Result<Vec<u8>, Error> {
if let Some(raw_data) = self.cache.get(digest) { if let Some(raw_data) = (*self.cache.lock().unwrap()).get(digest) {
return Ok(raw_data.to_vec()); return Ok(raw_data.to_vec());
} }
@ -66,7 +67,7 @@ impl ReadChunk for RemoteChunkReader {
let use_cache = self.cache_hint.contains_key(digest); let use_cache = self.cache_hint.contains_key(digest);
if use_cache { if use_cache {
self.cache.insert(*digest, raw_data.to_vec()); (*self.cache.lock().unwrap()).insert(*digest, raw_data.to_vec());
} }
Ok(raw_data) Ok(raw_data)
@ -75,18 +76,18 @@ impl ReadChunk for RemoteChunkReader {
impl AsyncReadChunk for RemoteChunkReader { impl AsyncReadChunk for RemoteChunkReader {
fn read_raw_chunk<'a>( fn read_raw_chunk<'a>(
&'a mut self, &'a self,
digest: &'a [u8; 32], digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<DataBlob, Error>> + Send + 'a>> { ) -> Pin<Box<dyn Future<Output = Result<DataBlob, Error>> + Send + 'a>> {
Box::pin(Self::read_raw_chunk(self, digest)) Box::pin(Self::read_raw_chunk(self, digest))
} }
fn read_chunk<'a>( fn read_chunk<'a>(
&'a mut self, &'a self,
digest: &'a [u8; 32], digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<Vec<u8>, Error>> + Send + 'a>> { ) -> Pin<Box<dyn Future<Output = Result<Vec<u8>, Error>> + Send + 'a>> {
Box::pin(async move { Box::pin(async move {
if let Some(raw_data) = self.cache.get(digest) { if let Some(raw_data) = (*self.cache.lock().unwrap()).get(digest) {
return Ok(raw_data.to_vec()); return Ok(raw_data.to_vec());
} }
@ -98,7 +99,7 @@ impl AsyncReadChunk for RemoteChunkReader {
let use_cache = self.cache_hint.contains_key(digest); let use_cache = self.cache_hint.contains_key(digest);
if use_cache { if use_cache {
self.cache.insert(*digest, raw_data.to_vec()); (*self.cache.lock().unwrap()).insert(*digest, raw_data.to_vec());
} }
Ok(raw_data) Ok(raw_data)
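
The change above wraps the chunk cache in Arc<Mutex<..>> so the reader can derive Clone and be used through &self while all clones still share one cache; a reduced sketch of that pattern (types are invented for illustration):

use std::collections::HashMap;
use std::sync::{Arc, Mutex};

#[derive(Clone)]
struct SharedCache {
    cache: Arc<Mutex<HashMap<u32, Vec<u8>>>>,
}

impl SharedCache {
    // &self is enough: the Mutex provides interior mutability
    fn get_or_insert(&self, key: u32, data: Vec<u8>) -> Vec<u8> {
        let mut cache = self.cache.lock().unwrap();
        cache.entry(key).or_insert(data).clone()
    }
}

fn main() {
    let a = SharedCache { cache: Arc::new(Mutex::new(HashMap::new())) };
    let b = a.clone();
    a.get_or_insert(1, vec![1, 2, 3]);
    // the clone sees the same underlying cache entry
    assert_eq!(b.get_or_insert(1, vec![9]), vec![1, 2, 3]);
}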

View File

@ -39,6 +39,8 @@ constnamemap! {
PRIV_REMOTE_MODIFY("Remote.Modify") = 1 << 10; PRIV_REMOTE_MODIFY("Remote.Modify") = 1 << 10;
PRIV_REMOTE_READ("Remote.Read") = 1 << 11; PRIV_REMOTE_READ("Remote.Read") = 1 << 11;
PRIV_REMOTE_PRUNE("Remote.Prune") = 1 << 12; PRIV_REMOTE_PRUNE("Remote.Prune") = 1 << 12;
PRIV_SYS_CONSOLE("Sys.Console") = 1 << 13;
} }
} }

View File

@ -89,7 +89,9 @@ impl CachedUserInfo {
(user_privs & required_privs) == required_privs (user_privs & required_privs) == required_privs
}; };
if !allowed { if !allowed {
bail!("no permissions"); // printing the path doesn't leak any information as long as we
// always check privilege before resource existence
bail!("no permissions on '/{}'", path.join("/"));
} }
Ok(()) Ok(())
} }

View File

@ -23,6 +23,7 @@ use proxmox::tools::fd::RawFdNum;
use proxmox::tools::vec; use proxmox::tools::vec;
use crate::pxar::catalog::BackupCatalogWriter; use crate::pxar::catalog::BackupCatalogWriter;
use crate::pxar::metadata::errno_is_unsupported;
use crate::pxar::Flags; use crate::pxar::Flags;
use crate::pxar::tools::assert_single_path_component; use crate::pxar::tools::assert_single_path_component;
use crate::tools::{acl, fs, xattr, Fd}; use crate::tools::{acl, fs, xattr, Fd};
@ -161,7 +162,7 @@ where
if skip_lost_and_found { if skip_lost_and_found {
patterns.push(MatchEntry::parse_pattern( patterns.push(MatchEntry::parse_pattern(
"**/lost+found", "lost+found",
PatternFlag::PATH_NAME, PatternFlag::PATH_NAME,
MatchType::Exclude, MatchType::Exclude,
)?); )?);
@ -248,11 +249,15 @@ impl<'a, 'b> Archiver<'a, 'b> {
} }
/// openat() wrapper which allows but logs `EACCES` and turns `ENOENT` into `None`. /// openat() wrapper which allows but logs `EACCES` and turns `ENOENT` into `None`.
///
/// The `existed` flag is set when iterating through a directory to note that we know the file
/// is supposed to exist and we should warn if it doesn't.
fn open_file( fn open_file(
&mut self, &mut self,
parent: RawFd, parent: RawFd,
file_name: &CStr, file_name: &CStr,
oflags: OFlag, oflags: OFlag,
existed: bool,
) -> Result<Option<Fd>, Error> { ) -> Result<Option<Fd>, Error> {
match Fd::openat( match Fd::openat(
&unsafe { RawFdNum::from_raw_fd(parent) }, &unsafe { RawFdNum::from_raw_fd(parent) },
@ -261,9 +266,14 @@ impl<'a, 'b> Archiver<'a, 'b> {
Mode::empty(), Mode::empty(),
) { ) {
Ok(fd) => Ok(Some(fd)), Ok(fd) => Ok(Some(fd)),
Err(nix::Error::Sys(Errno::ENOENT)) => Ok(None), Err(nix::Error::Sys(Errno::ENOENT)) => {
if existed {
self.report_vanished_file()?;
}
Ok(None)
}
Err(nix::Error::Sys(Errno::EACCES)) => { Err(nix::Error::Sys(Errno::EACCES)) => {
write!(self.errors, "failed to open file: {:?}: access denied", file_name)?; writeln!(self.errors, "failed to open file: {:?}: access denied", file_name)?;
Ok(None) Ok(None)
} }
Err(other) => Err(Error::from(other)), Err(other) => Err(Error::from(other)),
@ -275,19 +285,22 @@ impl<'a, 'b> Archiver<'a, 'b> {
parent, parent,
c_str!(".pxarexclude"), c_str!(".pxarexclude"),
OFlag::O_RDONLY | OFlag::O_CLOEXEC | OFlag::O_NOCTTY, OFlag::O_RDONLY | OFlag::O_CLOEXEC | OFlag::O_NOCTTY,
false,
)?; )?;
let old_pattern_count = self.patterns.len(); let old_pattern_count = self.patterns.len();
let path_bytes = self.path.as_os_str().as_bytes();
if let Some(fd) = fd { if let Some(fd) = fd {
let file = unsafe { std::fs::File::from_raw_fd(fd.into_raw_fd()) }; let file = unsafe { std::fs::File::from_raw_fd(fd.into_raw_fd()) };
use io::BufRead; use io::BufRead;
for line in io::BufReader::new(file).lines() { for line in io::BufReader::new(file).split(b'\n') {
let line = match line { let line = match line {
Ok(line) => line, Ok(line) => line,
Err(err) => { Err(err) => {
let _ = write!( let _ = writeln!(
self.errors, self.errors,
"ignoring .pxarexclude after read error in {:?}: {}", "ignoring .pxarexclude after read error in {:?}: {}",
self.path, self.path,
@ -298,16 +311,32 @@ impl<'a, 'b> Archiver<'a, 'b> {
} }
}; };
let line = line.trim(); let line = crate::tools::strip_ascii_whitespace(&line);
if line.is_empty() || line.starts_with('#') { if line.is_empty() || line[0] == b'#' {
continue; continue;
} }
match MatchEntry::parse_pattern(line, PatternFlag::PATH_NAME, MatchType::Exclude) { let mut buf;
let (line, mode) = if line[0] == b'/' {
buf = Vec::with_capacity(path_bytes.len() + 1 + line.len());
buf.extend(path_bytes);
buf.extend(line);
(&buf[..], MatchType::Exclude)
} else if line.starts_with(b"!/") {
// inverted case with absolute path
buf = Vec::with_capacity(path_bytes.len() + line.len());
buf.extend(path_bytes);
buf.extend(&line[1..]); // without the '!'
(&buf[..], MatchType::Include)
} else {
(line, MatchType::Exclude)
};
match MatchEntry::parse_pattern(line, PatternFlag::PATH_NAME, mode) {
Ok(pattern) => self.patterns.push(pattern), Ok(pattern) => self.patterns.push(pattern),
Err(err) => { Err(err) => {
let _ = write!(self.errors, "bad pattern in {:?}: {}", self.path, err); let _ = writeln!(self.errors, "bad pattern in {:?}: {}", self.path, err);
} }
} }
} }
@ -410,12 +439,12 @@ impl<'a, 'b> Archiver<'a, 'b> {
} }
fn report_vanished_file(&mut self) -> Result<(), Error> { fn report_vanished_file(&mut self) -> Result<(), Error> {
write!(self.errors, "warning: file vanished while reading: {:?}", self.path)?; writeln!(self.errors, "warning: file vanished while reading: {:?}", self.path)?;
Ok(()) Ok(())
} }
fn report_file_shrunk_while_reading(&mut self) -> Result<(), Error> { fn report_file_shrunk_while_reading(&mut self) -> Result<(), Error> {
write!( writeln!(
self.errors, self.errors,
"warning: file size shrunk while reading: {:?}, file will be padded with zeros!", "warning: file size shrunk while reading: {:?}, file will be padded with zeros!",
self.path, self.path,
@ -424,7 +453,7 @@ impl<'a, 'b> Archiver<'a, 'b> {
} }
fn report_file_grew_while_reading(&mut self) -> Result<(), Error> { fn report_file_grew_while_reading(&mut self) -> Result<(), Error> {
write!( writeln!(
self.errors, self.errors,
"warning: file size increased while reading: {:?}, file will be truncated!", "warning: file size increased while reading: {:?}, file will be truncated!",
self.path, self.path,
@ -442,24 +471,22 @@ impl<'a, 'b> Archiver<'a, 'b> {
use pxar::format::mode; use pxar::format::mode;
let file_mode = stat.st_mode & libc::S_IFMT; let file_mode = stat.st_mode & libc::S_IFMT;
let open_mode = if !(file_mode == libc::S_IFREG || file_mode == libc::S_IFDIR) { let open_mode = if file_mode == libc::S_IFREG || file_mode == libc::S_IFDIR {
OFlag::O_PATH
} else {
OFlag::empty() OFlag::empty()
} else {
OFlag::O_PATH
}; };
let fd = self.open_file( let fd = self.open_file(
parent, parent,
c_file_name, c_file_name,
open_mode | OFlag::O_RDONLY | OFlag::O_NOFOLLOW | OFlag::O_CLOEXEC | OFlag::O_NOCTTY, open_mode | OFlag::O_RDONLY | OFlag::O_NOFOLLOW | OFlag::O_CLOEXEC | OFlag::O_NOCTTY,
true,
)?; )?;
let fd = match fd { let fd = match fd {
Some(fd) => fd, Some(fd) => fd,
None => { None => return Ok(()),
self.report_vanished_file()?;
return Ok(());
}
}; };
let metadata = get_metadata(fd.as_raw_fd(), &stat, self.flags(), self.fs_magic)?; let metadata = get_metadata(fd.as_raw_fd(), &stat, self.flags(), self.fs_magic)?;
@ -690,13 +717,6 @@ fn get_metadata(fd: RawFd, stat: &FileStat, flags: Flags, fs_magic: i64) -> Resu
Ok(meta) Ok(meta)
} }
fn errno_is_unsupported(errno: Errno) -> bool {
match errno {
Errno::ENOTTY | Errno::ENOSYS | Errno::EBADF | Errno::EOPNOTSUPP | Errno::EINVAL => true,
_ => false,
}
}
fn get_fcaps(meta: &mut Metadata, fd: RawFd, flags: Flags) -> Result<(), Error> { fn get_fcaps(meta: &mut Metadata, fd: RawFd, flags: Flags) -> Result<(), Error> {
if flags.contains(Flags::WITH_FCAPS) { if flags.contains(Flags::WITH_FCAPS) {
return Ok(()); return Ok(());
@ -761,7 +781,7 @@ fn get_xattr_fcaps_acl(
} }
fn get_chattr(metadata: &mut Metadata, fd: RawFd) -> Result<(), Error> { fn get_chattr(metadata: &mut Metadata, fd: RawFd) -> Result<(), Error> {
let mut attr: usize = 0; let mut attr: libc::c_long = 0;
match unsafe { fs::read_attr_fd(fd, &mut attr) } { match unsafe { fs::read_attr_fd(fd, &mut attr) } {
Ok(_) => (), Ok(_) => (),
@ -771,7 +791,7 @@ fn get_chattr(metadata: &mut Metadata, fd: RawFd) -> Result<(), Error> {
Err(err) => bail!("failed to read file attributes: {}", err), Err(err) => bail!("failed to read file attributes: {}", err),
} }
metadata.stat.flags |= Flags::from_chattr(attr as u32).bits(); metadata.stat.flags |= Flags::from_chattr(attr).bits();
Ok(()) Ok(())
} }

View File

@ -230,7 +230,8 @@ impl Extractor {
dir.metadata(), dir.metadata(),
fd, fd,
&CString::new(dir.file_name().as_bytes())?, &CString::new(dir.file_name().as_bytes())?,
)?; )
.map_err(|err| format_err!("failed to apply directory metadata: {}", err))?;
} }
Ok(()) Ok(())
@ -241,7 +242,9 @@ impl Extractor {
} }
fn parent_fd(&mut self) -> Result<RawFd, Error> { fn parent_fd(&mut self) -> Result<RawFd, Error> {
self.dir_stack.last_dir_fd(self.allow_existing_dirs) self.dir_stack
.last_dir_fd(self.allow_existing_dirs)
.map_err(|err| format_err!("failed to get parent directory file descriptor: {}", err))
} }
pub fn extract_symlink( pub fn extract_symlink(
@ -320,10 +323,14 @@ impl Extractor {
file_name, file_name,
OFlag::O_CREAT | OFlag::O_EXCL | OFlag::O_WRONLY | OFlag::O_CLOEXEC, OFlag::O_CREAT | OFlag::O_EXCL | OFlag::O_WRONLY | OFlag::O_CLOEXEC,
Mode::from_bits(0o600).unwrap(), Mode::from_bits(0o600).unwrap(),
)?) )
.map_err(|err| format_err!("failed to create file {:?}: {}", file_name, err))?)
}; };
let extracted = io::copy(&mut *contents, &mut file)?; metadata::apply_initial_flags(self.feature_flags, metadata, file.as_raw_fd())?;
let extracted = io::copy(&mut *contents, &mut file)
.map_err(|err| format_err!("failed to copy file contents: {}", err))?;
if size != extracted { if size != extracted {
bail!("extracted {} bytes of a file of {} bytes", extracted, size); bail!("extracted {} bytes of a file of {} bytes", extracted, size);
} }
@ -345,10 +352,15 @@ impl Extractor {
file_name, file_name,
OFlag::O_CREAT | OFlag::O_EXCL | OFlag::O_WRONLY | OFlag::O_CLOEXEC, OFlag::O_CREAT | OFlag::O_EXCL | OFlag::O_WRONLY | OFlag::O_CLOEXEC,
Mode::from_bits(0o600).unwrap(), Mode::from_bits(0o600).unwrap(),
)?) )
.map_err(|err| format_err!("failed to create file {:?}: {}", file_name, err))?)
}); });
let extracted = tokio::io::copy(&mut *contents, &mut file).await?; metadata::apply_initial_flags(self.feature_flags, metadata, file.as_raw_fd())?;
let extracted = tokio::io::copy(&mut *contents, &mut file)
.await
.map_err(|err| format_err!("failed to copy file contents: {}", err))?;
if size != extracted { if size != extracted {
bail!("extracted {} bytes of a file of {} bytes", extracted, size); bail!("extracted {} bytes of a file of {} bytes", extracted, size);
} }

View File

@ -3,6 +3,8 @@
//! Flags for known supported features for a given filesystem can be derived //! Flags for known supported features for a given filesystem can be derived
//! from the superblocks magic number. //! from the superblocks magic number.
use libc::c_long;
use bitflags::bitflags; use bitflags::bitflags;
bitflags! { bitflags! {
@ -149,34 +151,54 @@ impl Default for Flags {
} }
} }
// from /usr/include/linux/fs.h
const FS_APPEND_FL: c_long = 0x0000_0020;
const FS_NOATIME_FL: c_long = 0x0000_0080;
const FS_COMPR_FL: c_long = 0x0000_0004;
const FS_NOCOW_FL: c_long = 0x0080_0000;
const FS_NODUMP_FL: c_long = 0x0000_0040;
const FS_DIRSYNC_FL: c_long = 0x0001_0000;
const FS_IMMUTABLE_FL: c_long = 0x0000_0010;
const FS_SYNC_FL: c_long = 0x0000_0008;
const FS_NOCOMP_FL: c_long = 0x0000_0400;
const FS_PROJINHERIT_FL: c_long = 0x2000_0000;
pub(crate) const INITIAL_FS_FLAGS: c_long =
FS_NOATIME_FL
| FS_COMPR_FL
| FS_NOCOW_FL
| FS_NOCOMP_FL
| FS_PROJINHERIT_FL;
#[rustfmt::skip]
const CHATTR_MAP: [(Flags, c_long); 10] = [
( Flags::WITH_FLAG_APPEND, FS_APPEND_FL ),
( Flags::WITH_FLAG_NOATIME, FS_NOATIME_FL ),
( Flags::WITH_FLAG_COMPR, FS_COMPR_FL ),
( Flags::WITH_FLAG_NOCOW, FS_NOCOW_FL ),
( Flags::WITH_FLAG_NODUMP, FS_NODUMP_FL ),
( Flags::WITH_FLAG_DIRSYNC, FS_DIRSYNC_FL ),
( Flags::WITH_FLAG_IMMUTABLE, FS_IMMUTABLE_FL ),
( Flags::WITH_FLAG_SYNC, FS_SYNC_FL ),
( Flags::WITH_FLAG_NOCOMP, FS_NOCOMP_FL ),
( Flags::WITH_FLAG_PROJINHERIT, FS_PROJINHERIT_FL ),
];
// from /usr/include/linux/msdos_fs.h
const ATTR_HIDDEN: u32 = 2;
const ATTR_SYS: u32 = 4;
const ATTR_ARCH: u32 = 32;
#[rustfmt::skip]
const FAT_ATTR_MAP: [(Flags, u32); 3] = [
( Flags::WITH_FLAG_HIDDEN, ATTR_HIDDEN ),
( Flags::WITH_FLAG_SYSTEM, ATTR_SYS ),
( Flags::WITH_FLAG_ARCHIVE, ATTR_ARCH ),
];
impl Flags { impl Flags {
/// Get a set of feature flags from file attributes. /// Get a set of feature flags from file attributes.
pub fn from_chattr(attr: u32) -> Flags { pub fn from_chattr(attr: c_long) -> Flags {
// form /usr/include/linux/fs.h
const FS_APPEND_FL: u32 = 0x0000_0020;
const FS_NOATIME_FL: u32 = 0x0000_0080;
const FS_COMPR_FL: u32 = 0x0000_0004;
const FS_NOCOW_FL: u32 = 0x0080_0000;
const FS_NODUMP_FL: u32 = 0x0000_0040;
const FS_DIRSYNC_FL: u32 = 0x0001_0000;
const FS_IMMUTABLE_FL: u32 = 0x0000_0010;
const FS_SYNC_FL: u32 = 0x0000_0008;
const FS_NOCOMP_FL: u32 = 0x0000_0400;
const FS_PROJINHERIT_FL: u32 = 0x2000_0000;
const CHATTR_MAP: [(Flags, u32); 10] = [
( Flags::WITH_FLAG_APPEND, FS_APPEND_FL ),
( Flags::WITH_FLAG_NOATIME, FS_NOATIME_FL ),
( Flags::WITH_FLAG_COMPR, FS_COMPR_FL ),
( Flags::WITH_FLAG_NOCOW, FS_NOCOW_FL ),
( Flags::WITH_FLAG_NODUMP, FS_NODUMP_FL ),
( Flags::WITH_FLAG_DIRSYNC, FS_DIRSYNC_FL ),
( Flags::WITH_FLAG_IMMUTABLE, FS_IMMUTABLE_FL ),
( Flags::WITH_FLAG_SYNC, FS_SYNC_FL ),
( Flags::WITH_FLAG_NOCOMP, FS_NOCOMP_FL ),
( Flags::WITH_FLAG_PROJINHERIT, FS_PROJINHERIT_FL ),
];
let mut flags = Flags::empty(); let mut flags = Flags::empty();
for (fe_flag, fs_flag) in &CHATTR_MAP { for (fe_flag, fs_flag) in &CHATTR_MAP {
@ -188,19 +210,25 @@ impl Flags {
flags flags
} }
/// Get the chattr bit representation of these feature flags.
pub fn to_chattr(self) -> c_long {
let mut flags: c_long = 0;
for (fe_flag, fs_flag) in &CHATTR_MAP {
if self.contains(*fe_flag) {
flags |= *fs_flag;
}
}
flags
}
pub fn to_initial_chattr(self) -> c_long {
self.to_chattr() & INITIAL_FS_FLAGS
}
/// Get a set of feature flags from FAT attributes. /// Get a set of feature flags from FAT attributes.
pub fn from_fat_attr(attr: u32) -> Flags { pub fn from_fat_attr(attr: u32) -> Flags {
// from /usr/include/linux/msdos_fs.h
const ATTR_HIDDEN: u32 = 2;
const ATTR_SYS: u32 = 4;
const ATTR_ARCH: u32 = 32;
const FAT_ATTR_MAP: [(Flags, u32); 3] = [
( Flags::WITH_FLAG_HIDDEN, ATTR_HIDDEN ),
( Flags::WITH_FLAG_SYSTEM, ATTR_SYS ),
( Flags::WITH_FLAG_ARCHIVE, ATTR_ARCH ),
];
let mut flags = Flags::empty(); let mut flags = Flags::empty();
for (fe_flag, fs_flag) in &FAT_ATTR_MAP { for (fe_flag, fs_flag) in &FAT_ATTR_MAP {
@ -212,6 +240,19 @@ impl Flags {
flags flags
} }
/// Get the fat attribute bit representation of these feature flags.
pub fn to_fat_attr(self) -> u32 {
let mut flags = 0u32;
for (fe_flag, fs_flag) in &FAT_ATTR_MAP {
if self.contains(*fe_flag) {
flags |= *fs_flag;
}
}
flags
}
/// Return the supported *pxar* feature flags based on the magic number of the filesystem.
pub fn from_magic(magic: i64) -> Flags {
use proxmox::sys::linux::magic::*;


@ -79,13 +79,19 @@ pub fn apply_at(
apply(flags, metadata, fd.as_raw_fd(), file_name)
}
pub fn apply_initial_flags(
flags: Flags,
metadata: &Metadata,
fd: RawFd,
) -> Result<(), Error> {
let entry_flags = Flags::from_bits_truncate(metadata.stat.flags);
apply_chattr(fd, entry_flags.to_initial_chattr(), flags.to_initial_chattr())?;
Ok(())
}
pub fn apply(flags: Flags, metadata: &Metadata, fd: RawFd, file_name: &CStr) -> Result<(), Error> {
let c_proc_path = CString::new(format!("/proc/self/fd/{}", fd)).unwrap();
if metadata.stat.flags != 0 {
todo!("apply flags!");
}
unsafe {
// UID and GID first, as this fails if we lose access anyway.
c_result!(libc::chown(
@ -94,13 +100,15 @@ pub fn apply(flags: Flags, metadata: &Metadata, fd: RawFd, file_name: &CStr) ->
metadata.stat.gid
))
.map(drop)
.or_else(allow_notsupp)?;
.or_else(allow_notsupp)
.map_err(|err| format_err!("failed to set ownership: {}", err))?;
}
let mut skip_xattrs = false;
apply_xattrs(flags, c_proc_path.as_ptr(), metadata, &mut skip_xattrs)?;
add_fcaps(flags, c_proc_path.as_ptr(), metadata, &mut skip_xattrs)?;
apply_acls(flags, &c_proc_path, metadata)?;
apply_acls(flags, &c_proc_path, metadata)
.map_err(|err| format_err!("failed to apply acls: {}", err))?;
apply_quota_project_id(flags, fd, metadata)?;
// Finally mode and time. We may lose access with mode, but the changing the mode also
@ -110,7 +118,12 @@ pub fn apply(flags: Flags, metadata: &Metadata, fd: RawFd, file_name: &CStr) ->
libc::chmod(c_proc_path.as_ptr(), perms_from_metadata(metadata)?.bits())
})
.map(drop)
.or_else(allow_notsupp)?;
.or_else(allow_notsupp)
.map_err(|err| format_err!("failed to change file mode: {}", err))?;
}
if metadata.stat.flags != 0 {
apply_flags(flags, fd, metadata.stat.flags)?;
}
let res = c_result!(unsafe {
@ -160,7 +173,8 @@ fn add_fcaps(
)
})
.map(drop)
.or_else(|err| allow_notsupp_remember(err, skip_xattrs))?;
.or_else(|err| allow_notsupp_remember(err, skip_xattrs))
.map_err(|err| format_err!("failed to apply file capabilities: {}", err))?;
Ok(())
}
@ -195,7 +209,8 @@ fn apply_xattrs(
)
})
.map(drop)
.or_else(|err| allow_notsupp_remember(err, &mut *skip_xattrs))?;
.or_else(|err| allow_notsupp_remember(err, &mut *skip_xattrs))
.map_err(|err| format_err!("failed to apply extended attributes: {}", err))?;
}
Ok(())
@ -317,3 +332,49 @@ fn apply_quota_project_id(flags: Flags, fd: RawFd, metadata: &Metadata) -> Resul
Ok(())
}
pub(crate) fn errno_is_unsupported(errno: Errno) -> bool {
match errno {
Errno::ENOTTY | Errno::ENOSYS | Errno::EBADF | Errno::EOPNOTSUPP | Errno::EINVAL => true,
_ => false,
}
}
fn apply_chattr(fd: RawFd, chattr: libc::c_long, mask: libc::c_long) -> Result<(), Error> {
if chattr == 0 {
return Ok(());
}
let mut fattr: libc::c_long = 0;
match unsafe { fs::read_attr_fd(fd, &mut fattr) } {
Ok(_) => (),
Err(nix::Error::Sys(errno)) if errno_is_unsupported(errno) => {
return Ok(());
}
Err(err) => bail!("failed to read file attributes: {}", err),
}
let attr = (chattr & mask) | (fattr & !mask);
match unsafe { fs::write_attr_fd(fd, &attr) } {
Ok(_) => Ok(()),
Err(nix::Error::Sys(errno)) if errno_is_unsupported(errno) => Ok(()),
Err(err) => bail!("failed to set file attributes: {}", err),
}
}
fn apply_flags(flags: Flags, fd: RawFd, entry_flags: u64) -> Result<(), Error> {
let entry_flags = Flags::from_bits_truncate(entry_flags);
apply_chattr(fd, entry_flags.to_chattr(), flags.to_chattr())?;
let fatattr = (flags & entry_flags).to_fat_attr();
if fatattr != 0 {
match unsafe { fs::write_fat_attr_fd(fd, &fatattr) } {
Ok(_) => (),
Err(nix::Error::Sys(errno)) if errno_is_unsupported(errno) => (),
Err(err) => bail!("failed to set file attributes: {}", err),
}
}
Ok(())
}
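The masking step in apply_chattr above is a plain read-modify-write on the attribute bits; a small worked example with made-up flag values (not taken from any particular archive) shows how the requested flags and the allowed mask combine:

// Illustrative values only: FS_IMMUTABLE_FL is already set on the file, the
// archive entry asks for FS_NOATIME_FL, and the feature mask only permits
// FS_NOATIME_FL | FS_COMPR_FL.
let fattr: libc::c_long = 0x0000_0010;              // flags currently on the file
let chattr: libc::c_long = 0x0000_0080;             // flags requested by the archive entry
let mask: libc::c_long = 0x0000_0080 | 0x0000_0004; // flags we are allowed to touch
let attr = (chattr & mask) | (fattr & !mask);       // = 0x90: NOATIME added, IMMUTABLE kept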


@ -1,9 +1,13 @@
use std::collections::HashMap;
use std::path::{PathBuf};
use std::path::PathBuf;
use anyhow::Error;
use std::time::SystemTime;
use std::fs::metadata;
use std::sync::RwLock;
use anyhow::{bail, Error, format_err};
use hyper::Method;
use handlebars::Handlebars;
use serde::Serialize;
use proxmox::api::{ApiMethod, Router, RpcEnvironmentType};
@ -12,21 +16,20 @@ pub struct ApiConfig {
router: &'static Router,
aliases: HashMap<String, PathBuf>,
env_type: RpcEnvironmentType,
pub templates: Handlebars<'static>,
templates: RwLock<Handlebars<'static>>,
template_files: RwLock<HashMap<String, (SystemTime, PathBuf)>>,
}
impl ApiConfig {
pub fn new<B: Into<PathBuf>>(basedir: B, router: &'static Router, env_type: RpcEnvironmentType) -> Result<Self, Error> {
let mut templates = Handlebars::new();
let basedir = basedir.into();
templates.register_template_file("index", basedir.join("index.hbs"))?;
Ok(Self {
basedir,
basedir: basedir.into(),
router,
aliases: HashMap::new(),
env_type,
templates
templates: RwLock::new(Handlebars::new()),
template_files: RwLock::new(HashMap::new()),
})
}
@ -67,4 +70,52 @@ impl ApiConfig {
pub fn env_type(&self) -> RpcEnvironmentType {
self.env_type
}
pub fn register_template<P>(&self, name: &str, path: P) -> Result<(), Error>
where
P: Into<PathBuf>
{
if self.template_files.read().unwrap().contains_key(name) {
bail!("template already registered");
}
let path: PathBuf = path.into();
let metadata = metadata(&path)?;
let mtime = metadata.modified()?;
self.templates.write().unwrap().register_template_file(name, &path)?;
self.template_files.write().unwrap().insert(name.to_string(), (mtime, path));
Ok(())
}
/// Checks if the template was modified since the last rendering
/// if yes, it loads the new version of the template
pub fn render_template<T>(&self, name: &str, data: &T) -> Result<String, Error>
where
T: Serialize,
{
let path;
let mtime;
{
let template_files = self.template_files.read().unwrap();
let (old_mtime, old_path) = template_files.get(name).ok_or_else(|| format_err!("template not found"))?;
mtime = metadata(old_path)?.modified()?;
if mtime <= *old_mtime {
return self.templates.read().unwrap().render(name, data).map_err(|err| format_err!("{}", err));
}
path = old_path.to_path_buf();
}
{
let mut template_files = self.template_files.write().unwrap();
let mut templates = self.templates.write().unwrap();
templates.register_template_file(name, &path)?;
template_files.insert(name.to_string(), (mtime, path));
templates.render(name, data).map_err(|err| format_err!("{}", err))
}
}
}
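A minimal usage sketch of the two new methods; the template name, file path and data payload here are illustrative, not taken from the actual proxy setup:

// during service setup
config.register_template("console", "/usr/share/pbs/console.hbs")?;
// per request: the file is re-registered only if its mtime changed since the last render
let page = config.render_template("console", &serde_json::json!({ "NodeName": "pbs" }))?;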


@ -55,7 +55,7 @@ impl <E: RpcEnvironment + Clone> H2Service<E> {
match self.router.find_method(&components, method, &mut uri_param) {
None => {
let err = http_err!(NOT_FOUND, "Path not found.".to_string());
let err = http_err!(NOT_FOUND, format!("Path '{}' not found.", path).to_string());
future::ok((formatter.format_error)(err)).boxed()
}
Some(api_method) => {


@ -16,7 +16,6 @@ use serde_json::{json, Value};
use tokio::fs::File;
use tokio::time::Instant;
use url::form_urlencoded;
use handlebars::Handlebars;
use proxmox::http_err;
use proxmox::api::{ApiHandler, ApiMethod, HttpError};
@ -312,7 +311,7 @@ pub async fn handle_api_request<Env: RpcEnvironment, S: 'static + BuildHasher +
Ok(resp)
}
fn get_index(username: Option<String>, token: Option<String>, template: &Handlebars, parts: Parts) -> Response<Body> {
fn get_index(username: Option<String>, token: Option<String>, api: &Arc<ApiConfig>, parts: Parts) -> Response<Body> {
let nodename = proxmox::tools::nodename();
let username = username.unwrap_or_else(|| String::from(""));
@ -320,11 +319,14 @@ fn get_index(username: Option<String>, token: Option<String>, template: &Handleb
let token = token.unwrap_or_else(|| String::from(""));
let mut debug = false;
let mut template_file = "index";
if let Some(query_str) = parts.uri.query() {
for (k, v) in form_urlencoded::parse(query_str.as_bytes()).into_owned() {
if k == "debug" && v != "0" && v != "false" {
debug = true;
} else if k == "console" {
template_file = "console";
}
}
}
@ -338,12 +340,12 @@ fn get_index(username: Option<String>, token: Option<String>, template: &Handleb
let mut ct = "text/html";
let index = match template.render("index", &data) {
let index = match api.render_template(template_file, &data) {
Ok(index) => index,
Err(err) => {
ct = "text/plain";
format!("Error rendering template: {}", err.desc)
format!("Error rendering template: {}", err)
},
}
};
Response::builder()
@ -497,8 +499,8 @@ pub async fn handle_request(api: Arc<ApiConfig>, req: Request<Body>) -> Result<R
let comp_len = components.len();
println!("REQUEST {} {}", method, path);
//println!("REQUEST {} {}", method, path);
println!("COMPO {:?}", components);
//println!("COMPO {:?}", components);
let env_type = api.env_type();
let mut rpcenv = RestEnvironment::new(env_type);
@ -542,7 +544,7 @@ pub async fn handle_request(api: Arc<ApiConfig>, req: Request<Body>) -> Result<R
match api.find_method(&components[2..], method, &mut uri_param) {
None => {
let err = http_err!(NOT_FOUND, "Path not found.".to_string());
let err = http_err!(NOT_FOUND, format!("Path '{}' not found.", path).to_string());
return Ok((formatter.format_error)(err));
}
Some(api_method) => {
@ -580,15 +582,15 @@ pub async fn handle_request(api: Arc<ApiConfig>, req: Request<Body>) -> Result<R
match check_auth(&method, &ticket, &token, &user_info) {
Ok(username) => {
let new_token = assemble_csrf_prevention_token(csrf_secret(), &username);
return Ok(get_index(Some(username), Some(new_token), &api.templates, parts));
return Ok(get_index(Some(username), Some(new_token), &api, parts));
}
_ => {
tokio::time::delay_until(Instant::from_std(delay_unauth_time)).await;
return Ok(get_index(None, None, &api.templates, parts));
return Ok(get_index(None, None, &api, parts));
}
}
} else {
return Ok(get_index(None, None, &api.templates, parts));
return Ok(get_index(None, None, &api, parts));
}
} else {
let filename = api.find_alias(&components);
@ -596,5 +598,5 @@ pub async fn handle_request(api: Arc<ApiConfig>, req: Request<Body>) -> Result<R
}
}
Err(http_err!(NOT_FOUND, "Path not found.".to_string()))
Err(http_err!(NOT_FOUND, format!("Path '{}' not found.", path).to_string()))
}


@ -19,6 +19,7 @@ pub struct ServerState {
pub shutdown_listeners: BroadcastData<()>,
pub last_worker_listeners: BroadcastData<()>,
pub worker_count: usize,
pub task_count: usize,
pub reload_request: bool,
}
@ -28,6 +29,7 @@ lazy_static! {
shutdown_listeners: BroadcastData::new(),
last_worker_listeners: BroadcastData::new(),
worker_count: 0,
task_count: 0,
reload_request: false,
});
}
@ -101,20 +103,40 @@ pub fn last_worker_future() -> impl Future<Output = Result<(), Error>> {
}
pub fn set_worker_count(count: usize) {
let mut data = SERVER_STATE.lock().unwrap();
SERVER_STATE.lock().unwrap().worker_count = count;
data.worker_count = count;
if !(data.mode == ServerMode::Shutdown && data.worker_count == 0) { return; }
check_last_worker();
data.last_worker_listeners.notify_listeners(Ok(()));
}
pub fn check_last_worker() {
let mut data = SERVER_STATE.lock().unwrap();
if !(data.mode == ServerMode::Shutdown && data.worker_count == 0) { return; }
if !(data.mode == ServerMode::Shutdown && data.worker_count == 0 && data.task_count == 0) { return; }
data.last_worker_listeners.notify_listeners(Ok(()));
}
/// Spawns a tokio task that will be tracked for reload
/// and if it is finished, notify the last_worker_listener if we
/// are in shutdown mode
pub fn spawn_internal_task<T>(task: T)
where
T: Future + Send + 'static,
T::Output: Send + 'static,
{
let mut data = SERVER_STATE.lock().unwrap();
data.task_count += 1;
tokio::spawn(async move {
let _ = tokio::spawn(task).await; // ignore errors
{ // drop mutex
let mut data = SERVER_STATE.lock().unwrap();
if data.task_count > 0 {
data.task_count -= 1;
}
}
check_last_worker();
});
}
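A sketch of how such an internal task might be spawned; the async body is a placeholder, any future that is `Send + 'static` works:

spawn_internal_task(async move {
    // e.g. shovel data between a websocket and a local TCP stream; the task is
    // counted in task_count and, during shutdown, the last_worker_listeners are
    // only notified once it has finished.
});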


@ -270,28 +270,22 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
let line = line?;
match parse_worker_status_line(&line) {
Err(err) => bail!("unable to parse active worker status '{}' - {}", line, err),
Ok((upid_str, upid, state)) => {
let running = worker_is_active_local(&upid);
if running {
active_list.push(TaskListInfo { upid, upid_str, state: None });
} else {
match state {
None => {
println!("Detected stopped UPID {}", upid_str);
let status = upid_read_status(&upid)
.unwrap_or_else(|_| String::from("unknown"));
finish_list.push(TaskListInfo {
upid, upid_str, state: Some((Local::now().timestamp(), status))
});
}
Some((endtime, status)) => {
finish_list.push(TaskListInfo {
upid, upid_str, state: Some((endtime, status))
})
}
}
}
}
Ok((upid_str, upid, state)) => match state {
None if worker_is_active_local(&upid) => {
active_list.push(TaskListInfo { upid, upid_str, state: None });
},
None => {
println!("Detected stopped UPID {}", upid_str);
let status = upid_read_status(&upid)
.unwrap_or_else(|_| String::from("unknown"));
finish_list.push(TaskListInfo {
upid, upid_str, state: Some((Local::now().timestamp(), status))
});
},
Some((endtime, status)) => {
finish_list.push(TaskListInfo {
upid, upid_str, state: Some((endtime, status))
})
},
}
}


@ -23,6 +23,7 @@ pub use proxmox::tools::fd::Fd;
pub mod acl;
pub mod async_io;
pub mod borrow;
pub mod cert;
pub mod daemon;
pub mod disks;
pub mod fs;
@ -646,3 +647,14 @@ pub fn setup_safe_path_env() {
std::env::remove_var(name);
}
}
pub fn strip_ascii_whitespace(line: &[u8]) -> &[u8] {
let line = match line.iter().position(|&b| !b.is_ascii_whitespace()) {
Some(n) => &line[n..],
None => return &[],
};
match line.iter().rev().position(|&b| !b.is_ascii_whitespace()) {
Some(n) => &line[..(line.len() - n)],
None => &[],
}
}
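For illustration, what the helper returns for a couple of inputs (a sketch, not part of the patch):

assert_eq!(strip_ascii_whitespace(b"  some-value \n"), &b"some-value"[..]);
assert_eq!(strip_ascii_whitespace(b" \t\r\n"), &b""[..]);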

src/tools/cert.rs (new file, 67 lines)

@ -0,0 +1,67 @@
use std::path::PathBuf;
use anyhow::Error;
use openssl::x509::{X509, GeneralName};
use openssl::stack::Stack;
use openssl::pkey::{Public, PKey};
use crate::configdir;
pub struct CertInfo {
x509: X509,
}
fn x509name_to_string(name: &openssl::x509::X509NameRef) -> Result<String, Error> {
let mut parts = Vec::new();
for entry in name.entries() {
parts.push(format!("{} = {}", entry.object().nid().short_name()?, entry.data().as_utf8()?));
}
Ok(parts.join(", "))
}
impl CertInfo {
pub fn new() -> Result<Self, Error> {
Self::from_path(PathBuf::from(configdir!("/proxy.pem")))
}
pub fn from_path(path: PathBuf) -> Result<Self, Error> {
let cert_pem = proxmox::tools::fs::file_get_contents(&path)?;
let x509 = openssl::x509::X509::from_pem(&cert_pem)?;
Ok(Self{
x509
})
}
pub fn subject_alt_names(&self) -> Option<Stack<GeneralName>> {
self.x509.subject_alt_names()
}
pub fn subject_name(&self) -> Result<String, Error> {
Ok(x509name_to_string(self.x509.subject_name())?)
}
pub fn issuer_name(&self) -> Result<String, Error> {
Ok(x509name_to_string(self.x509.issuer_name())?)
}
pub fn fingerprint(&self) -> Result<String, Error> {
let fp = self.x509.digest(openssl::hash::MessageDigest::sha256())?;
let fp_string = proxmox::tools::digest_to_hex(&fp);
let fp_string = fp_string.as_bytes().chunks(2).map(|v| std::str::from_utf8(v).unwrap())
.collect::<Vec<&str>>().join(":");
Ok(fp_string)
}
pub fn public_key(&self) -> Result<PKey<Public>, Error> {
let pubkey = self.x509.public_key()?;
Ok(pubkey)
}
pub fn not_before(&self) -> &openssl::asn1::Asn1TimeRef {
self.x509.not_before()
}
pub fn not_after(&self) -> &openssl::asn1::Asn1TimeRef {
self.x509.not_after()
}
}
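A short usage sketch for the new helper; the wrapping function and output formatting are illustrative:

fn print_proxy_cert() -> Result<(), Error> {
    let cert = CertInfo::new()?; // loads <configdir>/proxy.pem
    println!("subject:     {}", cert.subject_name()?);
    println!("issuer:      {}", cert.issuer_name()?);
    println!("fingerprint: {}", cert.fingerprint()?);
    println!("not after:   {}", cert.not_after());
    Ok(())
}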


@ -743,7 +743,10 @@ pub fn get_disks(
let partition_type_map = get_partition_type_info()?;
let zfs_devices = zfs_devices(&partition_type_map, None)?;
let zfs_devices = zfs_devices(&partition_type_map, None).or_else(|err| -> Result<HashSet<u64>, Error> {
eprintln!("error getting zfs devices: {}", err);
Ok(HashSet::new())
})?;
let lvm_devices = get_lvm_devices(&partition_type_map)?;


@ -64,7 +64,7 @@ fn parse_zpool_list_header(i: &str) -> IResult<&str, ZFSPoolInfo> {
let (i, (text, size, alloc, free, _, _,
frag, _, dedup, health,
_altroot, _eol)) = tuple((
take_while1(|c| char::is_alphanumeric(c)), // name
take_while1(|c| char::is_alphanumeric(c) || c == '-' || c == ':' || c == '_' || c == '.'), // name
preceded(multispace1, parse_optional_u64), // size
preceded(multispace1, parse_optional_u64), // allocated
preceded(multispace1, parse_optional_u64), // free
@ -221,7 +221,7 @@ logs
assert_eq!(data, expect);
let output = "\
btest 427349245952 761856 427348484096 - - 0 0 1.00 ONLINE -
b-test 427349245952 761856 427348484096 - - 0 0 1.00 ONLINE -
mirror 213674622976 438272 213674184704 - - 0 0 - ONLINE
/dev/sda1 - - - - - - - - ONLINE
/dev/sda2 - - - - - - - - ONLINE
@ -235,7 +235,7 @@ logs - - - - - - - - -
let data = parse_zpool_list(&output)?;
let expect = vec![
ZFSPoolInfo {
name: String::from("btest"),
name: String::from("b-test"),
health: String::from("ONLINE"),
usage: Some(ZFSPoolUsage {
size: 427349245952,
@ -261,5 +261,31 @@ logs - - - - - - - - -
assert_eq!(data, expect);
let output = "\
b.test 427349245952 761856 427348484096 - - 0 0 1.00 ONLINE -
mirror 213674622976 438272 213674184704 - - 0 0 - ONLINE
/dev/sda1 - - - - - - - - ONLINE
";
let data = parse_zpool_list(&output)?;
let expect = vec![
ZFSPoolInfo {
name: String::from("b.test"),
health: String::from("ONLINE"),
usage: Some(ZFSPoolUsage {
size: 427349245952,
alloc: 761856,
free: 427348484096,
dedup: 1.0,
frag: 0,
}),
devices: vec![
String::from("/dev/sda1"),
]
},
];
assert_eq!(data, expect);
Ok(())
}


@ -430,3 +430,38 @@ errors: No known data errors
Ok(())
}
#[test]
fn test_zpool_status_parser3() -> Result<(), Error> {
let output = r###" pool: bt-est
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
bt-est ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
/dev/sda1 ONLINE 0 0 0
/dev/sda2 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
/dev/sda3 ONLINE 0 0 0
/dev/sda4 ONLINE 0 0 0
logs
/dev/sda5 ONLINE 0 0 0
errors: No known data errors
"###;
let key_value_list = parse_zpool_status(&output)?;
for (k, v) in key_value_list {
println!("{} => {}", k,v);
if k == "config" {
let vdev_list = parse_zpool_status_config_tree(&v)?;
let _tree = vdev_list_to_tree(&vdev_list);
//println!("TREE1 {}", serde_json::to_string_pretty(&tree)?);
}
}
Ok(())
}


@ -46,3 +46,49 @@ pub fn render_bool_with_default_true(value: &Value, _record: &Value) -> Result<S
let value = value.as_bool().unwrap_or(true);
Ok((if value { "1" } else { "0" }).to_string())
}
pub struct HumanByte {
b: usize,
}
impl std::fmt::Display for HumanByte {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
if self.b < 1024 {
return write!(f, "{} B", self.b);
}
let kb: f64 = self.b as f64 / 1024.0;
if kb < 1024.0 {
return write!(f, "{:.2} KiB", kb);
}
let mb: f64 = kb / 1024.0;
if mb < 1024.0 {
return write!(f, "{:.2} MiB", mb);
}
let gb: f64 = mb / 1024.0;
if gb < 1024.0 {
return write!(f, "{:.2} GiB", gb);
}
let tb: f64 = gb / 1024.0;
if tb < 1024.0 {
return write!(f, "{:.2} TiB", tb);
}
let pb: f64 = tb / 1024.0;
return write!(f, "{:.2} PiB", pb);
}
}
impl From<usize> for HumanByte {
fn from(v: usize) -> Self {
HumanByte { b: v }
}
}
#[test]
fn correct_byte_convert() {
fn convert(b: usize) -> String {
HumanByte::from(b).to_string()
}
assert_eq!(convert(1023), "1023 B");
assert_eq!(convert(1<<10), "1.00 KiB");
assert_eq!(convert(1<<20), "1.00 MiB");
assert_eq!(convert((1<<30) + (103 * 1<<20)), "1.10 GiB");
assert_eq!(convert((2<<50) + (500 * 1<<40)), "2.49 PiB");
}


@ -222,11 +222,13 @@ where
// /usr/include/linux/fs.h: #define FS_IOC_GETFLAGS _IOR('f', 1, long)
// read Linux file system attributes (see man chattr)
nix::ioctl_read!(read_attr_fd, b'f', 1, usize);
nix::ioctl_read!(read_attr_fd, b'f', 1, libc::c_long);
nix::ioctl_write_ptr!(write_attr_fd, b'f', 2, libc::c_long);
// /usr/include/linux/msdos_fs.h: #define FAT_IOCTL_GET_ATTRIBUTES _IOR('r', 0x10, __u32)
// read FAT file system attributes
nix::ioctl_read!(read_fat_attr_fd, b'r', 0x10, u32);
nix::ioctl_write_ptr!(write_fat_attr_fd, b'r', 0x11, u32);
// From /usr/include/linux/fs.h
// #define FS_IOC_FSGETXATTR _IOR('X', 31, struct fsxattr)


@ -56,30 +56,41 @@ extern {
///
/// This makes sure that tokio's worker threads are marked for us so that we know whether we
/// can/need to use `block_in_place` in our `block_on` helper.
pub fn get_runtime() -> Arc<Runtime> {
let mut guard = RUNTIME.lock().unwrap();
if let Some(rt) = guard.upgrade() { return rt; }
let rt = Arc::new(
runtime::Builder::new()
.on_thread_stop(|| {
// avoid openssl bug: https://github.com/openssl/openssl/issues/6214
// call OPENSSL_thread_stop to avoid race with openssl cleanup handlers
unsafe { OPENSSL_thread_stop(); }
})
.threaded_scheduler()
.enable_all()
.build()
.expect("failed to spawn tokio runtime")
);
*guard = Arc::downgrade(&rt.clone());
rt
}
pub fn get_runtime_with_builder<F: Fn() -> runtime::Builder>(get_builder: F) -> Arc<Runtime> {
let mut guard = RUNTIME.lock().unwrap();
if let Some(rt) = guard.upgrade() { return rt; }
let mut builder = get_builder();
builder.on_thread_stop(|| {
// avoid openssl bug: https://github.com/openssl/openssl/issues/6214
// call OPENSSL_thread_stop to avoid race with openssl cleanup handlers
unsafe { OPENSSL_thread_stop(); }
});
let runtime = builder.build().expect("failed to spawn tokio runtime");
let rt = Arc::new(runtime);
*guard = Arc::downgrade(&rt.clone());
rt
}
/// Get or create the current main tokio runtime.
///
/// This calls get_runtime_with_builder() using the tokio default threaded scheduler
pub fn get_runtime() -> Arc<Runtime> {
get_runtime_with_builder(|| {
let mut builder = runtime::Builder::new();
builder.threaded_scheduler();
builder.enable_all();
builder
})
}
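A sketch of using the new entry point with a non-default scheduler, assuming the same tokio 0.2 builder API used above:

let _rt = get_runtime_with_builder(|| {
    let mut builder = runtime::Builder::new();
    builder.basic_scheduler(); // single-threaded scheduler instead of threaded_scheduler()
    builder.enable_all();
    builder
});
// the runtime is shared: a later call returns the same Arc as long as it is still alive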
/// Block on a synchronous piece of code.
pub fn block_in_place<R>(fut: impl FnOnce() -> R) -> R {
// don't double-exit the context (tokio doesn't like that)


@ -219,7 +219,16 @@ fn parse_calendar_event_incomplete(mut i: &str) -> IResult<&str, CalendarEvent>
..Default::default()
}));
}
"monthly" | "weekly" | "yearly" | "quarterly" | "semiannually" => {
"weekly" => {
return Ok(("", CalendarEvent {
hour: vec![DateTimeValue::Single(0)],
minute: vec![DateTimeValue::Single(0)],
second: vec![DateTimeValue::Single(0)],
days: WeekDays::MONDAY,
..Default::default()
}));
}
"monthly" | "yearly" | "quarterly" | "semiannually" => {
return Err(parse_error(i, "unimplemented date or time specification"));
}
_ => { /* continue */ }


@ -88,12 +88,27 @@ impl DateTimeValue {
}
}
/// Calendar events may be used to refer to one or more points in time in a
/// single expression. They are designed after the systemd.time Calendar Events
/// specification, but are not guaranteed to be 100% compatible.
#[derive(Default, Debug)]
pub struct CalendarEvent {
/// the days in a week this event should trigger
pub days: WeekDays,
/// the second(s) this event should trigger
pub second: Vec<DateTimeValue>, // todo: support float values
/// the minute(s) this event should trigger
pub minute: Vec<DateTimeValue>,
/// the hour(s) this event should trigger
pub hour: Vec<DateTimeValue>,
/* FIXME: TODO
/// the day(s) in a month this event should trigger
pub day: Vec<DateTimeValue>,
/// the month(s) in a year this event should trigger
pub month: Vec<DateTimeValue>,
/// the years(s) this event should trigger
pub year: Vec<DateTimeValue>,
*/
}
#[derive(Default)]
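Written out by hand, the value the "weekly" shorthand handled by the parser earlier expands to looks like this (a sketch; the parser constructs it internally):

let weekly = CalendarEvent {
    days: WeekDays::MONDAY,
    hour: vec![DateTimeValue::Single(0)],
    minute: vec![DateTimeValue::Single(0)],
    second: vec![DateTimeValue::Single(0)],
    ..Default::default()
};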


@ -11,6 +11,38 @@ use crate::tools::epoch_now_u64;
pub const TICKET_LIFETIME: i64 = 3600*2; // 2 hours
const TERM_PREFIX: &str = "PBSTERM";
pub fn assemble_term_ticket(
keypair: &PKey<Private>,
username: &str,
path: &str,
port: u16,
) -> Result<String, Error> {
assemble_rsa_ticket(
keypair,
TERM_PREFIX,
None,
Some(&format!("{}{}{}", username, path, port)),
)
}
pub fn verify_term_ticket(
keypair: &PKey<Public>,
username: &str,
path: &str,
port: u16,
ticket: &str,
) -> Result<(i64, Option<String>), Error> {
verify_rsa_ticket(
keypair,
TERM_PREFIX,
ticket,
Some(&format!("{}{}{}", username, path, port)),
-300,
TICKET_LIFETIME,
)
}
pub fn assemble_rsa_ticket(
keypair: &PKey<Private>,


@ -82,7 +82,7 @@ pub fn flistxattr(fd: RawFd) -> Result<ListXAttr, nix::errno::Errno> {
let mut size = 256;
let mut buffer = vec::undefined(size);
let mut bytes = unsafe {
libc::flistxattr(fd, buffer.as_mut_ptr() as *mut i8, buffer.len())
libc::flistxattr(fd, buffer.as_mut_ptr() as *mut libc::c_char, buffer.len())
};
while bytes < 0 {
let err = Errno::last();
@ -96,7 +96,7 @@ pub fn flistxattr(fd: RawFd) -> Result<ListXAttr, nix::errno::Errno> {
// Retry to read the list with new buffer
buffer.resize(size, 0);
bytes = unsafe {
libc::flistxattr(fd, buffer.as_mut_ptr() as *mut i8, buffer.len())
libc::flistxattr(fd, buffer.as_mut_ptr() as *mut libc::c_char, buffer.len())
};
}
buffer.truncate(bytes as usize);
@ -125,7 +125,7 @@ pub fn fgetxattr(fd: RawFd, name: &CStr) -> Result<Vec<u8>, nix::errno::Errno> {
}
buffer.resize(size, 0);
bytes = unsafe {
libc::fgetxattr(fd, name.as_ptr() as *const i8, buffer.as_mut_ptr() as *mut core::ffi::c_void, buffer.len())
libc::fgetxattr(fd, name.as_ptr() as *const libc::c_char, buffer.as_mut_ptr() as *mut core::ffi::c_void, buffer.len())
};
}
buffer.resize(bytes as usize, 0);


@ -78,24 +78,6 @@ fn test_compressed_blob_writer() -> Result<(), Error> {
verify_test_blob(blob_writer.finish()?)
}
#[test]
fn test_signed_blob_writer() -> Result<(), Error> {
let tmp = Cursor::new(Vec::<u8>::new());
let mut blob_writer = DataBlobWriter::new_signed(tmp, CRYPT_CONFIG.clone())?;
blob_writer.write_all(&TEST_DATA)?;
verify_test_blob(blob_writer.finish()?)
}
#[test]
fn test_signed_compressed_blob_writer() -> Result<(), Error> {
let tmp = Cursor::new(Vec::<u8>::new());
let mut blob_writer = DataBlobWriter::new_signed_compressed(tmp, CRYPT_CONFIG.clone())?;
blob_writer.write_all(&TEST_DATA)?;
verify_test_blob(blob_writer.finish()?)
}
#[test]
fn test_encrypted_blob_writer() -> Result<(), Error> {
let tmp = Cursor::new(Vec::<u8>::new());


@ -76,6 +76,7 @@ Ext.define('PBS.Dashboard', {
let viewmodel = me.getViewModel();
let res = records[0].data;
viewmodel.set('fingerprint', res.info.fingerprint || Proxmox.Utils.unknownText);
let cpu = res.cpu,
mem = res.memory,
@ -91,6 +92,45 @@ Ext.define('PBS.Dashboard', {
hdPanel.updateValue(root.used / root.total);
},
showFingerPrint: function() {
let me = this;
let vm = me.getViewModel();
let fingerprint = vm.get('fingerprint');
Ext.create('Ext.window.Window', {
modal: true,
width: 600,
title: gettext('Fingerprint'),
layout: 'form',
bodyPadding: '10 0',
items: [
{
xtype: 'textfield',
inputId: 'fingerprintField',
value: fingerprint,
editable: false,
},
],
buttons: [
{
xtype: 'button',
iconCls: 'fa fa-clipboard',
handler: function(b) {
var el = document.getElementById('fingerprintField');
el.select();
document.execCommand("copy");
},
text: gettext('Copy')
},
{
text: gettext('Ok'),
handler: function() {
this.up('window').close();
},
},
],
}).show();
},
updateTasks: function(store, records, success) {
if (!success) return;
let me = this;
@ -134,11 +174,16 @@ Ext.define('PBS.Dashboard', {
timespan: 300, // in seconds
hours: 12, // in hours
error_shown: false,
fingerprint: "",
'bytes_in': 0,
'bytes_out': 0,
'avg_ptime': 0.0
},
formulas: {
disableFPButton: (get) => get('fingerprint') === "",
},
stores: {
usage: {
storeid: 'dash-usage',
@ -164,7 +209,7 @@ Ext.define('PBS.Dashboard', {
autoDestroy: true,
proxy: {
type: 'proxmox',
url: '/api2/json/subscription'
url: '/api2/json/nodes/localhost/subscription'
},
listeners: {
load: 'updateSubscription'
@ -211,6 +256,16 @@ Ext.define('PBS.Dashboard', {
iconCls: 'fa fa-tasks',
title: gettext('Server Resources'),
bodyPadding: '0 20 0 20',
tools: [
{
xtype: 'button',
text: gettext('Show Fingerprint'),
handler: 'showFingerPrint',
bind: {
disabled: '{disableFPButton}',
},
},
],
layout: {
type: 'hbox',
align: 'center'


@ -10,28 +10,30 @@ Ext.define('pbs-data-store-snapshots', {
},
'files',
'owner',
{ name: 'size', type: 'int' },
{ name: 'size', type: 'int', allowNull: true, },
{
name: 'encrypted',
type: 'boolean',
calculate: function(data) {
let encrypted = 0;
let files = 0;
data.files.forEach(file => {
if (file.filename === 'index.json.blob') return; // is never encrypted
if (file.encrypted) {
encrypted++;
}
files++;
});
if (encrypted === 0) {
return 0;
} else if (encrypted < files) {
return 1;
} else {
return 2;
}
}
}
{
name: 'crypt-mode',
type: 'boolean',
calculate: function(data) {
let crypt = {
none: 0,
mixed: 0,
'sign-only': 0,
encrypt: 0,
count: 0,
};
let signed = 0;
data.files.forEach(file => {
if (file.filename === 'index.json.blob') return; // is never encrypted
let mode = PBS.Utils.cryptmap.indexOf(file['crypt-mode']);
if (mode !== -1) {
crypt[file['crypt-mode']]++;
}
crypt.count++;
});
return PBS.Utils.calculateCryptMode(crypt);
}
}
]
@ -79,6 +81,7 @@ Ext.define('PBS.DataStoreContent', {
let url = `/api2/json/admin/datastore/${view.datastore}/snapshots`;
this.store.setProxy({
type: 'proxmox',
timeout: 300*1000, // 5 minutes, we should make that api call faster
url: url
});
@ -148,12 +151,15 @@ Ext.define('PBS.DataStoreContent', {
let children = [];
for (const [_key, group] of Object.entries(groups)) {
let last_backup = 0;
let encrypted = 0;
for (const item of group.children) {
if (item.encrypted > 0) {
encrypted++;
}
if (item["backup-time"] > last_backup) {
let crypt = {
none: 0,
mixed: 0,
'sign-only': 0,
encrypt: 0,
};
for (const item of group.children) {
crypt[PBS.Utils.cryptmap[item['crypt-mode']]]++;
if (item["backup-time"] > last_backup && item.size !== null) {
last_backup = item["backup-time"];
group["backup-time"] = last_backup;
group.files = item.files;
@ -162,14 +168,9 @@ Ext.define('PBS.DataStoreContent', {
}
}
if (encrypted === 0) {
group.encrypted = 0;
} else if (encrypted < group.children.length) {
group.encrypted = 1;
} else {
group.encrypted = 2;
}
group.count = group.children.length;
crypt.count = group.count;
group['crypt-mode'] = PBS.Utils.calculateCryptMode(crypt);
children.push(group);
}
@ -199,6 +200,45 @@ Ext.define('PBS.DataStoreContent', {
win.show();
},
onVerify: function() {
var view = this.getView();
if (!view.datastore) return;
let rec = view.selModel.getSelection()[0];
if (!(rec && rec.data)) return;
let data = rec.data;
let params;
if (data.leaf) {
params = {
"backup-type": data["backup-type"],
"backup-id": data["backup-id"],
"backup-time": (data['backup-time'].getTime()/1000).toFixed(0),
};
} else {
params = {
"backup-type": data.backup_type,
"backup-id": data.backup_id,
};
}
Proxmox.Utils.API2Request({
params: params,
url: `/admin/datastore/${view.datastore}/verify`,
method: 'POST',
failure: function(response) {
Ext.Msg.alert(gettext('Error'), response.htmlStatus);
},
success: function(response, options) {
Ext.create('Proxmox.window.TaskViewer', {
upid: response.result.data,
}).show();
},
});
},
onForget: function() {
var view = this.getView();
@ -256,7 +296,7 @@ Ext.define('PBS.DataStoreContent', {
let encrypted = false;
data.files.forEach(file => {
if (file.filename === 'catalog.pcat1.didx' && file.encrypted) {
if (file.filename === 'catalog.pcat1.didx' && file['crypt-mode'] === 'encrypt') {
encrypted = true;
}
});
@ -303,7 +343,13 @@ Ext.define('PBS.DataStoreContent', {
header: gettext("Size"),
sortable: true,
dataIndex: 'size',
renderer: Proxmox.Utils.format_size,
renderer: (v, meta, record) => {
if (v === undefined || v === null) {
meta.tdCls = "x-grid-row-loading";
return '';
}
return Proxmox.Utils.format_size(v);
},
},
{
xtype: 'numbercolumn',
@ -319,15 +365,8 @@ Ext.define('PBS.DataStoreContent', {
},
{
header: gettext('Encrypted'),
dataIndex: 'encrypted',
renderer: function(value) {
switch (value) {
case 0: return Proxmox.Utils.noText;
case 1: return gettext('Mixed');
case 2: return Proxmox.Utils.yesText;
default: Proxmox.Utils.unknownText;
}
}
dataIndex: 'crypt-mode',
renderer: value => PBS.Utils.cryptText[value] || Proxmox.Utils.unknownText,
},
{
header: gettext("Files"),
@ -337,8 +376,10 @@ Ext.define('PBS.DataStoreContent', {
return files.map((file) => {
let icon = '';
let size = '';
if (file.encrypted) {
icon = '<i class="fa fa-lock"></i> ';
}
let mode = PBS.Utils.cryptmap.indexOf(file['crypt-mode']);
let iconCls = PBS.Utils.cryptIconCls[mode] || '';
if (iconCls !== '') {
icon = `<i class="fa fa-${iconCls}"></i> `;
}
if (file.size) {
size = ` (${Proxmox.Utils.format_size(file.size)})`;
@ -356,12 +397,21 @@ Ext.define('PBS.DataStoreContent', {
iconCls: 'fa fa-refresh',
handler: 'reload',
},
'-',
{
xtype: 'proxmoxButton',
text: gettext('Verify'),
disabled: true,
parentXType: 'pbsDataStoreContent',
enableFn: (rec) => !!rec.data && rec.data.size !== null,
handler: 'onVerify',
},
{
xtype: 'proxmoxButton',
text: gettext('Prune'),
disabled: true,
parentXType: 'pbsDataStoreContent',
enableFn: function(record) { return !record.data.leaf; },
enableFn: (rec) => !rec.data.leaf,
handler: 'onPrune',
},
{
@ -370,24 +420,22 @@ Ext.define('PBS.DataStoreContent', {
disabled: true,
parentXType: 'pbsDataStoreContent',
handler: 'onForget',
dangerous: true,
confirmMsg: function(record) {
console.log(record);
//console.log(record);
let name = record.data.text;
return Ext.String.format(gettext('Are you sure you want to remove snapshot {0}'), `'${name}'`);
},
enableFn: function(record) {
return !!record.data.leaf;
},
enableFn: (rec) => !!rec.data.leaf && rec.data.size !== null,
},
'-',
{
xtype: 'proxmoxButton',
text: gettext('Download Files'),
disabled: true,
parentXType: 'pbsDataStoreContent',
handler: 'openBackupFileDownloader',
enableFn: function(record) {
return !!record.data.leaf;
},
enableFn: (rec) => !!rec.data.leaf && rec.data.size !== null,
},
{
xtype: "proxmoxButton",
@ -396,7 +444,7 @@ Ext.define('PBS.DataStoreContent', {
handler: 'openPxarBrowser',
parentXType: 'pbsDataStoreContent',
enableFn: function(record) {
return !!record.data.leaf && record.data.files.some(el => el.filename.endsWith('pxar.didx'));
return !!record.data.leaf && record.size !== null && record.data.files.some(el => el.filename.endsWith('pxar.didx'));
},
}
],


@ -40,7 +40,7 @@ Ext.define('PBS.DataStorePanel', {
initComponent: function() {
let me = this;
me.title = `${gettext("Data Store")}: ${me.datastore}`;
me.title = `${gettext("Datastore")}: ${me.datastore}`;
me.callParent();
},
});


@ -125,7 +125,7 @@ Ext.define('PBS.MainView', {
},
control: {
'button[reference=logoutButton]': {
'[reference=logoutButton]': {
click: 'logout'
}
},
@ -133,7 +133,8 @@ Ext.define('PBS.MainView', {
init: function(view) {
var me = this;
me.lookupReference('usernameinfo').update({username:Proxmox.UserName});
PBS.data.RunningTasksStore.startUpdate();
me.lookupReference('usernameinfo').setText(Proxmox.UserName);
// show login on requestexception
// fixme: what about other errors
@ -189,7 +190,7 @@ Ext.define('PBS.MainView', {
type: 'hbox',
align: 'middle'
},
margin: '2 5 2 5',
margin: '2 0 2 5',
height: 38,
items: [
{
@ -197,16 +198,17 @@ Ext.define('PBS.MainView', {
prefix: '',
},
{
xtype: 'versioninfo'
},
{
flex: 1
},
{
baseCls: 'x-plain',
reference: 'usernameinfo',
padding: '0 5',
tpl: Ext.String.format(gettext("You are logged in as {0}"), "'{username}'")
},
{
padding: '0 0 0 5',
xtype: 'versioninfo',
},
{
padding: 5,
html: '<a href="https://bugzilla.proxmox.com" target="_blank">BETA</a>',
baseCls: 'x-plain',
},
{
flex: 1,
baseCls: 'x-plain',
},
{
xtype: 'button',
@ -218,11 +220,27 @@ Ext.define('PBS.MainView', {
margin: '0 5 0 0',
},
{
reference: 'logoutButton',
xtype: 'button',
iconCls: 'fa fa-sign-out',
text: gettext('Logout')
}
{
xtype: 'pbsTaskButton',
margin: '0 5 0 0',
},
{
xtype: 'button',
reference: 'usernameinfo',
style: {
// proxmox dark grey p light grey as border
backgroundColor: '#464d4d',
borderColor: '#ABBABA'
},
margin: '0 5 0 0',
iconCls: 'fa fa-user',
menu: [
{
reference: 'logoutButton',
iconCls: 'fa fa-sign-out',
text: gettext('Logout'),
},
],
},
]
},
{


@ -8,6 +8,9 @@ JSSRC= \
form/UserSelector.js \
form/RemoteSelector.js \
form/DataStoreSelector.js \
form/CalendarEvent.js \
data/RunningTasksStore.js \
button/TaskButton.js \
config/UserView.js \
config/RemoteView.js \
config/ACLView.js \
@ -19,6 +22,7 @@ JSSRC= \
window/ACLEdit.js \
window/DataStoreEdit.js \
window/CreateDirectory.js \
window/ZFSCreate.js \
window/FileBrowser.js \
window/BackupFileDownloader.js \
dashboard/DataStoreStatistics.js \
@ -52,6 +56,10 @@ js/proxmox-backup-gui.js: js OnlineHelpInfo.js ${JSSRC}
cat OnlineHelpInfo.js ${JSSRC} >$@.tmp
mv $@.tmp $@
.PHONY: lint
lint: ${JSSRC}
eslint ${JSSRC}
.PHONY: clean
clean:
find . -name '*~' -exec rm {} ';'

Some files were not shown because too many files have changed in this diff.