Compare commits

642 Commits

64394b0de8 bump version to 1.0.7-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-03 10:36:18 +01:00
2f617a4548 docs: tfa: add screenshots
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-03 10:36:18 +01:00
2ba64bed18 ui: tfa: fix emptyText for password
One needs to enter their own password, not the one of the user one
adds/deletes TFA for.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-03 10:36:18 +01:00
cafccb5991 d/control: update
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-03 10:36:18 +01:00
b22e8c3632 tape: add media pool regression tests 2021-02-03 10:23:04 +01:00
7929292618 tape: add regression test for media state 2021-02-03 09:34:31 +01:00
0d4e4cae7f tape: improve pmt command line completion 2021-02-03 08:54:12 +01:00
f4ba2e3155 depend on proxmox 0.10.1 2021-02-03 08:53:34 +01:00
7101ed6e27 ui: tape: add TapeInventory panel
since we do not show the tapes anymore in the BackupOverview, add
another panel where we can list the available tapes in the inventory

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-02-02 14:47:35 +01:00
85ac35aa9a ui: tape: add Restore Window
in the BackupOverview, when a media-set is selected

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-02-02 14:47:21 +01:00
40590561fe ui: tape: TapeBackupWindow: add missing DriveSelector
and make it a bit wider

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-02-02 14:47:05 +01:00
631e550920 ui: tape: rework BackupOverview
instead of grouping by tape (which is rarely interesting),
group by pool -> group -> id -> mediaset

this way a user looking for a backup of a specific vm can do just that

we may want to have an additional view here where we list all snapshots
included in the selected media-set?

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-02-02 14:46:43 +01:00
f806c0effa ui: refactor get_type_icon_cls
we need this later again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-02-02 14:46:15 +01:00
50a4797fb1 api2/types/tape/media: add media_set_ctime to MediaContentEntry
to be able to better sort in the ui

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-02-02 14:45:54 +01:00
cc2a0b12f8 test: define tape tests as submodule 2021-02-02 14:38:15 +01:00
988e8de122 tape: set correct ownership on lock file 2021-02-02 14:18:57 +01:00
2f8809c6bc test: src/tape/inventory.rs - avoid chown when running tests 2021-02-02 13:43:16 +01:00
92b7775fa1 fix debian/control 2021-02-02 12:33:00 +01:00
f4d231e70a test: add regression tests for tape inventory 2021-02-02 12:19:28 +01:00
b419050aa7 bump pxar to 0.8
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-02-02 11:02:08 +01:00
8937c65951 tape: add pmt stoptions/stsetoptions/stclearoptions 2021-02-02 08:58:02 +01:00
6c6ad82d90 tape: add pmt setblk 2021-02-02 07:19:54 +01:00
d0f11b66f7 tape: add read_tapedev_options, display driver options with status command 2021-02-02 06:40:40 +01:00
f9fcac51a5 docs: add initial TFA documentation
better than nothing..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-01 19:46:24 +01:00
ca953d831f cleanup: remove MT_ST_ prefix from SetDrvBufferOptions 2021-02-01 17:54:53 +01:00
01c023d50f paperkey: rustfmt
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-02-01 17:05:40 +01:00
c2113a405e paperkey: simplify block generation
the chunk-iterator already does exactly what we want here..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-02-01 17:05:32 +01:00
5dae81d199 paperkey: allow RSA keys without passphrase
some users might want to store the plain version of their master key for
long-term storage and rely on physical security instead of a passphrase
to protect the paper key.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-02-01 17:05:22 +01:00
bd768c3320 ui: tfa: adapt low recovery key hint, drop unused other hint
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-01 15:39:56 +01:00
572fc035a2 ui: webauthn: add notes/warnings for better UX
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-01 15:37:47 +01:00
99b2f045af ui: tfa: add auto-fill button for webAuthn setup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-01 15:37:47 +01:00
6248e51797 change half-ticket time range from -120..240 to -60..600
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-02-01 15:13:11 +01:00
19e4a36c70 tape: do not use drive.open() within pmt
Do not fail if no media is loaded. Important for the load command.
2021-02-01 12:39:50 +01:00
90769e5694 tape: add pmt lock/unlock 2021-02-01 12:18:55 +01:00
b8cbe5d65b tape: fix tape alert flag decoding 2021-02-01 12:18:55 +01:00
35c95ca653 bump apt-pkg-native dependency
our patches got applied upstream, and a release was cut, so we no longer
need to depend on a manually patched version here.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-02-01 11:53:25 +01:00
2dbc1a9a55 ui: tfa: improve button text for webAuthn
So users know what to press to start off a webauthn challenge.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-01 11:48:43 +01:00
dceecb0bbf debcargo: fix maintainer directive
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-02-01 11:21:21 +01:00
d690d14568 tape: add pmt bsr/fsr 2021-02-01 10:39:04 +01:00
85ef624440 tape: add pmt asf 2021-02-01 10:32:21 +01:00
e995996290 tape: pmt - fix count parameter schema 2021-02-01 10:21:25 +01:00
8e6ad4301d tape: add pmt fsfm/bsfm, pass count as arg_param 2021-02-01 10:18:18 +01:00
86740dfc89 tape: ui - remove drive from pool config 2021-02-01 10:01:06 +01:00
1399c592d1 garbage_collection: only ignore 'missing chunk' errors
with the fix for #2909 (improving handling of missing chunks), we
changed from bailing to warning during a garbage collection when
updating the atime of a chunk.

but, updating the atime can not only fail when the chunk is missing,
but also on other occasions, e.g. no permissions or more importantly,
no space left on the device. in that case, the atime of a valid and used
chunk cannot be updated, and the second sweep of the gc will remove that chunk.
[0] is a real world example of that happening.

instead, only warn on really missing chunks, and bail on all other
errors.

0: https://forum.proxmox.com/threads/pbs-server-full-two-days-later-almost-empty.83274/

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-02-01 09:18:59 +01:00
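
A minimal sketch of that error handling, assuming a utimensat-style atime
update (illustrative only, not the actual PBS code):

    use std::io;
    use std::path::Path;

    // Illustrative stand-in: the real code updates the atime via utimensat().
    fn update_atime(path: &Path) -> io::Result<()> {
        std::fs::File::open(path).map(|_| ())
    }

    fn mark_used_chunk(path: &Path) -> io::Result<()> {
        match update_atime(path) {
            // only a really missing chunk is downgraded to a warning
            Err(err) if err.kind() == io::ErrorKind::NotFound => {
                eprintln!("warning: chunk {:?} is missing", path);
                Ok(())
            }
            // EPERM, ENOSPC, ...: bail, so the GC sweep never removes
            // valid, still-referenced chunks
            other => other,
        }
    }
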
9883b54cba tape: remove drive from pool config 2021-02-01 09:14:28 +01:00
83b8949a98 tape: add pmt weof 2021-01-31 17:33:07 +01:00
28f60e5291 cleanup: avoid compiler warnings 2021-01-31 17:02:55 +01:00
1f31d06f48 tape: add pmt bsf 2021-01-31 17:00:15 +01:00
2f2e83c890 tape: add pmt fsf 2021-01-31 16:54:16 +01:00
b22c618734 tape: add pmt erase 2021-01-31 16:34:10 +01:00
1e041082bb tape: add pmt command line tool
Experimental, not installed for now.
2021-01-31 16:19:53 +01:00
a57ce270ac postinst: add user backup to group tape
So that it is possible to access tape and changer devices.
2021-01-30 11:48:49 +01:00
b5b99a52cd tape: API type cleanup, use serde flatten to derive types 2021-01-30 09:36:54 +01:00
9586ce2f46 tape: move scan_drives API code to correct file 2021-01-30 08:03:17 +01:00
b8d526f18d ui: tape/ChangerStatus - use POST for barcode-label-media 2021-01-29 17:06:53 +01:00
d2edc68ead ui: tape/ChangerStatus: add missing tooltips
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-29 16:54:37 +01:00
4d651378e2 ui: tape: change wrong window title
this is the 'status' msgbox, not the label information

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-29 16:54:19 +01:00
58791864d7 ui: tape/ChangerStatus: add import action for import/export slots
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-29 16:54:03 +01:00
1a41e9af4f ui: tape: add Changer config grid
analogous to the drive grid

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-29 16:53:33 +01:00
c297835b01 tape: proxmox-tape - use API instead of direct functions calls 2021-01-29 11:49:11 +01:00
e68269fcaf tape: proxmox-tape inventory: call API 2021-01-29 11:21:57 +01:00
5243df4712 tape: proxmox-tape - use API instead of direct functions calls 2021-01-29 10:50:11 +01:00
4470eba551 cleanup: factor out common client code to view task log/result 2021-01-29 10:10:04 +01:00
1f2c4713ef tape: improve backup task abort behaviour 2021-01-29 09:23:39 +01:00
a6c16894ff worker_task: log something when we receive an abort request 2021-01-29 09:22:37 +01:00
271764deb9 tape: make it possible to abort tape backup tasks (check_abort)
Also use the task_log macro instead of worker.log.
2021-01-29 09:07:55 +01:00
52f7a73009 display_task_log: make it possible to abort tasks with CTRL-C 2021-01-29 09:06:15 +01:00
bdb6e6b83f api2/reader: asyncify the reader worker task
this way, the code is much more readable

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-29 06:59:25 +01:00
41dacd5d3d tape: use worker task for eject-media api 2021-01-28 16:49:08 +01:00
eb1dfb02b5 tape: proxmox-tape - use api for erase-media and rewind 2021-01-28 16:36:10 +01:00
1a0eb86344 tape: gui: s/encryption/encrypt/ in media pool config panel 2021-01-28 15:50:01 +01:00
bdb62b20a3 tape: media_pool config api - set protected flags where required 2021-01-28 15:42:32 +01:00
f2ca03d7d0 cleanup: avoid compiler warning 2021-01-28 15:32:21 +01:00
00ac86c31b tape/drive/linux_tape: fix and refactor usage of sg-tape-cmd
when executing this code as non-root, we use sg-tape-cmd (a setuid binary)
to execute various ioctls on the tape device

we give the command the open tape device fd as stdin, but did not
dup it, so the std::process::Stdio handle closed it on drop,
which made subsequent operations on that file fail (since it was closed)

fix it by dup'ing it before giving it to the command, and also refactor
the calling code, so that we do not forget to do this

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-28 15:24:32 +01:00
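
A minimal sketch of the dup fix, assuming the tape device is held as a
std::fs::File (helper name hypothetical, libc crate assumed):

    use std::os::unix::io::{AsRawFd, FromRawFd, RawFd};
    use std::process::{Command, Output, Stdio};

    // Hypothetical helper: pass a *duplicated* fd as the child's stdin, so
    // that dropping the Stdio only closes the duplicate, not our handle.
    fn run_sg_tape_cmd(file: &std::fs::File, args: &[&str]) -> std::io::Result<Output> {
        let dup_fd: RawFd = unsafe { libc::dup(file.as_raw_fd()) };
        if dup_fd < 0 {
            return Err(std::io::Error::last_os_error());
        }
        // Stdio takes ownership of dup_fd and closes it on drop
        let stdin = unsafe { Stdio::from_raw_fd(dup_fd) };
        Command::new("sg-tape-cmd").args(args).stdin(stdin).output()
    }
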
627d000098 tape: change changer-drive-id to changer-drivenum
because it changed in the config

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-28 15:11:22 +01:00
4be4736603 tape/changer: refactor marking of import/export slots from config
we did this for 'mtx', but missed it for the sg_pt_changer code.
refactor it into the MtxStatus struct, and call it from both
code paths

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-28 15:10:55 +01:00
2da7aca8e8 tape/changer: add vendor/model to DriveStatus
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-28 15:10:31 +01:00
8306b8b1a5 ui: tape: use panels in tape interface
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-28 15:08:56 +01:00
605cfd4ab1 ui: tape: move TapeManagement.js to tape dir
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-28 15:08:31 +01:00
dec3147501 ui: tape: add PoolConfig
CRUD interface to manage media pools

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-28 15:08:21 +01:00
c642aec128 ui: tape: add DriveConfig panel
mostly typical CRUD interface for managing drives, with an
additional actioncolumn containing some useful actions, e.g.
* read the label
* show volume-statistics
* show the status
* label the inserted tape

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-28 15:08:08 +01:00
fd9aa8dfa2 ui: tape: add ChangerStatus panel
this lets users manage changers and view the status of one
by having an overview of:
* slots for tapes
* import/export slots
* drives

lets the user:
* barcode-label all the tapes in the library
* move tapes between slots, into/out of drives
* show some basic info when a tape is loaded into a drive
* show the status of a drive
* clean a drive

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-28 15:07:57 +01:00
07d6c0967d ui: tape: add BackupOverview Panel
shows all tapes with the relevant info
* which pool it belongs to
* what backups are on it
* which media-set
* location
* etc.

This is very rough, and maybe not the best way to display this information.
It may make sense to reverse the tree, i.e. having pools at top-level,
then media-sets, then tapes, then snapshots..

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-28 15:07:44 +01:00
80a3749088 ui: tape: add Edit Windows
includes edit windows for
* Drives
* Changers
* Media Pools
* Labeling Media
* Making new Tape Backups

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-28 15:07:29 +01:00
c72fdb53ae ui: tape: add form fields
this includes selectors for
* Allocation Policy
* Retention Policy
* Drives
* Changers
* Tape Device Paths
* Pools

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-28 15:07:09 +01:00
b03ec281bf api2/config/{drive, changer}: prevent adding same device multiple times
this check is not perfect since there are often multiple device
nodes per drive/changer, but from the scan api we should return always
the same, so for an api user this should be enough

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-28 15:03:56 +01:00
cef4654ff4 api2/tape/drive: change methods of some api calls from put to get
makes more sense to have retrieving api calls as get instead of put

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-28 15:02:52 +01:00
f45dceeb73 api2/tape/drive: add load_media as api call
code was already there, just add it as api call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-28 15:02:13 +01:00
18262a88c9 api2/tape/changer: add changer filter to list_drives api call
so that an api user can get the drives belonging to a changer
without having to parse the config listing themselves

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-28 15:01:41 +01:00
87f4be7998 tape: use api to run proxmox-tape backup 2021-01-28 14:56:42 +01:00
d737adc6be tape: rename changer_drive_id to changer_drivenum 2021-01-28 11:29:59 +01:00
5fdaecf6f4 api2/tape/drive: reorganize drive api
similar to the changers, create a listing at /tape/drive and put
the specific api calls below that

move the scan api call up one level

remove the status info from the config listing

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-28 11:15:17 +01:00
d8792b88ef api2/types/tape/drive: add changer_drivenum
so that an api user can see which drive belongs to which drivenum of a
changer, for changers with multiple drives

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-28 11:14:28 +01:00
8b1174f50a ui: tfa: drop useless extjs state save handling
it was replaced with our own, which is not much more code and actually works.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-27 20:20:35 +01:00
8c8f7b5a09 ui: tfa: disable confirm during handling of challenge
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-27 20:20:35 +01:00
44915932d5 ui: tfa: webautn: move spinning icon down to waiting message
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-27 20:20:35 +01:00
e90fdf5bed ui: tfa: make webAuthn abortable and restartable
Fix two things:
* do not reject the login promise when we get the abort DOMException
  error
* safely save the original challenge string, as we work on a reference
  here, and avoid converting it to a UInt8 array twice, which would
  raise an exception.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-27 20:20:35 +01:00
a11c8ab485 ui: tfa: only immediately trigger webAuthn when it's the initial tab
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-27 19:38:40 +01:00
74a50158ca ui: tfa: drop bogus console.error
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-27 19:38:08 +01:00
6ee85d57be ui: tfa: save last used TFA method and prefer it next time
simple heuristic for those people who always prefer a specific TFA
method and have the others only as backup.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-27 18:45:36 +01:00
b2fc6f9228 fix build: commit missing file 2021-01-27 18:13:58 +01:00
f91481eded ui: rework TFA prompt on login
Improve UX by avoiding the need to click some buttons twice, and by no
longer calling both TOTP and Recovery codes "OTP" codes while showing
multiple buttons that all have the same goal of submitting a TFA token.

Instead use a tab panel with a single submit button.

WebAuthn can and should still be improved, but that is OK as a
followup.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-27 13:21:25 +01:00
651a61f559 pmtx: implement scan command 2021-01-27 12:40:51 +01:00
b06edeca02 remove generated file synopsis.rst (no need to track in git) 2021-01-27 12:38:02 +01:00
89ccb125d1 tape: use 36 byte Inquiry (recommended size) 2021-01-27 12:35:28 +01:00
c972704477 install pmtx binary 2021-01-27 11:36:15 +01:00
887f1cb90c cleanup: move scan changers API implementation 2021-01-27 09:58:16 +01:00
16b4d78400 tape: rename retry_command to execute_scsi_command, make retry a flag 2021-01-27 09:34:24 +01:00
ec8d9c6b80 tape: repeat changer scsi command until successful 2021-01-27 08:59:10 +01:00
49c2d1dcad sgutils2: use sg_get_asc_ascq_str to produce error messages 2021-01-27 06:56:11 +01:00
d0f51651f9 sgutils2: add ASC codes from tandeberg docs 2021-01-26 18:54:08 +01:00
481ccf16a5 sgutils2: further improve error messages 2021-01-26 15:19:43 +01:00
a223458753 sgutils2: support RequestSense Descriptor format 2021-01-26 13:38:16 +01:00
e1740f3f01 tape/changer/mtx: add mtx parser test
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-26 12:51:26 +01:00
740dc9d1d4 api2/tape/changer: reorganize api
add a changer listing here (copied from api2/config/changer)
and put the status and transfer api calls below that

puts the changer scan into the top level tape api
and removes the (now redundant) info from the config api path

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-26 12:47:34 +01:00
bbf01b644c tape: fix typos
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-26 12:39:54 +01:00
66d066964c docs/tape: fix some typos and improve wording
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-26 12:39:06 +01:00
c81c46c336 sgutils2: improve error messages 2021-01-26 12:24:58 +01:00
c3747b93c8 tape: add new command line tool "pmtx"
Also improve sgutil2 error reporting
2021-01-26 11:57:15 +01:00
d43265b7f1 ui: add missing uri encoding in user edit and view
the userid parameter needs to be properly encoded when shown in the browser

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-26 10:53:30 +01:00
6864fd0149 server/worker_task: improve newline handling in upid_read_status
improves upid_read_status with:
* ignore multiple newlines at the end
* remove all code that could panic (array index access)
  the one place where we access with '[pos+1..]' is ok since
  we explicitly test the len of the vector; this is done to
  let rust optimize away the range checks, so it cannot panic

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-26 10:48:15 +01:00
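
The slicing pattern described there, in sketch form (illustrative, not the
actual function):

    // trim trailing newlines without indexing; slice only after a match,
    // where pos < len holds by construction, so `[pos + 1..]` cannot panic
    fn upid_status_line(text: &str) -> &str {
        let text = text.trim_end_matches('\n');
        match text.rfind('\n') {
            Some(pos) => &text[pos + 1..],
            None => text,
        }
    }
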
340c0bf9e3 pxar: don't clone patterns unnecessarily
The options struct has no Drop handler and is passed by-move
so we can partially move out of it.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-26 10:24:18 +01:00
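
A small illustration of such a partial move (struct and field names
invented):

    // no Drop impl, so the compiler allows moving individual fields out
    struct CreateOptions {
        patterns: Vec<String>,
        verbose: bool,
    }

    fn create_archive(options: CreateOptions) {
        let patterns = options.patterns; // moved out, not cloned
        let verbose = options.verbose;   // bool is Copy
        for pattern in patterns {
            if verbose {
                println!("matching {}", pattern);
            }
        }
    }
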
4d104cd4d8 clippy: more misc fixes
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-26 09:54:55 +01:00
367c0ff7c6 clippy: allow api functions with many arguments
some of those can be reduced/cleaned up when we have updater support in
the api macro.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-26 09:54:52 +01:00
9c26a3d61a verify: factor out common parameters
all the verify methods pass along the following:
- task worker
- datastore
- corrupt and verified chunks

might as well pull that out into a common type, with the added bonus of
now having a single point for construction instead of copying the
default capacities in three different modules..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-26 09:54:49 +01:00
93e3581ce7 derive/impl and use Default for some structs
and revamp HttpClientOptions with two constructors for the common use
cases

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-26 09:54:45 +01:00
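
For illustration, the rough shape of such a revamp (constructor names and
fields are guesses, not the actual API):

    #[derive(Default)]
    struct HttpClientOptions {
        interactive: bool,
        ticket_cache: bool,
    }

    impl HttpClientOptions {
        // interactive CLI use: prompt for passwords, use the ticket cache
        fn new_interactive() -> Self {
            Self { interactive: true, ticket_cache: true }
        }

        // non-interactive use (e.g. called from another service)
        fn new_non_interactive() -> Self {
            Self::default()
        }
    }
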
f4e52bb27d authid: make Tokenname(Ref) derive Eq
it's needed to derive Hash, and we always compare Authids or their
Userid components, never just the Tokenname part anyway..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-26 09:54:40 +01:00
72064fd0df pxar: extract PxarExtractOptions
same as PxarCreateOptions, but for extraction/restore rather than
create.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-26 09:54:36 +01:00
77486a608e pxar: factor out PxarCreateOptions
containing the CLI parameters that are mostly passed-through from the
client to our pxar archive creation wrapper in pxar::create

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-26 09:54:32 +01:00
e97025ab02 pxar: typedef on_error as ErrorHandler
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-26 09:54:26 +01:00
e43b9175c0 client: factor out UploadOptions
to reduce function signature complexity.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-26 09:54:18 +01:00
9cc1415ef5 systemd/time: extract Time/DateSpec structs
could be pulled up into CalendarEvent if desired..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-26 09:54:13 +01:00
bd215dc0e4 async index reader: typedef ReadFuture
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-26 09:53:58 +01:00
12e874cef0 allow complex Futures in tower_service impl
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-26 09:53:55 +01:00
6d233161b0 client: refactor catalog upload spawning
by pulling the Result type out into a separate struct

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-26 09:53:51 +01:00
905a570489 broadcast_future: refactor broadcast/future binding
into its own, private struct.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-26 09:53:48 +01:00
432fe44187 report: type-alias function call tuple
to make clippy happy.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-26 09:53:43 +01:00
51b938496d tools::sgutils2: name fixup
it's not a box anymore

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-25 15:05:52 +01:00
b7f9b25e4d tools::sgutils2: use NonNull
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-25 14:56:10 +01:00
fe61280b6b tools::sgutils2: extern 'C' and import ordering
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-25 14:54:25 +01:00
68c087d578 tools::sgutils2: don't transmute to a Box
Otherwise we run the drop handler for the scsi pt object AND
the box itself, which shouldn't even work as it should be
doing a double-free (unless the library does some kind of
reference counting in which case this should simply crash
later on?)

anyway, let's make a wrapper simply called `SgPt` containing
the pointer from `construct_scsi_pt_obj()`

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-25 14:48:27 +01:00
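
In sketch form, with the two real libsgutils2 entry points declared (the
wrapper shown is a simplification):

    use std::ptr::NonNull;

    #[repr(C)]
    struct SgPtBase {
        _private: [u8; 0],
    }

    extern "C" {
        fn construct_scsi_pt_obj() -> *mut SgPtBase;
        fn destruct_scsi_pt_obj(objp: *mut SgPtBase);
    }

    // own the raw pointer; run the C destructor exactly once on drop,
    // without a Box ever trying to free the foreign allocation itself
    struct SgPt(NonNull<SgPtBase>);

    impl SgPt {
        fn new() -> Self {
            Self(NonNull::new(unsafe { construct_scsi_pt_obj() })
                .expect("construct_scsi_pt_obj failed"))
        }
    }

    impl Drop for SgPt {
        fn drop(&mut self) {
            unsafe { destruct_scsi_pt_obj(self.0.as_ptr()) }
        }
    }
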
d6bf87cab7 tools::sgutils2: const correctness
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-25 14:33:45 +01:00
2b96a43879 tape: cleanup - use ScsiMediaChange trait instead of mtx_status() 2021-01-25 13:25:22 +01:00
697c41c584 tape: add/use rust scsi changer implementation using libsgutil2 2021-01-25 13:14:07 +01:00
a2379996e6 sgutils2: add scsi_inquiry command 2021-01-25 13:14:07 +01:00
29077d95db http-client: further clippy cleanups
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-25 11:41:54 +01:00
dbd00a57b0 http-client: fix typoed ticket cache condition
which was even copy-pasted once without noticing.

found with clippy.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-25 11:41:51 +01:00
d08cff51a4 rework GC traversal error handling
the error messages don't make sense with an empty default

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-25 11:41:48 +01:00
3e461dec1c apt: let api handle optional bool with default
one less FIXME :)

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-25 11:41:46 +01:00
4d08e25913 clippy: rewrite ifs with identical return values
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-25 11:41:43 +01:00
43313c2ee7 clippy: rewrite comparison chains
the chunk_stream one can be collapsed, since split() == split_to(at) with
at set to buffer.len() anyway.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-25 11:41:39 +01:00
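
The equivalence behind that collapse, sketched with the bytes crate:

    use bytes::BytesMut;

    // a chain like
    //   if len > at { split_to(at) } else if len == at { split() } else { .. }
    // folds into two cases, because buffer.split() is exactly
    // buffer.split_to(buffer.len())
    fn next_chunk(buffer: &mut BytesMut, at: usize) -> Option<BytesMut> {
        if buffer.len() >= at {
            Some(buffer.split_to(at))
        } else {
            None // wait for more data
        }
    }
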
81b2a87232 clippy: fix Mutex with unused value
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-25 11:41:36 +01:00
3d8cd0ced7 clippy: add is_empty() when len() is implemented
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-25 11:41:32 +01:00
7c78d54231 sgutils: allow command which does not transfer any data 2021-01-24 15:19:43 +01:00
f9d71e8b17 sgutils2: allow to set custom timeouts 2021-01-24 14:54:30 +01:00
0107fd323c cleanup: avoid compiler warnings 2021-01-23 17:34:26 +01:00
8ba47929a0 tape: add docu about paperkey 2021-01-23 15:34:28 +01:00
794b0fe9ce tape: document hardware encryption 2021-01-23 15:19:28 +01:00
979dccc7ec tape: avoid error when clearing encryption key
Simply ignore the clear request when sg_spin_data_encryption_caps fails.
Assume those are tapes without hardware encryption support.
2021-01-23 10:20:43 +01:00
44a5f38bc4 docs: clarify that client-server communication is secure
This clarifies the fact that all communication between client and server
uses TLS for secure communication.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-01-22 16:07:44 +01:00
bf78f70885 improve code docs in api2
Note: API methods should be declared pub, so that they show up in the generated docu.
2021-01-22 15:57:42 +01:00
545706cbee d/control: bump B-D on pve-eslint
the old one does not understand www/config/TfaView.js and fails the
build..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-22 14:47:39 +01:00
0d916ac531 tape: add media pool config code docs 2021-01-22 12:01:46 +01:00
d4ab407045 tape: add drive config code docs 2021-01-22 11:51:36 +01:00
45212a8c78 fix mtx parser bug: s/strip_suffix/strip_prefix/ 2021-01-22 11:00:56 +01:00
64b83c3d70 tape: implement paperkey command for tape encryption keys 2021-01-22 09:56:14 +01:00
639a6782bd paperkey: move code to src/tools/paperkey.rs 2021-01-22 09:42:59 +01:00
5f34d69bcc tape: add volume-statistics api/command 2021-01-22 08:45:35 +01:00
337ff5a3cc tape: add estimated medium wearout to status 2021-01-22 08:06:25 +01:00
8e6459a818 tape: set encryption key on restore 2021-01-22 07:26:42 +01:00
aff3e16194 tape: add code docs to src/config/tape_encryption_keys.rs 2021-01-21 18:23:07 +01:00
9372c0787d renamed src/tape/sgutils2.rs -> src/tools/sgutils2.rs 2021-01-21 17:57:17 +01:00
83fb2da53e tape: move MediaCatalog magic number into struct (doc cleanup) 2021-01-21 17:48:07 +01:00
645a044bf6 tape: further hierarchy improvements 2021-01-21 17:25:32 +01:00
37796ff73f tape: change code hierarchy to improve docs 2021-01-21 17:12:01 +01:00
e1fdcb1678 tape: do not export/doc low level libsgutils2 bindings 2021-01-21 16:38:24 +01:00
aab9a26409 ui: cleanup order of declaring properties
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-21 15:09:22 +01:00
958055a789 ui: fix on-parse use of global Proxmox.UserName
This is wrong most of the time, when not loading the web interface
with valid credentials, and thus some checks or defaults did not
evaluate correctly when the underlying value was only set later.

Needs to be set on component creation only, this can be done through
initComponent, event listeners, view controllers or cbind closures.

Use the latter, as all affected components already use cbind.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-21 15:08:46 +01:00
edda5039d4 tape: improve code docs 2021-01-21 13:19:07 +01:00
1c86893d95 cleanup: always compute fingerprint in KeyConfig constructors 2021-01-21 11:56:54 +01:00
d543587d34 Merge branch 'master' of ssh://proxdev.maurer-it.com/rust/proxmox-backup 2021-01-21 10:56:52 +01:00
780bc4cad2 tape: try to set encryption key with read-label command 2021-01-21 10:31:49 +01:00
18bd6ba13d tape: restore_key - always update key, even if there is already an entry 2021-01-21 10:31:49 +01:00
4dafc513cc tape: fix file permissions for tape encryption keys 2021-01-21 10:31:49 +01:00
7acd5c5659 cleanup: remove misleading wording from code docs 2021-01-21 10:31:49 +01:00
8428063d9e cleanup: KeyConfig::decrypt - show password hint on error 2021-01-21 10:31:49 +01:00
f490dda05a tape: use type Uuid instead of String 2021-01-21 10:31:49 +01:00
2b191385ea tape: use specialized encryption key per media-set 2021-01-21 10:31:49 +01:00
bc228e5eaf api: add types for UUIDs 2021-01-20 17:16:46 +01:00
8be65e34de clippy: replace transmute with &*
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:41:02 +01:00
d967d8f1a7 clippy: remove drop(&..)
it does nothing.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:41:02 +01:00
50deb0d3f8 clippy: use is_null to check for null pointers
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:41:02 +01:00
1d928b25fe clippy: remove some unnecessary reference taking
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
f2f81791d1 clippy: fix for_kv_map
and allow it in the one case where the entry loop is intended, but the
code is not yet implemented fully.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
382f10a0cc clippy: fix/allow needless_range_loop
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
0d2133db98 clippy: use while let loops
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
09faa9ee95 clippy: pass &str/&[..] instead of &String/&Vec
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
ccec086e25 clippy: remove unnecessary &mut
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
05725ac9a4 clippy: remove unnecessary let binding
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
96b7483138 clippy: remove/replace needless explicit lifetimes
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
81281d04a4 clippy: fix/allow identity_op
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
e062ebbc29 clippy: use *_or_else with function calls
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
b92cad0938 clippy: convert single match to if let
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
ea368a06cd clippy: misc. fixes
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
3f48cdb380 clippy: don't pass along unit value
make it explicit. this whole section should probably be re-written with
select!

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
17c7b46a69 clippy: use unwrap_or_default
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
a375df6f4c clippy: use copied/cloned instead of map
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
a3775bb4e8 clippy: shorten assignments
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
1e0c6194b5 clippy: fix option_as_ref_deref
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
a6bd669854 clippy: use matches!
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
6334bdc1c5 clippy: collapse nested ifs
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
3b82f3eea5 clippy: avoid useless format!
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
38556bf60d clippy: remove explicit returns
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
d8d8af9826 clippy: use chars / byte string literals
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
3984a5fd77 clippy: is_some/none/ok/err/empty
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
397356096a clippy: remove needless bool literals
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:52 +01:00
365915da9a clippy: use strip_prefix instead of manual stripping
it's less error-prone (off-by-one!)

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:22:59 +01:00
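
A tiny example of the difference:

    fn main() {
        let path = "/run/proxmox-backup/somefile";
        // manual stripping: the `5` has to match the prefix length exactly
        let manual = if path.starts_with("/run/") { &path[5..] } else { path };
        // strip_prefix cannot be off by one
        let stripped = path.strip_prefix("/run/").unwrap_or(path);
        assert_eq!(manual, stripped);
    }
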
87152fbac6 clippy: drop redundant 'static lifetime
those declarations are already const/static..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:22:59 +01:00
22a9189ee0 clippy: remove unnecessary closures
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:22:59 +01:00
4428818412 clippy: remove unnecessary clones
and from::<T>(T)

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:22:59 +01:00
47ea98e0e3 clippy: collapse/rework nested ifs
no semantic changes (intended).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:22:59 +01:00
6dd0513546 tape: allocate new media set when pool encryption key changes 2021-01-20 15:43:39 +01:00
8abe51b71d improve code docs 2021-01-20 15:43:19 +01:00
69b8bc3bfa tape: implement show key
Moved API types Kdf and KeyInfo to src/api2/types/mod.rs.
2021-01-20 15:43:19 +01:00
301b8aa0a5 tape: implement change-passphrase for tape encryption keys 2021-01-20 15:43:19 +01:00
e5b6c93323 tape: add --kdf parameter to create key api 2021-01-20 15:43:19 +01:00
9a045790ed cleanup KeyConfig 2021-01-20 15:43:19 +01:00
82a103c8f9 add "password hint" to KeyConfig 2021-01-20 15:43:19 +01:00
0123039271 ui: tfa: rework removal confirmation dialog
present all relevant information about the TFA token to be removed,
so that a user can make a better decision.

Rework layout to match our commonly used style.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-19 19:46:10 +01:00
9a0e115a37 ui: tfa view: add userid to TFA data model
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-19 19:46:10 +01:00
867bfc4378 ui: login view: fix missing trailing comma
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-19 19:46:10 +01:00
feb1645f37 tape: generate random encryptions keys and store key_config on media 2021-01-19 11:20:07 +01:00
8ca37d6a65 cleanup: factor out decrypt_key_config 2021-01-19 11:20:07 +01:00
ac163a7c18 ui: tfa/totp: fix setting issuer in secret URL
it's recommended to set the issuer for both the GET parameter and
the initial issuer label prefix [0].

[0]: https://github.com/google/google-authenticator/wiki/Key-Uri-Format#label

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-18 16:27:02 +01:00
9b6bddb24c tfa: remove/empty description for recovery keys
While the user chosen description is not allowed to be
empty, we do leave it empty for recovery keys, as a "dummy
description" makes little sense...

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-18 15:20:39 +01:00
f57ae48286 ui: tfa: fix ctime column width
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-18 14:31:15 +01:00
4cbd7eb7f9 gui: tfa: make description fill the remaining space
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-18 14:06:12 +01:00
310686726a gui: tfa: show when entries were created
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-18 14:06:12 +01:00
ad5cee1d22 tfa: add 'created' timestamp to entries
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-18 14:06:12 +01:00
bad6e32075 docs: fix typo in client manpage
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2021-01-18 13:52:11 +01:00
8ae6d28cd4 gui: enumerate recovery keys and list in 2nd factor window
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-18 13:51:23 +01:00
ca1060862e tfa: remember recovery indices
and tell the client which keys are still available rather
than just yes/no/low

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-18 13:51:23 +01:00
8a0046f519 tape: implement encrypted backup - simple version
This is just a proof of concept, only storing the encryption key fingerprint
inside the media-set label.
2021-01-18 13:38:22 +01:00
84cbdb35c4 implement FromStr for Fingerprint 2021-01-18 13:38:22 +01:00
1e93fbb5c1 tape: add encrypt property to media pool configuration 2021-01-18 13:38:22 +01:00
619554af2b tape: clear encryption key before writing labels
We always write labels unencrypted.
2021-01-18 13:38:22 +01:00
d5a48b5ce4 tape: add hardware encryption key management api 2021-01-18 13:38:22 +01:00
4e9cc3e97c ui: tfa: fix title for removal confirmation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-18 13:28:02 +01:00
492bc2ba63 ui: tfa/recovery: add print button to key info window
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-18 10:45:47 +01:00
995492100a ui: tfa/recovery: fix copy button text, add icon
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-18 10:45:28 +01:00
854319d88c ui: tfa/recovery: disallow to close key info window with ESC
to avoid accidentally closing it

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-18 10:44:40 +01:00
3189d05134 ui: tfa: specify which confirmation password is required
Clarify that the password of the user one wants to add TFA to is
required, which is not necessarily that of the currently logged-in
user. Use an empty text for that.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-18 10:12:23 +01:00
b2a43b987c ui: tfa totp: whitespace and padding fix
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-18 10:10:16 +01:00
6676409f7f ui: access: streamline add/edit/.. button order and separators
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-18 09:33:29 +01:00
44de5bcc00 pull: add error context for initial group list call
otherwise the user is confronted with a generic error like "permission
check failed" with no indication that it refers to a request made to the
remote PBS instance..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-18 06:51:05 +01:00
e2956c605d pull: rustfmt
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-18 06:50:23 +01:00
b22b6c2299 tape: encryption scsi command cleanup 2021-01-16 18:24:04 +01:00
90950c9c20 tape: add scsi commands to control drive hardware encryption 2021-01-16 15:59:05 +01:00
0c5b9e7820 tape: sgutils2.rs - add do_out_command()
Make it possible to run commands that write data.
2021-01-16 15:59:05 +01:00
a9ffa010c8 ui: webauthn config: set default values for unconfigured case
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-15 16:25:47 +01:00
a6a903293b ui: webauthn config: use ID instead of Id/id
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-15 16:25:26 +01:00
3fffcb5d77 gui: tfa configuration
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-15 15:19:52 +01:00
a670b99db1 tfa: add webauthn configuration API entry points
Currently there's not yet a node config and the WA config is
somewhat "tightly coupled" to the user entries in that
changing it can lock them all out, so for now I opted for
less reorganization and just use a digest of the
canonicalized config here, and keep it all in the tfa.json
file.

Experimentally using the flatten feature on the methods with
an `Updater` struct, similar to what the api macro is supposed
to be able to derive on its own in the future.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-15 15:19:52 +01:00
aefd74197a backup::manifest: use tools::json for canonical representation
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-15 15:19:52 +01:00
9ff747ef50 add tools::json for canonical json generation
moving this from backup::manifest, no functional changes

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-15 15:19:52 +01:00
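
Canonical here means a stable byte representation: sorted object keys, no
insignificant whitespace. A sketch of the idea (not the actual
implementation):

    use serde_json::Value;

    // recursively emit JSON with sorted object keys, so equal values
    // always serialize to identical bytes (e.g. for signing manifests)
    fn write_canonical(value: &Value, out: &mut String) {
        match value {
            Value::Object(map) => {
                let mut keys: Vec<_> = map.keys().collect();
                keys.sort();
                out.push('{');
                for (i, key) in keys.iter().enumerate() {
                    if i > 0 { out.push(','); }
                    out.push_str(&serde_json::to_string(key).unwrap());
                    out.push(':');
                    write_canonical(&map[key.as_str()], out);
                }
                out.push('}');
            }
            Value::Array(list) => {
                out.push('[');
                for (i, item) in list.iter().enumerate() {
                    if i > 0 { out.push(','); }
                    write_canonical(item, out);
                }
                out.push(']');
            }
            other => out.push_str(&serde_json::to_string(other).unwrap()),
        }
    }
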
a08a198577 tape: do not abort backup if tape drive does not support tape-alert-flags 2021-01-15 11:43:17 +01:00
4cfb123448 tape: update restore docu 2021-01-15 09:44:46 +01:00
198ebc6c86 d/rules: patch out wrongly linked libraries from ELFs
this is a HACK!

It seems that due to lots of binaries getting compiled from a single
crate the compiler is confused when linking in dependencies to each
binary's ELF.

It picks up the combined set (union) of all dependencies and sets
those to every ELF. This results in the client, for example, linking
to libapt-pkg or libsystemd even if none of those symbols are used..

This could be possibly fixed by restructuring the source tree into
sub crates/workspaces or what not, not really tested and *lots* of
work.

So as a stopgap measure, use `ldd -u` to find out unused linkage and
remove them using `patchelf`.

While this works well, and seems to not interfere with any debug
symbol usage or other usage in general, it still is a hack and should
be dropped once the restructuring of the source tree is shown to
bring similar effects.

This allows for much easier re-use of the generated client .deb
package on other Debian derivatives (e.g., Ubuntu), which was blocked
until now due to a wrong libapt-pkg version or the like.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-15 08:52:53 +01:00
a8abcd9b30 debian/control: set VCS urls
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-15 08:52:53 +01:00
b7469f5a9a d/control: sort and fix whitespace errors
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-15 08:52:53 +01:00
6bbe49aa14 access: restrict password changes on @pam realm to superuser
for behavior consistency with `update_user`

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2021-01-15 08:49:22 +01:00
5aa1019010 access: limit editing pam credentials to superuser
modifying @pam users' credentials should only be possible for root@pam,
otherwise it can have unintended consequences.

also enforce the same limit on user creation (except self_service check,
since it makes no sense during user creation)

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2021-01-15 08:49:22 +01:00
29a59b380c proxmox 0.10: adapt to moved ParameterSchema
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-14 16:01:33 +01:00
0bfcea6a11 cleanup: remove unnecessary 'mut' and '.clone()'
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-14 16:01:33 +01:00
19f5aa252f examples: unify h2 examples
update them to the new tokio-openssl API and remove socket buffer size
setting - it was removed from the TcpStream API, and is now only
available via TcpSocket (which can in turn be converted to a
TcpListener), but this is not needed for this example.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-14 16:01:33 +01:00
89e9134a3f hyper: use new hyper::upgrade
the old Body::on_upgrade method is no more

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-14 16:01:33 +01:00
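
In hyper 0.14 terms, roughly (handler shown for illustration only):

    use hyper::{Body, Request};

    // hyper 0.13: req.into_body().on_upgrade().await
    // hyper 0.14: the upgrade future lives in hyper::upgrade instead
    async fn handle_upgrade(req: Request<Body>) -> Result<(), hyper::Error> {
        let upgraded = hyper::upgrade::on(req).await?;
        // `upgraded` implements AsyncRead + AsyncWrite for the new protocol
        let _ = upgraded;
        Ok(())
    }
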
b5a202acb6 tokio 1.0: update to new Signal interface
Signal does not yet re-implement Stream (and is not yet wrapped in
tokio-stream either).

see https://github.com/tokio-rs/tokio/pull/3383

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-14 16:01:33 +01:00
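
Polling a signal now looks like this (minimal example):

    use tokio::signal::unix::{signal, SignalKind};

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        // tokio 1.0: Signal is not a Stream anymore; use recv() directly
        let mut hup = signal(SignalKind::hangup())?;
        hup.recv().await;
        println!("got SIGHUP");
        Ok(())
    }
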
0f860f712f tokio 1.0: update to new tokio-openssl interface
connect/accept are now happening on pinned SslStreams

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-14 16:01:33 +01:00
7c66701366 tokio 1.0: use ReceiverStream from tokio-stream
to wrap a Receiver in a Stream. this will likely move back into tokio
proper once we have a std Stream..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-14 16:01:33 +01:00
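
The wrapping looks roughly like this:

    use tokio_stream::{wrappers::ReceiverStream, StreamExt};

    #[tokio::main]
    async fn main() {
        let (tx, rx) = tokio::sync::mpsc::channel::<u32>(16);
        tokio::spawn(async move {
            let _ = tx.send(42).await;
        });
        // tokio 1.0 removed the Stream impl from Receiver; wrap it instead
        let mut stream = ReceiverStream::new(rx);
        while let Some(item) = stream.next().await {
            println!("got {}", item);
        }
    }
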
585e90c0de tokio: adapt to 1.0 process:Child changes
Child itself is no longer a Future, but it has a new wait() async fn
that does the same thing

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-14 16:01:33 +01:00
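
Minimal example of the new pattern:

    use tokio::process::Command;

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        let mut child = Command::new("true").spawn()?;
        // tokio 1.0: Child is not a Future; await its wait() method instead
        let status = child.wait().await?;
        println!("child exited with {}", status);
        Ok(())
    }
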
5c852d5b82 tokio: adapt to 1.0 runtime changes
enter() now returns a guard, and the builder got revamped to make the
choice between MT and current thread explicit.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-14 16:01:33 +01:00
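
For example:

    use tokio::runtime;

    fn main() {
        // explicit choice between multi-threaded and current-thread scheduler
        let rt = runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .unwrap();
        // enter() now returns a guard instead of taking a closure
        let _guard = rt.enter();
        rt.block_on(async { println!("running on tokio 1.0") });
    }
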
484172b5f8 tokio 1.0: AsyncRead/Seek with ReadBuf
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-14 16:01:33 +01:00
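
The new trait shape, in a minimal implementation:

    use std::pin::Pin;
    use std::task::{Context, Poll};
    use tokio::io::{AsyncRead, ReadBuf};

    struct Zeroes;

    impl AsyncRead for Zeroes {
        // tokio 1.0: poll_read fills a ReadBuf and returns Ok(()) instead
        // of returning the number of bytes read
        fn poll_read(
            self: Pin<&mut Self>,
            _cx: &mut Context<'_>,
            buf: &mut ReadBuf<'_>,
        ) -> Poll<std::io::Result<()>> {
            let n = buf.remaining();
            buf.put_slice(&vec![0u8; n]);
            Poll::Ready(Ok(()))
        }
    }
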
d148958b67 proxmox 0.10: use tokio::time::timeout directly
TimeoutFutureExt is no more

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-14 16:01:33 +01:00
0a8d773ad0 tokio 1.0: delay -> sleep
almost the same thing, new name(s), no longer Unpin

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-14 16:01:33 +01:00
427d90e6c1 update to tokio 1.0
and various related crates

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-14 16:01:33 +01:00
9b2e4079d0 d/control: sort and fix whitespace errors
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-14 15:11:06 +01:00
1a0b410554 manager: user/token list: fix rendering 0 (never) expire date
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-14 13:59:08 +01:00
2d50a6192f tape: sg-tape-cmd - add more ways to specify devices 2021-01-14 13:05:26 +01:00
781da7f6f0 tape: add --inventorize flag to read-label API/CLI 2021-01-14 11:51:23 +01:00
646221cc29 ui: window/{AddWebauthn, TfaEdit}: fix spacing/border of the windows
the password field should not be indented differently than the rest of
the fields, and we never have a border on the panels

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-13 16:46:47 +01:00
b168a27f73 ui: window/AddTotp: fix spacing styling of form fields
by moving the lower fields into the form itself and dropping the padding

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-13 16:46:47 +01:00
a442bd9792 ui: window/AddTfaRecovery: fix style of TfaRecoveryShow window
to have a more similar layout/spacing to our other windows

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-13 16:46:47 +01:00
884fec7735 ui: window/AddTfaRecovery: rewrite to a Proxmox.window.Edit
we can reuse the edit window from widget toolkit for the most part
this solves some spacing and layout issues and is less code

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-13 16:46:47 +01:00
1cb89f302f ui: config/TfaView: disable Remove button by default
gets enabled when an item is clicked

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-13 16:46:47 +01:00
da36bbe756 ui: LoginView: remove not used viewModel
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-13 16:46:47 +01:00
25e464c5ce tape: MediaPool - allow to allocate free tapes 2021-01-13 14:25:51 +01:00
8446fbca85 tape: rename changer_id to label_text 2021-01-13 13:26:59 +01:00
9738dd545f tape: docu - explain manual backups and tape cleaning 2021-01-12 17:26:15 +01:00
0bce2118e7 tape: improve docu 2021-01-12 16:37:23 +01:00
6543214dde tape: MediaListEntry - add ctime 2021-01-12 12:01:21 +01:00
d91c6fd4e1 ui: tfa: drop bogus gettext of empty string
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-12 11:44:05 +01:00
711d1f6fc3 ui: notify options: Remove gettext for root@pam
Translating root@pam is not useful, especially as the empty text symbolises the
default value.

Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
2021-01-12 11:41:24 +01:00
e422beec74 fix #3245: only use default schedule for new jobs
an empty schedule means 'none', so do not fill it with the default
in case we edit an existing job (like we do already for sync jobs)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-12 10:26:59 +01:00
a484c9cf96 tape: automatically reload tapes inside autoloader
We always automatically unload tapes to free library slots,
so it should not happen that an ejected tape resides inside the drive.

This is just a safeguard to handle the situation in case it happens ...

You can manually produce the situation by ejecting a tape without unloading:

 mt -f /dev/nst0 eject

Note: Our "proxmox-tape eject" does automatic unload
2021-01-12 09:49:05 +01:00
5654d8ceba tape: make eject/export more reliable, improve logging 2021-01-12 09:16:16 +01:00
31cf625af5 tape: improve backup logs 2021-01-11 13:23:12 +01:00
93be18ffd2 tape: fix tape alert flag values 2021-01-11 13:23:12 +01:00
e96464c795 d/control bump
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 12:09:19 +01:00
ad0ed40a59 api: return "invalid" as CSRF token for partial tickets
So that old clients don't `unwrap` a `None` value.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:23:13 +01:00
63fd8e58b2 gui: masks for: adding recovery and removals
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:23:13 +01:00
758a827c2d gui: add load mask during webauthn api calls
so that if we run into the 3s delay due to the wrong
password the window is properly masked

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:23:13 +01:00
7ad33e8052 tfa: use UNAUTHORIZED http status in password check
to trigger our 3s delay in the rest handler

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:23:13 +01:00
abfe0c0e70 tfa: fixup for challenge file split
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:23:13 +01:00
f22dfb5ece tfa: remove tfa user when a user is deleted
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:23:10 +01:00
4bda51688b tfa: improve user existence check
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:23:03 +01:00
eab25e2f33 tfa: allow deletion of entries of non-existent users
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:23:03 +01:00
94bd11bae2 typo fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:23:03 +01:00
759af9f00c tfa api: return types and 'pub' structs/methods
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:23:03 +01:00
f58e5132aa tfa: entry access/iteration cleanup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:23:03 +01:00
d831846706 tfa: r#type parameter name
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:23:03 +01:00
1fc9ac0433 tfa: _entry api method name suffix consistency
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:23:03 +01:00
5c48d0af1f tfa gui: fix adding recovery keys as user
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:22:32 +01:00
30fb19be35 tfa view: html-escape description text
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:22:32 +01:00
fbeac4ea28 gui: tfa support
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:22:32 +01:00
7f066a9b21 proxy: expose qrcodejs
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:22:32 +01:00
c5a767cd1d depend on libjs-qrcodejs
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:22:32 +01:00
027ef213aa api: tfa management and login
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:22:32 +01:00
dc1fdd6267 config: add tfa configuration
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:22:32 +01:00
96918252e5 buildcfg: add rundir helper macro
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:22:32 +01:00
014dc5f9d7 tools: add create_run_dir helper
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:22:32 +01:00
59e94227af add tools::serde_filter submodule
can be used to perform filtering at parse time

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:22:32 +01:00
e84b801c2e tape: improve retention period docu 2021-01-11 07:11:17 +01:00
6638c034d2 tape: remove unused eject_on_unload method 2021-01-10 16:20:18 +01:00
04df41cec1 tape: more MediaChange cleanups
Try to provide a generic implementation for complex operations:

- unload_to_free_slot
- load_media
- export media
- clean drive
- online_media_changer_ids
2021-01-10 15:32:52 +01:00
483da89d03 tape: improve export media to directly export from drive, add CLI 2021-01-10 13:44:44 +01:00
c92e3832bf tape: cleanup: s/transfer/transfer_media/, avoid compiler warnings 2021-01-10 12:18:30 +01:00
edb90f6afa tape: backup - implement export-media-set option 2021-01-10 11:59:55 +01:00
0057f0e580 tape: MediaChange - add transfer, implement export 2021-01-10 11:51:09 +01:00
e6217b8b36 tape: renamed src/tape/changer/linux_tape.rs -> src/tape/changer/mtx.rs 2021-01-10 10:07:40 +01:00
6fe16039b9 tape: simplify media changer implementation - new struct MtxMediaChanger 2021-01-10 10:02:01 +01:00
42967bf185 tape: backup - implement --eject-media option 2021-01-09 15:17:03 +01:00
5843268c47 tape: abort backup when we detect critical tape alert flags 2021-01-09 12:34:00 +01:00
7273ba3de2 tape: change default media set naming template to "%c" 2021-01-09 10:51:51 +01:00
0bf1c314da tape: show catalog status in media list 2021-01-09 10:24:48 +01:00
c7926d8e8c tape: split MediaSet into extra file 2021-01-09 08:54:58 +01:00
44ce25e7ac tape: docu - improve Administration section 2021-01-08 19:17:31 +01:00
3a2cc5c66e tape: minor docu update in retention policy 2021-01-08 19:01:38 +01:00
3838ce3330 tape: add retention policy docu 2021-01-08 17:34:58 +01:00
59217472aa tape: improve media set docu 2021-01-08 16:53:46 +01:00
df69a4fc59 tape: implement drive clean 2021-01-08 11:32:56 +01:00
25d3965769 tape: correctly skip cleaning tapes (not regular tapes) 2021-01-08 09:16:42 +01:00
08d8b2a4fd tape: add some media pool docu 2021-01-08 08:46:25 +01:00
879569d73f tape: changer transfer - make name parameter optional 2021-01-07 17:09:47 +01:00
b63f833d36 tape: fix parameter name - s/slot/source-slot/ 2021-01-07 15:39:25 +01:00
482c6e33dd tape: changer status command: make changer name optional 2021-01-07 15:12:19 +01:00
46a1863f88 tape: improve MediaChange trait
We expose the whole MtxStatus, and we can load/store from/to
specified slot numbers.
2021-01-07 14:26:43 +01:00
632756b6fb tape: more docs 2021-01-06 16:13:58 +01:00
04eba29c55 tape: document tape drive configuration 2021-01-06 16:00:31 +01:00
0912878ecf tape: document new export-slots feature 2021-01-06 14:11:35 +01:00
d5035c5600 tape: mtx_status - consider new export-slots property 2021-01-06 11:53:33 +01:00
38ae42b11a tape: changer - add export-slot config 2021-01-06 11:06:50 +01:00
a174854a0d tape: improve tape changer docs 2021-01-06 09:45:36 +01:00
c4b2b9ab41 tape: only query volume stats if we can read MAM 2021-01-06 09:20:36 +01:00
ef942e04c2 tape: add function to classify tape-alert-flags 2021-01-05 17:23:30 +01:00
f54cd66924 ui: running tasks: Use gettext for column labels
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
2021-01-05 13:53:33 +01:00
b40ab10d38 tape: add volume_mounts and medium_passes to LinuxDriveAndMediaStatus 2021-01-05 13:43:17 +01:00
f8ccbfdedd tape: implement read_volume_statistics 2021-01-05 12:58:18 +01:00
470f1c798a tape: status - show tape alert flags 2021-01-04 13:15:30 +01:00
5c012b392a tape: use LP 12h TapeAlert Response to query tape alert flags 2021-01-04 13:14:02 +01:00
165b641c1d tape: changer status - show full slots (for cartridge without barcode) 2021-01-04 12:06:05 +01:00
66e42bec05 tape: further PoolWriter cleanups 2021-01-03 12:08:40 +01:00
c503ea7045 tape: cleanup - rename 'info' to 'media_id'
Second try.
2021-01-03 11:38:00 +01:00
745ec187ce Revert "tape: cleanup - rename 'info' to 'media_id'"
This reverts commit f046313c0e.

media_id is already used as a parameter, so this commit is totally buggy.
2021-01-03 11:14:58 +01:00
f046313c0e tape: cleanup - rename 'info' to 'media_id' 2021-01-03 10:37:42 +01:00
74595b8821 tape: sg-tape-cmd tape-alert-flags 2021-01-03 10:09:43 +01:00
c9fdd142a4 tape: commit missing file 2021-01-02 13:39:34 +01:00
abaa6d0ac9 tape: decode TapeAlertFlags in cartridge-memory command 2021-01-02 10:55:30 +01:00
cfae8f0656 tape: merge MediaStateDatabase into Inventory 2021-01-01 16:15:13 +01:00
54f4ecd46a tape: implement MediaPool flag to consider offline media
For standalone tape drives.
2021-01-01 10:03:59 +01:00
1835d86e9d gui: update tape job descriptions 2020-12-31 10:37:09 +01:00
b9b4b31284 tape: add basic restore api/command 2020-12-31 10:26:48 +01:00
b4772d1c43 tape: new inventory helper - lookup_media_set_pool 2020-12-31 10:03:17 +01:00
9933dc3133 update TODO 2020-12-31 08:38:22 +01:00
08ac90f920 api: allow tokens to list users
their owner, or all users if they have the appropriate privileges.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-31 08:29:49 +01:00
13f5863561 api: improve error messages for restricted endpoints
the old variant attempted to parse a tokenid as userid and returned the
cryptic parsing error to the client, which is rather confusing.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-31 08:29:09 +01:00
81764111fe tape: media_change - log all errors 2020-12-30 19:17:18 +01:00
cb022525ff tape: only log to stdout in CLI environment 2020-12-30 19:01:39 +01:00
75656a78c6 tape: improve inline docu 2020-12-30 17:28:33 +01:00
284eb5daff tape: cleanup/simplify media_change code 2020-12-30 17:16:57 +01:00
ff58c51919 tape: improve media request/load 2020-12-30 13:09:28 +01:00
2fb1bdda20 verify-api: fix allOf duplicates check
it triggered with a wrongly-formatted message on schemas that did NOT
contain any duplicates..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-30 12:36:00 +01:00
12299b333b tape: set minimal media label length to 2 2020-12-30 10:15:02 +01:00
b017bbc441 tape: add restore code, implement catalog api/command 2020-12-30 09:48:18 +01:00
9e8c0d2e33 tape: cleanup - remove debug messages 2020-12-30 08:41:30 +01:00
250c29edd2 tape: correctly sort media api entries 2020-12-29 12:09:51 +01:00
c431659d05 cleanup: remove debug output 2020-12-29 11:59:57 +01:00
a33389c391 tape: implement media content list api 2020-12-29 11:58:26 +01:00
3460565414 tape: create the MediaCatalog when we label a tape 2020-12-29 10:55:20 +01:00
26b62138ee cleanup: disable debug message when we detect a stopped worker task 2020-12-29 10:53:16 +01:00
afb0220642 tape: cleanup LinuxDriveStatus - make density optional 2020-12-29 09:10:30 +01:00
0993923ed5 tape: factor out get_drive_and_media_status 2020-12-29 08:39:06 +01:00
e0362b0d0f tape: correctly parse mtx import/export slots 2020-12-28 13:32:56 +01:00
df3a74d7e0 debian: correctly install sg-tape-cmd setuid binary 2020-12-28 13:22:17 +01:00
d5d457e667 fix typo in Makefile 2020-12-28 11:41:10 +01:00
b27c32821c tape: install new sg-tape-cmd setuid binary 2020-12-28 11:10:25 +01:00
76b15a035f tape: MediaCatalog: write magic number before content 2020-12-26 11:05:25 +01:00
eb8feb1281 tape: add LTO1 to TapeDensity 2020-12-26 10:48:32 +01:00
fc6ce9835b tape: fix non-rewinding tape device check 2020-12-25 15:38:29 +01:00
8ae9f4efc2 tape: minor cleanups 2020-12-25 13:45:26 +01:00
c9d13b0fc4 tape: expose check_tape_is_linux_tape_device 2020-12-24 15:51:49 +01:00
bfacc1d8c3 tape: cleanup - factor out open_linux_tape_device 2020-12-24 11:24:45 +01:00
02d484370f fix build depends 2020-12-23 11:54:44 +01:00
5ae86dfaa1 tape: return media usage info with status command 2020-12-23 11:24:34 +01:00
dbe7e556b0 tape: implement binding for libsgutils2
So that we can read cartridge memory without calling "sg_raw". In the
future, we may need further low-level commands to control the tape.
2020-12-23 09:44:53 +01:00
4799280ccd http_client: add timeouts for critical connects
Use timeout futures for sections that might hang in certain error
conditions. This is mostly intended to be used as a safeguard, not a
first line of defense - i.e. best-effort avoidance of total hangs.

Not every future used for the HttpClient/H2Client is changed, only those
where a quick response is to be expected. For example, the response
reading futures are left alone, so data transfer is never capped with
timeout, only the initial server connect.

It is also used for upgrading to H2 connections, as that can take a long
time on overloaded servers.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-12-22 13:31:10 +01:00
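A minimal sketch of the pattern, using tokio::time::timeout (helper name
and error texts are illustrative, not the actual HttpClient code):

    use std::time::Duration;

    use anyhow::{bail, Error};
    use tokio::net::TcpStream;

    // Bound the connect with a hard upper limit, so a dead peer cannot
    // stall the client forever.
    async fn connect_with_timeout(addr: &str, secs: u64) -> Result<TcpStream, Error> {
        match tokio::time::timeout(Duration::from_secs(secs), TcpStream::connect(addr)).await {
            Ok(Ok(stream)) => Ok(stream),                         // connected in time
            Ok(Err(err)) => bail!("connect failed: {}", err),     // real I/O error
            Err(_) => bail!("connect timed out after {}s", secs), // timeout elapsed
        }
    }

The same wrapper applies to the H2 upgrade future; response bodies stay
uncapped, as described above.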
cb4865466e depend on proxmox 0.9.1 2020-12-22 13:30:41 +01:00
cb80d900b3 tape: add drive status api 2020-12-22 10:42:22 +01:00
ee01737e87 tape: rename 'mam' api to 'cartridge-memory' 2020-12-22 09:27:34 +01:00
2012825913 depend on proxmox 0.9.0 2020-12-22 08:52:24 +01:00
eb5e3420ae tests: verify-api: check AllOf schemas
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-12-22 07:31:38 +01:00
b2362a1207 adaptions for proxmox 0.9 and proxmox-api-macro 0.3
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-12-22 07:31:05 +01:00
54d968664a tape: update user docu 2020-12-21 12:13:35 +01:00
1e20f819d5 tape: add command to read cartridge memory (MAM)
This adds an additional dependency on sg3-utils (small).
2020-12-21 12:12:33 +01:00
8001c82e81 tape: update user docu - howto label tapes 2020-12-20 10:41:40 +01:00
baefbc444e tape: update user docu 2020-12-20 09:16:09 +01:00
4a227b54bf add LTO barcode generator App 2020-12-19 17:39:48 +01:00
8a192bedde tape: update user docu 2020-12-19 16:56:54 +01:00
d5efa18ae4 tape: update user docu 2020-12-19 15:13:38 +01:00
5f79dc2805 tape: start user documentation 2020-12-19 11:14:56 +01:00
9aa58f0143 cleanup: rename mtfsf into forward_space_count_files 2020-12-18 16:57:49 +01:00
8835664653 tape: add tape backup api 2020-12-18 15:32:12 +01:00
d37da6b7fc tape: add PoolWriter 2020-12-18 15:27:44 +01:00
b9ee86efe1 tape: use SnapshotReader to create snapshot archive 2020-12-18 12:11:29 +01:00
d108b610fd tape: fix write_media_set_label - move to correct position 2020-12-18 12:11:29 +01:00
0ec79339f7 tools/daemon: improve reload behaviour
It seems that sometimes the child process' signal gets handled before
the parent process' signal. Systemd then ignores the child's signal
(finished reloading) and only goes into the reloading state because of
the parent; that state then never finishes.

Instead, wait for the state to change to 'reloading' after sending that
signal in the parent, and only fork afterwards. This way we ensure that
systemd knows about the reloading before we actually try to do it.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Fabian Ebner <f.ebner@proxmox.com>
2020-12-18 10:30:37 +01:00
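A rough sketch of the ordering (helper names hypothetical; the real daemon
notifies systemd through its own sd_notify wrapper rather than shelling out):

    use std::process::Command;
    use std::{thread, time::Duration};

    use anyhow::{bail, Error};

    // Ask systemd whether the unit has reached the 'reloading' state yet.
    fn unit_is_reloading(unit: &str) -> Result<bool, Error> {
        let output = Command::new("systemctl")
            .args(&["show", "--property=ActiveState", unit])
            .output()?;
        Ok(String::from_utf8_lossy(&output.stdout).trim() == "ActiveState=reloading")
    }

    // Parent: after signalling RELOADING=1, poll until systemd has
    // registered the state change, and only fork the child afterwards.
    fn wait_for_reloading_state(unit: &str) -> Result<(), Error> {
        for _ in 0..50 {
            if unit_is_reloading(unit)? {
                return Ok(());
            }
            thread::sleep(Duration::from_millis(100));
        }
        bail!("unit {} never entered the 'reloading' state", unit);
    }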
2afdc7f27d tape: MediaPool::with_config() - remove name parameter
Not required, because config already contains the pool name.
2020-12-18 08:14:24 +01:00
26aa9aca40 tape: return current_file_number as u64 2020-12-18 07:44:50 +01:00
3e2984bcb9 tools/process_locker: Decrement writer count in drop handler
of ProcessLockSharedGuard.

We use a counter to determine if we can unlock the file again, but
we never actually decremented the writer count, so we held the
lock forever.

This fixes the issue that we could not start a garbage collect after
a reload, as long as the old process is still running, even when that
process has no active backup anymore but another long running task
(e.g. file download, terminal, etc.).

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-12-18 07:15:08 +01:00
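The shape of the fix, as a minimal sketch (simplified; the real guard also
wraps the flock on the lock file itself):

    use std::sync::{Arc, Mutex};

    struct LockState {
        writers: usize, // shared guards still alive
    }

    struct ProcessLockSharedGuard {
        state: Arc<Mutex<LockState>>,
    }

    impl Drop for ProcessLockSharedGuard {
        fn drop(&mut self) {
            let mut state = self.state.lock().unwrap();
            state.writers -= 1; // this decrement was missing before
            if state.writers == 0 {
                // last shared guard gone: only now may the file lock be released
            }
        }
    }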
a7a5406c32 acl: rustfmt module
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-18 07:07:01 +01:00
4f727a783e acl: reformat privileges
for better readability, and tell rustfmt to leave those definitions
alone.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-18 07:05:45 +01:00
23dc68fdea acl: add docs and adapt visibility
document all public things, add some doc links and make some
previously-public things only available for test cases or within the
crate:

previously public, now private:
- AclTreeNode::extract_user_roles (we have extract_roles())
- AclTreeNode::extract_group_roles (same)
- AclTreeNode::delete_group_role (exists on AclTree)
- AclTreeNode::delete_user_role (same)
- AclTreeNode::insert_group_role (same)
- AclTreeNode::insert_user_role (same)
- AclTree::write_config (we have save_config())
- AclTree::load (we have config()/cached_config())

previously public, now crate-internal:
- AclTree::from_raw (only used by tests)
- split_acl_path (used by some test binaries)

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-18 07:05:11 +01:00
b532dd00c4 tape: add helper to read snapshot contents
- lock the snapshot for reading
- use openat to open files
- provides an iterator over all chunks
2020-12-17 13:07:52 +01:00
c01742855a KeyConfig: bail on wrong fingerprint
instead of just logging the error. this should never happen in practice
unless someone is messing with the keyfile, in which case, it's better
to abort.

update tests accordingly (wrong fingerprint should fail, no fingerprint
should get the expected one).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-17 11:27:06 +01:00
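Schematically (types and message simplified), the check now aborts instead
of logging:

    use anyhow::{bail, Error};

    fn check_fingerprint(stored: Option<&str>, calculated: &str) -> Result<(), Error> {
        if let Some(stored) = stored {
            if stored != calculated {
                // previously only logged; now a hard error
                bail!("wrong key fingerprint: stored {}, calculated {}", stored, calculated);
            }
        }
        Ok(())
    }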
9c953dd260 tape: add code to write backup snapshot files (without chunks) to tape 2020-12-17 08:28:47 +01:00
3fbf2d2fcd tape: cleanup MediaCatalog 2020-12-17 08:05:53 +01:00
e0af222ec3 KeyConfig: always calculate fingerprint
and warn if stored and calculated fingerprint don't match.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-17 06:52:55 +01:00
73b5011786 KeyConfig: add encrypt/decrypt test
the RSA key and the encryption key itself are hard-coded to avoid
stalling the test runs because of lack of entropy, they have no special
significance otherwise.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-17 06:47:45 +01:00
2ea5abcd65 docs: replace openssl command with client
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-17 06:46:59 +01:00
7137630d43 client: add 'import-with-master-key' command
to import an encrypted encryption key using a master key.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-17 06:46:24 +01:00
8acfd15d6e key: move RSA-encryption to KeyConfig
since that is what gets encrypted, and not a CryptConfig.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-17 06:43:34 +01:00
48fbbfeb7e fix #3197: skip fingerprint check when restoring key
when restoring an encrypted key, the original one is obviously not
available to check the fingerprint with.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-17 06:37:54 +01:00
9990af3042 master key: store blob name in constant
since we will use it in more than one place.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-17 06:36:06 +01:00
fe6c19383b tape: remove MediaLabelInfo, use MediaId instead
The additional content_uuid was quite useless...
2020-12-16 13:31:32 +01:00
42150d263b update pxar dependency to 0.6.2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-16 13:13:31 +01:00
9839d3f778 tape: improve docu 2020-12-16 12:43:51 +01:00
dd59e3c2a1 tape: improve docu 2020-12-16 12:23:52 +01:00
0b7432ae09 tape: add chunk archive reader/writer 2020-12-16 12:08:34 +01:00
c1c2c8f635 tape: cleanup MediaLocation type for direct use with API 2020-12-16 10:49:01 +01:00
7680525eec docs: prune-sim: followup: add missing semicolon
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-16 10:08:11 +01:00
42298d5896 tape: add magic number to identify media catalog files 2020-12-16 09:00:14 +01:00
39478aa52c prune sim: correctly keep track of already included backups
This needs to happen in a separate loop, because some time intervals are not
subsets of others, i.e. weeks and months. Previously, with a daily backup
schedule, having:
* a backup on Sun, 06 Dec 2020 kept by keep-daily
* a backup on Sun, 29 Nov 2020 kept by keep-weekly
would lead to the backup on Mon, 30 Nov 2020 to be selected for keep-monthly,
because the iteration did not yet reach the backup on Sun, 29 Nov 2020 that
would mark November as being covered.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-12-15 14:03:18 +01:00
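The simulator itself is JavaScript; here is a Rust sketch of the two-pass
idea (names and data layout illustrative):

    use std::collections::HashSet;

    // `backups` are (timestamp, month-key) pairs sorted newest first;
    // `kept` marks what keep-daily/keep-weekly already selected.
    fn select_keep_monthly(backups: &[(i64, String)], kept: &mut [bool], limit: usize) {
        // Pass 1: months already covered by a backup kept for another
        // reason. This must be a separate loop: a keep-weekly hit late in
        // the list can cover a month whose candidates appear earlier.
        let mut covered: HashSet<&String> = backups
            .iter()
            .zip(kept.iter())
            .filter_map(|((_, month), &k)| if k { Some(month) } else { None })
            .collect();

        // Pass 2: select at most `limit` backups from uncovered months.
        let mut selected = 0usize;
        for (idx, (_, month)) in backups.iter().enumerate() {
            if selected == limit {
                break;
            }
            if covered.insert(month) {
                kept[idx] = true;
                selected += 1;
            }
        }
    }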
6a99b930c4 followup: use arrow function for sorting
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-15 13:45:51 +01:00
f6ce45b373 prune sim: fix #3192: by fixing usage of sort()
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-12-15 13:45:51 +01:00
205e187613 tape: add MediaCatalog implementation 2020-12-15 13:40:49 +01:00
a78348acbb tape: rename DriveLabel to MediaLabel 2020-12-14 17:37:16 +01:00
410611b4f2 tape: improve file format docu 2020-12-14 17:29:57 +01:00
af07ec8f29 tape: minor code cleanup 2020-12-14 16:56:26 +01:00
3f803af00b tape: scan - print more debug info 2020-12-14 13:16:18 +01:00
ac461bd651 tape: implement scan command (useful for debug) 2020-12-14 12:55:49 +01:00
ce955e1635 tape: implement eod cli command (debug tool) 2020-12-14 09:56:59 +01:00
e20d008c6a tape: rename cli 'media media-destroy' to 'media destroy' 2020-12-14 09:30:32 +01:00
fb657d8ee5 tape: implement destroy_media 2020-12-14 08:58:40 +01:00
fba0b77469 tape: add media api 2020-12-14 07:55:57 +01:00
b5c1296eaa tape: make changer get_status async 2020-12-14 07:14:24 +01:00
065df12872 tape: split api type definitions for changers into extra file 2020-12-13 09:31:02 +01:00
7e1d4712b8 tape: rename CHANGER_ID_SCHEMA to CHANGER_NAME_SCHEMA 2020-12-13 09:22:08 +01:00
49c965a497 tape: rename DRIVE_ID_SCHEMA to DRIVE_NAME_SCHEMA 2020-12-13 09:18:16 +01:00
6fe9aedd0b tape: correctly call Async handler in proxmox-tape 2020-12-12 09:58:47 +01:00
42cb9bd6a5 tape: avoid executor blocking in changer api 2020-12-12 09:45:08 +01:00
66dbe5639e tape: avoid executor blocking in drive API
By using tokio::task::spawn_blocking().
2020-12-12 09:20:04 +01:00
2d87f2fb73 bump version to 1.0.6-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-11 14:19:28 +01:00
4c81273274 debian: just install whole images directory
fixes build for recently added tape icon (and includes it for real)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-11 14:19:28 +01:00
73b8f6793e tape: add svg icon 2020-12-11 13:02:23 +01:00
663ef85992 tape: use WorkerTask for erase and rewind 2020-12-11 11:19:33 +01:00
e92c75815b tape: split inventory api
inventory: sync, list labels with uuids,
update_inventory: WorkerTask, updates database
2020-12-11 10:42:29 +01:00
6dbad5b4b5 tape: run label commands as WorkerTask (threads) 2020-12-11 09:10:22 +01:00
bff7e3f3e4 tape: implement barcode-label-media 2020-12-11 07:50:19 +01:00
83abc7497d tape: implement inventory command 2020-12-11 07:39:28 +01:00
8bc5eebeb8 depend on package mt-st
We do not use the mt utility directly, but the package also provides
a udev helper to correctly initialize tape drives (stinit). Also,
the mt utility is helpful for debugging tape issues.
2020-12-11 06:38:45 +01:00
1433b96ba0 control.in: fix indentation
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-12-11 06:31:30 +01:00
be1a8c94ae fix build: add missing file 2020-12-10 13:40:20 +01:00
4606f34353 tape: implement read-label command 2020-12-10 13:20:39 +01:00
7bb720cb4d tape: implement label command 2020-12-10 12:30:27 +01:00
c4d8542ec1 tape: add media pool handling 2020-12-10 11:41:35 +01:00
9700d5374a tape: add media pool cli 2020-12-10 11:13:12 +01:00
05e90d6463 tape: add media pool config api 2020-12-10 10:52:27 +01:00
55118ca18e tape: correctly sort drive api subdir 2020-12-10 10:09:12 +01:00
f70d8091d3 tape: implement option changer-drive-id 2020-12-10 09:09:06 +01:00
a3c709ef21 tape: cli cleanup - avoid api redefinition 2020-12-10 08:35:11 +01:00
4917f1e2d4 tape: implement delete property for drive update command 2020-12-10 08:25:46 +01:00
93829fc680 tape: cleanup load-slot api 2020-12-10 08:04:55 +01:00
5605ca5619 tape: cli cleanup - rename scan-for-* into scan 2020-12-10 07:58:45 +01:00
e49f0c03d9 tape: implement load-media command 2020-12-10 07:52:56 +01:00
0098b712a5 tape: implement eject 2020-12-09 17:50:48 +01:00
5fb694e8c0 tape: implement rewind 2020-12-09 17:43:38 +01:00
583a68a446 tape: implement erase media 2020-12-09 17:35:31 +01:00
e6604cf391 tape: add command line interface proxmox-tape 2020-12-09 13:00:20 +01:00
43cfb3c35a tape: do not remove changer while still used 2020-12-09 12:55:54 +01:00
8a16c571d2 tape: add changer property to drive create api 2020-12-09 12:55:10 +01:00
314652a499 tape: set protected flag for configuration change api methods 2020-12-09 12:02:55 +01:00
6b68e5d597 client: move connect_to_localhost into client module 2020-12-09 11:59:50 +01:00
cafd51bf42 tape: add media state database 2020-12-09 11:21:56 +01:00
eaff09f483 update control file 2020-12-09 11:21:56 +01:00
9b93c62044 remove unused descriptions from api macros
these are now a hard error in the api macro

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-12-09 10:55:18 +01:00
5d90860688 tape: expose basic tape/changer functionality at api2/tape/ 2020-12-08 15:42:50 +01:00
5ba83ed099 tape: check digest on config update 2020-12-08 11:24:38 +01:00
50bf10ad56 tape: add changer configuration API 2020-12-08 09:04:56 +01:00
16d444c979 tape: add tape drive configuration API 2020-12-07 13:04:32 +01:00
fa9c9be737 tape: add tape device driver 2020-12-07 08:29:22 +01:00
2e7014e31d tape: add BlockedReader/BlockedWriter streams
This is the basic format used to write data to tapes.
2020-12-06 12:09:55 +01:00
a84050c1f0 tape: add BlockHeader impl 2020-12-06 10:26:24 +01:00
7c9835465e tape: add helpers to emulate tape read/write behavior 2020-12-06 09:41:16 +01:00
ec00200411 fix bug #3189: fix change_password permission checks, run protected 2020-12-05 16:20:29 +01:00
956e5fec1f depend on mtx (tape changer control)
A very small package with no additional dependencies.
2020-12-05 14:54:12 +01:00
b107fdb99a tape: add tape changer support using 'mtx' command 2020-12-05 14:54:12 +01:00
7320e9ff4b tape: add media inventory 2020-12-05 12:54:15 +01:00
c4d2d54a6d tape: define useful constants 2020-12-05 12:20:46 +01:00
1142350e8d tape: add media pool config 2020-12-05 11:59:38 +01:00
d735b31345 tape: add tape read trait 2020-12-05 10:54:38 +01:00
e211fee562 tape: add tape write trait 2020-12-05 10:51:34 +01:00
8c15560b68 tape: add file format definitions 2020-12-05 10:45:08 +01:00
327e93711f commit missing file: tape api type definitions 2020-12-04 16:00:52 +01:00
a076571470 tape support: add drive configuration 2020-12-04 15:42:32 +01:00
ff50c07ebf start experimental tape management GUI
You need to set the environment variable TEST_TAPE_GUI=1 to enable this.
The current GUI is only a placeholder.
2020-12-04 12:50:08 +01:00
179145dc24 backup/datastore: move manifest locking to /run
this fixes the issue that on some filesystems, you cannot recursively
remove a directory when you hold a lock on a file inside (e.g. nfs/cifs)

it is not really backwards compatible (so during an upgrade, there
could be two daemons holding the lock), but since the locking was
broken before (see previous patch) it should not really matter
(also it seems very unlikely that someone will trigger this)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-12-03 09:56:42 +01:00
6bd0a00c46 backup/datastore: really lock manifest on delete
'lock_manifest' returns a Result<File, Error>, so we always got a result,
even when we did not get the lock, but we acted like we had it.

Bubble the locking error up instead.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-12-02 14:37:05 +01:00
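The bug in miniature (names simplified): the returned Result must be
propagated, not just bound to a variable:

    use std::fs::File;
    use anyhow::Error;

    fn lock_manifest() -> Result<File, Error> {
        // ... acquire an exclusive lock on the manifest, or fail ...
        unimplemented!()
    }

    fn delete_snapshot() -> Result<(), Error> {
        // Before: `let _guard = lock_manifest();` compiled fine but silently
        // ignored a failed lock. The `?` bubbles the error up.
        let _guard = lock_manifest()?;
        // ... only now is it safe to remove the snapshot files ...
        Ok(())
    }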
f6e28f4e62 client/pull: log how many groups to pull were found
if no groups were found, the task log was very confusing as it
contained no real information about why nothing was synced, e.g.:

 Starting datastore sync job 'remote:datastore:local-datastore:s-79412799-e6ee'
 Sync datastore 'local-datastore' from 'remote/datastore'
 sync job 'remote:datastore:local-datastore:s-79412799-e6ee' end
 TASK OK

this patch simply logs how many groups were found and are about to be synced:

 Starting datastore sync job 'remote:datastore:local-datastore:s-79412799-e6ee'
 Sync datastore 'local-datastore' from 'remote/datastore'
 found 0 groups to sync
 sync job 'remote:datastore:local-datastore:s-79412799-e6ee' end
 TASK OK

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-12-02 07:22:50 +01:00
37f1b7dd8d docs: add more thoughts about chunk size 2020-12-01 10:28:06 +01:00
60e6ee46de doc: add some thoughts about large chunk sizes 2020-12-01 08:47:15 +01:00
2260f065d4 cleanup: use extra file for StoreProgress 2020-12-01 06:34:33 +01:00
6eff8dec4f cleanup: remove unnecessary StoreProgress clone() 2020-12-01 06:29:11 +01:00
7e25b9aaaa verify: use same progress as pull
percentage of verified groups, interpolating based on the snapshot count
within the group. In most cases, this will also be closer to the 'real'
progress, since added snapshots (those which will be verified) in active
backup groups will be roughly evenly distributed, while the number of total
snapshots per group will be heavily skewed towards those groups which
have existed the longest, even though most of those old snapshots will
only be re-verified very infrequently.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:22:55 +01:00
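The interpolation itself reduces to (illustrative helper, not the actual
StoreProgress API):

    // Percentage of verified groups, interpolated by how far we are
    // within the current group's snapshots.
    fn interpolated_progress(
        done_groups: u64,
        total_groups: u64,
        done_snapshots: u64,
        group_snapshots: u64,
    ) -> f64 {
        if total_groups == 0 {
            return 100.0;
        }
        let within_group = if group_snapshots == 0 {
            0.0
        } else {
            done_snapshots as f64 / group_snapshots as f64
        };
        (done_groups as f64 + within_group) * 100.0 / total_groups as f64
    }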
f867ef9c4a progress: add format variants
for iterating over a single group, or iterating just on the group level

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:22:12 +01:00
fc8920e35d pull: factor out interpolated progress
and add group/snapshot count info.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:13:11 +01:00
7f3b0f67e7 remove BackupGroup::list_groups
BackupInfo::list_backup_groups is identical code-wise, and makes more
sense as entry point for listing groups.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:09:44 +01:00
844660036b gc: don't limit index listing to same filesystem
WalkDir does not follow symlinks by default anyway, and this behaviour
is not documented anywhere. e.g., if a sysadmin mounts 'extra storage'
for some backup group or type (not knowing that only metadata is stored
in those directories), GC will ignore all the indices contained within
and happily garbage collect their chunks..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:07:09 +01:00
efcac39d34 gc: remove duplicate variable
list_images already returns absolute paths, we don't need to prepend
anything.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:06:51 +01:00
cb4b721cb0 gc: log index files found outside of expected scheme
for safety reasons, GC finds and marks all index files below the
datastore base path. as a result of regular operations, only index files
within the expected scheme of <TYPE>/<ID>/<TIMESTAMP> should exist.

add a small check + warning if the index list contains index files
outside of this expected scheme, so that an admin with shell access can
investigate.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:06:17 +01:00
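Such a check can be sketched as follows (helper and component count are
illustrative; an index file sits one level below the
<TYPE>/<ID>/<TIMESTAMP> snapshot directory):

    use std::path::Path;

    // True if `index` follows <base>/<TYPE>/<ID>/<TIMESTAMP>/<file>;
    // anything else deserves a warning for the admin to investigate.
    fn is_expected_index_path(base: &Path, index: &Path) -> bool {
        match index.strip_prefix(base) {
            Ok(rel) => rel.components().count() == 4,
            Err(_) => false,
        }
    }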
7956877f14 gc: shorten progress messages
we have messages starting the phases anyway, and limit the number of
progress updates so that context remains available at all times.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-12-01 06:04:13 +01:00
2241c6795f d/control bump
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 14:28:02 +01:00
43e60ceb41 file logger: remove test.log after test as well
and a doc formatting fixup

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 14:13:21 +01:00
b760d8a23f derive PartialEq for Userid
the manual implementation is equivalent

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 14:10:17 +01:00
2c1592263d tiny clippy hint
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 14:03:43 +01:00
616533823c don't enforce Vec and String in tools::join
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 13:56:59 +01:00
913dddea85 minor cleanup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 13:56:21 +01:00
3530430365 tools: avoid unnecessary copying of parameters/properties
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 13:53:49 +01:00
a4ba60be8f minor cleanups
whitespace, formatting and superfluous lifetime annotations

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 13:47:31 +01:00
99e98f605c network helpers: fix fd leak in get_network_interfaces
This one always leaked.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
935ee97b17 use fd_change_cloexec helper
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
6b9bfd7fe9 minor cleanup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
dd519bbad1 pxar: stricter file descriptor guards
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
35fe981c7d client: use tools::pipe instead of nix
nix::unistd::pipe returns unguarded RawFds which should be
avoided

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
b6570abe79 changes for proxmox 0.8
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
54813c650e bump proxmox dep to 0.8.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 11:25:53 +01:00
781106f8c5 ui: fix usage of findRecord
findRecord does not match exactly by default, but only against the
beginning, and case-insensitively. Change all calls to be case-sensitive
and an exact match (we never want the default behaviour, afaics).

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-27 07:20:32 +01:00
96f35520a0 bump version to 1.0.5-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-25 15:30:06 +01:00
490560e0c6 restore: print to STDERR
else restoring to STDOUT is broken..

Reported-by: Dominic Jäger <d.jaeger@proxmox.com>

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-25 14:38:02 +01:00
52f53d8280 control: update versions 2020-11-25 10:35:51 +01:00
27b8a3f671 bump version to 1.0.4-1 2020-11-25 08:03:11 +01:00
abf9b6da42 docs: fix renamed commands 2020-11-25 08:03:11 +01:00
0c9209b04c cli: rename command "upload-log" to "snapshot upload-log" 2020-11-25 07:57:39 +01:00
edebd52374 cli: rename command "forget" to "snapshot forget" 2020-11-25 07:57:39 +01:00
61205f00fb cli: rename command "files" to "snapshot files" 2020-11-25 07:57:39 +01:00
a303e00289 fingerprint: add new() method 2020-11-25 07:57:39 +01:00
af9f72e9d8 fingerprint: add bytes() accessor
needed for libproxmox-backup-qemu0

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-25 06:34:34 +01:00
5176346b30 ui: fix broken gettext use
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-25 00:21:17 +01:00
731eeef25b cli: use new alias feature for "snapshots"
Now maps to "snapshot list".
2020-11-24 13:26:43 +01:00
a65e3e4bc0 client: add 'snapshot notes show/update' command
to show and update snapshot notes from the cli

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-24 11:44:19 +01:00
027eb2bbe6 bump version to 1.0.3-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-24 08:56:18 +01:00
6982a54701 gui: add snapshot/file fingerprint tooltip
display the short key ID, like the backend's Display trait.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
035c40e638 list_snapshots: return manifest fingerprint
for display in clients.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
79c535955d refactor BackupInfo -> SnapshotListItem helper
before adding more fields to the tuple, let's just create the struct
inside the match arms to improve readability.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
8b7f8d3f3d expose previous backup time in backup env
and use this information to add more information to client backup log
and guide the download manifest decision.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
866c859a1e bump version to 1.0.2-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-24 08:33:20 +01:00
23e4e90540 verification: fix message in notification mail
the errors Vec can contain failed groups as well (e.g., if a group has
no or an invalid owner).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:33:20 +01:00
a4fa3fc241 verification job: log failed dirs
else users have to manually search through a potentially very long task
log to find the entries that are different. This is the same summary
printed at the end of a manual verify task.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:33:20 +01:00
81d10c3b37 cleanup: remove dead code 2020-11-24 08:03:00 +01:00
f1e2904150 paperkey: refactor common code
from the formatting functions to the main function, and pass along the key
data lines instead of the full string.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:57:21 +01:00
23f9503a31 client: check fingerprint after downloading manifest
this is stricter than the check that happened on manifest load, as it
also fails if the manifest is signed but we don't have a key available.

add some additional output at the start of a backup to indicate whether
a previous manifest is available to base the backup on.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:55:12 +01:00
a0ef68b93c manifest: check fingerprint when loading with key
otherwise loading will run into the signature mismatch which is
technically true, but not the complete picture in this case.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:49:51 +01:00
6b127e6ea0 fix #3139: add key fingerprint to manifest
if the manifest is signed / the contained archives/blobs are encrypted.
Stored in the 'unprotected' area, since there is already a strong binding
between key and manifest via the signature, and this avoids breaking
backwards compatibility for a simple usability improvement.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:45:11 +01:00
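Schematically, the manifest then carries an entry like this (field names as
shown are illustrative):

    use serde_json::json;

    fn main() {
        // The 'unprotected' object is not covered by the manifest
        // signature, so adding a field there stays backwards compatible.
        let manifest = json!({
            "backup-type": "vm",
            "backup-id": "100",
            "unprotected": {
                "key-fingerprint": "12:34:56:78:9a:bc:de:f0"
            }
        });
        println!("{}", serde_json::to_string_pretty(&manifest).unwrap());
    }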
5e17dbf2bb cli: cleanup 'key show' - use format_and_print_result_full
We now expose all key derivation functions on the cli, so users can
choose between scrypt or pbkdf2.
2020-11-24 07:32:34 +01:00
dfb04575ad client: add 'key show' command
for (pretty-)printing a keyfile.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:15:29 +01:00
6f2626ae19 client: print key fingerprint and master key
for operations where it makes sense.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:11:26 +01:00
37e60ddcde key: add fingerprint to key config
and set/generate it on
- key creation
- key passphrase change
- key decryption if not already set
- key encryption with master key

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:03:46 +01:00
05cdc05347 crypt config: add fingerprint mechanism
by computing the ID digest of a hash of a static string.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:03:16 +01:00
6364115b4b OnlineHelpInfo.js problems
Anybody know why I always get the following diff:
2020-11-23 12:57:41 +01:00
2133cd9103 update debian/control 2020-11-23 12:13:58 +01:00
01f84fcce1 ui: datastore content: use our keep field for group pruning
sets some defaults and provides the clear trigger, so less code and
slightly nicer UX.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-21 19:52:03 +01:00
08b3823025 bump dependency on proxmox to 0.7.1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-20 17:38:34 +01:00
968a0ab261 fix systemd-encoded upid strings in http client
since we systemd-encode parts of the upid string, and those can contain
characters that are invalid in URLs (e.g. '\'), we have to percent-encode
those

add a 'percent_encode_component' helper, so that we can maybe change
the AsciiSet for all uses at the same time

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-19 11:01:19 +01:00
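With the percent-encoding crate this boils down to something like the
following (the AsciiSet shown is deliberately strict and illustrative):

    use percent_encoding::{utf8_percent_encode, AsciiSet, NON_ALPHANUMERIC};

    // Encode one URL path component, so characters like '\' from
    // systemd-escaped UPID strings cannot break the request path.
    const COMPONENT_SET: &AsciiSet = NON_ALPHANUMERIC;

    pub fn percent_encode_component(comp: &str) -> String {
        utf8_percent_encode(comp, COMPONENT_SET).to_string()
    }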
21b552848a prune sim: make numberfields more similar to PBS's
by creating a new class that adds a clear trigger and also uses the
clear-trigger image. Code was taken from the one in PBS's prune window,
but we have default values here, so a bit of adapting was necessary. For
example, we don't want to reset to the original value (which might have
been one of the defaults) when clearing, but always to 'null'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-19 09:47:51 +01:00
fd19256470 gc: treat .bad files like regular chunks
Simplify the phase 2 code by treating .bad files just like regular
chunks, with the exception of stat logging.

To facilitate, we need to touch .bad files in phase 1. We only do this
under the condition that 1) the original chunk is missing (as before),
and 2) the original chunk is still referenced somewhere (since the code
lives in the error handler for a failed chunk touch, it only gets called
for chunks we expect to be there, i.e. ones that are referenced).

Untouched they will then be cleaned up after 24 hours (or after the last
longer-running task finishes).

Reason 2) is also a fix for .bad files not being cleaned up at all if
the original is no longer referenced anywhere (e.g. a user deleting all
snapshots after seeing some corrupt chunks appear).

cond_touch_path is introduced to touch arbitrary paths in the chunk
store with the same logic as touching chunks.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-18 14:04:49 +01:00
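The touch itself can be sketched with utimensat and UTIME_NOW (simplified,
without the chunk store's locking and error context):

    use std::ffi::CString;
    use std::os::unix::ffi::OsStrExt;
    use std::path::Path;

    use anyhow::{bail, Error};

    // Update atime/mtime of an arbitrary path in the chunk store, so GC
    // phase 2 treats it as recently used; works for .bad files too.
    fn cond_touch_path(path: &Path) -> Result<(), Error> {
        let c_path = CString::new(path.as_os_str().as_bytes())?;
        let now = libc::timespec { tv_sec: 0, tv_nsec: libc::UTIME_NOW };
        let times = [now, now];
        let res = unsafe { libc::utimensat(libc::AT_FDCWD, c_path.as_ptr(), times.as_ptr(), 0) };
        if res != 0 {
            bail!("utimensat failed on {:?}", path);
        }
        Ok(())
    }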
1ed022576c api: include store in invalid owner errors
since a group might exist in plenty of stores

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:11:24 +01:00
f6aa7b38bf drop now unused BackupInfo::list_backups
all global backup listing now happens via BackupGroup

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:11:21 +01:00
fdfcb74d67 api: filter snapshot counts
unprivileged users should only see the counts related to their part of
the datastore.

while we're at it, switch to a "list groups, filter groups, count
snapshots" approach (like list_snapshots) to speed up calls to this
endpoint when many unprivileged users share a datastore.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:05:50 +01:00
98afc7b152 api: make expensive parts of datastore status opt-in
used in the PBS GUI, but also for PVE usage queries which don't need all
the extra expensive information..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:05:47 +01:00
0d08fceeb9 improve group/snapshot listing
by listing groups first, then filtering, then listing group snapshots.

this cuts down the number of openat/getdirents calls for users that just
have a partial view of the datastore.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 10:37:04 +01:00
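In outline (types boiled down to plain data):

    struct BackupGroup {
        owner: String,
        snapshots: Vec<String>, // stand-in for the on-disk snapshot dirs
    }

    // List groups first, filter by privilege, and only then enumerate
    // snapshots; the expensive directory walk happens only for groups the
    // caller may actually see.
    fn list_visible_snapshots(groups: Vec<BackupGroup>, user: &str) -> Vec<String> {
        groups
            .into_iter()
            .filter(|g| g.owner == user)           // cheap per-group check
            .flat_map(|g| g.snapshots.into_iter()) // costly listing, only here
            .collect()
    }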
3c945d73c2 client/http_client: add put method
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 16:59:14 +01:00
58fcbf5ab7 client: expose all-file-systems option
Useful to avoid the need for a long (and possibly changing) list of include-dev
options in certain situations, e.g. nested ZFS file systems. The option is
already implemented and seems to work as expected. The checks for virtual
filesystems are not affected by this option.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-16 16:59:14 +01:00
3a3f31c947 ui: datastores: hide "no datastore" box by default
avoids showing it during store load; we do not know whether there are
any datastores at that point, and we already have a loading mask.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-16 16:59:14 +01:00
8fc63287df ui: improve comment behaviour for datastore Summary
when we could not load the config (e.g. missing permissions)
show the comment from the global datastore-list

also show a messagebox for a load error instead of setting
the text of the comment box

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 10:39:34 +01:00
172473e4de ui: DataStoreList: show message when there are no datastores
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 10:39:34 +01:00
76f549debb ui: DataStoreList: remove datastores also from hash
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 10:39:34 +01:00
c9097ff801 pxar: avoid including archive root's exclude patterns in .pxarexclude-cli
The patterns from the archive root's .pxarexclude file are already present in
self.patterns when encode_pxarexclude_cli is called. Pass along the number of
CLI patterns and slice accordingly.

Suggested-By: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 13:05:09 +01:00
fb01fd3af6 visibility cleanups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:53:50 +01:00
fa4bcbcad0 pxar: only generate .pxarexclude-cli if there were CLI parameters
previously, a .pxarexclude entry in the root of the archive caused the file
to be generated as well, because the patterns are read before calling
generate_directory_file_list, and within the function it wasn't possible to
distinguish between a pattern coming from the CLI and one coming from
archive/root/.pxarexclude

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:18:08 +01:00
189cdb7427 pxar: include .pxarexclude files in the archive
The documentation states:
.pxarexclude files are treated as regular files and will be included in the
backup archive.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:18:06 +01:00
874bd5454d pxar: fix anchored exclusion at archive root
There is no leading slash in an entry's full_path, causing an anchored
exclude at the root level to fail, e.g. having "/name" as the content of the
file archive/root/.pxarexclude didn't match the file archive/root/name

Fix this by prepending a leading slash before matching.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:18:04 +01:00
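The fix in miniature (the matcher is a simplified stand-in for the
pathpatterns crate's anchored semantics):

    // Entries' full_path values have no leading slash (e.g. "root/name"),
    // while anchored patterns start with '/' (e.g. "/name"). Prepend a
    // slash before matching so anchored excludes work at the root.
    fn is_excluded(full_path: &str, anchored_pattern: &str) -> bool {
        let path = format!("/{}", full_path); // the actual fix
        path == anchored_pattern || path.starts_with(&format!("{}/", anchored_pattern))
    }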
b649887e9a remove unused function
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:15:15 +01:00
8c62c15f56 followup: whitespace cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 11:02:45 +01:00
51ac17b56e api: apt/versions: fix running_kernel string for unknown package case
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-11-12 11:02:20 +01:00
fc5a012068 manager: versions: non-verbose should actually print server pkg info
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 10:28:03 +01:00
5e293f1315 apt: use typed response for get_versions
...and cleanup get_versions for manager CLI.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-12 10:15:32 +01:00
87367decf2 ui: tell ESLint to be strict in check target
the .lint-incremental target, which is implicitly used by the install
target, is still more forgiving to allow faster "change, build, test"
iteration when developing.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 09:54:39 +01:00
f792220dd4 d/control: update for new pin-project dependency
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 09:54:39 +01:00
97030c9407 cleanup clippy leftovers
this used to contain a pointer cast, now it doesn't

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 09:43:38 +01:00
5d1d0f5d6c use pin-project to remove more unsafe blocks
we already have it in our dependency tree, so use it

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 09:43:38 +01:00
294466ee61 manager: versions: unify printing
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
c100fe9108 add versions command to proxmox-backup-manager
Add the versions command to proxmox-backup-manager with a similar output
to pveversion [-v]. It prints the packages line by line with only the
package name, followed by the version and, for proxmox-backup and
proxmox-backup-server, some additional information (running kernel,
running version).

In addition it supports the optional output-format parameter which can
be used to print the complete data in either json, json-pretty or text
format. If output-format is specified, the --verbose parameter is
ignored and the detailed list of packages is printed.

With the addition of the versions command, the report is extended as
well.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-11-11 18:30:33 +01:00
e754da3ac2 api: versions: add version also in server package unknown case
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
bc1e52bc38 api: versions: rust fmt cleanups
line length limit is 100

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
6f0073bbb5 api: apt update info: do not serialize extra info if none
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
2decf85d6e add extra_info field to APTUpdateInfo
Add an optional string field to APTUpdateInfo which can be used for
extra information.

This is used for passing running kernel and running version information
in the versions API call together with proxmox-backup and
proxmox-backup-server.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-11-11 16:39:11 +01:00
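Together with the preceding commit, the field boils down to (only the
relevant part of the struct shown):

    use serde::Serialize;

    #[derive(Serialize)]
    struct APTUpdateInfo {
        package: String,
        version: String,
        /// Optional extra information, e.g. running kernel/version for
        /// the proxmox-backup packages; omitted from the JSON when None.
        #[serde(skip_serializing_if = "Option::is_none")]
        extra_info: Option<String>,
    }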
1d8f849457 api2/node/syslog: use 'real_service_name' here also
for now this only does the 'postfix' -> 'postfix@-' conversion;
fixes the issue that we only showed the 'postfix' service syslog
(which is rather empty in a default setup) instead of the instance one

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 16:36:42 +01:00
beb07279b6 log source of encryption key
This patch prints the source of the encryption key when running
operations with proxmox-backup-client.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-11-11 16:35:20 +01:00
8c6854c8fd inform user when using default encryption key
Currently if you generate a default encryption key:
`proxmox-backup-client key create --kdf none`

all backup operations which don't explicitly disable encryption will be
encrypted with this key.

I found it quite surprising that my backups were all encrypted without
me explicitly specifying either a key or an encryption mode.

This patch informs the user when the default key is used (and no
crypt-mode is provided explicitly)

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-11-11 16:35:20 +01:00
57f472d9bb report: use '$' instead of '#' for showing commands
since some files can contain the '#' character for comments (e.g.,
/etc/hosts)

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-11-11 16:19:37 +01:00
94ffca10a2 report: fix grammar error
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-11-11 16:19:33 +01:00
0a274ab0a0 ui: UserView: render name as 'Firstname Lastname'
instead of only the firstname

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 14:09:40 +01:00
c0026563b0 make user properties deletable
by using our usual pattern for the update call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 14:09:40 +01:00
e411924c7c rest: check for disabled token (user)
when authenticating a token, and not just when authenticating a
user/ticket.

Reported-By: Dominik Jäger <d.jaeger@proxmox.com>

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-11 12:21:29 +01:00
320 changed files with 32827 additions and 3199 deletions

View File: Cargo.toml

@ -1,6 +1,6 @@
 [package]
 name = "proxmox-backup"
-version = "1.0.1"
+version = "1.0.7"
 authors = [
     "Dietmar Maurer <dietmar@proxmox.com>",
     "Dominik Csapak <d.csapak@proxmox.com>",
@ -23,22 +23,22 @@ name = "proxmox_backup"
 path = "src/lib.rs"

 [dependencies]
-apt-pkg-native = "0.3.1" # custom patched version
+apt-pkg-native = "0.3.2"
 base64 = "0.12"
 bitflags = "1.2.1"
-bytes = "0.5"
+bytes = "1.0"
 crc32fast = "1"
 endian_trait = { version = "0.6", features = ["arrays"] }
 anyhow = "1.0"
 futures = "0.3"
-h2 = { version = "0.2", features = ["stream"] }
+h2 = { version = "0.3", features = [ "stream" ] }
 handlebars = "3.0"
 http = "0.2"
-hyper = "0.13.6"
+hyper = { version = "0.14", features = [ "full" ] }
 lazy_static = "1.4"
 libc = "0.2"
 log = "0.4"
-nix = "0.19"
+nix = "0.19.1"
 num-traits = "0.2"
 once_cell = "1.3.1"
 openssl = "0.10"
@ -46,31 +46,34 @@ pam = "0.7"
 pam-sys = "0.5"
 percent-encoding = "2.1"
 pin-utils = "0.1.0"
+pin-project = "1.0"
 pathpatterns = "0.1.2"
-proxmox = { version = "0.7.0", features = [ "sortable-macro", "api-macro", "websocket" ] }
+proxmox = { version = "0.10.1", features = [ "sortable-macro", "api-macro", "websocket" ] }
 #proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
 #proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
-proxmox-fuse = "0.1.0"
+proxmox-fuse = "0.1.1"
-pxar = { version = "0.6.1", features = [ "tokio-io", "futures-io" ] }
+pxar = { version = "0.8.0", features = [ "tokio-io" ] }
-#pxar = { path = "../pxar", features = [ "tokio-io", "futures-io" ] }
+#pxar = { path = "../pxar", features = [ "tokio-io" ] }
 regex = "1.2"
-rustyline = "6"
+rustyline = "7"
 serde = { version = "1.0", features = ["derive"] }
 serde_json = "1.0"
 siphasher = "0.3"
 syslog = "4.0"
-tokio = { version = "0.2.9", features = [ "blocking", "fs", "dns", "io-util", "macros", "process", "rt-threaded", "signal", "stream", "tcp", "time", "uds" ] }
+tokio = { version = "1.0", features = [ "fs", "io-util", "macros", "net", "parking_lot", "process", "rt", "rt-multi-thread", "signal", "time" ] }
-tokio-openssl = "0.4.0"
+tokio-openssl = "0.6.1"
+tokio-stream = "0.1.0"
-tokio-util = { version = "0.3", features = [ "codec" ] }
+tokio-util = { version = "0.6", features = [ "codec" ] }
 tower-service = "0.3.0"
 udev = ">= 0.3, <0.5"
 url = "2.1"
 #valgrind_request = { git = "https://github.com/edef1c/libvalgrind_request", version = "1.1.0", optional = true }
 walkdir = "2"
+webauthn-rs = "0.2.5"
 xdg = "2.2"
 zstd = { version = "0.4", features = [ "bindgen" ] }
 nom = "5.1"
-crossbeam-channel = "0.4"
+crossbeam-channel = "0.5"

 [features]
 default = []

View File: Makefile

@ -9,7 +9,8 @@ SUBDIRS := etc www docs

 # Binaries usable by users
 USR_BIN := \
 	proxmox-backup-client \
-	pxar
+	pxar \
+	pmtx

 # Binaries usable by admins
 USR_SBIN := \
@ -20,7 +21,7 @@ SERVICE_BIN := \
 	proxmox-backup-api \
 	proxmox-backup-banner \
 	proxmox-backup-proxy \
-	proxmox-daily-update \
+	proxmox-daily-update

 ifeq ($(BUILD_MODE), release)
 CARGO_BUILD_ARGS += --release
@ -141,6 +142,8 @@ install: $(COMPILED_BINS)
 		install -m755 $(COMPILEDIR)/$(i) $(DESTDIR)$(SBINDIR)/ ; \
 		install -m644 zsh-completions/_$(i) $(DESTDIR)$(ZSH_COMPL_DEST)/ ;)
 	install -dm755 $(DESTDIR)$(LIBEXECDIR)/proxmox-backup
+	# install sg-tape-cmd as setuid binary
+	install -m4755 -o root -g root $(COMPILEDIR)/sg-tape-cmd $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/sg-tape-cmd
 	$(foreach i,$(SERVICE_BIN), \
 		install -m755 $(COMPILEDIR)/$(i) $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/ ;)
 	$(MAKE) -C www install

View File: README.rst

@ -53,3 +53,83 @@ Setup:
   Note: 2. may be skipped if you already added the PVE or PBS package repository

 You are now able to build using the Makefile or cargo itself.
+
+Design Notes
+============
+
+Here are some random thoughts about the software design (unless I find a
+better place).
+
+Large chunk sizes
+-----------------
+
+It is important to note that large chunk sizes are crucial for performance.
+We have a multi-user system, where different people can do different
+operations on a datastore at the same time, and most operations involve
+reading a series of chunks.
+
+So what is the maximal theoretical speed we can get when reading a series
+of chunks? Reading a chunk sequence needs the following steps:
+
+- seek to the first chunk's start location
+- read the chunk data
+- seek to the next chunk's start location
+- read the chunk data
+- ...
+
+Let's use the following disk performance metrics:
+
+:AST: Average Seek Time (seconds)
+:MRS: Maximum sequential Read Speed (bytes/second)
+:ACS: Average Chunk Size (bytes)
+
+The maximum performance you can get is::
+
+  MAX(ACS) = ACS / (AST + ACS/MRS)
+
+Please note that chunk data is likely to be sequentially arranged on disk,
+but this is sort of a best-case assumption.
+
+For a typical rotational disk, we assume the following values::
+
+  AST: 10ms
+  MRS: 170MB/s
+
+  MAX(4MB)  = 115.37 MB/s
+  MAX(1MB)  =  61.85 MB/s
+  MAX(64KB) =   6.02 MB/s
+  MAX(4KB)  =   0.39 MB/s
+  MAX(1KB)  =   0.10 MB/s
+
+Modern SSDs are much faster; let's assume the following::
+
+  max IOPS: 20000 => AST = 0.00005
+  MRS: 500MB/s
+
+  MAX(4MB)  = 474 MB/s
+  MAX(1MB)  = 465 MB/s
+  MAX(64KB) = 354 MB/s
+  MAX(4KB)  =  67 MB/s
+  MAX(1KB)  =  18 MB/s
+
+Also, the average chunk size directly relates to the number of chunks
+produced by a backup::
+
+  CHUNK_COUNT = BACKUP_SIZE / ACS
+
+Here are some statistics from my developer workstation::
+
+  Disk Usage:   65 GB
+  Directories:  58971
+  Files:        726314
+  Files < 64KB: 617541
+
+As you can see, there are really many small files. If we did file-level
+deduplication, i.e. generated one chunk per file, we would end up with more
+than 700000 chunks.
+
+Instead, our current algorithm only produces large chunks with an average
+chunk size of 4MB. With the above data, this produces about 15000 chunks
+(a factor of 50 fewer).
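As a cross-check, the rotational-disk table can be reproduced with a few
lines (chunk sizes taken in binary units, MRS in decimal MB/s, which
matches the numbers given):

    fn main() {
        let ast = 0.010_f64;    // average seek time: 10 ms
        let mrs = 170.0e6_f64;  // max sequential read speed: 170 MB/s
        let mib = 1024.0 * 1024.0;
        for &acs in &[4.0 * mib, mib, 64.0 * 1024.0, 4.0 * 1024.0, 1024.0] {
            let max = acs / (ast + acs / mrs); // bytes per second
            println!("MAX({:>7} B) = {:>6.2} MB/s", acs as u64, max / mib);
        }
    }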

View File: TODO.rst

@ -35,3 +35,4 @@ Chores:

 Suggestions
 ===========

View File: debian/changelog (vendored)

@ -1,3 +1,117 @@
rust-proxmox-backup (1.0.7-1) unstable; urgency=medium
* fix #3197: skip fingerprint check when restoring key
* client: add 'import-with-master-key' command
* fix #3192: correct sort in prune sim
* tools/daemon: improve reload behaviour
* http client: add timeouts for critical connects
* api: improve error messages for restricted endpoints
* api: allow tokens to list users
* ui: running tasks: Use gettext for column labels
* login: add two-factor authenication (TFA) and integrate in web-interface
* login: support webAuthn, recovery keys and TOTP as TFA methods
* make it possible to abort tasks with CTRL-C
* fix #3245: only use default schedule for new jobs
* manager CLI: user/token list: fix rendering 0 (never) expire date
* update the event-driven, non-blocking I/O tokio platform to 1.0
* access: limit editing all pam credentials to superuser
* access: restrict password changes on @pam realm to superuser
* patch out wrongly linked libraries from ELFs to avoid extra, bogus
dependencies in resulting package
* add "password hint" to encryption key config
* improve GC error handling
* cli: make it possible to abort tasks with CTRL-C
-- Proxmox Support Team <support@proxmox.com> Wed, 03 Feb 2021 10:34:23 +0100
rust-proxmox-backup (1.0.6-1) unstable; urgency=medium
* stricter handling of file-descriptors, fixes some cases where some could
leak
* ui: fix various usages of the findRecord emthod, ensuring it matches exact
* garbage collection: improve task log format
* verification: improve progress log, make it similar to what's logged on
pull (sync)
* datastore: move manifest locking to /run. This avoids issues with
filesystems which cannot natively handle removing in-use files ("delete on
last close"), and create a virtual, internal, replacement file to work
around that. This is done, for example, by NFS or CIFS (samba).
-- Proxmox Support Team <support@proxmox.com> Fri, 11 Dec 2020 12:51:33 +0100
rust-proxmox-backup (1.0.5-1) unstable; urgency=medium
* client: restore: print meta information exclusively to standard error
-- Proxmox Support Team <support@proxmox.com> Wed, 25 Nov 2020 15:29:58 +0100
rust-proxmox-backup (1.0.4-1) unstable; urgency=medium
* fingerprint: add bytes() accessor
* ui: fix broken gettext use
* cli: move more commands into "snapshot" sub-command
-- Proxmox Support Team <support@proxmox.com> Wed, 25 Nov 2020 06:37:41 +0100
rust-proxmox-backup (1.0.3-1) unstable; urgency=medium
* client: inform user when automatically using the default encryption key
* ui: UserView: render name as 'Firstname Lastname'
* proxmox-backup-manager: add versions command
* pxar: fix anchored exclusion at archive root
* pxar: include .pxarexclude files in the archive
* client: expose all-file-systems option
* api: make expensive parts of datastore status opt-in
* api: include datastore ID in invalid owner errors
* garbage collection: treat .bad files like regular chunks to ensure they
are removed if not referenced anymore
* client: fix issues with encoded UPID strings
* encryption: add fingerprint to key config
* client: add 'key show' command
* fix #3139: add key fingerprint to backup snapshot manifest and check it
when loading with a key
* ui: add snapshot/file fingerprint tooltip
-- Proxmox Support Team <support@proxmox.com> Tue, 24 Nov 2020 08:55:47 +0100
rust-proxmox-backup (1.0.1-1) unstable; urgency=medium rust-proxmox-backup (1.0.1-1) unstable; urgency=medium
* ui: datastore summary: drop 'removed bytes' display * ui: datastore summary: drop 'removed bytes' display

View File: debian/control (vendored)

@ -7,24 +7,25 @@ Build-Depends: debhelper (>= 11),
 	rustc:native,
 	libstd-rust-dev,
 	librust-anyhow-1+default-dev,
-	librust-apt-pkg-native-0.3+default-dev (>= 0.3.1-~~),
+	librust-apt-pkg-native-0.3+default-dev (>= 0.3.2-~~),
 	librust-base64-0.12+default-dev,
 	librust-bitflags-1+default-dev (>= 1.2.1-~~),
-	librust-bytes-0.5+default-dev,
+	librust-bytes-1+default-dev,
 	librust-crc32fast-1+default-dev,
-	librust-crossbeam-channel-0.4+default-dev,
+	librust-crossbeam-channel-0.5+default-dev,
 	librust-endian-trait-0.6+arrays-dev,
 	librust-endian-trait-0.6+default-dev,
 	librust-futures-0.3+default-dev,
-	librust-h2-0.2+default-dev,
-	librust-h2-0.2+stream-dev,
+	librust-h2-0.3+default-dev,
+	librust-h2-0.3+stream-dev,
 	librust-handlebars-3+default-dev,
 	librust-http-0.2+default-dev,
-	librust-hyper-0.13+default-dev (>= 0.13.6-~~),
+	librust-hyper-0.14+default-dev,
+	librust-hyper-0.14+full-dev,
 	librust-lazy-static-1+default-dev (>= 1.4-~~),
 	librust-libc-0.2+default-dev,
 	librust-log-0.4+default-dev,
-	librust-nix-0.19+default-dev,
+	librust-nix-0.19+default-dev (>= 0.19.1-~~),
 	librust-nom-5+default-dev (>= 5.1-~~),
 	librust-num-traits-0.2+default-dev,
 	librust-once-cell-1+default-dev (>= 1.3.1-~~),
@ -33,42 +34,42 @@ Build-Depends: debhelper (>= 11),
 	librust-pam-sys-0.5+default-dev,
 	librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
 	librust-percent-encoding-2+default-dev (>= 2.1-~~),
+	librust-pin-project-1+default-dev,
 	librust-pin-utils-0.1+default-dev,
-	librust-proxmox-0.7+api-macro-dev,
-	librust-proxmox-0.7+default-dev,
-	librust-proxmox-0.7+sortable-macro-dev,
-	librust-proxmox-0.7+websocket-dev,
+	librust-proxmox-0.10+api-macro-dev (>= 0.10.1-~~),
+	librust-proxmox-0.10+default-dev (>= 0.10.1-~~),
+	librust-proxmox-0.10+sortable-macro-dev (>= 0.10.1-~~),
+	librust-proxmox-0.10+websocket-dev (>= 0.10.1-~~),
-	librust-proxmox-fuse-0.1+default-dev,
+	librust-proxmox-fuse-0.1+default-dev (>= 0.1.1-~~),
-	librust-pxar-0.6+default-dev (>= 0.6.1-~~),
-	librust-pxar-0.6+futures-io-dev (>= 0.6.1-~~),
-	librust-pxar-0.6+tokio-io-dev (>= 0.6.1-~~),
+	librust-pxar-0.8+default-dev,
+	librust-pxar-0.8+tokio-io-dev,
 	librust-regex-1+default-dev (>= 1.2-~~),
-	librust-rustyline-6+default-dev,
+	librust-rustyline-7+default-dev,
 	librust-serde-1+default-dev,
 	librust-serde-1+derive-dev,
 	librust-serde-json-1+default-dev,
 	librust-siphasher-0.3+default-dev,
 	librust-syslog-4+default-dev,
-	librust-tokio-0.2+blocking-dev (>= 0.2.9-~~),
-	librust-tokio-0.2+default-dev (>= 0.2.9-~~),
-	librust-tokio-0.2+dns-dev (>= 0.2.9-~~),
-	librust-tokio-0.2+fs-dev (>= 0.2.9-~~),
-	librust-tokio-0.2+io-util-dev (>= 0.2.9-~~),
-	librust-tokio-0.2+macros-dev (>= 0.2.9-~~),
-	librust-tokio-0.2+process-dev (>= 0.2.9-~~),
-	librust-tokio-0.2+rt-threaded-dev (>= 0.2.9-~~),
-	librust-tokio-0.2+signal-dev (>= 0.2.9-~~),
-	librust-tokio-0.2+stream-dev (>= 0.2.9-~~),
-	librust-tokio-0.2+tcp-dev (>= 0.2.9-~~),
-	librust-tokio-0.2+time-dev (>= 0.2.9-~~),
-	librust-tokio-0.2+uds-dev (>= 0.2.9-~~),
-	librust-tokio-openssl-0.4+default-dev,
-	librust-tokio-util-0.3+codec-dev,
-	librust-tokio-util-0.3+default-dev,
+	librust-tokio-1+default-dev,
+	librust-tokio-1+fs-dev,
+	librust-tokio-1+io-util-dev,
+	librust-tokio-1+macros-dev,
+	librust-tokio-1+net-dev,
+	librust-tokio-1+parking-lot-dev,
+	librust-tokio-1+process-dev,
+	librust-tokio-1+rt-dev,
+	librust-tokio-1+rt-multi-thread-dev,
+	librust-tokio-1+signal-dev,
+	librust-tokio-1+time-dev,
+	librust-tokio-openssl-0.6+default-dev (>= 0.6.1-~~),
+	librust-tokio-stream-0.1+default-dev,
+	librust-tokio-util-0.6+codec-dev,
+	librust-tokio-util-0.6+default-dev,
 	librust-tower-service-0.3+default-dev,
 	librust-udev-0.4+default-dev | librust-udev-0.3+default-dev,
 	librust-url-2+default-dev (>= 2.1-~~),
 	librust-walkdir-2+default-dev,
+	librust-webauthn-rs-0.2+default-dev (>= 0.2.5-~~),
librust-xdg-2+default-dev (>= 2.2-~~), librust-xdg-2+default-dev (>= 2.2-~~),
librust-zstd-0.4+bindgen-dev, librust-zstd-0.4+bindgen-dev,
librust-zstd-0.4+default-dev, librust-zstd-0.4+default-dev,
@ -76,34 +77,40 @@ Build-Depends: debhelper (>= 11),
libfuse3-dev, libfuse3-dev,
libsystemd-dev, libsystemd-dev,
uuid-dev, uuid-dev,
debhelper (>= 12~), libsgutils2-dev,
bash-completion, bash-completion,
pve-eslint, debhelper (>= 12~),
python3-docutils,
python3-pygments,
rsync,
fonts-dejavu-core <!nodoc>, fonts-dejavu-core <!nodoc>,
fonts-lato <!nodoc>, fonts-lato <!nodoc>,
fonts-open-sans <!nodoc>, fonts-open-sans <!nodoc>,
graphviz <!nodoc>, graphviz <!nodoc>,
latexmk <!nodoc>, latexmk <!nodoc>,
patchelf,
pve-eslint (>= 7.18.0-1),
python3-docutils,
python3-pygments,
python3-sphinx <!nodoc>, python3-sphinx <!nodoc>,
rsync,
texlive-fonts-extra <!nodoc>, texlive-fonts-extra <!nodoc>,
texlive-fonts-recommended <!nodoc>, texlive-fonts-recommended <!nodoc>,
texlive-xetex <!nodoc>, texlive-xetex <!nodoc>,
xindy <!nodoc> xindy <!nodoc>
Maintainer: Proxmox Support Team <support@proxmox.com> Maintainer: Proxmox Support Team <support@proxmox.com>
Standards-Version: 4.4.1 Standards-Version: 4.4.1
Vcs-Git: Vcs-Git: git://git.proxmox.com/git/proxmox-backup.git
Vcs-Browser: Vcs-Browser: https://git.proxmox.com/?p=proxmox-backup.git;a=summary
Homepage: https://www.proxmox.com Homepage: https://www.proxmox.com
Package: proxmox-backup-server Package: proxmox-backup-server
Architecture: any Architecture: any
Depends: fonts-font-awesome, Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1), libjs-extjs (>= 6.0.1),
libjs-qrcodejs (>= 1.20201119),
libsgutils2-2,
libzstd1 (>= 1.3.8), libzstd1 (>= 1.3.8),
lvm2, lvm2,
mt-st,
mtx,
openssh-server, openssh-server,
pbs-i18n, pbs-i18n,
postfix | mail-transport-agent, postfix | mail-transport-agent,
@ -111,6 +118,7 @@ Depends: fonts-font-awesome,
proxmox-mini-journalreader, proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.3-6), proxmox-widget-toolkit (>= 2.3-6),
pve-xtermjs (>= 4.7.0-1), pve-xtermjs (>= 4.7.0-1),
sg3-utils,
smartmontools, smartmontools,
${misc:Depends}, ${misc:Depends},
${shlibs:Depends}, ${shlibs:Depends},

5
debian/control.in vendored
View File

@ -2,8 +2,12 @@ Package: proxmox-backup-server
Architecture: any Architecture: any
Depends: fonts-font-awesome, Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1), libjs-extjs (>= 6.0.1),
libjs-qrcodejs (>= 1.20201119),
libsgutils2-2,
libzstd1 (>= 1.3.8), libzstd1 (>= 1.3.8),
lvm2, lvm2,
mt-st,
mtx,
openssh-server, openssh-server,
pbs-i18n, pbs-i18n,
postfix | mail-transport-agent, postfix | mail-transport-agent,
@ -11,6 +15,7 @@ Depends: fonts-font-awesome,
proxmox-mini-journalreader, proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.3-6), proxmox-widget-toolkit (>= 2.3-6),
pve-xtermjs (>= 4.7.0-1), pve-xtermjs (>= 4.7.0-1),
sg3-utils,
smartmontools, smartmontools,
${misc:Depends}, ${misc:Depends},
${shlibs:Depends}, ${shlibs:Depends},

22
debian/debcargo.toml vendored
View File

@ -2,33 +2,32 @@ overlay = "."
crate_src_path = ".." crate_src_path = ".."
whitelist = ["tests/*.c"] whitelist = ["tests/*.c"]
# needed for pinutils alpha maintainer = "Proxmox Support Team <support@proxmox.com>"
allow_prerelease_deps = true
[source] [source]
# TODO: update once public vcs_git = "git://git.proxmox.com/git/proxmox-backup.git"
vcs_git = "" vcs_browser = "https://git.proxmox.com/?p=proxmox-backup.git;a=summary"
vcs_browser = ""
maintainer = "Proxmox Support Team <support@proxmox.com>"
section = "admin" section = "admin"
build_depends = [ build_depends = [
"debhelper (>= 12~)",
"bash-completion", "bash-completion",
"pve-eslint", "debhelper (>= 12~)",
"python3-docutils",
"python3-pygments",
"rsync",
"fonts-dejavu-core <!nodoc>", "fonts-dejavu-core <!nodoc>",
"fonts-lato <!nodoc>", "fonts-lato <!nodoc>",
"fonts-open-sans <!nodoc>", "fonts-open-sans <!nodoc>",
"graphviz <!nodoc>", "graphviz <!nodoc>",
"latexmk <!nodoc>", "latexmk <!nodoc>",
"patchelf",
"pve-eslint (>= 7.18.0-1)",
"python3-docutils",
"python3-pygments",
"python3-sphinx <!nodoc>", "python3-sphinx <!nodoc>",
"rsync",
"texlive-fonts-extra <!nodoc>", "texlive-fonts-extra <!nodoc>",
"texlive-fonts-recommended <!nodoc>", "texlive-fonts-recommended <!nodoc>",
"texlive-xetex <!nodoc>", "texlive-xetex <!nodoc>",
"xindy <!nodoc>", "xindy <!nodoc>",
] ]
build_depends_excludes = [ build_depends_excludes = [
"debhelper (>=11)", "debhelper (>=11)",
] ]
@ -39,4 +38,5 @@ depends = [
"libfuse3-dev", "libfuse3-dev",
"libsystemd-dev", "libsystemd-dev",
"uuid-dev", "uuid-dev",
"libsgutils2-dev",
] ]

3
debian/pmtx.bc vendored Normal file
View File

@ -0,0 +1,3 @@
# pmtx bash completion
complete -C 'pmtx bashcomplete' pmtx

3
debian/postinst vendored
View File

@ -6,6 +6,9 @@ set -e
case "$1" in case "$1" in
configure) configure)
# need to have user backup in the tape group
usermod -a -G tape backup
# modeled after dh_systemd_start output # modeled after dh_systemd_start output
systemctl --system daemon-reload >/dev/null || true systemctl --system daemon-reload >/dev/null || true
if [ -n "$2" ]; then if [ -n "$2" ]; then

View File

@ -1,2 +1,3 @@
/usr/share/doc/proxmox-backup/proxmox-backup.pdf /usr/share/doc/proxmox-backup/html/proxmox-backup.pdf /usr/share/doc/proxmox-backup/proxmox-backup.pdf /usr/share/doc/proxmox-backup/html/proxmox-backup.pdf
/usr/share/javascript/extjs /usr/share/doc/proxmox-backup/html/prune-simulator/extjs /usr/share/javascript/extjs /usr/share/doc/proxmox-backup/html/prune-simulator/extjs
/usr/share/javascript/extjs /usr/share/doc/proxmox-backup/html/lto-barcode/extjs

View File

@ -1 +1,2 @@
debian/proxmox-backup-manager.bc proxmox-backup-manager debian/proxmox-backup-manager.bc proxmox-backup-manager
debian/pmtx.bc pmtx

View File

@ -8,12 +8,15 @@ usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-api
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-proxy usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-proxy
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-banner usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-banner
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-daily-update usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-daily-update
usr/lib/x86_64-linux-gnu/proxmox-backup/sg-tape-cmd
usr/sbin/proxmox-backup-manager usr/sbin/proxmox-backup-manager
usr/bin/pmtx
usr/share/javascript/proxmox-backup/index.hbs usr/share/javascript/proxmox-backup/index.hbs
usr/share/javascript/proxmox-backup/css/ext6-pbs.css usr/share/javascript/proxmox-backup/css/ext6-pbs.css
usr/share/javascript/proxmox-backup/images/logo-128.png usr/share/javascript/proxmox-backup/images
usr/share/javascript/proxmox-backup/images/proxmox_logo.png
usr/share/javascript/proxmox-backup/js/proxmox-backup-gui.js usr/share/javascript/proxmox-backup/js/proxmox-backup-gui.js
usr/share/man/man1/proxmox-backup-manager.1 usr/share/man/man1/proxmox-backup-manager.1
usr/share/man/man1/proxmox-backup-proxy.1 usr/share/man/man1/proxmox-backup-proxy.1
usr/share/man/man1/pmtx.1
usr/share/zsh/vendor-completions/_proxmox-backup-manager usr/share/zsh/vendor-completions/_proxmox-backup-manager
usr/share/zsh/vendor-completions/_pmtx

10
debian/rules vendored
View File

@ -42,10 +42,20 @@ override_dh_installsystemd:
# note: we start/try-reload-restart services manually in postinst # note: we start/try-reload-restart services manually in postinst
dh_installsystemd --no-start --no-restart-after-upgrade dh_installsystemd --no-start --no-restart-after-upgrade
override_dh_fixperms:
dh_fixperms --exclude sg-tape-cmd
# workaround https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=933541 # workaround https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=933541
# TODO: remove once available (Debian 11 ?) # TODO: remove once available (Debian 11 ?)
override_dh_dwz: override_dh_dwz:
dh_dwz --no-dwz-multifile dh_dwz --no-dwz-multifile
override_dh_strip:
dh_strip
for exe in $$(find debian/proxmox-backup-client/usr \
debian/proxmox-backup-server/usr -executable -type f); do \
debian/scripts/elf-strip-unused-dependencies.sh "$$exe" || true; \
done
override_dh_compress: override_dh_compress:
dh_compress -X.pdf dh_compress -X.pdf

View File

@ -0,0 +1,20 @@
#!/bin/bash
# usage: elf-strip-unused-dependencies.sh <binary>
binary=$1
# list unused direct dependencies, keeping only the library name ('ldd -u'
# prints full paths; strip everything up to the last '/' or ':')
exec 3< <(ldd -u "$binary" | grep -oP '[^/:]+$')
patchargs=""
dropped=""
while read -r dep; do
    dropped="$dep $dropped"
    patchargs="--remove-needed $dep $patchargs"
done <&3
exec 3<&-
if [[ $dropped == "" ]]; then
    exit 0
fi
echo -e "patchelf '$binary' - removing unused dependencies:\n $dropped"
# $patchargs stays unquoted on purpose: one --remove-needed flag per library
patchelf $patchargs "$binary"

View File

@ -5,11 +5,13 @@ GENERATED_SYNOPSIS := \
proxmox-backup-client/catalog-shell-synopsis.rst \ proxmox-backup-client/catalog-shell-synopsis.rst \
proxmox-backup-manager/synopsis.rst \ proxmox-backup-manager/synopsis.rst \
pxar/synopsis.rst \ pxar/synopsis.rst \
pmtx/synopsis.rst \
backup-protocol-api.rst \ backup-protocol-api.rst \
reader-protocol-api.rst reader-protocol-api.rst
MANUAL_PAGES := \ MANUAL_PAGES := \
pxar.1 \ pxar.1 \
pmtx.1 \
proxmox-backup-proxy.1 \ proxmox-backup-proxy.1 \
proxmox-backup-client.1 \ proxmox-backup-client.1 \
proxmox-backup-manager.1 proxmox-backup-manager.1
@ -17,8 +19,22 @@ MANUAL_PAGES := \
PRUNE_SIMULATOR_FILES := \ PRUNE_SIMULATOR_FILES := \
prune-simulator/index.html \ prune-simulator/index.html \
prune-simulator/documentation.html \ prune-simulator/documentation.html \
prune-simulator/clear-trigger.png \
prune-simulator/prune-simulator.js prune-simulator/prune-simulator.js
LTO_BARCODE_FILES := \
lto-barcode/index.html \
lto-barcode/code39.js \
lto-barcode/prefix-field.js \
lto-barcode/label-style.js \
lto-barcode/tape-type.js \
lto-barcode/paper-size.js \
lto-barcode/page-layout.js \
lto-barcode/page-calibration.js \
lto-barcode/label-list.js \
lto-barcode/label-setup.js \
lto-barcode/lto-barcode.js
# Sphinx documentation setup # Sphinx documentation setup
SPHINXOPTS = SPHINXOPTS =
SPHINXBUILD = sphinx-build SPHINXBUILD = sphinx-build
@ -53,6 +69,14 @@ pxar/synopsis.rst: ${COMPILEDIR}/pxar
pxar.1: pxar/man1.rst pxar/description.rst pxar/synopsis.rst pxar.1: pxar/man1.rst pxar/description.rst pxar/synopsis.rst
rst2man $< >$@ rst2man $< >$@
pmtx/synopsis.rst: ${COMPILEDIR}/pmtx
${COMPILEDIR}/pmtx printdoc > pmtx/synopsis.rst
pmtx.1: pmtx/man1.rst pmtx/description.rst pmtx/synopsis.rst
rst2man $< >$@
proxmox-backup-client/synopsis.rst: ${COMPILEDIR}/proxmox-backup-client proxmox-backup-client/synopsis.rst: ${COMPILEDIR}/proxmox-backup-client
${COMPILEDIR}/proxmox-backup-client printdoc > proxmox-backup-client/synopsis.rst ${COMPILEDIR}/proxmox-backup-client printdoc > proxmox-backup-client/synopsis.rst
@ -78,11 +102,13 @@ onlinehelpinfo:
@echo "Build finished. OnlineHelpInfo.js is in $(BUILDDIR)/scanrefs." @echo "Build finished. OnlineHelpInfo.js is in $(BUILDDIR)/scanrefs."
.PHONY: html .PHONY: html
html: ${GENERATED_SYNOPSIS} images/proxmox-logo.svg custom.css conf.py ${PRUNE_SIMULATOR_FILES} html: ${GENERATED_SYNOPSIS} images/proxmox-logo.svg custom.css conf.py ${PRUNE_SIMULATOR_FILES} ${LTO_BARCODE_FILES}
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
install -m 0644 custom.js custom.css images/proxmox-logo.svg $(BUILDDIR)/html/_static/ install -m 0644 custom.js custom.css images/proxmox-logo.svg $(BUILDDIR)/html/_static/
install -dm 0755 $(BUILDDIR)/html/prune-simulator install -dm 0755 $(BUILDDIR)/html/prune-simulator
install -m 0644 ${PRUNE_SIMULATOR_FILES} $(BUILDDIR)/html/prune-simulator install -m 0644 ${PRUNE_SIMULATOR_FILES} $(BUILDDIR)/html/prune-simulator
install -dm 0755 $(BUILDDIR)/html/lto-barcode
install -m 0644 ${LTO_BARCODE_FILES} $(BUILDDIR)/html/lto-barcode
@echo @echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html." @echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

View File

@ -353,8 +353,10 @@ To set up a master key:
.. code-block:: console .. code-block:: console
# openssl rsautl -decrypt -inkey master-private.pem -in rsa-encrypted.key -out /path/to/target # proxmox-backup-client key import-with-master-key /path/to/target --master-keyfile /path/to/master-private.pem --encrypted-keyfile /path/to/rsa-encrypted.key
Enter pass phrase for ./master-private.pem: ********* Master Key Password: ******
New Password: ******
Verify Password: ******
7. The target file will now contain the encryption key information in plain 7. The target file will now contain the encryption key information in plain
text. The success of this can be confirmed by passing the resulting ``json`` text. The success of this can be confirmed by passing the resulting ``json``
@ -392,11 +394,11 @@ periodic recovery tests to ensure that you can access the data in
case of problems. case of problems.
First, you need to find the snapshot which you want to restore. The snapshot First, you need to find the snapshot which you want to restore. The snapshot
command provides a list of all the snapshots on the server: list command provides a list of all the snapshots on the server:
.. code-block:: console .. code-block:: console
# proxmox-backup-client snapshots # proxmox-backup-client snapshot list
┌────────────────────────────────┬─────────────┬────────────────────────────────────┐ ┌────────────────────────────────┬─────────────┬────────────────────────────────────┐
│ snapshot │ size │ files │ │ snapshot │ size │ files │
╞════════════════════════════════╪═════════════╪════════════════════════════════════╡ ╞════════════════════════════════╪═════════════╪════════════════════════════════════╡
@ -581,7 +583,7 @@ command:
.. code-block:: console .. code-block:: console
# proxmox-backup-client forget <snapshot> # proxmox-backup-client snapshot forget <snapshot>
.. caution:: This command removes all archives in this backup .. caution:: This command removes all archives in this backup

View File

@ -172,6 +172,7 @@ html_theme_options = {
'Proxmox Homepage': 'https://proxmox.com', 'Proxmox Homepage': 'https://proxmox.com',
'PDF': 'proxmox-backup.pdf', 'PDF': 'proxmox-backup.pdf',
'Prune Simulator' : 'prune-simulator/index.html', 'Prune Simulator' : 'prune-simulator/index.html',
'LTO Barcode Generator' : 'lto-barcode/index.html',
}, },
'sidebar_width': '320px', 'sidebar_width': '320px',

View File

@ -53,9 +53,12 @@ checksums. This manifest file is used to verify the integrity of each backup.
When backing up to remote servers, do I have to trust the remote server? When backing up to remote servers, do I have to trust the remote server?
------------------------------------------------------------------------ ------------------------------------------------------------------------
Proxmox Backup Server supports client-side encryption, meaning your data is Proxmox Backup Server transfers data via `Transport Layer Security (TLS)
encrypted before it reaches the server. Thus, in the event that an attacker <https://en.wikipedia.org/wiki/Transport_Layer_Security>`_ and additionally
gains access to the server, they will not be able to read the data. supports client-side encryption. This means that data is transferred securely
and can be encrypted before it reaches the server. Thus, in the event that an
attacker gains access to the server or any point of the network, they will not
be able to read the data.
.. note:: Encryption is not enabled by default. To set up encryption, see the .. note:: Encryption is not enabled by default. To set up encryption, see the
`Encryption `Encryption

(three new binary image files added: 36 KiB, 31 KiB and 16 KiB)

View File

@ -14,11 +14,12 @@ It supports deduplication, compression, and authenticated
encryption (AE_). Using :term:`Rust` as the implementation language guarantees high encryption (AE_). Using :term:`Rust` as the implementation language guarantees high
performance, low resource usage, and a safe, high-quality codebase. performance, low resource usage, and a safe, high-quality codebase.
Proxmox Backup uses state of the art cryptography for client communication and Proxmox Backup uses state of the art cryptography for both client-server
backup content :ref:`encryption <encryption>`. Encryption is done on the communication and backup content :ref:`encryption <encryption>`. All
client side, making it safer to back up data to targets that are not fully client-server communication uses `TLS
trusted. <https://en.wikipedia.org/wiki/Transport_Layer_Security>`_, and backup data can
be encrypted on the client-side before sending, making it safer to back up data
to targets that are not fully trusted.
Architecture Architecture
------------ ------------
@ -65,8 +66,9 @@ Main Features
several gigabytes of data per second. several gigabytes of data per second.
:Encryption: Backups can be encrypted on the client-side, using AES-256 in :Encryption: Backups can be encrypted on the client-side, using AES-256 in
Galois/Counter Mode (GCM_) mode. This authenticated encryption (AE_) mode Galois/Counter Mode (GCM_). This authenticated encryption (AE_) mode
provides very high performance on modern hardware. provides very high performance on modern hardware. In addition to client-side
encryption, all data is transferred via a secure TLS connection.
:Web interface: Manage the Proxmox Backup Server with the integrated, web-based :Web interface: Manage the Proxmox Backup Server with the integrated, web-based
user interface. user interface.

351
docs/lto-barcode/code39.js Normal file
View File

@ -0,0 +1,351 @@
// Code39 barcode generator
// see https://en.wikipedia.org/wiki/Code_39
// IBM LTO Ultrium Cartridge Label Specification
// http://www-01.ibm.com/support/docview.wss?uid=ssg1S7000429
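// each symbol below is a sequence of nine elements, where 'b'/'B' is a
// narrow/wide bar and 's'/'S' is a narrow/wide space (drawn in svg_label)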
let code39_codes = {
"1": ['B', 's', 'b', 'S', 'b', 's', 'b', 's', 'B'],
"A": ['B', 's', 'b', 's', 'b', 'S', 'b', 's', 'B'],
"K": ['B', 's', 'b', 's', 'b', 's', 'b', 'S', 'B'],
"U": ['B', 'S', 'b', 's', 'b', 's', 'b', 's', 'B'],
"2": ['b', 's', 'B', 'S', 'b', 's', 'b', 's', 'B'],
"B": ['b', 's', 'B', 's', 'b', 'S', 'b', 's', 'B'],
"L": ['b', 's', 'B', 's', 'b', 's', 'b', 'S', 'B'],
"V": ['b', 'S', 'B', 's', 'b', 's', 'b', 's', 'B'],
"3": ['B', 's', 'B', 'S', 'b', 's', 'b', 's', 'b'],
"C": ['B', 's', 'B', 's', 'b', 'S', 'b', 's', 'b'],
"M": ['B', 's', 'B', 's', 'b', 's', 'b', 'S', 'b'],
"W": ['B', 'S', 'B', 's', 'b', 's', 'b', 's', 'b'],
"4": ['b', 's', 'b', 'S', 'B', 's', 'b', 's', 'B'],
"D": ['b', 's', 'b', 's', 'B', 'S', 'b', 's', 'B'],
"N": ['b', 's', 'b', 's', 'B', 's', 'b', 'S', 'B'],
"X": ['b', 'S', 'b', 's', 'B', 's', 'b', 's', 'B'],
"5": ['B', 's', 'b', 'S', 'B', 's', 'b', 's', 'b'],
"E": ['B', 's', 'b', 's', 'B', 'S', 'b', 's', 'b'],
"O": ['B', 's', 'b', 's', 'B', 's', 'b', 'S', 'b'],
"Y": ['B', 'S', 'b', 's', 'B', 's', 'b', 's', 'b'],
"6": ['b', 's', 'B', 'S', 'B', 's', 'b', 's', 'b'],
"F": ['b', 's', 'B', 's', 'B', 'S', 'b', 's', 'b'],
"P": ['b', 's', 'B', 's', 'B', 's', 'b', 'S', 'b'],
"Z": ['b', 'S', 'B', 's', 'B', 's', 'b', 's', 'b'],
"7": ['b', 's', 'b', 'S', 'b', 's', 'B', 's', 'B'],
"G": ['b', 's', 'b', 's', 'b', 'S', 'B', 's', 'B'],
"Q": ['b', 's', 'b', 's', 'b', 's', 'B', 'S', 'B'],
"-": ['b', 'S', 'b', 's', 'b', 's', 'B', 's', 'B'],
"8": ['B', 's', 'b', 'S', 'b', 's', 'B', 's', 'b'],
"H": ['B', 's', 'b', 's', 'b', 'S', 'B', 's', 'b'],
"R": ['B', 's', 'b', 's', 'b', 's', 'B', 'S', 'b'],
".": ['B', 'S', 'b', 's', 'b', 's', 'B', 's', 'b'],
"9": ['b', 's', 'B', 'S', 'b', 's', 'B', 's', 'b'],
"I": ['b', 's', 'B', 's', 'b', 'S', 'B', 's', 'b'],
"S": ['b', 's', 'B', 's', 'b', 's', 'B', 'S', 'b'],
" ": ['b', 'S', 'B', 's', 'b', 's', 'B', 's', 'b'],
"0": ['b', 's', 'b', 'S', 'B', 's', 'B', 's', 'b'],
"J": ['b', 's', 'b', 's', 'B', 'S', 'B', 's', 'b'],
"T": ['b', 's', 'b', 's', 'B', 's', 'B', 'S', 'b'],
"*": ['b', 'S', 'b', 's', 'B', 's', 'B', 's', 'b']
};
let colors = [
'#BB282E',
'#FAE54A',
'#9AC653',
'#01A5E2',
'#9EAAB6',
'#D97E35',
'#E27B99',
'#67A945',
'#F6B855',
'#705A81'
];
let lto_label_width = 70;
let lto_label_height = 17;
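// iterate over all label positions that fit on the given page layout,
// calling callback(column, row, count, xpos, ypos) with positions in mm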
function foreach_label(page_layout, callback) {
let count = 0;
let row = 0;
let height = page_layout.margin_top;
while ((height + page_layout.label_height) <= page_layout.page_height) {
let column = 0;
let width = page_layout.margin_left;
while ((width + page_layout.label_width) <= page_layout.page_width) {
callback(column, row, count, width, height);
count += 1;
column += 1;
width += page_layout.label_width;
width += page_layout.column_spacing;
}
row += 1;
height += page_layout.label_height;
height += page_layout.row_spacing;
}
}
function compute_max_labels(page_layout) {
let max_labels = 0;
foreach_label(page_layout, function() { max_labels += 1; });
return max_labels;
}
function svg_label(mode, label, label_type, pagex, pagey, label_borders) {
let svg = "";
if (label.length != 6) {
throw "wrong label length";
}
if (label_type.length != 2) {
throw "wrong label_type length";
}
let ratio = 2.75;
let parts = 3*ratio + 6; // 3*wide + 6*small;
let barcode_width = (lto_label_width/12)*10; // 10*code + 2margin
let small = barcode_width/(parts*10 + 9);
let code_width = small*parts;
let wide = small*ratio;
let xpos = pagex + code_width;
let height = 12;
if (mode === 'placeholder') {
if (label_borders) {
svg += `<rect class='unprintable' x='${pagex}' y='${pagey}' width='${lto_label_width}' height='${lto_label_height}' fill='none' style='stroke:black;stroke-width:0.1;'/>`;
}
return svg;
}
if (label_borders) {
svg += `<rect x='${pagex}' y='${pagey}' width='${lto_label_width}' height='${lto_label_height}' fill='none' style='stroke:black;stroke-width:0.1;'/>`;
}
if (mode === "color" || mode == "frame") {
let w = lto_label_width/8;
let h = lto_label_height - height;
for (var i = 0; i < 7; i++) {
let textx = w/2 + pagex + i*w;
let texty = pagey;
let fill = "none";
if (mode === "color" && (i < 6)) {
let letter = label.charAt(i);
if (letter >= '0' && letter <= '9') {
fill = colors[parseInt(letter, 10)];
}
}
svg += `<rect x='${textx}' y='${texty}' width='${w}' height='${h}' style='stroke:black;stroke-width:0.2;fill:${fill};'/>`;
if (i == 6) {
textx += 3;
texty += 3.7;
svg += `<text x='${textx}' y='${texty}' style='font-weight:bold;font-size:3px;font-family:sans-serif;'>${label_type}</text>`;
} else {
let letter = label.charAt(i);
textx += 3.5;
texty += 4;
svg += `<text x='${textx}' y='${texty}' style='font-weight:bold;font-size:4px;font-family:sans-serif;'>${letter}</text>`;
}
}
}
let raw_label = `*${label}${label_type}*`;
for (var i = 0; i < raw_label.length; i++) {
let letter = raw_label.charAt(i);
let code = code39_codes[letter];
if (code === undefined) {
throw `unable to encode letter '${letter}' with code39`;
}
if (mode === "simple") {
let textx = xpos + code_width/2;
let texty = pagey + 4;
if (i > 0 && (i+1) < raw_label.length) {
svg += `<text x='${textx}' y='${texty}' style='font-weight:bold;font-size:4px;font-family:sans-serif;'>${letter}</text>`;
}
}
for (let c of code) {
if (c === 's') {
xpos += small;
continue;
}
if (c === 'S') {
xpos += wide;
continue;
}
let w = c === 'B' ? wide : small;
let ypos = pagey + lto_label_height - height;
svg += `<rect x='${xpos}' y='${ypos}' width='${w}' height='${height}' style='fill:black'/>`;
xpos = xpos + w;
}
xpos += small;
}
return svg;
}
function html_page_header() {
let html = "<html5>";
html += "<style>";
/* no page margins */
html += "@page{margin-left: 0px;margin-right: 0px;margin-top: 0px;margin-bottom: 0px;}";
/* to hide things on printed page */
html += "@media print { .unprintable { visibility: hidden; } }";
html += "</style>";
//html += "<body onload='window.print()'>";
html += "<body style='background-color: white;'>";
return html;
}
function svg_page_header(page_width, page_height) {
let svg = "<svg version='1.1' xmlns='http://www.w3.org/2000/svg'";
svg += ` width='${page_width}mm' height='${page_height}mm' viewBox='0 0 ${page_width} ${page_height}'>`;
return svg;
}
function printBarcodePage() {
let frame = document.getElementById("print_frame");
let window = frame.contentWindow;
window.print();
}
function generate_barcode_page(target_id, page_layout, label_list, calibration) {
let svg = svg_page_header(page_layout.page_width, page_layout.page_height);
let c = calibration;
console.log(calibration);
svg += "<g id='barcode_page'";
if (c !== undefined) {
svg += ` transform='scale(${c.scalex}, ${c.scaley}),translate(${c.offsetx}, ${c.offsety})'`;
}
svg += '>';
foreach_label(page_layout, function(column, row, count, xpos, ypos) {
if (count >= label_list.length) { return; }
let item = label_list[count];
svg += svg_label(item.mode, item.label, item.tape_type, xpos, ypos, page_layout.label_borders);
});
svg += "</g>";
svg += "</svg>";
let html = html_page_header();
html += svg;
html += "</body>";
html += "</html>";
let frame = document.getElementById(target_id);
setupPrintFrame(frame, page_layout.page_width, page_layout.page_height);
let fwindow = frame.contentWindow;
fwindow.document.open();
fwindow.document.write(html);
fwindow.document.close();
}
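// size the preview iframe so the page is shown at roughly its physical size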
function setupPrintFrame(frame, page_width, page_height) {
let dpi = 98;
let dpr = window.devicePixelRatio;
if (dpr !== undefined) {
dpi = dpi*dpr;
}
let ppmm = dpi/25.4;
frame.width = page_width*ppmm;
frame.height = page_height*ppmm;
}
function generate_calibration_page(target_id, page_layout, calibration) {
let frame = document.getElementById(target_id);
setupPrintFrame(frame, page_layout.page_width, page_layout.page_height);
let svg = svg_page_header( page_layout.page_width, page_layout.page_height);
svg += "<defs>";
svg += "<marker id='endarrow' markerWidth='10' markerHeight='7' ";
svg += "refX='10' refY='3.5' orient='auto'><polygon points='0 0, 10 3.5, 0 7' />";
svg += "</marker>";
svg += "<marker id='startarrow' markerWidth='10' markerHeight='7' ";
svg += "refX='0' refY='3.5' orient='auto'><polygon points='10 0, 10 7, 0 3.5' />";
svg += "</marker>";
svg += "</defs>";
svg += "<rect x='50' y='50' width='100' height='100' style='fill:none;stroke-width:0.05;stroke:rgb(0,0,0)'/>";
let text_style = "style='font-weight:bold;font-size:4;font-family:sans-serif;'";
svg += `<text x='10' y='99' ${text_style}>Sx = 50mm</text>`;
svg += "<line x1='0' y1='100' x2='50' y2='100' stroke='#000' marker-end='url(#endarrow)' stroke-width='.25'/>";
svg += `<text x='60' y='99' ${text_style}>Dx = 100mm</text>`;
svg += "<line x1='50' y1='100' x2='150' y2='100' stroke='#000' marker-start='url(#startarrow)' marker-end='url(#endarrow)' stroke-width='.25'/>";
svg += `<text x='142' y='10' ${text_style} writing-mode='tb'>Sy = 50mm</text>`;
svg += "<line x1='140' y1='0' x2='140' y2='50' stroke='#000' marker-end='url(#endarrow)' stroke-width='.25'/>";
svg += `<text x='142' y='60' ${text_style} writing-mode='tb'>Dy = 100mm</text>`;
svg += "<line x1='140' y1='50' x2='140' y2='150' stroke='#000' marker-start='url(#startarrow)' marker-end='url(#endarrow)' stroke-width='.25'/>";
let c = calibration;
if (c !== undefined) {
svg += `<rect x='50' y='50' width='100' height='100' style='fill:none;stroke-width:0.05;stroke:rgb(255,0,0)' `;
svg += `transform='scale(${c.scalex}, ${c.scaley}),translate(${c.offsetx}, ${c.offsety})'/>`;
}
svg += "</svg>";
let html = html_page_header();
html += svg;
html += "</body>";
html += "</html>";
let fwindow = frame.contentWindow;
fwindow.document.open();
fwindow.document.write(html);
fwindow.document.close();
}

View File

@ -0,0 +1,51 @@
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">
<title>Proxmox LTO Barcode Label Generator</title>
<link rel="stylesheet" type="text/css" href="extjs/theme-crisp/resources/theme-crisp-all.css">
<style>
/* fix action column icons */
.x-action-col-icon {
font-size: 13px;
height: 13px;
}
.x-grid-cell-inner-action-col {
padding: 6px 10px 5px;
}
.x-action-col-icon:before {
color: #555;
}
.x-action-col-icon {
color: #21BF4B;
}
.x-action-col-icon {
margin: 0 1px;
font-size: 14px;
}
.x-action-col-icon:before, .x-action-col-icon:after {
font-size: 14px;
}
.x-action-col-icon:hover:before, .x-action-col-icon:hover:after {
text-shadow: 1px 1px 1px #AAA;
font-weight: 800;
}
</style>
<link rel="stylesheet" type="text/css" href="font-awesome/css/font-awesome.css"/>
<script type="text/javascript" src="extjs/ext-all.js"></script>
<script type="text/javascript" src="code39.js"></script>
<script type="text/javascript" src="prefix-field.js"></script>
<script type="text/javascript" src="label-style.js"></script>
<script type="text/javascript" src="tape-type.js"></script>
<script type="text/javascript" src="paper-size.js"></script>
<script type="text/javascript" src="page-layout.js"></script>
<script type="text/javascript" src="page-calibration.js"></script>
<script type="text/javascript" src="label-list.js"></script>
<script type="text/javascript" src="label-setup.js"></script>
<script type="text/javascript" src="lto-barcode.js"></script>
</head>
<body>
</body>
</html>

View File

@ -0,0 +1,140 @@
Ext.define('LabelList', {
extend: 'Ext.grid.Panel',
alias: 'widget.labelList',
plugins: {
ptype: 'cellediting',
clicksToEdit: 1
},
selModel: 'cellmodel',
store: {
fields: [
'prefix',
'tape_type',
{
type: 'integer',
name: 'start',
},
{
type: 'integer',
name: 'end',
},
],
data: [],
},
listeners: {
validateedit: function(editor, context) {
console.log(context.field);
console.log(context.value);
context.record.set(context.field, context.value);
context.record.commit();
return true;
},
},
columns: [
{
text: 'Prefix',
dataIndex: 'prefix',
flex: 1,
editor: {
xtype: 'prefixfield',
allowBlank: false,
},
renderer: function (value, metaData, record) {
console.log(record);
if (record.data.mode === 'placeholder') {
return "-";
}
return value;
},
},
{
text: 'Type',
dataIndex: 'tape_type',
flex: 1,
editor: {
xtype: 'ltoTapeType',
allowBlank: false,
},
renderer: function (value, metaData, record) {
console.log(record);
if (record.data.mode === 'placeholder') {
return "-";
}
return value;
},
},
{
text: 'Mode',
dataIndex: 'mode',
flex: 1,
editor: {
xtype: 'ltoLabelStyle',
allowBlank: false,
},
},
{
text: 'Start',
dataIndex: 'start',
flex: 1,
editor: {
xtype: 'numberfield',
allowBlank: false,
},
},
{
text: 'End',
dataIndex: 'end',
flex: 1,
editor: {
xtype: 'numberfield',
},
renderer: function(value) {
if (value === null || value === '' || value === undefined) {
return "Fill";
}
return value;
},
},
{
xtype: 'actioncolumn',
width: 75,
items: [
{
tooltip: 'Move Up',
iconCls: 'fa fa-arrow-up',
handler: function(grid, rowIndex) {
if (rowIndex < 1) { return; }
let store = grid.getStore();
let record = store.getAt(rowIndex);
store.removeAt(rowIndex);
store.insert(rowIndex - 1, record);
},
},
{
tooltip: 'Move Down',
iconCls: 'fa fa-arrow-down',
handler: function(grid, rowIndex) {
let store = grid.getStore();
if (rowIndex >= store.getCount()) { return; }
let record = store.getAt(rowIndex);
store.removeAt(rowIndex);
store.insert(rowIndex + 1, record);
},
},
{
tooltip: 'Delete',
iconCls: 'fa fa-scissors',
//iconCls: 'fa critical fa-trash-o',
handler: function(grid, rowIndex) {
grid.getStore().removeAt(rowIndex);
},
}
],
},
],
});

View File

@ -0,0 +1,107 @@
Ext.define('LabelSetupPanel', {
extend: 'Ext.panel.Panel',
alias: 'widget.labelSetupPanel',
layout: {
type: 'hbox',
align: 'stretch',
pack: 'start',
},
getValues: function() {
let me = this;
let values = {};
Ext.Array.each(me.query('[isFormField]'), function(field) {
let data = field.getSubmitData();
Ext.Object.each(data, function(name, val) {
let parsed = parseInt(val, 10);
values[name] = isNaN(parsed) ? val : parsed;
});
});
return values;
},
controller: {
xclass: 'Ext.app.ViewController',
init: function() {
let me = this;
let view = me.getView();
let list = view.down("labelList");
let store = list.getStore();
store.on('datachanged', function(store) {
view.fireEvent("listchanged", store);
});
store.on('update', function(store) {
view.fireEvent("listchanged", store);
});
},
onAdd: function() {
let list = this.lookupReference('label_list');
let view = this.getView();
let params = view.getValues();
list.getStore().add(params);
},
},
items: [
{
border: false,
layout: {
type: 'vbox',
align: 'stretch',
pack: 'start',
},
items: [
{
xtype: 'prefixfield',
name: 'prefix',
value: 'TEST',
fieldLabel: 'Prefix',
},
{
xtype: 'ltoTapeType',
name: 'tape_type',
fieldLabel: 'Type',
value: 'L8',
},
{
xtype: 'ltoLabelStyle',
name: 'mode',
fieldLabel: 'Mode',
value: 'color',
},
{
xtype: 'numberfield',
name: 'start',
fieldLabel: 'Start',
minValue: 0,
allowBlank: false,
value: 0,
},
{
xtype: 'numberfield',
name: 'end',
fieldLabel: 'End',
minValue: 0,
emptyText: 'Fill',
},
{
xtype: 'button',
text: 'Add',
handler: 'onAdd',
},
],
},
{
margin: "0 0 0 10",
xtype: 'labelList',
reference: 'label_list',
flex: 1,
},
],
});

View File

@ -0,0 +1,20 @@
Ext.define('LtoLabelStyle', {
extend: 'Ext.form.field.ComboBox',
alias: 'widget.ltoLabelStyle',
editable: false,
displayField: 'text',
valueField: 'value',
queryMode: 'local',
store: {
fields: ['value', 'text'],
data: [
{ value: 'simple', text: "Simple" },
{ value: 'color', text: 'Color (frames with color)' },
{ value: 'frame', text: 'Frame (no color)' },
{ value: 'placeholder', text: 'Placeholder (empty)' },
],
},
});

View File

@ -0,0 +1,214 @@
// FIXME: HACK! Makes scrolling in number spinner work again. fixed in ExtJS >= 6.1
if (Ext.isFirefox) {
Ext.$eventNameMap.DOMMouseScroll = 'DOMMouseScroll';
}
function draw_labels(target_id, label_list, page_layout, calibration) {
let max_labels = compute_max_labels(page_layout);
let count_fixed = 0;
let count_fill = 0;
// first pass: count fixed labels and 'fill' entries to split the remaining slots
for (let i = 0; i < label_list.length; i++) {
let item = label_list[i];
if (item.end === null || item.end === '' || item.end === undefined) {
count_fill += 1;
continue;
}
if (item.end <= item.start) {
count_fixed += 1;
continue;
}
count_fixed += (item.end - item.start) + 1;
}
let rest = max_labels - count_fixed;
let fill_size = 1;
if (rest >= count_fill) {
fill_size = Math.floor(rest/count_fill);
}
let list = [];
let count_fill_2 = 0;
// second pass: expand each entry into the concrete labels to print
for (let i = 0; i < label_list.length; i++) {
let item = label_list[i];
let count;
if (item.end === null || item.end === '' || item.end === undefined) {
count_fill_2 += 1;
if (count_fill_2 === count_fill) {
count = rest;
} else {
count = fill_size;
}
rest -= count;
} else {
if (item.end <= item.start) {
count = 1;
} else {
count = (item.end - item.start) + 1;
}
}
for (let j = 0; j < count; j++) {
let id = item.start + j;
if (item.prefix.length == 6) {
list.push({
label: item.prefix,
tape_type: item.tape_type,
mode: item.mode,
id: id,
});
rest += count - j - 1;
break;
} else {
let pad_len = 6 - item.prefix.length;
let label = item.prefix + id.toString().padStart(pad_len, '0');
if (label.length != 6) {
rest += count - j;
break;
}
list.push({
label: label,
tape_type: item.tape_type,
mode: item.mode,
id: id,
});
}
}
}
generate_barcode_page(target_id, page_layout, list, calibration);
}
Ext.define('MainView', {
extend: 'Ext.container.Viewport',
alias: 'widget.mainview',
layout: {
type: 'vbox',
align: 'stretch',
pack: 'start',
},
width: 800,
controller: {
xclass: 'Ext.app.ViewController',
update_barcode_preview: function() {
let me = this;
let view = me.getView();
let list_view = view.down("labelList");
let store = list_view.getStore();
let label_list = [];
store.each((record) => {
label_list.push(record.data);
});
let page_layout_view = view.down("pageLayoutPanel");
let page_layout = page_layout_view.getValues();
let calibration_view = view.down("pageCalibration");
let page_calibration = calibration_view.getValues();
draw_labels("print_frame", label_list, page_layout, page_calibration);
},
update_calibration_preview: function() {
let me = this;
let view = me.getView();
let page_layout_view = view.down("pageLayoutPanel");
let page_layout = page_layout_view.getValues();
let calibration_view = view.down("pageCalibration");
let page_calibration = calibration_view.getValues();
console.log(page_calibration);
generate_calibration_page('print_frame', page_layout, page_calibration);
},
control: {
labelSetupPanel: {
listchanged: function(store) {
this.update_barcode_preview();
},
activate: function() {
this.update_barcode_preview();
},
},
pageLayoutPanel: {
pagechanged: function(layout) {
this.update_barcode_preview();
},
activate: function() {
this.update_barcode_preview();
},
},
pageCalibration: {
calibrationchanged: function() {
this.update_calibration_preview();
},
activate: function() {
this.update_calibration_preview();
},
},
},
},
items: [
{
xtype: 'tabpanel',
items: [
{
xtype: 'labelSetupPanel',
title: 'Proxmox LTO Barcode Label Generator',
bodyPadding: 10,
},
{
xtype: 'pageLayoutPanel',
title: 'Page Layout',
bodyPadding: 10,
},
{
xtype: 'pageCalibration',
title: 'Printer Calibration',
bodyPadding: 10,
},
],
},
{
xtype: 'panel',
layout: "center",
title: 'Print Preview',
bodyStyle: "background-color: grey;",
bodyPadding: 10,
html: '<center><iframe id="print_frame" frameBorder="0"></iframe></center>',
border: false,
flex: 1,
scrollable: true,
tools:[{
type: 'print',
tooltip: 'Open Print Dialog',
handler: function(event, toolEl, panelHeader) {
printBarcodePage();
}
}],
},
],
});
Ext.onReady(function() {
Ext.create('MainView', {
renderTo: Ext.getBody(),
});
});

View File

@ -0,0 +1,142 @@
Ext.define('PageCalibration', {
extend: 'Ext.panel.Panel',
alias: 'widget.pageCalibration',
layout: {
type: 'hbox',
align: 'stretch',
pack: 'start',
},
getValues: function() {
let me = this;
let values = {};
Ext.Array.each(me.query('[isFormField]'), function(field) {
if (field.isValid()) {
let data = field.getSubmitData();
Ext.Object.each(data, function(name, val) {
let parsed = parseFloat(val, 10);
values[name] = isNaN(parsed) ? val : parsed;
});
}
});
if (values.d_x === undefined) { return; }
if (values.d_y === undefined) { return; }
if (values.s_x === undefined) { return; }
if (values.s_y === undefined) { return; }
// the calibration page shows a 100 mm square offset by 50 mm; comparing the
// measured values against these nominals yields printer scale and offset
let scalex = 100/values.d_x;
let scaley = 100/values.d_y;
let offsetx = ((50*scalex) - values.s_x)/scalex;
let offsety = ((50*scaley) - values.s_y)/scaley;
return {
scalex: scalex,
scaley: scaley,
offsetx: offsetx,
offsety: offsety,
};
},
controller: {
xclass: 'Ext.app.ViewController',
control: {
'field': {
change: function() {
let view = this.getView();
let param = view.getValues();
view.fireEvent("calibrationchanged", param);
},
},
},
},
items: [
{
border: false,
layout: {
type: 'vbox',
align: 'stretch',
pack: 'start',
},
items: [
{
xtype: 'displayfield',
fieldLabel: 'Start Offset Sx (mm)',
labelWidth: 150,
value: 50,
},
{
xtype: 'displayfield',
fieldLabel: 'Length Dx (mm)',
labelWidth: 150,
value: 100,
},
{
xtype: 'displayfield',
fieldLabel: 'Start Offset Sy (mm)',
labelWidth: 150,
value: 50,
},
{
xtype: 'displayfield',
fieldLabel: 'Length Dy (mm)',
labelWidth: 150,
value: 100,
},
],
},
{
border: false,
margin: '0 0 0 20',
layout: {
type: 'vbox',
align: 'stretch',
pack: 'start',
},
items: [
{
xtype: 'numberfield',
name: 's_x',
fieldLabel: 'Measured Start Offset Sx (mm)',
allowBlank: false,
labelWidth: 200,
},
{
xtype: 'numberfield',
name: 'd_x',
fieldLabel: 'Measured Length Dx (mm)',
allowBlank: false,
labelWidth: 200,
},
{
xtype: 'numberfield',
name: 's_y',
fieldLabel: 'Measured Start Offset Sy (mm)',
allowBlank: false,
labelWidth: 200,
},
{
xtype: 'numberfield',
name: 'd_y',
fieldLabel: 'Measured Length Dy (mm)',
allowBlank: false,
labelWidth: 200,
},
],
},
],
})

View File

@ -0,0 +1,167 @@
Ext.define('PageLayoutPanel', {
extend: 'Ext.panel.Panel',
alias: 'widget.pageLayoutPanel',
layout: {
type: 'hbox',
align: 'stretch',
pack: 'start',
},
getValues: function() {
let me = this;
let values = {};
Ext.Array.each(me.query('[isFormField]'), function(field) {
if (field.isValid()) {
let data = field.getSubmitData();
Ext.Object.each(data, function(name, val) {
values[name] = val;
});
}
});
let paper_size = values.paper_size || 'a4';
let param = Ext.apply({}, paper_sizes[paper_size]);
if (param === undefined) {
throw `unknown paper size ${paper_size}`;
}
param.paper_size = paper_size;
Ext.Object.each(values, function(name, val) {
let parsed = parseFloat(val, 10);
param[name] = isNaN(parsed) ? val : parsed;
});
return param;
},
controller: {
xclass: 'Ext.app.ViewController',
control: {
'paperSize': {
change: function(field, paper_size) {
let view = this.getView();
let defaults = paper_sizes[paper_size];
let names = [
'label_width',
'label_height',
'margin_left',
'margin_top',
'column_spacing',
'row_spacing',
];
for (let i = 0; i < names.length; i++) {
let name = names[i];
let f = view.down(`field[name=${name}]`);
let v = defaults[name];
if (v !== undefined) {
f.setValue(v);
f.setDisabled(defaults.fixed);
} else {
f.setDisabled(false);
}
}
},
},
'field': {
change: function() {
let view = this.getView();
let param = view.getValues();
view.fireEvent("pagechanged", param);
},
},
},
},
items: [
{
border: false,
layout: {
type: 'vbox',
align: 'stretch',
pack: 'start',
},
items: [
{
xtype: 'paperSize',
name: 'paper_size',
value: 'a4',
fieldLabel: 'Paper Size',
},
{
xtype: 'numberfield',
name: 'label_width',
fieldLabel: 'Label width',
minValue: 70,
allowBlank: false,
value: 70,
},
{
xtype: 'numberfield',
name: 'label_height',
fieldLabel: 'Label height',
minValue: 17,
allowBlank: false,
value: 17,
},
{
xtype: 'checkbox',
name: 'label_borders',
fieldLabel: 'Label borders',
value: true,
inputValue: true,
},
],
},
{
border: false,
margin: '0 0 0 10',
layout: {
type: 'vbox',
align: 'stretch',
pack: 'start',
},
items: [
{
xtype: 'numberfield',
name: 'margin_left',
fieldLabel: 'Left margin',
minValue: 0,
allowBlank: false,
value: 0,
},
{
xtype: 'numberfield',
name: 'margin_top',
fieldLabel: 'Top margin',
minValue: 0,
allowBlank: false,
value: 4,
},
{
xtype: 'numberfield',
name: 'column_spacing',
fieldLabel: 'Column spacing',
minValue: 0,
allowBlank: false,
value: 0,
},
{
xtype: 'numberfield',
name: 'row_spacing',
fieldLabel: 'Row spacing',
minValue: 0,
allowBlank: false,
value: 0,
},
],
},
],
});

View File

@ -0,0 +1,49 @@
let paper_sizes = {
a4: {
comment: 'A4 (plain)',
page_width: 210,
page_height: 297,
},
letter: {
comment: 'Letter (plain)',
page_width: 215.9,
page_height: 279.4,
},
avery3420: {
fixed: true,
comment: 'Avery Zweckform 3420',
page_width: 210,
page_height: 297,
label_width: 70,
label_height: 17,
margin_left: 0,
margin_top: 4,
column_spacing: 0,
row_spacing: 0,
},
}
function paper_size_combo_data() {
let data = [];
for (let [key, value] of Object.entries(paper_sizes)) {
data.push({ value: key, text: value.comment });
}
return data;
}
Ext.define('PaperSize', {
extend: 'Ext.form.field.ComboBox',
alias: 'widget.paperSize',
editable: false,
displayField: 'text',
valueField: 'value',
queryMode: 'local',
store: {
fields: ['value', 'text'],
data: paper_size_combo_data(),
},
});

View File

@ -0,0 +1,15 @@
Ext.define('PrefixField', {
extend: 'Ext.form.field.Text',
alias: 'widget.prefixfield',
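// tape-label prefix: up to six letters, forced to upper case while typing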
maxLength: 6,
allowBlank: false,
maskRe: /([A-Za-z]+)$/,
listeners: {
change: function(field) {
field.setValue(field.getValue().toUpperCase());
},
},
});

View File

@ -0,0 +1,23 @@
Ext.define('LtoTapeType', {
extend: 'Ext.form.field.ComboBox',
alias: 'widget.ltoTapeType',
editable: false,
displayField: 'text',
valueField: 'value',
queryMode: 'local',
store: {
fields: ['value', 'text'],
data: [
{ value: 'L8', text: "LTO-8" },
{ value: 'L7', text: "LTO-7" },
{ value: 'L6', text: "LTO-6" },
{ value: 'L5', text: "LTO-5" },
{ value: 'L4', text: "LTO-4" },
{ value: 'L3', text: "LTO-3" },
{ value: 'CU', text: "Cleaning Unit" },
],
},
});

View File

@ -0,0 +1,6 @@
Description
^^^^^^^^^^^
The ``pmtx`` command controls SCSI media changer devices (tape
autoloaders).

28
docs/pmtx/man1.rst Normal file
View File

@ -0,0 +1,28 @@
==========================
pmtx
==========================
.. include:: ../epilog.rst
-------------------------------------------------------------
Control SCSI media changer devices (tape autoloaders)
-------------------------------------------------------------
:Author: |AUTHOR|
:Version: Version |VERSION|
:Manual section: 1
Synopsis
==========
.. include:: synopsis.rst
Description
============
.. include:: description.rst
.. include:: ../pbs-copyright.rst

View File

@ -5,7 +5,7 @@ proxmox-backup-client
.. include:: ../epilog.rst .. include:: ../epilog.rst
------------------------------------------------------------- -------------------------------------------------------------
Command line toot for Backup and Restore Command line tool for Backup and Restore
------------------------------------------------------------- -------------------------------------------------------------
:Author: |AUTHOR| :Author: |AUTHOR|

(new binary image file added: 11 KiB)

View File

@ -33,6 +33,9 @@
.first-of-month { .first-of-month {
border-right: dashed black 4px; border-right: dashed black 4px;
} }
.clear-trigger {
background-image: url(./clear-trigger.png);
}
</style> </style>
<script type="text/javascript" src="extjs/ext-all.js"></script> <script type="text/javascript" src="extjs/ext-all.js"></script>

View File

@ -265,6 +265,34 @@ Ext.onReady(function() {
}, },
}); });
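// number field with an extra 'clear' trigger that resets the value to null,
// i.e. leaves the corresponding keep-* option unset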
Ext.define('PBS.PruneSimulatorKeepInput', {
extend: 'Ext.form.field.Number',
alias: 'widget.prunesimulatorKeepInput',
allowBlank: true,
fieldGroup: 'keep',
minValue: 1,
listeners: {
afterrender: function(field) {
this.triggers.clear.setVisible(field.value !== null);
},
change: function(field, newValue, oldValue) {
this.triggers.clear.setVisible(newValue !== null);
},
},
triggers: {
clear: {
cls: 'clear-trigger',
weight: -1,
handler: function() {
this.triggers.clear.setVisible(false);
this.setValue(null);
},
},
},
});
Ext.define('PBS.PruneSimulatorPanel', { Ext.define('PBS.PruneSimulatorPanel', {
extend: 'Ext.panel.Panel', extend: 'Ext.panel.Panel',
alias: 'widget.prunesimulatorPanel', alias: 'widget.prunesimulatorPanel',
@ -421,11 +449,8 @@ Ext.onReady(function() {
}); });
}); });
// ordering here and iterating backwards through days // sort recent times first, backups array below is ordered now -> past
// ensures that everything is ordered timesOnSingleDay.sort((a, b) => b - a);
timesOnSingleDay.sort(function(a, b) {
return a < b;
});
let backups = []; let backups = [];
@ -457,16 +482,17 @@ Ext.onReady(function() {
backups.forEach(function(backup) { backups.forEach(function(backup) {
let mark = backup.mark; let mark = backup.mark;
if (mark && mark === 'keep') {
let id = idFunc(backup); let id = idFunc(backup);
if (finished || alreadyIncluded[id]) {
return;
}
if (mark) {
if (mark === 'keep') {
alreadyIncluded[id] = true; alreadyIncluded[id] = true;
} }
});
backups.forEach(function(backup) {
let mark = backup.mark;
let id = idFunc(backup);
if (finished || alreadyIncluded[id] || mark) {
return; return;
} }
@ -560,58 +586,37 @@ Ext.onReady(function() {
keepItems: [ keepItems: [
{ {
xtype: 'numberfield', xtype: 'prunesimulatorKeepInput',
name: 'keep-last', name: 'keep-last',
allowBlank: true,
fieldLabel: 'keep-last', fieldLabel: 'keep-last',
minValue: 0,
value: 4, value: 4,
fieldGroup: 'keep',
}, },
{ {
xtype: 'numberfield', xtype: 'prunesimulatorKeepInput',
name: 'keep-hourly', name: 'keep-hourly',
allowBlank: true,
fieldLabel: 'keep-hourly', fieldLabel: 'keep-hourly',
minValue: 0,
value: 0,
fieldGroup: 'keep',
}, },
{ {
xtype: 'numberfield', xtype: 'prunesimulatorKeepInput',
name: 'keep-daily', name: 'keep-daily',
allowBlank: true,
fieldLabel: 'keep-daily', fieldLabel: 'keep-daily',
minValue: 0,
value: 5, value: 5,
fieldGroup: 'keep',
}, },
{ {
xtype: 'numberfield', xtype: 'prunesimulatorKeepInput',
name: 'keep-weekly', name: 'keep-weekly',
allowBlank: true,
fieldLabel: 'keep-weekly', fieldLabel: 'keep-weekly',
minValue: 0,
value: 2, value: 2,
fieldGroup: 'keep',
}, },
{ {
xtype: 'numberfield', xtype: 'prunesimulatorKeepInput',
name: 'keep-monthly', name: 'keep-monthly',
allowBlank: true,
fieldLabel: 'keep-monthly', fieldLabel: 'keep-monthly',
minValue: 0,
value: 0,
fieldGroup: 'keep',
}, },
{ {
xtype: 'numberfield', xtype: 'prunesimulatorKeepInput',
name: 'keep-yearly', name: 'keep-yearly',
allowBlank: true,
fieldLabel: 'keep-yearly', fieldLabel: 'keep-yearly',
minValue: 0,
value: 0,
fieldGroup: 'keep',
}, },
], ],

732
docs/tape-backup.rst Normal file
View File

@ -0,0 +1,732 @@
Tape Backup
===========
Proxmox tape backup provides an easy way to store datastore content
onto magnetic tapes. This increases data safety because you get:
- an additional copy of the data
- to a different media type (tape)
- to an additional location (you can move tapes offsite)
In most restore jobs, only data from the last backup job is restored.
Restore requests decline further the older the data
gets. Considering this, tape backup may also help to reduce disk
usage, because you can safely remove data from disk once archived on
tape. This is especially true if you need to keep data for several
years.
Tape backups do not provide random access to the stored data. Instead,
you need to restore the data to disk before you can access it
again. Also, if you store your tapes offsite (using some kind of tape
vaulting service), you need to bring them onsite before you can do any
restore. So please consider that restores from tapes can take much
longer than restores from disk.
Tape Technology Primer
----------------------
.. _Linear Tape Open: https://en.wikipedia.org/wiki/Linear_Tape-Open
As of 2021, the only broadly available tape technology standard is
`Linear Tape Open`_, and different vendors offer LTO Ultrium tape
drives, autoloaders and LTO tape cartridges.
There are a few vendors offering proprietary drives with
slight advantages in performance and capacity, but they have
significant disadvantages:
- proprietary (single vendor)
- a much higher purchase cost
So we currently do not test such drives.
In general, LTO tapes offer the following advantages:
- Durable (30 years)
- High Capacity (12 TB)
- Relatively low cost per TB
- Cold Media
- Movable (storable inside vault)
- Multiple vendors (for both media and drives)
- Built-in AES-GCM encryption engine
Please note that `Proxmox Backup Server` already stores compressed
data, so we do not need/use the tape compression feature.
Supported Hardware
------------------
Proxmox Backup Server supports `Linear Tape Open`_ generation 4 (LTO-4)
or later. In general, all SCSI-2 tape drives supported by the Linux
kernel should work, but features like hardware encryption need LTO-4
or later.
Tape changer support is done using the Linux 'mtx' command line
tool. So any changer device supported by that tool should work.
Drive Performance
~~~~~~~~~~~~~~~~~
Current LTO-8 tapes provide read/write speeds of up to 360MB/s. This means
that it still takes a minimum of 9 hours to completely write or
read a single tape (even at maximum speed).
The only way to speed up that data rate is to use more than one
drive. That way you can run several backup jobs in parallel, or run
restore jobs while the other drives are used for backups.
Also consider that you need to read data first from your datastore
(disk). But a single spinning disk is unable to deliver data at this
rate. We measured a maximum rate of about 60MB/s to 100MB/s in practice,
so it takes 33 hours to read 12TB to fill up an LTO-8 tape. If you want
to run your tape at full speed, please make sure that the source
datastore is able to deliver that performance (e.g., by using SSDs).
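As a quick sanity check, these figures follow directly from dividing tape
capacity by sustained throughput; a minimal sketch, assuming the nominal
values quoted above:

.. code-block:: javascript

   // hours needed to stream `capacityBytes` at `bytesPerSecond`
   const hoursFor = (capacityBytes, bytesPerSecond) =>
       capacityBytes / bytesPerSecond / 3600;

   const LTO8 = 12e12; // 12 TB native LTO-8 capacity

   console.log(hoursFor(LTO8, 360e6).toFixed(1)); // ~9.3 h at full drive speed
   console.log(hoursFor(LTO8, 100e6).toFixed(1)); // ~33.3 h at 100 MB/s from disk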
Terminology
-----------
:Tape Labels: are used to uniquely identify a tape. You normally use
some sticky paper labels and apply them on the front of the
cartridge. We additionally store the label text magnetically on the
tape (first file on tape).
.. _Code 39: https://en.wikipedia.org/wiki/Code_39
.. _LTO Ultrium Cartridge Label Specification: https://www.ibm.com/support/pages/ibm-lto-ultrium-cartridge-label-specification
.. _LTO Barcode Generator: lto-barcode/index.html
:Barcodes: are a special form of tape labels, which are electronically
readable. Most LTO tape robots use an 8 character string encoded as
`Code 39`_, as defined in the `LTO Ultrium Cartridge Label
Specification`_.
You can either buy such barcode labels from your cartridge vendor,
or print them yourself. You can use our `LTO Barcode Generator`_ App
for that (a minimal usage sketch of the generator code follows this
terminology list).
.. Note:: Physical labels and the associated adhesive shall have an
environmental performance to match or exceed the environmental
specifications of the cartridge to which it is applied.
:Media Pools: A media pool is a logical container for tapes. A backup
job targets one media pool, so a job only uses tapes from that
pool. The pool additionally defines how long a backup job can
append data to tapes (allocation policy) and how long you want to
keep the data (retention policy).
:Media Set: A group of continuously written tapes (all from the same
media pool).
:Tape drive: The device used to read and write data to the tape. There
are standalone drives, but drives often ship within tape libraries.
:Tape changer: A device which can change the tapes inside a tape drive
(tape robot). They are usually part of a tape library.
.. _Tape Library: https://en.wikipedia.org/wiki/Tape_library
:`Tape library`_: A storage device that contains one or more tape drives,
a number of slots to hold tape cartridges, a barcode reader to
identify tape cartridges and an automated method for loading tapes
(a robot).
People also call this an 'autoloader', 'tape robot' or 'tape jukebox'.
:Inventory: The inventory stores the list of known tapes (with
additional status information).
:Catalog: A media catalog stores information about the media content.
Tape Quickstart
---------------
1. Configure your tape hardware (drives and changers).
2. Configure one or more media pools.
3. Label your tape cartridges.
4. Start your first tape backup job ...
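For a standalone drive, those four steps could look like this on the command
line (a sketch using only commands explained in the sections below; the device
path and the ``mydrive``, ``daily``, ``TAPE01`` and ``mystore`` names are
examples)::

# proxmox-tape drive create mydrive --path /dev/tape/by-id/scsi-12345-nst
# proxmox-tape pool create daily --drive mydrive
# proxmox-tape label --changer-id TAPE01 --pool daily
# proxmox-tape backup mystore daily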
Configuration
-------------
Please note that you can configure anything using the graphical user
interface or the command line interface. Both methods result in the
same configuration.
Tape changers
~~~~~~~~~~~~~
Tape changers (robots) are part of a `Tape Library`_. You can skip
this step if you are using a standalone drive.
Linux is able to auto-detect these devices, and you can get a list
of available devices using::
# proxmox-tape changer scan
┌─────────────────────────────┬─────────┬──────────────┬────────┐
│ path │ vendor │ model │ serial │
╞═════════════════════════════╪═════════╪══════════════╪════════╡
│ /dev/tape/by-id/scsi-CC2C52 │ Quantum │ Superloader3 │ CC2C52 │
└─────────────────────────────┴─────────┴──────────────┴────────┘
In order to use that device with Proxmox, you need to create a
configuration entry::
# proxmox-tape changer create sl3 --path /dev/tape/by-id/scsi-CC2C52
Where ``sl3`` is an arbitrary name you can choose.
.. Note:: Please use stable device path names from inside
``/dev/tape/by-id/``. Names like ``/dev/sg0`` may point to a
different device after reboot, and that is not what you want.
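The stable names can be listed with standard Linux tooling, for example::

# ls -l /dev/tape/by-id/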
You can show the final configuration with::
# proxmox-tape changer config sl3
┌──────┬─────────────────────────────┐
│ Name │ Value │
╞══════╪═════════════════════════════╡
│ name │ sl3 │
├──────┼─────────────────────────────┤
│ path │ /dev/tape/by-id/scsi-CC2C52 │
└──────┴─────────────────────────────┘
Or simply list all configured changer devices::
# proxmox-tape changer list
┌──────┬─────────────────────────────┬─────────┬──────────────┬────────────┐
│ name │ path │ vendor │ model │ serial │
╞══════╪═════════════════════════════╪═════════╪══════════════╪════════════╡
│ sl3 │ /dev/tape/by-id/scsi-CC2C52 │ Quantum │ Superloader3 │ CC2C52 │
└──────┴─────────────────────────────┴─────────┴──────────────┴────────────┘
The vendor, model and serial number are auto-detected, but are only
shown if the device is online.
To test your setup, please query the status of the changer device with::
# proxmox-tape changer status sl3
┌───────────────┬──────────┬────────────┬─────────────┐
│ entry-kind │ entry-id │ changer-id │ loaded-slot │
╞═══════════════╪══════════╪════════════╪═════════════╡
│ drive │ 0 │ vtape1 │ 1 │
├───────────────┼──────────┼────────────┼─────────────┤
│ slot │ 1 │ │ │
├───────────────┼──────────┼────────────┼─────────────┤
│ slot │ 2 │ vtape2 │ │
├───────────────┼──────────┼────────────┼─────────────┤
│ ... │ ... │ │ │
├───────────────┼──────────┼────────────┼─────────────┤
│ slot │ 16 │ │ │
└───────────────┴──────────┴────────────┴─────────────┘
Tape libraries usually provide some special import/export slots (also
called "mail slots"). Tapes inside those slots are acessible from
outside, making it easy to add/remove tapes to/from the library. Those
tapes are considered to be "offline", so backup jobs will not use
them. Such special slots are auto-detected and marked as
``import-export`` slots in the status command.
It's worth noting that some of the smaller tape libraries don't have
such slots. While they may have something called a "Mail Slot", that slot
is just a way to grab the tape out of the gripper; it cannot hold media
while the robot does other things. Such a "Mail Slot" is also not exposed
over the SCSI interface, so you won't see it in the status output.
As a workaround, you can mark some of the normal slots as export
slots. The software treats those slots like real ``import-export``
slots, and the media inside those slots is considered to be 'offline'
(not available for backup)::
# proxmox-tape changer update sl3 --export-slots 15,16
After that, you can see those artificial ``import-export`` slots in
the status output::
# proxmox-tape changer status sl3
┌───────────────┬──────────┬────────────┬─────────────┐
│ entry-kind │ entry-id │ changer-id │ loaded-slot │
╞═══════════════╪══════════╪════════════╪═════════════╡
│ drive │ 0 │ vtape1 │ 1 │
├───────────────┼──────────┼────────────┼─────────────┤
│ import-export │ 15 │ │ │
├───────────────┼──────────┼────────────┼─────────────┤
│ import-export │ 16 │ │ │
├───────────────┼──────────┼────────────┼─────────────┤
│ slot │ 1 │ │ │
├───────────────┼──────────┼────────────┼─────────────┤
│ slot │ 2 │ vtape2 │ │
├───────────────┼──────────┼────────────┼─────────────┤
│ ... │ ... │ │ │
├───────────────┼──────────┼────────────┼─────────────┤
│ slot │ 14 │ │ │
└───────────────┴──────────┴────────────┴─────────────┘
Tape drives
~~~~~~~~~~~
Linux is able to auto-detect tape drives, and you can get a list
of available tape drives using::
# proxmox-tape drive scan
┌────────────────────────────────┬────────┬─────────────┬────────┐
│ path │ vendor │ model │ serial │
╞════════════════════════════════╪════════╪═════════════╪════════╡
│ /dev/tape/by-id/scsi-12345-nst │ IBM │ ULT3580-TD4 │ 12345 │
└────────────────────────────────┴────────┴─────────────┴────────┘
In order to use that drive with Proxmox, you need to create a
configuration entry::
# proxmox-tape drive create mydrive --path /dev/tape/by-id/scsi-12345-nst
.. Note:: Please use stable device path names from inside
``/dev/tape/by-id/``. Names like ``/dev/nst0`` may point to a
different device after reboot, and that is not what you want.
If you have a tape library, you also need to set the associated
changer device::
# proxmox-tape drive update mydrive --changer sl3 --changer-drivenum 0
The ``--changer-drivenum`` option is only necessary if the tape library
includes more than one drive (the changer status command lists all
drive numbers).
You can show the final configuration with::
# proxmox-tape drive config mydrive
┌─────────┬────────────────────────────────┐
│ Name │ Value │
╞═════════╪════════════════════════════════╡
│ name │ mydrive │
├─────────┼────────────────────────────────┤
│ path │ /dev/tape/by-id/scsi-12345-nst │
├─────────┼────────────────────────────────┤
│ changer │ sl3 │
└─────────┴────────────────────────────────┘
.. NOTE:: The ``changer-drivenum`` value 0 is not stored in the
configuration, because that is the default.
To list all configured drives use::
# proxmox-tape drive list
┌──────────┬────────────────────────────────┬─────────┬────────┬─────────────┬────────┐
│ name │ path │ changer │ vendor │ model │ serial │
╞══════════╪════════════════════════════════╪═════════╪════════╪═════════════╪════════╡
│ mydrive │ /dev/tape/by-id/scsi-12345-nst │ sl3 │ IBM │ ULT3580-TD4 │ 12345 │
└──────────┴────────────────────────────────┴─────────┴────────┴─────────────┴────────┘
The vendor, model and serial number are auto-detected, but are only
shown if the device is online.
For testing, you can simply query the drive status with::
# proxmox-tape status --drive mydrive
┌───────────┬────────────────────────┐
│ Name │ Value │
╞═══════════╪════════════════════════╡
│ blocksize │ 0 │
├───────────┼────────────────────────┤
│ status │ DRIVE_OPEN | IM_REP_EN │
└───────────┴────────────────────────┘
.. NOTE:: Blocksize should always be 0 (variable block size
mode). This is the default anyway.
Media Pools
~~~~~~~~~~~
A media pool is a logical container for tapes. A backup job targets
one media pool, so a job only uses tapes from that pool.
.. topic:: Media Set
A media set is a group of continuously written tapes, used to split
the larger pool into smaller, restorable units. One or more backup
jobs write to a media set, producing an ordered group of
tapes. Media sets are identified by a unique ID. That ID and the
sequence number are stored on each tape of that set (tape label).
Media sets are the basic unit for restore tasks, i.e. you need all
tapes in the set to restore the media set content. Data is fully
deduplicated inside a media set.
.. topic:: Media Set Allocation Policy
The pool additionally defines how long backup jobs can append data
to a media set. The following settings are possible:
- Try to use the current media set.
This setting produces one large media set. While this is very
space efficient (deduplication, no unused space), it can lead to
long restore times, because restore jobs need to read all tapes in the
set.
.. NOTE:: Data is fully deduplicated inside a media set. This
also means that data is randomly distributed over the tapes in
the set, so restoring even a single VM may require reading data
from all tapes inside the media set.
Larger media sets are also more error-prone, because a single
damaged tape makes the restore fail.
Usage scenario: This is mostly used with tape libraries. You manually
trigger a new set creation by running a backup job with the
``--export`` option.
.. NOTE:: Retention period starts with the existence of a newer
media set.
- Always create a new media set.
With this setting each backup job creates a new media set. This
is less space efficient, because the last media from the last set
may not be fully written, leaving the remaining space unused.
The advantage is that this produces media sets of minimal
size. Small sets are easier to handle, you can move them to an
off-site vault, and restore is much faster.
.. NOTE:: Retention period starts with the creation time of the
media set.
- Create a new set when the specified Calendar Event triggers.
.. _systemd.time manpage: https://manpages.debian.org/buster/systemd/systemd.time.7.en.html
This allows you to specify points in time by using systemd-like
calendar event specifications (see the `systemd.time manpage`_).
For example, the value ``weekly`` (or ``Mon *-*-* 00:00:00``)
will create a new set each week.
This balances space efficiency against media count.
.. NOTE:: Retention period starts when the calendar event
triggers.
Additionally, the following events may allocate a new media set:

- The required tape is offline (and you use a tape library).

- The current set contains damaged or retired tapes.

- The media pool encryption configuration changed.

- Database consistency errors, e.g., if the inventory does not
contain the required media info, or contains conflicting information
(outdated data).
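For example, assuming the ``daily`` pool created below, you could switch it to
one new media set per week with (a sketch)::

# proxmox-tape pool update daily --allocation weekly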
.. topic:: Retention Policy
Defines how long we want to keep the data.
- Always overwrite media.
- Protect data for the duration specified.
We use systemd-like time spans to specify durations, e.g., ``2
weeks`` (see the `systemd.time manpage`_).
- Never overwrite data.
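For example, to protect data on the ``daily`` pool created below for two
weeks (a sketch; the duration uses the systemd-like time span syntax)::

# proxmox-tape pool update daily --retention "2 weeks"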
.. topic:: Hardware Encryption
LTO4 (or later) tape drives support hardware encryption. If you
configure the media pool to use encryption, all data written to the
tapes is encrypted using the configured key.
That way, unauthorized users cannot read data from the media,
e.g., if you lose a tape while shipping it to an offsite location.
.. Note:: If the backup client also encrypts data, data on tape
will be double encrypted.
The password-protected key is stored on each medium, so it is
possible to `restore the key <restore_encryption_key_>`_ using the
password. Please make sure that you remember the password, in case
you need to restore the key.
.. NOTE:: FIXME: Add note about global content namespace. (We do not store
the source datastore, so it is impossible to distinguish
store1:/vm/100 from store2:/vm/100. Please use different media
pools if the source is from a different name space)
The following command creates a new media pool::
// proxmox-tape pool create <name> --drive <string> [OPTIONS]
# proxmox-tape pool create daily --drive mydrive
Additional options can be set later, using the update command::
# proxmox-tape pool update daily --allocation daily --retention 7days
To list all configured pools use::
# proxmox-tape pool list
┌───────┬──────────┬────────────┬───────────┬──────────┐
│ name │ drive │ allocation │ retention │ template │
╞═══════╪══════════╪════════════╪═══════════╪══════════╡
│ daily │ mydrive │ daily │ 7days │ │
└───────┴──────────┴────────────┴───────────┴──────────┘
Tape Jobs
~~~~~~~~~
Administration
--------------
Many sub-commands of the ``proxmox-tape`` command line tool take a
parameter called ``--drive``, which specifies the tape drive you want
to work on. For convenience, you can set this in an environment
variable::
# export PROXMOX_TAPE_DRIVE=mydrive
You can then omit the ``--drive`` parameter from the command. If the
drive has an associated changer device, you may also omit the changer
parameter from commands that need a changer device, for example::

# proxmox-tape changer status

should display the status of the changer device associated with
drive ``mydrive``.
Label Tapes
~~~~~~~~~~~
By default, tape cartridges all look the same, so you need to put a
label on them for unique identification. First, put a sticky paper
label with some human-readable text on the cartridge.

If you use a `Tape Library`_, you should use an 8 character string
encoded as `Code 39`_, as defined in the `LTO Ultrium Cartridge Label
Specification`_. You can either buy such barcode labels from your
cartridge vendor, or print them yourself. You can use our `LTO Barcode
Generator`_ app for that.
Next, you need to write that same label text to the tape, so that the
software can uniquely identify the tape too.
For a standalone drive, manually insert the new tape cartridge into the
drive and run::
# proxmox-tape label --changer-id <label-text> [--pool <pool-name>]
You may omit the ``--pool`` argument to allow the tape to be used by any pool.
.. Note:: For safety reasons, this command fails if the tape contains
any data. If you want to overwrite it anyway, erase the tape first.
You can verify success by reading back the label::
# proxmox-tape read-label
┌─────────────────┬──────────────────────────────────────┐
│ Name │ Value │
╞═════════════════╪══════════════════════════════════════╡
│ changer-id │ vtape1 │
├─────────────────┼──────────────────────────────────────┤
│ uuid │ 7f42c4dd-9626-4d89-9f2b-c7bc6da7d533 │
├─────────────────┼──────────────────────────────────────┤
│ ctime │ Wed Jan 6 09:07:51 2021 │
├─────────────────┼──────────────────────────────────────┤
│ pool │ daily │
├─────────────────┼──────────────────────────────────────┤
│ media-set-uuid │ 00000000-0000-0000-0000-000000000000 │
├─────────────────┼──────────────────────────────────────┤
│ media-set-ctime │ Wed Jan 6 09:07:51 2021 │
└─────────────────┴──────────────────────────────────────┘
.. NOTE:: An all-zero ``media-set-uuid`` indicates an empty
tape (not yet used by any media set).
If you have a tape library, apply the sticky barcode label to the tape
cartridges first. Then load those empty tapes into the library. You
can then label all unlabeled tapes with a single command::
# proxmox-tape barcode-label [--pool <pool-name>]
Run Tape Backups
~~~~~~~~~~~~~~~~
To manually run a backup job use::
# proxmox-tape backup <store> <pool> [OPTIONS]
The following options are available:
--eject-media Eject media upon job completion.
It is normally good practice to eject the tape after use. This unmounts the
tape from the drive and protects the tape from dust.
--export-media-set Export media set upon job completion.
After a successful backup job, this moves all tapes from the used
media set into import-export slots. The operator can then pick up
those tapes and move them to a media vault.
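For example, a manually triggered job using the example datastore and pool
names from this chapter could look like this (a sketch)::

# proxmox-tape backup mystore daily --eject-media

or, on a library with import/export slots::

# proxmox-tape backup mystore daily --export-media-set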
Restore from Tape
~~~~~~~~~~~~~~~~~
Restore is done at media-set granularity, so you first need to find
out which media set contains the data you want to restore. This
information is stored in the media catalog. If you do not have media
catalogs, you need to restore them first. Please note that you need
the catalog to find your data, but restoring a complete media-set
does not require it.
The following command shows the media content (from catalog)::
# proxmox-tape media content
┌────────────┬──────┬──────────────────────────┬────────┬────────────────────────────────┬──────────────────────────────────────┐
│ label-text │ pool │ media-set-name │ seq-nr │ snapshot │ media-set-uuid │
╞════════════╪══════╪══════════════════════════╪════════╪════════════════════════════════╪══════════════════════════════════════╡
│ TEST01L8 │ p2 │ Wed Jan 13 13:55:55 2021 │ 0 │ vm/201/2021-01-11T10:43:48Z │ 9da37a55-aac7-4deb-91c6-482b3b675f30 │
├────────────┼──────┼──────────────────────────┼────────┼────────────────────────────────┼──────────────────────────────────────┤
│ ... │ ... │ ... │ ... │ ... │ ... │
└────────────┴──────┴──────────────────────────┴────────┴────────────────────────────────┴──────────────────────────────────────┘
A restore job reads the data from the media set and moves it back to
the data disk (datastore)::
// proxmox-tape restore <media-set-uuid> <datastore>
# proxmox-tape restore 9da37a55-aac7-4deb-91c6-482b3b675f30 mystore
Update Inventory
~~~~~~~~~~~~~~~~
Restore Catalog
~~~~~~~~~~~~~~~
Encryption Key Management
~~~~~~~~~~~~~~~~~~~~~~~~~
Creating a new encryption key::
# proxmox-tape key create --hint "tape pw 2020"
Tape Encryption Key Password: **********
Verify Password: **********
"14:f8:79:b9:f5:13:e5:dc:bf:b6:f9:88:48:51:81:dc:79:bf:a0:22:68:47:d1:73:35:2d:b6:20:e1:7f:f5:0f"
List existing encryption keys::
# proxmox-tape key list
┌───────────────────────────────────────────────────┬───────────────┐
│ fingerprint │ hint │
╞═══════════════════════════════════════════════════╪═══════════════╡
│ 14:f8:79:b9:f5:13:e5:dc: ... :b6:20:e1:7f:f5:0f │ tape pw 2020 │
└───────────────────────────────────────────────────┴───────────────┘
To show encryption key details::
# proxmox-tape key show 14:f8:79:b9:f5:13:e5:dc:...:b6:20:e1:7f:f5:0f
┌─────────────┬───────────────────────────────────────────────┐
│ Name │ Value │
╞═════════════╪═══════════════════════════════════════════════╡
│ kdf │ scrypt │
├─────────────┼───────────────────────────────────────────────┤
│ created │ Sat Jan 23 14:47:21 2021 │
├─────────────┼───────────────────────────────────────────────┤
│ modified │ Sat Jan 23 14:47:21 2021 │
├─────────────┼───────────────────────────────────────────────┤
│ fingerprint │ 14:f8:79:b9:f5:13:e5:dc:...:b6:20:e1:7f:f5:0f │
├─────────────┼───────────────────────────────────────────────┤
│ hint │ tape pw 2020 │
└─────────────┴───────────────────────────────────────────────┘
The ``paperkey`` subcommand can be used to create a QR encoded
version of a tape encryption key. The following command sends the output of the
``paperkey`` command to a text file, for easy printing::
# proxmox-tape key paperkey <fingerprint> --output-format text > qrkey.txt
.. _restore_encryption_key:
Restoring Encryption Keys
^^^^^^^^^^^^^^^^^^^^^^^^^
You can restore the encryption key from the tape, using the password
used to generate the key. First, load the tape you want to restore
into the drive. Then run::
# proxmox-tape key restore
Tape Encryption Key Password: ***********
If the password is correct, the key will be imported into the
database. Further restore jobs automatically use any available key.
Tape Cleaning
~~~~~~~~~~~~~
LTO tape drives require regular cleaning. This is done by loading a
cleaning cartridge into the drive, which is a manual task for
standalone drives.
For tape libraries, cleaning cartridges are identified by special
labels starting with the letters "CLN". For example, our tape library
has a cleaning cartridge inside slot 3::
# proxmox-tape changer status sl3
┌───────────────┬──────────┬────────────┬─────────────┐
│ entry-kind │ entry-id │ changer-id │ loaded-slot │
╞═══════════════╪══════════╪════════════╪═════════════╡
│ drive │ 0 │ vtape1 │ 1 │
├───────────────┼──────────┼────────────┼─────────────┤
│ slot │ 1 │ │ │
├───────────────┼──────────┼────────────┼─────────────┤
│ slot │ 2 │ vtape2 │ │
├───────────────┼──────────┼────────────┼─────────────┤
│ slot │ 3 │ CLN001CU │ │
├───────────────┼──────────┼────────────┼─────────────┤
│ ... │ ... │ │ │
└───────────────┴──────────┴────────────┴─────────────┘
To initiate a cleaning operation simply run::
# proxmox-tape clean
This command does the following:

- find the cleaning tape (in slot 3)

- unload the current media from the drive (back to slot 1)

- load the cleaning tape into the drive

- run the drive cleaning operation

- unload the cleaning tape (back to slot 3)


@@ -284,3 +284,104 @@ you can use the ``proxmox-backup-manager user permission`` command:
     Path: /datastore/store1
     - Datastore.Backup (*)
.. _user_tfa:
Two-factor authentication
-------------------------
Introduction
~~~~~~~~~~~~
Simple authentication requires only one secret piece of evidence (one factor)
with which a user can successfully claim an identity (authenticate), for
example, that you are allowed to log in as `root@pam` on a specific Proxmox
Backup Server.

If the password gets stolen, or leaked in another way, anybody can use it to
log in - even if they should not be allowed to do so.

With two-factor authentication (TFA), a user is asked for an additional factor
to prove their authenticity. The extra factor is different from a password
(something only the user knows); it is something only the user has, for
example, a piece of hardware (security key) or a secret saved on the user's
smartphone.

This means that a remote attacker can never get hold of such a physical
object. So, even if they knew your password, they could not successfully
authenticate as you, as your second factor is missing.
.. image:: images/screenshots/pbs-gui-tfa-login.png
:align: right
:alt: Add a new user
Available Second Factors
~~~~~~~~~~~~~~~~~~~~~~~~
You can set up more than one second factor, to avoid a situation in which
losing your smartphone or security key permanently locks you out of your
account.

Three different two-factor authentication methods are supported:
* TOTP (`Time-based One-Time Password <https://en.wikipedia.org/wiki/Time-based_One-Time_Password>`_).
A short code derived from a shared secret and the current time; it changes
every 30 seconds (see the example after this list).
* WebAuthn (`Web Authentication <https://en.wikipedia.org/wiki/WebAuthn>`_).
A general standard for authentication. It is implemented by various
security devices, like hardware keys or trusted platform modules (TPMs)
found in computers and smartphones.
* Single-use recovery keys. A list of keys which should either be printed out
and locked in a secure place, or saved digitally in an electronic vault.
Each key can be used only once; they are perfect for ensuring that you are
not locked out, even if all of your other second factors are lost or corrupted.
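As an illustration of how TOTP works: any device that knows the shared secret
can compute the current code. For example, with the common ``oathtool``
utility (not part of Proxmox Backup Server; the Base32 secret below is a
made-up example)::

# oathtool --totp --base32 JBSWY3DPEHPK3PXP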
Setup
~~~~~
.. _user_tfa_setup_totp:
TOTP
^^^^
.. image:: images/screenshots/pbs-gui-tfa-add-totp.png
:align: right
:alt: Add a new user
No server setup is required; simply install a TOTP app on your
smartphone (for example, `FreeOTP <https://freeotp.github.io/>`_) and use the
Proxmox Backup Server web-interface to add a TOTP factor.
.. _user_tfa_setup_webauthn:
WebAuthn
^^^^^^^^
For WebAuthn to work, you need to have two things:

* A trusted HTTPS certificate (for example, by using `Let's Encrypt
<https://pbs.proxmox.com/wiki/index.php/HTTPS_Certificate_Configuration>`_)

* A WebAuthn configuration set up (see *Configuration -> Authentication* in the
Proxmox Backup Server web-interface). This can be auto-filled in most setups.
Once you have fulfilled both of these requirements, you can add a WebAuthn
configuration in the *Access Control* panel.
.. _user_tfa_setup_recovery_keys:
Recovery Keys
^^^^^^^^^^^^^
.. image:: images/screenshots/pbs-gui-tfa-add-recovery-keys.png
:align: right
:alt: Add a new user
Recovery key codes do not need any preparation; you can simply create a
set of recovery keys in the *Access Control* panel.
.. note:: There can only be one set of single-use recovery keys per user at any
time.
TFA and Automated Access
~~~~~~~~~~~~~~~~~~~~~~~~
Two-factor authentication is only implemented for the web interface. You
should use :ref:`API Tokens <user_tokens>` for all other use cases,
especially non-interactive ones (for example, adding a Proxmox Backup
Server to Proxmox VE as a storage).
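Such a token can then be used against the REST API without any TFA challenge.
For example (a sketch; the host name, token name and secret are placeholders)::

# curl -H 'Authorization: PBSAPIToken=backup@pbs!mytoken:<secret>' \
    https://pbs.example.com:8007/api2/json/version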


@@ -28,7 +28,7 @@ async fn run() -> Result<(), Error> {
     let auth_id = Authid::root_auth_id();
 
-    let options = HttpClientOptions::new()
+    let options = HttpClientOptions::default()
         .interactive(true)
         .ticket_cache(true);


@@ -2,7 +2,7 @@ use std::future::Future;
 use std::pin::Pin;
 use std::task::{Context, Poll};
 
-use anyhow::{Error};
+use anyhow::Error;
 use futures::future::TryFutureExt;
 use futures::stream::Stream;
 use tokio::net::TcpStream;
@@ -38,11 +38,11 @@ impl Future for Process {
                     this.body.flow_control().release_capacity(chunk.len())?;
                     this.bytes += chunk.len();
                     // println!("GOT FRAME {}", chunk.len());
-                },
+                }
                 Some(Err(err)) => return Poll::Ready(Err(Error::from(err))),
                 None => {
                     this.trailers = true;
-                },
+                }
             }
         }
     }
@@ -52,7 +52,6 @@ impl Future for Process {
 fn send_request(
     mut client: h2::client::SendRequest<bytes::Bytes>,
 ) -> impl Future<Output = Result<usize, Error>> {
-
     println!("sending request");
 
     let request = http::Request::builder()
@@ -62,10 +61,10 @@ fn send_request(
     let (response, _stream) = client.send_request(request, true).unwrap();
 
-    response
-        .map_err(Error::from)
-        .and_then(|response| {
-            Process { body: response.into_body(), trailers: false, bytes: 0 }
-        })
+    response.map_err(Error::from).and_then(|response| Process {
+        body: response.into_body(),
+        trailers: false,
+        bytes: 0,
+    })
 }


@@ -5,6 +5,7 @@ use std::task::{Context, Poll};
 use anyhow::{format_err, Error};
 use futures::future::TryFutureExt;
 use futures::stream::Stream;
+use tokio::net::TcpStream;
 
 // Simple H2 client to test H2 download speed using h2s-server.rs
@@ -37,11 +38,11 @@ impl Future for Process {
                     this.body.flow_control().release_capacity(chunk.len())?;
                     this.bytes += chunk.len();
                     // println!("GOT FRAME {}", chunk.len());
-                },
+                }
                 Some(Err(err)) => return Poll::Ready(Err(Error::from(err))),
                 None => {
                     this.trailers = true;
-                },
+                }
             }
         }
     }
@@ -60,10 +61,10 @@ fn send_request(
     let (response, _stream) = client.send_request(request, true).unwrap();
 
-    response
-        .map_err(Error::from)
-        .and_then(|response| {
-            Process { body: response.into_body(), trailers: false, bytes: 0 }
-        })
+    response.map_err(Error::from).and_then(|response| Process {
+        body: response.into_body(),
+        trailers: false,
+        bytes: 0,
+    })
 }
@@ -74,22 +75,22 @@ fn main() -> Result<(), Error> {
 async fn run() -> Result<(), Error> {
     let start = std::time::SystemTime::now();
 
-    let conn =
-        tokio::net::TcpStream::connect(std::net::SocketAddr::from(([127,0,0,1], 8008))).await?;
+    let conn = TcpStream::connect(std::net::SocketAddr::from(([127, 0, 0, 1], 8008))).await?;
     conn.set_nodelay(true).unwrap();
-    conn.set_recv_buffer_size(1024*1024).unwrap();
 
     use openssl::ssl::{SslConnector, SslMethod};
     let mut ssl_connector_builder = SslConnector::builder(SslMethod::tls()).unwrap();
     ssl_connector_builder.set_verify(openssl::ssl::SslVerifyMode::NONE);
 
-    let conn =
-        tokio_openssl::connect(
-            ssl_connector_builder.build().configure()?,
-            "localhost",
-            conn,
-        )
+    let ssl = ssl_connector_builder
+        .build()
+        .configure()?
+        .into_ssl("localhost")?;
+
+    let conn = tokio_openssl::SslStream::new(ssl, conn)?;
+    let mut conn = Box::pin(conn);
+    conn.as_mut()
+        .connect()
         .await
         .map_err(|err| format_err!("connect failed - {}", err))?;
@@ -100,31 +101,25 @@ async fn run() -> Result<(), Error> {
         .handshake(conn)
         .await?;
 
-    // Spawn a task to run the conn...
     tokio::spawn(async move {
-        if let Err(e) = h2.await {
-            println!("GOT ERR={:?}", e);
+        if let Err(err) = h2.await {
+            println!("GOT ERR={:?}", err);
         }
     });
 
     let mut bytes = 0;
-    for _ in 0..100 {
-        match send_request(client.clone()).await {
-            Ok(b) => {
-                bytes += b;
-            }
-            Err(e) => {
-                println!("ERROR {}", e);
-                return Ok(());
-            }
-        }
+    for _ in 0..2000 {
+        bytes += send_request(client.clone()).await?;
     }
 
     let elapsed = start.elapsed().unwrap();
-    let elapsed = (elapsed.as_secs() as f64) +
-        (elapsed.subsec_millis() as f64)/1000.0;
+    let elapsed = (elapsed.as_secs() as f64) + (elapsed.subsec_millis() as f64) / 1000.0;
 
-    println!("Downloaded {} bytes, {} MB/s", bytes, (bytes as f64)/(elapsed*1024.0*1024.0));
+    println!(
+        "Downloaded {} bytes, {} MB/s",
+        bytes,
+        (bytes as f64) / (elapsed * 1024.0 * 1024.0)
+    );
 
     Ok(())
 }


@@ -2,14 +2,12 @@ use std::sync::Arc;
 
 use anyhow::{format_err, Error};
 use futures::*;
-use hyper::{Request, Response, Body};
-use openssl::ssl::{SslMethod, SslAcceptor, SslFiletype};
+use hyper::{Body, Request, Response};
+use openssl::ssl::{SslAcceptor, SslFiletype, SslMethod};
 use tokio::net::{TcpListener, TcpStream};
 
 use proxmox_backup::configdir;
 
-// Simple H2 server to test H2 speed with h2s-client.rs
-
 fn main() -> Result<(), Error> {
     proxmox_backup::tools::runtime::main(run())
 }
@@ -19,22 +17,23 @@ async fn run() -> Result<(), Error> {
     let cert_path = configdir!("/proxy.pem");
 
     let mut acceptor = SslAcceptor::mozilla_intermediate(SslMethod::tls()).unwrap();
-    acceptor.set_private_key_file(key_path, SslFiletype::PEM)
+    acceptor
+        .set_private_key_file(key_path, SslFiletype::PEM)
         .map_err(|err| format_err!("unable to read proxy key {} - {}", key_path, err))?;
-    acceptor.set_certificate_chain_file(cert_path)
+    acceptor
+        .set_certificate_chain_file(cert_path)
         .map_err(|err| format_err!("unable to read proxy cert {} - {}", cert_path, err))?;
     acceptor.check_private_key().unwrap();
 
     let acceptor = Arc::new(acceptor.build());
 
-    let mut listener = TcpListener::bind(std::net::SocketAddr::from(([127,0,0,1], 8008))).await?;
+    let listener = TcpListener::bind(std::net::SocketAddr::from(([127, 0, 0, 1], 8008))).await?;
 
     println!("listening on {:?}", listener.local_addr());
 
     loop {
         let (socket, _addr) = listener.accept().await?;
-        tokio::spawn(handle_connection(socket, Arc::clone(&acceptor))
-            .map(|res| {
+        tokio::spawn(handle_connection(socket, Arc::clone(&acceptor)).map(|res| {
             if let Err(err) = res {
                 eprintln!("Error: {}", err);
             }
@@ -42,15 +41,14 @@ async fn run() -> Result<(), Error> {
     }
 }
 
-async fn handle_connection(
-    socket: TcpStream,
-    acceptor: Arc<SslAcceptor>,
-) -> Result<(), Error> {
+async fn handle_connection(socket: TcpStream, acceptor: Arc<SslAcceptor>) -> Result<(), Error> {
     socket.set_nodelay(true).unwrap();
-    socket.set_send_buffer_size(1024*1024).unwrap();
-    socket.set_recv_buffer_size(1024*1024).unwrap();
 
-    let socket = tokio_openssl::accept(acceptor.as_ref(), socket).await?;
+    let ssl = openssl::ssl::Ssl::new(acceptor.context())?;
+    let stream = tokio_openssl::SslStream::new(ssl, socket)?;
+    let mut stream = Box::pin(stream);
+    stream.as_mut().accept().await?;
 
     let mut http = hyper::server::conn::Http::new();
     http.http2_only(true);
@@ -61,7 +59,7 @@ async fn handle_connection(
     let service = hyper::service::service_fn(|_req: Request<Body>| {
         println!("Got request");
-        let buffer = vec![65u8; 1024*1024]; // nonsense [A,A,A,A...]
+        let buffer = vec![65u8; 4 * 1024 * 1024]; // nonsense [A,A,A,A...]
         let body = Body::from(buffer);
 
         let response = Response::builder()
@@ -72,7 +70,7 @@ async fn handle_connection(
         future::ok::<_, Error>(response)
     });
 
-    http.serve_connection(socket, service)
+    http.serve_connection(stream, service)
         .map_err(Error::from)
         .await?;


@@ -1,26 +1,21 @@
-use anyhow::{Error};
+use anyhow::Error;
 use futures::*;
+use hyper::{Body, Request, Response};
+use tokio::net::{TcpListener, TcpStream};
 
-// Simple H2 server to test H2 speed with h2client.rs
-
-use tokio::net::TcpListener;
-use tokio::io::{AsyncRead, AsyncWrite};
-
-use proxmox_backup::client::pipe_to_stream::PipeToSendStream;
-
 fn main() -> Result<(), Error> {
     proxmox_backup::tools::runtime::main(run())
 }
 
 async fn run() -> Result<(), Error> {
-    let mut listener = TcpListener::bind(std::net::SocketAddr::from(([127,0,0,1], 8008))).await?;
+    let listener = TcpListener::bind(std::net::SocketAddr::from(([127, 0, 0, 1], 8008))).await?;
 
     println!("listening on {:?}", listener.local_addr());
 
     loop {
         let (socket, _addr) = listener.accept().await?;
-        tokio::spawn(handle_connection(socket)
-            .map(|res| {
+        tokio::spawn(handle_connection(socket).map(|res| {
             if let Err(err) = res {
                 eprintln!("Error: {}", err);
             }
@@ -28,24 +23,33 @@ async fn run() -> Result<(), Error> {
     }
 }
 
-async fn handle_connection<T: AsyncRead + AsyncWrite + Unpin>(socket: T) -> Result<(), Error> {
-    let mut conn = h2::server::handshake(socket).await?;
-    println!("H2 connection bound");
+async fn handle_connection(socket: TcpStream) -> Result<(), Error> {
+    socket.set_nodelay(true).unwrap();
 
-    while let Some((request, mut respond)) = conn.try_next().await? {
-        println!("GOT request: {:?}", request);
+    let mut http = hyper::server::conn::Http::new();
+    http.http2_only(true);
+    // increase window size: todo - find optiomal size
+    let max_window_size = (1 << 31) - 2;
+    http.http2_initial_stream_window_size(max_window_size);
+    http.http2_initial_connection_window_size(max_window_size);
 
-        let response = http::Response::builder()
+    let service = hyper::service::service_fn(|_req: Request<Body>| {
+        println!("Got request");
+        let buffer = vec![65u8; 4 * 1024 * 1024]; // nonsense [A,A,A,A...]
+        let body = Body::from(buffer);
+
+        let response = Response::builder()
             .status(http::StatusCode::OK)
-            .body(())
+            .header(http::header::CONTENT_TYPE, "application/octet-stream")
+            .body(body)
             .unwrap();
+        future::ok::<_, Error>(response)
+    });
 
-        let send = respond.send_response(response, false).unwrap();
-        let data = vec![65u8; 1024*1024];
-        PipeToSendStream::new(bytes::Bytes::from(data), send).await?;
-        println!("DATA SENT");
-    }
-
-    println!("H2 connection CLOSE !");
+    http.serve_connection(socket, service)
        .map_err(Error::from)
        .await?;
 
     Ok(())
 }


@@ -10,7 +10,7 @@ async fn upload_speed() -> Result<f64, Error> {
     let auth_id = Authid::root_auth_id();
 
-    let options = HttpClientOptions::new()
+    let options = HttpClientOptions::default()
         .interactive(true)
         .ticket_cache(true);


@@ -1,3 +1,5 @@
+//! The Proxmox Backup Server API
+
 pub mod access;
 pub mod admin;
 pub mod backup;
@@ -9,6 +11,7 @@ pub mod types;
 pub mod version;
 pub mod ping;
 pub mod pull;
+pub mod tape;
 mod helpers;
 
 use proxmox::api::router::SubdirMap;
@@ -17,7 +20,7 @@ use proxmox::list_subdirs_api_method;
 const NODES_ROUTER: Router = Router::new().match_all("node", &node::ROUTER);
 
-pub const SUBDIRS: SubdirMap = &[
+const SUBDIRS: SubdirMap = &[
     ("access", &access::ROUTER),
     ("admin", &admin::ROUTER),
     ("backup", &backup::ROUTER),
@@ -27,6 +30,7 @@ pub const SUBDIRS: SubdirMap = &[
     ("pull", &pull::ROUTER),
     ("reader", &reader::ROUTER),
     ("status", &status::ROUTER),
+    ("tape", &tape::ROUTER),
     ("version", &version::ROUTER),
 ];


@@ -1,36 +1,52 @@
+//! Access control (Users, Permissions and Authentication)
+
 use anyhow::{bail, format_err, Error};
 
 use serde_json::{json, Value};
 use std::collections::HashMap;
 use std::collections::HashSet;
 
-use proxmox::api::{api, RpcEnvironment, Permission};
 use proxmox::api::router::{Router, SubdirMap};
-use proxmox::{sortable, identity};
+use proxmox::api::{api, Permission, RpcEnvironment};
 use proxmox::{http_err, list_subdirs_api_method};
+use proxmox::{identity, sortable};
 
-use crate::tools::ticket::{self, Empty, Ticket};
-use crate::auth_helpers::*;
 use crate::api2::types::*;
+use crate::auth_helpers::*;
+use crate::server::ticket::ApiTicket;
+use crate::tools::ticket::{self, Empty, Ticket};
 
 use crate::config::acl as acl_config;
-use crate::config::acl::{PRIVILEGES, PRIV_SYS_AUDIT, PRIV_PERMISSIONS_MODIFY};
+use crate::config::acl::{PRIVILEGES, PRIV_PERMISSIONS_MODIFY, PRIV_SYS_AUDIT};
 use crate::config::cached_user_info::CachedUserInfo;
+use crate::config::tfa::TfaChallenge;
 
-pub mod user;
-pub mod domain;
 pub mod acl;
+pub mod domain;
 pub mod role;
+pub mod tfa;
+pub mod user;
+
+#[allow(clippy::large_enum_variant)]
+enum AuthResult {
+    /// Successful authentication which does not require a new ticket.
+    Success,
+
+    /// Successful authentication which requires a ticket to be created.
+    CreateTicket,
+
+    /// A partial ticket which requires a 2nd factor will be created.
+    Partial(TfaChallenge),
+}
 
-/// returns Ok(true) if a ticket has to be created
-/// and Ok(false) if not
 fn authenticate_user(
     userid: &Userid,
     password: &str,
     path: Option<String>,
     privs: Option<String>,
     port: Option<u16>,
-) -> Result<bool, Error> {
+    tfa_challenge: Option<String>,
+) -> Result<AuthResult, Error> {
     let user_info = CachedUserInfo::new()?;
 
     let auth_id = Authid::from(userid.clone());
@@ -38,12 +54,16 @@ fn authenticate_user(
         bail!("user account disabled or expired.");
     }
 
+    if let Some(tfa_challenge) = tfa_challenge {
+        return authenticate_2nd(userid, &tfa_challenge, password);
+    }
+
     if password.starts_with("PBS:") {
         if let Ok(ticket_userid) = Ticket::<Userid>::parse(password)
             .and_then(|ticket| ticket.verify(public_auth_key(), "PBS", None))
         {
             if *userid == ticket_userid {
-                return Ok(true);
+                return Ok(AuthResult::CreateTicket);
             }
             bail!("ticket login failed - wrong userid");
         }
@@ -53,17 +73,17 @@ fn authenticate_user(
         }
 
         let path = path.ok_or_else(|| format_err!("missing path for termproxy ticket"))?;
-        let privilege_name = privs
-            .ok_or_else(|| format_err!("missing privilege name for termproxy ticket"))?;
+        let privilege_name =
+            privs.ok_or_else(|| format_err!("missing privilege name for termproxy ticket"))?;
         let port = port.ok_or_else(|| format_err!("missing port for termproxy ticket"))?;
 
-        if let Ok(Empty) = Ticket::parse(password)
-            .and_then(|ticket| ticket.verify(
+        if let Ok(Empty) = Ticket::parse(password).and_then(|ticket| {
+            ticket.verify(
                 public_auth_key(),
                 ticket::TERM_PREFIX,
                 Some(&ticket::term_aad(userid, &path, port)),
-            ))
-        {
+            )
+        }) {
             for (name, privilege) in PRIVILEGES {
                 if *name == privilege_name {
                     let mut path_vec = Vec::new();
@@ -73,7 +93,7 @@ fn authenticate_user(
                     }
 
                     user_info.check_privs(&auth_id, &path_vec, *privilege, false)?;
-                    return Ok(false);
+                    return Ok(AuthResult::Success);
                 }
             }
 
@@ -81,8 +101,26 @@ fn authenticate_user(
         }
     }
 
-    let _ = crate::auth::authenticate_user(userid, password)?;
-    Ok(true)
+    let _: () = crate::auth::authenticate_user(userid, password)?;
+
+    Ok(match crate::config::tfa::login_challenge(userid)? {
+        None => AuthResult::CreateTicket,
+        Some(challenge) => AuthResult::Partial(challenge),
+    })
+}
+
+fn authenticate_2nd(
+    userid: &Userid,
+    challenge_ticket: &str,
+    response: &str,
+) -> Result<AuthResult, Error> {
+    let challenge: TfaChallenge = Ticket::<ApiTicket>::parse(&challenge_ticket)?
+        .verify_with_time_frame(public_auth_key(), "PBS", Some(userid.as_str()), -60..600)?
+        .require_partial()?;
+
+    let _: () = crate::config::tfa::verify_challenge(userid, &challenge, response.parse()?)?;
+
+    Ok(AuthResult::CreateTicket)
 }
 
 #[api(
@@ -109,6 +147,11 @@ fn authenticate_user(
                 description: "Port for verifying terminal tickets.",
                 optional: true,
             },
+            "tfa-challenge": {
+                type: String,
+                description: "The signed TFA challenge string the user wants to respond to.",
+                optional: true,
+            },
         },
     },
     returns: {
@@ -123,7 +166,9 @@ fn authenticate_user(
             },
             CSRFPreventionToken: {
                 type: String,
-                description: "Cross Site Request Forgery Prevention Token.",
+                description:
+                    "Cross Site Request Forgery Prevention Token. \
+                     For partial tickets this is the string \"invalid\".",
             },
         },
     },
@@ -135,21 +180,24 @@ fn authenticate_user(
 /// Create or verify authentication ticket.
 ///
 /// Returns: An authentication ticket with additional infos.
-fn create_ticket(
+pub fn create_ticket(
     username: Userid,
     password: String,
     path: Option<String>,
     privs: Option<String>,
     port: Option<u16>,
+    tfa_challenge: Option<String>,
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Value, Error> {
-    match authenticate_user(&username, &password, path, privs, port) {
-        Ok(true) => {
-            let ticket = Ticket::new("PBS", &username)?.sign(private_auth_key(), None)?;
+    match authenticate_user(&username, &password, path, privs, port, tfa_challenge) {
+        Ok(AuthResult::Success) => Ok(json!({ "username": username })),
+        Ok(AuthResult::CreateTicket) => {
+            let api_ticket = ApiTicket::full(username.clone());
+            let ticket = Ticket::new("PBS", &api_ticket)?.sign(private_auth_key(), None)?;
             let token = assemble_csrf_prevention_token(csrf_secret(), &username);
 
-            crate::server::rest::auth_logger()?.log(format!("successful auth for user '{}'", username));
+            crate::server::rest::auth_logger()?
+                .log(format!("successful auth for user '{}'", username));
 
             Ok(json!({
                 "username": username,
@@ -157,9 +205,16 @@ fn create_ticket(
                 "CSRFPreventionToken": token,
             }))
         }
-        Ok(false) => Ok(json!({
-            "username": username,
-        })),
+        Ok(AuthResult::Partial(challenge)) => {
+            let api_ticket = ApiTicket::partial(challenge);
+            let ticket = Ticket::new("PBS", &api_ticket)?
+                .sign(private_auth_key(), Some(username.as_str()))?;
+            Ok(json!({
+                "username": username,
+                "ticket": ticket,
+                "CSRFPreventionToken": "invalid",
+            }))
+        }
         Err(err) => {
             let client_ip = match rpcenv.get_client_ip().map(|addr| addr.ip()) {
                 Some(ip) => format!("{}", ip),
@@ -181,6 +236,7 @@ fn create_ticket(
 }
 
 #[api(
+    protected: true,
     input: {
         properties: {
             userid: {
@@ -192,36 +248,42 @@ fn create_ticket(
             },
         },
         access: {
-            description: "Anybody is allowed to change there own password. In addition, users with 'Permissions:Modify' privilege may change any password.",
+            description: "Everybody is allowed to change their own password. In addition, users with 'Permissions:Modify' privilege may change any password on @pbs realm.",
            permission: &Permission::Anybody,
        },
 )]
 /// Change user password
 ///
 /// Each user is allowed to change his own password. Superuser
 /// can change all passwords.
-fn change_password(
+pub fn change_password(
     userid: Userid,
     password: String,
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Value, Error> {
-    let current_user: Userid = rpcenv
+    let current_auth: Authid = rpcenv
         .get_auth_id()
-        .ok_or_else(|| format_err!("unknown user"))?
+        .ok_or_else(|| format_err!("no authid available"))?
         .parse()?;
-    let current_auth = Authid::from(current_user.clone());
 
-    let mut allowed = userid == current_user;
+    if current_auth.is_token() {
+        bail!("API tokens cannot access this API endpoint");
+    }
 
-    if userid == "root@pam" { allowed = true; }
+    let current_user = current_auth.user();
+
+    let mut allowed = userid == *current_user;
 
     if !allowed {
         let user_info = CachedUserInfo::new()?;
         let privs = user_info.lookup_privs(&current_auth, &[]);
-        if (privs & PRIV_PERMISSIONS_MODIFY) != 0 { allowed = true; }
-    }
+        if user_info.is_superuser(&current_auth) {
+            allowed = true;
+        }
+        if (privs & PRIV_PERMISSIONS_MODIFY) != 0 && userid.realm() != "pam" {
+            allowed = true;
+        }
+    };
 
     if !allowed {
         bail!("you are not authorized to change the password.");
@@ -270,33 +332,26 @@ pub fn list_permissions(
     let user_info = CachedUserInfo::new()?;
     let user_privs = user_info.lookup_privs(&current_auth_id, &["access"]);
 
-    let auth_id = if user_privs & PRIV_SYS_AUDIT == 0 {
-        match auth_id {
-            Some(auth_id) => {
-                if auth_id == current_auth_id {
-                    auth_id
-                } else if auth_id.is_token()
-                    && !current_auth_id.is_token()
-                    && auth_id.user() == current_auth_id.user() {
-                    auth_id
-                } else {
-                    bail!("not allowed to list permissions of {}", auth_id);
-                }
-            },
-            None => current_auth_id,
-        }
-    } else {
-        match auth_id {
-            Some(auth_id) => auth_id,
-            None => current_auth_id,
-        }
+    let auth_id = match auth_id {
+        Some(auth_id) if auth_id == current_auth_id => current_auth_id,
+        Some(auth_id) => {
+            if user_privs & PRIV_SYS_AUDIT != 0
+                || (auth_id.is_token()
+                    && !current_auth_id.is_token()
+                    && auth_id.user() == current_auth_id.user())
+            {
+                auth_id
+            } else {
+                bail!("not allowed to list permissions of {}", auth_id);
+            }
+        },
+        None => current_auth_id,
     };
 
     fn populate_acl_paths(
         mut paths: HashSet<String>,
         node: acl_config::AclTreeNode,
-        path: &str
+        path: &str,
     ) -> HashSet<String> {
         for (sub_path, child_node) in node.children {
             let sub_path = format!("{}/{}", path, &sub_path);
@@ -311,7 +366,7 @@ pub fn list_permissions(
             let mut paths = HashSet::new();
             paths.insert(path);
             paths
-        },
+        }
         None => {
             let mut paths = HashSet::new();
@@ -326,31 +381,35 @@ pub fn list_permissions(
             paths.insert("/system".to_string());
             paths
-        },
+        }
     };
 
-    let map = paths
-        .into_iter()
-        .fold(HashMap::new(), |mut map: HashMap<String, HashMap<String, bool>>, path: String| {
+    let map = paths.into_iter().fold(
+        HashMap::new(),
+        |mut map: HashMap<String, HashMap<String, bool>>, path: String| {
             let split_path = acl_config::split_acl_path(path.as_str());
             let (privs, propagated_privs) = user_info.lookup_privs_details(&auth_id, &split_path);
 
             match privs {
                 0 => map, // Don't leak ACL paths where we don't have any privileges
                 _ => {
-                    let priv_map = PRIVILEGES
-                        .iter()
-                        .fold(HashMap::new(), |mut priv_map, (name, value)| {
-                            if value & privs != 0 {
-                                priv_map.insert(name.to_string(), value & propagated_privs != 0);
-                            }
-                            priv_map
-                        });
+                    let priv_map =
+                        PRIVILEGES
+                            .iter()
+                            .fold(HashMap::new(), |mut priv_map, (name, value)| {
+                                if value & privs != 0 {
+                                    priv_map
+                                        .insert(name.to_string(), value & propagated_privs != 0);
+                                }
+                                priv_map
+                            });
 
                     map.insert(path, priv_map);
                     map
-                },
-            }});
+                }
+            }
+        },
+    );
 
     Ok(map)
 }
@@ -358,21 +417,16 @@ pub fn list_permissions(
 #[sortable]
 const SUBDIRS: SubdirMap = &sorted!([
     ("acl", &acl::ROUTER),
+    ("password", &Router::new().put(&API_METHOD_CHANGE_PASSWORD)),
     (
-        "password", &Router::new()
-            .put(&API_METHOD_CHANGE_PASSWORD)
-    ),
-    (
-        "permissions", &Router::new()
-            .get(&API_METHOD_LIST_PERMISSIONS)
-    ),
-    (
-        "ticket", &Router::new()
-            .post(&API_METHOD_CREATE_TICKET)
+        "permissions",
+        &Router::new().get(&API_METHOD_LIST_PERMISSIONS)
     ),
+    ("ticket", &Router::new().post(&API_METHOD_CREATE_TICKET)),
     ("domains", &domain::ROUTER),
     ("roles", &role::ROUTER),
     ("users", &user::ROUTER),
+    ("tfa", &tfa::ROUTER),
 ]);
 
 pub const ROUTER: Router = Router::new()


@@ -1,5 +1,6 @@
+//! Manage Access Control Lists
+
 use anyhow::{bail, Error};
-use ::serde::{Deserialize, Serialize};
 
 use proxmox::api::{api, Router, RpcEnvironment, Permission};
 use proxmox::tools::fs::open_file_locked;
@@ -9,36 +10,6 @@ use crate::config::acl;
 use crate::config::acl::{Role, PRIV_SYS_AUDIT, PRIV_PERMISSIONS_MODIFY};
 use crate::config::cached_user_info::CachedUserInfo;
 
-#[api(
-    properties: {
-        propagate: {
-            schema: ACL_PROPAGATE_SCHEMA,
-        },
-        path: {
-            schema: ACL_PATH_SCHEMA,
-        },
-        ugid_type: {
-            schema: ACL_UGID_TYPE_SCHEMA,
-        },
-        ugid: {
-            type: String,
-            description: "User or Group ID.",
-        },
-        roleid: {
-            type: Role,
-        }
-    }
-)]
-#[derive(Serialize, Deserialize)]
-/// ACL list entry.
-pub struct AclListItem {
-    path: String,
-    ugid: String,
-    ugid_type: String,
-    propagate: bool,
-    roleid: String,
-}
-
 fn extract_acl_node_data(
     node: &acl::AclTreeNode,
     path: &str,
@@ -72,7 +43,7 @@ fn extract_acl_node_data(
         }
     }
     for (group, roles) in &node.groups {
-        if let Some(_) = token_user {
+        if token_user.is_some() {
             continue;
         }
@@ -194,6 +165,7 @@ pub fn read_acl(
     },
 )]
 /// Update Access Control List (ACLs).
+#[allow(clippy::too_many_arguments)]
 pub fn update_acl(
     path: String,
     role: String,
@@ -210,7 +182,7 @@ pub fn update_acl(
     let top_level_privs = user_info.lookup_privs(&current_auth_id, &["access", "acl"]);
     if top_level_privs & PRIV_PERMISSIONS_MODIFY == 0 {
-        if let Some(_) = group {
+        if group.is_some() {
             bail!("Unprivileged users are not allowed to create group ACL item.");
         }


@@ -1,3 +1,5 @@
+//! List Authentication domains/realms
+
 use anyhow::{Error};
 use serde_json::{json, Value};


@@ -1,3 +1,5 @@
+//! Manage Roles with privileges
+
 use anyhow::Error;
 use serde_json::{json, Value};
@@ -46,7 +48,7 @@ fn list_roles() -> Result<Value, Error> {
     let mut priv_list = Vec::new();
     for (name, privilege) in PRIVILEGES.iter() {
         if privs & privilege > 0 {
-            priv_list.push(name.clone());
+            priv_list.push(name);
         }
     }
     list.push(json!({ "roleid": role, "privs": priv_list, "comment": comment }));

src/api2/access/tfa.rs (new file)

@@ -0,0 +1,594 @@
//! Two Factor Authentication
use anyhow::{bail, format_err, Error};
use serde::{Deserialize, Serialize};
use proxmox::api::{api, Permission, Router, RpcEnvironment};
use proxmox::tools::tfa::totp::Totp;
use proxmox::{http_bail, http_err};
use crate::api2::types::{Authid, Userid, PASSWORD_SCHEMA};
use crate::config::acl::{PRIV_PERMISSIONS_MODIFY, PRIV_SYS_AUDIT};
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::tfa::{TfaInfo, TfaUserData};
/// Perform first-factor (password) authentication only. Ignore password for the root user.
/// Otherwise check the current user's password.
///
/// This means that user admins need to type in their own password while editing a user, and
/// regular users, who can only change their own TFA settings (checked at the API level), can
/// change their own settings using their own password.
fn tfa_update_auth(
rpcenv: &mut dyn RpcEnvironment,
userid: &Userid,
password: Option<String>,
must_exist: bool,
) -> Result<(), Error> {
let authid: Authid = rpcenv.get_auth_id().unwrap().parse()?;
if authid.user() != Userid::root_userid() {
let password = password.ok_or_else(|| http_err!(UNAUTHORIZED, "missing password"))?;
let _: () = crate::auth::authenticate_user(authid.user(), &password)
.map_err(|err| http_err!(UNAUTHORIZED, "{}", err))?;
}
// After authentication, verify that the to-be-modified user actually exists:
if must_exist && authid.user() != userid {
let (config, _digest) = crate::config::user::config()?;
if config
.lookup::<crate::config::user::User>("user", userid.as_str())
.is_err()
{
http_bail!(UNAUTHORIZED, "user '{}' does not exist.", userid);
}
}
Ok(())
}
#[api]
/// A TFA entry type.
#[derive(Deserialize, Serialize)]
#[serde(rename_all = "lowercase")]
enum TfaType {
/// A TOTP entry type.
Totp,
/// A U2F token entry.
U2f,
/// A Webauthn token entry.
Webauthn,
/// Recovery tokens.
Recovery,
}
#[api(
properties: {
type: { type: TfaType },
info: { type: TfaInfo },
},
)]
/// A TFA entry for a user.
#[derive(Deserialize, Serialize)]
#[serde(deny_unknown_fields)]
struct TypedTfaInfo {
#[serde(rename = "type")]
pub ty: TfaType,
#[serde(flatten)]
pub info: TfaInfo,
}
fn to_data(data: TfaUserData) -> Vec<TypedTfaInfo> {
let mut out = Vec::with_capacity(
data.totp.len()
+ data.u2f.len()
+ data.webauthn.len()
+ if data.recovery().is_some() { 1 } else { 0 },
);
if let Some(recovery) = data.recovery() {
out.push(TypedTfaInfo {
ty: TfaType::Recovery,
info: TfaInfo::recovery(recovery.created),
})
}
for entry in data.totp {
out.push(TypedTfaInfo {
ty: TfaType::Totp,
info: entry.info,
});
}
for entry in data.webauthn {
out.push(TypedTfaInfo {
ty: TfaType::Webauthn,
info: entry.info,
});
}
for entry in data.u2f {
out.push(TypedTfaInfo {
ty: TfaType::U2f,
info: entry.info,
});
}
out
}
/// Iterate through tuples of `(type, index, id)`.
fn tfa_id_iter(data: &TfaUserData) -> impl Iterator<Item = (TfaType, usize, &str)> {
data.totp
.iter()
.enumerate()
.map(|(i, entry)| (TfaType::Totp, i, entry.info.id.as_str()))
.chain(
data.webauthn
.iter()
.enumerate()
.map(|(i, entry)| (TfaType::Webauthn, i, entry.info.id.as_str())),
)
.chain(
data.u2f
.iter()
.enumerate()
.map(|(i, entry)| (TfaType::U2f, i, entry.info.id.as_str())),
)
.chain(
data.recovery
.iter()
.map(|_| (TfaType::Recovery, 0, "recovery")),
)
}
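The iterator above fixes a stable ordering (totp, webauthn, u2f, recovery) that the get and delete handlers below rely on to resolve an entry id into a (type, index) pair. A self-contained sketch of the same idea, using plain string vectors instead of the real TfaUserData (all names here are illustrative):

fn id_iter<'a>(
    totp: &'a [String],
    u2f: &'a [String],
) -> impl Iterator<Item = (&'static str, usize, &'a str)> {
    totp.iter()
        .enumerate()
        .map(|(i, id)| ("totp", i, id.as_str()))
        .chain(u2f.iter().enumerate().map(|(i, id)| ("u2f", i, id.as_str())))
}

fn main() {
    let totp = vec!["totp-1".to_string()];
    let u2f = vec!["u2f-1".to_string(), "u2f-2".to_string()];
    // Resolve an entry id to its (type, index) pair, as the handlers do:
    let hit = id_iter(&totp, &u2f).find(|(_, _, id)| *id == "u2f-2");
    assert_eq!(hit, Some(("u2f", 1, "u2f-2")));
}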
#[api(
protected: true,
input: {
properties: { userid: { type: Userid } },
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
&Permission::UserParam("userid"),
]),
},
)]
/// List a user's TFA entries.
fn list_user_tfa(userid: Userid) -> Result<Vec<TypedTfaInfo>, Error> {
let _lock = crate::config::tfa::read_lock()?;
Ok(match crate::config::tfa::read()?.users.remove(&userid) {
Some(data) => to_data(data),
None => Vec::new(),
})
}
#[api(
protected: true,
input: {
properties: {
userid: { type: Userid },
id: { description: "the tfa entry id" }
},
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
&Permission::UserParam("userid"),
]),
},
)]
/// Get a single TFA entry.
fn get_tfa_entry(userid: Userid, id: String) -> Result<TypedTfaInfo, Error> {
let _lock = crate::config::tfa::read_lock()?;
if let Some(user_data) = crate::config::tfa::read()?.users.remove(&userid) {
match {
// scope to prevent the temporary iter from borrowing across the whole match
let entry = tfa_id_iter(&user_data).find(|(_ty, _index, entry_id)| id == *entry_id);
entry.map(|(ty, index, _)| (ty, index))
} {
Some((TfaType::Recovery, _)) => {
if let Some(recovery) = user_data.recovery() {
return Ok(TypedTfaInfo {
ty: TfaType::Recovery,
info: TfaInfo::recovery(recovery.created),
});
}
}
Some((TfaType::Totp, index)) => {
return Ok(TypedTfaInfo {
ty: TfaType::Totp,
// `into_iter().nth()` to *move* out of it
info: user_data.totp.into_iter().nth(index).unwrap().info,
});
}
Some((TfaType::Webauthn, index)) => {
return Ok(TypedTfaInfo {
ty: TfaType::Webauthn,
info: user_data.webauthn.into_iter().nth(index).unwrap().info,
});
}
Some((TfaType::U2f, index)) => {
return Ok(TypedTfaInfo {
ty: TfaType::U2f,
info: user_data.u2f.into_iter().nth(index).unwrap().info,
});
}
None => (),
}
}
http_bail!(NOT_FOUND, "no such tfa entry: {}/{}", userid, id);
}
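The braced block inside the match scrutinee is not decorative: temporaries created in a match scrutinee live until the end of the whole match, so matching directly on the iterator expression would keep user_data borrowed and block the into_iter() moves in the arms. A minimal sketch of the idiom with stand-in types (not the real TFA structures):

fn take_nth(data: Vec<String>, wanted: &str) -> Option<String> {
    match {
        // The block scopes the borrowing temporaries: without it, the
        // iterator would keep `data` borrowed for the entire match.
        let found = data.iter().enumerate().find(|(_, s)| s.as_str() == wanted);
        found.map(|(index, _)| index)
    } {
        // `data` is no longer borrowed here, so we may move out of it.
        Some(index) => data.into_iter().nth(index),
        None => None,
    }
}

fn main() {
    let v = vec!["a".to_string(), "b".to_string()];
    assert_eq!(take_nth(v, "b"), Some("b".to_string()));
}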
#[api(
protected: true,
input: {
properties: {
userid: { type: Userid },
id: {
description: "the tfa entry id",
},
password: {
schema: PASSWORD_SCHEMA,
optional: true,
},
},
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
&Permission::UserParam("userid"),
]),
},
)]
/// Delete a single TFA entry.
fn delete_tfa(
userid: Userid,
id: String,
password: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
tfa_update_auth(rpcenv, &userid, password, false)?;
let _lock = crate::config::tfa::write_lock()?;
let mut data = crate::config::tfa::read()?;
let user_data = data
.users
.get_mut(&userid)
.ok_or_else(|| http_err!(NOT_FOUND, "no such entry: {}/{}", userid, id))?;
match {
// scope to prevent the temporary iter from borrowing across the whole match
let entry = tfa_id_iter(&user_data).find(|(_, _, entry_id)| id == *entry_id);
entry.map(|(ty, index, _)| (ty, index))
} {
Some((TfaType::Recovery, _)) => user_data.recovery = None,
Some((TfaType::Totp, index)) => drop(user_data.totp.remove(index)),
Some((TfaType::Webauthn, index)) => drop(user_data.webauthn.remove(index)),
Some((TfaType::U2f, index)) => drop(user_data.u2f.remove(index)),
None => http_bail!(NOT_FOUND, "no such tfa entry: {}/{}", userid, id),
}
if user_data.is_empty() {
data.users.remove(&userid);
}
crate::config::tfa::write(&data)?;
Ok(())
}
#[api(
properties: {
"userid": { type: Userid },
"entries": {
type: Array,
items: { type: TypedTfaInfo },
},
},
)]
#[derive(Deserialize, Serialize)]
#[serde(deny_unknown_fields)]
/// Over the API we only provide the descriptions of TFA data, never the secrets themselves.
struct TfaUser {
/// The user this entry belongs to.
userid: Userid,
/// TFA entries.
entries: Vec<TypedTfaInfo>,
}
#[api(
protected: true,
input: {
properties: {},
},
access: {
permission: &Permission::Anybody,
description: "Returns all or just the logged-in user, depending on privileges.",
},
returns: {
description: "The list tuples of user and TFA entries.",
type: Array,
items: { type: TfaUser }
},
)]
/// List user TFA configuration.
fn list_tfa(rpcenv: &mut dyn RpcEnvironment) -> Result<Vec<TfaUser>, Error> {
let authid: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let top_level_privs = user_info.lookup_privs(&authid, &["access", "users"]);
let top_level_allowed = (top_level_privs & PRIV_SYS_AUDIT) != 0;
let _lock = crate::config::tfa::read_lock()?;
let tfa_data = crate::config::tfa::read()?.users;
let mut out = Vec::<TfaUser>::new();
if top_level_allowed {
for (user, data) in tfa_data {
out.push(TfaUser {
userid: user,
entries: to_data(data),
});
}
} else if let Some(data) = { tfa_data }.remove(authid.user()) {
out.push(TfaUser {
userid: authid.into(),
entries: to_data(data),
});
}
Ok(out)
}
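The `{ tfa_data }.remove(...)` in the else-branch is a small ownership trick: wrapping the binding in braces moves it into a temporary value expression, and temporaries may be mutably borrowed, so the `&mut self` method `remove` works even though `tfa_data` was never declared `mut`. A stand-alone illustration (assumes nothing beyond std):

use std::collections::HashMap;

fn main() {
    // `map` is not declared `mut`, yet we can call the `&mut self` method
    // `remove` by first moving it into a temporary: `{ map }` is a value
    // expression, and temporaries can be mutably borrowed.
    let map: HashMap<String, u32> = [("a".to_string(), 1)].into_iter().collect();
    let removed = { map }.remove("a");
    assert_eq!(removed, Some(1));
}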
#[api(
properties: {
recovery: {
description: "A list of recovery codes as integers.",
type: Array,
items: {
type: Integer,
description: "A one-time usable recovery code entry.",
},
},
},
)]
/// The result returned when adding TFA entries to a user.
#[derive(Default, Serialize)]
struct TfaUpdateInfo {
/// The id of a newly added TFA entry.
id: Option<String>,
/// When adding u2f or webauthn entries, this contains a challenge the user must respond to
/// in order to finish the registration.
#[serde(skip_serializing_if = "Option::is_none")]
challenge: Option<String>,
/// When adding recovery codes, this contains the list of codes to be displayed to the user
/// this one time.
#[serde(skip_serializing_if = "Vec::is_empty", default)]
recovery: Vec<String>,
}
impl TfaUpdateInfo {
fn id(id: String) -> Self {
Self {
id: Some(id),
..Default::default()
}
}
}
#[api(
protected: true,
input: {
properties: {
userid: { type: Userid },
description: {
description: "A description to distinguish multiple entries from one another",
type: String,
max_length: 255,
optional: true,
},
"type": { type: TfaType },
totp: {
description: "A totp URI.",
optional: true,
},
value: {
description:
"The current value for the provided totp URI, or a Webauthn/U2F challenge response",
optional: true,
},
challenge: {
description: "When responding to a u2f challenge: the original challenge string",
optional: true,
},
password: {
schema: PASSWORD_SCHEMA,
optional: true,
},
},
},
returns: { type: TfaUpdateInfo },
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
&Permission::UserParam("userid"),
]),
},
)]
/// Add a TFA entry to the user.
#[allow(clippy::too_many_arguments)]
fn add_tfa_entry(
userid: Userid,
description: Option<String>,
totp: Option<String>,
value: Option<String>,
challenge: Option<String>,
password: Option<String>,
r#type: TfaType,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<TfaUpdateInfo, Error> {
tfa_update_auth(rpcenv, &userid, password, true)?;
let need_description =
move || description.ok_or_else(|| format_err!("'description' is required for new entries"));
match r#type {
TfaType::Totp => match (totp, value) {
(Some(totp), Some(value)) => {
if challenge.is_some() {
bail!("'challenge' parameter is invalid for 'totp' entries");
}
let description = need_description()?;
let totp: Totp = totp.parse()?;
if totp
.verify(&value, std::time::SystemTime::now(), -1..=1)?
.is_none()
{
bail!("failed to verify TOTP challenge");
}
crate::config::tfa::add_totp(&userid, description, totp).map(TfaUpdateInfo::id)
}
_ => bail!("'totp' type requires both 'totp' and 'value' parameters"),
},
TfaType::Webauthn => {
if totp.is_some() {
bail!("'totp' parameter is invalid for 'totp' entries");
}
match challenge {
None => crate::config::tfa::add_webauthn_registration(&userid, need_description()?)
.map(|c| TfaUpdateInfo {
challenge: Some(c),
..Default::default()
}),
Some(challenge) => {
let value = value.ok_or_else(|| {
format_err!(
"missing 'value' parameter (webauthn challenge response missing)"
)
})?;
crate::config::tfa::finish_webauthn_registration(&userid, &challenge, &value)
.map(TfaUpdateInfo::id)
}
}
}
TfaType::U2f => {
if totp.is_some() {
bail!("'totp' parameter is invalid for 'totp' entries");
}
match challenge {
None => crate::config::tfa::add_u2f_registration(&userid, need_description()?).map(
|c| TfaUpdateInfo {
challenge: Some(c),
..Default::default()
},
),
Some(challenge) => {
let value = value.ok_or_else(|| {
format_err!("missing 'value' parameter (u2f challenge response missing)")
})?;
crate::config::tfa::finish_u2f_registration(&userid, &challenge, &value)
.map(TfaUpdateInfo::id)
}
}
}
TfaType::Recovery => {
if totp.or(value).or(challenge).is_some() {
bail!("generating recovery tokens does not allow additional parameters");
}
let recovery = crate::config::tfa::add_recovery(&userid)?;
Ok(TfaUpdateInfo {
id: Some("recovery".to_string()),
recovery,
..Default::default()
})
}
}
}
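For reference, the TOTP branch above condenses to the following sketch, assuming the same proxmox::tools::tfa::totp::Totp API this handler uses (a FromStr impl for otpauth:// URIs and the verify(value, time, step_range) method visible above); the step range -1..=1 accepts the previous, current, and next time step to absorb clock drift:

use anyhow::{bail, Error};
use proxmox::tools::tfa::totp::Totp;

fn check_totp(uri: &str, user_supplied_code: &str) -> Result<(), Error> {
    let totp: Totp = uri.parse()?;
    // Accept codes from the adjacent time steps as well as the current one.
    if totp
        .verify(user_supplied_code, std::time::SystemTime::now(), -1..=1)?
        .is_none()
    {
        bail!("failed to verify TOTP challenge");
    }
    Ok(())
}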
#[api(
protected: true,
input: {
properties: {
userid: { type: Userid },
id: {
description: "the tfa entry id",
},
description: {
description: "A description to distinguish multiple entries from one another",
type: String,
max_length: 255,
optional: true,
},
enable: {
description: "Whether this entry should currently be enabled or disabled",
optional: true,
},
password: {
schema: PASSWORD_SCHEMA,
optional: true,
},
},
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
&Permission::UserParam("userid"),
]),
},
)]
/// Update a user's TFA entry (description and/or enable flag).
fn update_tfa_entry(
userid: Userid,
id: String,
description: Option<String>,
enable: Option<bool>,
password: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
tfa_update_auth(rpcenv, &userid, password, true)?;
let _lock = crate::config::tfa::write_lock()?;
let mut data = crate::config::tfa::read()?;
let mut entry = data
.users
.get_mut(&userid)
.and_then(|user| user.find_entry_mut(&id))
.ok_or_else(|| http_err!(NOT_FOUND, "no such entry: {}/{}", userid, id))?;
if let Some(description) = description {
entry.description = description;
}
if let Some(enable) = enable {
entry.enable = enable;
}
crate::config::tfa::write(&data)?;
Ok(())
}
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_TFA)
.match_all("userid", &USER_ROUTER);
const USER_ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_USER_TFA)
.post(&API_METHOD_ADD_TFA_ENTRY)
.match_all("id", &ITEM_ROUTER);
const ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_GET_TFA_ENTRY)
.put(&API_METHOD_UPDATE_TFA_ENTRY)
.delete(&API_METHOD_DELETE_TFA);

View File

@ -1,4 +1,6 @@
//! User Management
use anyhow::{bail, Error}; use anyhow::{bail, format_err, Error};
use serde::{Serialize, Deserialize}; use serde::{Serialize, Deserialize};
use serde_json::{json, Value}; use serde_json::{json, Value};
use std::collections::HashMap; use std::collections::HashMap;
@ -94,7 +96,6 @@ impl UserWithTokens {
} }
} }
#[api( #[api(
input: { input: {
properties: { properties: {
@ -113,7 +114,7 @@ impl UserWithTokens {
}, },
access: { access: {
permission: &Permission::Anybody, permission: &Permission::Anybody,
description: "Returns all or just the logged-in user, depending on privileges.", description: "Returns all or just the logged-in user (/API token owner), depending on privileges.",
}, },
)] )]
/// List users /// List users
@ -125,9 +126,12 @@ pub fn list_users(
let (config, digest) = user::config()?; let (config, digest) = user::config()?;
// intentionally user only for now let auth_id: Authid = rpcenv
let userid: Userid = rpcenv.get_auth_id().unwrap().parse()?; .get_auth_id()
let auth_id = Authid::from(userid.clone()); .ok_or_else(|| format_err!("no authid available"))?
.parse()?;
let userid = auth_id.user();
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
@ -135,7 +139,7 @@ pub fn list_users(
let top_level_allowed = (top_level_privs & PRIV_SYS_AUDIT) != 0; let top_level_allowed = (top_level_privs & PRIV_SYS_AUDIT) != 0;
let filter_by_privs = |user: &user::User| { let filter_by_privs = |user: &user::User| {
top_level_allowed || user.userid == userid top_level_allowed || user.userid == *userid
}; };
@ -167,7 +171,7 @@ pub fn list_users(
}) })
.collect() .collect()
} else { } else {
iter.map(|user: user::User| UserWithTokens::new(user)) iter.map(UserWithTokens::new)
.collect() .collect()
}; };
@ -216,7 +220,11 @@ pub fn list_users(
}, },
)] )]
/// Create new user. /// Create new user.
pub fn create_user(password: Option<String>, param: Value) -> Result<(), Error> { pub fn create_user(
password: Option<String>,
param: Value,
rpcenv: &mut dyn RpcEnvironment
) -> Result<(), Error> {
let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?; let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
@ -224,17 +232,25 @@ pub fn create_user(password: Option<String>, param: Value) -> Result<(), Error>
let (mut config, _digest) = user::config()?; let (mut config, _digest) = user::config()?;
if let Some(_) = config.sections.get(user.userid.as_str()) { if config.sections.get(user.userid.as_str()).is_some() {
bail!("user '{}' already exists.", user.userid); bail!("user '{}' already exists.", user.userid);
} }
let authenticator = crate::auth::lookup_authenticator(&user.userid.realm())?;
config.set_data(user.userid.as_str(), "user", &user)?; config.set_data(user.userid.as_str(), "user", &user)?;
let realm = user.userid.realm();
// Fails if realm does not exist!
let authenticator = crate::auth::lookup_authenticator(realm)?;
user::save_config(&config)?; user::save_config(&config)?;
if let Some(password) = password { if let Some(password) = password {
let user_info = CachedUserInfo::new()?;
let current_auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
if realm == "pam" && !user_info.is_superuser(&current_auth_id) {
bail!("only superuser can edit pam credentials!");
}
authenticator.store_password(user.userid.name(), &password)?; authenticator.store_password(user.userid.name(), &password)?;
} }
@ -249,10 +265,7 @@ pub fn create_user(password: Option<String>, param: Value) -> Result<(), Error>
}, },
}, },
}, },
returns: { returns: { type: user::User },
description: "The user configuration (with config digest).",
type: user::User,
},
access: { access: {
permission: &Permission::Or(&[ permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false), &Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false),
@ -268,6 +281,21 @@ pub fn read_user(userid: Userid, mut rpcenv: &mut dyn RpcEnvironment) -> Result<
Ok(user) Ok(user)
} }
#[api()]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
#[allow(non_camel_case_types)]
pub enum DeletableProperty {
/// Delete the comment property.
comment,
/// Delete the firstname property.
firstname,
/// Delete the lastname property.
lastname,
/// Delete the email property.
email,
}
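The enum feeds the new `delete` parameter of update_user further below: clients pass a list of kebab-case property names to clear alongside any properties they set. A hypothetical client-side payload, with illustrative field values:

use serde_json::json;

fn main() {
    let body = json!({
        "firstname": "New",             // set one property...
        "delete": ["comment", "email"], // ...while clearing others in the same call
    });
    println!("{}", body);
}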
#[api( #[api(
protected: true, protected: true,
input: { input: {
@ -303,6 +331,14 @@ pub fn read_user(userid: Userid, mut rpcenv: &mut dyn RpcEnvironment) -> Result<
schema: user::EMAIL_SCHEMA, schema: user::EMAIL_SCHEMA,
optional: true, optional: true,
}, },
delete: {
description: "List of properties to delete.",
type: Array,
optional: true,
items: {
type: DeletableProperty,
}
},
digest: { digest: {
optional: true, optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA, schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
@ -317,6 +353,7 @@ pub fn read_user(userid: Userid, mut rpcenv: &mut dyn RpcEnvironment) -> Result<
}, },
)] )]
/// Update user configuration. /// Update user configuration.
#[allow(clippy::too_many_arguments)]
pub fn update_user( pub fn update_user(
userid: Userid, userid: Userid,
comment: Option<String>, comment: Option<String>,
@ -326,7 +363,9 @@ pub fn update_user(
firstname: Option<String>, firstname: Option<String>,
lastname: Option<String>, lastname: Option<String>,
email: Option<String>, email: Option<String>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>, digest: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> { ) -> Result<(), Error> {
let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?; let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
@ -340,6 +379,17 @@ pub fn update_user(
let mut data: user::User = config.lookup("user", userid.as_str())?; let mut data: user::User = config.lookup("user", userid.as_str())?;
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::comment => data.comment = None,
DeletableProperty::firstname => data.firstname = None,
DeletableProperty::lastname => data.lastname = None,
DeletableProperty::email => data.email = None,
}
}
}
if let Some(comment) = comment { if let Some(comment) = comment {
let comment = comment.trim().to_string(); let comment = comment.trim().to_string();
if comment.is_empty() { if comment.is_empty() {
@ -358,6 +408,13 @@ pub fn update_user(
} }
if let Some(password) = password { if let Some(password) = password {
let user_info = CachedUserInfo::new()?;
let current_auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let self_service = current_auth_id.user() == &userid;
let target_realm = userid.realm();
if !self_service && target_realm == "pam" && !user_info.is_superuser(&current_auth_id) {
bail!("only superuser can edit pam credentials!");
}
let authenticator = crate::auth::lookup_authenticator(userid.realm())?; let authenticator = crate::auth::lookup_authenticator(userid.realm())?;
authenticator.store_password(userid.name(), &password)?; authenticator.store_password(userid.name(), &password)?;
} }
@ -403,6 +460,7 @@ pub fn update_user(
/// Remove a user from the configuration file. /// Remove a user from the configuration file.
pub fn delete_user(userid: Userid, digest: Option<String>) -> Result<(), Error> { pub fn delete_user(userid: Userid, digest: Option<String>) -> Result<(), Error> {
let _tfa_lock = crate::config::tfa::write_lock()?;
let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?; let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = user::config()?; let (mut config, expected_digest) = user::config()?;
@ -419,6 +477,19 @@ pub fn delete_user(userid: Userid, digest: Option<String>) -> Result<(), Error>
user::save_config(&config)?; user::save_config(&config)?;
match crate::config::tfa::read().and_then(|mut cfg| {
let _: bool = cfg.remove_user(&userid);
crate::config::tfa::write(&cfg)
}) {
Ok(()) => (),
Err(err) => {
eprintln!(
"error updating TFA config after deleting user {:?}: {}",
userid, err
);
}
}
Ok(()) Ok(())
} }
@ -433,10 +504,7 @@ pub fn delete_user(userid: Userid, digest: Option<String>) -> Result<(), Error>
}, },
}, },
}, },
returns: { returns: { type: user::ApiToken },
description: "Get API token metadata (with config digest).",
type: user::ApiToken,
},
access: { access: {
permission: &Permission::Or(&[ permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false), &Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false),
@ -530,7 +598,7 @@ pub fn generate_token(
let tokenid = Authid::from((userid.clone(), Some(tokenname.clone()))); let tokenid = Authid::from((userid.clone(), Some(tokenname.clone())));
let tokenid_string = tokenid.to_string(); let tokenid_string = tokenid.to_string();
if let Some(_) = config.sections.get(&tokenid_string) { if config.sections.get(&tokenid_string).is_some() {
bail!("token '{}' for user '{}' already exists.", tokenname.as_str(), userid); bail!("token '{}' for user '{}' already exists.", tokenname.as_str(), userid);
} }
@ -538,7 +606,7 @@ pub fn generate_token(
token_shadow::set_secret(&tokenid, &secret)?; token_shadow::set_secret(&tokenid, &secret)?;
let token = user::ApiToken { let token = user::ApiToken {
tokenid: tokenid.clone(), tokenid,
comment, comment,
enable, enable,
expire, expire,

View File

@ -1,3 +1,5 @@
//! Backup Server Administration
use proxmox::api::router::{Router, SubdirMap}; use proxmox::api::router::{Router, SubdirMap};
use proxmox::list_subdirs_api_method; use proxmox::list_subdirs_api_method;

View File

@ -1,7 +1,8 @@
//! Datastore Management
use std::collections::{HashSet, HashMap}; use std::collections::HashSet;
use std::ffi::OsStr; use std::ffi::OsStr;
use std::os::unix::ffi::OsStrExt; use std::os::unix::ffi::OsStrExt;
use std::sync::{Arc, Mutex};
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use std::pin::Pin; use std::pin::Pin;
@ -10,12 +11,13 @@ use futures::*;
use hyper::http::request::Parts; use hyper::http::request::Parts;
use hyper::{header, Body, Response, StatusCode}; use hyper::{header, Body, Response, StatusCode};
use serde_json::{json, Value}; use serde_json::{json, Value};
use tokio_stream::wrappers::ReceiverStream;
use proxmox::api::{ use proxmox::api::{
api, ApiResponseFuture, ApiHandler, ApiMethod, Router, api, ApiResponseFuture, ApiHandler, ApiMethod, Router,
RpcEnvironment, RpcEnvironmentType, Permission RpcEnvironment, RpcEnvironmentType, Permission
}; };
use proxmox::api::router::SubdirMap; use proxmox::api::router::{ReturnType, SubdirMap};
use proxmox::api::schema::*; use proxmox::api::schema::*;
use proxmox::tools::fs::{replace_file, CreateOptions}; use proxmox::tools::fs::{replace_file, CreateOptions};
use proxmox::{http_err, identity, list_subdirs_api_method, sortable}; use proxmox::{http_err, identity, list_subdirs_api_method, sortable};
@ -125,19 +127,6 @@ fn get_all_snapshot_files(
Ok((manifest, files)) Ok((manifest, files))
} }
fn group_backups(backup_list: Vec<BackupInfo>) -> HashMap<String, Vec<BackupInfo>> {
let mut group_hash = HashMap::new();
for info in backup_list {
let group_id = info.backup_dir.group().group_path().to_str().unwrap().to_owned();
let time_list = group_hash.entry(group_id).or_insert(vec![]);
time_list.push(info);
}
group_hash
}
#[api( #[api(
input: { input: {
properties: { properties: {
@ -161,7 +150,7 @@ fn group_backups(backup_list: Vec<BackupInfo>) -> HashMap<String, Vec<BackupInfo
}, },
)] )]
/// List backup groups. /// List backup groups.
fn list_groups( pub fn list_groups(
store: String, store: String,
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<GroupListItem>, Error> { ) -> Result<Vec<GroupListItem>, Error> {
@ -171,45 +160,64 @@ fn list_groups(
let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]); let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
let backup_list = BackupInfo::list_backups(&datastore.base_path())?;
let group_hash = group_backups(backup_list);
let mut groups = Vec::new();
for (_group_id, mut list) in group_hash {
BackupInfo::sort_list(&mut list, false);
let info = &list[0];
let group = info.backup_dir.group();
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0; let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let owner = match datastore.get_owner(group) {
let backup_groups = BackupInfo::list_backup_groups(&datastore.base_path())?;
let group_info = backup_groups
.into_iter()
.fold(Vec::new(), |mut group_info, group| {
let owner = match datastore.get_owner(&group) {
Ok(auth_id) => auth_id, Ok(auth_id) => auth_id,
Err(err) => { Err(err) => {
println!("Failed to get owner of group '{}' - {}", group, err); eprintln!("Failed to get owner of group '{}/{}' - {}",
continue; &store,
group,
err);
return group_info;
}, },
}; };
if !list_all && check_backup_owner(&owner, &auth_id).is_err() { if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
continue; return group_info;
} }
let result_item = GroupListItem { let snapshots = match group.list_backups(&datastore.base_path()) {
Ok(snapshots) => snapshots,
Err(_) => {
return group_info;
},
};
let backup_count: u64 = snapshots.len() as u64;
if backup_count == 0 {
return group_info;
}
let last_backup = snapshots
.iter()
.fold(&snapshots[0], |last, curr| {
if curr.is_finished()
&& curr.backup_dir.backup_time() > last.backup_dir.backup_time() {
curr
} else {
last
}
})
.to_owned();
group_info.push(GroupListItem {
backup_type: group.backup_type().to_string(), backup_type: group.backup_type().to_string(),
backup_id: group.backup_id().to_string(), backup_id: group.backup_id().to_string(),
last_backup: info.backup_dir.backup_time(), last_backup: last_backup.backup_dir.backup_time(),
backup_count: list.len() as u64,
files: info.files.clone(),
owner: Some(owner), owner: Some(owner),
}; backup_count,
groups.push(result_item); files: last_backup.files,
} });
Ok(groups) group_info
});
Ok(group_info)
} }
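The rewritten list_groups folds over the groups, skipping any group whose owner cannot be read or which has no snapshots, instead of aborting the whole listing. A self-contained sketch of that pattern with stand-in types (the real code uses BackupGroup and GroupListItem):

fn summarize(groups: Vec<(&str, Result<Vec<i64>, ()>)>) -> Vec<(String, i64, u64)> {
    groups.into_iter().fold(Vec::new(), |mut out, (id, snapshots)| {
        let snapshots = match snapshots {
            Ok(list) => list,
            Err(_) => return out, // unreadable group: skip, keep listing
        };
        if snapshots.is_empty() {
            return out; // empty group: skip
        }
        let last = snapshots.iter().copied().max().unwrap();
        out.push((id.to_string(), last, snapshots.len() as u64));
        out
    })
}

fn main() {
    let groups = vec![
        ("vm/100", Ok(vec![10, 30, 20])),
        ("vm/101", Err(())),
        ("ct/200", Ok(vec![])),
    ];
    assert_eq!(summarize(groups), vec![("vm/100".to_string(), 30, 3)]);
}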
#[api( #[api(
@ -292,7 +300,7 @@ pub fn list_snapshot_files(
}, },
)] )]
/// Delete backup snapshot. /// Delete backup snapshot.
fn delete_snapshot( pub fn delete_snapshot(
store: String, store: String,
backup_type: String, backup_type: String,
backup_id: String, backup_id: String,
@ -357,49 +365,56 @@ pub fn list_snapshots (
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]); let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
let base_path = datastore.base_path(); let base_path = datastore.base_path();
let backup_list = BackupInfo::list_backups(&base_path)?; let groups = match (backup_type, backup_id) {
(Some(backup_type), Some(backup_id)) => {
let mut snapshots = vec![]; let mut groups = Vec::with_capacity(1);
groups.push(BackupGroup::new(backup_type, backup_id));
for info in backup_list { groups
let group = info.backup_dir.group();
if let Some(ref backup_type) = backup_type {
if backup_type != group.backup_type() { continue; }
}
if let Some(ref backup_id) = backup_id {
if backup_id != group.backup_id() { continue; }
}
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let owner = match datastore.get_owner(group) {
Ok(auth_id) => auth_id,
Err(err) => {
println!("Failed to get owner of group '{}' - {}", group, err);
continue;
}, },
(Some(backup_type), None) => {
BackupInfo::list_backup_groups(&base_path)?
.into_iter()
.filter(|group| group.backup_type() == backup_type)
.collect()
},
(None, Some(backup_id)) => {
BackupInfo::list_backup_groups(&base_path)?
.into_iter()
.filter(|group| group.backup_id() == backup_id)
.collect()
},
_ => BackupInfo::list_backup_groups(&base_path)?,
}; };
if !list_all && check_backup_owner(&owner, &auth_id).is_err() { let info_to_snapshot_list_item = |group: &BackupGroup, owner, info: BackupInfo| {
continue; let backup_type = group.backup_type().to_string();
} let backup_id = group.backup_id().to_string();
let backup_time = info.backup_dir.backup_time();
let mut size = None; match get_all_snapshot_files(&datastore, &info) {
let (comment, verification, files) = match get_all_snapshot_files(&datastore, &info) {
Ok((manifest, files)) => { Ok((manifest, files)) => {
size = Some(files.iter().map(|x| x.size.unwrap_or(0)).sum());
// extract the first line from notes // extract the first line from notes
let comment: Option<String> = manifest.unprotected["notes"] let comment: Option<String> = manifest.unprotected["notes"]
.as_str() .as_str()
.and_then(|notes| notes.lines().next()) .and_then(|notes| notes.lines().next())
.map(String::from); .map(String::from);
let verify = manifest.unprotected["verify_state"].clone(); let fingerprint = match manifest.fingerprint() {
let verify: Option<SnapshotVerifyState> = match serde_json::from_value(verify) { Ok(fp) => fp,
Err(err) => {
eprintln!("error parsing fingerprint: '{}'", err);
None
},
};
let verification = manifest.unprotected["verify_state"].clone();
let verification: Option<SnapshotVerifyState> = match serde_json::from_value(verification) {
Ok(verify) => verify, Ok(verify) => verify,
Err(err) => { Err(err) => {
eprintln!("error parsing verification state : '{}'", err); eprintln!("error parsing verification state : '{}'", err);
@ -407,88 +422,114 @@ pub fn list_snapshots (
} }
}; };
(comment, verify, files) let size = Some(files.iter().map(|x| x.size.unwrap_or(0)).sum());
SnapshotListItem {
backup_type,
backup_id,
backup_time,
comment,
verification,
fingerprint,
files,
size,
owner,
}
}, },
Err(err) => { Err(err) => {
eprintln!("error during snapshot file listing: '{}'", err); eprintln!("error during snapshot file listing: '{}'", err);
( let files = info
None,
None,
info
.files .files
.iter() .into_iter()
.map(|x| BackupContent { .map(|filename| BackupContent {
filename: x.to_string(), filename,
size: None, size: None,
crypt_mode: None, crypt_mode: None,
}) })
.collect() .collect();
)
SnapshotListItem {
backup_type,
backup_id,
backup_time,
comment: None,
verification: None,
fingerprint: None,
files,
size: None,
owner,
}
},
}
};
groups
.iter()
.try_fold(Vec::new(), |mut snapshots, group| {
let owner = match datastore.get_owner(group) {
Ok(auth_id) => auth_id,
Err(err) => {
eprintln!("Failed to get owner of group '{}/{}' - {}",
&store,
group,
err);
return Ok(snapshots);
}, },
}; };
let result_item = SnapshotListItem { if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
backup_type: group.backup_type().to_string(), return Ok(snapshots);
backup_id: group.backup_id().to_string(),
backup_time: info.backup_dir.backup_time(),
comment,
verification,
files,
size,
owner: Some(owner),
};
snapshots.push(result_item);
} }
let group_backups = group.list_backups(&datastore.base_path())?;
snapshots.extend(
group_backups
.into_iter()
.map(|info| info_to_snapshot_list_item(&group, Some(owner.clone()), info))
);
Ok(snapshots) Ok(snapshots)
})
} }
fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> { fn get_snapshots_count(store: &DataStore, filter_owner: Option<&Authid>) -> Result<Counts, Error> {
let base_path = store.base_path(); let base_path = store.base_path();
let backup_list = BackupInfo::list_backups(&base_path)?; let groups = BackupInfo::list_backup_groups(&base_path)?;
let mut groups = HashSet::new();
let mut result = Counts { groups.iter()
ct: None, .filter(|group| {
host: None, let owner = match store.get_owner(&group) {
vm: None, Ok(owner) => owner,
other: None, Err(err) => {
eprintln!("Failed to get owner of group '{}/{}' - {}",
store.name(),
group,
err);
return false;
},
}; };
for info in backup_list { match filter_owner {
let group = info.backup_dir.group(); Some(filter) => check_backup_owner(&owner, filter).is_ok(),
None => true,
let id = group.backup_id();
let backup_type = group.backup_type();
let mut new_id = false;
if groups.insert(format!("{}-{}", &backup_type, &id)) {
new_id = true;
} }
})
.try_fold(Counts::default(), |mut counts, group| {
let snapshot_count = group.list_backups(&base_path)?.len() as u64;
let mut counts = match backup_type { let type_count = match group.backup_type() {
"ct" => result.ct.take().unwrap_or(Default::default()), "ct" => counts.ct.get_or_insert(Default::default()),
"host" => result.host.take().unwrap_or(Default::default()), "vm" => counts.vm.get_or_insert(Default::default()),
"vm" => result.vm.take().unwrap_or(Default::default()), "host" => counts.host.get_or_insert(Default::default()),
_ => result.other.take().unwrap_or(Default::default()), _ => counts.other.get_or_insert(Default::default()),
}; };
counts.snapshots += 1; type_count.groups += 1;
if new_id { type_count.snapshots += snapshot_count;
counts.groups +=1;
}
match backup_type { Ok(counts)
"ct" => result.ct = Some(counts), })
"host" => result.host = Some(counts),
"vm" => result.vm = Some(counts),
_ => result.other = Some(counts),
}
}
Ok(result)
} }
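The counting above leans on Option::get_or_insert to lazily create a per-type counter and bump it in place. A simplified, runnable stand-in (the real Counts/TypeCounts come from the API types module):

#[derive(Default, Debug, PartialEq)]
struct TypeCounts { groups: u64, snapshots: u64 }

#[derive(Default, Debug)]
struct Counts { vm: Option<TypeCounts>, ct: Option<TypeCounts> }

fn main() {
    let mut counts = Counts::default();
    for (ty, snapshot_count) in [("vm", 3u64), ("vm", 2), ("ct", 1)] {
        // Materialize the counter for this backup type on first use.
        let slot = match ty {
            "vm" => counts.vm.get_or_insert(Default::default()),
            _ => counts.ct.get_or_insert(Default::default()),
        };
        slot.groups += 1;
        slot.snapshots += snapshot_count;
    }
    assert_eq!(counts.vm, Some(TypeCounts { groups: 2, snapshots: 5 }));
}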
#[api( #[api(
@ -497,8 +538,15 @@ fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> {
store: { store: {
schema: DATASTORE_SCHEMA, schema: DATASTORE_SCHEMA,
}, },
verbose: {
type: bool,
default: false,
optional: true,
description: "Include additional information like snapshot counts and GC status.",
}, },
}, },
},
returns: { returns: {
type: DataStoreStatus, type: DataStoreStatus,
}, },
@ -509,13 +557,30 @@ fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> {
/// Get datastore status. /// Get datastore status.
pub fn status( pub fn status(
store: String, store: String,
verbose: bool,
_info: &ApiMethod, _info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
) -> Result<DataStoreStatus, Error> { ) -> Result<DataStoreStatus, Error> {
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
let storage = crate::tools::disks::disk_usage(&datastore.base_path())?; let storage = crate::tools::disks::disk_usage(&datastore.base_path())?;
let counts = get_snapshots_count(&datastore)?; let (counts, gc_status) = if verbose {
let gc_status = datastore.last_gc_status(); let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let store_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let filter_owner = if store_privs & PRIV_DATASTORE_AUDIT != 0 {
None
} else {
Some(&auth_id)
};
let counts = Some(get_snapshots_count(&datastore, filter_owner)?);
let gc_status = Some(datastore.last_gc_status());
(counts, gc_status)
} else {
(None, None)
};
Ok(DataStoreStatus { Ok(DataStoreStatus {
total: storage.total, total: storage.total,
@ -598,25 +663,20 @@ pub fn verify(
_ => bail!("parameters do not specify a backup group or snapshot"), _ => bail!("parameters do not specify a backup group or snapshot"),
} }
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false }; let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;
let upid_str = WorkerTask::new_thread( let upid_str = WorkerTask::new_thread(
worker_type, worker_type,
Some(worker_id.clone()), Some(worker_id),
auth_id.clone(), auth_id.clone(),
to_stdout, to_stdout,
move |worker| { move |worker| {
let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16))); let verify_worker = crate::backup::VerifyWorker::new(worker.clone(), datastore);
let corrupt_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64)));
let failed_dirs = if let Some(backup_dir) = backup_dir { let failed_dirs = if let Some(backup_dir) = backup_dir {
let mut res = Vec::new(); let mut res = Vec::new();
if !verify_backup_dir( if !verify_backup_dir(
datastore, &verify_worker,
&backup_dir, &backup_dir,
verified_chunks,
corrupt_chunks,
worker.clone(),
worker.upid().clone(), worker.upid().clone(),
None, None,
)? { )? {
@ -624,13 +684,10 @@ pub fn verify(
} }
res res
} else if let Some(backup_group) = backup_group { } else if let Some(backup_group) = backup_group {
let (_count, failed_dirs) = verify_backup_group( let failed_dirs = verify_backup_group(
datastore, &verify_worker,
&backup_group, &backup_group,
verified_chunks, &mut StoreProgress::new(1),
corrupt_chunks,
None,
worker.clone(),
worker.upid(), worker.upid(),
None, None,
)?; )?;
@ -645,10 +702,10 @@ pub fn verify(
None None
}; };
verify_all_backups(datastore, worker.clone(), worker.upid(), owner, None)? verify_all_backups(&verify_worker, worker.upid(), owner, None)?
}; };
if failed_dirs.len() > 0 { if !failed_dirs.is_empty() {
worker.log("Failed to verify following snapshots/groups:"); worker.log("Failed to verify the following snapshots/groups:");
for dir in failed_dirs { for dir in failed_dirs {
worker.log(format!("\t{}", dir)); worker.log(format!("\t{}", dir));
} }
@ -709,7 +766,7 @@ pub const API_RETURN_SCHEMA_PRUNE: Schema = ArraySchema::new(
&PruneListItem::API_SCHEMA &PruneListItem::API_SCHEMA
).schema(); ).schema();
const API_METHOD_PRUNE: ApiMethod = ApiMethod::new( pub const API_METHOD_PRUNE: ApiMethod = ApiMethod::new(
&ApiHandler::Sync(&prune), &ApiHandler::Sync(&prune),
&ObjectSchema::new( &ObjectSchema::new(
"Prune the datastore.", "Prune the datastore.",
@ -724,14 +781,14 @@ const API_METHOD_PRUNE: ApiMethod = ApiMethod::new(
("store", false, &DATASTORE_SCHEMA), ("store", false, &DATASTORE_SCHEMA),
]) ])
)) ))
.returns(&API_RETURN_SCHEMA_PRUNE) .returns(ReturnType::new(false, &API_RETURN_SCHEMA_PRUNE))
.access(None, &Permission::Privilege( .access(None, &Permission::Privilege(
&["datastore", "{store}"], &["datastore", "{store}"],
PRIV_DATASTORE_MODIFY | PRIV_DATASTORE_PRUNE, PRIV_DATASTORE_MODIFY | PRIV_DATASTORE_PRUNE,
true) true)
); );
fn prune( pub fn prune(
param: Value, param: Value,
_info: &ApiMethod, _info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
@ -791,7 +848,7 @@ fn prune(
// We use a WorkerTask just to have a task log, but run synchronously // We use a WorkerTask just to have a task log, but run synchronously
let worker = WorkerTask::new("prune", Some(worker_id), auth_id.clone(), true)?; let worker = WorkerTask::new("prune", Some(worker_id), auth_id, true)?;
if keep_all { if keep_all {
worker.log("No prune selection - keeping all files."); worker.log("No prune selection - keeping all files.");
@ -859,7 +916,7 @@ fn prune(
}, },
)] )]
/// Start garbage collection. /// Start garbage collection.
fn start_garbage_collection( pub fn start_garbage_collection(
store: String, store: String,
_info: &ApiMethod, _info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
@ -871,7 +928,7 @@ fn start_garbage_collection(
let job = Job::new("garbage_collection", &store) let job = Job::new("garbage_collection", &store)
.map_err(|_| format_err!("garbage collection already running"))?; .map_err(|_| format_err!("garbage collection already running"))?;
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false }; let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;
let upid_str = crate::server::do_garbage_collection_job(job, datastore, &auth_id, None, to_stdout) let upid_str = crate::server::do_garbage_collection_job(job, datastore, &auth_id, None, to_stdout)
.map_err(|err| format_err!("unable to start garbage collection job on datastore {} - {}", store, err))?; .map_err(|err| format_err!("unable to start garbage collection job on datastore {} - {}", store, err))?;
@ -912,17 +969,14 @@ pub fn garbage_collection_status(
returns: { returns: {
description: "List the accessible datastores.", description: "List the accessible datastores.",
type: Array, type: Array,
items: { items: { type: DataStoreListItem },
description: "Datastore name and description.",
type: DataStoreListItem,
},
}, },
access: { access: {
permission: &Permission::Anybody, permission: &Permission::Anybody,
}, },
)] )]
/// Datastore list /// Datastore list
fn get_datastore_list( pub fn get_datastore_list(
_param: Value, _param: Value,
_info: &ApiMethod, _info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
@ -948,7 +1002,7 @@ fn get_datastore_list(
} }
} }
Ok(list.into()) Ok(list)
} }
#[sortable] #[sortable]
@ -970,7 +1024,7 @@ pub const API_METHOD_DOWNLOAD_FILE: ApiMethod = ApiMethod::new(
true) true)
); );
fn download_file( pub fn download_file(
_parts: Parts, _parts: Parts,
_req_body: Body, _req_body: Body,
param: Value, param: Value,
@ -1005,7 +1059,7 @@ fn download_file(
.map_err(|err| http_err!(BAD_REQUEST, "File open failed: {}", err))?; .map_err(|err| http_err!(BAD_REQUEST, "File open failed: {}", err))?;
let payload = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new()) let payload = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new())
.map_ok(|bytes| hyper::body::Bytes::from(bytes.freeze())) .map_ok(|bytes| bytes.freeze())
.map_err(move |err| { .map_err(move |err| {
eprintln!("error during streaming of '{:?}' - {}", &path, err); eprintln!("error during streaming of '{:?}' - {}", &path, err);
err err
@ -1040,7 +1094,7 @@ pub const API_METHOD_DOWNLOAD_FILE_DECODED: ApiMethod = ApiMethod::new(
true) true)
); );
fn download_file_decoded( pub fn download_file_decoded(
_parts: Parts, _parts: Parts,
_req_body: Body, _req_body: Body,
param: Value, param: Value,
@ -1154,7 +1208,7 @@ pub const API_METHOD_UPLOAD_BACKUP_LOG: ApiMethod = ApiMethod::new(
&Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_BACKUP, false) &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_BACKUP, false)
); );
fn upload_backup_log( pub fn upload_backup_log(
_parts: Parts, _parts: Parts,
req_body: Body, req_body: Body,
param: Value, param: Value,
@ -1233,14 +1287,12 @@ fn upload_backup_log(
}, },
)] )]
/// Get the entries of the given path of the catalog /// Get the entries of the given path of the catalog
fn catalog( pub fn catalog(
store: String, store: String,
backup_type: String, backup_type: String,
backup_id: String, backup_id: String,
backup_time: i64, backup_time: i64,
filepath: String, filepath: String,
_param: Value,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
@ -1280,10 +1332,10 @@ fn catalog(
if filepath != "root" { if filepath != "root" {
components = base64::decode(filepath)?; components = base64::decode(filepath)?;
if components.len() > 0 && components[0] == '/' as u8 { if !components.is_empty() && components[0] == b'/' {
components.remove(0); components.remove(0);
} }
for component in components.split(|c| *c == '/' as u8) { for component in components.split(|c| *c == b'/') {
if let Some(entry) = catalog_reader.lookup(&current, component)? { if let Some(entry) = catalog_reader.lookup(&current, component)? {
current = entry; current = entry;
} else { } else {
@ -1296,7 +1348,7 @@ fn catalog(
for direntry in catalog_reader.read_dir(&current)? { for direntry in catalog_reader.read_dir(&current)? {
let mut components = components.clone(); let mut components = components.clone();
components.push('/' as u8); components.push(b'/');
components.extend(&direntry.name); components.extend(&direntry.name);
let path = base64::encode(components); let path = base64::encode(components);
let text = String::from_utf8_lossy(&direntry.name); let text = String::from_utf8_lossy(&direntry.name);
@ -1401,7 +1453,7 @@ pub const API_METHOD_PXAR_FILE_DOWNLOAD: ApiMethod = ApiMethod::new(
true) true)
); );
fn pxar_file_download( pub fn pxar_file_download(
_parts: Parts, _parts: Parts,
_req_body: Body, _req_body: Body,
param: Value, param: Value,
@ -1426,13 +1478,13 @@ fn pxar_file_download(
check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_READ)?; check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_READ)?;
let mut components = base64::decode(&filepath)?; let mut components = base64::decode(&filepath)?;
if components.len() > 0 && components[0] == '/' as u8 { if !components.is_empty() && components[0] == b'/' {
components.remove(0); components.remove(0);
} }
let mut split = components.splitn(2, |c| *c == '/' as u8); let mut split = components.splitn(2, |c| *c == b'/');
let pxar_name = std::str::from_utf8(split.next().unwrap())?; let pxar_name = std::str::from_utf8(split.next().unwrap())?;
let file_path = split.next().ok_or(format_err!("filepath looks strange '{}'", filepath))?; let file_path = split.next().ok_or_else(|| format_err!("filepath looks strange '{}'", filepath))?;
let (manifest, files) = read_backup_index(&datastore, &backup_dir)?; let (manifest, files) = read_backup_index(&datastore, &backup_dir)?;
for file in files { for file in files {
if file.filename == pxar_name && file.crypt_mode == Some(CryptMode::Encrypt) { if file.filename == pxar_name && file.crypt_mode == Some(CryptMode::Encrypt) {
@ -1459,7 +1511,7 @@ fn pxar_file_download(
let root = decoder.open_root().await?; let root = decoder.open_root().await?;
let file = root let file = root
.lookup(OsStr::from_bytes(file_path)).await? .lookup(OsStr::from_bytes(file_path)).await?
.ok_or(format_err!("error opening '{:?}'", file_path))?; .ok_or_else(|| format_err!("error opening '{:?}'", file_path))?;
let body = match file.kind() { let body = match file.kind() {
EntryKind::File { .. } => Body::wrap_stream( EntryKind::File { .. } => Body::wrap_stream(
@ -1502,7 +1554,7 @@ fn pxar_file_download(
.map_err(|err| eprintln!("error during finishing of zip: {}", err)) .map_err(|err| eprintln!("error during finishing of zip: {}", err))
}); });
Body::wrap_stream(receiver.map_err(move |err| { Body::wrap_stream(ReceiverStream::new(receiver).map_err(move |err| {
eprintln!("error during streaming of zip '{:?}' - {}", filepath, err); eprintln!("error during streaming of zip '{:?}' - {}", filepath, err);
err err
})) }))
@ -1538,7 +1590,7 @@ fn pxar_file_download(
}, },
)] )]
/// Read datastore stats /// Read datastore stats
fn get_rrd_stats( pub fn get_rrd_stats(
store: String, store: String,
timeframe: RRDTimeFrameResolution, timeframe: RRDTimeFrameResolution,
cf: RRDMode, cf: RRDMode,
@ -1580,7 +1632,7 @@ fn get_rrd_stats(
}, },
)] )]
/// Get "notes" for a specific backup /// Get "notes" for a specific backup
fn get_notes( pub fn get_notes(
store: String, store: String,
backup_type: String, backup_type: String,
backup_id: String, backup_id: String,
@ -1630,7 +1682,7 @@ fn get_notes(
}, },
)] )]
/// Set "notes" for a specific backup /// Set "notes" for a specific backup
fn set_notes( pub fn set_notes(
store: String, store: String,
backup_type: String, backup_type: String,
backup_id: String, backup_id: String,
@ -1675,7 +1727,7 @@ fn set_notes(
}, },
)] )]
/// Change owner of a backup group /// Change owner of a backup group
fn set_backup_owner( pub fn set_backup_owner(
store: String, store: String,
backup_type: String, backup_type: String,
backup_id: String, backup_id: String,

View File

@ -1,3 +1,5 @@
//! Datastore Synchronization Job Management
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use serde_json::Value; use serde_json::Value;
@ -58,7 +60,7 @@ pub fn list_sync_jobs(
} }
}) })
.filter(|job: &SyncJobStatus| { .filter(|job: &SyncJobStatus| {
let as_config: SyncJobConfig = job.clone().into(); let as_config: SyncJobConfig = job.into();
check_sync_job_read_access(&user_info, &auth_id, &as_config) check_sync_job_read_access(&user_info, &auth_id, &as_config)
}).collect(); }).collect();
@ -81,13 +83,13 @@ pub fn list_sync_jobs(
job.last_run_state = state; job.last_run_state = state;
job.last_run_endtime = endtime; job.last_run_endtime = endtime;
let last = job.last_run_endtime.unwrap_or_else(|| starttime); let last = job.last_run_endtime.unwrap_or(starttime);
job.next_run = (|| -> Option<i64> { job.next_run = (|| -> Option<i64> {
let schedule = job.schedule.as_ref()?; let schedule = job.schedule.as_ref()?;
let event = parse_calendar_event(&schedule).ok()?; let event = parse_calendar_event(&schedule).ok()?;
// ignore errors // ignore errors
compute_next_event(&event, last, false).unwrap_or_else(|_| None) compute_next_event(&event, last, false).unwrap_or(None)
})(); })();
} }
@ -110,7 +112,7 @@ pub fn list_sync_jobs(
}, },
)] )]
/// Runs the sync jobs manually. /// Runs the sync jobs manually.
fn run_sync_job( pub fn run_sync_job(
id: String, id: String,
_info: &ApiMethod, _info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,

View File

@ -1,3 +1,5 @@
//! Datastore Verify Job Management
use anyhow::{format_err, Error}; use anyhow::{format_err, Error};
use proxmox::api::router::SubdirMap; use proxmox::api::router::SubdirMap;
@ -86,13 +88,13 @@ pub fn list_verification_jobs(
job.last_run_state = state; job.last_run_state = state;
job.last_run_endtime = endtime; job.last_run_endtime = endtime;
let last = job.last_run_endtime.unwrap_or_else(|| starttime); let last = job.last_run_endtime.unwrap_or(starttime);
job.next_run = (|| -> Option<i64> { job.next_run = (|| -> Option<i64> {
let schedule = job.schedule.as_ref()?; let schedule = job.schedule.as_ref()?;
let event = parse_calendar_event(&schedule).ok()?; let event = parse_calendar_event(&schedule).ok()?;
// ignore errors // ignore errors
compute_next_event(&event, last, false).unwrap_or_else(|_| None) compute_next_event(&event, last, false).unwrap_or(None)
})(); })();
} }
@ -115,7 +117,7 @@ pub fn list_verification_jobs(
}, },
)] )]
/// Runs a verification job manually. /// Runs a verification job manually.
fn run_verification_job( pub fn run_verification_job(
id: String, id: String,
_info: &ApiMethod, _info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,

View File

@ -1,8 +1,10 @@
//! Backup protocol (HTTP2 upgrade)
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use futures::*; use futures::*;
use hyper::header::{HeaderValue, UPGRADE}; use hyper::header::{HeaderValue, UPGRADE};
use hyper::http::request::Parts; use hyper::http::request::Parts;
use hyper::{Body, Response, StatusCode}; use hyper::{Body, Response, Request, StatusCode};
use serde_json::{json, Value}; use serde_json::{json, Value};
use proxmox::{sortable, identity, list_subdirs_api_method}; use proxmox::{sortable, identity, list_subdirs_api_method};
@ -138,7 +140,7 @@ async move {
} }
}; };
let backup_dir = BackupDir::with_group(backup_group.clone(), backup_time)?; let backup_dir = BackupDir::with_group(backup_group, backup_time)?;
let _last_guard = if let Some(last) = &last_backup { let _last_guard = if let Some(last) = &last_backup {
if backup_dir.backup_time() <= last.backup_dir.backup_time() { if backup_dir.backup_time() <= last.backup_dir.backup_time() {
@ -171,8 +173,7 @@ async move {
let env2 = env.clone(); let env2 = env.clone();
let mut req_fut = req_body let mut req_fut = hyper::upgrade::on(Request::from_parts(parts, req_body))
.on_upgrade()
.map_err(Error::from) .map_err(Error::from)
.and_then(move |conn| { .and_then(move |conn| {
env2.debug("protocol upgrade done"); env2.debug("protocol upgrade done");
@ -311,6 +312,10 @@ pub const BACKUP_API_SUBDIRS: SubdirMap = &[
"previous", &Router::new() "previous", &Router::new()
.download(&API_METHOD_DOWNLOAD_PREVIOUS) .download(&API_METHOD_DOWNLOAD_PREVIOUS)
), ),
(
"previous_backup_time", &Router::new()
.get(&API_METHOD_GET_PREVIOUS_BACKUP_TIME)
),
( (
"speedtest", &Router::new() "speedtest", &Router::new()
.upload(&API_METHOD_UPLOAD_SPEEDTEST) .upload(&API_METHOD_UPLOAD_SPEEDTEST)
@ -694,6 +699,28 @@ fn finish_backup (
Ok(Value::Null) Ok(Value::Null)
} }
#[sortable]
pub const API_METHOD_GET_PREVIOUS_BACKUP_TIME: ApiMethod = ApiMethod::new(
&ApiHandler::Sync(&get_previous_backup_time),
&ObjectSchema::new(
"Get previous backup time.",
&[],
)
);
fn get_previous_backup_time(
_param: Value,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let env: &BackupEnvironment = rpcenv.as_ref();
let backup_time = env.last_backup.as_ref().map(|info| info.backup_dir.backup_time());
Ok(json!(backup_time))
}
#[sortable] #[sortable]
pub const API_METHOD_DOWNLOAD_PREVIOUS: ApiMethod = ApiMethod::new( pub const API_METHOD_DOWNLOAD_PREVIOUS: ApiMethod = ApiMethod::new(
&ApiHandler::AsyncHttp(&download_previous), &ApiHandler::AsyncHttp(&download_previous),

View File

@ -1,6 +1,6 @@
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use std::sync::{Arc, Mutex}; use std::sync::{Arc, Mutex};
use std::collections::{HashMap, HashSet}; use std::collections::HashMap;
use nix::dir::Dir; use nix::dir::Dir;
use ::serde::{Serialize}; use ::serde::{Serialize};
@ -185,7 +185,9 @@ impl BackupEnvironment {
if size > data.chunk_size { if size > data.chunk_size {
bail!("fixed writer '{}' - got large chunk ({} > {}", data.name, size, data.chunk_size); bail!("fixed writer '{}' - got large chunk ({} > {}", data.name, size, data.chunk_size);
} else if size < data.chunk_size { }
if size < data.chunk_size {
data.small_chunk_count += 1; data.small_chunk_count += 1;
if data.small_chunk_count > 1 { if data.small_chunk_count > 1 {
bail!("fixed writer '{}' - detected multiple end chunks (chunk size too small)"); bail!("fixed writer '{}' - detected multiple end chunks (chunk size too small)");
@ -465,7 +467,7 @@ impl BackupEnvironment {
state.ensure_unfinished()?; state.ensure_unfinished()?;
// test if all writer are correctly closed // test if all writer are correctly closed
if state.dynamic_writers.len() != 0 || state.fixed_writers.len() != 0 { if !state.dynamic_writers.is_empty() || !state.fixed_writers.is_empty() {
bail!("found open index writer - unable to finish backup"); bail!("found open index writer - unable to finish backup");
} }
@ -523,15 +525,11 @@ impl BackupEnvironment {
move |worker| { move |worker| {
worker.log("Automatically verifying newly added snapshot"); worker.log("Automatically verifying newly added snapshot");
let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16)));
let corrupt_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64)));
let verify_worker = crate::backup::VerifyWorker::new(worker.clone(), datastore);
if !verify_backup_dir_with_lock( if !verify_backup_dir_with_lock(
datastore, &verify_worker,
&backup_dir, &backup_dir,
verified_chunks,
corrupt_chunks,
worker.clone(),
worker.upid().clone(), worker.upid().clone(),
None, None,
snap_lock, snap_lock,

View File

@@ -1,16 +1,28 @@
+//! Backup Server Configuration
+
 use proxmox::api::router::{Router, SubdirMap};
 use proxmox::list_subdirs_api_method;

+pub mod access;
 pub mod datastore;
 pub mod remote;
 pub mod sync;
 pub mod verify;
+pub mod drive;
+pub mod changer;
+pub mod media_pool;
+pub mod tape_encryption_keys;

 const SUBDIRS: SubdirMap = &[
+    ("access", &access::ROUTER),
+    ("changer", &changer::ROUTER),
     ("datastore", &datastore::ROUTER),
+    ("drive", &drive::ROUTER),
+    ("media-pool", &media_pool::ROUTER),
     ("remote", &remote::ROUTER),
     ("sync", &sync::ROUTER),
-    ("verify", &verify::ROUTER)
+    ("tape-encryption-keys", &tape_encryption_keys::ROUTER),
+    ("verify", &verify::ROUTER),
 ];

 pub const ROUTER: Router = Router::new()

View File

@@ -0,0 +1,10 @@
use proxmox::api::{Router, SubdirMap};
use proxmox::list_subdirs_api_method;
pub mod tfa;
const SUBDIRS: SubdirMap = &[("tfa", &tfa::ROUTER)];
pub const ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))
.subdirs(SUBDIRS);

View File

@@ -0,0 +1,84 @@
//! For now this only has the TFA subdir, which is in this file.
//! If we add more, it should be moved into a sub module.
use anyhow::Error;
use crate::api2::types::PROXMOX_CONFIG_DIGEST_SCHEMA;
use proxmox::api::{api, Permission, Router, RpcEnvironment, SubdirMap};
use proxmox::list_subdirs_api_method;
use crate::config::tfa::{self, WebauthnConfig, WebauthnConfigUpdater};
pub const ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))
.subdirs(SUBDIRS);
const SUBDIRS: SubdirMap = &[("webauthn", &WEBAUTHN_ROUTER)];
const WEBAUTHN_ROUTER: Router = Router::new()
.get(&API_METHOD_GET_WEBAUTHN_CONFIG)
.put(&API_METHOD_UPDATE_WEBAUTHN_CONFIG);
#[api(
protected: true,
input: {
properties: {},
},
returns: {
type: WebauthnConfig,
optional: true,
},
access: {
permission: &Permission::Anybody,
},
)]
/// Get the TFA configuration.
pub fn get_webauthn_config(
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Option<WebauthnConfig>, Error> {
let (config, digest) = match tfa::webauthn_config()? {
Some(c) => c,
None => return Ok(None),
};
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(Some(config))
}
#[api(
protected: true,
input: {
properties: {
webauthn: {
flatten: true,
type: WebauthnConfigUpdater,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
)]
/// Update the TFA configuration.
pub fn update_webauthn_config(
webauthn: WebauthnConfigUpdater,
digest: Option<String>,
) -> Result<(), Error> {
let _lock = tfa::write_lock();
let mut tfa = tfa::read()?;
if let Some(wa) = &mut tfa.webauthn {
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &wa.digest()?)?;
}
webauthn.apply_to(wa);
} else {
tfa.webauthn = Some(webauthn.build()?);
}
tfa::write(&tfa)?;
Ok(())
}
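
Note: update_webauthn_config() above uses the digest convention shared by all of these config endpoints: the GET handler returns a digest of the on-disk file, the client sends it back, and the update bails out if the file changed in between. A minimal, self-contained sketch of that optimistic-locking pattern (DefaultHasher stands in for the SHA-256 digest the real code computes; all names here are illustrative):

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    fn digest(config: &str) -> u64 {
        let mut h = DefaultHasher::new();
        config.hash(&mut h);
        h.finish()
    }

    fn update_config(stored: &mut String, new_value: &str, expected_digest: Option<u64>) -> Result<(), String> {
        if let Some(expected) = expected_digest {
            if digest(stored) != expected {
                // someone else modified the file since the client read it
                return Err("detected modified configuration file".into());
            }
        }
        *stored = new_value.to_string();
        Ok(())
    }

    fn main() {
        let mut cfg = String::from("rp=example.com");
        let d = digest(&cfg);
        assert!(update_config(&mut cfg, "rp=new.example.com", Some(d)).is_ok());
        // a stale digest is rejected
        assert!(update_config(&mut cfg, "rp=other", Some(d)).is_err());
    }

The digest is optional on the wire, which is why the handlers accept it as an Option and skip the check when the client does not care about concurrent edits.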

src/api2/config/changer.rs (new file, 295 lines)
View File

@@ -0,0 +1,295 @@
use anyhow::{bail, Error};
use ::serde::{Deserialize, Serialize};
use serde_json::Value;
use proxmox::api::{
api,
Router,
RpcEnvironment,
schema::parse_property_string,
};
use crate::{
config,
api2::types::{
PROXMOX_CONFIG_DIGEST_SCHEMA,
CHANGER_NAME_SCHEMA,
LINUX_DRIVE_PATH_SCHEMA,
SLOT_ARRAY_SCHEMA,
EXPORT_SLOT_LIST_SCHEMA,
ScsiTapeChanger,
LinuxTapeDrive,
},
tape::{
linux_tape_changer_list,
check_drive_path,
},
};
#[api(
protected: true,
input: {
properties: {
name: {
schema: CHANGER_NAME_SCHEMA,
},
path: {
schema: LINUX_DRIVE_PATH_SCHEMA,
},
"export-slots": {
schema: EXPORT_SLOT_LIST_SCHEMA,
optional: true,
},
},
},
)]
/// Create a new changer device
pub fn create_changer(
name: String,
path: String,
export_slots: Option<String>,
) -> Result<(), Error> {
let _lock = config::drive::lock()?;
let (mut config, _digest) = config::drive::config()?;
let linux_changers = linux_tape_changer_list();
check_drive_path(&linux_changers, &path)?;
let existing: Vec<ScsiTapeChanger> = config.convert_to_typed_array("changer")?;
for changer in existing {
if changer.name == name {
bail!("Entry '{}' already exists", name);
}
if changer.path == path {
bail!("Path '{}' already in use by '{}'", path, changer.name);
}
}
let item = ScsiTapeChanger {
name: name.clone(),
path,
export_slots,
};
config.set_data(&name, "changer", &item)?;
config::drive::save_config(&config)?;
Ok(())
}
#[api(
input: {
properties: {
name: {
schema: CHANGER_NAME_SCHEMA,
},
},
},
returns: {
type: ScsiTapeChanger,
},
)]
/// Get tape changer configuration
pub fn get_config(
name: String,
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<ScsiTapeChanger, Error> {
let (config, digest) = config::drive::config()?;
let data: ScsiTapeChanger = config.lookup("changer", &name)?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(data)
}
#[api(
input: {
properties: {},
},
returns: {
description: "The list of configured changers (with config digest).",
type: Array,
items: {
type: ScsiTapeChanger,
},
},
)]
/// List changers
pub fn list_changers(
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<ScsiTapeChanger>, Error> {
let (config, digest) = config::drive::config()?;
let list: Vec<ScsiTapeChanger> = config.convert_to_typed_array("changer")?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
#[api()]
#[derive(Serialize, Deserialize)]
#[allow(non_camel_case_types)]
#[serde(rename_all = "kebab-case")]
/// Deletable property name
pub enum DeletableProperty {
/// Delete export-slots.
export_slots,
}
#[api(
protected: true,
input: {
properties: {
name: {
schema: CHANGER_NAME_SCHEMA,
},
path: {
schema: LINUX_DRIVE_PATH_SCHEMA,
optional: true,
},
"export-slots": {
schema: EXPORT_SLOT_LIST_SCHEMA,
optional: true,
},
delete: {
description: "List of properties to delete.",
type: Array,
optional: true,
items: {
type: DeletableProperty,
},
},
digest: {
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
optional: true,
},
},
},
)]
/// Update a tape changer configuration
pub fn update_changer(
name: String,
path: Option<String>,
export_slots: Option<String>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
_param: Value,
) -> Result<(), Error> {
let _lock = config::drive::lock()?;
let (mut config, expected_digest) = config::drive::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let mut data: ScsiTapeChanger = config.lookup("changer", &name)?;
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::export_slots => {
data.export_slots = None;
}
}
}
}
if let Some(path) = path {
let changers = linux_tape_changer_list();
check_drive_path(&changers, &path)?;
data.path = path;
}
if let Some(export_slots) = export_slots {
let slots: Value = parse_property_string(
&export_slots, &SLOT_ARRAY_SCHEMA
)?;
let mut slots: Vec<String> = slots
.as_array()
.unwrap()
.iter()
.map(|v| v.to_string())
.collect();
slots.sort();
if slots.is_empty() {
data.export_slots = None;
} else {
let slots = slots.join(",");
data.export_slots = Some(slots);
}
}
config.set_data(&name, "changer", &data)?;
config::drive::save_config(&config)?;
Ok(())
}
#[api(
protected: true,
input: {
properties: {
name: {
schema: CHANGER_NAME_SCHEMA,
},
},
},
)]
/// Delete a tape changer configuration
pub fn delete_changer(name: String, _param: Value) -> Result<(), Error> {
let _lock = config::drive::lock()?;
let (mut config, _digest) = config::drive::config()?;
match config.sections.get(&name) {
Some((section_type, _)) => {
if section_type != "changer" {
bail!("Entry '{}' exists, but is not a changer device", name);
}
config.sections.remove(&name);
},
None => bail!("Delete changer '{}' failed - no such entry", name),
}
let drive_list: Vec<LinuxTapeDrive> = config.convert_to_typed_array("linux")?;
for drive in drive_list {
if let Some(changer) = drive.changer {
if changer == name {
bail!("Delete changer '{}' failed - used by drive '{}'", name, drive.name);
}
}
}
config::drive::save_config(&config)?;
Ok(())
}
const ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_GET_CONFIG)
.put(&API_METHOD_UPDATE_CHANGER)
.delete(&API_METHOD_DELETE_CHANGER);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_CHANGERS)
.post(&API_METHOD_CREATE_CHANGER)
.match_all("name", &ITEM_ROUTER);

View File

@@ -120,11 +120,11 @@ pub fn create_datastore(param: Value) -> Result<(), Error> {

     let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;

-    let datastore: datastore::DataStoreConfig = serde_json::from_value(param.clone())?;
+    let datastore: datastore::DataStoreConfig = serde_json::from_value(param)?;

     let (mut config, _digest) = datastore::config()?;

-    if let Some(_) = config.sections.get(&datastore.name) {
+    if config.sections.get(&datastore.name).is_some() {
         bail!("datastore '{}' already exists.", datastore.name);
     }
@@ -151,10 +151,7 @@ pub fn create_datastore(param: Value) -> Result<(), Error> {
            },
        },
    },
-    returns: {
-        description: "The datastore configuration (with config digest).",
-        type: datastore::DataStoreConfig,
-    },
+    returns: { type: datastore::DataStoreConfig },
    access: {
        permission: &Permission::Privilege(&["datastore", "{name}"], PRIV_DATASTORE_AUDIT, false),
    },
@@ -280,6 +277,7 @@ pub enum DeletableProperty {
     },
 )]
 /// Update datastore config.
+#[allow(clippy::too_many_arguments)]
 pub fn update_datastore(
     name: String,
     comment: Option<String>,
src/api2/config/drive.rs (new file, 281 lines)
View File

@@ -0,0 +1,281 @@
use anyhow::{bail, Error};
use ::serde::{Deserialize, Serialize};
use serde_json::Value;
use proxmox::api::{api, Router, RpcEnvironment};
use crate::{
config,
api2::types::{
PROXMOX_CONFIG_DIGEST_SCHEMA,
DRIVE_NAME_SCHEMA,
CHANGER_NAME_SCHEMA,
CHANGER_DRIVENUM_SCHEMA,
LINUX_DRIVE_PATH_SCHEMA,
LinuxTapeDrive,
ScsiTapeChanger,
},
tape::{
linux_tape_device_list,
check_drive_path,
},
};
#[api(
protected: true,
input: {
properties: {
name: {
schema: DRIVE_NAME_SCHEMA,
},
path: {
schema: LINUX_DRIVE_PATH_SCHEMA,
},
changer: {
schema: CHANGER_NAME_SCHEMA,
optional: true,
},
"changer-drivenum": {
schema: CHANGER_DRIVENUM_SCHEMA,
optional: true,
},
},
},
)]
/// Create a new drive
pub fn create_drive(param: Value) -> Result<(), Error> {
let _lock = config::drive::lock()?;
let (mut config, _digest) = config::drive::config()?;
let item: LinuxTapeDrive = serde_json::from_value(param)?;
let linux_drives = linux_tape_device_list();
check_drive_path(&linux_drives, &item.path)?;
let existing: Vec<LinuxTapeDrive> = config.convert_to_typed_array("linux")?;
for drive in existing {
if drive.name == item.name {
bail!("Entry '{}' already exists", item.name);
}
if drive.path == item.path {
bail!("Path '{}' already used in drive '{}'", item.path, drive.name);
}
}
config.set_data(&item.name, "linux", &item)?;
config::drive::save_config(&config)?;
Ok(())
}
#[api(
input: {
properties: {
name: {
schema: DRIVE_NAME_SCHEMA,
},
},
},
returns: {
type: LinuxTapeDrive,
},
)]
/// Get drive configuration
pub fn get_config(
name: String,
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<LinuxTapeDrive, Error> {
let (config, digest) = config::drive::config()?;
let data: LinuxTapeDrive = config.lookup("linux", &name)?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(data)
}
#[api(
input: {
properties: {},
},
returns: {
description: "The list of configured drives (with config digest).",
type: Array,
items: {
type: LinuxTapeDrive,
},
},
)]
/// List drives
pub fn list_drives(
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<LinuxTapeDrive>, Error> {
let (config, digest) = config::drive::config()?;
let drive_list: Vec<LinuxTapeDrive> = config.convert_to_typed_array("linux")?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(drive_list)
}
#[api()]
#[derive(Serialize, Deserialize)]
#[allow(non_camel_case_types)]
#[serde(rename_all = "kebab-case")]
/// Deletable property name
pub enum DeletableProperty {
/// Delete the changer property.
changer,
/// Delete the changer-drivenum property.
changer_drivenum,
}
#[api(
protected: true,
input: {
properties: {
name: {
schema: DRIVE_NAME_SCHEMA,
},
path: {
schema: LINUX_DRIVE_PATH_SCHEMA,
optional: true,
},
changer: {
schema: CHANGER_NAME_SCHEMA,
optional: true,
},
"changer-drivenum": {
schema: CHANGER_DRIVENUM_SCHEMA,
optional: true,
},
delete: {
description: "List of properties to delete.",
type: Array,
optional: true,
items: {
type: DeletableProperty,
}
},
digest: {
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
optional: true,
},
},
},
)]
/// Update a drive configuration
pub fn update_drive(
name: String,
path: Option<String>,
changer: Option<String>,
changer_drivenum: Option<u64>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
_param: Value,
) -> Result<(), Error> {
let _lock = config::drive::lock()?;
let (mut config, expected_digest) = config::drive::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let mut data: LinuxTapeDrive = config.lookup("linux", &name)?;
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::changer => {
data.changer = None;
data.changer_drivenum = None;
},
DeletableProperty::changer_drivenum => { data.changer_drivenum = None; },
}
}
}
if let Some(path) = path {
let linux_drives = linux_tape_device_list();
check_drive_path(&linux_drives, &path)?;
data.path = path;
}
if let Some(changer) = changer {
let _: ScsiTapeChanger = config.lookup("changer", &changer)?;
data.changer = Some(changer);
}
if let Some(changer_drivenum) = changer_drivenum {
if changer_drivenum == 0 {
data.changer_drivenum = None;
} else {
if data.changer.is_none() {
bail!("Option 'changer-drivenum' requires option 'changer'.");
}
data.changer_drivenum = Some(changer_drivenum);
}
}
config.set_data(&name, "linux", &data)?;
config::drive::save_config(&config)?;
Ok(())
}
#[api(
protected: true,
input: {
properties: {
name: {
schema: DRIVE_NAME_SCHEMA,
},
},
},
)]
/// Delete a drive configuration
pub fn delete_drive(name: String, _param: Value) -> Result<(), Error> {
let _lock = config::drive::lock()?;
let (mut config, _digest) = config::drive::config()?;
match config.sections.get(&name) {
Some((section_type, _)) => {
if section_type != "linux" {
bail!("Entry '{}' exists, but is not a linux tape drive", name);
}
config.sections.remove(&name);
},
None => bail!("Delete drive '{}' failed - no such drive", name),
}
config::drive::save_config(&config)?;
Ok(())
}
const ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_GET_CONFIG)
.put(&API_METHOD_UPDATE_DRIVE)
.delete(&API_METHOD_DELETE_DRIVE);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_DRIVES)
.post(&API_METHOD_CREATE_DRIVE)
.match_all("name", &ITEM_ROUTER);

View File

@@ -0,0 +1,251 @@
use anyhow::{bail, Error};
use ::serde::{Deserialize, Serialize};
use proxmox::{
api::{
api,
Router,
RpcEnvironment,
},
};
use crate::{
api2::types::{
MEDIA_POOL_NAME_SCHEMA,
MEDIA_SET_NAMING_TEMPLATE_SCHEMA,
MEDIA_SET_ALLOCATION_POLICY_SCHEMA,
MEDIA_RETENTION_POLICY_SCHEMA,
TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA,
MediaPoolConfig,
},
config,
};
#[api(
protected: true,
input: {
properties: {
name: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
allocation: {
schema: MEDIA_SET_ALLOCATION_POLICY_SCHEMA,
optional: true,
},
retention: {
schema: MEDIA_RETENTION_POLICY_SCHEMA,
optional: true,
},
template: {
schema: MEDIA_SET_NAMING_TEMPLATE_SCHEMA,
optional: true,
},
encrypt: {
schema: TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA,
optional: true,
},
},
},
)]
/// Create a new media pool
pub fn create_pool(
name: String,
allocation: Option<String>,
retention: Option<String>,
template: Option<String>,
encrypt: Option<String>,
) -> Result<(), Error> {
let _lock = config::media_pool::lock()?;
let (mut config, _digest) = config::media_pool::config()?;
if config.sections.get(&name).is_some() {
bail!("Media pool '{}' already exists", name);
}
let item = MediaPoolConfig {
name: name.clone(),
allocation,
retention,
template,
encrypt,
};
config.set_data(&name, "pool", &item)?;
config::media_pool::save_config(&config)?;
Ok(())
}
#[api(
returns: {
description: "The list of configured media pools (with config digest).",
type: Array,
items: {
type: MediaPoolConfig,
},
},
)]
/// List media pools
pub fn list_pools(
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<MediaPoolConfig>, Error> {
let (config, digest) = config::media_pool::config()?;
let list = config.convert_to_typed_array("pool")?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
#[api(
input: {
properties: {
name: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
},
},
returns: {
type: MediaPoolConfig,
},
)]
/// Get media pool configuration
pub fn get_config(name: String) -> Result<MediaPoolConfig, Error> {
let (config, _digest) = config::media_pool::config()?;
let data: MediaPoolConfig = config.lookup("pool", &name)?;
Ok(data)
}
#[api()]
#[derive(Serialize, Deserialize)]
#[allow(non_camel_case_types)]
/// Deletable property name
pub enum DeletableProperty {
/// Delete media set allocation policy.
allocation,
/// Delete pool retention policy
retention,
/// Delete media set naming template
template,
/// Delete encryption fingerprint
encrypt,
}
#[api(
protected: true,
input: {
properties: {
name: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
allocation: {
schema: MEDIA_SET_ALLOCATION_POLICY_SCHEMA,
optional: true,
},
retention: {
schema: MEDIA_RETENTION_POLICY_SCHEMA,
optional: true,
},
template: {
schema: MEDIA_SET_NAMING_TEMPLATE_SCHEMA,
optional: true,
},
encrypt: {
schema: TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA,
optional: true,
},
delete: {
description: "List of properties to delete.",
type: Array,
optional: true,
items: {
type: DeletableProperty,
}
},
},
},
)]
/// Update media pool settings
pub fn update_pool(
name: String,
allocation: Option<String>,
retention: Option<String>,
template: Option<String>,
encrypt: Option<String>,
delete: Option<Vec<DeletableProperty>>,
) -> Result<(), Error> {
let _lock = config::media_pool::lock()?;
let (mut config, _digest) = config::media_pool::config()?;
let mut data: MediaPoolConfig = config.lookup("pool", &name)?;
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::allocation => { data.allocation = None; },
DeletableProperty::retention => { data.retention = None; },
DeletableProperty::template => { data.template = None; },
DeletableProperty::encrypt => { data.encrypt = None; },
}
}
}
if allocation.is_some() { data.allocation = allocation; }
if retention.is_some() { data.retention = retention; }
if template.is_some() { data.template = template; }
if encrypt.is_some() { data.encrypt = encrypt; }
config.set_data(&name, "pool", &data)?;
config::media_pool::save_config(&config)?;
Ok(())
}
#[api(
protected: true,
input: {
properties: {
name: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
},
},
)]
/// Delete a media pool configuration
pub fn delete_pool(name: String) -> Result<(), Error> {
let _lock = config::media_pool::lock()?;
let (mut config, _digest) = config::media_pool::config()?;
match config.sections.get(&name) {
Some(_) => { config.sections.remove(&name); },
None => bail!("delete pool '{}' failed - no such pool", name),
}
config::media_pool::save_config(&config)?;
Ok(())
}
const ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_GET_CONFIG)
.put(&API_METHOD_UPDATE_POOL)
.delete(&API_METHOD_DELETE_POOL);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_POOLS)
.post(&API_METHOD_CREATE_POOL)
.match_all("name", &ITEM_ROUTER);

View File

@@ -19,10 +19,7 @@ use crate::config::acl::{PRIV_REMOTE_AUDIT, PRIV_REMOTE_MODIFY};
    returns: {
        description: "The list of configured remotes (with config digest).",
        type: Array,
-        items: {
-            type: remote::Remote,
-            description: "Remote configuration (without password).",
-        },
+        items: { type: remote::Remote },
    },
    access: {
        description: "List configured remotes filtered by Remote.Audit privileges",
@@ -99,13 +96,13 @@ pub fn create_remote(password: String, param: Value) -> Result<(), Error> {

     let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;

-    let mut data = param.clone();
+    let mut data = param;
     data["password"] = Value::from(base64::encode(password.as_bytes()));
     let remote: remote::Remote = serde_json::from_value(data)?;

     let (mut config, _digest) = remote::config()?;

-    if let Some(_) = config.sections.get(&remote.name) {
+    if config.sections.get(&remote.name).is_some() {
         bail!("remote '{}' already exists.", remote.name);
     }
@@ -124,10 +121,7 @@ pub fn create_remote(password: String, param: Value) -> Result<(), Error> {
            },
        },
    },
-    returns: {
-        description: "The remote configuration (with config digest).",
-        type: remote::Remote,
-    },
+    returns: { type: remote::Remote },
    access: {
        permission: &Permission::Privilege(&["remote", "{name}"], PRIV_REMOTE_AUDIT, false),
    }
@@ -209,6 +203,7 @@ pub enum DeletableProperty {
     },
 )]
 /// Update remote configuration.
+#[allow(clippy::too_many_arguments)]
 pub fn update_remote(
     name: String,
     comment: Option<String>,
@@ -316,9 +311,7 @@ pub fn delete_remote(name: String, digest: Option<String>) -> Result<(), Error>

 /// Helper to get client for remote.cfg entry
 pub async fn remote_client(remote: remote::Remote) -> Result<HttpClient, Error> {
-    let options = HttpClientOptions::new()
-        .password(Some(remote.password.clone()))
-        .fingerprint(remote.fingerprint.clone());
+    let options = HttpClientOptions::new_non_interactive(remote.password.clone(), remote.fingerprint.clone());

     let client = HttpClient::new(
         &remote.host,
@@ -347,10 +340,7 @@ pub async fn remote_client(remote: remote::Remote) -> Result<HttpClient, Error>
    returns: {
        description: "List the accessible datastores.",
        type: Array,
-        items: {
-            description: "Datastore name and description.",
-            type: DataStoreListItem,
-        },
+        items: { type: DataStoreListItem },
    },
 )]
 /// List datastores of a remote.cfg entry

View File

@@ -154,14 +154,14 @@ pub fn create_sync_job(

     let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;

-    let sync_job: sync::SyncJobConfig = serde_json::from_value(param.clone())?;
+    let sync_job: sync::SyncJobConfig = serde_json::from_value(param)?;
     if !check_sync_job_modify_access(&user_info, &auth_id, &sync_job) {
         bail!("permission check failed");
     }

     let (mut config, _digest) = sync::config()?;

-    if let Some(_) = config.sections.get(&sync_job.id) {
+    if config.sections.get(&sync_job.id).is_some() {
         bail!("job '{}' already exists.", sync_job.id);
     }
@@ -182,10 +182,7 @@ pub fn create_sync_job(
            },
        },
    },
-    returns: {
-        description: "The sync job configuration.",
-        type: sync::SyncJobConfig,
-    },
+    returns: { type: sync::SyncJobConfig },
    access: {
        description: "Limited to sync job entries where user has Datastore.Audit on target datastore, and Remote.Audit on source remote.",
        permission: &Permission::Anybody,
@@ -282,6 +279,7 @@ pub enum DeletableProperty {
     },
 )]
 /// Update sync job config.
+#[allow(clippy::too_many_arguments)]
 pub fn update_sync_job(
     id: String,
     store: Option<String>,
@@ -517,7 +515,7 @@ acl:1:/remote/remote1/remotestore1:write@pbs:RemoteSyncOperator

     // unless they have Datastore.Modify as well
     job.store = "localstore3".to_string();
-    job.owner = Some(read_auth_id.clone());
+    job.owner = Some(read_auth_id);
     assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), true);
     job.owner = None;
     assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), true);

View File

@@ -0,0 +1,279 @@
use anyhow::{bail, Error};
use serde_json::Value;
use proxmox::{
api::{
api,
ApiMethod,
Router,
RpcEnvironment,
},
tools::fs::open_file_locked,
};
use crate::{
config::{
tape_encryption_keys::{
TAPE_KEYS_LOCKFILE,
load_keys,
load_key_configs,
save_keys,
save_key_configs,
insert_key,
},
},
api2::types::{
TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA,
PROXMOX_CONFIG_DIGEST_SCHEMA,
PASSWORD_HINT_SCHEMA,
KeyInfo,
Kdf,
},
backup::{
KeyConfig,
Fingerprint,
},
};
#[api(
input: {
properties: {},
},
returns: {
description: "The list of tape encryption keys (with config digest).",
type: Array,
items: { type: KeyInfo },
},
)]
/// List existing keys
pub fn list_keys(
_param: Value,
_info: &ApiMethod,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<KeyInfo>, Error> {
let (key_map, digest) = load_key_configs()?;
let mut list = Vec::new();
for (_fingerprint, item) in key_map.iter() {
list.push(item.into());
}
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
#[api(
protected: true,
input: {
properties: {
kdf: {
type: Kdf,
optional: true,
},
fingerprint: {
schema: TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA,
},
password: {
description: "The current password.",
min_length: 5,
},
"new-password": {
description: "The new password.",
min_length: 5,
},
hint: {
schema: PASSWORD_HINT_SCHEMA,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
)]
/// Change the encryption key's password (and password hint).
pub fn change_passphrase(
kdf: Option<Kdf>,
password: String,
new_password: String,
hint: String,
fingerprint: Fingerprint,
digest: Option<String>,
_rpcenv: &mut dyn RpcEnvironment
) -> Result<(), Error> {
let kdf = kdf.unwrap_or_default();
if let Kdf::None = kdf {
bail!("Please specify a key derivation funktion (none is not allowed here).");
}
let _lock = open_file_locked(
TAPE_KEYS_LOCKFILE,
std::time::Duration::new(10, 0),
true,
)?;
let (mut config_map, expected_digest) = load_key_configs()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let key_config = match config_map.get(&fingerprint) {
Some(key_config) => key_config,
None => bail!("tape encryption key '{}' does not exist.", fingerprint),
};
let (key, created, fingerprint) = key_config.decrypt(&|| Ok(password.as_bytes().to_vec()))?;
let mut new_key_config = KeyConfig::with_key(&key, new_password.as_bytes(), kdf)?;
new_key_config.created = created; // keep original value
new_key_config.hint = Some(hint);
config_map.insert(fingerprint, new_key_config);
save_key_configs(config_map)?;
Ok(())
}
#[api(
protected: true,
input: {
properties: {
kdf: {
type: Kdf,
optional: true,
},
password: {
description: "A secret password.",
min_length: 5,
},
hint: {
schema: PASSWORD_HINT_SCHEMA,
},
},
},
returns: {
schema: TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA,
},
)]
/// Create a new encryption key
pub fn create_key(
kdf: Option<Kdf>,
password: String,
hint: String,
_rpcenv: &mut dyn RpcEnvironment
) -> Result<Fingerprint, Error> {
let kdf = kdf.unwrap_or_default();
if let Kdf::None = kdf {
bail!("Please specify a key derivation funktion (none is not allowed here).");
}
let (key, mut key_config) = KeyConfig::new(password.as_bytes(), kdf)?;
key_config.hint = Some(hint);
let fingerprint = key_config.fingerprint.clone().unwrap();
insert_key(key, key_config, false)?;
Ok(fingerprint)
}
#[api(
input: {
properties: {
fingerprint: {
schema: TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA,
},
},
},
returns: {
type: KeyInfo,
},
)]
/// Get key config (public key part)
pub fn read_key(
fingerprint: Fingerprint,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<KeyInfo, Error> {
let (config_map, _digest) = load_key_configs()?;
let key_config = match config_map.get(&fingerprint) {
Some(key_config) => key_config,
None => bail!("tape encryption key '{}' does not exist.", fingerprint),
};
if key_config.kdf.is_none() {
bail!("found unencrypted key - internal error");
}
Ok(key_config.into())
}
#[api(
protected: true,
input: {
properties: {
fingerprint: {
schema: TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
)]
/// Remove an encryption key from the database
///
/// Please note that you can no longer access tapes using this key.
pub fn delete_key(
fingerprint: Fingerprint,
digest: Option<String>,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let _lock = open_file_locked(
TAPE_KEYS_LOCKFILE,
std::time::Duration::new(10, 0),
true,
)?;
let (mut config_map, expected_digest) = load_key_configs()?;
let (mut key_map, _) = load_keys()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
match config_map.get(&fingerprint) {
Some(_) => { config_map.remove(&fingerprint); },
None => bail!("tape encryption key '{}' does not exist.", fingerprint),
}
save_key_configs(config_map)?;
key_map.remove(&fingerprint);
save_keys(key_map)?;
Ok(())
}
const ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_READ_KEY)
.put(&API_METHOD_CHANGE_PASSPHRASE)
.delete(&API_METHOD_DELETE_KEY);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_KEYS)
.post(&API_METHOD_CREATE_KEY)
.match_all("fingerprint", &ITEM_ROUTER);

View File

@@ -98,7 +98,7 @@ pub fn create_verification_job(
     let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
     let user_info = CachedUserInfo::new()?;

-    let verification_job: verify::VerificationJobConfig = serde_json::from_value(param.clone())?;
+    let verification_job: verify::VerificationJobConfig = serde_json::from_value(param)?;

     user_info.check_privs(&auth_id, &["datastore", &verification_job.store], PRIV_DATASTORE_VERIFY, false)?;
@@ -106,7 +106,7 @@ pub fn create_verification_job(
     let (mut config, _digest) = verify::config()?;

-    if let Some(_) = config.sections.get(&verification_job.id) {
+    if config.sections.get(&verification_job.id).is_some() {
         bail!("job '{}' already exists.", verification_job.id);
     }
@@ -127,10 +127,7 @@ pub fn create_verification_job(
            },
        },
    },
-    returns: {
-        description: "The verification job configuration.",
-        type: verify::VerificationJobConfig,
-    },
+    returns: { type: verify::VerificationJobConfig },
    access: {
        permission: &Permission::Anybody,
        description: "Requires Datastore.Audit or Datastore.Verify on job's datastore.",
@@ -218,6 +215,7 @@ pub enum DeletableProperty {
     },
 )]
 /// Update verification job config.
+#[allow(clippy::too_many_arguments)]
 pub fn update_verification_job(
     id: String,
     store: Option<String>,

View File

@@ -16,7 +16,7 @@ pub async fn create_download_response(path: PathBuf) -> Result<Response<Body>, E
     };

     let payload = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new())
-        .map_ok(|bytes| hyper::body::Bytes::from(bytes.freeze()));
+        .map_ok(|bytes| bytes.freeze());
     let body = Body::wrap_stream(payload);

View File

@@ -1,3 +1,5 @@
+//! Server/Node Configuration and Administration
+
 use std::net::TcpListener;
 use std::os::unix::io::AsRawFd;
@@ -6,7 +8,7 @@ use futures::future::{FutureExt, TryFutureExt};
 use hyper::body::Body;
 use hyper::http::request::Parts;
 use hyper::upgrade::Upgraded;
-use nix::fcntl::{fcntl, FcntlArg, FdFlag};
+use hyper::Request;
 use serde_json::{json, Value};
 use tokio::io::{AsyncBufReadExt, BufReader};
@@ -34,7 +36,7 @@ pub mod subscription;
 pub(crate) mod rrd;

 mod journal;
-mod services;
+pub(crate) mod services;
 mod status;
 mod syslog;
 mod time;
@@ -93,11 +95,16 @@ async fn termproxy(
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Value, Error> {
     // intentionally user only for now
-    let userid: Userid = rpcenv
+    let auth_id: Authid = rpcenv
         .get_auth_id()
-        .ok_or_else(|| format_err!("unknown user"))?
+        .ok_or_else(|| format_err!("no authid available"))?
         .parse()?;
-    let auth_id = Authid::from(userid.clone());
+
+    if auth_id.is_token() {
+        bail!("API tokens cannot access this API endpoint");
+    }
+
+    let userid = auth_id.user();

     if userid.realm() != "pam" {
         bail!("only pam users can use the console");
@@ -116,7 +123,7 @@ async fn termproxy(
     )?;

     let mut command = Vec::new();
-    match cmd.as_ref().map(|x| x.as_str()) {
+    match cmd.as_deref() {
         Some("login") | None => {
             command.push("login");
             if userid == "root@pam" {
@@ -145,18 +152,10 @@ async fn termproxy(
         move |worker| async move {
             // move inside the worker so that it survives and does not close the port
             // remove CLOEXEC from listener so that we can reuse it in termproxy
-            let fd = listener.as_raw_fd();
-            let mut flags = match fcntl(fd, FcntlArg::F_GETFD) {
-                Ok(bits) => FdFlag::from_bits_truncate(bits),
-                Err(err) => bail!("could not get fd: {}", err),
-            };
-            flags.remove(FdFlag::FD_CLOEXEC);
-            if let Err(err) = fcntl(fd, FcntlArg::F_SETFD(flags)) {
-                bail!("could not set fd: {}", err);
-            }
+            tools::fd_change_cloexec(listener.as_raw_fd(), false)?;

             let mut arguments: Vec<&str> = Vec::new();
-            let fd_string = fd.to_string();
+            let fd_string = listener.as_raw_fd().to_string();
             arguments.push(&fd_string);
             arguments.extend_from_slice(&[
                 "--path",
@@ -201,7 +200,7 @@ async fn termproxy(
             let mut needs_kill = false;
             let res = tokio::select!{
-                res = &mut child => {
+                res = child.wait() => {
                     let exit_code = res?;
                     if !exit_code.success() {
                         match exit_code.code() {
@@ -221,14 +220,13 @@ async fn termproxy(
             if needs_kill {
                 if res.is_ok() {
-                    child.kill()?;
-                    child.await?;
+                    child.kill().await?;
                     return Ok(());
                 }

-                if let Err(err) = child.kill() {
+                if let Err(err) = child.kill().await {
                     worker.warn(format!("error killing termproxy: {}", err));
-                } else if let Err(err) = child.await {
+                } else if let Err(err) = child.wait().await {
                     worker.warn(format!("error awaiting termproxy: {}", err));
                 }
             }
@@ -276,7 +274,16 @@ fn upgrade_to_websocket(
 ) -> ApiResponseFuture {
     async move {
         // intentionally user only for now
-        let userid: Userid = rpcenv.get_auth_id().unwrap().parse()?;
+        let auth_id: Authid = rpcenv
+            .get_auth_id()
+            .ok_or_else(|| format_err!("no authid available"))?
+            .parse()?;
+
+        if auth_id.is_token() {
+            bail!("API tokens cannot access this API endpoint");
+        }
+
+        let userid = auth_id.user();

         let ticket = tools::required_string_param(&param, "vncticket")?;
         let port: u16 = tools::required_integer_param(&param, "port")? as u16;
@@ -288,10 +295,10 @@ fn upgrade_to_websocket(
             Some(&ticket::term_aad(&userid, "/system", port)),
         )?;

-        let (ws, response) = WebSocket::new(parts.headers)?;
+        let (ws, response) = WebSocket::new(parts.headers.clone())?;

         crate::server::spawn_internal_task(async move {
-            let conn: Upgraded = match req_body.on_upgrade().map_err(Error::from).await {
+            let conn: Upgraded = match hyper::upgrade::on(Request::from_parts(parts, req_body)).map_err(Error::from).await {
                 Ok(upgraded) => upgraded,
                 _ => bail!("error"),
             };
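
Note: the fcntl boilerplate deleted from termproxy above is presumably what the new tools::fd_change_cloexec() helper wraps; its body is not shown in this diff. A standalone sketch of such a helper, built from the nix calls visible in the removed lines (nix 0.19-era API assumed):

    use anyhow::{bail, Error};
    use nix::fcntl::{fcntl, FcntlArg, FdFlag};
    use std::os::unix::io::RawFd;

    /// Set or clear FD_CLOEXEC on a raw file descriptor.
    fn fd_change_cloexec(fd: RawFd, on: bool) -> Result<(), Error> {
        let mut flags = match fcntl(fd, FcntlArg::F_GETFD) {
            Ok(bits) => FdFlag::from_bits_truncate(bits),
            Err(err) => bail!("could not get fd flags: {}", err),
        };
        flags.set(FdFlag::FD_CLOEXEC, on);
        if let Err(err) = fcntl(fd, FcntlArg::F_SETFD(flags)) {
            bail!("could not set fd flags: {}", err);
        }
        Ok(())
    }

    fn main() -> Result<(), Error> {
        use std::os::unix::io::AsRawFd;
        let listener = std::net::TcpListener::bind("127.0.0.1:0")?;
        // clear FD_CLOEXEC so a child process (like termproxy) can inherit the socket
        fd_change_cloexec(listener.as_raw_fd(), false)?;
        Ok(())
    }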

View File

@@ -35,18 +35,15 @@ use crate::api2::types::{Authid, APTUpdateInfo, NODE_SCHEMA, UPID_SCHEMA};
 /// List available APT updates
 fn apt_update_available(_param: Value) -> Result<Value, Error> {

-    match apt::pkg_cache_expired() {
-        Ok(false) => {
-            if let Ok(Some(cache)) = apt::read_pkg_state() {
-                return Ok(json!(cache.package_status));
-            }
-        },
-        _ => (),
+    if let Ok(false) = apt::pkg_cache_expired() {
+        if let Ok(Some(cache)) = apt::read_pkg_state() {
+            return Ok(json!(cache.package_status));
+        }
     }

     let cache = apt::update_cache()?;

-    return Ok(json!(cache.package_status));
+    Ok(json!(cache.package_status))
 }

 fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
@@ -90,8 +87,8 @@ fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
            type: bool,
            description: r#"Send notification mail about new package updates available to the
                email address configured for 'root@pam')."#,
-            optional: true,
            default: false,
+            optional: true,
        },
        quiet: {
            description: "Only produces output suitable for logging, omitting progress indicators.",
@@ -110,16 +107,13 @@ fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
 )]
 /// Update the APT database
 pub fn apt_update_database(
-    notify: Option<bool>,
-    quiet: Option<bool>,
+    notify: bool,
+    quiet: bool,
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<String, Error> {

     let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
-    let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
-
-    // FIXME: change to non-option in signature and drop below once we have proxmox-api-macro 0.2.3
-    let quiet = quiet.unwrap_or(API_METHOD_APT_UPDATE_DATABASE_PARAM_DEFAULT_QUIET);
-    let notify = notify.unwrap_or(API_METHOD_APT_UPDATE_DATABASE_PARAM_DEFAULT_NOTIFY);
+    let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;

     let upid_str = WorkerTask::new_thread("aptupdate", None, auth_id, to_stdout, move |worker| {
         do_apt_update(&worker, quiet)?;
@@ -196,7 +190,7 @@ fn apt_get_changelog(
         }
     }, Some(&name));

-    if pkg_info.len() == 0 {
+    if pkg_info.is_empty() {
         bail!("Package '{}' not found", name);
     }
@@ -205,7 +199,7 @@ fn apt_get_changelog(
     if changelog_url.starts_with("http://download.proxmox.com/") {
         let changelog = crate::tools::runtime::block_on(http::get_string(changelog_url, None))
             .map_err(|err| format_err!("Error downloading changelog from '{}': {}", changelog_url, err))?;
-        return Ok(json!(changelog));
+        Ok(json!(changelog))

     } else if changelog_url.starts_with("https://enterprise.proxmox.com/") {
         let sub = match subscription::read_subscription()? {
@@ -229,7 +223,7 @@ fn apt_get_changelog(
         let changelog = crate::tools::runtime::block_on(http::get_string(changelog_url, Some(&auth_header)))
             .map_err(|err| format_err!("Error downloading changelog from '{}': {}", changelog_url, err))?;
-        return Ok(json!(changelog));
+        Ok(json!(changelog))

     } else {
         let mut command = std::process::Command::new("apt-get");
@@ -237,7 +231,7 @@ fn apt_get_changelog(
         command.arg("-qq"); // don't display download progress
         command.arg(name);
         let output = crate::tools::run_command(command, None)?;
-        return Ok(json!(output));
+        Ok(json!(output))
     }
 }
@@ -261,7 +255,7 @@ fn apt_get_changelog(
     },
 )]
 /// Get package information for important Proxmox Backup Server packages.
-pub fn get_versions() -> Result<Value, Error> {
+pub fn get_versions() -> Result<Vec<APTUpdateInfo>, Error> {
     const PACKAGES: &[&str] = &[
         "ifupdown2",
         "libjs-extjs",
@@ -276,7 +270,7 @@ pub fn get_versions() -> Result<Value, Error> {
         "zfsutils-linux",
     ];

-    fn unknown_package(package: String) -> APTUpdateInfo {
+    fn unknown_package(package: String, extra_info: Option<String>) -> APTUpdateInfo {
         APTUpdateInfo {
             package,
             title: "unknown".into(),
@@ -288,6 +282,7 @@ pub fn get_versions() -> Result<Value, Error> {
             priority: "unknown".into(),
             section: "unknown".into(),
             change_log_url: "unknown".into(),
+            extra_info,
         }
     }
@@ -301,14 +296,28 @@ pub fn get_versions() -> Result<Value, Error> {
         },
         None,
     );
+
+    let running_kernel = format!(
+        "running kernel: {}",
+        nix::sys::utsname::uname().release().to_owned()
+    );
     if let Some(proxmox_backup) = pbs_packages.iter().find(|pkg| pkg.package == "proxmox-backup") {
-        packages.push(proxmox_backup.to_owned());
+        let mut proxmox_backup = proxmox_backup.clone();
+        proxmox_backup.extra_info = Some(running_kernel);
+        packages.push(proxmox_backup);
     } else {
-        packages.push(unknown_package("proxmox-backup".into()));
+        packages.push(unknown_package("proxmox-backup".into(), Some(running_kernel)));
     }

+    let version = crate::api2::version::PROXMOX_PKG_VERSION;
+    let release = crate::api2::version::PROXMOX_PKG_RELEASE;
+    let daemon_version_info = Some(format!("running version: {}.{}", version, release));
     if let Some(pkg) = pbs_packages.iter().find(|pkg| pkg.package == "proxmox-backup-server") {
-        packages.push(pkg.to_owned());
+        let mut pkg = pkg.clone();
+        pkg.extra_info = daemon_version_info;
+        packages.push(pkg);
+    } else {
+        packages.push(unknown_package("proxmox-backup".into(), daemon_version_info));
     }

     let mut kernel_pkgs: Vec<APTUpdateInfo> = pbs_packages
@@ -334,11 +343,11 @@ pub fn get_versions() -> Result<Value, Error> {
     }

     match pbs_packages.iter().find(|item| &item.package == pkg) {
         Some(apt_pkg) => packages.push(apt_pkg.to_owned()),
-        None => packages.push(unknown_package(pkg.to_string())),
+        None => packages.push(unknown_package(pkg.to_string(), None)),
     }

-    Ok(json!(packages))
+    Ok(packages)
 }

 const SUBDIRS: SubdirMap = &[
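
Note: with proxmox-api-macro 0.2.3 the schema's `default:` is applied before the handler runs, so the Option dance and the generated *_PARAM_DEFAULT_* constants disappear. What the handler effectively had to do before, in plain Rust:

    fn apt_update_database(notify: Option<bool>, quiet: Option<bool>) {
        // pre-0.2.3: the handler had to resolve schema defaults itself
        let notify = notify.unwrap_or(false);
        let quiet = quiet.unwrap_or(false);
        println!("notify={} quiet={}", notify, quiet);
    }

    fn main() {
        // a request that only sets quiet; notify falls back to its schema default
        apt_update_database(None, Some(true));
    }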

View File

@@ -138,7 +138,7 @@ pub fn initialize_disk(
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Value, Error> {

-    let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
+    let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;

     let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

View File

@@ -132,7 +132,7 @@ pub fn create_datastore_disk(
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<String, Error> {

-    let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
+    let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;

     let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
@@ -164,7 +164,7 @@ pub fn create_datastore_disk(

         let manager = DiskManage::new();

-        let disk = manager.clone().disk_by_name(&disk)?;
+        let disk = manager.disk_by_name(&disk)?;

         let partition = create_single_linux_partition(&disk)?;
         create_file_system(&partition, filesystem)?;
@@ -212,8 +212,7 @@ pub fn delete_datastore_disk(name: String) -> Result<(), Error> {
     let (config, _) = crate::config::datastore::config()?;
     let datastores: Vec<DataStoreConfig> = config.convert_to_typed_array("datastore")?;
     let conflicting_datastore: Option<DataStoreConfig> = datastores.into_iter()
-        .filter(|ds| ds.path == path)
-        .next();
+        .find(|ds| ds.path == path);

     if let Some(conflicting_datastore) = conflicting_datastore {
         bail!("Can't remove '{}' since it's required by datastore '{}'",

View File

@@ -254,7 +254,7 @@ pub fn create_zpool(
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<String, Error> {

-    let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
+    let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;

     let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

View File

@@ -125,7 +125,7 @@ pub fn update_dns(
 ) -> Result<Value, Error> {

     lazy_static! {
-        static ref MUTEX: Arc<Mutex<usize>> = Arc::new(Mutex::new(0));
+        static ref MUTEX: Arc<Mutex<()>> = Arc::new(Mutex::new(()));
     }

     let _guard = MUTEX.lock();

View File

@@ -102,10 +102,7 @@ pub fn list_network_devices(
            },
        },
    },
-    returns: {
-        description: "The network interface configuration (with config digest).",
-        type: Interface,
-    },
+    returns: { type: Interface },
    access: {
        permission: &Permission::Privilege(&["system", "network", "interfaces", "{name}"], PRIV_SYS_AUDIT, false),
    },
@@ -135,7 +132,6 @@ pub fn read_interface(iface: String) -> Result<Value, Error> {
                schema: NETWORK_INTERFACE_NAME_SCHEMA,
            },
            "type": {
-                description: "Interface type.",
                type: NetworkInterfaceType,
                optional: true,
            },
@@ -217,6 +213,7 @@ pub fn read_interface(iface: String) -> Result<Value, Error> {
     },
 )]
 /// Create network interface configuration.
+#[allow(clippy::too_many_arguments)]
 pub fn create_interface(
     iface: String,
     autostart: Option<bool>,
@@ -388,7 +385,6 @@ pub enum DeletableProperty {
                schema: NETWORK_INTERFACE_NAME_SCHEMA,
            },
            "type": {
-                description: "Interface type. If specified, need to match the current type.",
                type: NetworkInterfaceType,
                optional: true,
            },
@@ -482,6 +478,7 @@ pub enum DeletableProperty {
     },
 )]
 /// Update network interface config.
+#[allow(clippy::too_many_arguments)]
 pub fn update_interface(
     iface: String,
     autostart: Option<bool>,

View File

@@ -22,7 +22,7 @@ static SERVICE_NAME_LIST: [&str; 7] = [
     "systemd-timesyncd",
 ];

-fn real_service_name(service: &str) -> &str {
+pub fn real_service_name(service: &str) -> &str {

     // since postfix package 3.1.0-3.1 the postfix unit is only here
     // to manage subinstances, of which the default is called "-".
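
Note: making real_service_name() pub lets the syslog endpoint (see the syslog hunk below) translate the user-facing service name before querying the journal. Going by the comment above, the mapping presumably only special-cases postfix, whose default subinstance unit is "postfix@-"; the actual body is not shown in this diff, so this sketch is an assumption:

    fn real_service_name(service: &str) -> &str {
        // the postfix unit only manages subinstances; the default instance is "-"
        if service == "postfix" {
            "postfix@-"
        } else {
            service
        }
    }

    fn main() {
        assert_eq!(real_service_name("postfix"), "postfix@-");
        assert_eq!(real_service_name("sshd"), "sshd");
    }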

View File

@@ -73,10 +73,7 @@ pub fn check_subscription(
            },
        },
    },
-    returns: {
-        description: "Subscription status.",
-        type: SubscriptionInfo,
-    },
+    returns: { type: SubscriptionInfo },
    access: {
        permission: &Permission::Anybody,
    },
@@ -140,7 +137,7 @@ pub fn set_subscription(

     let server_id = tools::get_hardware_address()?;

-    let info = subscription::check_subscription(key, server_id.to_owned())?;
+    let info = subscription::check_subscription(key, server_id)?;

     subscription::write_subscription(info)
         .map_err(|e| format_err!("Error writing subscription status - {}", e))?;

View File

@@ -134,12 +134,18 @@ fn get_syslog(
     mut rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Value, Error> {

+    let service = if let Some(service) = param["service"].as_str() {
+        Some(crate::api2::node::services::real_service_name(service))
+    } else {
+        None
+    };
+
     let (count, lines) = dump_journal(
         param["start"].as_u64(),
         param["limit"].as_u64(),
         param["since"].as_str(),
         param["until"].as_str(),
-        param["service"].as_str())?;
+        service)?;

     rpcenv["total"] = Value::from(count);

View File

@@ -110,16 +110,12 @@ fn check_task_access(auth_id: &Authid, upid: &UPID) -> Result<(), Error> {
     } else {
         let user_info = CachedUserInfo::new()?;

-        let task_privs = user_info.lookup_privs(auth_id, &["system", "tasks"]);
-        if task_privs & PRIV_SYS_AUDIT != 0 {
-            // allowed to read all tasks in general
-            Ok(())
-        } else if check_job_privs(&auth_id, &user_info, upid).is_ok() {
-            // job which the user/token could have configured/manually executed
-            Ok(())
-        } else {
-            bail!("task access not allowed");
-        }
+        // access to all tasks
+        // or task == job which the user/token could have configured/manually executed
+        user_info.check_privs(auth_id, &["system", "tasks"], PRIV_SYS_AUDIT, false)
+            .or_else(|_| check_job_privs(&auth_id, &user_info, upid))
+            .or_else(|_| bail!("task access not allowed"))
     }
 }
@@ -166,7 +162,6 @@ fn check_task_access(auth_id: &Authid, upid: &UPID) -> Result<(), Error> {
        },
        user: {
            type: Userid,
-            description: "The user who started the task.",
        },
        tokenid: {
            type: Tokenname,
@@ -430,6 +425,7 @@ fn stop_task(
     },
 )]
 /// List tasks.
+#[allow(clippy::too_many_arguments)]
 pub fn list_tasks(
     start: u64,
     limit: u64,
@@ -514,7 +510,7 @@ pub fn list_tasks(
         .collect();

     let mut count = result.len() + start as usize;
-    if result.len() > 0 && result.len() >= limit { // we have a 'virtual' entry as long as we have any new
+    if !result.is_empty() && result.len() >= limit { // we have a 'virtual' entry as long as we have any new
         count += 1;
     }
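
Note: the rewritten check_task_access() collapses the nested ifs into a Result chain: the general Sys.Audit privilege is tried first, then the job-specific check, and only then does it fail. The same shape in isolation (checks reduced to booleans for illustration):

    use anyhow::{bail, Error};

    fn check_sys_audit(has_priv: bool) -> Result<(), Error> {
        if has_priv { Ok(()) } else { bail!("missing Sys.Audit") }
    }

    fn check_job_privs(owns_job: bool) -> Result<(), Error> {
        if owns_job { Ok(()) } else { bail!("not the job owner") }
    }

    fn check_task_access(has_priv: bool, owns_job: bool) -> Result<(), Error> {
        check_sys_audit(has_priv)
            .or_else(|_| check_job_privs(owns_job))
            .or_else(|_: Error| bail!("task access not allowed"))
    }

    fn main() {
        assert!(check_task_access(true, false).is_ok());
        assert!(check_task_access(false, true).is_ok());
        assert!(check_task_access(false, false).is_err());
    }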

View File

@@ -1,3 +1,5 @@
+//! Cheap check if the API daemon is online.
+
 use anyhow::{Error};
 use serde_json::{json, Value};
@@ -20,7 +22,7 @@ use proxmox::api::{api, Router, Permission};
     }
 )]
 /// Dummy method which replies with `{ "pong": True }`
-fn ping() -> Result<Value, Error> {
+pub fn ping() -> Result<Value, Error> {
     Ok(json!({
         "pong": true,
     }))

View File

@@ -88,7 +88,7 @@ pub fn do_sync_job(
     let worker_future = async move {

         let delete = sync_job.remove_vanished.unwrap_or(true);
-        let sync_owner = sync_job.owner.unwrap_or(Authid::root_auth_id().clone());
+        let sync_owner = sync_job.owner.unwrap_or_else(|| Authid::root_auth_id().clone());
         let (client, src_repo, tgt_store) = get_pull_parameters(&sync_job.store, &sync_job.remote, &sync_job.remote_store).await?;

         worker.log(format!("Starting datastore sync job '{}'", job_id));

View File

@ -1,8 +1,10 @@
+//! Backup reader/restore protocol (HTTP2 upgrade)
+
use anyhow::{bail, format_err, Error};
use futures::*;
use hyper::header::{self, HeaderValue, UPGRADE};
use hyper::http::request::Parts;
-use hyper::{Body, Response, StatusCode};
+use hyper::{Body, Response, Request, StatusCode};
use serde_json::Value;
use proxmox::{sortable, identity};
@ -113,7 +115,9 @@ fn upgrade_to_backup_reader_protocol(
let worker_id = format!("{}:{}/{}/{:08X}", store, backup_type, backup_id, backup_dir.backup_time());
-WorkerTask::spawn("reader", Some(worker_id), auth_id.clone(), true, move |worker| {
+WorkerTask::spawn("reader", Some(worker_id), auth_id.clone(), true, move |worker| async move {
+let _guard = _guard;
let mut env = ReaderEnvironment::new(
env_type,
auth_id,
@ -128,15 +132,13 @@ fn upgrade_to_backup_reader_protocol(
let service = H2Service::new(env.clone(), worker.clone(), &READER_API_ROUTER, debug);
-let abort_future = worker.abort_future();
+let mut abort_future = worker.abort_future()
+.map(|_| Err(format_err!("task aborted")));
-let req_fut = req_body
-.on_upgrade()
-.map_err(Error::from)
-.and_then({
-let env = env.clone();
-move |conn| {
-env.debug("protocol upgrade done");
+let env2 = env.clone();
+let req_fut = async move {
+let conn = hyper::upgrade::on(Request::from_parts(parts, req_body)).await?;
+env2.debug("protocol upgrade done");
let mut http = hyper::server::conn::Http::new();
http.http2_only(true);
@ -147,24 +149,17 @@ fn upgrade_to_backup_reader_protocol(
http.http2_max_frame_size(4*1024*1024);
http.serve_connection(conn, service)
-.map_err(Error::from)
-}
-});
-let abort_future = abort_future
-.map(|_| Err(format_err!("task aborted")));
-use futures::future::Either;
-futures::future::select(req_fut, abort_future)
-.map(move |res| {
-let _guard = _guard;
-match res {
-Either::Left((Ok(res), _)) => Ok(res),
-Either::Left((Err(err), _)) => Err(err),
-Either::Right((Ok(res), _)) => Ok(res),
-Either::Right((Err(err), _)) => Err(err),
-}
-})
-.map_ok(move |_| env.log("reader finished successfully"))
+.map_err(Error::from).await
+};
+
+futures::select!{
+req = req_fut.fuse() => req?,
+abort = abort_future => abort?,
+};
+
+env.log("reader finished successfully");
+
+Ok(())
})?;
let response = Response::builder()
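The rework above swaps the hand-rolled `futures::future::select` plus `Either` matching for `futures::select!`. A minimal, self-contained sketch of the same race-the-abort pattern (plain `futures`/`anyhow` only; the names and the oneshot abort signal are illustrative, not PBS code):

use anyhow::{format_err, Error};
use futures::{channel::oneshot, FutureExt};

async fn race(
    req_fut: impl std::future::Future<Output = Result<(), Error>>,
    abort_rx: oneshot::Receiver<()>,
) -> Result<(), Error> {
    // Fuse both sides so select! may poll them safely.
    let mut req_fut = Box::pin(req_fut.fuse());
    let mut abort_future = Box::pin(
        abort_rx.map(|_| Err::<(), Error>(format_err!("task aborted"))).fuse(),
    );
    futures::select! {
        req = req_fut => req?,          // request path finished first
        abort = abort_future => abort?, // abort signal won the race
    };
    Ok(())
}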

View File

@ -1,3 +1,5 @@
+//! Datastore status
+
use proxmox::list_subdirs_api_method;
use anyhow::{Error};
@ -75,7 +77,7 @@ use crate::config::acl::{
},
)]
/// List Datastore usages and estimates
-fn datastore_status(
+pub fn datastore_status(
_param: Value,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
@ -127,8 +129,7 @@ fn datastore_status(
rrd_mode,
);
-match (total_res, used_res) {
-(Some((start, reso, total_list)), Some((_, _, used_list))) => {
+if let (Some((start, reso, total_list)), Some((_, _, used_list))) = (total_res, used_res) {
let mut usage_list: Vec<f64> = Vec::new();
let mut time_list: Vec<u64> = Vec::new();
let mut history = Vec::new();
@ -168,8 +169,6 @@ fn datastore_status(
}
}
}
-},
-_ => {},
}
list.push(entry);
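The `match` to `if let` change above is a pure simplification: a tuple of `Option`s with a single interesting arm, so the `_ => {}` catch-all can go. A tiny standalone illustration (made-up values):

fn main() {
    let total_res: Option<u64> = Some(100);
    let used_res: Option<u64> = Some(42);
    // Only act when both RRD series are present.
    if let (Some(total), Some(used)) = (total_res, used_res) {
        println!("usage: {:.2}", used as f64 / total as f64);
    }
}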

254
src/api2/tape/backup.rs Normal file
View File

@ -0,0 +1,254 @@
use std::path::Path;
use std::sync::Arc;
use anyhow::{bail, Error};
use serde_json::Value;
use proxmox::{
api::{
api,
RpcEnvironment,
RpcEnvironmentType,
Router,
},
};
use crate::{
task_log,
config::{
self,
drive::check_drive_exists,
},
backup::{
DataStore,
BackupDir,
BackupInfo,
},
api2::types::{
Authid,
DATASTORE_SCHEMA,
MEDIA_POOL_NAME_SCHEMA,
DRIVE_NAME_SCHEMA,
UPID_SCHEMA,
MediaPoolConfig,
},
server::WorkerTask,
task::TaskState,
tape::{
TAPE_STATUS_DIR,
Inventory,
PoolWriter,
MediaPool,
SnapshotReader,
drive::media_changer,
changer::update_changer_online_status,
},
};
#[api(
input: {
properties: {
store: {
schema: DATASTORE_SCHEMA,
},
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
drive: {
schema: DRIVE_NAME_SCHEMA,
},
"eject-media": {
description: "Eject media upon job completion.",
type: bool,
optional: true,
},
"export-media-set": {
description: "Export media set upon job completion.",
type: bool,
optional: true,
},
},
},
returns: {
schema: UPID_SCHEMA,
},
)]
/// Backup datastore to tape media pool
pub fn backup(
store: String,
pool: String,
drive: String,
eject_media: Option<bool>,
export_media_set: Option<bool>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let datastore = DataStore::lookup_datastore(&store)?;
let (config, _digest) = config::media_pool::config()?;
let pool_config: MediaPoolConfig = config.lookup("pool", &pool)?;
let (drive_config, _digest) = config::drive::config()?;
// early check before starting worker
check_drive_exists(&drive_config, &drive)?;
let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;
let eject_media = eject_media.unwrap_or(false);
let export_media_set = export_media_set.unwrap_or(false);
let upid_str = WorkerTask::new_thread(
"tape-backup",
Some(store),
auth_id,
to_stdout,
move |worker| {
backup_worker(&worker, datastore, &drive, &pool_config, eject_media, export_media_set)?;
Ok(())
}
)?;
Ok(upid_str.into())
}
pub const ROUTER: Router = Router::new()
.post(&API_METHOD_BACKUP);
fn backup_worker(
worker: &WorkerTask,
datastore: Arc<DataStore>,
drive: &str,
pool_config: &MediaPoolConfig,
eject_media: bool,
export_media_set: bool,
) -> Result<(), Error> {
let status_path = Path::new(TAPE_STATUS_DIR);
let _lock = MediaPool::lock(status_path, &pool_config.name)?;
task_log!(worker, "update media online status");
let has_changer = update_media_online_status(drive)?;
let use_offline_media = !has_changer;
let pool = MediaPool::with_config(status_path, &pool_config, use_offline_media)?;
let mut pool_writer = PoolWriter::new(pool, drive)?;
let mut group_list = BackupInfo::list_backup_groups(&datastore.base_path())?;
group_list.sort_unstable();
for group in group_list {
let mut snapshot_list = group.list_backups(&datastore.base_path())?;
BackupInfo::sort_list(&mut snapshot_list, true); // oldest first
for info in snapshot_list {
if pool_writer.contains_snapshot(&info.backup_dir.to_string()) {
continue;
}
task_log!(worker, "backup snapshot {}", info.backup_dir);
backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)?;
}
}
pool_writer.commit()?;
if export_media_set {
pool_writer.export_media_set(worker)?;
} else if eject_media {
pool_writer.eject_media(worker)?;
}
Ok(())
}
// Try to update the media online status
fn update_media_online_status(drive: &str) -> Result<bool, Error> {
let (config, _digest) = config::drive::config()?;
let mut has_changer = false;
if let Ok(Some((mut changer, changer_name))) = media_changer(&config, drive) {
has_changer = true;
let label_text_list = changer.online_media_label_texts()?;
let status_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(status_path)?;
update_changer_online_status(
&config,
&mut inventory,
&changer_name,
&label_text_list,
)?;
}
Ok(has_changer)
}
pub fn backup_snapshot(
worker: &WorkerTask,
pool_writer: &mut PoolWriter,
datastore: Arc<DataStore>,
snapshot: BackupDir,
) -> Result<(), Error> {
task_log!(worker, "start backup {}:{}", datastore.name(), snapshot);
let snapshot_reader = SnapshotReader::new(datastore.clone(), snapshot.clone())?;
let mut chunk_iter = snapshot_reader.chunk_iterator()?.peekable();
loop {
worker.check_abort()?;
// test if we have remaining chunks
if chunk_iter.peek().is_none() {
break;
}
let uuid = pool_writer.load_writable_media(worker)?;
worker.check_abort()?;
let (leom, _bytes) = pool_writer.append_chunk_archive(worker, &datastore, &mut chunk_iter)?;
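// LEOM = logical end of media: the drive reports the tape is nearly full,
// so mark this medium full; the next loop iteration then allocates a fresh
// writable medium via load_writable_media().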
if leom {
pool_writer.set_media_status_full(&uuid)?;
}
}
worker.check_abort()?;
let uuid = pool_writer.load_writable_media(worker)?;
worker.check_abort()?;
let (done, _bytes) = pool_writer.append_snapshot_archive(worker, &snapshot_reader)?;
if !done {
// does not fit on tape, so we try on next volume
pool_writer.set_media_status_full(&uuid)?;
worker.check_abort()?;
pool_writer.load_writable_media(worker)?;
let (done, _bytes) = pool_writer.append_snapshot_archive(worker, &snapshot_reader)?;
if !done {
bail!("write_snapshot_archive failed on second media");
}
}
task_log!(worker, "end backup {}:{}", datastore.name(), snapshot);
Ok(())
}

191
src/api2/tape/changer.rs Normal file
View File

@ -0,0 +1,191 @@
use std::path::Path;
use anyhow::Error;
use serde_json::Value;
use proxmox::api::{api, Router, SubdirMap};
use proxmox::list_subdirs_api_method;
use crate::{
config,
api2::types::{
CHANGER_NAME_SCHEMA,
ChangerListEntry,
MtxEntryKind,
MtxStatusEntry,
ScsiTapeChanger,
},
tape::{
TAPE_STATUS_DIR,
Inventory,
linux_tape_changer_list,
changer::{
OnlineStatusMap,
ElementStatus,
ScsiMediaChange,
mtx_status_to_online_set,
},
lookup_device_identification,
},
};
#[api(
input: {
properties: {
name: {
schema: CHANGER_NAME_SCHEMA,
},
},
},
returns: {
description: "A status entry for each drive and slot.",
type: Array,
items: {
type: MtxStatusEntry,
},
},
)]
/// Get tape changer status
pub async fn get_status(name: String) -> Result<Vec<MtxStatusEntry>, Error> {
let (config, _digest) = config::drive::config()?;
let mut changer_config: ScsiTapeChanger = config.lookup("changer", &name)?;
let status = tokio::task::spawn_blocking(move || {
changer_config.status()
}).await??;
let state_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(state_path)?;
let mut map = OnlineStatusMap::new(&config)?;
let online_set = mtx_status_to_online_set(&status, &inventory);
map.update_online_status(&name, online_set)?;
inventory.update_online_status(&map)?;
let mut list = Vec::new();
for (id, drive_status) in status.drives.iter().enumerate() {
let entry = MtxStatusEntry {
entry_kind: MtxEntryKind::Drive,
entry_id: id as u64,
label_text: match &drive_status.status {
ElementStatus::Empty => None,
ElementStatus::Full => Some(String::new()),
ElementStatus::VolumeTag(tag) => Some(tag.to_string()),
},
loaded_slot: drive_status.loaded_slot,
};
list.push(entry);
}
for (id, slot_info) in status.slots.iter().enumerate() {
let entry = MtxStatusEntry {
entry_kind: if slot_info.import_export {
MtxEntryKind::ImportExport
} else {
MtxEntryKind::Slot
},
entry_id: id as u64 + 1,
label_text: match &slot_info.status {
ElementStatus::Empty => None,
ElementStatus::Full => Some(String::new()),
ElementStatus::VolumeTag(tag) => Some(tag.to_string()),
},
loaded_slot: None,
};
list.push(entry);
}
Ok(list)
}
#[api(
input: {
properties: {
name: {
schema: CHANGER_NAME_SCHEMA,
},
from: {
description: "Source slot number",
minimum: 1,
},
to: {
description: "Destination slot number",
minimum: 1,
},
},
},
)]
/// Transfers media from one slot to another
pub async fn transfer(
name: String,
from: u64,
to: u64,
) -> Result<(), Error> {
let (config, _digest) = config::drive::config()?;
let mut changer_config: ScsiTapeChanger = config.lookup("changer", &name)?;
tokio::task::spawn_blocking(move || {
changer_config.transfer(from, to)
}).await?
}
#[api(
input: {
properties: {},
},
returns: {
description: "The list of configured changers with model information.",
type: Array,
items: {
type: ChangerListEntry,
},
},
)]
/// List changers
pub fn list_changers(
_param: Value,
) -> Result<Vec<ChangerListEntry>, Error> {
let (config, _digest) = config::drive::config()?;
let linux_changers = linux_tape_changer_list();
let changer_list: Vec<ScsiTapeChanger> = config.convert_to_typed_array("changer")?;
let mut list = Vec::new();
for changer in changer_list {
let info = lookup_device_identification(&linux_changers, &changer.path);
let entry = ChangerListEntry { config: changer, info };
list.push(entry);
}
Ok(list)
}
const SUBDIRS: SubdirMap = &[
(
"status",
&Router::new()
.get(&API_METHOD_GET_STATUS)
),
(
"transfer",
&Router::new()
.post(&API_METHOD_TRANSFER)
),
];
const ITEM_ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))
.subdirs(&SUBDIRS);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_CHANGERS)
.match_all("name", &ITEM_ROUTER);

1235
src/api2/tape/drive.rs Normal file

File diff suppressed because it is too large

350
src/api2/tape/media.rs Normal file
View File

@ -0,0 +1,350 @@
use std::path::Path;
use anyhow::{bail, format_err, Error};
use serde::{Serialize, Deserialize};
use proxmox::{
api::{api, Router, SubdirMap},
list_subdirs_api_method,
tools::Uuid,
};
use crate::{
config::{
self,
},
api2::types::{
BACKUP_ID_SCHEMA,
BACKUP_TYPE_SCHEMA,
MEDIA_POOL_NAME_SCHEMA,
MEDIA_LABEL_SCHEMA,
MEDIA_UUID_SCHEMA,
MEDIA_SET_UUID_SCHEMA,
MediaPoolConfig,
MediaListEntry,
MediaStatus,
MediaContentEntry,
},
backup::{
BackupDir,
},
tape::{
TAPE_STATUS_DIR,
Inventory,
MediaPool,
MediaCatalog,
changer::update_online_status,
},
};
#[api(
input: {
properties: {
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
optional: true,
},
},
},
returns: {
description: "List of registered backup media.",
type: Array,
items: {
type: MediaListEntry,
},
},
)]
/// List pool media
pub async fn list_media(pool: Option<String>) -> Result<Vec<MediaListEntry>, Error> {
let (config, _digest) = config::media_pool::config()?;
let status_path = Path::new(TAPE_STATUS_DIR);
let catalogs = tokio::task::spawn_blocking(move || {
// update online media status
if let Err(err) = update_online_status(status_path) {
eprintln!("{}", err);
eprintln!("update online media status failed - using old state");
}
// test what catalog files we have
MediaCatalog::media_with_catalogs(status_path)
}).await??;
let mut list = Vec::new();
for (_section_type, data) in config.sections.values() {
let pool_name = match data["name"].as_str() {
None => continue,
Some(name) => name,
};
if let Some(ref name) = pool {
if name != pool_name {
continue;
}
}
let config: MediaPoolConfig = config.lookup("pool", pool_name)?;
let use_offline_media = true; // does not matter here
let pool = MediaPool::with_config(status_path, &config, use_offline_media)?;
let current_time = proxmox::tools::time::epoch_i64();
for media in pool.list_media() {
let expired = pool.media_is_expired(&media, current_time);
let media_set_uuid = media.media_set_label()
.map(|set| set.uuid.clone());
let seq_nr = media.media_set_label()
.map(|set| set.seq_nr);
let media_set_name = media.media_set_label()
.map(|set| {
pool.generate_media_set_name(&set.uuid, config.template.clone())
.unwrap_or_else(|_| set.uuid.to_string())
});
let catalog_ok = if media.media_set_label().is_none() {
// Media is empty, we need no catalog
true
} else {
catalogs.contains(media.uuid())
};
list.push(MediaListEntry {
uuid: media.uuid().clone(),
label_text: media.label_text().to_string(),
ctime: media.ctime(),
pool: Some(pool_name.to_string()),
location: media.location().clone(),
status: *media.status(),
catalog: catalog_ok,
expired,
media_set_ctime: media.media_set_label().map(|set| set.ctime),
media_set_uuid,
media_set_name,
seq_nr,
});
}
}
if pool.is_none() {
let inventory = Inventory::load(status_path)?;
for media_id in inventory.list_unassigned_media() {
let (mut status, location) = inventory.status_and_location(&media_id.label.uuid);
if status == MediaStatus::Unknown {
status = MediaStatus::Writable;
}
list.push(MediaListEntry {
uuid: media_id.label.uuid.clone(),
ctime: media_id.label.ctime,
label_text: media_id.label.label_text.to_string(),
location,
status,
catalog: true, // empty, so we do not need a catalog
expired: false,
media_set_uuid: None,
media_set_name: None,
media_set_ctime: None,
seq_nr: None,
pool: None,
});
}
}
Ok(list)
}
#[api(
input: {
properties: {
"label-text": {
schema: MEDIA_LABEL_SCHEMA,
},
force: {
description: "Force removal (even if media is used in a media set).",
type: bool,
optional: true,
},
},
},
)]
/// Destroy media (completely remove from database)
pub fn destroy_media(label_text: String, force: Option<bool>) -> Result<(), Error> {
let force = force.unwrap_or(false);
let status_path = Path::new(TAPE_STATUS_DIR);
let mut inventory = Inventory::load(status_path)?;
let media_id = inventory.find_media_by_label_text(&label_text)
.ok_or_else(|| format_err!("no such media '{}'", label_text))?;
if !force {
if let Some(ref set) = media_id.media_set_label {
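// An all-zero media set UUID marks media that is merely reserved for a
// pool but holds no data yet (cf. MEDIA_SET_UUID_SCHEMA).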
let is_empty = set.uuid.as_ref() == [0u8;16];
if !is_empty {
bail!("media '{}' contains data (please use 'force' flag to remove.", label_text);
}
}
}
let uuid = media_id.label.uuid.clone();
inventory.remove_media(&uuid)?;
Ok(())
}
#[api(
properties: {
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
optional: true,
},
"label-text": {
schema: MEDIA_LABEL_SCHEMA,
optional: true,
},
"media": {
schema: MEDIA_UUID_SCHEMA,
optional: true,
},
"media-set": {
schema: MEDIA_SET_UUID_SCHEMA,
optional: true,
},
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
optional: true,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
optional: true,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all="kebab-case")]
/// Content list filter parameters
pub struct MediaContentListFilter {
pub pool: Option<String>,
pub label_text: Option<String>,
pub media: Option<Uuid>,
pub media_set: Option<Uuid>,
pub backup_type: Option<String>,
pub backup_id: Option<String>,
}
#[api(
input: {
properties: {
"filter": {
type: MediaContentListFilter,
flatten: true,
},
},
},
returns: {
description: "Media content list.",
type: Array,
items: {
type: MediaContentEntry,
},
},
)]
/// List media content
pub fn list_content(
filter: MediaContentListFilter,
) -> Result<Vec<MediaContentEntry>, Error> {
let (config, _digest) = config::media_pool::config()?;
let status_path = Path::new(TAPE_STATUS_DIR);
let inventory = Inventory::load(status_path)?;
let mut list = Vec::new();
for media_id in inventory.list_used_media() {
let set = media_id.media_set_label.as_ref().unwrap();
if let Some(ref label_text) = filter.label_text {
if &media_id.label.label_text != label_text { continue; }
}
if let Some(ref pool) = filter.pool {
if &set.pool != pool { continue; }
}
if let Some(ref media_uuid) = filter.media {
if &media_id.label.uuid != media_uuid { continue; }
}
if let Some(ref media_set_uuid) = filter.media_set {
if &set.uuid != media_set_uuid { continue; }
}
let config: MediaPoolConfig = config.lookup("pool", &set.pool)?;
let media_set_name = inventory
.generate_media_set_name(&set.uuid, config.template.clone())
.unwrap_or_else(|_| set.uuid.to_string());
let catalog = MediaCatalog::open(status_path, &media_id.label.uuid, false, false)?;
for snapshot in catalog.snapshot_index().keys() {
let backup_dir: BackupDir = snapshot.parse()?;
if let Some(ref backup_type) = filter.backup_type {
if backup_dir.group().backup_type() != backup_type { continue; }
}
if let Some(ref backup_id) = filter.backup_id {
if backup_dir.group().backup_id() != backup_id { continue; }
}
list.push(MediaContentEntry {
uuid: media_id.label.uuid.clone(),
label_text: media_id.label.label_text.to_string(),
pool: set.pool.clone(),
media_set_name: media_set_name.clone(),
media_set_uuid: set.uuid.clone(),
media_set_ctime: set.ctime,
seq_nr: set.seq_nr,
snapshot: snapshot.to_owned(),
backup_time: backup_dir.backup_time(),
});
}
}
Ok(list)
}
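For illustration, a caller-side sketch of the filter type above, with pool and backup type set and everything else unset (names are made up):

fn example() -> Result<(), anyhow::Error> {
    let filter = MediaContentListFilter {
        pool: Some("tape-pool".to_string()),
        label_text: None,
        media: None,
        media_set: None,
        backup_type: Some("vm".to_string()),
        backup_id: None,
    };
    // One MediaContentEntry per matching snapshot on each medium.
    let entries = list_content(filter)?;
    println!("{} content entries", entries.len());
    Ok(())
}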
const SUBDIRS: SubdirMap = &[
(
"content",
&Router::new()
.get(&API_METHOD_LIST_CONTENT)
),
(
"destroy",
&Router::new()
.get(&API_METHOD_DESTROY_MEDIA)
),
(
"list",
&Router::new()
.get(&API_METHOD_LIST_MEDIA)
),
];
pub const ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))
.subdirs(SUBDIRS);

89
src/api2/tape/mod.rs Normal file
View File

@ -0,0 +1,89 @@
//! Tape Backup Management
use anyhow::Error;
use serde_json::Value;
use proxmox::{
api::{
api,
router::SubdirMap,
Router,
},
list_subdirs_api_method,
};
use crate::{
api2::types::TapeDeviceInfo,
tape::{
linux_tape_device_list,
linux_tape_changer_list,
},
};
pub mod drive;
pub mod changer;
pub mod media;
pub mod backup;
pub mod restore;
#[api(
input: {
properties: {},
},
returns: {
description: "The list of autodetected tape drives.",
type: Array,
items: {
type: TapeDeviceInfo,
},
},
)]
/// Scan tape drives
pub fn scan_drives(_param: Value) -> Result<Vec<TapeDeviceInfo>, Error> {
let list = linux_tape_device_list();
Ok(list)
}
#[api(
input: {
properties: {},
},
returns: {
description: "The list of autodetected tape changers.",
type: Array,
items: {
type: TapeDeviceInfo,
},
},
)]
/// Scan for SCSI tape changers
pub fn scan_changers(_param: Value) -> Result<Vec<TapeDeviceInfo>, Error> {
let list = linux_tape_changer_list();
Ok(list)
}
const SUBDIRS: SubdirMap = &[
("backup", &backup::ROUTER),
("changer", &changer::ROUTER),
("drive", &drive::ROUTER),
("media", &media::ROUTER),
("restore", &restore::ROUTER),
(
"scan-changers",
&Router::new()
.get(&API_METHOD_SCAN_CHANGERS),
),
(
"scan-drives",
&Router::new()
.get(&API_METHOD_SCAN_DRIVES),
),
];
pub const ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SUBDIRS))
.subdirs(SUBDIRS);

556
src/api2/tape/restore.rs Normal file
View File

@ -0,0 +1,556 @@
use std::path::Path;
use std::ffi::OsStr;
use std::convert::TryFrom;
use anyhow::{bail, format_err, Error};
use serde_json::Value;
use proxmox::{
api::{
api,
RpcEnvironment,
RpcEnvironmentType,
Router,
section_config::SectionConfigData,
},
tools::{
Uuid,
io::ReadExt,
fs::{
replace_file,
CreateOptions,
},
},
};
use crate::{
tools::compute_file_csum,
api2::types::{
DATASTORE_SCHEMA,
DRIVE_NAME_SCHEMA,
UPID_SCHEMA,
Authid,
MediaPoolConfig,
},
config::{
self,
drive::check_drive_exists,
},
backup::{
archive_type,
MANIFEST_BLOB_NAME,
CryptMode,
DataStore,
BackupDir,
DataBlob,
BackupManifest,
ArchiveType,
IndexFile,
DynamicIndexReader,
FixedIndexReader,
},
server::WorkerTask,
tape::{
TAPE_STATUS_DIR,
TapeRead,
MediaId,
MediaCatalog,
ChunkArchiveDecoder,
MediaPool,
Inventory,
file_formats::{
PROXMOX_BACKUP_MEDIA_LABEL_MAGIC_1_0,
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0,
PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0,
PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0,
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0,
MediaContentHeader,
},
drive::{
TapeDriver,
request_and_load_media,
}
},
};
pub const ROUTER: Router = Router::new()
.post(&API_METHOD_RESTORE);
#[api(
input: {
properties: {
store: {
schema: DATASTORE_SCHEMA,
},
drive: {
schema: DRIVE_NAME_SCHEMA,
},
"media-set": {
description: "Media set UUID.",
type: String,
},
},
},
returns: {
schema: UPID_SCHEMA,
},
)]
/// Restore data from media-set
pub fn restore(
store: String,
drive: String,
media_set: String,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let status_path = Path::new(TAPE_STATUS_DIR);
let inventory = Inventory::load(status_path)?;
let media_set_uuid = media_set.parse()?;
let pool = inventory.lookup_media_set_pool(&media_set_uuid)?;
// check if pool exists
let (config, _digest) = config::media_pool::config()?;
let _pool_config: MediaPoolConfig = config.lookup("pool", &pool)?;
let (drive_config, _digest) = config::drive::config()?;
// early check before starting worker
check_drive_exists(&drive_config, &drive)?;
let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;
let upid_str = WorkerTask::new_thread(
"tape-restore",
Some(store.clone()),
auth_id.clone(),
to_stdout,
move |worker| {
let _lock = MediaPool::lock(status_path, &pool)?;
let members = inventory.compute_media_set_members(&media_set_uuid)?;
let media_list = members.media_list();
let mut media_id_list = Vec::new();
let mut encryption_key_fingerprint = None;
for (seq_nr, media_uuid) in media_list.iter().enumerate() {
match media_uuid {
None => {
bail!("media set {} is incomplete (missing member {}).", media_set_uuid, seq_nr);
}
Some(media_uuid) => {
let media_id = inventory.lookup_media(media_uuid).unwrap();
if let Some(ref set) = media_id.media_set_label { // always true here
if encryption_key_fingerprint.is_none() && set.encryption_key_fingerprint.is_some() {
encryption_key_fingerprint = set.encryption_key_fingerprint.clone();
}
}
media_id_list.push(media_id);
}
}
}
worker.log(format!("Restore mediaset '{}'", media_set));
if let Some(fingerprint) = encryption_key_fingerprint {
worker.log(format!("Encryption key fingerprint: {}", fingerprint));
}
worker.log(format!("Pool: {}", pool));
worker.log(format!("Datastore: {}", store));
worker.log(format!("Drive: {}", drive));
worker.log(format!(
"Required media list: {}",
media_id_list.iter()
.map(|media_id| media_id.label.label_text.as_str())
.collect::<Vec<&str>>()
.join(";")
));
for media_id in media_id_list.iter() {
request_and_restore_media(
&worker,
media_id,
&drive_config,
&drive,
&datastore,
&auth_id,
)?;
}
worker.log(format!("Restore mediaset '{}' done", media_set));
Ok(())
}
)?;
Ok(upid_str.into())
}
/// Request and restore complete media without using existing catalog (create catalog instead)
pub fn request_and_restore_media(
worker: &WorkerTask,
media_id: &MediaId,
drive_config: &SectionConfigData,
drive_name: &str,
datastore: &DataStore,
authid: &Authid,
) -> Result<(), Error> {
let media_set_uuid = match media_id.media_set_label {
None => bail!("restore_media: no media set - internal error"),
Some(ref set) => &set.uuid,
};
let (mut drive, info) = request_and_load_media(worker, &drive_config, &drive_name, &media_id.label)?;
match info.media_set_label {
None => {
bail!("missing media set label on media {} ({})",
media_id.label.label_text, media_id.label.uuid);
}
Some(ref set) => {
if &set.uuid != media_set_uuid {
bail!("wrong media set label on media {} ({} != {})",
media_id.label.label_text, media_id.label.uuid,
media_set_uuid);
}
let encrypt_fingerprint = set.encryption_key_fingerprint.clone()
.map(|fp| (fp, set.uuid.clone()));
drive.set_encryption(encrypt_fingerprint)?;
}
}
restore_media(worker, &mut drive, &info, Some((datastore, authid)), false)
}
/// Restore complete media content and catalog
///
/// Only create the catalog if target is None.
pub fn restore_media(
worker: &WorkerTask,
drive: &mut Box<dyn TapeDriver>,
media_id: &MediaId,
target: Option<(&DataStore, &Authid)>,
verbose: bool,
) -> Result<(), Error> {
let status_path = Path::new(TAPE_STATUS_DIR);
let mut catalog = MediaCatalog::create_temporary_database(status_path, media_id, false)?;
loop {
let current_file_number = drive.current_file_number()?;
let reader = match drive.read_next_file()? {
None => {
worker.log(format!("detected EOT after {} files", current_file_number));
break;
}
Some(reader) => reader,
};
restore_archive(worker, reader, current_file_number, target, &mut catalog, verbose)?;
}
MediaCatalog::finish_temporary_database(status_path, &media_id.label.uuid, true)?;
Ok(())
}
fn restore_archive<'a>(
worker: &WorkerTask,
mut reader: Box<dyn 'a + TapeRead>,
current_file_number: u64,
target: Option<(&DataStore, &Authid)>,
catalog: &mut MediaCatalog,
verbose: bool,
) -> Result<(), Error> {
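// The content header is a fixed-layout struct read verbatim off the tape;
// the magic check below validates what we actually got.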
let header: MediaContentHeader = unsafe { reader.read_le_value()? };
if header.magic != PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0 {
bail!("missing MediaContentHeader");
}
//println!("Found MediaContentHeader: {:?}", header);
match header.content_magic {
PROXMOX_BACKUP_MEDIA_LABEL_MAGIC_1_0 | PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0 => {
bail!("unexpected content magic (label)");
}
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0 => {
let snapshot = reader.read_exact_allocated(header.size as usize)?;
let snapshot = std::str::from_utf8(&snapshot)
.map_err(|_| format_err!("found snapshot archive with non-utf8 characters in name"))?;
worker.log(format!("Found snapshot archive: {} {}", current_file_number, snapshot));
let backup_dir: BackupDir = snapshot.parse()?;
if let Some((datastore, authid)) = target.as_ref() {
let (owner, _group_lock) = datastore.create_locked_backup_group(backup_dir.group(), authid)?;
if *authid != &owner { // only the owner is allowed to create additional snapshots
bail!("restore '{}' failed - owner check failed ({} != {})", snapshot, authid, owner);
}
let (rel_path, is_new, _snap_lock) = datastore.create_locked_backup_dir(&backup_dir)?;
let mut path = datastore.base_path();
path.push(rel_path);
if is_new {
worker.log(format!("restore snapshot {}", backup_dir));
match restore_snapshot_archive(reader, &path) {
Err(err) => {
std::fs::remove_dir_all(&path)?;
bail!("restore snapshot {} failed - {}", backup_dir, err);
}
Ok(false) => {
std::fs::remove_dir_all(&path)?;
worker.log(format!("skip incomplete snapshot {}", backup_dir));
}
Ok(true) => {
catalog.register_snapshot(Uuid::from(header.uuid), current_file_number, snapshot)?;
catalog.commit_if_large()?;
}
}
return Ok(());
}
}
reader.skip_to_end()?; // read all data
if let Ok(false) = reader.is_incomplete() {
catalog.register_snapshot(Uuid::from(header.uuid), current_file_number, snapshot)?;
catalog.commit_if_large()?;
}
}
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0 => {
worker.log(format!("Found chunk archive: {}", current_file_number));
let datastore = target.as_ref().map(|t| t.0);
if let Some(chunks) = restore_chunk_archive(worker, reader, datastore, verbose)? {
catalog.start_chunk_archive(Uuid::from(header.uuid), current_file_number)?;
for digest in chunks.iter() {
catalog.register_chunk(&digest)?;
}
worker.log(format!("register {} chunks", chunks.len()));
catalog.end_chunk_archive()?;
catalog.commit_if_large()?;
}
}
_ => bail!("unknown content magic {:?}", header.content_magic),
}
catalog.commit()?;
Ok(())
}
fn restore_chunk_archive<'a>(
worker: &WorkerTask,
reader: Box<dyn 'a + TapeRead>,
datastore: Option<&DataStore>,
verbose: bool,
) -> Result<Option<Vec<[u8;32]>>, Error> {
let mut chunks = Vec::new();
let mut decoder = ChunkArchiveDecoder::new(reader);
let result: Result<_, Error> = proxmox::try_block!({
while let Some((digest, blob)) = decoder.next_chunk()? {
if let Some(datastore) = datastore {
let chunk_exists = datastore.cond_touch_chunk(&digest, false)?;
if !chunk_exists {
blob.verify_crc()?;
if blob.crypt_mode()? == CryptMode::None {
blob.decode(None, Some(&digest))?; // verify digest
}
if verbose {
worker.log(format!("Insert chunk: {}", proxmox::tools::digest_to_hex(&digest)));
}
datastore.insert_chunk(&blob, &digest)?;
} else if verbose {
worker.log(format!("Found existing chunk: {}", proxmox::tools::digest_to_hex(&digest)));
}
} else if verbose {
worker.log(format!("Found chunk: {}", proxmox::tools::digest_to_hex(&digest)));
}
chunks.push(digest);
}
Ok(())
});
match result {
Ok(()) => Ok(Some(chunks)),
Err(err) => {
let reader = decoder.reader();
// check if this stream is marked incomplete
if let Ok(true) = reader.is_incomplete() {
return Ok(Some(chunks));
}
// check if this is an aborted stream without end marker
if let Ok(false) = reader.has_end_marker() {
worker.log("missing stream end marker".to_string());
return Ok(None);
}
// else the archive is corrupt
Err(err)
}
}
}
fn restore_snapshot_archive<'a>(
reader: Box<dyn 'a + TapeRead>,
snapshot_path: &Path,
) -> Result<bool, Error> {
let mut decoder = pxar::decoder::sync::Decoder::from_std(reader)?;
match try_restore_snapshot_archive(&mut decoder, snapshot_path) {
Ok(()) => Ok(true),
Err(err) => {
let reader = decoder.input();
// check if this stream is marked incomplete
if let Ok(true) = reader.is_incomplete() {
return Ok(false);
}
// check if this is an aborted stream without end marker
if let Ok(false) = reader.has_end_marker() {
return Ok(false);
}
// else the archive is corrupt
Err(err)
}
}
}
fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
decoder: &mut pxar::decoder::sync::Decoder<R>,
snapshot_path: &Path,
) -> Result<(), Error> {
let _root = match decoder.next() {
None => bail!("missing root entry"),
Some(root) => {
let root = root?;
match root.kind() {
pxar::EntryKind::Directory => { /* Ok */ }
_ => bail!("wrong root entry type"),
}
root
}
};
let root_path = Path::new("/");
let manifest_file_name = OsStr::new(MANIFEST_BLOB_NAME);
let mut manifest = None;
loop {
let entry = match decoder.next() {
None => break,
Some(entry) => entry?,
};
let entry_path = entry.path();
match entry.kind() {
pxar::EntryKind::File { .. } => { /* Ok */ }
_ => bail!("wrong entry type for {:?}", entry_path),
}
match entry_path.parent() {
None => bail!("wrong parent for {:?}", entry_path),
Some(p) => {
if p != root_path {
bail!("wrong parent for {:?}", entry_path);
}
}
}
let filename = entry.file_name();
let mut contents = match decoder.contents() {
None => bail!("missing file content"),
Some(contents) => contents,
};
let mut archive_path = snapshot_path.to_owned();
archive_path.push(&filename);
let mut tmp_path = archive_path.clone();
tmp_path.set_extension("tmp");
if filename == manifest_file_name {
let blob = DataBlob::load_from_reader(&mut contents)?;
let options = CreateOptions::new();
replace_file(&tmp_path, blob.raw_data(), options)?;
manifest = Some(BackupManifest::try_from(blob)?);
} else {
let mut tmpfile = std::fs::OpenOptions::new()
.write(true)
.create(true)
.read(true)
.open(&tmp_path)
.map_err(|err| format_err!("restore {:?} failed - {}", tmp_path, err))?;
std::io::copy(&mut contents, &mut tmpfile)?;
if let Err(err) = std::fs::rename(&tmp_path, &archive_path) {
bail!("Atomic rename file {:?} failed - {}", archive_path, err);
}
}
}
let manifest = match manifest {
None => bail!("missing manifest"),
Some(manifest) => manifest,
};
for item in manifest.files() {
let mut archive_path = snapshot_path.to_owned();
archive_path.push(&item.filename);
match archive_type(&item.filename)? {
ArchiveType::DynamicIndex => {
let index = DynamicIndexReader::open(&archive_path)?;
let (csum, size) = index.compute_csum();
manifest.verify_file(&item.filename, &csum, size)?;
}
ArchiveType::FixedIndex => {
let index = FixedIndexReader::open(&archive_path)?;
let (csum, size) = index.compute_csum();
manifest.verify_file(&item.filename, &csum, size)?;
}
ArchiveType::Blob => {
let mut tmpfile = std::fs::File::open(&archive_path)?;
let (csum, size) = compute_file_csum(&mut tmpfile)?;
manifest.verify_file(&item.filename, &csum, size)?;
}
}
}
// commit manifest
let mut manifest_path = snapshot_path.to_owned();
manifest_path.push(MANIFEST_BLOB_NAME);
let mut tmp_manifest_path = manifest_path.clone();
tmp_manifest_path.set_extension("tmp");
if let Err(err) = std::fs::rename(&tmp_manifest_path, &manifest_path) {
bail!("Atomic rename manifest {:?} failed - {}", manifest_path, err);
}
Ok(())
}

View File

@ -1,3 +1,5 @@
+//! API Type Definitions
+
use anyhow::bail;
use serde::{Deserialize, Serialize};
@ -5,8 +7,15 @@ use proxmox::api::{api, schema::*};
use proxmox::const_regex;
use proxmox::{IPRE, IPRE_BRACKET, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};
-use crate::backup::{CryptMode, BACKUP_ID_REGEX};
-use crate::server::UPID;
+use crate::{
+backup::{
+CryptMode,
+Fingerprint,
+BACKUP_ID_REGEX,
+},
+server::UPID,
+config::acl::Role,
+};
#[macro_use]
mod macros;
@ -20,6 +29,9 @@ pub use userid::Userid;
pub use userid::Authid;
pub use userid::{PROXMOX_TOKEN_ID_SCHEMA, PROXMOX_TOKEN_NAME_SCHEMA, PROXMOX_GROUP_ID_SCHEMA};
+mod tape;
+pub use tape::*;
// File names: may not contain slashes, may not start with "."
pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {
if name.starts_with('.') {
@ -74,7 +86,7 @@ const_regex!{
pub BACKUP_REPO_URL_REGEX = concat!(r"^^(?:(?:(", USER_ID_REGEX_STR!(), "|", APITOKEN_ID_REGEX_STR!(), ")@)?(", DNS_NAME!(), "|", IPRE_BRACKET!() ,"):)?(?:([0-9]{1,5}):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$");
-pub CERT_FINGERPRINT_SHA256_REGEX = r"^(?:[0-9a-fA-F][0-9a-fA-F])(?::[0-9a-fA-F][0-9a-fA-F]){31}$";
+pub FINGERPRINT_SHA256_REGEX = r"^(?:[0-9a-fA-F][0-9a-fA-F])(?::[0-9a-fA-F][0-9a-fA-F]){31}$";
pub ACL_PATH_REGEX = concat!(r"^(?:/|", r"(?:/", PROXMOX_SAFE_ID_REGEX_STR!(), ")+", r")$");
@ -83,6 +95,8 @@ const_regex!{
pub BLOCKDEVICE_NAME_REGEX = r"^(:?(:?h|s|x?v)d[a-z]+)|(:?nvme\d+n\d+)$";
pub ZPOOL_NAME_REGEX = r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$";
+pub UUID_REGEX = r"^[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}$";
}
pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat =
@ -100,8 +114,8 @@ pub const IP_FORMAT: ApiStringFormat =
pub const PVE_CONFIG_DIGEST_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SHA256_HEX_REGEX);
-pub const CERT_FINGERPRINT_SHA256_FORMAT: ApiStringFormat =
-ApiStringFormat::Pattern(&CERT_FINGERPRINT_SHA256_REGEX);
+pub const FINGERPRINT_SHA256_FORMAT: ApiStringFormat =
+ApiStringFormat::Pattern(&FINGERPRINT_SHA256_REGEX);
pub const PROXMOX_SAFE_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_SAFE_ID_REGEX);
@ -109,6 +123,9 @@ pub const PROXMOX_SAFE_ID_FORMAT: ApiStringFormat =
pub const BACKUP_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&BACKUP_ID_REGEX);
+pub const UUID_FORMAT: ApiStringFormat =
+ApiStringFormat::Pattern(&UUID_REGEX);
pub const SINGLE_LINE_COMMENT_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SINGLE_LINE_COMMENT_REGEX);
@ -160,17 +177,22 @@ pub const PBS_PASSWORD_SCHEMA: Schema = StringSchema::new("User Password.")
pub const CERT_FINGERPRINT_SHA256_SCHEMA: Schema = StringSchema::new(
"X509 certificate fingerprint (sha256)."
)
-.format(&CERT_FINGERPRINT_SHA256_FORMAT)
+.format(&FINGERPRINT_SHA256_FORMAT)
.schema();
-pub const PROXMOX_CONFIG_DIGEST_SCHEMA: Schema = StringSchema::new(r#"\
-Prevent changes if current configuration file has different SHA256 digest.
-This can be used to prevent concurrent modifications.
-"#
-)
+pub const TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA: Schema = StringSchema::new(
+"Tape encryption key fingerprint (sha256)."
+)
+.format(&FINGERPRINT_SHA256_FORMAT)
+.schema();
+pub const PROXMOX_CONFIG_DIGEST_SCHEMA: Schema = StringSchema::new(
+"Prevent changes if current configuration file has different \
+SHA256 digest. This can be used to prevent concurrent \
+modifications."
+)
.format(&PVE_CONFIG_DIGEST_FORMAT)
.schema();
pub const CHUNK_DIGEST_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SHA256_HEX_REGEX);
@ -269,6 +291,36 @@ pub const ACL_UGID_TYPE_SCHEMA: Schema = StringSchema::new(
EnumEntry::new("group", "Group")]))
.schema();
#[api(
properties: {
propagate: {
schema: ACL_PROPAGATE_SCHEMA,
},
path: {
schema: ACL_PATH_SCHEMA,
},
ugid_type: {
schema: ACL_UGID_TYPE_SCHEMA,
},
ugid: {
type: String,
description: "User or Group ID.",
},
roleid: {
type: Role,
}
}
)]
#[derive(Serialize, Deserialize)]
/// ACL list entry.
pub struct AclListItem {
pub path: String,
pub ugid: String,
pub ugid_type: String,
pub propagate: bool,
pub roleid: String,
}
pub const BACKUP_ARCHIVE_NAME_SCHEMA: Schema =
StringSchema::new("Backup archive name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
@ -302,6 +354,16 @@ pub const DATASTORE_SCHEMA: Schema = StringSchema::new("Datastore name.")
.max_length(32)
.schema();
pub const MEDIA_SET_UUID_SCHEMA: Schema =
StringSchema::new("MediaSet Uuid (We use the all-zero Uuid to reseve an empty media for a specific pool).")
.format(&UUID_FORMAT)
.schema();
pub const MEDIA_UUID_SCHEMA: Schema =
StringSchema::new("Media Uuid.")
.format(&UUID_FORMAT)
.schema();
pub const SYNC_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run sync job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(crate::tools::systemd::time::verify_calendar_event))
@ -484,6 +546,10 @@ pub struct SnapshotVerifyState {
type: SnapshotVerifyState,
optional: true,
},
+fingerprint: {
+type: String,
+optional: true,
+},
files: {
items: {
schema: BACKUP_ARCHIVE_NAME_SCHEMA
@ -508,6 +574,9 @@ pub struct SnapshotListItem {
/// The result of the last run verify task
#[serde(skip_serializing_if="Option::is_none")]
pub verification: Option<SnapshotVerifyState>,
+/// Fingerprint of encryption key
+#[serde(skip_serializing_if="Option::is_none")]
+pub fingerprint: Option<Fingerprint>,
/// List of contained archive files.
pub files: Vec<BackupContent>,
/// Overall snapshot size (sum of all archive sizes).
@ -692,7 +761,7 @@ pub struct TypeCounts {
},
},
)]
-#[derive(Serialize, Deserialize)]
+#[derive(Serialize, Deserialize, Default)]
/// Counts of groups/snapshots per BackupType.
pub struct Counts {
/// The counts for CT backups
@ -707,8 +776,14 @@ pub struct Counts {
#[api(
properties: {
-"gc-status": { type: GarbageCollectionStatus, },
-counts: { type: Counts, }
+"gc-status": {
+type: GarbageCollectionStatus,
+optional: true,
+},
+counts: {
+type: Counts,
+optional: true,
+},
},
)]
#[derive(Serialize, Deserialize)]
@ -722,9 +797,11 @@ pub struct DataStoreStatus {
/// Available space (bytes).
pub avail: u64,
/// Status of last GC
-pub gc_status: GarbageCollectionStatus,
+#[serde(skip_serializing_if="Option::is_none")]
+pub gc_status: Option<GarbageCollectionStatus>,
/// Group/Snapshot counts
-pub counts: Counts,
+#[serde(skip_serializing_if="Option::is_none")]
+pub counts: Option<Counts>,
}
#[api(
@ -1054,7 +1131,7 @@ fn test_cert_fingerprint_schema() -> Result<(), anyhow::Error> {
];
for fingerprint in invalid_fingerprints.iter() {
-if let Ok(_) = parse_simple_value(fingerprint, &schema) {
+if parse_simple_value(fingerprint, &schema).is_ok() {
bail!("test fingerprint '{}' failed - got Ok() while expecting an error.", fingerprint);
}
}
@ -1095,7 +1172,7 @@ fn test_proxmox_user_id_schema() -> Result<(), anyhow::Error> {
];
for name in invalid_user_ids.iter() {
-if let Ok(_) = parse_simple_value(name, &Userid::API_SCHEMA) {
+if parse_simple_value(name, &Userid::API_SCHEMA).is_ok() {
bail!("test userid '{}' failed - got Ok() while expecting an error.", name);
}
}
@ -1177,6 +1254,9 @@ pub struct APTUpdateInfo {
pub section: String,
/// URL under which the package's changelog can be retrieved
pub change_log_url: String,
+/// Custom extra field for additional package information
+#[serde(skip_serializing_if="Option::is_none")]
+pub extra_info: Option<String>,
}
#[api()]
@ -1223,3 +1303,60 @@ pub const DATASTORE_NOTIFY_STRING_SCHEMA: Schema = StringSchema::new(
"Datastore notification setting")
.format(&ApiStringFormat::PropertyString(&DatastoreNotify::API_SCHEMA))
.schema();
pub const PASSWORD_HINT_SCHEMA: Schema = StringSchema::new("Password hint.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(1)
.max_length(64)
.schema();
#[api(default: "scrypt")]
#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
#[serde(rename_all = "lowercase")]
/// Key derivation function for password protected encryption keys.
pub enum Kdf {
/// Do not encrypt the key.
None,
/// Encrypt the key with a password using SCrypt.
Scrypt,
/// Encrypt the key with a password using PBKDF2.
PBKDF2,
}
impl Default for Kdf {
#[inline]
fn default() -> Self {
Kdf::Scrypt
}
}
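A quick sketch of the serde behavior the attributes above imply (lowercase variant names, `scrypt` as the default):

fn kdf_example() -> Result<(), serde_json::Error> {
    assert_eq!(serde_json::to_string(&Kdf::Scrypt)?, "\"scrypt\"");
    assert_eq!(serde_json::to_string(&Kdf::None)?, "\"none\"");
    let kdf: Kdf = serde_json::from_str("\"pbkdf2\"")?;
    assert!(matches!(kdf, Kdf::PBKDF2));
    assert!(matches!(Kdf::default(), Kdf::Scrypt));
    Ok(())
}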
#[api(
properties: {
kdf: {
type: Kdf,
},
fingerprint: {
schema: CERT_FINGERPRINT_SHA256_SCHEMA,
optional: true,
},
},
)]
#[derive(Deserialize, Serialize)]
/// Encryption Key Information
pub struct KeyInfo {
/// Path to key (if stored in a file)
#[serde(skip_serializing_if="Option::is_none")]
pub path: Option<String>,
pub kdf: Kdf,
/// Key creation time
pub created: i64,
/// Key modification time
pub modified: i64,
/// Key fingerprint
#[serde(skip_serializing_if="Option::is_none")]
pub fingerprint: Option<String>,
/// Password hint
#[serde(skip_serializing_if="Option::is_none")]
pub hint: Option<String>,
}

View File

@ -0,0 +1,132 @@
//! Types for tape changer API
use serde::{Deserialize, Serialize};
use proxmox::api::{
api,
schema::{
Schema,
ApiStringFormat,
ArraySchema,
IntegerSchema,
StringSchema,
},
};
use crate::api2::types::{
PROXMOX_SAFE_ID_FORMAT,
OptionalDeviceIdentification,
};
pub const CHANGER_NAME_SCHEMA: Schema = StringSchema::new("Tape Changer Identifier.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const SCSI_CHANGER_PATH_SCHEMA: Schema = StringSchema::new(
"Path to Linux generic SCSI device (e.g. '/dev/sg4')")
.schema();
pub const MEDIA_LABEL_SCHEMA: Schema = StringSchema::new("Media Label/Barcode.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(2)
.max_length(32)
.schema();
pub const SLOT_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Slot list.", &IntegerSchema::new("Slot number")
.minimum(1)
.schema())
.schema();
pub const EXPORT_SLOT_LIST_SCHEMA: Schema = StringSchema::new(r###"\
A list of slot numbers, comma separated. Those slots are reserved for
Import/Export, i.e. any media in those slots are considered to be
'offline'.
"###)
.format(&ApiStringFormat::PropertyString(&SLOT_ARRAY_SCHEMA))
.schema();
#[api(
properties: {
name: {
schema: CHANGER_NAME_SCHEMA,
},
path: {
schema: SCSI_CHANGER_PATH_SCHEMA,
},
"export-slots": {
schema: EXPORT_SLOT_LIST_SCHEMA,
optional: true,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// SCSI tape changer
pub struct ScsiTapeChanger {
pub name: String,
pub path: String,
#[serde(skip_serializing_if="Option::is_none")]
pub export_slots: Option<String>,
}
#[api(
properties: {
config: {
type: ScsiTapeChanger,
},
info: {
type: OptionalDeviceIdentification,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Changer config with optional device identification attributes
pub struct ChangerListEntry {
#[serde(flatten)]
pub config: ScsiTapeChanger,
#[serde(flatten)]
pub info: OptionalDeviceIdentification,
}
#[api()]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Mtx Entry Kind
pub enum MtxEntryKind {
/// Drive
Drive,
/// Slot
Slot,
/// Import/Export Slot
ImportExport,
}
#[api(
properties: {
"entry-kind": {
type: MtxEntryKind,
},
"label-text": {
schema: MEDIA_LABEL_SCHEMA,
optional: true,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Mtx Status Entry
pub struct MtxStatusEntry {
pub entry_kind: MtxEntryKind,
/// The ID of the slot or drive
pub entry_id: u64,
/// The media label (volume tag) if the slot/drive is full
#[serde(skip_serializing_if="Option::is_none")]
pub label_text: Option<String>,
/// The slot the drive was loaded from
#[serde(skip_serializing_if="Option::is_none")]
pub loaded_slot: Option<u64>,
}
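With the kebab-case renames above, a loaded drive entry would serialize roughly like this (values are illustrative):

{ "entry-kind": "drive", "entry-id": 0, "label-text": "CLN001CU", "loaded-slot": 4 }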

View File

@ -0,0 +1,55 @@
use ::serde::{Deserialize, Serialize};
use proxmox::api::api;
#[api()]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Optional Device Identification Attributes
pub struct OptionalDeviceIdentification {
/// Vendor (autodetected)
#[serde(skip_serializing_if="Option::is_none")]
pub vendor: Option<String>,
/// Model (autodetected)
#[serde(skip_serializing_if="Option::is_none")]
pub model: Option<String>,
/// Serial number (autodetected)
#[serde(skip_serializing_if="Option::is_none")]
pub serial: Option<String>,
}
#[api()]
#[derive(Debug,Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Kind of device
pub enum DeviceKind {
/// Tape changer (Autoloader, Robot)
Changer,
/// Normal SCSI tape device
Tape,
}
#[api(
properties: {
kind: {
type: DeviceKind,
},
},
)]
#[derive(Debug,Serialize,Deserialize)]
/// Tape device information
pub struct TapeDeviceInfo {
pub kind: DeviceKind,
/// Path to the linux device node
pub path: String,
/// Serial number (autodetected)
pub serial: String,
/// Vendor (autodetected)
pub vendor: String,
/// Model (autodetected)
pub model: String,
/// Device major number
pub major: u32,
/// Device minor number
pub minor: u32,
}

View File

@ -0,0 +1,211 @@
//! Types for tape drive API
use std::convert::TryFrom;
use anyhow::{bail, Error};
use serde::{Deserialize, Serialize};
use proxmox::api::{
api,
schema::{Schema, IntegerSchema, StringSchema},
};
use crate::api2::types::{
PROXMOX_SAFE_ID_FORMAT,
CHANGER_NAME_SCHEMA,
OptionalDeviceIdentification,
};
pub const DRIVE_NAME_SCHEMA: Schema = StringSchema::new("Drive Identifier.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const LINUX_DRIVE_PATH_SCHEMA: Schema = StringSchema::new(
"The path to a LINUX non-rewinding SCSI tape device (i.e. '/dev/nst0')")
.schema();
pub const CHANGER_DRIVENUM_SCHEMA: Schema = IntegerSchema::new(
"Associated changer drive number (requires option changer)")
.minimum(0)
.maximum(8)
.default(0)
.schema();
#[api(
properties: {
name: {
schema: DRIVE_NAME_SCHEMA,
}
}
)]
#[derive(Serialize,Deserialize)]
/// Simulate tape drives (only for test and debug)
#[serde(rename_all = "kebab-case")]
pub struct VirtualTapeDrive {
pub name: String,
/// Path to directory
pub path: String,
/// Virtual tape size
#[serde(skip_serializing_if="Option::is_none")]
pub max_size: Option<usize>,
}
#[api(
properties: {
name: {
schema: DRIVE_NAME_SCHEMA,
},
path: {
schema: LINUX_DRIVE_PATH_SCHEMA,
},
changer: {
schema: CHANGER_NAME_SCHEMA,
optional: true,
},
"changer-drivenum": {
schema: CHANGER_DRIVENUM_SCHEMA,
optional: true,
},
}
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Linux SCSI tape driver
pub struct LinuxTapeDrive {
pub name: String,
pub path: String,
#[serde(skip_serializing_if="Option::is_none")]
pub changer: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub changer_drivenum: Option<u64>,
}
#[api(
properties: {
config: {
type: LinuxTapeDrive,
},
info: {
type: OptionalDeviceIdentification,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Drive list entry
pub struct DriveListEntry {
#[serde(flatten)]
pub config: LinuxTapeDrive,
#[serde(flatten)]
pub info: OptionalDeviceIdentification,
}
#[api()]
#[derive(Serialize,Deserialize)]
/// Medium auxiliary memory attributes (MAM)
pub struct MamAttribute {
/// Attribute id
pub id: u16,
/// Attribute name
pub name: String,
/// Attribute value
pub value: String,
}
#[api()]
#[derive(Serialize,Deserialize,Copy,Clone,Debug)]
/// Tape density
pub enum TapeDensity {
/// LTO1
LTO1,
/// LTO2
LTO2,
/// LTO3
LTO3,
/// LTO4
LTO4,
/// LTO5
LTO5,
/// LTO6
LTO6,
/// LTO7
LTO7,
/// LTO7M8
LTO7M8,
/// LTO8
LTO8,
}
impl TryFrom<u8> for TapeDensity {
type Error = Error;
fn try_from(value: u8) -> Result<Self, Self::Error> {
let density = match value {
0x40 => TapeDensity::LTO1,
0x42 => TapeDensity::LTO2,
0x44 => TapeDensity::LTO3,
0x46 => TapeDensity::LTO4,
0x58 => TapeDensity::LTO5,
0x5a => TapeDensity::LTO6,
0x5c => TapeDensity::LTO7,
0x5d => TapeDensity::LTO7M8,
0x5e => TapeDensity::LTO8,
_ => bail!("unknown tape density code 0x{:02x}", value),
};
Ok(density)
}
}
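A small usage sketch of the mapping above (0x58 is the density code mapped to LTO5):

use std::convert::TryFrom;

fn density_example() -> Result<(), anyhow::Error> {
    let density = TapeDensity::try_from(0x58u8)?;
    assert!(matches!(density, TapeDensity::LTO5));
    // Codes outside the table are rejected with a descriptive error:
    assert!(TapeDensity::try_from(0x00u8).is_err());
    Ok(())
}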
#[api(
properties: {
density: {
type: TapeDensity,
optional: true,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Drive/Media status for Linux SCSI drives.
///
/// Media related data is optional - only set if there is a medium
/// loaded.
pub struct LinuxDriveAndMediaStatus {
/// Block size (0 is variable size)
pub blocksize: u32,
/// Tape density
#[serde(skip_serializing_if="Option::is_none")]
pub density: Option<TapeDensity>,
/// Status flags
pub status: String,
/// Linux Driver Options
pub options: String,
/// Tape Alert Flags
#[serde(skip_serializing_if="Option::is_none")]
pub alert_flags: Option<String>,
/// Current file number
#[serde(skip_serializing_if="Option::is_none")]
pub file_number: Option<u32>,
/// Current block number
#[serde(skip_serializing_if="Option::is_none")]
pub block_number: Option<u32>,
/// Medium Manufacture Date (epoch)
#[serde(skip_serializing_if="Option::is_none")]
pub manufactured: Option<i64>,
/// Total Bytes Read in Medium Life
#[serde(skip_serializing_if="Option::is_none")]
pub bytes_read: Option<u64>,
/// Total Bytes Written in Medium Life
#[serde(skip_serializing_if="Option::is_none")]
pub bytes_written: Option<u64>,
/// Number of mounts for the current volume (i.e., Thread Count)
#[serde(skip_serializing_if="Option::is_none")]
pub volume_mounts: Option<u64>,
/// Count of the total number of times the medium has passed over
/// the head.
#[serde(skip_serializing_if="Option::is_none")]
pub medium_passes: Option<u64>,
/// Estimated tape wearout factor (assuming max. 16000 end-to-end passes)
#[serde(skip_serializing_if="Option::is_none")]
pub medium_wearout: Option<f64>,
}

View File

@ -0,0 +1,151 @@
use ::serde::{Deserialize, Serialize};
use proxmox::{
api::api,
tools::Uuid,
};
use crate::api2::types::{
MEDIA_UUID_SCHEMA,
MEDIA_SET_UUID_SCHEMA,
MediaStatus,
MediaLocation,
};
#[api(
properties: {
location: {
type: MediaLocation,
},
status: {
type: MediaStatus,
},
uuid: {
schema: MEDIA_UUID_SCHEMA,
},
"media-set-uuid": {
schema: MEDIA_SET_UUID_SCHEMA,
optional: true,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Media list entry
pub struct MediaListEntry {
/// Media label text (or Barcode)
pub label_text: String,
pub uuid: Uuid,
/// Creation time stamp
pub ctime: i64,
pub location: MediaLocation,
pub status: MediaStatus,
/// Expired flag
pub expired: bool,
/// Catalog status OK
pub catalog: bool,
/// Media set name
#[serde(skip_serializing_if="Option::is_none")]
pub media_set_name: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub media_set_uuid: Option<Uuid>,
/// Media set seq_nr
#[serde(skip_serializing_if="Option::is_none")]
pub seq_nr: Option<u64>,
/// MediaSet creation time stamp
#[serde(skip_serializing_if="Option::is_none")]
pub media_set_ctime: Option<i64>,
/// Media Pool
#[serde(skip_serializing_if="Option::is_none")]
pub pool: Option<String>,
}
#[api(
properties: {
uuid: {
schema: MEDIA_UUID_SCHEMA,
},
"media-set-uuid": {
schema: MEDIA_SET_UUID_SCHEMA,
optional: true,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Media label info
pub struct MediaIdFlat {
/// Unique ID
pub uuid: Uuid,
/// Media label text (or Barcode)
pub label_text: String,
/// Creation time stamp
pub ctime: i64,
// All MediaSet properties are optional here
/// MediaSet Pool
#[serde(skip_serializing_if="Option::is_none")]
pub pool: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub media_set_uuid: Option<Uuid>,
/// MediaSet media sequence number
#[serde(skip_serializing_if="Option::is_none")]
pub seq_nr: Option<u64>,
/// MediaSet Creation time stamp
#[serde(skip_serializing_if="Option::is_none")]
pub media_set_ctime: Option<i64>,
/// Encryption key fingerprint
#[serde(skip_serializing_if="Option::is_none")]
pub encryption_key_fingerprint: Option<String>,
}
#[api(
properties: {
uuid: {
schema: MEDIA_UUID_SCHEMA,
optional: true,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Label with optional Uuid
pub struct LabelUuidMap {
/// Changer label text (or Barcode)
pub label_text: String,
/// Associated Uuid (if any)
pub uuid: Option<Uuid>,
}
#[api(
properties: {
uuid: {
schema: MEDIA_UUID_SCHEMA,
},
"media-set-uuid": {
schema: MEDIA_SET_UUID_SCHEMA,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Media content list entry
pub struct MediaContentEntry {
/// Media label text (or Barcode)
pub label_text: String,
/// Media Uuid
pub uuid: Uuid,
/// Media set name
pub media_set_name: String,
/// Media set uuid
pub media_set_uuid: Uuid,
/// MediaSet Creation time stamp
pub media_set_ctime: i64,
/// Media set seq_nr
pub seq_nr: u64,
/// Media Pool
pub pool: String,
/// Backup snapshot
pub snapshot: String,
/// Snapshot creation time (epoch)
pub backup_time: i64,
}

View File

@ -0,0 +1,91 @@
use anyhow::{bail, Error};
use proxmox::api::{
schema::{
Schema,
StringSchema,
ApiStringFormat,
parse_simple_value,
},
};
use crate::api2::types::{
PROXMOX_SAFE_ID_FORMAT,
CHANGER_NAME_SCHEMA,
};
pub const VAULT_NAME_SCHEMA: Schema = StringSchema::new("Vault name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
#[derive(Debug, PartialEq, Clone)]
/// Media location
pub enum MediaLocation {
/// Ready for use (inside tape library)
Online(String),
/// Local available, but need to be mounted (insert into tape
/// drive)
Offline,
/// Media is inside a Vault
Vault(String),
}
proxmox::forward_deserialize_to_from_str!(MediaLocation);
proxmox::forward_serialize_to_display!(MediaLocation);
impl MediaLocation {
pub const API_SCHEMA: Schema = StringSchema::new(
"Media location (e.g. 'offline', 'online-<changer_name>', 'vault-<vault_name>')")
.format(&ApiStringFormat::VerifyFn(|text| {
let location: MediaLocation = text.parse()?;
match location {
MediaLocation::Online(ref changer) => {
parse_simple_value(changer, &CHANGER_NAME_SCHEMA)?;
}
MediaLocation::Vault(ref vault) => {
parse_simple_value(vault, &VAULT_NAME_SCHEMA)?;
}
MediaLocation::Offline => { /* OK */}
}
Ok(())
}))
.schema();
}
impl std::fmt::Display for MediaLocation {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
MediaLocation::Offline => {
write!(f, "offline")
}
MediaLocation::Online(changer) => {
write!(f, "online-{}", changer)
}
MediaLocation::Vault(vault) => {
write!(f, "vault-{}", vault)
}
}
}
}
impl std::str::FromStr for MediaLocation {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
if s == "offline" {
return Ok(MediaLocation::Offline);
}
if let Some(changer) = s.strip_prefix("online-") {
return Ok(MediaLocation::Online(changer.to_string()));
}
if let Some(vault) = s.strip_prefix("vault-") {
return Ok(MediaLocation::Vault(vault.to_string()));
}
bail!("MediaLocation parse error");
}
}
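A round-trip sketch using the `Display`/`FromStr` pair above (changer and vault names are made up; note this only round-trips with the `Vault` arm fixed as shown):

fn location_example() -> Result<(), anyhow::Error> {
    let loc: MediaLocation = "online-sl3".parse()?;
    assert_eq!(loc.to_string(), "online-sl3");
    let vault: MediaLocation = "vault-fireproof".parse()?;
    assert_eq!(vault.to_string(), "vault-fireproof");
    assert_eq!("offline".parse::<MediaLocation>()?.to_string(), "offline");
    Ok(())
}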

Some files were not shown because too many files have changed in this diff.