Compare commits

652 Commits

bd00ff10e4 bump version to 2.1.2-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-23 13:56:36 +01:00
149b969d9a docs: remotes: note that protected flags will not be synced
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-11-23 13:53:03 +01:00
56d3b59c71 docs: backup-client: fix wrong ':ref:'
there is no 'backup server' reference we can link to here, and it would
not make sense anyway.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-11-23 13:53:03 +01:00
c1e6efa8e1 sync job: correctly apply rate limit
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-23 09:42:18 +01:00
3b5473a682 bump version to 2.1.1-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-22 16:07:42 +01:00
4954d3130b docs: add/update tc related screenshots & content, document tc for sync-job
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-22 16:05:22 +01:00
064497756e bump version to 2.1.1-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-22 12:27:22 +01:00
ce3c7a1bda ui: sync job: allow to configure rate limit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-22 12:20:27 +01:00
50a39bbc1f ui: datastore content: rework rendering protection state
Avoid rendering the same icon twice, once clickable and once as status.
Also indicate the protection with a literal text and by highlighting the
single shield in green, if protected.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-22 11:22:29 +01:00
154d01b042 d/control and Cargo.toml bumps
* pin-utils isn't used anymore
* proxmox-sys version should also be tracked in Cargo.toml

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-22 10:56:36 +01:00
1f3352018b ui: traffic-control edit: add spaces between networks for more readability
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-22 10:45:29 +01:00
b721783c48 d/control: bump versioned build-dependency to librust-proxmox-sys
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-22 10:41:56 +01:00
76ee3085a4 ui: traffic-control edit: simple duplicate networks detection
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-22 10:30:17 +01:00
5d5a53059f ui: traffic-control edit: move on-load set value logic to own method
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-22 10:30:17 +01:00
77d8c593b3 ui: traffic-control edit: simpler unique timeframe logic
still just a heuristic, i.e., it does the same as before but in
one line.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-22 10:30:17 +01:00
c450a3cafd ui: traffic-control edit: there's no 'network-select' anymore
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-22 10:30:17 +01:00
f8f4d7cab4 ui: traffic-control edit: avoid CIDR literals in gettext
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-22 10:30:17 +01:00
91abfef049 ui: traffic-control: include ipv6 in 'all' networks
by including '::/0' too

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-22 10:30:17 +01:00
963b7ec51b ui: traffic-control: fix sending network value
we forgot to correctly send the network value as we changed from
the radiogroup to a simple text field

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-22 10:30:17 +01:00
16aab0c137 ui: indentation fix
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-22 10:30:17 +01:00
bf8b8be976 ui: fix group-filter property name
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-22 09:13:32 +01:00
e201104d0b docs: update traffic control docs (use HumanBytes)
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-22 09:07:05 +01:00
d63db0863d proxmox-backup-manager traffic: render data human readable
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-22 09:07:05 +01:00
7a36833103 fix sync job regression test (add RateLimitConfig)
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-22 08:29:43 +01:00
ca6e66aa5a Fingerprint: add new signature method
commit c42a54795d introduced a bug by
using fp.to_string(). Replace this with fp.signature(), which correctly
returns the full fingerprint instead of the short version.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-22 08:29:43 +01:00
94a6b33680 set default for 'protected' flag
otherwise we cannot properly parse the api return value from older
versions, since that field does not exist there.

fixes sync from older versions without the protected feature

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-11-22 08:28:37 +01:00
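
For illustration, a minimal serde sketch of the "missing field falls back to a default" idea described above (struct and field names are just for the example; assumes serde and serde_json):

    use serde::Deserialize;

    // Hypothetical, trimmed-down list item: `#[serde(default)]` lets the field
    // fall back to `false` when an older server omits it from the reply.
    #[derive(Deserialize)]
    struct SnapshotListItem {
        #[serde(default)]
        protected: bool,
    }

    fn main() {
        let from_old_server: SnapshotListItem = serde_json::from_str("{}").unwrap();
        assert!(!from_old_server.protected);
    }
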
2d5287fbbc use RateLimitConfig for HttpClient and pull
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-22 07:49:41 +01:00
6eb756bcab sync-job: add rate limit
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-22 07:49:41 +01:00
5647219049 pbs-api-types: split out type RateLimitConfig 2021-11-22 07:49:41 +01:00
b810972823 bump version to 2.1.0-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-21 10:32:17 +01:00
3a07cdf574 d/control: bump versioned dependency for proxmox-widget-toolkit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-21 10:32:17 +01:00
193ec30c2b fix proxmox-backup-manager sync-job list
Property is called 'group-filter' (not 'groups').
2021-11-21 09:46:46 +01:00
c94723062c pbs-api-types: fix HumanByte::auto_scale 2021-11-21 09:13:02 +01:00
0eadfdf670 ui: traffic-control edit: fix name minLength (3 not 4)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-20 22:33:32 +01:00
ebf8ce20bc ui: traffic-control view: tune column width, add more flex
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-20 22:32:53 +01:00
e7acdde758 ui: traffic-control edit: make window taller for more common ratio
and add timeframe emptyText

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-20 22:09:41 +01:00
f2c9da2349 ui: traffic-control edit: send rates as stringified, auto-scaled size-unit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-20 22:08:22 +01:00
ff344655e2 ui: traffic-control edit: make network edit a single text field
here's to note that the radio-group was my idea, Dominik just
executed it, nicely that is.

But the panel looks a bit glitchy layout-wise: with that and the
bandwidth fields (maybe we should render their unit inline), the
vertical alignments were all over the place.

So for now make it a simple text field and throw in a tooltip for
good measure

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-20 22:04:59 +01:00
3490d9460c ui: traffic-control edit: very simple duplicate timeframe detection
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-20 22:04:01 +01:00
fdf9373f9e ui: traffic-control edit: handle empty time-frame correctly
delete on update and avoid sending an empty string in any case, the
backend does not like that much.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-20 22:02:47 +01:00
ba80611324 ui: traffic-control view: various code cleanups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-20 21:54:55 +01:00
07a579c632 ui: traffic-control view: time frames are optional so avoid render exception
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-20 21:54:22 +01:00
188a37fbed ui: traffic-control view: make renderer more flexible
do not choke on non-numbers but use the (partially new) widget
toolkit helpers to also be able to parse string based sizes with
units and auto-scale them

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-20 21:52:31 +01:00
f251367c33 ui: navigation: change traffic-control icon to rotated signal
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-20 21:43:03 +01:00
ac4e399a10 ui: add Traffic Control UI
adds a list of traffic control rules (with their current usage)
and lets the user add/edit/remove them

the edit window currently has a grid for timeframes to add/remove
with input fields for start/end time and checkboxes for the days

there are still some improvements possible, like having a separate
grid for networks (the input field is maybe too small), or
optimizing consecutive days to a range (e.g. mon..wed instead of mon,tue,wed)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-11-20 19:40:59 +01:00
4fe77c36df api: traffic_control: add missing rename to 'kebab-case'
otherwise the 'delete' properties need underscores
(e.g. 'burst_in' instead of 'burst-in')

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-11-20 19:40:30 +01:00
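
A tiny sketch of what the missing rename means in practice (hypothetical enum, only to show the serde attribute; assumes serde and serde_json):

    use serde::Deserialize;

    // Illustrative only: with `rename_all = "kebab-case"` the wire spelling of the
    // variant is "burst-in"; without it, callers would have to send "burst_in".
    #[derive(Debug, Deserialize)]
    #[serde(rename_all = "kebab-case")]
    enum DeletableProperty {
        RateIn,
        BurstIn,
    }

    fn main() {
        let prop: DeletableProperty = serde_json::from_str(r#""burst-in""#).unwrap();
        println!("{prop:?}");
    }
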
118515dbd0 use HumanByte for traffic-control config
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-20 19:35:24 +01:00
42ba4cd399 human byte: make proper proxmox API type
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-20 19:35:24 +01:00
ab1c07a622 human byte: add from string parser
Adapted from Dietmar's v3 on pbs-devel, but with some changes:
- reworked with a strip_suffix fn that does matching, way shorter and
  even easier to read IMO
- make b/B byte symbol fully optional, not just for base-10
- also trim trailing whitespace for SizeUnit::Byte
- simplify the FromStr impl
- adapt parser unit tests such that we actually see the failed test's
  definition line, simplifies debugging a bit

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-20 19:35:24 +01:00
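
A rough sketch of the parsing approach described above, using strip_suffix over a small unit table (hypothetical names, not the actual pbs-api-types implementation; the optional-unit handling is reduced to a subset):

    // Hypothetical sketch: match the unit suffix (longest symbols first), trim
    // whitespace between number and unit, treat a missing unit as plain bytes.
    fn parse_human_byte(input: &str) -> Result<f64, String> {
        let input = input.trim();
        let units: &[(&str, f64)] = &[
            ("TiB", 1024f64.powi(4)), ("GiB", 1024f64.powi(3)),
            ("MiB", 1024f64.powi(2)), ("KiB", 1024.0),
            ("TB", 1e12), ("GB", 1e9), ("MB", 1e6), ("KB", 1e3),
            ("B", 1.0), ("b", 1.0),
        ];
        for &(symbol, factor) in units {
            if let Some(number) = input.strip_suffix(symbol) {
                return number
                    .trim_end()
                    .parse::<f64>()
                    .map(|v| v * factor)
                    .map_err(|err| format!("invalid size '{input}': {err}"));
            }
        }
        // no unit symbol at all -> bytes
        input
            .parse::<f64>()
            .map_err(|err| format!("invalid size '{input}': {err}"))
    }

    fn main() {
        assert_eq!(parse_human_byte("1.5 MiB").unwrap(), 1.5 * 1024.0 * 1024.0);
        assert_eq!(parse_human_byte("256").unwrap(), 256.0);
    }
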
930a71460f human byte: add proper unit type and support base-10
The new SizeUnit type takes over the auto scaling logic and could be
used on its own too.

Switch the internal type of HumanByte from u64 to f64. This results
in a slight reduction of the usable sizes we can represent (there's no
unsigned float type after all), but we now support pebibytes with quite
good precision, and exbibytes should also work out ok; that really
should have us covered for a while..

Partially adapted from Dietmar's version, but split up and changed so:
* there's no None type, for a SizeUnit that does not make much sense
* print the unit for byte too, better consistency and one can still
  use as_u64() or as_f64() if they do not want/need the unit rendered
* left the "From usize/u64" impls intact, just convenient to have and
  avoids all over the tree changes to adapt to loosing that
* move auto-scaling into SizeUnit, good fit there and I could see
  some re-use potential for non-human-byte users in the future
* impl Display for SizeUnit instead of the separate unit_str method,
  better usability as it can be used directly in format (with zero
  alloc/copy) and saw no real reason of not having that this way
* switch the place where we auto-scale in HumanByte's to the new_X
  helpers which allows for slightly reduced code usage and simplify
  implementation where possible
* use rounding for the precision limit algorithm. This is a stupid
  problem as in practice there are cases for requiring every variant:
  - flooring would be good for limits, better less than too much
  - ceiling would be good for file sizes, too little can mean ENOSPACE
    and users getting angry if their working value is messed with
  - rounding can be good for rendering benchmarks, closer to reality
    and no real impact
  So always going for rounding is really not the best solution..

Some of those changes were naturally opinionated; if there's a good
practical reason we can switch back (or to something completely
different).

The single thing I kept and am not _that_ happy with is being able to
have fractional bytes (1.1 B or even 0.01 B), which just does not
make much sense, as most of those values cannot exist at all in
reality - I say most, as multiples of 1/8 byte can exist, those are
bits.

Note, the precision also changed from a fixed 2 to a max of 3 (trailing
zeros stripped). While that can be nice, we should see if we get
a better precision limiting algorithm, e.g., directly in the printer.
Rust sadly does not support "limit to precision of 3 but avoid
trailing zeros", so we'd need to adapt their Grisu-based algorithm on our
own - way too much complexity for this though..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-20 19:35:24 +01:00
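
For illustration, a minimal sketch of the auto-scaling plus round-to-at-most-three-digits rendering discussed above (hypothetical helper, not the real SizeUnit/HumanByte code):

    // Hypothetical sketch: pick the largest base-2 unit that keeps the value >= 1,
    // then render with up to three digits of precision, trailing zeros stripped.
    fn render_human_byte(bytes: f64) -> String {
        const UNITS: &[&str] = &["B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"];
        let mut value = bytes;
        let mut unit = 0;
        while value.abs() >= 1024.0 && unit + 1 < UNITS.len() {
            value /= 1024.0;
            unit += 1;
        }
        // round to a maximum of three digits after the decimal point
        let rounded = (value * 1000.0).round() / 1000.0;
        let mut text = format!("{rounded}");
        if text.contains('.') {
            // strip trailing zeros (and a dangling '.') left over from rounding
            text = text.trim_end_matches('0').trim_end_matches('.').to_string();
        }
        format!("{text} {}", UNITS[unit])
    }

    fn main() {
        assert_eq!(render_human_byte(1023.0), "1023 B");
        assert_eq!(render_human_byte(1536.0), "1.5 KiB");
        assert_eq!(render_human_byte(2.5 * 1024.0 * 1024.0 * 1024.0), "2.5 GiB");
    }
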
a58a5cf795 move HumanByte to pbs-api-types crate
Originally-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-20 19:35:24 +01:00
92a8f0bc82 depend on proxmox-async 0.2 2021-11-20 17:14:02 +01:00
bf298a16ef proxmox-rest-server: remove pbs-tools dependency 2021-11-19 18:06:54 +01:00
9a1b24b6b1 use new proxmox-async crate
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-19 18:03:22 +01:00
ea67cd70c9 tfa: handle incompatible challenge data
by returning default data, in case the challenge data is not parseable.
this allows a new challenge to be started for the userid in question
without manual cleanup.

currently this can be triggered if an ongoing challenge created with
webauthn-rs 0.2.5 is stored in /run and attempted to be read
post-upgrade.

Reported-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-19 14:12:31 +01:00
281a5dd1fc cleanup unused re-exports
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-19 12:49:46 +01:00
df3b3d1798 bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-19 12:22:44 +01:00
f5e2b4726d webauthn: correctly set origin when updating
with current proxmox-tfa this became a hard error, since origin and rp
are not both Strings anymore..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-19 11:58:17 +01:00
daaeea8b4b update to base64 0.13
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-19 11:58:17 +01:00
6f6df501a0 fix debian/control: include librust-proxmox-sys 2021-11-19 11:35:28 +01:00
d5790a9f27 use new proxmox-sys crate
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-19 11:06:35 +01:00
860eaec58f use proxmox::tools::fd::fd_change_cloexec from proxmox 0.15.3
Depend on proxmox 0.15.3

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-18 13:43:41 +01:00
26e949d5fe traffic-control api: return current traffic with config
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-18 12:24:33 +01:00
ac7dbba458 bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-18 11:32:56 +01:00
7c2431d42c api: acme: fix typo
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-18 11:32:22 +01:00
25c1420a12 config: acme: plugin: rustfmt
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-18 11:32:22 +01:00
c1a1e1ae8f api: config: acme: rustfmt
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-18 11:32:22 +01:00
a9df9df25d d/control: update openid build dependency verison
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 11:23:50 +01:00
10beed1199 openid: allow to configure scopes, prompt, ACRs and arbitrary username-claim values
- no longer set prompt to 'login' (makes auto-login possible)
- new prompt configuration
- allow arbitrary username-claim values

Depend on proxmox-openid 0.9.0.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-18 11:20:55 +01:00
df32530750 docs: remote sync: adapt to changed filter param and add some examples
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 10:40:19 +01:00
062edce27f group filter: rename CLI/API/Config "groups" option to "group-filter"
we even use that for basically all the related schema names; "groups"
alone is just rather not so telling, i.e., "groups" what?

While `group-filter`, due to its additive nature, is not the best
possible name for passing multiple arguments on the CLI (the web-ui
can present this in a more UX-friendly way anyway), because of possible
confusion about whether the filters act like AND vs OR, that can be
documented, and even if a user is confused they are still safe, as more
rather than less gets synced. Also, the original param name wasn't
really _that_ much better in that regard

Dietmar also suggested to use singular for the CLI option, while
there can be more they're passed over repeating the option, each with
a single filter.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 10:36:57 +01:00
efd2713aa8 proxmox-tape: add groups filter to backup command
and add a completion handler to complete the backup groups

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 10:36:57 +01:00
8a21566c8a ui: tape: show configured group filters
in the grid and in the edit window

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 10:36:57 +01:00
c8c5c7f571 fix #3533: tape backup: filter groups according to config
this fixes bug #3533, since now a user can back up a single datastore
to multiple tape media pools in parallel, e.g. vms on one pool, ct on
another.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 10:36:57 +01:00
91357c2034 tape backup jobs: add group filters to config/api
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 10:36:57 +01:00
097ccfe1d5 proxmox-tape: add missing 'notify-user' option to backup command
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 10:36:57 +01:00
61ef4ae8cb fix #sync.cfg/pull: don't remove by default
and convert existing (manually created/edited) jobs to the previous
default value of 'true'. the GUI has always set this value and defaults
to 'false'.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 10:36:57 +01:00
01ae7bfaf2 docs: mention group filter in sync docs
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 10:36:57 +01:00
1b52122a1f manager: render group filter properly
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 10:36:57 +01:00
1d9bc184f5 remote: add backup group scanning
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 10:36:57 +01:00
5f83d3f636 sync: add group filtering
like for manual pulls, but persisted in the sync job config and visible
in the relevant GUI parts.

GUI is read-only for now (and defaults to no filtering on creation), as
this is a rather advanced feature that requires a complex GUI to be
user-friendly (regex-freeform, type-combobox, remote group scanning +
selector with additional freeform input).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 10:36:57 +01:00
71e534631f pull: allow pulling groups selectively
without requiring workarounds based on ownership and limited
visibility/access.

if a group filter is set, remove_vanished will only consider filtered
groups for removal to prevent concurrent disjunct filters from trashing
each other's synced groups.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 10:36:57 +01:00
6e9e6c7a54 pull/sync: extract passed along vars into struct
this is basically the sync job config without ID and some stuff
converted already, and a convenient helper to generate the http client
from it.

Suggested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 10:36:57 +01:00
e2e7560d5e pull: use BackupGroup consistently
instead of `GroupListItem`s. we convert it anyway, so might as well do
that at the start.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 10:36:57 +01:00
0ceb97538a BackupGroup: add filter helper
to have a single implementation of "group is matched by group filter".

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 10:36:57 +01:00
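
A rough sketch of what such a single "group is matched by group filter" helper can look like (the enum shape here is hypothetical, not the actual pbs-api-types definition; assumes the regex crate):

    // Hypothetical filter shape: match on backup type, on an exact group, or on a
    // regular expression applied to the group path (e.g. "vm/100").
    enum GroupFilter {
        BackupType(String),
        Group(String),
        Regex(regex::Regex),
    }

    struct BackupGroup {
        backup_type: String, // "vm", "ct", "host", ...
        backup_id: String,
    }

    impl BackupGroup {
        fn path(&self) -> String {
            format!("{}/{}", self.backup_type, self.backup_id)
        }

        // single place that decides whether a filter matches this group
        fn matches(&self, filter: &GroupFilter) -> bool {
            match filter {
                GroupFilter::BackupType(ty) => self.backup_type == *ty,
                GroupFilter::Group(group) => self.path() == *group,
                GroupFilter::Regex(re) => re.is_match(&self.path()),
            }
        }
    }

    fn main() {
        let group = BackupGroup { backup_type: "vm".into(), backup_id: "100".into() };
        let filter = GroupFilter::Regex(regex::Regex::new(r"^vm/1\d\d$").unwrap());
        assert!(group.matches(&filter));
    }
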
3e276f6fb6 api: add GroupFilter(List) type
at the API level, this is a simple (wrapped) Vec of Strings with a
verifier function. all users should use the provided helper to get the
actual GroupFilter enum values, which can't be directly used in the API
schema because of restrictions of the api macro.

validation of the schema + parsing into the proper type uses the same fn
intentionally to avoid running out of sync, even if it means compiling
the REs twice.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 10:36:57 +01:00
2b00c5abca api-types: add schema for backup group
the regex was already there, and we need a simple type/schema for
passing in multiple groups as Vec/Array via the API.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-18 10:36:57 +01:00
15cc41b6cb proxmox-systemd: remove crate, use new proxmox-time 1.1.0 instead
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-17 13:07:51 +01:00
729bd1fd16 remove now unused serde_filter module
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-17 09:50:08 +01:00
9a7431e2e0 www: use TFA widgets from widget toolkit
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-17 09:44:55 +01:00
52fbc86fc9 bump d/control rust dependencies
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-17 09:44:48 +01:00
afe6c79ce3 bump proxmox-widget-toolkit dependency to 3.4-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-17 09:43:26 +01:00
9407810fe1 switch tfa api to use proxmox-tfa::api
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-17 09:33:04 +01:00
c42a54795d move fingerprint helpers from pbs-tools to pbs-api-types
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-17 07:07:40 +01:00
96ec3801a9 docs: add traffic control section
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-15 13:43:11 +01:00
c4707d0c1d depend on proxmox-shared-memory 0.1.1
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-15 11:35:52 +01:00
24f9af9e0f add missing file from previous commit
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-14 18:49:29 +01:00
a0172d766b traffic-controls: add API/CLI to show current traffic
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-14 17:21:45 +01:00
09f999337a update to proxmox-http 0.5.4
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-14 08:27:45 +01:00
e3eb062c09 cached_traffic_control: fix regression tests
Avoid using shared memory in tests because of permission problems.
2021-11-14 08:05:40 +01:00
de21d4efdc implement rate limiter in shared memory
This kind of rate limiter can be used among several processes (as long
as all set the same rate/burst).
2021-11-13 17:49:38 +01:00
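
The rate/burst accounting behind such a limiter is a token bucket; a minimal single-process sketch of just that part (the real implementation keeps this state in a shared memory mapping so several processes can update it):

    use std::time::{Duration, Instant};

    // Minimal token-bucket sketch: `rate` bytes/s refill, `burst` bytes capacity.
    // Not the shared-memory implementation, just the core accounting.
    struct RateLimiter {
        rate: f64,
        burst: f64,
        tokens: f64,
        last_update: Instant,
    }

    impl RateLimiter {
        fn new(rate: u64, burst: u64) -> Self {
            Self {
                rate: rate as f64,
                burst: burst as f64,
                tokens: burst as f64,
                last_update: Instant::now(),
            }
        }

        // Account for `bytes` of traffic and return how long the caller should
        // sleep before sending them to stay below the configured rate.
        fn register_traffic(&mut self, bytes: u64) -> Duration {
            let now = Instant::now();
            let elapsed = now.duration_since(self.last_update).as_secs_f64();
            self.last_update = now;
            self.tokens = (self.tokens + elapsed * self.rate).min(self.burst);
            self.tokens -= bytes as f64;
            if self.tokens >= 0.0 {
                Duration::ZERO
            } else {
                Duration::from_secs_f64(-self.tokens / self.rate)
            }
        }
    }

    fn main() {
        let mut limiter = RateLimiter::new(1024 * 1024, 2 * 1024 * 1024); // 1 MiB/s, 2 MiB burst
        let delay = limiter.register_traffic(4 * 1024 * 1024);
        println!("sleep for {delay:?} before sending");
    }
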
d5f58006d3 cached_traffic_control: use ShareableRateLimit trait object 2021-11-13 17:49:38 +01:00
cb80ffc1de pbs-config: use new SharedMemory helpers from proxmox-shared-memory crate
depend on proxmox-shared-memory crate.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-13 17:49:38 +01:00
1859a0eb8b rest: make successful-ticket auth log a debug one to avoid syslog
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-12 11:10:12 +01:00
9e7132c0b3 bump version to 2.0.14-1 2021-11-12 08:12:18 +01:00
bf013be1c4 create /var/lib/proxmox-backup at server startup
This was missing in the previous patch...

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-12 08:12:18 +01:00
b935209584 fix directory permission problems
By carefully setting options on all create_path() calls,
and by creating "/var/lib/proxmox-backup" at api server startup.
2021-11-12 07:29:18 +01:00
efd4ddc17b debian/control: depend on librust-cidr-dev 2021-11-10 12:23:16 +01:00
e511e0e553 proxmox-backup-proxy: implement traffic control
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-10 10:15:40 +01:00
610150a4b4 implement a traffic control cache for fast rate control limiter lookups
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-10 10:15:40 +01:00
485b2438ac traffic_control: use Memcom to track config versions
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-10 10:15:40 +01:00
bfd12e871f Add traffic control configuration with API
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-10 10:15:40 +01:00
0c136bfab1 DailyDuration: implement time_match() 2021-11-10 10:15:40 +01:00
245e2aea23 New DailyDuration type with nom parser
We will use this to specify timeframes for network rate limits (only
apply limits when inside the time frame).

Note: This is not systemd related, but we can reuse some of the parser
methods.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-10 10:15:40 +01:00
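
Conceptually such a daily time frame is a set of weekdays plus a start/end time of day; a hypothetical sketch of the matching step (not the real nom-based parser or type):

    // Hypothetical sketch: weekday 0..=6 (Mon..Sun) and minutes since midnight.
    struct DailyDuration {
        days: [bool; 7],
        start_minute: u16, // e.g. 18:00 -> 1080
        end_minute: u16,   // e.g. 22:00 -> 1320
    }

    impl DailyDuration {
        // Does (weekday, minute-of-day) fall inside this time frame?
        fn time_match(&self, weekday: usize, minute: u16) -> bool {
            self.days[weekday] && minute >= self.start_minute && minute < self.end_minute
        }
    }

    fn main() {
        // "mon..fri 18:00-22:00"
        let frame = DailyDuration {
            days: [true, true, true, true, true, false, false],
            start_minute: 18 * 60,
            end_minute: 22 * 60,
        };
        assert!(frame.time_match(2, 19 * 60 + 30));  // Wednesday 19:30
        assert!(!frame.time_match(6, 19 * 60 + 30)); // Sunday 19:30
    }
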
b9d588ffde implement Service for RateLimitedStream
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-10 10:15:40 +01:00
e4bc3e0e8d proxmox-backup-client: add rate/burst parameter to backup CLI
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-10 10:15:40 +01:00
2419dc0de9 pbs-client: add option to use the new RateLimiter
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-10 10:15:32 +01:00
68fd9ca6d6 openid_login: verify that firstname, lastname and email fit our schema definitions
If not, we do not copy the values to our user.cfg.
2021-11-10 06:48:40 +01:00
4beb7d2dbe correctly lock remote config
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-06 17:35:10 +01:00
2bc1250c28 docs: language fixup: faq and appendix
minor formatting and language fixes to the faq section and the appendix

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-11-02 07:12:07 +01:00
9b1e2ae83c api: admin/datastore: reuse 'is_protected' implementation
we already have that

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-28 12:54:54 +02:00
9b5ecbe2ff backup-client: use () instead of Value as return type
shorter and we do a conversion anyway

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-28 12:54:54 +02:00
342ed4aea0 PruneMark: implement display without the write! macro
by using write_str instead

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-28 12:54:54 +02:00
d4e9d5470e PruneMark: use copied values instead of references
the type is small enough

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-28 12:54:54 +02:00
5c1cabdea1 rrd: use saturating_sub to avoid underflow
Without this, the tests fail in debug mode.
Also having start (u64) underflow to a value greater than end does
not really make sense

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-28 12:54:54 +02:00
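
A one-line illustration of the difference (plain std, nothing project-specific):

    fn main() {
        let start: u64 = 100;
        let end: u64 = 40;
        // `end - start` would panic (underflow) in debug builds;
        // saturating_sub clamps the result to 0 instead.
        let span = end.saturating_sub(start);
        assert_eq!(span, 0);
    }
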
38517ca053 docs: add info about protection flag to client docs
and mention that sync/pull does not sync the protected flag

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-28 11:31:32 +02:00
e33758d1b8 fix #3602: ui: datastore/Content: add action to set protection status
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-28 11:31:28 +02:00
aba6189c4f ui: add protected icon to snapshots
if they are protected

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-28 11:31:23 +02:00
adcc21716b ui: PruneInputPanel: add keepReason 'protected' for protected backups
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-28 11:31:21 +02:00
87e17fb4d1 proxmox-backup-client: add 'protected' commands
includes 'update' and 'show' similar to the notes commands

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-28 11:31:16 +02:00
8292d3d20e api2/admin/datastore: add get/set_protection
for getting/setting the protected flag for snapshots (akin to notes)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-28 11:31:11 +02:00
5cc7d89139 api2: datastore/delete_group: throw error for partially removed group
when a group could not be completely removed due to a protected snapshot,
throw an error

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-28 11:31:07 +02:00
343392613d pull_store/group: don't try to remove locally protected snapshots
and log if a vanished group could not be completely deleted because it
contains protected snapshots

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-28 11:31:04 +02:00
de91418b79 backup/datastore: prevent protected snapshots from being removed
by throwing an error in remove_backup_dir, and skipping them in
remove_backup_group

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-28 11:31:00 +02:00
fe9c47ab4f tests/prune: add tests for protected backups
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-28 11:30:56 +02:00
02db72678f add protected info of snapshots to api and task logs
adds the info that a snapshot is protected to:
* snapshot list
* manual pruning (also dry-run)
* prune jobs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-28 11:30:51 +02:00
db4b469285 pbs-datastore: skip protected backups in pruning
as a separate keep reason, so it is not counted towards the other reasons

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-28 11:30:47 +02:00
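
A schematic sketch of that ordering (hypothetical types, not the real prune code): protected snapshots get their own mark up front and never count towards keep-last and friends:

    #[derive(Debug, PartialEq)]
    enum PruneMark {
        Protected,
        Keep,
        Remove,
    }

    struct Snapshot {
        protected: bool,
    }

    // Hypothetical sketch: snapshots ordered newest first, keep the first
    // `keep_last` unprotected ones; protected snapshots are marked separately
    // and are never counted against the keep limit.
    fn compute_prune_marks(snapshots: &[Snapshot], keep_last: usize) -> Vec<PruneMark> {
        let mut kept = 0;
        snapshots
            .iter()
            .map(|snap| {
                if snap.protected {
                    PruneMark::Protected
                } else if kept < keep_last {
                    kept += 1;
                    PruneMark::Keep
                } else {
                    PruneMark::Remove
                }
            })
            .collect()
    }

    fn main() {
        let list = [
            Snapshot { protected: false },
            Snapshot { protected: true },
            Snapshot { protected: false },
        ];
        let marks = compute_prune_marks(&list, 1);
        assert_eq!(marks, vec![PruneMark::Keep, PruneMark::Protected, PruneMark::Remove]);
    }
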
92c5cf42d1 pbs-datastore: add protection info to BackupInfo
and add necessary helper functions (protected_file/is_protected)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-28 11:30:44 +02:00
e9558f290a ui: datastore content: improve sorting verification column
sort failed < no verify < outdated < all ok

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-10-27 16:29:59 +02:00
572e6594d2 fix typo s/CGM/GCM/i
only user visible change is in the error message

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-10-27 16:28:02 +02:00
88691284d8 bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-22 14:30:00 +02:00
85c622807e Cargo.toml: set udev dependency to 0.4
we don't need to bother with 0.3 anymore

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-22 14:28:33 +02:00
9d42e0475b acme: interpret no TOS as accepted
some custom ACME endpoints do not have TOS, interpret this as
'the user has accepted the TOS', like we do for PVE.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-22 10:55:14 +02:00
181a335bfa bump proxmox-acme-rs dependency to 0.3
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-22 10:54:43 +02:00
0a33951e9e acme: new_account: prevent replacing existing accounts
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-22 08:35:24 +02:00
7a356a748a bump version to 2.0.13-1
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-10-21 08:36:41 +02:00
1c402740a2 update debian/control
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-10-21 07:59:45 +02:00
e0a19d3313 use new fsync parameter to replace_file and atomic_open_or_create
Depend on proxmox 0.15.0 and proxmox-openid 0.8.1

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-10-21 07:28:32 +02:00
6b8329ee34 tape: simplify export_media_set for pool writer
our export code can handle the case where the tape is inside the drive, so
unloading it first does not have any benefit; it even makes the exporting
slower, since we first unload it into its original slot and then move it
to an import/export slot

so drop the code that unloads the tape from the drive, and let the
export code itself handle that

change the 'eject' into a 'rewind' and comment why we do that first

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-10-21 06:37:48 +02:00
1d4448998a rest-server: use hashmap for parameter errors
our ui expects a map here with 'field: "error"'. This way it can mark
the relevant field as invalid and correctly show the complete error
message

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-10-21 06:32:23 +02:00
d6473f5359 proxmox-rrd: use fsync instead of syncfs
syncfs can sync unrelated data, and we do not want that.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-10-20 11:46:59 +02:00
f5f9ec81d2 proxmox-rrd: fix regression tests 2021-10-19 18:41:03 +02:00
fea950155f proxmox-rrd: improve dev docs
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-10-19 11:17:09 +02:00
ef2944bc24 proxmox-rrd: cleanup - impl FromStr for JournalEntry
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-10-19 11:17:09 +02:00
934c8724e2 proxmox-rrd: add option to avoid page cache for load/save
use fadvise(.., POSIX_FADV_DONTNEED) for RRD files. We read those files only once,
and always rewrite them.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-10-19 11:17:09 +02:00
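
What "avoid the page cache" boils down to, sketched with libc directly (the crate wraps this differently; the path in main is just an example, and it assumes the libc crate on Linux):

    use std::fs::File;
    use std::io::Read;
    use std::os::unix::io::AsRawFd;

    // Read a file once and then tell the kernel it may drop the cached pages,
    // since the file will only ever be rewritten, not re-read.
    fn read_and_drop_cache(path: &str) -> std::io::Result<Vec<u8>> {
        let mut file = File::open(path)?;
        let mut data = Vec::new();
        file.read_to_end(&mut data)?;
        // advisory only; the return code is ignored since a failed hint is harmless
        unsafe {
            libc::posix_fadvise(file.as_raw_fd(), 0, 0, libc::POSIX_FADV_DONTNEED);
        }
        Ok(data)
    }

    fn main() -> std::io::Result<()> {
        let data = read_and_drop_cache("/etc/hostname")?;
        println!("read {} bytes", data.len());
        Ok(())
    }
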
98eb435d90 proxmox-rrd: use syncfs after writing rrd files
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-10-19 11:17:09 +02:00
bd10af6eda bump version to 2.0.12-1
note, this bump happened outside the main branch as it wasn't in a
good state and there was need for bumping (log/task rotate stuff).

Cherry picking the actual bump to avoid changelog/versioning
confusion on the next one, that should again happen on the main
branch.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit edc876c58e)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-19 11:13:06 +02:00
7f381a6246 proxmox-rrd: use fine grained locking in commit_journal_impl
Acquire the rrd_map lock for each file (else we block access for a long time)
2021-10-18 14:55:47 +02:00
c17fbbbc07 proxmox-rrd: log all errors from apply_and_commit_journal_thread (but only once) 2021-10-18 11:57:19 +02:00
ac2ca6c341 tape: improve export_media error message for not found tape
'export_media' can handle if the tape is in either a normal slot of the
library, or in the drive assigned to the current pool writer.
(because we need to lock the drive)

if it is, for some reason, in a different drive, the error message
 'media is not online'
could be slightly confusing for a user, since it would appear in the drive list

add the 'or a different drive' to make it clearer

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-10-18 10:40:56 +02:00
d26865c52c proxmox-rrd: cleanup list_old_journals 2021-10-18 10:00:58 +02:00
2b05008a11 proxmox-rrd: cleanup - use struct instead of tuple 2021-10-16 12:45:03 +02:00
45700e2ecf proxmox-rrd: move RRDMap into extra file 2021-10-16 12:45:03 +02:00
f84304235b proxmox-rrd: move JournalState into extra file 2021-10-16 12:45:03 +02:00
0ca41155b2 proxmox-rrd: implement non blocking journal
Do not block while applying the journal.
2021-10-16 12:45:03 +02:00
a291ab59ba proxmox-rrd: rename RRDCacheState to JournalState 2021-10-15 09:35:44 +02:00
fce7cd0d36 proxmox-rrd: avoid blocking readers while applying the journal
By using an extra RwLock<RRDMap> on the rrd data.
2021-10-15 09:22:07 +02:00
658357c5a8 proxmox-rrd: log journal apply/flush times, split apply and flush
We need to apply the journal only once.
2021-10-15 07:16:41 +02:00
7484fce24d proxmox-rrd: cleanup - use slot_end_time() 2021-10-14 16:29:00 +02:00
f28a713e2b proxmox-rrd: cleanup - use saturating_add instead of if/else 2021-10-14 16:10:55 +02:00
a9017805b7 proxmox-rrd: improve dev docs 2021-10-14 11:53:54 +02:00
2e3f94e12f proxmox-rrd: make rrd load callback configurable 2021-10-14 11:41:26 +02:00
d531c7ae61 proxmox-rrd: add more regression tests 2021-10-14 10:55:12 +02:00
7df1580fa6 proxmox-rrd: add regression tests and two minor fixes 2021-10-14 10:17:07 +02:00
58f70bccbb proxmox-rrd: pass time and value to update function 2021-10-14 08:12:56 +02:00
fae4f6c509 cleanup: move rrd cache related code into extra file 2021-10-14 07:57:27 +02:00
ddafb28572 proxmox-rrd: add some integration tests (file format tests) 2021-10-13 18:21:23 +02:00
642c7b9915 bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-13 14:47:24 +02:00
5a8726e6d2 pbs-tools: drop borrow module
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-13 14:14:03 +02:00
b3f279e2d9 use complete_file_name from proxmox-router 1.1 2021-10-13 14:10:02 +02:00
82f5ad18f0 proxmox-rrd: move unshipped cli tool to examples
it's a rather low-level tool mostly useful for debugging, and some of
it is rather "dumb" (by design) anyway, e.g., it does not
transparently apply the journal but really only operates on the DB
files as is (which can conflict with daemon operations).

In summary, not (yet) a tool meant for end user consumption.
Move it to the examples folder to avoid compilation on packaging (we do
not ship it anyway), which allows us to move the rather expensive
proxmox-router (pulls in hyper) to the dev-dependencies section.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-13 13:36:02 +02:00
bacc99c7f8 proxmox-rrd: add more commands to the rrd cli tool
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-13 13:36:02 +02:00
6728d0977b proxmox-rrd: rename last_counter to last_value
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-13 13:36:02 +02:00
bff7c027c9 proxmox-rrd: protect against negative update time
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-13 13:36:02 +02:00
79b3113361 proxmox-rrd: new helpers: slot, slot_start_time & slot_end_time
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-13 13:36:02 +02:00
5885767b91 proxmox-rrd: avoid expensive modulo (%) inside loop
Modulo is very slow, so we try to avoid it inside loops.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-13 13:36:02 +02:00
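
A tiny illustration of the pattern (hypothetical, not the actual RRD code): compute the slot index once outside the loop and wrap it manually instead of taking `%` on every iteration:

    // Hypothetical sketch: fill consecutive time slots of a ring buffer without a
    // `%` in the loop body; the index just wraps manually.
    fn fill_slots(data: &mut [f64], resolution: u64, start_time: u64, end_time: u64, value: f64) {
        let slots = data.len() as u64;
        let mut index = (start_time / resolution) % slots; // one modulo, outside the loop
        let mut t = start_time;
        while t < end_time {
            data[index as usize] = value;
            t += resolution;
            index += 1;
            if index == slots {
                index = 0; // wrap instead of `index % slots`
            }
        }
    }

    fn main() {
        let mut rrd = vec![0.0; 10];
        fill_slots(&mut rrd, 60, 3_600, 3_900, 1.0); // five one-minute slots
        assert_eq!(rrd.iter().filter(|v| **v == 1.0).count(), 5);
    }
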
ec08247e5c proxmox-rrd: add binary to create/manage rrd files
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-13 13:36:02 +02:00
400f081487 proxmox-rrd: split out load_rrd (cleanup)
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-13 13:36:02 +02:00
03664514ab proxmox-rrd: support CF::Last
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-13 13:36:02 +02:00
c68fa58a59 remove proxmox-rrd-api-types crate, s/RRDTimeFrameResolution/RRDTimeFrame/
Because the types used inside the RRD have other requirements
than the API types:

- other serialization format
- the API may not support all RRD features

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-13 13:36:02 +02:00
426dda0730 proxmox-rrd: extract_data: include values from current slot
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-13 13:36:02 +02:00
eb37d4ece2 proxmox-rrd: remove dependency to proxmox-rrd-api-types
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-13 13:36:02 +02:00
1198f8d4e6 proxmox-rrd: implement new CBOR based format
Storing many more data points now to get better graphs.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-13 13:36:02 +02:00
4b709ade68 proxmox-backup-proxy: use tokio::task::spawn_blocking instead of block_in_place
allow the current thread to do some other work in-between

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-13 13:36:02 +02:00
fa49d0fde9 RRD_CACHE: use a OnceCell instead of lazy_static
And initialize only with proxmox-backup-proxy. Other binaries don't need it.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-13 13:36:02 +02:00
1d44f175c6 proxmox-rrd: use a journal to reduce amount of bytes written
Append pending changes in a simple text based format that allows for
lockless appends as long as we stay below 4 KiB data per write.

Apply the journal every 30 minutes and on daemon startup.

Note that we do not ensure that the journal is synced; this is a
performance optimization we can make, as the kernel defaults to
writing back in-flight data every 30s (sysctl vm/dirty_expire_centisecs)
anyway, so we lose at most half a minute of data on a crash. Here one
should keep in mind that we normally expose 1 minute as the finest
granularity anyway, so not really much is lost.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-13 13:36:02 +02:00
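
The lockless-append property comes from writing each entry with a single write() to a file opened with O_APPEND; a sketch of that idea (the journal line format here is illustrative, not the real on-disk format):

    use std::fs::OpenOptions;
    use std::io::Write;

    // Append one journal entry with a single small write on an O_APPEND file;
    // each such write lands at the current end of file, so no file lock is
    // needed as long as every entry stays well below 4 KiB.
    fn append_journal_entry(path: &str, time: f64, value: f64, dst: &str, rel_path: &str) -> std::io::Result<()> {
        let mut file = OpenOptions::new().create(true).append(true).open(path)?;
        let line = format!("{time}:{value}:{dst}:{rel_path}\n"); // illustrative format
        file.write_all(line.as_bytes())
    }

    fn main() -> std::io::Result<()> {
        append_journal_entry("/tmp/rrd.journal", 1_634_112_000.0, 42.0, "gauge", "host/cpu")
    }
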
890b88cbef remove pbs-tools::ops::ControlFlow
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-12 14:36:40 +02:00
27709b49d5 pbs-config: drop default-features on proxmox-router dep
we don't need the 'cli' feature in there

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-12 13:11:08 +02:00
7ccbce03d3 docs: language and formatting fixup
Some minor language and formatting fixes to sections: Proxmox VE
Integration, pxar Command Line Tool, Managing Remotes, Maintenance
Tasks, Host System Administration, Network Management, and Technical
Overview.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-10-12 08:31:37 +02:00
5fb852afed docs: backup-client: language and formatting fixup
also remove todo item for scheduling garbage collect with cron, and add
note about schedule configuration through proxmox-backup-manager/PBS GUI

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-10-12 08:30:39 +02:00
60589e6066 docs: Update for new features/functionality
Update GUI section and GUI instructions to reflect current layout and
features

List OpenID connect in possible realms (user management)

Link Access Control section when referring to it (user management)

Include Tape roles in access control section

Minor formatting changes

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-10-12 08:28:29 +02:00
717ce40612 docs: language and formatting fixup
Some minor changes to the sections: Introduction, Installation,
Terminology, GUI, Storage, and User Management

Mention tape backup in main features

Update epilog.rst with link for 'LXC'.
Remove FIXME from epilog.rst (I believe this was a note to repair
the not-yet-created pbs wiki link).

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-10-12 08:26:13 +02:00
75442e813e api daemons: fix sending log-reopen command
send_command serializes everything so it cannot be used to send a
raw, optimized command. Normally that means we get an error like
> 'unable to parse parameters (expected json object)'
when used that way.

Switch over to send_raw_command which does not re-serialize the
command.

Fixes: 45b8a032 ("refactor send_command")
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-11 14:35:50 +02:00
853c55a049 bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-11 12:08:57 +02:00
6ef1b649d9 update to first proxmox crate split
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-11 11:58:49 +02:00
e3f3359c86 bump proxmox dependency to 0.14.0 and proxmox-http to 0.5.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-08 11:18:22 +02:00
0e1edf19b1 proxmox-backup-proxy: clean up old tasks when the task log was rotated
we may have old tasks left over when the task list was rotated, so clean them up

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-10-08 06:47:38 +02:00
de55fff226 rest-server: add cleanup_old_tasks
this is a helper that removes task log files that are not referenced
by the task archive anymore

it gets the oldest task archive file, gets the first endtime (the
oldest) and removes all files in the taskdir where the mtime is older
than that

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-10-08 06:38:52 +02:00
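
A hypothetical sketch of the removal step (paths and the cutoff handling are illustrative; the real helper derives the cutoff from the first end-time in the oldest task archive):

    use std::fs;
    use std::time::{Duration, UNIX_EPOCH};

    // Hypothetical sketch of the idea: anything in the task directory older than
    // the oldest end-time still referenced by the rotated archives can go.
    fn cleanup_old_tasks(task_dir: &str, oldest_archived_endtime: u64) -> std::io::Result<()> {
        let cutoff = UNIX_EPOCH + Duration::from_secs(oldest_archived_endtime);
        for entry in fs::read_dir(task_dir)? {
            let entry = entry?;
            let metadata = entry.metadata()?;
            if !metadata.is_file() {
                continue;
            }
            if metadata.modified()? < cutoff {
                // log file is no longer referenced by any task archive
                fs::remove_file(entry.path())?;
            }
        }
        Ok(())
    }

    fn main() -> std::io::Result<()> {
        // the cutoff would normally be parsed from the oldest task archive file
        cleanup_old_tasks("/var/log/proxmox-backup/tasks", 1_600_000_000)
    }
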
b3a67f1f14 proxmox-rrd: use correct directory options in create_rrdb_dir 2021-10-07 08:50:50 +02:00
3cc23ca6cc proxmox-rrd: cleanup error handling 2021-10-07 08:01:12 +02:00
3def6bfc64 proxmox-rrd: use log crate instead of eprintln, avoid duplicate logs 2021-10-06 18:19:22 +02:00
18e8bc17e4 proxmox-rrd: fix update (do not update) when time is in the past 2021-10-06 18:01:48 +02:00
f66d66aafe drop dynamic_index.rs duplicate in pbs-client
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-06 15:29:27 +02:00
7380c48dff pbs-tools::io::pipe: use nix Error type
there's no need to upgrade to anyhow::Error there already

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-06 15:28:58 +02:00
0191759316 proxmox-rrd: improve developer docs 2021-10-06 12:19:54 +02:00
dbc42e6f75 proxmox-rrd: remove serde dependency 2021-10-06 10:55:46 +02:00
d1c3bc5350 split out RRD api types into proxmox-rrd-api-types crate 2021-10-06 09:49:51 +02:00
a97301350f proxmox-rrd: use create_path instead of std::fs::create_dir_all
To ensure correct file ownership.
2021-10-06 08:37:14 +02:00
09340f28f5 move RRD code into proxmox-rrd crate 2021-10-06 08:13:28 +02:00
20497c6346 bump version to 2.0.11-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-05 16:34:35 +02:00
d0f7d0d9c1 d/changelog: fixup release
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-05 14:23:28 +02:00
608806e884 proxmox-rest-server: use new ServerAdapter trait instead of callbacks
Async callbacks are a PITA, so we now pass a single trait object which
implements check_auth and get_index.
2021-10-05 11:13:10 +02:00
48176b0a77 proxmox-rest-server: pass owned RestEnvironment to get_index
This way we avoid pointers with lifetimes.
2021-10-05 11:12:53 +02:00
3483a3b3a1 proxmox-rest-server: cleanup, access api_auth using a method 2021-10-05 11:12:53 +02:00
347e0d4c57 fix deprecated use of std::u64/... modules
integer primitive type modules are deprecated, use
associated constants instead

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-04 15:02:30 +02:00
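
The fix pattern in one line (associated constant instead of the deprecated module constant):

    fn main() {
        // deprecated: std::u64::MAX (module constant)
        // preferred: the associated constant on the primitive type
        let max = u64::MAX;
        assert_eq!(max, 18_446_744_073_709_551_615);
    }
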
ae9b5c077a ui: datastore/Content: add empty text for no snapshots
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-10-04 10:28:10 +02:00
747446eb50 ui: datastore/Content: reload in activate listener
when we trigger the first load before the panel was fully created,
there was no load mask for it (but the snapshots would "pop in" on load)

move the first reload into the 'activate' listener. this will be called
every time a user opens the content tab of a datastore, so guard
it with a 'firstLoad' bool.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-10-04 10:28:10 +02:00
e1c8c27f47 rest: daemon: group systemd FFI together
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-02 11:45:34 +02:00
63cec1622a rest: daemon: sd notify: code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-02 11:45:34 +02:00
31142ef291 rest: daemon: sd notify barrier: avoid barging in between SystemdNotify enum and systemd_notify
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-02 11:45:34 +02:00
058b4b9708 rest: daemon: sd notify barrier: allow caller to set timeout
else it's rather too subtle and not a nice interface, considering that
we only want to have a thin wrapper for sd_notify_barrier..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-02 11:44:20 +02:00
9a1330c72e rest: daemon: comment why using a systemd barrier is important for main PID handover
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-02 11:44:20 +02:00
0a6df20986 rest-server/daemon: use sd_notify_barrier for service reloading
until now, we manually polled the systemd service state during a reload
so that the sd_notify messages get processed in the correct order
(RELOAD(old) -> MAINPID(old) -> READY(new))

with systemd >= 246 there is now 'sd_notify_barrier' which
blocks until systemd processed all prior messages

with that change, the daemon does not need to know the service name anymore

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-02 11:44:20 +02:00
6680878b5c proxmox-rest-server: make get_index async 2021-10-01 09:38:10 +02:00
593043ed53 proxmox-rest-server: add comment why ApiService needs to be 'pub' 2021-10-01 08:35:51 +02:00
038f385089 proxmox-rest-server: make check_auth async 2021-10-01 07:53:59 +02:00
b914b94773 proxmox-rest-server: fix spelling errors 2021-10-01 06:43:30 +02:00
2194bc59c8 proxmox-rest-server: improve ApiService docs 2021-09-30 17:18:47 +02:00
a98a288e2d proxmox-rest-server: start module docs 2021-09-30 13:49:29 +02:00
49e25688f1 rename CommandoSocket to CommandSocket 2021-09-30 12:52:35 +02:00
d7eedbd24b tools::format: avoid some string copies
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-30 12:43:33 +02:00
5b17a02da4 drop str::join helper
the standard join method can do this now

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-30 12:43:33 +02:00
8735247f29 drop fd_change_cloexec from proxmox-rest-server
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-30 12:43:22 +02:00
0d5d15c9d1 proxmox-rest-server: improve docs
And rename enable_file_log to enable_access_log.
2021-09-30 12:29:15 +02:00
2e44983a37 proxmox-rest-server: improve docs
And renames abort_worker_async to abort_worker_nowait (avoid confusion,
because the function itself is not async).
2021-09-30 10:51:41 +02:00
c76ff4b472 proxmox-rest-server: cleanup FileLogger docs 2021-09-30 10:51:31 +02:00
aaf4f40285 subscription: switch verification domain over to shop.proxmox.com
With the merger the shop got moved from shop.maurer-it to
shop.proxmox.com. While we transparently redirect, we also want to
stop doing that in a few years, so use the new domain.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-30 10:28:53 +02:00
e64f77b716 cleanup: move use clause to top 2021-09-30 08:42:37 +02:00
fd1b65cc3c proxmox-rest-server: allow to catch SIGINT and SIGHUP separately
And make ServerState private.
2021-09-30 08:41:30 +02:00
11148dce43 proxmox-rest-server: make Reloader and Reloadable private 2021-09-30 07:44:19 +02:00
38da8ca1bc proxmox-rest-server: improve logging
And rename server_state_init() into catch_shutdown_and_reload_signals().
2021-09-29 14:48:46 +02:00
a0ffd4a413 proxmox-rest-server: avoid useless call to request_shutdown
Also avoid unsafe code.
2021-09-29 14:37:07 +02:00
450105b0c3 make pbs_tools::cert not depend on pbs-buildcfg
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-29 14:11:26 +02:00
b62edce929 remove pbs_client::connect_to_localhost
It also used `CertInfo` from pbs-tools which is also server
specific.

The original helper is now in the main crate's
client_helpers instead.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-29 14:11:26 +02:00
67678ec39c add all autotraits to output_or_stdout trait object
just in case we ever need any of them in async code that
requires them and loses it because of accessing such a trait
object...

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-29 13:59:02 +02:00
bf95fba72e remove wrong calls to systemd_notify
We already call systemd_notify inside the create_service future.
2021-09-29 12:04:48 +02:00
d265420025 daemon: simplify code (make it easier to use) 2021-09-29 12:04:48 +02:00
01a080215d drop pbs_tools::auth
`pbs_client::connect_to_localhost` now requires the key as
optional parameter

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-29 11:08:52 +02:00
8cf445ecc4 cleanup: make BoxedStoreFunc private
There is no need to export this type.
2021-09-29 09:55:43 +02:00
20def38e96 examples: add example for a simple rest server with a small api
show how to generally start a daemon that serves a rest api + index page

api calls are (prefixed with either /api2/json or /api2/extjs):
/		GET	listing
/ping		GET	returns "pong"
/items		GET	lists existing items
		POST	lets user create new items
/items/{id}	GET	returns the content of a single item
		PUT	updates an item
		DELETE	deletes an item

Contains a small dummy user/authinfo

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-29 09:48:47 +02:00
be5b43cb87 remove tools/async_io.rs
nothing from here is used anymore, so remove it

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-29 09:38:40 +02:00
6f0565fa60 rest-server: use hypers AddrIncoming for proxmox-backup-api
this has a 'from_listener' (tokio::net::TcpListener) since hyper 0.14.5 in
the 'tcp' feature (we use 'full', which includes that; since 0.14.13
it is not behind a feature flag anymore).

this makes it possible to create a hyper server without our
'StaticIncoming' wrapper and thus makes it unnecessary.

The only other thing we have to do is to change the Service impl from
tokio::net::TcpStream to hyper::server::conn::AddStream to fulfill the trait
requirements.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-29 09:38:40 +02:00
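
A minimal sketch of that hyper 0.14 pattern (generic hyper/tokio example, not the proxmox-rest-server code; assumes the 'full' features of both crates):

    use hyper::server::conn::AddrIncoming;
    use hyper::service::{make_service_fn, service_fn};
    use hyper::{Body, Response, Server};
    use std::convert::Infallible;

    // Build the hyper server directly from an already-bound tokio TcpListener
    // instead of wrapping the listener in a custom Accept implementation.
    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error>> {
        let listener = tokio::net::TcpListener::bind("127.0.0.1:8080").await?;
        let incoming = AddrIncoming::from_listener(listener)?;

        let service = make_service_fn(|_conn| async {
            Ok::<_, Infallible>(service_fn(|_req| async {
                Ok::<_, Infallible>(Response::new(Body::from("pong")))
            }))
        });

        Server::builder(incoming).serve(service).await?;
        Ok(())
    }
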
99940358e3 ExtJsFormatter: use ParameterError to correctly compute 'errors'
By default, 'errors' is now empty.

Depend on proxmox 0.13.5.
2021-09-28 10:19:55 +02:00
53daae8e89 proxmox-rest-server: cleanup formatter, improve docs
Use trait for OutputFormatter. This is functionally equivalent,
but more rust-like...
2021-09-28 07:45:50 +02:00
8a23ea4656 move src/backup/read_chunk.rs to pbs-datastore/src/local_chunk_reader.rs 2021-09-27 11:10:14 +02:00
c95c1c83b0 move src/backup/snapshot_reader.rs to pbs_datastore crate 2021-09-27 09:58:20 +02:00
b446fa14c5 WorkerTaskContext: make it Send + Sync 2021-09-27 09:11:38 +02:00
6d5d305d9d move src/backup/datastore.rs into pbs_datastore crate 2021-09-27 09:11:38 +02:00
af2eb422d5 tools: smart: only throw error for smartctl fatal errors
only bits 0-2 are fatal errors, bits 3-7 are used to indicate
some drive conditions. for details see the manpage of smartctl(8)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
 [ Thomas: resolved merge-conflict due to moved run_command ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-27 08:59:13 +02:00
bbd57396d7 proxmox-backup-manager: avoid proxmox_rest_server::init_worker_tasks() for "bashcomplete" and "printdoc" 2021-09-24 12:31:42 +02:00
0fd55b08d9 WorkerTaskContext: add shutdown_requested() and fail_on_shutdown() 2021-09-24 12:04:31 +02:00
619cd5cbcb cleanup WorkerTaskContext 2021-09-24 11:39:30 +02:00
1ec0d70d09 cleanup worker task logging
In order to avoid name conflicts with WorkerTaskContext

- renamed WorkerTask::log to WorkerTask::log_message

Note: the methods have different function signatures

Also renamed WorkerTask::warn to WorkerTask::log_warning for
consistency reasons.

Use the task_log!() and task_warn!() macros more often.
2021-09-24 10:34:11 +02:00
c8449217dc rename TaskState to WorkerTaskContext 2021-09-24 10:33:49 +02:00
f7348a23cd move src/server/h2service.rs into proxmox-rest-server crate 2021-09-24 10:28:17 +02:00
ae18c436dd proxmox-backup-manager: setup worker and command socket 2021-09-24 10:28:17 +02:00
b0e20a71e2 proxmox-daily-update: setup worker and command socket 2021-09-24 10:28:17 +02:00
b9700a9fe5 move worker_task.rs into proxmox-rest-server crate
Also moved pbs-datastore/src/task.rs to pbs-tools, which now depends on 'log'.
2021-09-24 10:28:17 +02:00
81867f0539 use UPID and systemd helpers from proxmox 0.13.4 2021-09-23 12:01:43 +02:00
0a33fba49c worker task: allow to configure path and owner/group
An application now needs to call init_worker_tasks() before using
worker tasks.

Notable changes:
- need to call init_worker_tasks() before using worker tasks.
- create_task_log_dirs() is called inside init_worker_tasks()
- removed UpidExt trait
- use atomic_open_or_create_file()
- remove pbs_config and pbs_buildcfg dependency
2021-09-23 11:59:45 +02:00
049a22a3a3 src/server/worker_task.rs: Avoid using pbs-api-type::Authid
Because we want to move worker_task.rs into proxmox-rest-server crate.
2021-09-23 11:59:25 +02:00
4d4f94dedf buildsys: better group and sort --bin arguments
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-22 16:27:37 +02:00
a844fa0ba0 move dump-catalog-shell-cli doc-helper to proxmox-backup-client crate
it's only used for generating the docs for the interactive-shell
parts of the client.

Ideally we'd avoid that whole separate binary in the first place and
let the client dump it, but we'd need to have some more elaborate
"hide this command from the help/usage" mechanisms in the CLI
helper/formatter code to make that play out more nicely.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-22 16:25:09 +02:00
497a7b3f8e bump version to 2.0.10-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-22 11:36:42 +02:00
71549afa3f cargo: switch from proc-macro pin-project to declarative pin-project-lite
In our simple use cases they both should generate the same code, see
[0] for notable differences. While we cannot drop proc-macro due to
that switch, all of our dependencies that use pinning already use
pin-project-lite, so this allows us to drop a whole crate in general
while not losing anything.

[0]: https://github.com/taiki-e/pin-project-lite#pin-project-vs-pin-project-lite

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-22 11:15:40 +02:00
a294588409 docs: troubleshooting: reformat & adapt
Text width should be 80 characters in the docs.

Avoid using relative paths in examples; they only confuse users, as
one has less of an idea what the example actually does. Rather use a
"descriptive" example path.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-22 07:41:47 +02:00
5a83930667 docs/technical-overview: add troubleshooting section 2021-09-22 07:29:00 +02:00
c25ea25f0a debug: api ls: make path optional and default to "/"
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 15:11:36 +02:00
f7885eb263 docs: proxmox-backup-debug: add info about the 'api' subcommand
and mention PROXMOX_DEBUG_API_CODE and that it is dangerous.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 15:10:48 +02:00
a48d534d39 docs: add proxmox-backup-debug to the list of command line tools
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 15:10:48 +02:00
bfa942c0cf api: make some workers log on CLI
some workers did not log when called via cli

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 15:10:37 +02:00
f54634a890 api: add missing token list match_all property
to have the proper link between the token list and the sub routes
in the api, include the 'tokenname' property in the token listing

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 15:10:30 +02:00
efb7c5348c proxmox-backup-debug: add 'api' subcommands
this provides some generic api call mechanisms like pvesh/pmgsh.
by default it uses the https api on localhost (creating a token
if called as root, else requesting the root@pam password interactively)

this is mainly intended for debugging, but it is also useful for
situations where some api calls do not have an equivalent in a binary
and a user does not want to go through the api

not implemented are the http2 api calls (since it is a separate api and
it wouldn't be that easy to do)

there are a few quirks though, related to the 'ls' command:
I extract the 'child-link' from the property name of the
'match_all' statement of the router, but this does not
always match the property from the relevant 'get' api call,
so it fails there (e.g. /tape/drive)

this can be fixed in the respective api calls (e.g. by renaming
the parameter that comes from the path)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 15:10:30 +02:00
d6fcc1170a move proxmox-backup-debug back to main crate
we want to add something to it that needs access to the
proxmox_backup::api2 stuff, so it cannot live in a sub crate

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 15:10:30 +02:00
3f742f952a server: refactor abort_local_worker
we'll need this outside the module

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 15:10:30 +02:00
84af82e8cf rename pbs-systemd to proxmox-systemd 2021-09-21 10:06:27 +02:00
48109c5354 pbs-systemd: do not depend on pbs-tools
Instead, copy a few lines of nom helper code and implement
a simple run_command helper.
2021-09-21 10:06:27 +02:00
fd18775ac1 worker_state: move tasktype() code to src/api2/node/tasks.rs
Because this is API related code, and only used there.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:47:48 +02:00
e678a50ea1 buildsys: drop double-build hack to avoid linkage issues
basically a (semantic) revert of commit
991be99c37 "buildsys: workaround
linkage issues from openid/curl build server stuff separate"

This is no longer required because we moved the proxmox_restore_daemon
code into an extra crate (previous commit)

Originally-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
6523588c8d move proxmox_restore_daemon code into extra crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
6fbf0acc76 move src/server/rest.rs to proxmox-rest-server crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
36b7085ec2 rest server: cleanup auth-log handling
Handle auth logs the same way as the access log.
- Configure with ApiConfig
- CommandoSocket command to reload auth-logs "api-auth-log-reopen"

Inside API calls, we now access the ApiConfig using the RestEnvironment.

The openid_login api now also logs failed logins and returns
http_err!(UNAUTHORIZED, ..) on failure.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
1b1a553741 rest server: do not use pbs_api_types::Authid
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
98b7d58b94 rest server: return UserInformation from ApiAuth::check_auth
This needs an impl of UserInformation for Arc<CachedUserInfo>, which is
provided with proxmox 0.13.2

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
7fa9a37c7c make get_index an ApiConfig property (callback)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
f533d16ef6 rest server: simplify get_index() method signature
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
778c7d954b move normalize_uri_path and extract_cookie to proxmox-rest-server crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
605fe2e7e7 move src/tools/compression.rs to proxmox-rest-server crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
1b552c109d move src/server/formatter.rs to proxmox-rest-server crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
d4d49f7325 move src/server/environment.rs to proxmox-rest-server crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
8bca935f08 move src/tools/daemon.rs to proxmox-rest-server workspace
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
fd6d243843 move ApiConfig, FileLogger and CommandoSocket to proxmox-rest-server workspace
ApiConfig: avoid using  pbs_config::backup_user()
CommandoSocket: avoid using  pbs_config::backup_user()
FileLogger: avoid using  pbs_config::backup_user()
- use atomic_open_or_create_file()

Auth Trait: moved definitions to proxmox-rest-server/src/lib.rs
- removed CachedUserInfo parameter
- return user as String (not Authid)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
037f6b6d5e start new proxmox-rest-server workspace
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
8eef31724f api: disks/directory: add 'name' property to list of mounts
so that we actually have the property that 'match_all' refers to for
the templated API path.

This is mostly for improving usage of the WIP pbs-shell, i.e., its
`ls` command; it has no other functional/semantic impact.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-15 11:59:02 +02:00
2de1b06a06 api: disks/directory: factor out BASE_MOUNT_DIR path
will be reused in the next commit

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-15 11:54:18 +02:00
a332040a7f api: nodes: rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-15 11:42:28 +02:00
957133077f api2: nodes: add missing node list api call
to have an api call for api path traversal

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-09-15 11:32:58 +02:00
36c6e7bb82 fix tests/worker-task-abort.rs - correctly spawn command socket
And wait for the task.

Note: The test is still ignored (but works now when run as root)
2021-09-14 10:42:44 +02:00
ccc3896ff3 avoid type re-exports 2021-09-14 08:35:43 +02:00
cef5c72682 move src/tape/helpers/snapshot_reader.rs to src/backup/snapshot_reader.rs 2021-09-14 07:42:06 +02:00
51a2d9e375 fix refs in generated docs 2021-09-13 13:40:20 +02:00
048b43af24 split tape code into new pbs_tape workspace 2021-09-13 12:54:59 +02:00
bfd2b47649 buildsys: cargo build: avoid redundant "--bin pxar" argument
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-13 12:11:24 +02:00
67a5cf4714 fix regression tests 2021-09-10 12:45:06 +02:00
6227654ad8 more api type cleanups: avoid re-exports 2021-09-10 12:25:32 +02:00
e384f16a19 proxmox-tape: add 'force-media-set' also to cli
we have it in the api and gui, but the cli was missing

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-09-10 10:06:21 +02:00
89725197c0 move PruneOptions to pbs_api_types workspace 2021-09-10 09:21:27 +02:00
e7d4be9d85 move datastore config to pbs_config workspace 2021-09-10 08:40:58 +02:00
ba3d7e19fb move user configuration to pbs_config workspace
Also moved memcom.rs and cached_user_info.rs
2021-09-10 07:09:04 +02:00
b65dfff574 cleanup User configuration: use Updater 2021-09-09 13:14:28 +02:00
8cc3760e74 move acl to pbs_config workspaces, pbs_api_types cleanups 2021-09-09 10:50:08 +02:00
1cb08a0a05 move token_shadow to pbs_config workspace
Also moved out crypt.rs (libcrypt bindings) to pbs_tools workspace.
2021-09-08 14:00:14 +02:00
6f4228809e move network config to pbs_config workspace 2021-09-08 12:22:48 +02:00
5af3bcf062 changer config cleanup: use Updater 2021-09-08 09:29:01 +02:00
67d00d5c0e drop proxmox-backup-debug package, use server package instead
The datastore/backup debug helpers should always be available, they
can help a lot in dire times, so making them available directly via
the server package (alongside the manager CLI tool) is nicer for the
user.

Additionally, building a package can be quite time consuming in this
repo, as some tools like dwarves and other debug symbol stuff has to
scan the quite big rust binaries. So dropping a binary package shaves
off a noticeable bit of build time too.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-08 09:11:04 +02:00
cdc83c4eb2 tape job cleanup: user Updater 2021-09-08 08:55:55 +02:00
ffa403b5fd verify job cleanup: use Updater/flatten 2021-09-08 08:40:32 +02:00
5bd77f00e2 sync job cleanup: use Updater/flatten 2021-09-08 08:28:09 +02:00
802189f7f5 move verify.rs to pbs_config workspace 2021-09-08 08:01:07 +02:00
a4e5a0fc9f move sync.rs to pbs_config workspace 2021-09-08 06:57:23 +02:00
58bfa3b19c remove dead code
backup_user() and backup_group() are now in pbs_config workspace
2021-09-08 06:34:44 +02:00
f9c0a94140 buildsys: set pkg-buildcfg version automatically
the 'build' target now pins the pbs-buildcfg version to
$(DEB_VERSION_UPSTREAM)

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-07 13:54:20 +02:00
e3619d4101 moved tape_job.rs to pbs_config workspace 2021-09-07 12:40:15 +02:00
5839c469c1 move tape_encryption_keys.rs to pbs_config workspace 2021-09-07 10:37:08 +02:00
bbdda58b35 moved key_derivation.rs from pbs_datastore to pbs-config/src/key_config.rs
Also moved pbs-datastore/src/crypt_config.rs to pbs-tools/src/crypt_config.rs.
We do not want to depend on pbs-api-types there, so I use [u8;32] instead of
Fingerprint.
2021-09-07 10:12:17 +02:00
ed2080762c move data_blob encode/decode from crypt_config.rs to data_blob.rs 2021-09-07 10:00:05 +02:00
45d5d873ce move Kdf and KeyInfo to pbs_api_types workspace 2021-09-07 09:59:59 +02:00
f46806414a tape/inventory: fix the tape tests as user, by mocking the lock
locking during the tests as a regular user failed because we try to
chown to the backup user (which is not always possible).

Instead, do not lock at all, by implementing 'open_backup_lockfile' with
'create_mocked_lock'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-07 08:42:04 +02:00
ebf34e7edd pbs-config: add 'create_mocked_lock' helper
by making the field an Option and setting it to None in the mocked case.
This function is only intended for testing and is hidden from the docs.
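
A hedged sketch of the idea; the type and function names below are
illustrative, not necessarily the real pbs-config items:

    use std::fs::File;

    /// guard that keeps a lock file open for as long as it lives; in the mocked
    /// case there simply is no file, so nothing is locked or chown'ed
    pub struct BackupLockGuard(Option<File>);

    /// only intended for tests: hand out a "lock" without touching the filesystem
    pub fn create_mocked_lock() -> BackupLockGuard {
        BackupLockGuard(None)
    }

    fn main() {
        let _guard = create_mocked_lock(); // no real lock is taken here
    }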

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-07 08:42:02 +02:00
aad2d162ab move media_pool config to pbs_config workspace 2021-09-06 08:56:04 +02:00
68149b9045 zsh: fix completions
seems like there was a typo in these from the beginning.

also fixes the wrong function name for proxmox-file-restore completion

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2021-09-03 10:29:48 +02:00
1ce8e905ea move drive config to pbs_config workspace
Also moved the tape type definitions to pbs_api_types.
2021-09-03 09:10:18 +02:00
ccb3b45e18 add missing file pbs-api-types/src/remote.rs 2021-09-02 17:36:13 +02:00
6afdda8832 move remote config into pbs-config workspace 2021-09-02 14:25:15 +02:00
2121174827 start new pbs-config workspace
moved src/config/domains.rs
2021-09-02 12:58:20 +02:00
df12c9ec4e add proxmox-backup-debug debian package
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-02 11:25:50 +02:00
4c1b776168 another import cleanup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-01 14:46:01 +02:00
42dad3abd3 fixup imports in tests and examples
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-01 12:32:21 +02:00
6c76aa434d split proxmox-file-restore into its own crate
This also moves a couple of required utilities such as
logrotate and some file descriptor methods to pbs-tools.

Note that the logrotate usage and run-dir handling should be
improved to work as a regular user as this *should* (IMHO)
be a regular unprivileged command (including running
qemu given the kvm privileges...)

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-01 12:23:29 +02:00
e5f9b7f79e split out proxmox-backup-debug binary
and introduce pbs_tools::cli module

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-31 14:45:48 +02:00
dd2162f6bd more import cleanups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-31 14:01:03 +02:00
cabdabba3d fixup imports in debug binary
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-31 13:29:06 +02:00
3e593a2459 add index recovery to pb-debug
Adds the possibility to recover data from an index file. Options:
 - chunks: path to the directory where the chunks are saved
 - file: the index file that should be recovered (must be either .fidx or
   .didx)
 - [opt] keyfile: path to a keyfile; if the data was encrypted, a keyfile is
   needed
 - [opt] skip-crc: boolean; if true, read chunks won't be verified with their
   CRC sum, which increases the restore speed by a lot

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-31 13:19:56 +02:00
7c5287bb95 add file inspection to pb-debug
Adds the possibility to inspect .blob, .fidx and .didx files. For index
files, a list of the referenced chunks will be printed in addition to
some other information. .blob files can be decoded into a file or directly
to stdout. Without decode, the tool just prints the size and encryption
mode of the blob file. Options:
 - file: path to the file
 - [opt] decode: path to a file or stdout(-); if specified, the file will be
   decoded into the specified location [only for blob files, no effect
   with index files]
 - [opt] keyfile: path to a keyfile, needed if decode is specified and the
   data was encrypted

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-31 13:19:54 +02:00
7c72ae04f1 add chunk inspection to pb-debug
Adds the possibility to inspect chunks and find indexes that reference the
chunk. Options:
 - chunk: path to the chunk file
 - [opt] decode: path to a file or to stdout(-), if specified, the
   chunk will be decoded into the specified location
 - [opt] digest: needed when searching for references, if set, it will
   be used for verification when decoding
 - [opt] keyfile: path to a keyfile, needed if decode is specified and
   the data was encrypted
 - [opt] reference-filter: path in which indexes that reference the
   chunk should be searched, can be a group, snapshot or the whole
   datastore, if not specified no references will be searched
 - [default=true] use-filename-as-digest: use chunk-filename as digest,
   if no digest is specified

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-31 13:19:51 +02:00
86582454e8 make api2::helpers::list_dir_content a CatalogReader method
this is its natural place and everything required is already
part of the catalog module

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-31 11:29:17 +02:00
013b1e8bca move some more API types
ArchiveEntry -> pbs-datastore
RestoreDaemonStatus -> pbs-api-types

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-31 11:29:17 +02:00
40ff84b138 ui: fix order of prune keep reasons
two things wrong with the old code:
 * the sort function wants -1, 0 and 1 as a return value for a<b, a==b and a>b
   respectively, not a bool (which a < b returns)
 * we have to sort the newest backups first, since the first reason is
   'keep-last'. until now, we sorted the oldest backup first, resulting
   in the older backups getting the 'keep-last' reason

reported by a user in the forum:
https://forum.proxmox.com/threads/prune-ui-and-prune-schedule-simulator-dont-match.94944/

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-08-30 15:28:25 +02:00
b2065dc7d2 cleanup proxmox_backup::backup module
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-30 14:14:04 +02:00
97dfc62f0d remote config: derive and use Updater
Defined a new struct RemoteConfig (without name and password). This makes it
possible to base64-encode the password in the config, but still allow plain
passwords with the API.
2021-08-30 12:48:45 +02:00
e351ac786d split out proxmox-backup-client binary
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-30 11:39:01 +02:00
7b570c177d move some API return types to pbs-api-types
they'll be required by the api client

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-30 11:39:01 +02:00
6838b75904 Cargo.toml: drop features in 'patch' section
the features array does not need to be repeated here

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-30 11:39:01 +02:00
dbda1513c5 tape: media_pool: derive and use Updater 2021-08-30 11:17:14 +02:00
c62a6acb2e drive config cleanup: derive and use Updater 2021-08-30 10:50:20 +02:00
e4a5c072b4 openid cleanup: derive and use Updater 2021-08-30 09:48:53 +02:00
80f950c05d more Updatable -> UpdaterType fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-25 10:43:58 +02:00
4933b853cd d/control bump
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-25 10:43:58 +02:00
aec1b91eb8 bump proxmox-openid dependency to 0.7.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-25 10:43:58 +02:00
2e2d64fdba bump proxmox dependency to 0.13.0
and with it:
* bump proxmox-http dependency to 0.4.0
* bump proxmox-apt dependency to 0.7.0

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-25 10:43:58 +02:00
a37c8d2431 use ApiType trait
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-25 10:43:58 +02:00
a8a20e9210 use new api updater features
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-25 10:43:58 +02:00
be5b468975 bump version to 2.0.9-2
rebuild to include openid and to actually have a correct pbs-buildcfg
Cargo.toml version..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-08-24 14:48:57 +02:00
9789461363 bump version to 2.0.9-1 2021-08-09 09:54:33 +02:00
9f58e312d7 tape/pool_writer: fix typo
s/wrinting/writing/

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-08-09 09:36:38 +02:00
cffe0b81e3 tape backup: mention groups that were empty
otherwise a user might get a task log like this:

-----
...
found 7 groups
TASK OK
-----

which could confuse users as to why there were no snapshots backed up

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-08-09 09:28:01 +02:00
bb14ed8cab cleanup: simplify next_expired_media() 2021-08-04 11:01:18 +02:00
023adb5945 ui: display next-media-label for tape backup jobs 2021-08-04 09:59:12 +02:00
e5545c9804 cli: proxmox-tape backup-job list: use status api and display next-run an d next-media-label 2021-08-04 09:59:12 +02:00
efe96ec039 tape: compute next-media-label for each tape backup job 2021-08-04 09:59:12 +02:00
1d3ae83359 tape: media_pool: implement guess_next_writable_media() 2021-08-04 09:59:12 +02:00
4bb3876352 tape: lto: increase default timeout to 10 minutes
it seems that for some actions or in some circumstances, two minutes is
simply too short and the command aborts. Increase the default timeout to
10 minutes.

While it should give most commands enough time to finish, in case of a real
failure the procedure now takes up to 5 times longer, but IMHO that's an
OK tradeoff.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-08-03 09:19:13 +02:00
400e90cfbe docs/file-formats: fix typo
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2021-08-03 09:17:33 +02:00
e16c289f50 bump version to 2.0.8-1 2021-08-02 10:35:16 +02:00
140c159b36 bump proxmox-apt to 0.6 in debian/control
Build deps could not be installed

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2021-08-02 09:32:27 +02:00
8be69a8453 api/ui: allow zstd compression for new zpools
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-30 17:51:13 +02:00
9ba4833f3c cargo: update proxmox-apt to v0.6.0
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-30 10:43:40 +02:00
0b12a5a698 api: apt: adapt to further proxmox-apt back-end changes
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-30 10:37:27 +02:00
2eac359430 api: apt: adapt to proxmox-apt back-end changes
It's up to the caller to provide the current release for standard
repository detection/addition.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-30 10:37:27 +02:00
855b55dc14 api2: tape: media: use MediaCatalog::snapshot_list for content listing
this should make the api call much faster, since it is not reading
the whole catalog anymore

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-07-29 13:34:36 +02:00
5ad40a3dd1 tape: media_catalog: add snapshot list cache for catalog
For some parts of the ui, we only need the snapshot list from the catalog,
and reading the whole catalog (which can be several hundred MiB) is not
really necessary.

Instead, we write the list of snapshots into a separate .index file. This file
is generated on demand and is much smaller and thus faster to read.
2021-07-29 13:34:31 +02:00
7116a2d9da tape: lock media_catalog file to get a consistent view with load_catalog 2021-07-29 13:34:25 +02:00
0d5e990a62 cleanup: factor out tape catalog path helpers 2021-07-29 13:34:18 +02:00
4f57f4ad84 tape: changer: add tests for decode_element_status_page
a test for a valid status_page, one with excess data
(in the descriptor as well as in the page as a whole)
and a test with too little data

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-28 12:23:21 +02:00
13e13d836f tape: changer: handle libraries that send a wrong amount of data
if the library sends more data than advertised, simply cut it off,
but if it sends less data, bail out (depending on how much data is
missing, trying to parse it could lead to a panic, so bail out early)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-28 12:22:48 +02:00
3ab2432ab6 tape: changer: remove unnecessary inquiry parameter
this is never used, so remove it.
OK, since these are only non-public functions.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-28 12:17:07 +02:00
76e8565076 api2: tape/restore: commit temporary catalog at the end
in 'restore_archive', we reach that 'catalog.commit()' for
* every skipped snapshot (we already call 'commit_if_large' before that)
* every skipped chunk archive (no change in catalog since we do not read
  the chunk archive in that case)
* after reading a catalog (no change in catalog)

in all other cases, we call 'commit_if_large' and return early,
meaning that the 'commit' there was executed too often and
unnecessary, so move it after the loop over the files, before
finishing the temporary database.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-28 11:28:03 +02:00
a5f30a562b docs: tape: add instructions on how to restore the catalog
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-26 13:41:38 +02:00
a2ef36d445 tape: media_catalog: improve chunk_archive interface
instead of having a public start/end_chunk_archive and register_chunks,
simply expose a 'register_chunk_archive' method since we always have
a list of chunks anywhere we want to add them

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-26 10:18:13 +02:00
9a1ecae0b7 ui: tape/ChangerStatus: improve layout for large libraries
instead of having the grid be as tall as possible and the containing
panel scroll, limit the grid's height to the panel size and scroll the
grid.

this has two advantages:
* if a user has many slots, it is now possible to navigate the other
  grids to the wanted position
* having the grids scroll means they can use extjs' buffered renderer,
  which makes the view much more responsive (in case of hundreds of
  slots)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-26 10:12:03 +02:00
42b010174e tape: changer: handle invalid descriptor data from library in status page
We get the descriptor length from the library and use that in
'chunks_exact', which panics on length 0. Catch that case
and bail out, since that makes no sense here anyway.

This could prevent a panic, in case a library sends wrong data.
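
A small sketch of the guard (names are illustrative, the real code works on
the SCSI element status data): chunks_exact() panics when the chunk size is 0,
so a zero descriptor length reported by the library has to be rejected first.

    use anyhow::{bail, Error};

    fn split_descriptors(data: &[u8], descriptor_len: usize) -> Result<Vec<&[u8]>, Error> {
        if descriptor_len == 0 {
            // chunks_exact(0) would panic, and a zero-length descriptor is bogus anyway
            bail!("changer reported a descriptor length of 0");
        }
        Ok(data.chunks_exact(descriptor_len).collect())
    }

    fn main() -> Result<(), Error> {
        let page = [0u8; 32];
        assert_eq!(split_descriptors(&page, 16)?.len(), 2);
        assert!(split_descriptors(&page, 0).is_err());
        Ok(())
    }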

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-26 10:05:37 +02:00
68e77657e6 datastore config: cleanup code (use flatten attribute) 2021-07-23 12:43:33 +02:00
1b2f851e42 bump version to 2.0.7-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 08:44:41 +02:00
cc99866ea3 restore daemon: rust format
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 08:26:10 +02:00
1ea3f23f7e file restore: improve some comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 08:25:34 +02:00
3f780ddf73 restore daemon: log about doing basic system env setup
debugging history showed that it's surely nice to have more logs about
when stuff happens (and thus fails)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 08:24:30 +02:00
9edf96e6b6 restore daemon: setup backup system user and group
now required as we always enforce lock files to be owned by the
backup user, and the restore code uses such code indirectly as the
REST server module is reused from proxmox-backup-server. Once that is
refactored out we may do away with such things, but until then we need to
have a somewhat complete system env.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 08:19:38 +02:00
73e1ba65ca restore daemon: add setup_system_env helper
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 08:10:55 +02:00
02631056b8 tape: changer: handle missing dvcid information
the dvcid information is not always available, so skip it if it is missing

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-22 12:00:30 +02:00
131d0f10c2 tape: changer: improve error message on wrong counts
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-22 11:37:14 +02:00
f9aa980c7d tape: changer: correctly consume data in decode_element_status_page
instead of 'blindly' trusting the changer to deliver the fields written
in the specification, trust the length data it returns in the header.

we slice the descriptor data into equal-sized chunks of the correct
size, then we do not have to care about the len and empty checks anymore

this also makes the code to read the rest of the page obsolete,
since the next descriptor is at the correct offset anyway

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-22 11:37:14 +02:00
ad0364c558 tools: xattr: don't test things beyond our control
whether the kernel allows super-long names or weird
namespace prefixes is not our concern...

also the latter fails under fakeroot

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-22 11:34:40 +02:00
76486eb3d1 bump version to 2.0.6-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-22 10:22:33 +02:00
65ab4ca976 docs: simplify list of ENV var alternative
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-22 10:21:40 +02:00
99a73fad15 doc: Document new environment variables to specify secret values
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-22 10:10:40 +02:00
16a01c19dd support new ENV vars to get secret values through a file or a command
We want to allow passing a secret not only directly through the
environment value, but also indirectly through a file path, an open
file descriptor or a command that can write it to standard out.
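
A hedged sketch of such a lookup chain; the suffixes (_FILE, _CMD) and the
base variable name are illustrative placeholders, not necessarily what the
implementation actually uses:

    use std::process::Command;

    fn lookup_secret(base: &str) -> Option<String> {
        // 1) plain value directly in the environment
        if let Ok(val) = std::env::var(base) {
            return Some(val);
        }
        // 2) value stored in a file whose path is given via <BASE>_FILE
        if let Ok(path) = std::env::var(format!("{}_FILE", base)) {
            return std::fs::read_to_string(path).ok().map(|s| s.trim_end().to_owned());
        }
        // 3) value produced on stdout by a command given via <BASE>_CMD
        if let Ok(cmd) = std::env::var(format!("{}_CMD", base)) {
            let output = Command::new("sh").arg("-c").arg(cmd).output().ok()?;
            return String::from_utf8(output.stdout).ok().map(|s| s.trim_end().to_owned());
        }
        None
    }

    fn main() {
        // e.g. SECRET_FILE=/etc/example-secret ./example ("SECRET" is a placeholder name)
        println!("{:?}", lookup_secret("SECRET"));
    }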

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-22 10:09:53 +02:00
86b8ba448c ui: server administration: repos: add online help
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-22 10:09:53 +02:00
9b8e8012a7 cargo: update proxmox to 0.12.1
For the FS compat improvement in the atomic create file helper

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-22 10:09:53 +02:00
b29292a87b build: unbreak 'nocheck'
to skip test cases for faster builds or in case your local system does
not support running (all) tests..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-21 17:22:18 +02:00
c1feb447e8 tape: changer: sg_pt: fix typo
OK, since it's a private struct

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-21 17:02:16 +02:00
62a0e190cb tape: changer: sg_pt: add SCSI_VOLUME_TAG_LEN const
so that we have fewer 'magic' constants without description

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-21 17:01:37 +02:00
5890143920 api: types: CHANGER_DRIVENUM_SCHEMA: increase maximum drives per changer
to 255. 8 drives per changer was a rather arbitrary limitation and could
well be reached in practice with big libraries.

Although 255 is still an arbitrary limitation, this is much less likely
to be reached in practice.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-21 16:59:02 +02:00
ef4df211ab move CachedChunkReader to pbs-datastore
this was actually still missing from the previous commit

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-21 14:20:03 +02:00
eb5e0ae65a move remaining client tools to pbs-tools/datastore
pbs-datastore now ended up depending on tokio after all, but
that's fine for now

for the fuse code I added pbs-fuse-loop (has the old
fuse_loop and its 'loopdev' module).
Ultimately only binaries should depend on this to avoid the
library link

the only things remaining to move out of the client binary are
the api method return types; those will need to be moved to
pbs-api-types...

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-21 14:12:24 +02:00
bbc71e3b02 client: fix panic message
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-21 13:28:55 +02:00
ac81ed17b9 fix regression test file permission problems
By simply using the current user/group instead of backup:backup
2021-07-21 09:30:22 +02:00
89145cde34 bump version to 2.0.5-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-21 09:12:46 +02:00
ef4b2c2470 buildcfg: fix version
now set here, but we really need to automate this soon, it's just too easy
to forget.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-21 09:11:57 +02:00
7190cbf2ac buildsys: run test before compile to avoid clobbering the openid build binaries
dh_auto_test also checks for the build flags used, including any
`--cfg`, so it rebuilds and overwrites our carefully assembled daemon
binaries with openid support as it is run after build and before
install.
So manually ensure the order of first test then build (argh, hacks
of hacks >.<)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 20:27:40 +02:00
f726e1e0ea buildsys: cargo build target: one binary per line
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 20:27:40 +02:00
6d81e65986 bump version to 2.0.5-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 19:34:49 +02:00
ba5f5083c3 d/control: record fonts-font-awesome dependency for docs
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 19:34:49 +02:00
314db4072c docs: add missing font-awesome link for lto-barcode generator
else it cannot load the icons and does not show them in the action column

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-20 19:34:49 +02:00
baff2324f3 pbs-tools: fix doctest reference to moved cache modules
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 19:15:28 +02:00
02eae829f7 tests: move pxar test to its crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 18:54:23 +02:00
bb77143108 d/control: update build dependencies
needs to be done manually for now..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 18:54:23 +02:00
02cb5b5f80 cargo: bump proxmox-http to 0.3.0
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 18:54:23 +02:00
a301c362e3 add helpers to write configuration files 2021-07-20 18:54:23 +02:00
7526d86419 use new atomic_open_or_create_file
Factor out open_backup_lockfile() method to acquire locks owned by
user backup with permission 0660.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 18:54:23 +02:00
a00888e93f fixup examples
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-20 15:26:25 +02:00
fc5870be53 move channel/stream helpers to pbs-tools
pbs_tools
  ::blocking: std/async wrapping with block_in_place
  ::stream: stream <-> AsyncRead/AsyncWrite wrapping

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-20 11:27:40 +02:00
3c8c2827cb move required_X_param to pbs_tools::json
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-20 11:09:52 +02:00
6c221244df move lru cachers to pbs-tools
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-20 10:57:22 +02:00
38629c3961 move ChunkStream to pbs-client
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-20 10:52:21 +02:00
513d019ac3 issue banner: avoid depending on proxmox crate for hostname
While this slightly duplicates code we just do not profit from the
central, lazy static variant here, as that is only really useful in
daemons to avoid doing frequent syscalls there.

proxmox just pulls in far too much (e.g., tokio) and duplicating that
one line of simple code has no real maintenance cost, so just go for
that and use the nix crate directly.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-19 16:32:50 +02:00
3fa1b4b48c cleanup unused imports
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 15:55:19 +02:00
a6eac535e4 Makefile: fix build.rs reference
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 15:54:53 +02:00
58a3fae773 move pxar binary to separate crate
and move its few remaining proxmox_backup deps out to
pbs-tools

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 15:53:43 +02:00
0889806a3c resolve some more client imports
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 15:03:24 +02:00
51ec8a3c62 move some api types to pbs-api-types
and resolve some imports in the client binary

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 15:01:03 +02:00
a12b1be728 move build.rs and friends to pbs-buildcfg
with this the main crate won't be re-compiled every time a
*binary* is modified

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 14:59:18 +02:00
4d04cd9ab9 comment on test output paths
cargo should be getting a new env var for this soon

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 14:24:13 +02:00
a3399f4337 doc and tests fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 14:16:28 +02:00
2b7f8dd5ea move client to pbs-client subcrate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 12:58:43 +02:00
72fbe9ffa5 move 'wait_for_local_worker' from client to server
this just made no sense in the client

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 10:44:44 +02:00
0be8bce718 d/control: fixup proxmox feature flags
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 10:09:43 +02:00
4805edc4ec move more tools for the client into subcrates
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 10:07:12 +02:00
9eb784076c move more helpers to pbs-tools
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 10:07:12 +02:00
b9c5cd8291 add proxmox-backup-banner binary crate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 10:07:12 +02:00
9008c0c177 bump proxmox-apt dependency
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 10:07:12 +02:00
f027c2146e ui: datastore/Prune: improve title of group prune window
we are not actually pruning the whole datastore, but only the single
group, so set that as a title

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:42:30 +02:00
afbf2e10f3 ui: datastore/Content: add 'Prune All' button
since the api call always starts a real worker, we cannot have a
preview. It would also be very hard to show that for all groups in a
non-confusing way. We reuse the pbsPruneInputPanel and add the dry-run
field there conditionally.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:42:09 +02:00
9805207aa5 api: admin/datastore: add new 'prune-datastore' api call
to prune the whole datastore at once, with the given parameters.
We need a new api call since this can take a while and we need to start
a worker for this. The existing api call returns a list of removed/kept
snapshots and is synchronous.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:40:05 +02:00
8e0b852f24 server/prune_job: add proper permission checks to 'prune_datastore'
checks for PRIV_DATASTORE_MODIFY, or else if the auth_id is the backup
owner, and skips the group if not.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:39:01 +02:00
0052dc6d28 server/prune_job: add 'keep_all' logic to 'prune_datastore'
it is the same as when pruning single groups.
for prune_jobs, we never start the worker if there is no prune option set.
but if we want to call 'prune_datastore' from somewhere else, we
have to check it here again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:38:28 +02:00
61f05679d2 server/prune_job: factor out 'prune_datastore'
we want to use that outside of a prune job

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:36:45 +02:00
9751ef4b36 backup/datastore: refactor check_backup_owner there
and add a 'owns_backup' convenience function

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:36:02 +02:00
0a240aaa9a api: admin/datastore: simplify prune api call
by using the api macro and reusing the PruneOptions from pbs-datastore

this means we can now drop the 'add_common_prune_prameters' macro

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:34:36 +02:00
e0665a64bd client: simplify prune api method
by using the api macro on the async method and reusing the PruneOptions
from pbs-datastore with 'flatten: true'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:34:28 +02:00
dc46aa9a00 pbs-datastore/prune: make PruneOptions an api type
so that we can reuse it from here

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:34:18 +02:00
ced694589d api-types: move PRUNE_SCHEMA_KEEP_* to pbs-api-types
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 11:26:09 +02:00
6c053ffc89 tape: changer: sg_pt: make extra scsi request for dvcid
some libraries cannot handle a request with volume tags and DVCID set at
the same time.

So we make 2 separate requests and merge them, since we want to keep
the vendor/model/serial data.

to not overcomplicate the code, add another special type to ElementType

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-16 08:46:06 +02:00
9f5b57a348 buildsys: Prepare new way for path dependencies
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-15 09:56:32 +02:00
f1c4b8df34 features update
so we can drop default-features in proxmox for build-deps to
be more lean

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-15 09:56:05 +02:00
269e274bb5 d/control: update proxmox b-d
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-07-14 13:51:38 +02:00
bfd357c5a1 depend on proxmox 0.11.6 (changed make_tmp_file() return type) 2021-07-14 13:37:26 +02:00
9517a5759a fix #3526: correctly filter tasks with 'since' and 'until'
The previous assumption was that the Tasks returned by the Iterator are
sorted by the starttime, but that is not actually the case, and
could never have been, since we append the tasks into the log when
they are finished (not started) and running tasks are always iterated
first.

To correctly filter (and simplify the api call) we forgo the
combinators and use a for loop instead. This way we only have to do
the since/until checks once per Task, but have to do the
start/limit counting ourselves.
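
A rough sketch of that loop (Task and the parameters are simplified,
illustrative types):

    struct Task {
        starttime: i64,
    }

    fn filter_tasks(
        tasks: impl Iterator<Item = Task>,
        since: Option<i64>,
        until: Option<i64>,
        start: usize,
        limit: usize,
    ) -> Vec<Task> {
        let mut result = Vec::new();
        let mut skipped = 0;
        for task in tasks {
            // since/until checked exactly once per task, independent of iteration order
            if since.map_or(false, |s| task.starttime < s) { continue; }
            if until.map_or(false, |u| task.starttime > u) { continue; }
            // start/limit bookkeeping done by hand instead of skip()/take()
            if skipped < start { skipped += 1; continue; }
            if result.len() >= limit { break; }
            result.push(task);
        }
        result
    }

    fn main() {
        let tasks = vec![Task { starttime: 10 }, Task { starttime: 30 }, Task { starttime: 20 }];
        let filtered = filter_tasks(tasks.into_iter(), Some(15), None, 0, 10);
        assert_eq!(filtered.len(), 2);
    }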

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-14 09:39:14 +02:00
a5d51b0c4f docs: tape: drop technology preview admonitions
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-13 16:48:05 +02:00
d9822cd3cb fix #3515: file-restore-daemon: allow LVs/PVs with dash in name
LVM replaces any dash '-' in an LV or PV name with two '--' for the
created device node in /dev/mapper/, to distinguish it from the separating
character between the PV and LV name.
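
In other words, device-mapper doubles any dash inside a name so the single
'-' can act as the VG/LV separator; a minimal sketch of that mapping:

    fn mapper_path(vg: &str, lv: &str) -> String {
        format!("/dev/mapper/{}-{}", vg.replace('-', "--"), lv.replace('-', "--"))
    }

    fn main() {
        assert_eq!(
            mapper_path("pve", "vm-100-disk-0"),
            "/dev/mapper/pve-vm--100--disk--0"
        );
    }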

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-07-13 12:07:51 +02:00
66501529a2 file-restore: increase lock timeout on QEMU map
This lock is held during VM startup, so that multiple calls will not
start VMs twice. But this means that the timeout needs to incorporate
the time it might take a VM to boot, so increase it quite a bit.

This could previously lead to "interrupted system call" errors when
accessing backups with many disks.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-07-13 12:07:23 +02:00
2072dede4a api2: tape: restore: add warning for list restore
if an error occurs, the snapshot dirs will already be created, and we
do not clean them up (some might already be finished).

Warn the user that they are not cleaned up.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-13 12:02:01 +02:00
31c94d1645 chunk_store/insert_chunk: add more information to file errors
otherwise this context is missing in some tasks (e.g. tape restore)
and it is unclear where it came from

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-13 11:55:33 +02:00
9ee4c23833 tape: changer: sg_pt: always retry until timeout 2021-07-13 10:39:28 +02:00
a14a1c7b90 ui: tape/BackupOverview: increase timeout for media-set content
a single catalog can be over 100MiB, and a media-set can have multiple
catalogs to read (no technical upper limit). On slow disks, this can
take much longer than 30 seconds (the default timeout).

The real solution would be to have some kind of index only for the
GUI-relevant part, e.g. a table at the beginning of the catalog, or
alternatively a separate file with that info. Until we have such a
solution, increase the timeout as a stopgap.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-13 09:44:17 +02:00
9ef88578af bump version to 2.0.4-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 18:51:41 +02:00
c4c4b5a3ef auth: 'crypt' is not thread safe
According to crypt(3):
"crypt places its result in a static storage area, which will be
overwritten by subsequent calls to crypt. It is not safe to call crypt
from multiple threads simultaneously."

This means that multiple login calls as a PBS-realm user can collide and
produce intermittent authentication failures. A visible case is for
file-restore, where VMs with many disks lead to just as many auth-calls
at the same time, as the GUI tries to expand each tree element on load.

Instead, use the thread-safe variant 'crypt_r', which places the result
into a pre-allocated buffer of type 'crypt_data'. The C struct is laid
out according to 'lib/crypt.h.in' and the man page mentioned above.

Use the opportunity and make both arguments to the rust 'crypt' function
take a &[u8].

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-07-12 18:38:48 +02:00
0ed40b19c7 tape: changer: sg_pt: query element types separately
Some changers do not like the DVCID bit when querying non-drives;
this includes querying 'all' elements.

To circumvent this, we query each type by itself (like mtx does it),
and only add the DVCID bit for drives (Data Transfer Elements).

Reported by a user in the forum:
https://forum.proxmox.com/threads/ibm-3584-ts3500-support.92291/

and limit to 1000 elements per request.
(Because some changers limit that request with the options we set)

instead of checking if the data len was equal to the allocation_len
for getting more data, we count the returned elements and compare
that with the number we requested

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-07-12 18:19:26 +02:00
a0cd0f9cec change tape drive lock path
The new kernel has stricter checks on tmpfs with the sticky bit set on directories,
so some commands (e.g. proxmox-tape changer status) fail when executed as root,
because permission checks fail when locking the drive.

This patch moves the drive locks to /run/proxmox-backup/drive-lock.

Note: This is incompatible with the old locking mechanism, so users should not
run tape backups during the update (or a running backup can fail).
2021-07-12 17:26:49 +02:00
49e47c491b d/postinst: drop some legacy update handling
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 16:14:28 +02:00
424d2d68d3 buildsys: try to avoid duplicate build due to "phony" docs dependency
Make the docs target depend directly on the docs-only required
binaries and add a new intermediate ".do-cargo-build" target that is
explicitly not a PHONY target.

That avoids one extra set of full builds.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 13:19:20 +02:00
415690a0e7 bump version to 2.0.3-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 09:53:07 +02:00
2c0abe9234 Revert "api: access: domains: add ExtraRealmInfo and RealmInfo structs"
This reverts commit da7ec1d2af.

not necessary, since we have the api in config/access/openid

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 09:53:07 +02:00
2649c89358 Revert "api: access: domains: add get/create/update/delete domain call"
This reverts commit 5117cf4f17.

we already have that in api2/config/access

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 09:53:07 +02:00
bbd34d70d5 api: config: access: openid: use better Privilige Realm.Allocate
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 09:53:07 +02:00
9779ad0b00 api: config: access: openid: use correct parameter for matching
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 09:53:07 +02:00
70fd0652a1 ui: panel/AccessControl: define baseUrland useTypeInUrl for AuthView
both are not the default

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 09:53:07 +02:00
6b85671dd2 buildsys: fixup clean target
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 09:35:05 +02:00
82bdf6b5e7 api: tfa: module path cleanup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-12 08:43:14 +02:00
ba2679c9d7 ui: datastore content: style edit notes pencil like action-col icon
as those have a hover effect and use dark-grey vs. the quite "harsh"
looking plain black. We need to override the margin though, as else
the floated layout adds another line.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 07:57:40 +02:00
8866cbccc8 ui: update group notes: fix obj access and rewrite to async
eslint is configured to not allow using quoted object keys if they
could be just passed in dot notation, e.g.,
wrong: `group["comment"]`
good:  `group.comment`

It's not a big problem but eslint fails the build with the wrong one,
so this needs to be fixed anyway..

Also, rewrite to async, shorter and less indentation

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 07:55:02 +02:00
b3477d286f d/control: bump versioned dependency to widget-toolkit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 07:20:45 +02:00
68e2ea99ba ui: add support for notes on backup groups
Currently done in a slightly hacky way in a separate API call following the
initial list_snapshots, as we previously didn't call list_groups at all
and instead calculated the groups from the snapshots.

This calls it async and updates the view with group comments when data
arrives. The editor is simply reused with the 'group-notes' API call,
since the semantics are the same.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-07-12 07:13:44 +02:00
d6688884f6 api: add support for notes on backup groups
Stored in atomically-updated 'notes' file in backup group directory.
Available via dedicated GET/PUT API calls, as well as the first line
being included in list_groups (similar to list_snapshots).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 07:13:28 +02:00
7d3482f5bf ui: node status: fix font-awesome icon size
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 06:56:09 +02:00
7a39b41c20 ui: node status: reduce padding like in PVE
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 06:55:30 +02:00
4672273fe6 ui: dashboard: show node's repository/subscription status
Mostly copied from PVE, slightly adapted to be consistent with other
things in the dashboard, e.g. use a store for the repository info.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-12 06:29:24 +02:00
01284de0b2 ui: window/Settings: add summarycolumns settings
like in pve

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 06:29:21 +02:00
b20368ee1b ui: panel/NodeInfo: make it like in pve
this changes the node info panel to a similar layout as in pve,
with the ksm sharing and version field removed

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 06:29:18 +02:00
e584593cb5 ui: factor out NodeInfoPanel
so that Dashboard.js will be less cluttered when we add more information
there.

No functional change, but reworked the fingerprint button disabling to
use a property of the view instead of a viewmodel

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 06:29:14 +02:00
069a6e28a7 ui: tapeRestore: make window non-resizable
While it would be nice to be able to resize that window for more
snapshots/datastores in view, this would need quite some reworking on the
input panel side. So for now, disable resizing of that window, otherwise
the grids look weird as they only scale horizontally but not vertically.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 06:24:01 +02:00
8fab19da73 fix #3447: ui: Dashboard: disallow selection of datastore statistics row
since we cannot do anything with a selected row anyway, simply
disallow it

this avoids having the row in the same color as the progressbar, without
being able to deselect the row again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-12 06:18:45 +02:00
991be99c37 buildsys: workaround linkage issues from openid/curl build server stuff separate
this blows up build times, but we do not plan on using it longer
than required (i.e., until the server is finally split into its own binary
crate providing only those binaries).

Note, using `cargo b --release` to build is naturally unaffected by
this change, so for dev builds just continue to use that.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-12 06:16:32 +02:00
1900d7810c buildsys: mark clean targets phony and split out deb pkg-clean
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-11 13:25:39 +02:00
6b5013edb3 rest: log response: avoid unnecessary mut on variable
a match expresses the fallback slightly nicer and needs no mut,
which is always nice to avoid.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-11 13:05:19 +02:00
f313494d48 d/rules: drop dh_dwz override, handled now better
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=933541#17

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-11 13:00:12 +02:00
353dcf1d13 ui: access: revert icon swap, use realm over auth.
Moving icons around is not too ideal for people accustomed to the old
ones, at least if they are then reused for a new component on the same view.

Rather use the address-book icon, which is also used for adding a new
realm in PVE, together with the text "Realms", as that is also the label
one sees when logging in; we can instead switch PVE over to that to keep
things consistent.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-11 12:53:45 +02:00
3006d70ebe ui: use Async tools from widget toolkit
The api2 one passes the whole response (for more flexibility) on
reject, so we need to adapt to that.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 16:53:12 +02:00
681e096448 ui: adapt to widget toolkit changes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 16:52:25 +02:00
ac9a9e8002 ui: add /access/domains to PermissionPathsStore
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-09 15:36:54 +02:00
ecbc385b7b ui: add Authentication tab to Access Control
so that user can add/edit/delete realms

changes the icon of tfa to 'id-badge' so that we can keep the same icon
for authentication as pve and not have duplicate icons

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-09 15:36:54 +02:00
5117cf4f17 api: access: domains: add get/create/update/delete domain call
modeled like our other section config api calls
two drawbacks of doing it this way:
* we have to copy some api properties again for the update call,
  since not all of them are updateable (username-claim)
* we only handle openid for now, which we would have to change
  when we add ldap/ad

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-09 15:36:54 +02:00
da7ec1d2af api: access: domains: add ExtraRealmInfo and RealmInfo structs
these will be used as parameters/return types for the read/create/etc.
calls for realms

for now we copy the necessary attributes (only from openid) since
our api macros/tools are not good enough to generate the necessary
api definitions for section configs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-09 15:36:54 +02:00
934de1d691 config: acl: add PRIV_REALM_ALLOCATE
will be used for realm creation/update/deletion

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-09 15:36:54 +02:00
0c27d880b0 api: access: domains: add BasicRealmInfo struct and use it
to have better type safety and as preparation for adding more types

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-09 15:36:54 +02:00
be3a0295b6 client: import updates
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-09 15:32:12 +02:00
aa2838c27a move client::pull to server::pull
it's not used by the client and not part of the client, it
just makes use *of* the client, but is used on the
datastore/server...

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-09 15:17:53 +02:00
ea584a7510 move more api types for the client
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-09 15:17:53 +02:00
ba0ccc5991 move some tools used by the client
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-09 15:17:52 +02:00
75f83c6a81 move some api types and resolve imports
in preparation of moving client & proxmox_client_tools out
into a pbs-client subcrate

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-09 15:17:52 +02:00
0dda5a6695 ui: add APT repositories
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-09 13:45:45 +02:00
289738dc1a api: apt: add endpoints for adding/changing repositories
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-09 13:45:45 +02:00
d830804f02 api: apt: add repositories call
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-09 13:45:45 +02:00
82cc4b56e5 depend on proxmox-apt
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-09 13:45:45 +02:00
923f94a4d7 api: access: openid: add PROXMOX_BACKUP_RUN_DIR_M
otherwise it does not compile with 'RUSTFLAGS="--cfg openid"'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-09 13:03:32 +02:00
bbff317aa7 api: disk list: sort by name
So callers get more stable results. Most noticeably, the disk list in
the web UI doesn't jump around upon reloading, and while sorting could
be done directly there, like this other callers get the benefit too.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-09 13:02:30 +02:00
20429238e0 disks: also check for file systems with lsblk
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-09 13:02:30 +02:00
364299740f disks: refactor partition type handling
in preparation to also get the file system type from lsblk.

Co-developed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-09 13:02:29 +02:00
b81818b6ad subscription: set higher-level error to message instead of bailing
While the PVE one "bails" too, it has an eval around those and moves
the error to the message property, so let's do so too to ensure a user
can force an update on a too-old subscription

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-09 12:43:23 +02:00
2f02e431b0 moving more code to pbs-datastore
prune and fixed/dynamic index

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-09 10:40:14 +02:00
e64f38cb6b move chunk_stat, read_chunk to pbs-datastore
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-09 10:40:14 +02:00
ae24382634 bump version to 2.0.2-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-08 14:44:26 +02:00
82cae19d19 ui: datastore/OptionView: only navigate up when we removed the datastore
and not on window close

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-08 14:41:13 +02:00
3f5fbc5620 ui: datastore edit: make keep-last label like the others
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-08 14:06:26 +02:00
000e6cad5c ui: TapeRestore: mark datastore selector as 'not a form field'
since extjs 7.0 those will get picked up by our query logic and
sent to the backend. prevent that by setting isFormField to false
(we assemble the values differently)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-08 14:05:21 +02:00
49f44cedbf api: config: delete datastore: also remove tape backup jobs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-08 12:15:59 +02:00
eb1c59cc2a api: add keep-job-configs flag to datastore remove endpoint
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Suggested Fixes:
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-08 12:15:50 +02:00
c7d032fc17 ui: use task list component from widget toolkit
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-08 11:47:44 +02:00
73b77d4787 ui: tasks: use format_task_status
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-08 11:43:43 +02:00
67466ce564 ui: MainView: fix redirectTo call
takes now an object parameter in extjs 7.0

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-08 11:43:43 +02:00
4e0faf5ef3 ui: use isActionDisabled
isDisabled is deprecated for actions in actioncolumns
(it produces a warning for now)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-08 11:43:43 +02:00
c23192d34e move chunk_store to pbs-datastore
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 14:37:47 +02:00
83771aa037 move tools::process_locker to pbs-tools
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 14:16:34 +02:00
95f9d67ce9 move UPID to pbs-api-types, add UPIDExt
pbs-server side related methods are added via the UPIDExt
trait

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 13:51:03 +02:00
314d360fcd buildsys: run tests on entire workspace by default
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 12:17:10 +02:00
f8a74456cc test fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 12:17:10 +02:00
4906bac10f linking fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 11:59:33 +02:00
86c831a5c3 fixup examples
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 11:49:42 +02:00
a5951b4f38 move manifest and backup_info to pbs-datastore
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 11:34:56 +02:00
f75292bd8d move tools::json to pbs-tools
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 11:34:56 +02:00
bfff4eaa7f move backup id related types to pbs-api-types
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 11:34:56 +02:00
067dc06dba add pbs-systemd: move string and unit handling there
the systemd config/unit parsing stays in pbs for now since
that's not usually required and uses our section config
parser

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 11:34:56 +02:00
18cdf20afc move tools::nom to pbs-tools
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 10:08:26 +02:00
e57841c442 move run_command to pbs-tools
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 10:04:05 +02:00
751f6b6148 move userid types to pbs-api-types
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 09:53:48 +02:00
3c430e9a55 move id and single line comment format to pbs-api-types
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 09:49:38 +02:00
155f657f6b move TaskState trait to pbs-datastore
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 09:24:39 +02:00
86fb38776b add pbs-api-types subcrate, move key_derivation
move key_derivation to pbs-datastore

pbs-api-types should only contain "basic" types which
* are usually required by clients
* don't depend on pbs-related code directly

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 09:04:09 +02:00
f323e90602 add pbs-datastore module
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-06 15:11:52 +02:00
770a36e53a add pbs-tools subcrate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-06 15:10:37 +02:00
d420962fbc split out pbs-runtime module
These are mostly tokio specific "hacks" or "workarounds" we
only really need/want in our binaries without pulling it in
via our library crates.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-06 14:52:25 +02:00
01fd2447b2 buildsys: don't use debcargo
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-06 14:50:46 +02:00
85beb7d875 tree-wide: switch to using mod.rs
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-06 12:04:52 +02:00
af06decd1b split out pbs-buildcfg module
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-06 12:00:14 +02:00
aceae32baa Cargo.toml: regroup imports
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-06 11:48:28 +02:00
74a4f9efc9 bump version to 2.0.1-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 23:14:54 +02:00
fb1e7a86f4 ui: minimally increase font-size of product title and version
Similar to what we did for Proxmox VE's manager. The main title and
version should stand out a bit more compared to simple nav/button
texts.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 23:13:33 +02:00
dc99315cf9 ui: app: fix openID helper usage and rework style
one really does not need an if and an extra intermediate variable for
assigning a simple bool...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 23:12:08 +02:00
34bd1109b0 bump version to 2.0.1-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:45:00 +02:00
13a2445744 buildsys: docs: clean: also clean generated JS files
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:44:13 +02:00
c968da789e acme: nit code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:44:13 +02:00
3f84541412 fix #3496: acme: plugin: add sleep for dns propagation
the dns plugin config allows for a specified amount of time to wait for
the TXT record to be set and propagated through DNS.

This patch adds a sleep for this amount of time.
The log message was taken from the perl implementation in proxmox-acme
for consistency.

Tested with the powerdns plugin in my test setup.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2021-07-03 21:44:13 +02:00
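For illustration, the change described above boils down to sleeping for the configured validation delay after the TXT record has been set. A minimal sketch, assuming tokio and with made-up names (the real plugin code differs):

```rust
use std::time::Duration;

// Hypothetical helper: `delay_secs` would come from the DNS plugin config.
async fn wait_for_txt_propagation(delay_secs: u64) {
    // log why the task pauses here, similar in spirit to the message
    // taken over from the perl implementation
    println!("Sleeping {} seconds to wait for TXT record propagation", delay_secs);
    tokio::time::sleep(Duration::from_secs(delay_secs)).await;
}
```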
4d8bd03668 config: acme: make validation_delay crate public
we need the setting in acme::plugin.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2021-07-03 21:44:13 +02:00
f9bd5e1691 acme: plugin: fix error message
extract_challenge is used by both dns-01 and http-01 challenges.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2021-07-03 21:44:13 +02:00
ecd66ecaf6 restore daemon: use millisecond log resolution
During startup most of the stuff is happening in milliseconds (or
less), so the timestamp granularity of seconds made it hard to tell
if the previous command required 990ms or 1ms, which is quite the
difference in the restore daemon context.

Using micros seems not to bring too much additional information, a
millisecond is already an ok lower time resolution for logging, so
switch only to millis for now.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:44:13 +02:00
33d7292f29 restore daemon: create /run/proxmox-backup on startup
fixes file restore again.

The new Memcom tracking file lives in `/run/proxmox-backup` and is
always created on REST interaction, as CachedUserInfo uses it to
efficiently track config changes, and such a cache is used in each
REST handle_request.

Further, the Memcom infra expects the base run PBS dir to exist
already, which is an OK assumption to have, but in the file-restore
daemon we have a significantly more minimal environment, and the run
dir was simply not required there, even /run isn't a tmpfs yet.

Fixes fda19dcc6f ("fix CachedUserInfo by using a shared memory version counter")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:43:07 +02:00
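A minimal sketch of that startup fix, assuming plain std (the daemon may well do it differently): make sure the runtime directory exists before any REST handler, and thus the Memcom cache, touches it.

```rust
use std::{fs, io};

// Create /run/proxmox-backup (and parents) if it does not exist yet;
// succeeds silently when the directory is already there.
fn ensure_run_dir() -> io::Result<()> {
    fs::create_dir_all("/run/proxmox-backup")
}
```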
f4d371d2d2 REST: set error message extension for bad-request response log
We already send it to the user via the response body, but
log_response does not have (nor wants to have, FWIW) access to the
async body stream, so pass it through the ErrorMessageExtension
mechanism like we do elsewhere.

Note that this is not only useful for the PBS API proxy/daemon but also
for the REST server of the file-restore daemon running inside the restore
VM, and it really is *very* helpful to debug things there.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:34:03 +02:00
835d0e5dd3 memcom: rustfmt + (trailing) whitespace cleanups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:34:03 +02:00
9a06eb1618 file restore daemon: log about basic steps
to make the log more useful..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:34:03 +02:00
309e14ebb7 file restore daemon: reword warning about manual execution
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:34:03 +02:00
2d48533378 REST: rust fmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:34:03 +02:00
fffd6874e6 bump version to 2.0.0-2
only for the file-restore daemon, other packages were not uploaded!

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 02:15:24 +02:00
0ddd48f0b5 d/control: record breaks for older proxmox-backup-restore-image
> requires a Breaks on the old restore image (else the restore daemon
> crashes because of missing lock/LVM support).
- F.G., mailing list

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 02:13:56 +02:00
cb590dbc07 file-restore-daemon/disk: add LVM (thin) support
Parses JSON output from 'pvs' and 'lvs' LVM utils and does two passes:
one to scan for thinpools and create a device node for their
metadata_lv, and a second to load all LVs, thin-provisioned or not.

Should support every LV-type that LVM supports, as we only parse the LVM
tools' output and use 'vgscan --mknodes' to create device nodes for us.

Produces a two-layer BucketComponent hierarchy with VGs followed by LVs;
PVs are mapped to their respective disk node.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-07-03 01:54:31 +02:00
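A rough sketch of that parsing approach, assuming LVM's JSON report mode and the serde_json crate (column selection and error handling are illustrative, not the daemon's actual code):

```rust
use std::process::Command;

// One `lvs` invocation returning JSON; the real code would do two passes:
// first find thin pools (to create their metadata_lv device nodes), then
// map all LVs, thin-provisioned or not.
fn list_lvs() -> Result<serde_json::Value, Box<dyn std::error::Error>> {
    let output = Command::new("lvs")
        .args(["--reportformat", "json", "-o", "vg_name,lv_name,lv_size"])
        .output()?;
    if !output.status.success() {
        return Err("lvs failed".into());
    }
    Ok(serde_json::from_slice(&output.stdout)?)
}
```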
6c4f762c49 file-restore-daemon/disk: ignore already-mounted error and prefix zpool
Prefix zpool mount paths to avoid clashing with other mount namespaces
(like LVM).

Also ignore "already-mounted" error and return it as success instead -
as we always assume that a mount path is unique, this is a safe
assumption, as nothing else could have been mounted here.

This fixes an issue where a mountpoint=legacy subvol might be available
on different disks, and thus have different Bucket instances that don't
share the mountpoint cache, which could lead to an error if the user
tried opening it multiple times on different disks.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-07-03 01:54:23 +02:00
7a0afee391 file-restore-daemon/disk: fix component path errors
otherwise the path ends in an array ["foo", "bar"] instead of "foo/bar"

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-07-03 01:54:19 +02:00
0dda883994 file-restore-daemon/disk: dedup BucketComponents and make size optional
To support nested BucketComponents, it is necessary to dedup them, as
otherwise two components like:
  /foo/bar
  /foo/baz
will result in /foo being shown twice at the first hierarchy level.

Also make the size property per-index and optional, as, for example,
/foo above might not have a size of its own, and bar/baz might have
differing sizes.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-07-03 01:54:13 +02:00
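To illustrate the dedup idea with hypothetical types (not the restore daemon's real BucketComponent): merge path components by name so /foo/bar and /foo/baz share a single /foo node, and keep the size optional because a shared parent has no single meaningful size.

```rust
use std::collections::BTreeMap;

#[derive(Default)]
struct Component {
    size: Option<u64>,
    children: BTreeMap<String, Component>,
}

// Insert a path like ["foo", "bar"]; intermediate nodes are created once
// and reused, so the first hierarchy level never shows duplicates.
fn insert_path(root: &mut Component, path: &[&str], size: Option<u64>) {
    let mut node = root;
    for part in path {
        node = node.children.entry((*part).to_string()).or_default();
    }
    node.size = size;
}
```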
c2e2078b3f openid: conditionally disable api endpoint
since it pulls in lots of additional linked libraries for all binaries
compiled as part of proxmox-backup. it can easily be re-enabled with
`--cfg openid` added to the RUSTFLAGS env variable.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-07-03 01:52:01 +02:00
26a3450f19 openid: move helper from config to api2
it's not really needed in the config module, and this makes it easier to
disable the proxmox-openid dependency linkage as a stop-gap measure.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-07-03 01:52:01 +02:00
324c069848 d/control: bump versioned dependency for proxmox-widget-toolkit to 3.2-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 01:50:12 +02:00
bd4c5607ca d/control: commit update
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 01:49:50 +02:00
e1d85f1840 ui: login: cleanups, mostly openID related
similar to what was done in PVE.

 - factor out openid_login_param to widget-toolkit as
   getOpenIDRedirectionAuthorization and use it
 - use camel case to match our JS style guide and our framework (and
   basically the rest of the JS world)
 - minor cleanups like moving variable definitions into the single if
   branch they're used in

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 01:46:24 +02:00
1ce1a5e5cc docs: installation: drop debian-release specific note
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 01:44:53 +02:00
6f66a0ca71 docs: faq: update support table
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 01:44:09 +02:00
62a5b3907b docs: initial update to repositories for bullseye
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-02 19:02:13 +02:00
85b6c4ead4 ui: login: fix another bogus gettext usage
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-02 09:31:50 +02:00
a190979c04 ui: login: fix bogus gettext usage
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-02 08:40:14 +02:00
4a489ae3de ui: dashboard/DataStoreStatistics: fix closing <i> tag
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-01 07:51:00 +02:00
9ac8b73e07 tape/drive: fix logging when requesting media
we try to load the correct media in a loop until we find the correct tape.
when encountering an error or the wrong tape, we want to log a message
(and send an email if one is set) requesting the correct tape.

while trying to avoid printing the same errors more than once in a row,
we had at least one case (starting with an empty tape in the drive)
which would not print/send any tape request.

rework that code to use a custom 'TapeRequest' enum, which contains
the state + error message, plus a helper that prints and sends an email
when the state changes

this reduces the change check/log to a single variable, instead of 4
(tried, last_media_uuid, last_error, failure_reason)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-06-30 10:25:48 +02:00
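A hedged sketch of that idea (names are illustrative, not the actual tape code): fold the four tracking variables into one state value and only log/notify when that state changes.

```rust
#[derive(PartialEq)]
enum TapeRequest {
    None,
    EmptyDrive(String), // message asking to load a tape
    WrongMedia(String), // message naming the expected label
    Error(String),      // last error text
}

// Print (and, in the real code, mail) the request only on a state change,
// so repeated loop iterations with the same problem stay quiet.
fn update_request_state(last: &mut TapeRequest, new: TapeRequest) {
    if *last != new {
        match &new {
            TapeRequest::None => {}
            TapeRequest::EmptyDrive(msg)
            | TapeRequest::WrongMedia(msg)
            | TapeRequest::Error(msg) => eprintln!("{}", msg),
        }
        *last = new;
    }
}
```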
414be8b675 tape: fix LTO locate_file for HP drives
Add test code to the first locate_file command, compute locate_offset.
Subsequent locate_file commands use that offset.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-06-30 09:08:58 +02:00
fda19dcc6f fix CachedUserInfo by using a shared memory version counter 2021-06-30 08:54:30 +02:00
cd975e5787 ui: implement OpenId login 2021-06-30 08:54:30 +02:00
3b7b1dfb8e api: add openid redirect/login API 2021-06-30 08:54:30 +02:00
d8a47ec649 cleanup user/token is_active() check 2021-06-30 08:54:30 +02:00
252cd3b781 implement new helper is_active_user_id() 2021-06-30 08:54:30 +02:00
0decd11efb cli: add CLI to manage openid realms. 2021-06-30 08:54:30 +02:00
b84d2592fb add API to manage openid realms 2021-06-30 08:54:30 +02:00
0219ba2cc5 check_acl_path: add /access/domains and /access/openid 2021-06-30 08:54:30 +02:00
bbff6c4968 config: new domains.cfg to configure openid realm
Or other realmy types...
2021-06-30 08:54:30 +02:00
bb88c6a29d depend on proxmox-openid-rs 2021-06-30 08:54:30 +02:00
a02466966d update enterprise repository to bullseye
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-28 19:57:50 +02:00
b0fc11804e d/changelog: add actual changelog for initial 2.0 build
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-28 19:40:44 +02:00
d9d81741e3 buildsys: switch to bullseye as upload dist target
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-28 19:10:26 +02:00
9678366102 bump version to 2.0.0-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-28 19:07:46 +02:00
a2c73c78dd buildsys: call dpkg-buildpackage directly in deb-all
else we may double-build

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-28 19:07:15 +02:00
c6a0e7d98e d/control: bump versioned dependency for ExtJS 7.0
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-28 19:05:58 +02:00
85417b2a88 docs: build api-viewer from widget-toolkit-dev
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 14:26:40 +02:00
d738669066 docs: add Toolkit.js to lto-barcode
and generate a single js file for it

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 14:26:40 +02:00
442d6da8fb docs: add Toolkit.js to prune simulator
from proxmox-widget-toolkit-dev and not as normal dependency,
else we would have to ship widget-toolkit on the wiki

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 14:26:40 +02:00
62f10a01db docs/prune-simulator: remove displayField for Calendar Field
in extjs 7.0, specifying displayField overwrites the displayTpl,
which we want to use here, so remove it

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 14:26:40 +02:00
5667b76381 fix #3393 (again): pxar/create: try to read xattrs/fcaps/acls by default
we have a static list of filesystems and their capabilities regarding
file attributes and fs features (e.g. sockets/fifos/etc) which also
includes xattrs,acls and fcaps

if we did not know a filesystem by its magic number (for example cephfs),
we did not even attempt to read xattrs, etc.

this patch adds those flags by default to unknown filesystems, and
removes them when we encounter EOPNOTSUPP (to reduce the number
of syscalls)

with this, we should be able to catch xattrs/acls/fcaps on all
(unknown) fs types that support them

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-28 14:04:22 +02:00
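A minimal sketch of that fallback logic, assuming the bitflags and libc crates (this is not the pxar code itself): start optimistic for unknown filesystems and drop a capability once the kernel answers EOPNOTSUPP.

```rust
bitflags::bitflags! {
    struct FsFeatures: u32 {
        const XATTRS = 0b001;
        const ACLS   = 0b010;
        const FCAPS  = 0b100;
    }
}

// Unknown magic number: assume everything is supported and probe.
fn default_features(known: Option<FsFeatures>) -> FsFeatures {
    known.unwrap_or(FsFeatures::XATTRS | FsFeatures::ACLS | FsFeatures::FCAPS)
}

// After a failed xattr/ACL/fcaps read: stop retrying that feature on this
// filesystem to save syscalls.
fn on_read_error(features: &mut FsFeatures, tried: FsFeatures, errno: i32) {
    if errno == libc::EOPNOTSUPP {
        features.remove(tried);
    }
}
```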
d9b318a444 file-restore/disk: support ZFS subvols with mountpoint=legacy
These require mounting using the regular 'mount' syscall.
Auto-generates an appropriate mount path.

Note that subvols with mountpoint=none cannot be mounted this way, and
would require setting the mountpoint property, which is not possible as
the zpools have to be imported with readonly=on.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-28 13:58:41 +02:00
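For illustration, mounting such a subvol with the plain mount(2) syscall could look roughly like this, using the nix crate (dataset and target are placeholders):

```rust
use nix::mount::{mount, MsFlags};

// Roughly equivalent to: mount -t zfs -o ro <dataset> <target>
fn mount_legacy_subvol(dataset: &str, target: &str) -> nix::Result<()> {
    mount(
        Some(dataset),
        target,
        Some("zfs"),
        MsFlags::MS_RDONLY, // the pools are imported read-only anyway
        None::<&str>,
    )
}
```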
86ce56f193 file-restore/disk: support ZFS pools
Uses the ZFS utils to detect, import and mount zpools. These are
available as a new Bucket type 'zpool'.

Requires some minor changes to the existing disk and partiton detection
code, so the ZFS-specific part can use the information gathered in the
previous pass to associate drive names with their 'drive-xxxN.img.fidx'
node.

For detecting size, the zpool has to be imported. This is only done with
pools containing 5 or fewer disks, as anything else might take too long
(and should seldom be found within VMs).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-28 13:58:41 +02:00
8d72c2c32e file-restore: increase RAM for ZFS and disable ARC
Even with best efforts at keeping it small, including the ZFS tools
in the initramfs seems to have exhausted the small overhead we had left
- give it a bit more RAM to compensate.

Also disable the ZFS ARC, as it's no use in such a memory constrained
environment, and we cache on the QEMU/rust layer anyway.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-28 13:58:41 +02:00
c48c38ab8c async_lru_cache: fix handling of errors in fetch
The future needs to be removed from the pending map in any case, even if
it returned an error, else all upcoming calls to access this key will
always return the same error.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-28 13:48:26 +02:00
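A simplified sketch of the fixed control flow (hypothetical names, not the real async_lru_cache internals): the in-flight entry is dropped on both the success and the error path, so a failed fetch does not poison later lookups of the same key.

```rust
use std::collections::HashSet;
use std::future::Future;
use std::hash::Hash;

async fn fetch_once<K, V, E, F, Fut>(
    pending: &mut HashSet<K>,
    key: K,
    fetch: F,
) -> Result<V, E>
where
    K: Hash + Eq + Clone,
    F: FnOnce() -> Fut,
    Fut: Future<Output = Result<V, E>>,
{
    pending.insert(key.clone());
    let result = fetch().await;
    pending.remove(&key); // runs for Ok *and* Err, unlike the buggy version
    result
}
```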
3d3769830b tape/helpers/snapshot_reader: sort chunks by inode (per index)
sort the chunks we want to back up to tape by inode, to gain some
speed on spinning disks. this is done per index, not globally.

costs a bit of memory, but not too much, about 16 bytes per chunk, which
would mean ~4MiB for a 1TiB index with 4MiB chunks.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 12:16:14 +02:00
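The per-index sort itself is tiny; a sketch with made-up types, pairing each chunk digest with the inode obtained from a stat of its chunk file:

```rust
type Digest = [u8; 32];

// Sort (digest, inode) pairs by inode so the tape writer reads chunks in
// roughly on-disk order; the bookkeeping matches the per-chunk estimate above.
fn sort_chunks_by_inode(mut chunks: Vec<(Digest, u64)>) -> Vec<Digest> {
    chunks.sort_unstable_by_key(|&(_, inode)| inode);
    chunks.into_iter().map(|(digest, _)| digest).collect()
}
```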
4921a411ad backup/datastore: refactor chunk inode sorting to the datastore
so that we can reuse that information

removing the addition to the corrupted list is ok, since
'get_chunks_in_order' returns them at the end of the list
and we do the same if the loading fails later in 'verify_index_chunks'
so we still mark them corrupt
(assuming that the load will fail if the stat does)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 12:14:52 +02:00
81c767efce proxmox-backup-manager: show task log on datastore create
since the output:
Result: "<UPID>"
is not really interesting, show the task log instead while
the datastore is being created, since it is now run in a worker

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 12:09:55 +02:00
60abf03f05 close #3459: manager: add --ignore-verified and --outdated-after parameters
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Reviewed-By: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 11:03:51 +02:00
dcbf29e71b api: add ignore-verified and outdated-after to datastore verify endpoint
preparatory change for fixing #3459

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Reviewed-By: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 11:03:51 +02:00
037e6c0ca8 verify-job: move snapshot filter into function
preparatory steps for fixing #3459

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Reviewed-By: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Dominik Csapak <d.csapak@proxmox.com>
2021-06-28 11:03:44 +02:00
c7024b282a d/control: set R-R-R to run binary d/rules targets as root
the build still requires root to make helper binaries setuid

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-06-14 13:02:20 +02:00
90ff75f85c update to zstd 0.6
compatible with libzstd from bullseye.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-06-14 13:01:43 +02:00
534 changed files with 25324 additions and 22232 deletions

Cargo.toml

@ -1,6 +1,6 @@
[package]
name = "proxmox-backup"
version = "1.1.14"
version = "2.1.2"
authors = [
"Dietmar Maurer <dietmar@proxmox.com>",
"Dominik Csapak <d.csapak@proxmox.com>",
@ -15,29 +15,48 @@ edition = "2018"
license = "AGPL-3"
description = "Proxmox Backup"
homepage = "https://www.proxmox.com"
build = "build.rs"
exclude = [ "build", "debian", "tests/catar_data/test_symlink/symlink1"]
[workspace]
members = [
"pbs-buildcfg",
"pbs-client",
"pbs-config",
"pbs-datastore",
"pbs-fuse-loop",
"proxmox-rest-server",
"proxmox-rrd",
"pbs-tape",
"pbs-tools",
"proxmox-backup-banner",
"proxmox-backup-client",
"proxmox-file-restore",
"proxmox-restore-daemon",
"pxar-bin",
]
[lib]
name = "proxmox_backup"
path = "src/lib.rs"
[dependencies]
apt-pkg-native = "0.3.2"
base64 = "0.12"
base64 = "0.13"
bitflags = "1.2.1"
bytes = "1.0"
cidr = "0.2.1"
crc32fast = "1"
endian_trait = { version = "0.6", features = ["arrays"] }
env_logger = "0.7"
flate2 = "1.0"
anyhow = "1.0"
foreign-types = "0.3"
thiserror = "1.0"
futures = "0.3"
h2 = { version = "0.3", features = [ "stream" ] }
handlebars = "3.0"
hex = "0.4.3"
http = "0.2"
hyper = { version = "0.14", features = [ "full" ] }
lazy_static = "1.4"
@ -50,8 +69,6 @@ openssl = "0.10"
pam = "0.7"
pam-sys = "0.5"
percent-encoding = "2.1"
pin-utils = "0.1.0"
pin-project = "1.0"
regex = "1.2"
rustyline = "7"
serde = { version = "1.0", features = ["derive"] }
@ -63,30 +80,56 @@ tokio-openssl = "0.6.1"
tokio-stream = "0.1.0"
tokio-util = { version = "0.6", features = [ "codec", "io" ] }
tower-service = "0.3.0"
udev = ">= 0.3, <0.5"
udev = "0.4"
url = "2.1"
#valgrind_request = { git = "https://github.com/edef1c/libvalgrind_request", version = "1.1.0", optional = true }
walkdir = "2"
webauthn-rs = "0.2.5"
xdg = "2.2"
zstd = { version = "0.4", features = [ "bindgen" ] }
nom = "5.1"
crossbeam-channel = "0.5"
# Used only by examples currently:
zstd = { version = "0.6", features = [ "bindgen" ] }
pathpatterns = "0.1.2"
pxar = { version = "0.10.1", features = [ "tokio-io" ] }
proxmox = { version = "0.11.6", features = [ "sortable-macro", "api-macro", "cli", "router", "tfa" ] }
proxmox = { version = "0.15.3", features = [ "sortable-macro" ] }
proxmox-http = { version = "0.5.4", features = [ "client", "http-helpers", "websocket" ] }
proxmox-io = "1"
proxmox-lang = "1"
proxmox-router = { version = "1.1", features = [ "cli" ] }
proxmox-schema = { version = "1", features = [ "api-macro" ] }
proxmox-section-config = "1"
proxmox-tfa = { version = "1.3", features = [ "api", "api-types" ] }
proxmox-time = "1"
proxmox-uuid = "1"
proxmox-shared-memory = "0.1.1"
proxmox-sys = "0.1.2"
proxmox-acme-rs = "0.3"
proxmox-fuse = "0.1.1"
proxmox-http = { version = "0.2.1", features = [ "client", "http-helpers", "websocket" ] }
proxmox-apt = "0.8.0"
proxmox-async = "0.2"
proxmox-openid = "0.9.0"
pbs-api-types = { path = "pbs-api-types" }
pbs-buildcfg = { path = "pbs-buildcfg" }
pbs-client = { path = "pbs-client" }
pbs-config = { path = "pbs-config" }
pbs-datastore = { path = "pbs-datastore" }
proxmox-rest-server = { path = "proxmox-rest-server" }
proxmox-rrd = { path = "proxmox-rrd" }
pbs-tools = { path = "pbs-tools" }
pbs-tape = { path = "pbs-tape" }
# Local path overrides
# NOTE: You must run `cargo update` after changing this for it to take effect!
[patch.crates-io]
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "cli", "router", "tfa" ] }
#proxmox-http = { path = "../proxmox/proxmox-http", features = [ "client", "http-helpers", "websocket" ] }
#pxar = { path = "../pxar", features = [ "tokio-io" ] }
#proxmox = { path = "../proxmox/proxmox" }
#proxmox-http = { path = "../proxmox/proxmox-http" }
#proxmox-tfa = { path = "../proxmox/proxmox-tfa" }
#proxmox-schema = { path = "../proxmox/proxmox-schema" }
#pxar = { path = "../pxar" }
[features]
default = []

Makefile

@ -17,7 +17,8 @@ USR_BIN := \
# Binaries usable by admins
USR_SBIN := \
proxmox-backup-manager
proxmox-backup-manager \
proxmox-backup-debug \
# Binaries for services:
SERVICE_BIN := \
@ -30,6 +31,23 @@ SERVICE_BIN := \
RESTORE_BIN := \
proxmox-restore-daemon
SUBCRATES := \
pbs-api-types \
pbs-buildcfg \
pbs-client \
pbs-config \
pbs-datastore \
pbs-fuse-loop \
proxmox-rest-server \
proxmox-rrd \
pbs-tape \
pbs-tools \
proxmox-backup-banner \
proxmox-backup-client \
proxmox-file-restore \
proxmox-restore-daemon \
pxar-bin
ifeq ($(BUILD_MODE), release)
CARGO_BUILD_ARGS += --release
COMPILEDIR := target/release
@ -57,13 +75,15 @@ RESTORE_DBG_DEB=proxmox-backup-file-restore-dbgsym_${DEB_VERSION}_${ARCH}.deb
DOC_DEB=${PACKAGE}-docs_${DEB_VERSION}_all.deb
DEBS=${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB} \
${RESTORE_DEB} ${RESTORE_DBG_DEB}
${RESTORE_DEB} ${RESTORE_DBG_DEB} ${DEBUG_DEB} ${DEBUG_DBG_DEB}
DSC = rust-${PACKAGE}_${DEB_VERSION}.dsc
DESTDIR=
all: cargo-build $(SUBDIRS)
tests ?= --workspace
all: $(SUBDIRS)
.PHONY: $(SUBDIRS)
$(SUBDIRS):
@ -75,25 +95,23 @@ test:
$(CARGO) test $(tests) $(CARGO_BUILD_ARGS)
doc:
$(CARGO) doc --no-deps $(CARGO_BUILD_ARGS)
$(CARGO) doc --workspace --no-deps $(CARGO_BUILD_ARGS)
# always re-create this dir
.PHONY: build
build:
@echo "Setting pkg-buildcfg version to: $(DEB_VERSION_UPSTREAM)"
sed -i -e 's/^version =.*$$/version = "$(DEB_VERSION_UPSTREAM)"/' \
pbs-buildcfg/Cargo.toml
rm -rf build
rm -f debian/control
debcargo package \
--config debian/debcargo.toml \
--changelog-ready \
--no-overlay-write-back \
--directory build \
proxmox-backup \
$(shell dpkg-parsechangelog -l debian/changelog -SVersion | sed -e 's/-.*//')
sed -e '1,/^$$/ ! d' build/debian/control > build/debian/control.src
cat build/debian/control.src build/debian/control.in > build/debian/control
rm build/debian/control.in build/debian/control.src
cp build/debian/control debian/control
rm build/Cargo.lock
mkdir build
cp -a debian \
Cargo.toml src \
$(SUBCRATES) \
docs etc examples tests www zsh-completions \
defines.mk Makefile \
./build/
rm -f build/Cargo.lock
find build/debian -name "*.hint" -delete
$(foreach i,$(SUBDIRS), \
$(MAKE) -C build/$(i) clean ;)
@ -123,27 +141,61 @@ $(DSC): build
cd build; dpkg-buildpackage -S -us -uc -d -nc
lintian $(DSC)
.PHONY: clean distclean deb clean
distclean: clean
clean:
clean: clean-deb
$(foreach i,$(SUBDIRS), \
$(MAKE) -C $(i) clean ;)
$(CARGO) clean
rm -rf *.deb *.dsc *.tar.gz *.buildinfo *.changes build
rm -f .do-cargo-build
find . -name '*~' -exec rm {} ';'
# allows one to avoid running cargo clean when one just wants to tidy up after a package build
clean-deb:
rm -rf *.deb *.dsc *.tar.gz *.buildinfo *.changes build/
.PHONY: dinstall
dinstall: ${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB}
dinstall: ${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB} \
${DEBUG_DEB} ${DEBUG_DBG_DEB}
dpkg -i $^
# make sure we build binaries before docs
docs: cargo-build
docs: $(COMPILEDIR)/dump-catalog-shell-cli $(COMPILEDIR)/docgen
.PHONY: cargo-build
cargo-build:
$(CARGO) build $(CARGO_BUILD_ARGS)
rm -f .do-cargo-build
$(MAKE) $(COMPILED_BINS)
$(COMPILED_BINS) $(COMPILEDIR)/dump-catalog-shell-cli $(COMPILEDIR)/docgen: .do-cargo-build
.do-cargo-build:
$(CARGO) build $(CARGO_BUILD_ARGS) \
--package proxmox-backup-banner \
--bin proxmox-backup-banner \
--package proxmox-backup-client \
--bin proxmox-backup-client \
--bin dump-catalog-shell-cli \
--bin proxmox-backup-debug \
--package proxmox-file-restore \
--bin proxmox-file-restore \
--package pxar-bin \
--bin pxar \
--package pbs-tape \
--bin pmt \
--bin pmtx \
--package proxmox-restore-daemon \
--bin proxmox-restore-daemon \
--package proxmox-backup \
--bin docgen \
--bin proxmox-backup-api \
--bin proxmox-backup-manager \
--bin proxmox-backup-proxy \
--bin proxmox-daily-update \
--bin proxmox-file-restore \
--bin proxmox-tape \
--bin sg-tape-cmd
touch "$@"
$(COMPILED_BINS): cargo-build
.PHONY: lint
lint:
@ -169,12 +221,16 @@ install: $(COMPILED_BINS)
install -m755 $(COMPILEDIR)/$(i) $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/ ;)
$(MAKE) -C www install
$(MAKE) -C docs install
ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS)))
$(MAKE) test # HACK, only test now to avoid clobbering build files with wrong config
endif
.PHONY: upload
upload: ${SERVER_DEB} ${CLIENT_DEB} ${RESTORE_DEB} ${DOC_DEB}
upload: ${SERVER_DEB} ${CLIENT_DEB} ${RESTORE_DEB} ${DOC_DEB} ${DEBUG_DEB}
# check if working directory is clean
git diff --exit-code --stat && git diff --exit-code --stat --staged
tar cf - ${SERVER_DEB} ${SERVER_DBG_DEB} ${DOC_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB} | \
ssh -X repoman@repo.proxmox.com upload --product pbs --dist buster
tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pve,pmg,pbs-client" --dist buster
tar cf - ${RESTORE_DEB} ${RESTORE_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pve" --dist buster
tar cf - ${SERVER_DEB} ${SERVER_DBG_DEB} ${DOC_DEB} ${CLIENT_DEB} \
${CLIENT_DBG_DEB} ${DEBUG_DEB} ${DEBUG_DBG_DEB} \
| ssh -X repoman@repo.proxmox.com upload --product pbs --dist bullseye
tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pve,pmg,pbs-client" --dist bullseye
tar cf - ${RESTORE_DEB} ${RESTORE_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pve" --dist bullseye

build.rs

@ -1,23 +0,0 @@
// build.rs
use std::env;
use std::process::Command;
fn git_command(args: &[&str]) -> String {
match Command::new("git").args(args).output() {
Ok(output) => String::from_utf8(output.stdout).unwrap().trim_end().to_string(),
Err(err) => {
panic!("git {:?} failed: {}", args, err);
}
}
}
fn main() {
let repo_path = git_command(&["rev-parse", "--show-toplevel"]);
let repoid = match env::var("REPOID") {
Ok(repoid) => repoid,
Err(_) => git_command(&["rev-parse", "HEAD"]),
};
println!("cargo:rustc-env=REPOID={}", repoid);
println!("cargo:rerun-if-changed={}/.git/HEAD", repo_path);
}

debian/changelog

@ -1,39 +1,332 @@
rust-proxmox-backup (1.1.14-1) buster; urgency=medium
rust-proxmox-backup (2.1.2-1) bullseye; urgency=medium
* drop RawWaker usage to avoid leaking a refcount
* docs: backup-client: fix wrong reference
* pbs-tools: LruCache: implement Drop to fix a memory leak for the cache
* docs: remotes: note that protected flags will not be synced
* ui: add notice for nearing PBS 1.1 End-of-Life
* sync job: correctly apply rate limit
* backport "datastore: lookup: reuse ChunkStore on stale datastore re-open"
-- Proxmox Support Team <support@proxmox.com> Tue, 23 Nov 2021 13:56:15 +0100
-- Proxmox Support Team <support@proxmox.com> Thu, 02 Jun 2022 18:07:54 +0200
rust-proxmox-backup (2.1.1-2) bullseye; urgency=medium
rust-proxmox-backup (1.1.13-3) buster; urgency=medium
* docs: update and add traffic control related screenshots
* fix sending log-rotation command to API daemons
* docs: mention traffic control (bandwidth limits) for sync jobs
-- Proxmox Support Team <support@proxmox.com> Tue, 19 Oct 2021 10:21:18 +0200
-- Proxmox Support Team <support@proxmox.com> Mon, 22 Nov 2021 16:07:39 +0100
rust-proxmox-backup (1.1.13-2) buster; urgency=medium
rust-proxmox-backup (2.1.1-1) bullseye; urgency=medium
* revert "auth: improve thread safety of 'crypt' C-library", not safe for
Debian buster based releases.
* fix proxmox-backup-manager sync-job list
-- Proxmox Support Team <support@proxmox.com> Mon, 26 Jul 2021 16:40:07 +0200
* ui, api: sync-job: allow one to configure a rate limit
rust-proxmox-backup (1.1.13-1) buster; urgency=medium
* api: snapshot list: set default for 'protected' flag
* ui: datastore content: rework rendering protection state
* docs: update traffic control docs (use HumanBytes)
* ui: traffic-control: include ipv6 in 'all' networks
* ui: traffic-control edit: add spaces between networks for more
readability
* tape: fix passing-through key-fingerprint
* avoid a bogus error regarding logrotate-path due to a reversed check
-- Proxmox Support Team <support@proxmox.com> Mon, 22 Nov 2021 12:24:31 +0100
rust-proxmox-backup (2.1.0-1) bullseye; urgency=medium
* rest server: make successful-ticket auth log a debug one to avoid
syslog spam
* traffic-controls: add API/CLI to show current traffic
* docs: add traffic control section
* ui: use TFA widgets from widget toolkit
* sync: allow pulling groups selectively
* fix #3533: tape backup: filter groups according to config
* proxmox-tape: add missing notify-user option to backup command
* openid: allow arbitrary username-claims
* openid: support configuring the prompt, scopes and ACR values
* use human-byte for traffic-control rate-in/out and burst-in/out config
* ui: add traffic control view and editor
-- Proxmox Support Team <support@proxmox.com> Sat, 20 Nov 2021 22:44:07 +0100
rust-proxmox-backup (2.0.14-1) bullseye; urgency=medium
* fix directory permission problems
* add traffic control configuration config with API
* proxmox-backup-proxy: implement traffic control
* proxmox-backup-client: add rate/burst parameter to backup/restore CLI
* openid_login: verify that firstname, lastname and email fit our
schema definitions
* docs: add info about protection flag to client docs
* fix #3602: ui: datastore/Content: add action to set protection status
* ui: add protected icon to snapshot (if they are protected)
* ui: PruneInputPanel: add keepReason 'protected' for protected backups
* proxmox-backup-client: add 'protected' commands
* acme: interpret no TOS as accepted
* acme: new_account: prevent replacing existing accounts
-- Proxmox Support Team <support@proxmox.com> Fri, 12 Nov 2021 08:04:55 +0100
rust-proxmox-backup (2.0.13-1) bullseye; urgency=medium
* tape: simplify export_media_set for pool writer
* tape: improve export_media error message for not found tape
* rest-server: use hashmap for parameter errors
* proxmox-rrd: use new file format with higher resolution
* proxmox-rrd: use a journal to reduce amount of bytes written
* use new fsync parameter to replace_file and atomic_open_or_create
* docs: language and formatting fixup
* docs: Update for new features/functionality
-- Proxmox Support Team <support@proxmox.com> Thu, 21 Oct 2021 08:17:00 +0200
rust-proxmox-backup (2.0.12-1) bullseye; urgency=medium
* proxmox-backup-proxy: clean up old tasks when their reference was rotated
out of the task-log index
* api daemons: fix sending log-reopen command
-- Proxmox Support Team <support@proxmox.com> Tue, 19 Oct 2021 10:48:28 +0200
rust-proxmox-backup (2.0.11-1) bullseye; urgency=medium
* drop artificial limits for task-UPID length
* tools: smart: only throw error for the fatal usage errors of smartctl
* api: improve returning errors for extjs formatter
* proxmox-rest-server: improve logging
* subscription: switch verification domain over to shop.proxmox.com
* rest-server/daemon: use new sd_notify_barrier helper for handling
synchronization with systemd on service reloading
* ui: datastore/Content: add empty text for no snapshots
* ui: datastore/Content: move first store-load into activate listener to
ensure we've a proper loading mask for better UX
-- Proxmox Support Team <support@proxmox.com> Tue, 05 Oct 2021 16:34:14 +0200
rust-proxmox-backup (2.0.10-1) bullseye; urgency=medium
* ui: fix order of prune keep reasons
* server: add proxmox-backup-debug binary with chunk/file inspection, an API
shell with completion support
* restructured code base to reduce linkage and library ABI version
constraints for all non-server binaries (client, pxar, file-restore)
* zsh: fix passing parameters in auto-completion scripts
* tape: also add 'force-media-set' to available CLI options
* api: nodes: add missing node list (index) api endpoint
* docs: proxmox-backup-debug: add info about the new 'api' subcommand
* docs/technical-overview: add troubleshooting section
-- Proxmox Support Team <support@proxmox.com> Tue, 21 Sep 2021 14:00:48 +0200
rust-proxmox-backup (2.0.9-2) bullseye; urgency=medium
* tape backup: mention groups that were empty
* tape: compute next-media-label for each tape backup job
* tape: lto: increase default timeout to 10 minutes
* ui: display next-media-label for tape backup jobs
* cli: proxmox-tape backup-job list: use status api and display next-run
and next-media-label
-- Proxmox Support Team <support@proxmox.com> Tue, 24 Aug 2021 14:44:12 +0200
rust-proxmox-backup (2.0.8-1) bullseye; urgency=medium
* update proxmox-apt to 0.6
* api: apt: adapt to proxmox-apt back-end changes
* api/ui: allow zstd compression for new zpools
* tape: media_catalog: add snapshot list cache for catalog
* api2: tape: media: use MediaCatalog::snapshot_list for content listing
* tape: lock media_catalog file to get a consistent view with load_catalog
* tape: changer: handle libraries that send a wrong amount of data
* tape: changer: remove unnecessary inquiry parameter
* api2: tape/restore: commit temporary catalog at the end
* docs: tape: add instructions on how to restore the catalog
* ui: tape/ChangerStatus: improve layout for large libraries
* tape: changer: handle invalid descriptor data from library in status page
* datastore config: cleanup code (use flatten attribute)
-- Proxmox Support Team <support@proxmox.com> Mon, 02 Aug 2021 10:34:55 +0200
rust-proxmox-backup (2.0.7-1) bullseye; urgency=medium
* tape changer: better cope with models that are not following spec
proposals when returning the status page
* tape changer: make DVCID information optional, not all devices return it
* restore daemon: set up the 'backup' system user and group in the minimal
restore environment, as we like to ensure that all state files are owned
by them.
-- Proxmox Support Team <support@proxmox.com> Fri, 23 Jul 2021 08:43:51 +0200
rust-proxmox-backup (2.0.6-1) bullseye; urgency=medium
* increase maximum drives per changer to 255
* allow one to pass a secret not only directly through an environment variable,
but also indirectly through a file path, an open file descriptor or a
command that can write the secret to standard out.
* pull in new proxmox library version to improve the file system
compatibility on creation of atomic files, e.g., lock files.
-- Proxmox Support Team <support@proxmox.com> Thu, 22 Jul 2021 10:22:19 +0200
rust-proxmox-backup (2.0.5-2) bullseye; urgency=medium
* ui: tape: backup overview: increase timeout for media-set content
* tape: changer: always retry until timeout
* file-restore: increase lock timeout on QEMU map
* fix #3515: file-restore-daemon: allow LVs/PVs with dash in name
* fix #3526: correctly filter tasks with 'since' and 'until'
* tape: changer: make scsi request for DVCID a separate one, as some
libraries cannot handle requesting that combined with volume tags in one
go
* api, ui: datastore: add new 'prune-datastore' api call and expose it with
a 'Prune All' button
* make creating log files more robust so that they are always owned by the
less privileged `backup` user
-- Proxmox Support Team <support@proxmox.com> Wed, 21 Jul 2021 09:12:39 +0200
rust-proxmox-backup (2.0.4-1) bullseye; urgency=medium
* change tape drive lock path to avoid issues with sticky bit on tmpfs
mountpoint
* tape: changer: query transport-element types separately
* auth: improve thread safety of 'crypt' C-library
* file-restore: increase lock timeout on QEMU map
-- Proxmox Support Team <support@proxmox.com> Mon, 12 Jul 2021 18:51:21 +0200
rust-proxmox-backup (2.0.3-1) bullseye; urgency=medium
* api: apt: add repositories info and update calls
* ui: administration: add APT repositories status and update panel
* api: access domains: add get/create/update/delete endpoints for realms
* ui: access control: add 'Realm' tab for adding and editing OpenID Connect
identity provider
* fix #3447: ui: Dashboard: disallow selection of datastore statistics row
* ui: tapeRestore: make window non-resizable
* ui: dashboard: rework resource-load panel to a more detailed status panel,
showing, among other things, uptime, Kernel version, CPU info and
repository status.
* ui: administration/dashboard: auto-scale columns count and add
browser-local setting to override that to a fixed value of columns.
* fix #3212: api, ui: add support for notes on backup groups
-- Proxmox Support Team <support@proxmox.com> Mon, 12 Jul 2021 08:07:41 +0200
rust-proxmox-backup (2.0.2-1) bullseye; urgency=medium
* ui: use task list component from widget toolkit
* api: add keep-job-configs flag to datastore remove endpoint
* api: config: delete datastore: also remove tape backup jobs
* ui: tape restore: mark datastore selector as 'not a form field' to fix
compatibility with ExtJS 7.0
* ui: datastore removal: only navigate away when the user actually confirmed
the removal of that datastore
-- Proxmox Support Team <support@proxmox.com> Thu, 08 Jul 2021 14:44:12 +0200
rust-proxmox-backup (2.0.1-2) bullseye; urgency=medium
* file restore daemon: log basic startup steps
* REST-API: set error message extension for bad-request response log to
ensure the actual error is logged in any (access) log, making debugging
such issues easier.
such issues easier
* restore daemon: create /run/proxmox-backup on startup as there's now some
runtime state saved there, which failed all API requests to the restore
daemon otherwise
* restore daemon: use millisecond log resolution
@ -41,33 +334,19 @@ rust-proxmox-backup (1.1.13-1) buster; urgency=medium
ensuring DNS propagation of that record. This makes it catch up with the
docs/web-interface, where the option was already available.
-- Proxmox Support Team <support@proxmox.com> Fri, 23 Jul 2021 12:34:29 +0200
* docs: initial update to repositories for bullseye
rust-proxmox-backup (1.1.12-1) buster; urgency=medium
-- Proxmox Support Team <support@proxmox.com> Sat, 03 Jul 2021 23:14:49 +0200
* subscription: set higher-level error to message instead of bailing out, to
ensure a force-check gets through
rust-proxmox-backup (2.0.0-2) bullseye; urgency=medium
* ui: dashboard: datastore stats: fix closing <i> tag
* file-restore-daemon/disk: add LVM (thin) support
* ui: datastore: option view: only navigate up when we actually removed the
datastore
-- Proxmox Support Team <support@proxmox.com> Sat, 03 Jul 2021 02:15:16 +0200
-- Proxmox Support Team <support@proxmox.com> Fri, 09 Jul 2021 12:56:35 +0200
rust-proxmox-backup (2.0.0-1) bullseye; urgency=medium
rust-proxmox-backup (1.1.11-1) buster; urgency=medium
* tape/drive: fix logging when requesting media
* tape: fix LTO locate_file for HP drives
* fix #3393 (again): pxar/create: try to read xattrs/fcaps/acls by default
* proxmox-backup-manager: show task log on datastore create
-- Proxmox Support Team <support@proxmox.com> Wed, 30 Jun 2021 11:24:20 +0200
rust-proxmox-backup (1.1.10-1) buster; urgency=medium
* initial bump for Debian 11 Bullseye / Proxmox Backup Server 2.0
* ui: datastore list summary: catch and show errors per datastore
@ -84,7 +363,7 @@ rust-proxmox-backup (1.1.10-1) buster; urgency=medium
* ui: datastore options: add remove button to drop a datastore from the
configuration, without removing any actual data
* ui: tape: drive selector: do not autoselect the drive
* ui: tape: drive selector: do not auto select the drive
* ui: tape: backup job: use correct default value for pbsUserSelector
@ -93,7 +372,22 @@ rust-proxmox-backup (1.1.10-1) buster; urgency=medium
* backup: add helpers for async last recently used (LRU) caches for chunk
and index reading of backup snapshot
-- Proxmox Support Team <support@proxmox.com> Wed, 16 Jun 2021 09:46:15 +0200
* fix #3459: manager: add --ignore-verified and --outdated-after parameters
* proxmox-backup-manager: show task log on datastore create
* tape: snapshot reader: read chunks sorted by inode (per index) to improve
sequential reads when backing up data from slow spinning disks to tape.
* file-restore: support ZFS pools
* improve fix for #3393: pxar create: try to read xattrs/fcaps/acls by default
* fix compatibility with ExtJS 7.0
* docs: build api-viewer from widget-toolkit-dev
-- Proxmox Support Team <support@proxmox.com> Mon, 28 Jun 2021 19:35:40 +0200
rust-proxmox-backup (1.1.9-1) stable; urgency=medium

debian/control

@ -1,16 +1,17 @@
Source: rust-proxmox-backup
Section: admin
Priority: optional
Build-Depends: debhelper (>= 11),
dh-cargo (>= 18),
Build-Depends: debhelper (>= 12),
dh-cargo (>= 24),
cargo:native,
rustc:native,
libstd-rust-dev,
librust-anyhow-1+default-dev,
librust-apt-pkg-native-0.3+default-dev (>= 0.3.2-~~),
librust-base64-0.12+default-dev,
librust-base64-0.13+default-dev,
librust-bitflags-1+default-dev (>= 1.2.1-~~),
librust-bytes-1+default-dev,
librust-cidr-0.2+default-dev (>= 0.2.1-~~),
librust-crc32fast-1+default-dev,
librust-crossbeam-channel-0.5+default-dev,
librust-endian-trait-0.6+arrays-dev,
@ -22,9 +23,10 @@ Build-Depends: debhelper (>= 11),
librust-h2-0.3+default-dev,
librust-h2-0.3+stream-dev,
librust-handlebars-3+default-dev,
librust-hex-0.4+default-dev (>= 0.4.3-~~),
librust-http-0.2+default-dev,
librust-hyper-0.14+default-dev,
librust-hyper-0.14+full-dev,
librust-hyper-0.14+default-dev (>= 0.14.5-~~),
librust-hyper-0.14+full-dev (>= 0.14.5-~~),
librust-lazy-static-1+default-dev (>= 1.4-~~),
librust-libc-0.2+default-dev,
librust-log-0.4+default-dev,
@ -37,26 +39,44 @@ Build-Depends: debhelper (>= 11),
librust-pam-sys-0.5+default-dev,
librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
librust-percent-encoding-2+default-dev (>= 2.1-~~),
librust-pin-project-1+default-dev,
librust-pin-utils-0.1+default-dev,
librust-proxmox-0.11+api-macro-dev (>= 0.11.6-~~),
librust-proxmox-0.11+cli-dev (>= 0.11.6-~~),
librust-proxmox-0.11+default-dev (>= 0.11.6-~~),
librust-proxmox-0.11+router-dev (>= 0.11.6-~~),
librust-proxmox-0.11+sortable-macro-dev (>= 0.11.6-~~),
librust-proxmox-0.11+tfa-dev (>= 0.11.6-~~),
librust-pin-project-lite-0.2+default-dev,
librust-proxmox-0.15+default-dev (>= 0.15.3-~~),
librust-proxmox-0.15+sortable-macro-dev (>= 0.15.3-~~),
librust-proxmox-0.15+tokio-dev (>= 0.15.3-~~),
librust-proxmox-acme-rs-0.3+default-dev,
librust-proxmox-apt-0.8+default-dev,
librust-proxmox-async-0.2+default-dev,
librust-proxmox-borrow-1+default-dev,
librust-proxmox-fuse-0.1+default-dev (>= 0.1.1-~~),
librust-proxmox-http-0.2+client-dev (>= 0.2.1-~~),
librust-proxmox-http-0.2+default-dev (>= 0.2.1-~~),
librust-proxmox-http-0.2+http-helpers-dev (>= 0.2.1-~~),
librust-proxmox-http-0.2+websocket-dev (>= 0.2.1-~~),
librust-proxmox-http-0.5+client-dev (>= 0.5.4-~~),
librust-proxmox-http-0.5+default-dev (>= 0.5.4-~~),
librust-proxmox-http-0.5+http-helpers-dev (>= 0.5.4-~~),
librust-proxmox-http-0.5+websocket-dev (>= 0.5.4-~~),
librust-proxmox-io-1+default-dev,
librust-proxmox-io-1+tokio-dev,
librust-proxmox-lang-1+default-dev,
librust-proxmox-openid-0.9+default-dev,
librust-proxmox-router-1+cli-dev (>= 1.1-~~),
librust-proxmox-router-1+default-dev (>= 1.1-~~),
librust-proxmox-schema-1+api-macro-dev (>= 1.0.1-~~),
librust-proxmox-schema-1+default-dev (>= 1.0.1-~~),
librust-proxmox-schema-1+upid-api-impl-dev (>= 1.0.1-~~),
librust-proxmox-section-config-1+default-dev,
librust-proxmox-shared-memory-0.1+default-dev (>= 0.1.1-~~),
librust-proxmox-sys-0.1+default-dev (>= 0.1.2-~~),
librust-proxmox-tfa-1+api-dev (>= 1.3-~~),
librust-proxmox-tfa-1+api-types-dev (>= 1.3-~~),
librust-proxmox-tfa-1+default-dev (>= 1.3-~~),
librust-proxmox-time-1+default-dev (>= 1.1-~~),
librust-proxmox-uuid-1+default-dev,
librust-proxmox-uuid-1+serde-dev,
librust-pxar-0.10+default-dev (>= 0.10.1-~~),
librust-pxar-0.10+tokio-io-dev (>= 0.10.1-~~),
librust-regex-1+default-dev (>= 1.2-~~),
librust-rustyline-7+default-dev,
librust-serde-1+default-dev,
librust-serde-1+derive-dev,
librust-serde-cbor-0.11+default-dev (>= 0.11.1-~~),
librust-serde-json-1+default-dev,
librust-siphasher-0.3+default-dev,
librust-syslog-4+default-dev,
@ -72,6 +92,7 @@ Build-Depends: debhelper (>= 11),
librust-tokio-1+rt-dev (>= 1.6-~~),
librust-tokio-1+rt-multi-thread-dev (>= 1.6-~~),
librust-tokio-1+signal-dev (>= 1.6-~~),
librust-tokio-1+sync-dev (>= 1.6-~~),
librust-tokio-1+time-dev (>= 1.6-~~),
librust-tokio-openssl-0.6+default-dev (>= 0.6.1-~~),
librust-tokio-stream-0.1+default-dev,
@ -79,16 +100,15 @@ Build-Depends: debhelper (>= 11),
librust-tokio-util-0.6+default-dev,
librust-tokio-util-0.6+io-dev,
librust-tower-service-0.3+default-dev,
librust-udev-0.4+default-dev | librust-udev-0.3+default-dev,
librust-udev-0.4+default-dev,
librust-url-2+default-dev (>= 2.1-~~),
librust-walkdir-2+default-dev,
librust-webauthn-rs-0.2+default-dev (>= 0.2.5-~~),
librust-xdg-2+default-dev (>= 2.2-~~),
librust-zstd-0.4+bindgen-dev,
librust-zstd-0.4+default-dev,
librust-zstd-0.6+bindgen-dev,
librust-zstd-0.6+default-dev,
libacl1-dev,
libfuse3-dev,
libsystemd-dev,
libsystemd-dev (>= 246-~~),
uuid-dev,
libsgutils2-dev,
bash-completion,
@ -99,6 +119,7 @@ Build-Depends: debhelper (>= 11),
graphviz <!nodoc>,
latexmk <!nodoc>,
patchelf,
proxmox-widget-toolkit-dev <!nodoc>,
pve-eslint (>= 7.18.0-1),
python3-docutils,
python3-pygments,
@ -109,15 +130,16 @@ Build-Depends: debhelper (>= 11),
texlive-xetex <!nodoc>,
xindy <!nodoc>
Maintainer: Proxmox Support Team <support@proxmox.com>
Standards-Version: 4.4.1
Standards-Version: 4.5.1
Vcs-Git: git://git.proxmox.com/git/proxmox-backup.git
Vcs-Browser: https://git.proxmox.com/?p=proxmox-backup.git;a=summary
Homepage: https://www.proxmox.com
Rules-Requires-Root: binary-targets
Package: proxmox-backup-server
Architecture: any
Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1),
libjs-extjs (>= 7~),
libjs-qrcodejs (>= 1.20201119),
libproxmox-acme-plugins,
libsgutils2-2,
@ -128,7 +150,7 @@ Depends: fonts-font-awesome,
postfix | mail-transport-agent,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.6-2),
proxmox-widget-toolkit (>= 3.4-3),
pve-xtermjs (>= 4.7.0-1),
sg3-utils,
smartmontools,
@ -152,7 +174,8 @@ Description: Proxmox Backup Client tools
Package: proxmox-backup-docs
Build-Profiles: <!nodoc>
Section: doc
Depends: libjs-extjs,
Depends: fonts-font-awesome,
libjs-extjs,
libjs-mathjax,
${misc:Depends},
Architecture: all
@ -165,6 +188,7 @@ Depends: ${misc:Depends},
${shlibs:Depends},
Recommends: pve-qemu-kvm (>= 5.0.0-9),
proxmox-backup-restore-image,
Breaks: proxmox-backup-restore-image (<< 0.3.1)
Description: Proxmox Backup single file restore tools for pxar and block device backups
This package contains the Proxmox Backup single file restore client for
restoring individual files and folders from both host/container and VM/block

debian/control.in

@ -1,55 +0,0 @@
Package: proxmox-backup-server
Architecture: any
Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1),
libjs-qrcodejs (>= 1.20201119),
libproxmox-acme-plugins,
libsgutils2-2,
libzstd1 (>= 1.3.8),
lvm2,
openssh-server,
pbs-i18n,
postfix | mail-transport-agent,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.6-2),
pve-xtermjs (>= 4.7.0-1),
sg3-utils,
smartmontools,
${misc:Depends},
${shlibs:Depends},
Recommends: zfsutils-linux,
ifupdown2,
Description: Proxmox Backup Server daemon with tools and GUI
This package contains the Proxmox Backup Server daemons and related
tools. This includes a web-based graphical user interface.
Package: proxmox-backup-client
Architecture: any
Depends: qrencode,
${misc:Depends},
${shlibs:Depends},
Description: Proxmox Backup Client tools
This package contains the Proxmox Backup client, which provides a
simple command line tool to create and restore backups.
Package: proxmox-backup-docs
Build-Profiles: <!nodoc>
Section: doc
Depends: libjs-extjs,
libjs-mathjax,
${misc:Depends},
Architecture: all
Description: Proxmox Backup Documentation
This package contains the Proxmox Backup Documentation files.
Package: proxmox-backup-file-restore
Architecture: any
Depends: ${misc:Depends},
${shlibs:Depends},
Recommends: pve-qemu-kvm (>= 5.0.0-9),
proxmox-backup-restore-image,
Description: Proxmox Backup single file restore tools for pxar and block device backups
This package contains the Proxmox Backup single file restore client for
restoring individual files and folders from both host/container and VM/block
device backups. It includes a block device restore driver using QEMU.

debian/debcargo.toml

@ -1,42 +0,0 @@
overlay = "."
crate_src_path = ".."
whitelist = ["tests/*.c"]
maintainer = "Proxmox Support Team <support@proxmox.com>"
[source]
vcs_git = "git://git.proxmox.com/git/proxmox-backup.git"
vcs_browser = "https://git.proxmox.com/?p=proxmox-backup.git;a=summary"
section = "admin"
build_depends = [
"bash-completion",
"debhelper (>= 12~)",
"fonts-dejavu-core <!nodoc>",
"fonts-lato <!nodoc>",
"fonts-open-sans <!nodoc>",
"graphviz <!nodoc>",
"latexmk <!nodoc>",
"patchelf",
"pve-eslint (>= 7.18.0-1)",
"python3-docutils",
"python3-pygments",
"python3-sphinx <!nodoc>",
"rsync",
"texlive-fonts-extra <!nodoc>",
"texlive-fonts-recommended <!nodoc>",
"texlive-xetex <!nodoc>",
"xindy <!nodoc>",
]
build_depends_excludes = [
"debhelper (>=11)",
]
[packages.lib]
depends = [
"libacl1-dev",
"libfuse3-dev",
"libsystemd-dev",
"uuid-dev",
"libsgutils2-dev",
]

debian/postinst

@ -4,6 +4,14 @@ set -e
#DEBHELPER#
update_sync_job() {
job="$1"
echo "Updating sync job '$job' to make old 'remove-vanished' default explicit.."
proxmox-backup-manager sync-job update "$job" --remove-vanished true \
|| echo "Failed, please check sync.cfg manually!"
}
case "$1" in
configure)
# need to have user backup in the tape group
@ -26,48 +34,35 @@ case "$1" in
fi
deb-systemd-invoke $_dh_action proxmox-backup.service proxmox-backup-proxy.service >/dev/null || true
# FIXME: Remove with 1.1
if test -n "$2"; then
if dpkg --compare-versions "$2" 'lt' '0.9.4-1'; then
if grep -s -q -P -e '^\s+verify-schedule ' /etc/proxmox-backup/datastore.cfg; then
echo "NOTE: drop all verify schedules from datastore config."
echo "You can now add more flexible verify jobs"
flock -w 30 /etc/proxmox-backup/.datastore.lck \
sed -i '/^\s\+verify-schedule /d' /etc/proxmox-backup/datastore.cfg || true
fi
fi
if dpkg --compare-versions "$2" 'le' '0.9.5-1'; then
chown --quiet backup:backup /var/log/proxmox-backup/api/auth.log || true
fi
if dpkg --compare-versions "$2" 'le' '0.9.7-1'; then
if [ -e /etc/proxmox-backup/remote.cfg ]; then
echo "NOTE: Switching over remote.cfg to new field names.."
flock -w 30 /etc/proxmox-backup/.remote.lck \
sed -i \
-e 's/^\s\+userid /\tauth-id /g' \
/etc/proxmox-backup/remote.cfg || true
fi
fi
if dpkg --compare-versions "$2" 'le' '1.0.14-1'; then
# FIXME: Remove with 2.0
if grep -s -q -P -e '^linux:' /etc/proxmox-backup/tape.cfg; then
echo "========="
echo "= NOTE: You have now unsupported 'linux' tape drives configured."
echo "= * Execute 'udevadm control --reload-rules && udevadm trigger' to update /dev"
echo "= * Edit '/etc/proxmox-backup/tape.cfg', remove 'linux' entries and re-add over CLI/GUI"
echo "========="
fi
fi
# FIXME: remove with 2.0
if [ -d "/var/lib/proxmox-backup/tape" ] &&
[ "$(stat --printf '%a' '/var/lib/proxmox-backup/tape')" != "750" ]; then
chmod 0750 /var/lib/proxmox-backup/tape || true
fi
# FIXME: Remove in future version once we're sure no broken entries remain in anyone's files
if grep -q -e ':termproxy::[^@]\+: ' /var/log/proxmox-backup/tasks/active; then
echo "Fixing up termproxy user id in task log..."
flock -w 30 /var/log/proxmox-backup/tasks/active.lock sed -i 's/:termproxy::\([^@]\+\): /:termproxy::\1@pam: /' /var/log/proxmox-backup/tasks/active || true
fi
if dpkg --compare-versions "$2" 'lt' '7.1-1' && test -e /etc/proxmox-backup/sync.cfg; then
prev_job=""
# read from HERE doc because POSIX sh limitations
while read -r key value; do
if test "$key" = "sync:"; then
if test -n "$prev_job"; then
# previous job doesn't have an explicit value
update_sync_job "$prev_job"
fi
prev_job=$value
else
prev_job=""
fi
done <<EOF
$(grep -e '^sync:' -e 'remove-vanished' /etc/proxmox-backup/sync.cfg)
EOF
if test -n "$prev_job"; then
# last job doesn't have an explicit value
update_sync_job "$prev_job"
fi
fi
fi
;;

debian/proxmox-backup-debug.bc (new file)

@ -0,0 +1,8 @@
# proxmox-backup-debug bash completion
# see http://tiswww.case.edu/php/chet/bash/FAQ
# and __ltrim_colon_completions() in /usr/share/bash-completion/bash_completion
# this modifies global var, but I found no better way
COMP_WORDBREAKS=${COMP_WORDBREAKS//:}
complete -C 'proxmox-backup-debug bashcomplete' proxmox-backup-debug


@ -1,5 +1,6 @@
/usr/share/doc/proxmox-backup/proxmox-backup.pdf /usr/share/doc/proxmox-backup/html/proxmox-backup.pdf
/usr/share/javascript/extjs /usr/share/doc/proxmox-backup/html/prune-simulator/extjs
/usr/share/javascript/extjs /usr/share/doc/proxmox-backup/html/lto-barcode/extjs
/usr/share/fonts-font-awesome/ /usr/share/doc/proxmox-backup/html/lto-barcode/font-awesome
/usr/share/javascript/extjs /usr/share/doc/proxmox-backup/html/api-viewer/extjs
/usr/share/javascript/mathjax /usr/share/doc/proxmox-backup/html/_static/mathjax


@ -1,4 +1,5 @@
debian/proxmox-backup-manager.bc proxmox-backup-manager
debian/proxmox-backup-debug.bc proxmox-backup-debug
debian/proxmox-tape.bc proxmox-tape
debian/pmtx.bc pmtx
debian/pmt.bc pmt


@ -9,6 +9,7 @@ usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-proxy
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-banner
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-daily-update
usr/lib/x86_64-linux-gnu/proxmox-backup/sg-tape-cmd
usr/sbin/proxmox-backup-debug
usr/sbin/proxmox-backup-manager
usr/bin/pmtx
usr/bin/pmt
@ -17,6 +18,7 @@ usr/share/javascript/proxmox-backup/index.hbs
usr/share/javascript/proxmox-backup/css/ext6-pbs.css
usr/share/javascript/proxmox-backup/images
usr/share/javascript/proxmox-backup/js/proxmox-backup-gui.js
usr/share/man/man1/proxmox-backup-debug.1
usr/share/man/man1/proxmox-backup-manager.1
usr/share/man/man1/proxmox-backup-proxy.1
usr/share/man/man1/proxmox-tape.1
@ -31,6 +33,7 @@ usr/share/man/man5/verification.cfg.5
usr/share/man/man5/media-pool.cfg.5
usr/share/man/man5/tape.cfg.5
usr/share/man/man5/tape-job.cfg.5
usr/share/zsh/vendor-completions/_proxmox-backup-debug
usr/share/zsh/vendor-completions/_proxmox-backup-manager
usr/share/zsh/vendor-completions/_proxmox-tape
usr/share/zsh/vendor-completions/_pmtx

debian/rules

@ -32,6 +32,9 @@ override_dh_auto_build:
override_dh_missing:
dh_missing --fail-missing
override_dh_auto_test:
# ignore here to avoid rebuilding the binaries with the wrong target
override_dh_auto_install:
dh_auto_install -- \
PROXY_USER=backup \
@ -45,11 +48,6 @@ override_dh_installsystemd:
override_dh_fixperms:
dh_fixperms --exclude sg-tape-cmd
# workaround https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=933541
# TODO: remove once available (Debian 11 ?)
override_dh_dwz:
dh_dwz --no-dwz-multifile
override_dh_strip:
dh_strip
for exe in $$(find \


@ -5,6 +5,7 @@ GENERATED_SYNOPSIS := \
proxmox-backup-client/synopsis.rst \
proxmox-backup-client/catalog-shell-synopsis.rst \
proxmox-backup-manager/synopsis.rst \
proxmox-backup-debug/synopsis.rst \
proxmox-file-restore/synopsis.rst \
pxar/synopsis.rst \
pmtx/synopsis.rst \
@ -27,7 +28,8 @@ MAN1_PAGES := \
proxmox-backup-proxy.1 \
proxmox-backup-client.1 \
proxmox-backup-manager.1 \
proxmox-file-restore.1
proxmox-file-restore.1 \
proxmox-backup-debug.1
MAN5_PAGES := \
media-pool.cfg.5 \
@ -46,23 +48,35 @@ PRUNE_SIMULATOR_FILES := \
prune-simulator/clear-trigger.png \
prune-simulator/prune-simulator.js
PRUNE_SIMULATOR_JS_SOURCE := \
/usr/share/javascript/proxmox-widget-toolkit-dev/Toolkit.js \
prune-simulator/prune-simulator_source.js
LTO_BARCODE_JS_SOURCE := \
/usr/share/javascript/proxmox-widget-toolkit-dev/Toolkit.js \
lto-barcode/code39.js \
lto-barcode/prefix-field.js \
lto-barcode/label-style.js \
lto-barcode/tape-type.js \
lto-barcode/paper-size.js \
lto-barcode/page-layout.js \
lto-barcode/page-calibration.js \
lto-barcode/label-list.js \
lto-barcode/label-setup.js \
lto-barcode/lto-barcode.js
LTO_BARCODE_FILES := \
lto-barcode/index.html \
lto-barcode/code39.js \
lto-barcode/prefix-field.js \
lto-barcode/label-style.js \
lto-barcode/tape-type.js \
lto-barcode/paper-size.js \
lto-barcode/page-layout.js \
lto-barcode/page-calibration.js \
lto-barcode/label-list.js \
lto-barcode/label-setup.js \
lto-barcode/lto-barcode.js
lto-barcode/lto-barcode-generator.js
API_VIEWER_SOURCES= \
api-viewer/index.html \
api-viewer/apidoc.js
API_VIEWER_FILES := \
api-viewer/apidata.js \
/usr/share/javascript/proxmox-widget-toolkit-dev/APIViewer.js \
# Sphinx documentation setup
SPHINXOPTS =
SPHINXBUILD = sphinx-build
@ -187,6 +201,12 @@ proxmox-file-restore/synopsis.rst: ${COMPILEDIR}/proxmox-file-restore
proxmox-file-restore.1: proxmox-file-restore/man1.rst proxmox-file-restore/description.rst proxmox-file-restore/synopsis.rst
rst2man $< >$@
proxmox-backup-debug/synopsis.rst: ${COMPILEDIR}/proxmox-backup-debug
${COMPILEDIR}/proxmox-backup-debug printdoc > proxmox-backup-debug/synopsis.rst
proxmox-backup-debug.1: proxmox-backup-debug/man1.rst proxmox-backup-debug/description.rst proxmox-backup-debug/synopsis.rst
rst2man $< >$@
.PHONY: onlinehelpinfo
onlinehelpinfo:
@echo "Generating OnlineHelpInfo.js..."
@ -196,8 +216,17 @@ onlinehelpinfo:
api-viewer/apidata.js: ${COMPILEDIR}/docgen
${COMPILEDIR}/docgen apidata.js >$@
api-viewer/apidoc.js: api-viewer/apidata.js api-viewer/PBSAPI.js
cat api-viewer/apidata.js api-viewer/PBSAPI.js >$@
api-viewer/apidoc.js: ${API_VIEWER_FILES}
cat ${API_VIEWER_FILES} >$@.tmp
mv $@.tmp $@
prune-simulator/prune-simulator.js: ${PRUNE_SIMULATOR_JS_SOURCE}
cat ${PRUNE_SIMULATOR_JS_SOURCE} >$@.tmp
mv $@.tmp $@
lto-barcode/lto-barcode-generator.js: ${LTO_BARCODE_JS_SOURCE}
cat ${LTO_BARCODE_JS_SOURCE} >$@.tmp
mv $@.tmp $@
.PHONY: html
html: ${GENERATED_SYNOPSIS} images/proxmox-logo.svg custom.css conf.py ${PRUNE_SIMULATOR_FILES} ${LTO_BARCODE_FILES} ${API_VIEWER_SOURCES}
@ -228,7 +257,7 @@ epub3: ${GENERATED_SYNOPSIS}
clean:
rm -r -f *~ *.1 ${BUILDDIR} ${GENERATED_SYNOPSIS} api-viewer/apidata.js
rm -f api-viewer/apidoc.js lto-barcode/lto-barcode-generator.js
rm -f api-viewer/apidoc.js lto-barcode/lto-barcode-generator.js prune-simulator/prune-simulator.js
install_manual_pages: ${MAN1_PAGES} ${MAN5_PAGES}


@ -1,526 +0,0 @@
// avoid errors when running without development tools
if (!Ext.isDefined(Ext.global.console)) {
var console = {
dir: function() {},
log: function() {}
};
}
Ext.onReady(function() {
Ext.define('pve-param-schema', {
extend: 'Ext.data.Model',
fields: [
'name', 'type', 'typetext', 'description', 'verbose_description',
'enum', 'minimum', 'maximum', 'minLength', 'maxLength',
'pattern', 'title', 'requires', 'format', 'default',
'disallow', 'extends', 'links',
{
name: 'optional',
type: 'boolean'
}
]
});
var store = Ext.define('pve-updated-treestore', {
extend: 'Ext.data.TreeStore',
model: Ext.define('pve-api-doc', {
extend: 'Ext.data.Model',
fields: [
'path', 'info', 'text',
]
}),
proxy: {
type: 'memory',
data: pbsapi
},
sorters: [{
property: 'leaf',
direction: 'ASC'
}, {
property: 'text',
direction: 'ASC'
}],
filterer: 'bottomup',
doFilter: function(node) {
this.filterNodes(node, this.getFilters().getFilterFn(), true);
},
filterNodes: function(node, filterFn, parentVisible) {
var me = this,
bottomUpFiltering = me.filterer === 'bottomup',
match = filterFn(node) && parentVisible || (node.isRoot() && !me.getRootVisible()),
childNodes = node.childNodes,
len = childNodes && childNodes.length, i, matchingChildren;
if (len) {
for (i = 0; i < len; ++i) {
matchingChildren = me.filterNodes(childNodes[i], filterFn, match || bottomUpFiltering) || matchingChildren;
}
if (bottomUpFiltering) {
match = matchingChildren || match;
}
}
node.set("visible", match, me._silentOptions);
return match;
},
}).create();
var render_description = function(value, metaData, record) {
var pdef = record.data;
value = pdef.verbose_description || value;
// TODO: try to render asciidoc correctly
metaData.style = 'white-space:pre-wrap;'
return Ext.htmlEncode(value);
};
var render_type = function(value, metaData, record) {
var pdef = record.data;
return pdef['enum'] ? 'enum' : (pdef.type || 'string');
};
let render_simple_format = function(pdef, type_fallback) {
if (pdef.typetext)
return pdef.typetext;
if (pdef['enum'])
return pdef['enum'].join(' | ');
if (pdef.format)
return pdef.format;
if (pdef.pattern)
return pdef.pattern;
if (pdef.type === 'boolean')
return `<true|false>`;
if (type_fallback && pdef.type)
return `<${pdef.type}>`;
return;
};
let render_format = function(value, metaData, record) {
let pdef = record.data;
metaData.style = 'white-space:normal;'
if (pdef.type === 'array' && pdef.items) {
let format = render_simple_format(pdef.items, true);
return `[${Ext.htmlEncode(format)}, ...]`;
}
return Ext.htmlEncode(render_simple_format(pdef) || '');
};
var real_path = function(path) {
return path.replace(/^.*\/_upgrade_(\/)?/, "/");
};
var permission_text = function(permission) {
let permhtml = "";
if (permission.user) {
if (!permission.description) {
if (permission.user === 'world') {
permhtml += "Accessible without any authentication.";
} else if (permission.user === 'all') {
permhtml += "Accessible by all authenticated users.";
} else {
permhtml += 'Onyl accessible by user "' +
permission.user + '"';
}
}
} else if (permission.check) {
permhtml += "<pre>Check: " +
Ext.htmlEncode(Ext.JSON.encode(permission.check)) + "</pre>";
} else if (permission.userParam) {
permhtml += `<div>Check if user matches parameter '${permission.userParam}'`;
} else if (permission.or) {
permhtml += "<div>Or<div style='padding-left: 10px;'>";
Ext.Array.each(permission.or, function(sub_permission) {
permhtml += permission_text(sub_permission);
})
permhtml += "</div></div>";
} else if (permission.and) {
permhtml += "<div>And<div style='padding-left: 10px;'>";
Ext.Array.each(permission.and, function(sub_permission) {
permhtml += permission_text(sub_permission);
})
permhtml += "</div></div>";
} else {
//console.log(permission);
permhtml += "Unknown syntax!";
}
return permhtml;
};
var render_docu = function(data) {
var md = data.info;
// console.dir(data);
var items = [];
var clicmdhash = {
GET: 'get',
POST: 'create',
PUT: 'set',
DELETE: 'delete'
};
Ext.Array.each(['GET', 'POST', 'PUT', 'DELETE'], function(method) {
var info = md[method];
if (info) {
var usage = "";
usage += "<table><tr><td>HTTP:&nbsp;&nbsp;&nbsp;</td><td>"
+ method + " " + real_path("/api2/json" + data.path) + "</td></tr>";
var sections = [
{
title: 'Description',
html: Ext.htmlEncode(info.description),
bodyPadding: 10
},
{
title: 'Usage',
html: usage,
bodyPadding: 10
}
];
if (info.parameters && info.parameters.properties) {
var pstore = Ext.create('Ext.data.Store', {
model: 'pve-param-schema',
proxy: {
type: 'memory'
},
groupField: 'optional',
sorters: [
{
property: 'name',
direction: 'ASC'
}
]
});
Ext.Object.each(info.parameters.properties, function(name, pdef) {
pdef.name = name;
pstore.add(pdef);
});
pstore.sort();
var groupingFeature = Ext.create('Ext.grid.feature.Grouping',{
enableGroupingMenu: false,
groupHeaderTpl: '<tpl if="groupValue">Optional</tpl><tpl if="!groupValue">Required</tpl>'
});
sections.push({
xtype: 'gridpanel',
title: 'Parameters',
features: [groupingFeature],
store: pstore,
viewConfig: {
trackOver: false,
stripeRows: true
},
columns: [
{
header: 'Name',
dataIndex: 'name',
flex: 1
},
{
header: 'Type',
dataIndex: 'type',
renderer: render_type,
flex: 1
},
{
header: 'Default',
dataIndex: 'default',
flex: 1
},
{
header: 'Format',
dataIndex: 'type',
renderer: render_format,
flex: 2
},
{
header: 'Description',
dataIndex: 'description',
renderer: render_description,
flex: 6
}
]
});
}
if (info.returns) {
var retinf = info.returns;
var rtype = retinf.type;
if (!rtype && retinf.items)
rtype = 'array';
if (!rtype)
rtype = 'object';
var rpstore = Ext.create('Ext.data.Store', {
model: 'pve-param-schema',
proxy: {
type: 'memory'
},
groupField: 'optional',
sorters: [
{
property: 'name',
direction: 'ASC'
}
]
});
var properties;
if (rtype === 'array' && retinf.items.properties) {
properties = retinf.items.properties;
}
if (rtype === 'object' && retinf.properties) {
properties = retinf.properties;
}
Ext.Object.each(properties, function(name, pdef) {
pdef.name = name;
rpstore.add(pdef);
});
rpstore.sort();
var groupingFeature = Ext.create('Ext.grid.feature.Grouping',{
enableGroupingMenu: false,
groupHeaderTpl: '<tpl if="groupValue">Optional</tpl><tpl if="!groupValue">Obligatory</tpl>'
});
var returnhtml;
if (retinf.items) {
returnhtml = '<pre>items: ' + Ext.htmlEncode(JSON.stringify(retinf.items, null, 4)) + '</pre>';
}
if (retinf.properties) {
returnhtml = returnhtml || '';
returnhtml += '<pre>properties:' + Ext.htmlEncode(JSON.stringify(retinf.properties, null, 4)) + '</pre>';
}
var rawSection = Ext.create('Ext.panel.Panel', {
bodyPadding: '0px 10px 10px 10px',
html: returnhtml,
hidden: true
});
sections.push({
xtype: 'gridpanel',
title: 'Returns: ' + rtype,
features: [groupingFeature],
store: rpstore,
viewConfig: {
trackOver: false,
stripeRows: true
},
columns: [
{
header: 'Name',
dataIndex: 'name',
flex: 1
},
{
header: 'Type',
dataIndex: 'type',
renderer: render_type,
flex: 1
},
{
header: 'Default',
dataIndex: 'default',
flex: 1
},
{
header: 'Format',
dataIndex: 'type',
renderer: render_format,
flex: 2
},
{
header: 'Description',
dataIndex: 'description',
renderer: render_description,
flex: 6
}
],
bbar: [
{
xtype: 'button',
text: 'Show RAW',
handler: function(btn) {
rawSection.setVisible(!rawSection.isVisible());
btn.setText(rawSection.isVisible() ? 'Hide RAW' : 'Show RAW');
}}
]
});
sections.push(rawSection);
}
if (!data.path.match(/\/_upgrade_/)) {
var permhtml = '';
if (!info.permissions) {
permhtml = "Root only.";
} else {
if (info.permissions.description) {
permhtml += "<div style='white-space:pre-wrap;padding-bottom:10px;'>" +
Ext.htmlEncode(info.permissions.description) + "</div>";
}
permhtml += permission_text(info.permissions);
}
// we do not have this information for PBS api
//if (!info.allowtoken) {
// permhtml += "<br />This API endpoint is not available for API tokens."
//}
sections.push({
title: 'Required permissions',
bodyPadding: 10,
html: permhtml
});
}
items.push({
title: method,
autoScroll: true,
defaults: {
border: false
},
items: sections
});
}
});
var ct = Ext.getCmp('docview');
ct.setTitle("Path: " + real_path(data.path));
ct.removeAll(true);
ct.add(items);
ct.setActiveTab(0);
};
Ext.define('Ext.form.SearchField', {
extend: 'Ext.form.field.Text',
alias: 'widget.searchfield',
emptyText: 'Search...',
flex: 1,
inputType: 'search',
listeners: {
'change': function(){
var value = this.getValue();
if (!Ext.isEmpty(value)) {
store.filter({
property: 'path',
value: value,
anyMatch: true
});
} else {
store.clearFilter();
}
}
}
});
var tree = Ext.create('Ext.tree.Panel', {
title: 'Resource Tree',
tbar: [
{
xtype: 'searchfield',
}
],
tools: [
{
type: 'expand',
tooltip: 'Expand all',
tooltipType: 'title',
callback: (tree) => tree.expandAll(),
},
{
type: 'collapse',
tooltip: 'Collapse all',
tooltipType: 'title',
callback: (tree) => tree.collapseAll(),
},
],
store: store,
width: 200,
region: 'west',
split: true,
margins: '5 0 5 5',
rootVisible: false,
listeners: {
selectionchange: function(v, selections) {
if (!selections[0])
return;
var rec = selections[0];
render_docu(rec.data);
location.hash = '#' + rec.data.path;
}
}
});
Ext.create('Ext.container.Viewport', {
layout: 'border',
renderTo: Ext.getBody(),
items: [
tree,
{
xtype: 'tabpanel',
title: 'Documentation',
id: 'docview',
region: 'center',
margins: '5 5 5 0',
layout: 'fit',
items: []
}
]
});
var deepLink = function() {
var path = window.location.hash.substring(1).replace(/\/\s*$/, '')
var endpoint = store.findNode('path', path);
if (endpoint) {
tree.getSelectionModel().select(endpoint);
tree.expandPath(endpoint.getPath());
render_docu(endpoint.data);
}
}
window.onhashchange = deepLink;
deepLink();
});


@ -1,31 +1,33 @@
Backup Client Usage
===================
The command line client is called :command:`proxmox-backup-client`.
The command line client for Proxmox Backup Server is called
:command:`proxmox-backup-client`.
.. _client_repository:
Backup Repository Locations
---------------------------
The client uses the following notation to specify a datastore repository
on the backup server.
The client uses the following format to specify a datastore repository
on the backup server (where username is specified in the form of user@realm):
[[username@]server[:port]:]datastore
The default value for ``username`` is ``root@pam``. If no server is specified,
the default is the local host (``localhost``).
You can specify a port if your backup server is only reachable on a different
port (e.g. with NAT and port forwarding).
You can specify a port if your backup server is only reachable on a non-default
port (for example, with NAT and port forwarding configurations).
Note that if the server is an IPv6 address, you have to write it with square
Note that if the server uses an IPv6 address, you have to write it with square
brackets (for example, `[fe80::01]`).
You can pass the repository with the ``--repository`` command line option, or
by setting the ``PBS_REPOSITORY`` environment variable.
Here some examples of valid repositories and the real values
Below are some examples of valid repositories and their corresponding real
values:
================================ ================== ================== ===========
Example User Host:Port Datastore
@ -46,16 +48,31 @@ Environment Variables
The default backup repository.
``PBS_PASSWORD``
When set, this value is used for the password required for the backup server.
You can also set this to a API token secret.
When set, this value is used as the password for the backup server.
You can also set this to an API token secret.
``PBS_PASSWORD_FD``, ``PBS_PASSWORD_FILE``, ``PBS_PASSWORD_CMD``
Like ``PBS_PASSWORD``, but read data from an open file descriptor, a file
name or from the `stdout` of a command, respectively. The first defined
environment variable from the order above is preferred.
``PBS_ENCRYPTION_PASSWORD``
When set, this value is used to access the secret encryption key (if
protected by password).
``PBS_FINGERPRINT`` When set, this value is used to verify the server
certificate (only used if the system CA certificates cannot validate the
certificate).
``PBS_ENCRYPTION_PASSWORD_FD``, ``PBS_ENCRYPTION_PASSWORD_FILE``, ``PBS_ENCRYPTION_PASSWORD_CMD``
Like ``PBS_ENCRYPTION_PASSWORD``, but read data from an open file descriptor,
a file name or from the `stdout` of a command, respectively. The first
defined environment variable from the order above is preferred.
``PBS_FINGERPRINT``
When set, this value is used to verify the server certificate (only used if
the system CA certificates cannot validate the certificate).
.. Note:: Passwords must be valid UTF-8 and may not contain newlines. For your
convenience, Proxmox Backup Server only uses the first line as password, so
you can add arbitrary comments after the first newline.
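For example, a non-interactive setup could export the repository and point ``PBS_PASSWORD_FILE`` at a file readable only by root; the file path and repository name below are placeholders, not defaults. With these set, the client reads both values from the environment instead of asking for them:

.. code-block:: console

   # export PBS_REPOSITORY=backup-server:store1
   # export PBS_PASSWORD_FILE=/root/.pbs-password
   # proxmox-backup-client snapshot list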
Output Format
@ -70,14 +87,15 @@ Creating Backups
----------------
This section explains how to create a backup from within the machine. This can
be a physical host, a virtual machine, or a container. Such backups may contain file
and image archives. There are no restrictions in this case.
be a physical host, a virtual machine, or a container. Such backups may contain
file and image archives. There are no restrictions in this case.
.. note:: If you want to backup virtual machines or containers on Proxmox VE, see :ref:`pve-integration`.
.. Note:: If you want to backup virtual machines or containers on Proxmox VE,
see :ref:`pve-integration`.
For the following example you need to have a backup server set up, working
credentials and need to know the repository name.
In the following examples we use ``backup-server:store1``.
For the following example, you need to have a backup server set up, have working
credentials, and know the repository name.
In the following examples, we use ``backup-server:store1``.
.. code-block:: console
@ -91,12 +109,12 @@ In the following examples we use ``backup-server:store1``.
Uploaded 12129 chunks in 87 seconds (564 MB/s).
End Time: 2019-12-03T10:36:29+01:00
This will prompt you for a password and then uploads a file archive named
This will prompt you for a password, then upload a file archive named
``root.pxar`` containing all the files in the ``/`` directory.
.. Caution:: Please note that the proxmox-backup-client does not
.. Caution:: Please note that proxmox-backup-client does not
automatically include mount points. Instead, you will see a short
``skip mount point`` notice for each of them. The idea is to
``skip mount point`` message for each of them. The idea is to
create a separate file archive for each mounted disk. You can
explicitly include them using the ``--include-dev`` option
(i.e. ``--include-dev /boot/efi``). You can use this option
@ -104,19 +122,19 @@ This will prompt you for a password and then uploads a file archive named
The ``--repository`` option can get quite long and is used by all
commands. You can avoid having to enter this value by setting the
environment variable ``PBS_REPOSITORY``. Note that if you would like this to remain set
over multiple sessions, you should instead add the below line to your
environment variable ``PBS_REPOSITORY``. Note that if you would like this to
remain set over multiple sessions, you should instead add the below line to your
``.bashrc`` file.
.. code-block:: console
# export PBS_REPOSITORY=backup-server:store1
After this you can execute all commands without specifying the ``--repository``
option.
After this, you can execute all commands without having to specify the
``--repository`` option.
One single backup is allowed to contain more than one archive. For example, if
you want to backup two disks mounted at ``/mnt/disk1`` and ``/mnt/disk2``:
A single backup is allowed to contain more than one archive. For example, if
you want to back up two disks mounted at ``/mnt/disk1`` and ``/mnt/disk2``:
.. code-block:: console
@ -130,26 +148,26 @@ archive source at the client. The format is:
<archive-name>.<type>:<source-path>
Common types are ``.pxar`` for file archives, and ``.img`` for block
device images. To create a backup of a block device run the following command:
Common types are ``.pxar`` for file archives and ``.img`` for block
device images. To create a backup of a block device, run the following command:
.. code-block:: console
# proxmox-backup-client backup mydata.img:/dev/mylvm/mydata
Excluding files/folders from a backup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Excluding Files/Directories from a Backup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sometimes it is desired to exclude certain files or folders from a backup archive.
Sometimes it is desired to exclude certain files or directories from a backup archive.
To tell the Proxmox Backup client when and how to ignore files and directories,
place a text file called ``.pxarexclude`` in the filesystem hierarchy.
place a text file named ``.pxarexclude`` in the filesystem hierarchy.
Whenever the backup client encounters such a file in a directory, it interprets
each line as glob match patterns for files and directories that are to be excluded
each line as a glob match pattern for files and directories that are to be excluded
from the backup.
The file must contain a single glob pattern per line. Empty lines are ignored.
The same is true for lines starting with ``#``, which indicates a comment.
The file must contain a single glob pattern per line. Empty lines and lines
starting with ``#`` (indicating a comment) are ignored.
A ``!`` at the beginning of a line reverses the glob match pattern from an exclusion
to an explicit inclusion. This makes it possible to exclude all entries in a
directory except for a few single files/subdirectories.
@ -160,23 +178,24 @@ the given patterns. It is only possible to match files in this directory and its
``\`` is used to escape special glob characters.
``?`` matches any single character.
``*`` matches any character, including an empty string.
``**`` is used to match subdirectories. It can be used to, for example, exclude
all files ending in ``.tmp`` within the directory or subdirectories with the
following pattern ``**/*.tmp``.
``**`` is used to match the current directory and subdirectories. For example, with
the pattern ``**/*.tmp``, it would exclude all files ending in ``.tmp`` within
a directory and its subdirectories.
``[...]`` matches a single character from any of the provided characters within
the brackets. ``[!...]`` does the complementary and matches any single character
not contained within the brackets. It is also possible to specify ranges with two
characters separated by ``-``. For example, ``[a-z]`` matches any lowercase
alphabetic character and ``[0-9]`` matches any one single digit.
alphabetic character, and ``[0-9]`` matches any single digit.
The order of the glob match patterns defines whether a file is included or
excluded, that is to say later entries override previous ones.
excluded, that is to say, later entries override earlier ones.
This is also true for match patterns encountered deeper down the directory tree,
which can override a previous exclusion.
Be aware that excluded directories will **not** be read by the backup client.
Thus, a ``.pxarexclude`` file in an excluded subdirectory will have no effect.
``.pxarexclude`` files are treated as regular files and will be included in the
backup archive.
.. Note:: Excluded directories will **not** be read by the backup client. Thus,
a ``.pxarexclude`` file in an excluded subdirectory will have no effect.
``.pxarexclude`` files are treated as regular files and will be included in
the backup archive.
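As a brief illustration of these rules, a hypothetical ``.pxarexclude`` file could contain the following; all file and directory names here are invented for the example:

.. code-block:: console

   # exclude all files ending in .tmp, in this directory and below
   **/*.tmp
   # exclude everything inside the cache directory ...
   cache/*
   # ... but explicitly keep one file in it
   !cache/important.dat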
For example, consider the following directory structure:
@ -264,7 +283,7 @@ You can avoid entering the passwords by setting the environment
variables ``PBS_PASSWORD`` and ``PBS_ENCRYPTION_PASSWORD``.
Using a master key to store and recover encryption keys
Using a Master Key to Store and Recover Encryption Keys
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can also use ``proxmox-backup-client key`` to create an RSA public/private
@ -344,7 +363,7 @@ To set up a master key:
keep keys ordered and in a place that is separate from the contents being
backed up. It can happen, for example, that you back up an entire system, using
a key on that system. If the system then becomes inaccessible for any reason
and needs to be restored, this will not be possible as the encryption key will be
and needs to be restored, this will not be possible, as the encryption key will be
lost along with the broken system.
It is recommended that you keep your master key safe, but easily accessible, in
@ -366,10 +385,10 @@ version of your master key. The following command sends the output of the
Restoring Data
--------------
The regular creation of backups is a necessary step to avoiding data
loss. More importantly, however, is the restoration. It is good practice to perform
periodic recovery tests to ensure that you can access the data in
case of problems.
The regular creation of backups is a necessary step in avoiding data loss. More
importantly, however, is the restoration. It is good practice to perform
periodic recovery tests to ensure that you can access the data in case of
disaster.
First, you need to find the snapshot which you want to restore. The snapshot
list command provides a list of all the snapshots on the server:
@ -428,23 +447,22 @@ to use the interactive recovery shell.
The interactive recovery shell is a minimal command line interface that
utilizes the metadata stored in the catalog to quickly list, navigate and
search files in a file archive.
search for files in a file archive.
To restore files, you can select them individually or match them with a glob
pattern.
Using the catalog for navigation reduces the overhead considerably because only
the catalog needs to be downloaded and, optionally, decrypted.
The actual chunks are only accessed if the metadata in the catalog is not enough
or for the actual restore.
The actual chunks are only accessed if the metadata in the catalog is
insufficient or for the actual restore.
Similar to common UNIX shells ``cd`` and ``ls`` are the commands used to change
Similar to common UNIX shells, ``cd`` and ``ls`` are the commands used to change
working directory and list directory contents in the archive.
``pwd`` shows the full path of the current working directory with respect to the
archive root.
Being able to quickly search the contents of the archive is a commonly needed feature.
That's where the catalog is most valuable.
For example:
The ability to quickly search the contents of the archive is a commonly required
feature. That's where the catalog is most valuable. For example:
.. code-block:: console
@ -455,8 +473,8 @@ For example:
pxar:/ > restore-selected /target/path
...
This will find and print all files ending in ``.txt`` located in ``etc/`` or a
subdirectory and add the corresponding pattern to the list for subsequent restores.
This will find and print all files ending in ``.txt`` located in ``etc/`` or its
subdirectories, and add the corresponding pattern to the list for subsequent restores.
``list-selected`` shows these patterns and ``restore-selected`` finally restores
all files in the archive matching the patterns to ``/target/path`` on the local
host. This will scan the whole archive.
@ -481,7 +499,7 @@ Mounting of Archives via FUSE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The :term:`FUSE` implementation for the pxar archive allows you to mount a
file archive as a read-only filesystem to a mountpoint on your host.
file archive as a read-only filesystem to a mount point on your host.
.. code-block:: console
@ -497,7 +515,7 @@ This allows you to access the full contents of the archive in a seamless manner.
load on your host, depending on the operations you perform on the mounted
filesystem.
To unmount the filesystem use the ``umount`` command on the mountpoint:
To unmount the filesystem, use the ``umount`` command on the mount point:
.. code-block:: console
@ -506,7 +524,7 @@ To unmount the filesystem use the ``umount`` command on the mountpoint:
Login and Logout
----------------
The client tool prompts you to enter the logon password as soon as you
The client tool prompts you to enter the login password as soon as you
want to access the backup server. The server checks your credentials
and responds with a ticket that is valid for two hours. The client
tool automatically stores that ticket and uses it for further requests
@ -535,7 +553,7 @@ Changing the Owner of a Backup Group
By default, the owner of a backup group is the user which was used to originally
create that backup group (or in the case of sync jobs, ``root@pam``). This
means that if a user ``mike@pbs`` created a backup, another user ``john@pbs``
can not be used to create backups in that same backup group. In case you want
to change the owner of a backup, you can do so with the below command, using a
user that has ``Datastore.Modify`` privileges on the datastore.
@ -636,6 +654,25 @@ shows the list of existing snapshots and what actions prune would take.
in the chunk-store. The chunk-store still contains the data blocks. To free
space you need to perform :ref:`client_garbage-collection`.
It is also possible to protect single snapshots from being pruned or deleted:
.. code-block:: console
# proxmox-backup-client snapshot protected update <snapshot> true
This will set the protected flag on the snapshot and prevent pruning or manual
deletion of this snapshot until the flag is removed again with:
.. code-block:: console
# proxmox-backup-client snapshot protected update <snapshot> false
When a group with a protected snapshot is deleted, only the non-protected
ones are removed and the group will remain.
.. note:: This flag will not be synced when using pull or sync jobs. If you
want to protect a synced snapshot, you have to manually do this again on
the target backup server.
.. _client_garbage-collection:
@ -661,7 +698,7 @@ unused data blocks are removed.
(access time) property. Filesystems are mounted with the ``relatime`` option
by default. This results in a better performance by only updating the
``atime`` property if the last access has been at least 24 hours ago. The
downside is, that touching a chunk within these 24 hours will not always
downside is that touching a chunk within these 24 hours will not always
update its ``atime`` property.
Chunks in the grace period will be logged at the end of the garbage
@ -685,8 +722,8 @@ unused data blocks are removed.
Average chunk size: 2486565
TASK OK
.. todo:: howto run garbage-collection at regular intervals (cron)
Garbage collection can also be scheduled using ``proxmox-backup-manager`` or
from the Proxmox Backup Server's web interface.
Benchmarking
------------


@ -1,10 +1,10 @@
Backup Protocol
===============
Proxmox Backup Server uses a REST based API. While the management
interface use normal HTTP, the actual backup and restore interface use
Proxmox Backup Server uses a REST-based API. While the management
interface uses normal HTTP, the actual backup and restore interface uses
HTTP/2 for improved performance. Both HTTP and HTTP/2 are well known
standards, so the following section assumes that you are familiar on
standards, so the following section assumes that you are familiar with
how to use them.
@ -13,35 +13,35 @@ Backup Protocol API
To start a new backup, the API call ``GET /api2/json/backup`` needs to
be upgraded to a HTTP/2 connection using
``proxmox-backup-protocol-v1`` as protocol name::
``proxmox-backup-protocol-v1`` as the protocol name::
GET /api2/json/backup HTTP/1.1
UPGRADE: proxmox-backup-protocol-v1
The server replies with HTTP 101 Switching Protocol status code,
and you can then issue REST commands on that updated HTTP/2 connection.
The server replies with the ``HTTP 101 Switching Protocol`` status code,
and you can then issue REST commands on the updated HTTP/2 connection.
The backup protocol allows you to upload three different kind of files:
- Chunks and blobs (binary data)
- Fixed Indexes (List of chunks with fixed size)
- Fixed indexes (List of chunks with fixed size)
- Dynamic Indexes (List of chunk with variable size)
- Dynamic indexes (List of chunks with variable size)
The following section gives a short introduction how to upload such
The following section provides a short introduction on how to upload such
files. Please use the `API Viewer <api-viewer/index.html>`_ for
details about available REST commands.
details about the available REST commands.
Upload Blobs
~~~~~~~~~~~~
Uploading blobs is done using ``POST /blob``. The HTTP body contains the
data encoded as :ref:`Data Blob <data-blob-format>`).
Blobs are uploaded using ``POST /blob``. The HTTP body contains the
data encoded as :ref:`Data Blob <data-blob-format>`.
The file name needs to end with ``.blob``, and is automatically added
to the backup manifest.
The file name must end with ``.blob``, and is automatically added
to the backup manifest, following the call to ``POST /finish``.
Upload Chunks
@ -56,40 +56,41 @@ encoded as :ref:`Data Blob <data-blob-format>`).
Upload Fixed Indexes
~~~~~~~~~~~~~~~~~~~~
Fixed indexes are use to store VM image data. The VM image is split
Fixed indexes are used to store VM image data. The VM image is split
into equally sized chunks, which are uploaded individually. The index
file simply contains a list to chunk digests.
file simply contains a list of chunk digests.
You create a fixed index with ``POST /fixed_index``. Then upload
You create a fixed index with ``POST /fixed_index``. Then, upload
chunks with ``POST /fixed_chunk``, and append them to the index with
``PUT /fixed_index``. When finished, you need to close the index using
``POST /fixed_close``.
The file name needs to end with ``.fidx``, and is automatically added
to the backup manifest.
to the backup manifest, following the call to ``POST /finish``.
Upload Dynamic Indexes
~~~~~~~~~~~~~~~~~~~~~~
Dynamic indexes are use to store file archive data. The archive data
Dynamic indexes are used to store file archive data. The archive data
is split into dynamically sized chunks, which are uploaded
individually. The index file simply contains a list to chunk digests
individually. The index file simply contains a list of chunk digests
and offsets.
You create a dynamic sized index with ``POST /dynamic_index``. Then
You can create a dynamically sized index with ``POST /dynamic_index``. Then,
upload chunks with ``POST /dynamic_chunk``, and append them to the index with
``PUT /dynamic_index``. When finished, you need to close the index using
``POST /dynamic_close``.
The file name needs to end with ``.didx``, and is automatically added
to the backup manifest.
The filename needs to end with ``.didx``, and is automatically added
to the backup manifest, following the call to ``POST /finish``.
Finish Backup
~~~~~~~~~~~~~
Once you have uploaded all data, you need to call ``POST
/finish``. This commits all data and ends the backup protocol.
Once you have uploaded all data, you need to call ``POST /finish``. This
commits all data and ends the backup protocol.
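Taken together, a backup session that uploads a single fixed index therefore consists, schematically, of the following sequence of calls (simplified; consult the API Viewer for the exact parameters of each call):

.. code-block:: console

   GET  /api2/json/backup    # upgrade the connection to the backup protocol
   POST /blob                # upload blobs (file names ending in .blob)
   POST /fixed_index         # create a new fixed index
   POST /fixed_chunk         # upload the chunks (repeated as needed)
   PUT  /fixed_index         # append the chunk digests to the index
   POST /fixed_close         # close the index
   POST /finish              # commit all uploaded data and end the backup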
Restore/Reader Protocol API
@ -102,39 +103,39 @@ be upgraded to a HTTP/2 connection using
GET /api2/json/reader HTTP/1.1
UPGRADE: proxmox-backup-reader-protocol-v1
The server replies with HTTP 101 Switching Protocol status code,
The server replies with the ``HTTP 101 Switching Protocol`` status code,
and you can then issue REST commands on that updated HTTP/2 connection.
The reader protocol allows you to download three different kind of files:
The reader protocol allows you to download three different kinds of files:
- Chunks and blobs (binary data)
- Fixed Indexes (List of chunks with fixed size)
- Fixed indexes (list of chunks with fixed size)
- Dynamic Indexes (List of chunk with variable size)
- Dynamic indexes (list of chunks with variable size)
The following section gives a short introduction how to download such
The following section provides a short introduction on how to download such
files. Please use the `API Viewer <api-viewer/index.html>`_ for details about
available REST commands.
the available REST commands.
Download Blobs
~~~~~~~~~~~~~~
Downloading blobs is done using ``GET /download``. The HTTP body contains the
Blobs are downloaded using ``GET /download``. The HTTP body contains the
data encoded as :ref:`Data Blob <data-blob-format>`.
Download Chunks
~~~~~~~~~~~~~~~
Downloading chunks is done using ``GET /chunk``. The HTTP body contains the
data encoded as :ref:`Data Blob <data-blob-format>`).
Chunks are downloaded using ``GET /chunk``. The HTTP body contains the
data encoded as :ref:`Data Blob <data-blob-format>`.
Download Index Files
~~~~~~~~~~~~~~~~~~~~
Downloading index files is done using ``GET /download``. The HTTP body
Index files are downloaded using ``GET /download``. The HTTP body
contains the data encoded as :ref:`Fixed Index <fixed-index-format>`
or :ref:`Dynamic Index <dynamic-index-format>`.
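Schematically, a restore session therefore boils down to the following calls (simplified):

.. code-block:: console

   GET /api2/json/reader     # upgrade the connection to the reader protocol
   GET /download             # fetch an index file or a blob
   GET /chunk                # fetch the referenced chunks (repeated as needed)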


@ -37,7 +37,7 @@ Each field can contain multiple values in the following formats:
* and a combination of the above: e.g., 01,05..10,12/02
* or a `*` for every possible value: e.g., \*:00
There are some special values that have specific meaning:
There are some special values that have a specific meaning:
================================= ==============================
Value Syntax
@ -81,19 +81,19 @@ Not all features of systemd calendar events are implemented:
* no Unix timestamps (e.g. `@12345`): instead use date and time to specify
a specific point in time
* no timezone: all schedules use the set timezone on the server
* no timezone: all schedules use the timezone of the server
* no sub-second resolution
* no reverse day syntax (e.g. 2020-03~01)
* no repetition of ranges (e.g. 1..10/2)
Notes on scheduling
Notes on Scheduling
-------------------
In `Proxmox Backup`_ scheduling for most tasks is done in the
In `Proxmox Backup`_, scheduling for most tasks is done in the
`proxmox-backup-proxy`. This daemon checks all job schedules
if they are due every minute. This means that even if
every minute, to see if any are due. This means that even though
`calendar events` can contain seconds, it will only be checked
once a minute.
once per minute.
Also, all schedules will be checked against the timezone set
in the `Proxmox Backup`_ server.
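As a small illustration of the field formats described above, schedule values typically take forms like the following; the concrete values are only examples, assuming the usual hour:minute notation shown earlier:

.. code-block:: console

   *:00     # at every full hour
   06:30    # every day at 06:30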


@ -21,3 +21,7 @@ Command Line Tools
.. include:: pxar/description.rst
``proxmox-backup-debug``
~~~~~~~~~~~~~~~~~~~~~~~~
.. include:: proxmox-backup-debug/description.rst


@ -10,7 +10,7 @@ Command Syntax
Catalog Shell Commands
~~~~~~~~~~~~~~~~~~~~~~
Those command are available when you start an interactive restore shell:
The following commands are available in an interactive restore shell:
.. code-block:: console


@ -2,13 +2,13 @@ This file contains the access control list for the Proxmox Backup
Server API.
Each line starts with ``acl:``, followed by 4 additional values
separated by collon.
separated by colon.
:propagate: Propagate permissions down the hierachrchy
:propagate: Propagate permissions down the hierarchy
:path: The object path
:User/Token: List of users and token
:User/Token: List of users and tokens
:Role: List of assigned roles
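For illustration, a single entry might look roughly like the line below, following the field order given above (propagate, path, user/token, role); the concrete propagate flag, path, user and role are invented for the example, so check an existing ``acl.cfg`` for the authoritative layout:

.. code-block:: console

   acl:1:/datastore/store1:john@pbs:DatastoreAdmin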


@ -1,9 +1,9 @@
The file contains a list of datastore configuration sections. Each
section starts with a header ``datastore: <name>``, followed by the
This file contains a list of datastore configuration sections. Each
section starts with the header ``datastore: <name>``, followed by the
datastore configuration options.
::
datastore: <name1>
path <path1>
<option1> <value1>


@ -1,4 +1,4 @@
Each entry starts with a header ``pool: <name>``, followed by the
Each entry starts with the header ``pool: <name>``, followed by the
media pool configuration options.
::
@ -8,6 +8,6 @@ media pool configuration options.
retention overwrite
pool: ...
You can use the ``proxmox-tape pool`` command to manipulate this file.


@ -1,6 +1,6 @@
This file contains information used to access remote servers.
Each entry starts with a header ``remote: <name>``, followed by the
Each entry starts with the header ``remote: <name>``, followed by the
remote configuration options.
::
@ -11,7 +11,7 @@ remote configuration options.
...
remote: ...
You can use the ``proxmox-backup-manager remote`` command to manipulate
this file.


@ -1,4 +1,4 @@
Each entry starts with a header ``sync: <name>``, followed by the
Each entry starts with the header ``sync: <name>``, followed by the
job configuration options.
::
@ -9,7 +9,7 @@ job configuration options.
remote lina
sync: ...
You can use the ``proxmox-backup-manager sync-job`` command to manipulate
this file.


@ -1,4 +1,4 @@
Each entry starts with a header ``backup: <name>``, followed by the
Each entry starts with the header ``backup: <name>``, followed by the
job configuration options.
::


@ -1,7 +1,7 @@
Each LTO drive configuration section starts with a header ``lto: <name>``,
Each LTO drive configuration section starts with the header ``lto: <name>``,
followed by the drive configuration options.
Tape changer configurations starts with ``changer: <name>``,
Tape changer configurations start with the header ``changer: <name>``,
followed by the changer configuration options.
::
@ -18,5 +18,5 @@ followed by the changer configuration options.
You can use the ``proxmox-tape drive`` and ``proxmox-tape changer``
commands to manipulate this file.
.. NOTE:: The ``virtual:`` drive type is experimental and onyl used
.. NOTE:: The ``virtual:`` drive type is experimental and should only be used
for debugging.


@ -1,9 +1,9 @@
This file contains the list of API users and API tokens.
Each user configuration section starts with a header ``user: <name>``,
Each user configuration section starts with the header ``user: <name>``,
followed by the user configuration options.
API token configuration starts with a header ``token:
API token configuration starts with the header ``token:
<userid!token_name>``, followed by the token configuration. The data
used to authenticate tokens is stored in a separate file
(``token.shadow``).


@ -1,4 +1,4 @@
Each entry starts with a header ``verification: <name>``, followed by the
Each entry starts with the header ``verification: <name>``, followed by the
job configuration options.
::


@ -1,7 +1,7 @@
Configuration Files
===================
All Proxmox Backup Server configuration files resides inside directory
All Proxmox Backup Server configuration files reside in the directory
``/etc/proxmox-backup/``.


@ -1,5 +1,5 @@
.. Epilog (included at top of each file)
We use this file to define external links and common replacement
patterns.
@ -13,7 +13,6 @@
.. _Proxmox: https://www.proxmox.com
.. _Proxmox Community Forum: https://forum.proxmox.com
.. _Proxmox Virtual Environment: https://www.proxmox.com/proxmox-ve
.. FIXME
.. _Proxmox Backup: https://pbs.proxmox.com/wiki/index.php/Main_Page
.. _PBS Development List: https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
.. _reStructuredText: https://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html
@ -23,6 +22,7 @@
.. _Virtual machine: https://en.wikipedia.org/wiki/Virtual_machine
.. _APT: http://en.wikipedia.org/wiki/Advanced_Packaging_Tool
.. _QEMU: https://www.qemu.org/
.. _LXC: https://linuxcontainers.org/lxc/introduction/
.. _Client-server model: https://en.wikipedia.org/wiki/Client-server_model
.. _AE: https://en.wikipedia.org/wiki/Authenticated_encryption


@ -24,11 +24,13 @@ future plans to support 32-bit processors.
How long will my Proxmox Backup Server version be supported?
------------------------------------------------------------
+-----------------------+--------------------+---------------+------------+--------------------+
|Proxmox Backup Version | Debian Version | First Release | Debian EOL | Proxmox Backup EOL |
+=======================+====================+===============+============+====================+
|Proxmox Backup 1.x | Debian 10 (Buster) | 2020-11 | tba | tba |
+-----------------------+--------------------+---------------+------------+--------------------+
+-----------------------+----------------------+---------------+------------+--------------------+
|Proxmox Backup Version | Debian Version | First Release | Debian EOL | Proxmox Backup EOL |
+=======================+======================+===============+============+====================+
|Proxmox Backup 2.x | Debian 11 (Bullseye) | 2021-07 | tba | tba |
+-----------------------+----------------------+---------------+------------+--------------------+
|Proxmox Backup 1.x | Debian 10 (Buster) | 2020-11 | ~Q2/2022 | Q2-Q3/2022 |
+-----------------------+----------------------+---------------+------------+--------------------+
Can I copy or synchronize my datastore to another location?
@ -67,6 +69,6 @@ be able to read the data.
Is the backup incremental/deduplicated?
---------------------------------------
With Proxmox Backup Server, backups are sent incremental and data is
deduplicated on the server.
This minimizes both the storage consumed and the network impact.
With Proxmox Backup Server, backups are sent incrementally to the server, and
data is then deduplicated on the server. This minimizes both the storage
consumed and the impact on the network.


@ -14,7 +14,8 @@ Proxmox File Archive Format (``.pxar``)
Data Blob Format (``.blob``)
----------------------------
The data blob format is used to store small binary data. The magic number decides the exact format:
The data blob format is used to store small binary data. The magic number
decides the exact format:
.. list-table::
:widths: auto
@ -32,7 +33,8 @@ The data blob format is used to store small binary data. The magic number decide
- encrypted
- compressed
Compression algorithm is ``zstd``. Encryption cipher is ``AES_256_GCM``.
The compression algorithm used is ``zstd``. The encryption cipher is
``AES_256_GCM``.
Unencrypted blobs use the following format:
@ -43,15 +45,15 @@ Unencrypted blobs use the following format:
* - ``CRC32: [u8; 4]``
* - ``Data: (max 16MiB)``
Encrypted blobs additionally contains a 16 byte IV, followed by a 16
byte Authenticated Encyryption (AE) tag, followed by the encrypted
data:
Encrypted blobs additionally contain a 16 byte initialization vector (IV),
followed by a 16 byte authenticated encryption (AE) tag, followed by the
encrypted data:
.. list-table::
* - ``MAGIC: [u8; 8]``
* - ``CRC32: [u8; 4]``
* - ``ÌV: [u8; 16]``
* - ``IV: [u8; 16]``
* - ``TAG: [u8; 16]``
* - ``Data: (max 16MiB)``
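As a small worked example based on the fields above, an encrypted blob carrying ``N`` bytes of payload data occupies ``8 + 4 + 16 + 16 + N = 44 + N`` bytes on disk: magic number, CRC32, IV and AE tag, followed by the encrypted data itself.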
@ -72,19 +74,19 @@ All numbers are stored as little-endian.
* - ``ctime: i64``,
- Creation Time (epoch)
* - ``index_csum: [u8; 32]``,
- Sha256 over the index (without header) ``SHA256(digest1||digest2||...)``
- SHA-256 over the index (without header) ``SHA256(digest1||digest2||...)``
* - ``size: u64``,
- Image size
* - ``chunk_size: u64``,
- Chunk size
* - ``reserved: [u8; 4016]``,
- overall header size is one page (4096 bytes)
- Overall header size is one page (4096 bytes)
* - ``digest1: [u8; 32]``
- first chunk digest
- First chunk digest
* - ``digest2: [u8; 32]``
- next chunk
- Second chunk digest
* - ...
- next chunk ...
- Next chunk digest ...
.. _dynamic-index-format:
@ -103,16 +105,16 @@ All numbers are stored as little-endian.
* - ``ctime: i64``,
- Creation Time (epoch)
* - ``index_csum: [u8; 32]``,
- Sha256 over the index (without header) ``SHA256(offset1||digest1||offset2||digest2||...)``
- SHA-256 over the index (without header) ``SHA256(offset1||digest1||offset2||digest2||...)``
* - ``reserved: [u8; 4032]``,
- Overall header size is one page (4096 bytes)
* - ``offset1: u64``
- End of first chunk
* - ``digest1: [u8; 32]``
- first chunk digest
- First chunk digest
* - ``offset2: u64``
- End of second chunk
* - ``digest2: [u8; 32]``
- second chunk digest
- Second chunk digest
* - ...
- next chunk offset/digest
- Next chunk offset/digest
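Both index formats can also be looked at directly from the shell. The following is only a hedged sketch — it assumes the ``inspect file`` subcommand of the ``proxmox-backup-debug`` tool (documented later in this guide) is available, and the file name is merely an example:
.. code-block:: console
# proxmox-backup-debug inspect file drive-scsi0.img.fidx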

View File

@ -11,7 +11,7 @@ Glossary
`Container`_
A container is an isolated user space. Programs run directly on
the host's kernel, but with limited access to the host resources.
the host's kernel, but with limited access to the host's resources.
Datastore
@ -23,19 +23,19 @@ Glossary
Rust is a new, fast and memory-efficient system programming
language. It has no runtime or garbage collector. Rust's rich type
system and ownership model guarantee memory-safety and
thread-safety. I can eliminate many classes of bugs
thread-safety. This can eliminate many classes of bugs
at compile-time.
`Sphinx`_
Is a tool that makes it easy to create intelligent and
beautiful documentation. It was originally created for the
documentation of the Python programming language. It has excellent facilities for the
Is a tool that makes it easy to create intelligent and nicely formatted
documentation. It was originally created for the documentation of the
Python programming language. It has excellent facilities for the
documentation of software projects in a range of languages.
`reStructuredText`_
Is an easy-to-read, what-you-see-is-what-you-get plaintext
Is an easy-to-read, what-you-see-is-what-you-get, plaintext
markup syntax and parser system.
`FUSE`

View File

@ -8,8 +8,9 @@ tools. The web interface also provides a built-in console, so if you prefer the
command line or need some extra control, you have this option.
The web interface can be accessed via https://youripaddress:8007. The default
login is `root`, and the password is the one specified during the installation
process.
login is `root`, and the password is either the one specified during the
installation process or the password of the root user, in case of installation
on top of Debian.
Features
@ -48,12 +49,13 @@ GUI Overview
The Proxmox Backup Server web interface consists of 3 main sections:
* **Header**: At the top. This shows version information, and contains buttons to view
documentation, monitor running tasks, set the language and logout.
* **Sidebar**: On the left. This contains the configuration options for
* **Header**: At the top. This shows version information and contains buttons to
view documentation, monitor running tasks, set the language, configure various
display settings, and log out.
* **Sidebar**: On the left. This contains the administration options for
the server.
* **Configuration Panel**: In the center. This contains the control interface for the
configuration options in the *Sidebar*.
* **Configuration Panel**: In the center. This contains the respective control
interfaces for the administration options in the *Sidebar*.
Sidebar
@ -74,12 +76,14 @@ previous and currently running tasks, and subscription information.
Configuration
^^^^^^^^^^^^^
The Configuration section contains some system configuration options, such as
time and network configuration. It also contains the following subsections:
The Configuration section contains some system options, such as time, network,
WebAuthn, and HTTP proxy configuration. It also contains the following
subsections:
* **Access Control**: Add and manage users, API tokens, and the permissions
associated with these items
* **Remotes**: Add, edit and remove remotes (see :term:`Remote`)
* **Certificates**: Manage ACME accounts and create SSL certificates.
* **Subscription**: Upload a subscription key, view subscription status and
access a text-based system report.
@ -98,6 +102,7 @@ tasks and information. These are:
resource usage statistics
* **Services**: Manage and monitor system services
* **Updates**: An interface for upgrading packages
* **Repositories**: An interface for configuring APT repositories
* **Syslog**: View log messages from the server
* **Tasks**: Task history with multiple filter options
@ -110,7 +115,7 @@ The administration menu item also contains a disk management subsection:
* **Disks**: View information on available disks
* **Directory**: Create and view information on *ext4* and *xfs* disks
* **ZFS**: Create and view information on *ZFS* disks
* **ZFS**: Create and view information on *ZFS* disks
Tape Backup
^^^^^^^^^^^
@ -119,11 +124,20 @@ Tape Backup
:align: right
:alt: Tape Backup: Tape changer overview
The `Tape Backup`_ section contains a top panel, managing tape media sets,
inventories, drives, changers and the tape backup jobs itself.
The `Tape Backup`_ section contains a top panel, with options for managing tape
media sets, inventories, drives, changers, encryption keys, and the tape backup
jobs themselves. The tabs are as follows:
It also contains a subsection per standalone drive and per changer, with a
status and management view for those devices.
* **Content**: Information on the contents of the tape backup
* **Inventory**: Manage the tapes attached to the system
* **Changers**: Manage tape loading devices
* **Drives**: Manage drives used for reading and writing to tapes
* **Media Pools**: Manage logical pools of tapes
* **Encryption Keys**: Manage tape backup encryption keys
* **Backup Jobs**: Manage tape backup jobs
The section also contains a subsection per standalone drive and per changer,
with a status and management view for those devices.
Datastore
^^^^^^^^^
@ -133,9 +147,9 @@ Datastore
:alt: Datastore Configuration
The Datastore section contains interfaces for creating and managing
datastores. It contains a button to create a new datastore on the server, as
well as a subsection for each datastore on the system, in which you can use the
top panel to view:
datastores. It also contains a button for creating a new datastore on the
server, as well as a subsection for each datastore on the system, in which you
can use the top panel to view:
* **Summary**: Access a range of datastore usage statistics
* **Content**: Information on the datastore's backup groups and their respective
@ -144,5 +158,7 @@ top panel to view:
collection <client_garbage-collection>` operations, and run garbage collection
manually
* **Sync Jobs**: Create, manage and run :ref:`syncjobs` from remote servers
* **Verify Jobs**: Create, manage and run :ref:`maintenance_verification` jobs on the
datastore
* **Verify Jobs**: Create, manage and run :ref:`maintenance_verification` jobs
on the datastore
* **Options**: Configure notification and verification settings
* **Permissions**: Manage permissions on the datastore

Binary file not shown.


Binary file not shown.


View File

@ -19,24 +19,24 @@ for various management tasks such as disk management.
`Proxmox Backup`_ without the server part.
The disk image (ISO file) provided by Proxmox includes a complete Debian system
("buster" for version 1.x) as well as all necessary packages for the `Proxmox Backup`_ server.
as well as all necessary packages for the `Proxmox Backup`_ Server.
The installer will guide you through the setup process and allow
you to partition the local disk(s), apply basic system configurations
(e.g. timezone, language, network), and install all required packages.
you to partition the local disk(s), apply basic system configuration
(for example timezone, language, network), and install all required packages.
The provided ISO will get you started in just a few minutes, and is the
recommended method for new and existing users.
Alternatively, `Proxmox Backup`_ server can be installed on top of an
Alternatively, `Proxmox Backup`_ Server can be installed on top of an
existing Debian system.
Install `Proxmox Backup`_ with the Installer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install `Proxmox Backup`_ Server using the Installer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Download the ISO from |DOWNLOADS|.
It includes the following:
* The `Proxmox Backup`_ server installer, which partitions the local
* The `Proxmox Backup`_ Server installer, which partitions the local
disk(s) with ext4, xfs or ZFS, and installs the operating system
* Complete operating system (Debian Linux, 64-bit)
@ -63,7 +63,7 @@ standard Debian installation. After configuring the
# apt-get update
# apt-get install proxmox-backup-server
The commands above keep the current (Debian) kernel and install a minimal
The above commands keep the current (Debian) kernel and install a minimal
set of required packages.
If you want to install the same set of packages as the installer

View File

@ -4,15 +4,15 @@ Introduction
What is Proxmox Backup Server?
------------------------------
Proxmox Backup Server is an enterprise-class, client-server backup software
package that backs up :term:`virtual machine`\ s, :term:`container`\ s, and
Proxmox Backup Server is an enterprise-class, client-server backup solution that
is capable of backing up :term:`virtual machine`\ s, :term:`container`\ s, and
physical hosts. It is specially optimized for the `Proxmox Virtual Environment`_
platform and allows you to back up your data securely, even between remote
sites, providing easy management with a web-based user interface.
sites, providing easy management through a web-based user interface.
It supports deduplication, compression, and authenticated
encryption (AE_). Using :term:`Rust` as the implementation language guarantees high
performance, low resource usage, and a safe, high-quality codebase.
encryption (AE_). Using :term:`Rust` as the implementation language guarantees
high performance, low resource usage, and a safe, high-quality codebase.
Proxmox Backup uses state of the art cryptography for both client-server
communication and backup content :ref:`encryption <client_encryption>`. All
@ -28,22 +28,23 @@ Proxmox Backup Server uses a `client-server model`_. The server stores the
backup data and provides an API to create and manage datastores. With the
API, it's also possible to manage disks and other server-side resources.
The backup client uses this API to access the backed up data. With the command
line tool ``proxmox-backup-client`` you can create backups and restore data.
For QEMU_ with `Proxmox Virtual Environment`_ we deliver an integrated client.
The backup client uses this API to access the backed up data. You can use the
``proxmox-backup-client`` command line tool to create and restore file backups.
For QEMU_ and LXC_ within `Proxmox Virtual Environment`_, we deliver an
integrated client.
A single backup is allowed to contain several archives. For example, when you
backup a :term:`virtual machine`, each disk is stored as a separate archive
inside that backup. The VM configuration itself is stored as an extra file.
This way, it's easy to access and restore only important parts of the backup,
without the need to scan the whole backup.
This way, it's easy to access and restore only the important parts of the
backup, without the need to scan the whole backup.
Main Features
-------------
:Support for Proxmox VE: The `Proxmox Virtual Environment`_ is fully
supported and you can easily backup :term:`virtual machine`\ s and
supported, and you can easily backup :term:`virtual machine`\ s and
:term:`container`\ s.
:Performance: The whole software stack is written in :term:`Rust`,
@ -70,6 +71,10 @@ Main Features
modern hardware. In addition to client-side encryption, all data is
transferred via a secure TLS connection.
:Tape backup: For long-term archiving of data, Proxmox Backup Server also
provides extensive support for backing up to tape and managing tape
libraries.
:Web interface: Manage the Proxmox Backup Server with the integrated, web-based
user interface.
@ -80,7 +85,7 @@ Main Features
backup-clients.
:Enterprise Support: Proxmox Server Solutions GmbH offers enterprise support in
form of `Proxmox Backup Server Subscription Plans
the form of `Proxmox Backup Server Subscription Plans
<https://www.proxmox.com/en/proxmox-backup-server/pricing>`_. Users at every
subscription level get access to the Proxmox Backup :ref:`Enterprise
Repository <sysadmin_package_repos_enterprise>`. In addition, with a Basic,
@ -173,7 +178,7 @@ Bug Tracker
~~~~~~~~~~~
Proxmox runs a public bug tracker at `<https://bugzilla.proxmox.com>`_. If an
issue appears, file your report there. An issue can be a bug as well as a
issue appears, file your report there. An issue can be a bug, as well as a
request for a new feature or enhancement. The bug tracker helps to keep track
of the issue and will send a notification once it has been solved.
@ -224,5 +229,6 @@ requirements.
In July 2020, we released the first beta version of Proxmox Backup
Server, followed by the first stable version in November 2020. With support for
incremental, fully deduplicated backups, Proxmox Backup significantly reduces
network load and saves valuable storage space.
encryption and incremental, fully deduplicated backups, Proxmox Backup offers a
secure environment, which significantly reduces network load and saves valuable
storage space.

View File

@ -4,17 +4,17 @@
ZFS on Linux
------------
ZFS is a combined file system and logical volume manager designed by
ZFS is a combined file system and logical volume manager, designed by
Sun Microsystems. There is no need to manually compile ZFS modules - all
packages are included.
By using ZFS, it's possible to achieve maximum enterprise features with
low budget hardware, but also high performance systems by leveraging
SSD caching or even SSD only setups. ZFS can replace cost intense
hardware raid cards by moderate CPU and memory load combined with easy
low budget hardware, and also high performance systems by leveraging
SSD caching or even SSD only setups. ZFS can replace expensive
hardware raid cards with moderate CPU and memory load, combined with easy
management.
General ZFS advantages
General advantages of ZFS:
* Easy configuration and management with GUI and CLI.
* Reliable
@ -34,18 +34,18 @@ General ZFS advantages
Hardware
~~~~~~~~~
ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much you can get for your hardware/budget. To prevent
ZFS depends heavily on memory, so it's recommended to have at least 8GB to
start. In practice, use as much as you can get for your hardware/budget. To prevent
data corruption, we recommend the use of high quality ECC RAM.
If you use a dedicated cache and/or log disk, you should use an
enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
enterprise class SSD (for example, Intel SSD DC S3700 Series). This can
increase the overall performance significantly.
IMPORTANT: Do not use ZFS on top of hardware controller which has its
IMPORTANT: Do not use ZFS on top of a hardware controller which has its
own cache management. ZFS needs to directly communicate with disks. An
HBA adapter is the way to go, or something like LSI controller flashed
in ``IT`` mode.
HBA adapter or something like an LSI controller flashed in ``IT`` mode is
recommended.
ZFS Administration
@ -53,7 +53,7 @@ ZFS Administration
This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are `zfs` and `zpool`. Both commands come with great
to manage ZFS are `zfs` and `zpool`. Both commands come with extensive
manual pages, which can be read with:
.. code-block:: console
@ -123,7 +123,7 @@ Create a new pool with cache (L2ARC)
It is possible to use a dedicated cache drive partition to increase
the performance (use SSD).
As `<device>` it is possible to use more devices, like it's shown in
For `<device>`, you can use multiple devices, as is shown in
"Create a new pool with RAID*".
.. code-block:: console
@ -136,7 +136,7 @@ Create a new pool with log (ZIL)
It is possible to use a dedicated cache drive partition to increase
the performance (SSD).
As `<device>` it is possible to use more devices, like it's shown in
For `<device>`, you can use multiple devices, as is shown in
"Create a new pool with RAID*".
.. code-block:: console
@ -146,8 +146,9 @@ As `<device>` it is possible to use more devices, like it's shown in
Add cache and log to an existing pool
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you have a pool without cache and log. First partition the SSD in
2 partition with `parted` or `gdisk`
You can add cache and log devices to a pool after its creation. In this example,
we will use a single drive for both cache and log. First, you need to create
2 partitions on the SSD with `parted` or `gdisk`
.. important:: Always use GPT partition tables.
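Afterwards, the two partitions can be attached to the existing pool. A minimal sketch, assuming the first partition is used as the log device and the second as the cache device:
.. code-block:: console
# zpool add -f <pool> log <device-part1> cache <device-part2>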
@ -171,12 +172,12 @@ Changing a failed device
Changing a failed bootable device
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Depending on how Proxmox Backup was installed it is either using `grub` or `systemd-boot`
as bootloader.
Depending on how Proxmox Backup was installed, it is either using `grub` or
`systemd-boot` as a bootloader.
The first steps of copying the partition table, reissuing GUIDs and replacing
the ZFS partition are the same. To make the system bootable from the new disk,
different steps are needed which depend on the bootloader in use.
In either case, the first steps of copying the partition table, reissuing GUIDs
and replacing the ZFS partition are the same. To make the system bootable from
the new disk, different steps are needed which depend on the bootloader in use.
.. code-block:: console
@ -207,7 +208,7 @@ Usually `grub.cfg` is located in `/boot/grub/grub.cfg`
# grub-mkconfig -o /path/to/grub.cfg
Activate E-Mail Notification
Activate e-mail notification
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ZFS comes with an event daemon, which monitors events generated by the
@ -219,24 +220,24 @@ and you can install it using `apt-get`:
# apt-get install zfs-zed
To activate the daemon it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
favorite editor, and uncomment the `ZED_EMAIL_ADDR` setting:
To activate the daemon, it is necessary to uncomment the `ZED_EMAIL_ADDR`
setting in the file `/etc/zfs/zed.d/zed.rc`.
.. code-block:: console
ZED_EMAIL_ADDR="root"
Please note Proxmox Backup forwards mails to `root` to the email address
Please note that Proxmox Backup forwards mails to `root` to the email address
configured for the root user.
IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
other settings are optional.
Limit ZFS Memory Usage
Limit ZFS memory usage
^^^^^^^^^^^^^^^^^^^^^^
It is good to use at most 50 percent (which is the default) of the
system memory for ZFS ARC to prevent performance shortage of the
system memory for ZFS ARC, to prevent performance degradation of the
host. Use your preferred editor to change the configuration in
`/etc/modprobe.d/zfs.conf` and insert:
@ -244,27 +245,42 @@ host. Use your preferred editor to change the configuration in
options zfs zfs_arc_max=8589934592
This example setting limits the usage to 8GB.
The above example limits the usage to 8 GiB ('8 * 2^30^').
.. IMPORTANT:: If your root file system is ZFS you must update your initramfs every time this value changes:
.. IMPORTANT:: In case your desired `zfs_arc_max` value is lower than or equal
to `zfs_arc_min` (which defaults to 1/32 of the system memory), `zfs_arc_max`
will be ignored. Thus, for it to work in this case, you must set
`zfs_arc_min` to at most `zfs_arc_max - 1`. This would require updating the
configuration in `/etc/modprobe.d/zfs.conf`, with:
.. code-block:: console
options zfs zfs_arc_min=8589934591
options zfs zfs_arc_max=8589934592
This example setting limits the usage to 8 GiB ('8 * 2^30^') on
systems with more than 256 GiB of total memory, where simply setting
`zfs_arc_max` alone would not work.
.. IMPORTANT:: If your root file system is ZFS, you must update your initramfs
every time this value changes.
.. code-block:: console
# update-initramfs -u
SWAP on ZFS
Swap on ZFS
^^^^^^^^^^^
Swap-space created on a zvol may generate some troubles, like blocking the
Swap-space created on a zvol may cause some issues, such as blocking the
server or generating a high IO load, often seen when starting a backup
to external storage.
We strongly recommend to use enough memory, so that you normally do not
We strongly recommend using enough memory, so that you normally do not
run into low memory situations. Should you need or want to add swap, it is
preferred to create a partition on a physical disk and use it as swap device.
preferred to create a partition on a physical disk and use it as a swap device.
You can leave some space free for this purpose in the advanced options of the
installer. Additionally, you can lower the `swappiness` value.
installer. Additionally, you can lower the `swappiness` value.
A good value for servers is 10:
.. code-block:: console
@ -291,7 +307,7 @@ an editor of your choice and add the following line:
vm.swappiness = 100 The kernel will swap aggressively.
==================== ===============================================================
ZFS Compression
ZFS compression
^^^^^^^^^^^^^^^
To activate compression:
@ -300,10 +316,11 @@ To activate compression:
# zpool set compression=lz4 <pool>
We recommend using the `lz4` algorithm, since it adds very little CPU overhead.
Other algorithms such as `lzjb` and `gzip-N` (where `N` is an integer `1-9` representing
the compression ratio, 1 is fastest and 9 is best compression) are also available.
Depending on the algorithm and how compressible the data is, having compression enabled can even increase
I/O performance.
Other algorithms such as `lzjb` and `gzip-N` (where `N` is an integer from `1-9`
representing the compression ratio, where 1 is fastest and 9 is best
compression) are also available. Depending on the algorithm and how
compressible the data is, having compression enabled can even increase I/O
performance.
You can disable compression at any time with:
.. code-block:: console
@ -314,26 +331,26 @@ Only new blocks will be affected by this change.
.. _local_zfs_special_device:
ZFS Special Device
ZFS special device
^^^^^^^^^^^^^^^^^^
Since version 0.8.0 ZFS supports `special` devices. A `special` device in a
Since version 0.8.0, ZFS supports `special` devices. A `special` device in a
pool is used to store metadata, deduplication tables, and optionally small
file blocks.
A `special` device can improve the speed of a pool consisting of slow spinning
hard disks with a lot of metadata changes. For example workloads that involve
hard disks with a lot of metadata changes. For example, workloads that involve
creating, updating or deleting a large number of files will benefit from the
presence of a `special` device. ZFS datasets can also be configured to store
whole small files on the `special` device which can further improve the
small files on the `special` device, which can further improve the
performance. Use fast SSDs for the `special` device.
.. IMPORTANT:: The redundancy of the `special` device should match the one of the
pool, since the `special` device is a point of failure for the whole pool.
pool, since the `special` device is a point of failure for the entire pool.
.. WARNING:: Adding a `special` device to a pool cannot be undone!
Create a pool with `special` device and RAID-1:
To create a pool with `special` device and RAID-1:
.. code-block:: console
@ -346,8 +363,8 @@ Adding a `special` device to an existing pool with RAID-1:
# zpool add <pool> special mirror <device1> <device2>
ZFS datasets expose the `special_small_blocks=<size>` property. `size` can be
`0` to disable storing small file blocks on the `special` device or a power of
two in the range between `512B` to `128K`. After setting the property new file
`0` to disable storing small file blocks on the `special` device, or a power of
two in the range between `512B` to `128K`. After setting this property, new file
blocks smaller than `size` will be allocated on the `special` device.
.. IMPORTANT:: If the value for `special_small_blocks` is greater than or equal to
@ -355,10 +372,10 @@ blocks smaller than `size` will be allocated on the `special` device.
the `special` device, so be careful!
Setting the `special_small_blocks` property on a pool will change the default
value of that property for all child ZFS datasets (for example all containers
value of that property for all child ZFS datasets (for example, all containers
in the pool will opt in for small file blocks).
Opt in for all file smaller than 4K-blocks pool-wide:
Opt in for all files smaller than 4K-blocks pool-wide:
.. code-block:: console
@ -379,10 +396,15 @@ Opt out from small file blocks for a single dataset:
Troubleshooting
^^^^^^^^^^^^^^^
Corrupted cachefile
Corrupt cache file
""""""""""""""""""
In case of a corrupted ZFS cachefile, some volumes may not be mounted during
boot until mounted manually later.
`zfs-import-cache.service` imports ZFS pools using the ZFS cache file. If this
file becomes corrupted, the service won't be able to import the pools that it
cannot read from that file.
As a result, in case of a corrupted ZFS cache file, some volumes may not be
mounted during boot and must be mounted manually later.
For each pool, run:
@ -390,16 +412,13 @@ For each pool, run:
# zpool set cachefile=/etc/zfs/zpool.cache POOLNAME
and afterwards update the `initramfs` by running:
then, update the `initramfs` by running:
.. code-block:: console
# update-initramfs -u -k all
and finally reboot your node.
Sometimes the ZFS cachefile can get corrupted, and `zfs-import-cache.service`
doesn't import the pools that aren't present in the cachefile.
and finally, reboot the node.
Another workaround to this problem is enabling the `zfs-import-scan.service`,
which searches and imports pools via device scanning (usually slower).
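For example, the scan-based import can presumably be enabled with a standard systemd command:
.. code-block:: console
# systemctl enable --now zfs-import-scan.service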

View File

@ -34,17 +34,7 @@
</style>
<link rel="stylesheet" type="text/css" href="font-awesome/css/font-awesome.css"/>
<script type="text/javascript" src="extjs/ext-all.js"></script>
<script type="text/javascript" src="code39.js"></script>
<script type="text/javascript" src="prefix-field.js"></script>
<script type="text/javascript" src="label-style.js"></script>
<script type="text/javascript" src="tape-type.js"></script>
<script type="text/javascript" src="paper-size.js"></script>
<script type="text/javascript" src="page-layout.js"></script>
<script type="text/javascript" src="page-calibration.js"></script>
<script type="text/javascript" src="label-list.js"></script>
<script type="text/javascript" src="label-setup.js"></script>
<script type="text/javascript" src="lto-barcode.js"></script>
<script type="text/javascript" src="lto-barcode-generator.js"></script>
</head>
<body>
</body>

View File

@ -1,7 +1,5 @@
// FIXME: HACK! Makes scrolling in number spinner work again. fixed in ExtJS >= 6.1
if (Ext.isFirefox) {
Ext.$eventNameMap.DOMMouseScroll = 'DOMMouseScroll';
}
// for toolkit.js
function gettext(val) { return val; };
function draw_labels(target_id, label_list, page_layout, calibration) {
let max_labels = compute_max_labels(page_layout);

View File

@ -14,15 +14,15 @@ following retention options are available:
``keep-hourly <N>``
Keep backups for the last ``<N>`` hours. If there is more than one
backup for a single hour, only the latest is kept.
backup for a single hour, only the latest is retained.
``keep-daily <N>``
Keep backups for the last ``<N>`` days. If there is more than one
backup for a single day, only the latest is kept.
backup for a single day, only the latest is retained.
``keep-weekly <N>``
Keep backups for the last ``<N>`` weeks. If there is more than one
backup for a single week, only the latest is kept.
backup for a single week, only the latest is retained.
.. note:: Weeks start on Monday and end on Sunday. The software
uses the `ISO week date`_ system and handles weeks at
@ -30,17 +30,17 @@ following retention options are available:
``keep-monthly <N>``
Keep backups for the last ``<N>`` months. If there is more than one
backup for a single month, only the latest is kept.
backup for a single month, only the latest is retained.
``keep-yearly <N>``
Keep backups for the last ``<N>`` years. If there is more than one
backup for a single year, only the latest is kept.
backup for a single year, only the latest is retained.
The retention options are processed in the order given above. Each option
only covers backups within its time period. The next option does not take care
of already covered backups. It will only consider older backups.
Unfinished and incomplete backups will be removed by the prune command unless
Unfinished and incomplete backups will be removed by the prune command, unless
they are newer than the last successful backup. In this case, the last failed
backup is retained.
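As an illustrative sketch — the group name and retention values are only examples, and the repository is assumed to be already configured for ``proxmox-backup-client`` — such a policy can be tried out non-destructively with the ``--dry-run`` flag:
.. code-block:: console
# proxmox-backup-client prune host/backup-host --dry-run --keep-daily 7 --keep-weekly 4 --keep-monthly 6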
@ -48,7 +48,7 @@ Prune Simulator
^^^^^^^^^^^^^^^
You can use the built-in `prune simulator <prune-simulator/index.html>`_
to explore the effect of different retetion options with various backup
to explore the effect of different retention options with various backup
schedules.
Manual Pruning
@ -59,10 +59,10 @@ Manual Pruning
:align: right
:alt: Prune and garbage collection options
To access pruning functionality for a specific backup group, you can use the
prune command line option discussed in :ref:`backup-pruning`, or navigate to
the **Content** tab of the datastore and click the scissors icon in the
**Actions** column of the relevant backup group.
To manually prune a specific backup group, you can use
``proxmox-backup-client``'s ``prune`` subcommand, discussed in
:ref:`backup-pruning`, or navigate to the **Content** tab of the datastore and
click the scissors icon in the **Actions** column of the relevant backup group.
Prune Schedules
^^^^^^^^^^^^^^^
@ -81,7 +81,7 @@ Retention Settings Example
^^^^^^^^^^^^^^^^^^^^^^^^^^
The backup frequency and retention of old backups may depend on how often data
changes, and how important an older state may be, in a specific work load.
changes and how important an older state may be in a specific workload.
When backups act as a company's document archive, there may also be legal
requirements for how long backup snapshots must be kept.
@ -125,8 +125,8 @@ start garbage collection on an entire datastore and the ``status`` subcommand to
see attributes relating to the :ref:`garbage collection <client_garbage-collection>`.
This functionality can also be accessed in the GUI, by navigating to **Prune &
GC** from the top panel. From here, you can edit the schedule at which garbage
collection runs and manually start the operation.
GC** from the top panel of a datastore. From here, you can edit the schedule at
which garbage collection runs and manually start the operation.
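For example, assuming a datastore named ``store1``, garbage collection can presumably also be started and inspected from the command line:
.. code-block:: console
# proxmox-backup-manager garbage-collection start store1
# proxmox-backup-manager garbage-collection status store1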
.. _maintenance_verification:
@ -139,13 +139,13 @@ Verification
:align: right
:alt: Adding a verify job
Proxmox Backup offers various verification options to ensure that backup data is
intact. Verification is generally carried out through the creation of verify
jobs. These are scheduled tasks that run verification at a given interval (see
:ref:`calendar-event-scheduling`). With these, you can set whether already verified
snapshots are ignored, as well as set a time period, after which verified jobs
are checked again. The interface for creating verify jobs can be found under the
**Verify Jobs** tab of the datastore.
Proxmox Backup Server offers various verification options to ensure that backup
data is intact. Verification is generally carried out through the creation of
verify jobs. These are scheduled tasks that run verification at a given interval
(see :ref:`calendar-event-scheduling`). With these, you can also set whether
already verified snapshots are ignored, as well as set a time period, after
which snapshots are checked again. The interface for creating verify jobs can be
found under the **Verify Jobs** tab of the datastore.
.. Note:: It is recommended that you reverify all backups at least monthly, even
if a previous verification was successful. This is because physical drives
@ -158,9 +158,9 @@ are checked again. The interface for creating verify jobs can be found under the
data.
Aside from using verify jobs, you can also run verification manually on entire
datastores, backup groups, or snapshots. To do this, navigate to the **Content**
tab of the datastore and either click *Verify All*, or select the *V.* icon from
the *Actions* column in the table.
datastores, backup groups or snapshots. To do this, navigate to the **Content**
tab of the datastore and either click *Verify All* or select the *V.* icon from
the **Actions** column in the table.
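Alternatively, an entire datastore can presumably be verified from the command line as well (``store1`` is just an example name):
.. code-block:: console
# proxmox-backup-manager verify store1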
.. _maintenance_notification:
@ -170,8 +170,8 @@ Notifications
Proxmox Backup Server can send you notification emails about the results of
automatically scheduled verification, garbage-collection and synchronization
tasks.
By default, notifications are send to the email address configured for the
`root@pam` user. You can set that user for each datastore.
By default, notifications are sent to the email address configured for the
`root@pam` user. You can instead set this user for each datastore.
You can also change the level of notification received per task type. The
following options are available:
@ -179,6 +179,6 @@ following options are available:
* Always: send a notification for any scheduled task, independent of the
outcome
* Errors: send a notification for any scheduled task resulting in an error
* Errors: send a notification for any scheduled task that results in an error
* Never: do not send any notification at all

View File

@ -17,8 +17,8 @@ configuration information for remotes is stored in the file
:align: right
:alt: Add a remote
To add a remote, you need its hostname or IP, a userid and password on the
remote, and its certificate fingerprint. To get the fingerprint, use the
To add a remote, you need its hostname or IP address, a userid and password on
the remote, and its certificate fingerprint. To get the fingerprint, use the
``proxmox-backup-manager cert info`` command on the remote, or navigate to
**Dashboard** in the remote's web interface and select **Show Fingerprint**.
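As a rough sketch of the command-line variant — the remote name, host, auth-id, password and fingerprint below are placeholders, and the exact parameter names may differ between versions:
.. code-block:: console
# proxmox-backup-manager remote create pbs2 --host pbs2.example.com --auth-id sync@pam --password 'SECRET' --fingerprint 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe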
@ -60,12 +60,13 @@ Sync Jobs
Sync jobs are configured to pull the contents of a datastore on a **Remote** to
a local datastore. You can manage sync jobs in the web interface, from the
**Sync Jobs** tab of the datastore which you'd like to set one up for, or using
the ``proxmox-backup-manager sync-job`` command. The configuration information
for sync jobs is stored at ``/etc/proxmox-backup/sync.cfg``. To create a new
sync job, click the add button in the GUI, or use the ``create`` subcommand.
After creating a sync job, you can either start it manually from the GUI or
provide it with a schedule (see :ref:`calendar-event-scheduling`) to run regularly.
**Sync Jobs** tab of the **Datastore** panel or from that of the Datastore
itself. Alternatively, you can manage them with the ``proxmox-backup-manager
sync-job`` command. The configuration information for sync jobs is stored at
``/etc/proxmox-backup/sync.cfg``. To create a new sync job, click the add button
in the GUI, or use the ``create`` subcommand. After creating a sync job, you can
either start it manually from the GUI or provide it with a schedule (see
:ref:`calendar-event-scheduling`) to run regularly.
.. code-block:: console
@ -79,17 +80,48 @@ provide it with a schedule (see :ref:`calendar-event-scheduling`) to run regular
└────────────┴───────┴────────┴──────────────┴───────────┴─────────┘
# proxmox-backup-manager sync-job remove pbs2-local
For setting up sync jobs, the configuring user needs the following permissions:
To set up sync jobs, the configuring user needs the following permissions:
#. ``Remote.Read`` on the ``/remote/{remote}/{remote-store}`` path
#. at least ``Datastore.Backup`` on the local target datastore (``/datastore/{store}``)
If the ``remove-vanished`` option is set, ``Datastore.Prune`` is required on
the local datastore as well. If the ``owner`` option is not set (defaulting to
``root@pam``) or set to something other than the configuring user,
``Datastore.Modify`` is required as well.
#. At least ``Datastore.Backup`` on the local target datastore (``/datastore/{store}``)
.. note:: A sync job can only sync backup groups that the configured remote's
user/API token can read. If a remote is configured with a user/API token that
only has ``Datastore.Backup`` privileges, only the limited set of accessible
snapshots owned by that user/API token can be synced.
If the ``remove-vanished`` option is set, ``Datastore.Prune`` is required on
the local datastore as well. If the ``owner`` option is not set (defaulting to
``root@pam``) or is set to something other than the configuring user,
``Datastore.Modify`` is required as well.
If the ``group-filter`` option is set, only backup groups matching at least one
of the specified criteria are synced. The available criteria are:
* backup type, for example to only sync groups of the `ct` (Container) type:
.. code-block:: console
# proxmox-backup-manager sync-job update ID --group-filter type:ct
* full group identifier
.. code-block:: console
# proxmox-backup-manager sync-job update ID --group-filter group:vm/100
* regular expression matched against the full group identifier
.. todo:: add example for regex
The same filter is applied to local groups for handling of the
``remove-vanished`` option.
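For the regular-expression variant mentioned above, a hedged example (the pattern is purely illustrative and matches the groups ``vm/100`` through ``vm/199``) could look like:
.. code-block:: console
# proxmox-backup-manager sync-job update ID --group-filter 'regex:^vm/1\d{2}$'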
.. note:: The ``protected`` flag of remote backup snapshots will not be synced.
Bandwidth Limit
^^^^^^^^^^^^^^^
Syncing datastores to an archive can produce lots of traffic and impact other
users of the network. So, to avoid network or storage congestion, you can limit
the bandwidth of the sync job by setting the ``rate-in`` option, either in the
web interface or using the ``proxmox-backup-manager`` command-line tool:
.. code-block:: console
# proxmox-backup-manager sync-job update ID --rate-in 20MiB

View File

@ -82,9 +82,12 @@ is:
.. note:: This command and corresponding GUI button rely on the ``ifreload``
command, from the package ``ifupdown2``. This package is included within the
Proxmox Backup Server installation; however, you may have to install it yourself
if you have installed Proxmox Backup Server on top of Debian or Proxmox VE.
if you have installed Proxmox Backup Server on top of Debian or a Proxmox VE
version prior to version 7.
You can also configure DNS settings, from the **DNS** section
of **Configuration** or by using the ``dns`` subcommand of
``proxmox-backup-manager``.
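A small sketch — it assumes the ``dns`` subcommand follows the usual ``get``/``update`` pattern and that these parameter names exist; both are assumptions, not verified here:
.. code-block:: console
# proxmox-backup-manager dns get
# proxmox-backup-manager dns update --dns1 192.168.2.1 --search example.com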
.. include:: traffic-control.rst

View File

@ -1,5 +1,5 @@
Most commands producing output supports the ``--output-format``
parameter. It accepts the following values:
Most commands that produce output support the ``--output-format``
parameter. This accepts the following values:
:``text``: Text format (default). Structured data is rendered as a table.

View File

@ -17,15 +17,13 @@ update``.
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list``
deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib
deb http://ftp.debian.org/debian bullseye main contrib
deb http://ftp.debian.org/debian bullseye-updates main contrib
# security updates
deb http://security.debian.org/debian-security buster/updates main contrib
deb http://security.debian.org/debian-security bullseye-security main contrib
.. FIXME for 7.0: change security update suite to bullseye-security
In addition, you need a package repository from Proxmox to get Proxmox Backup
updates.
@ -45,31 +43,21 @@ key with the following commands:
.. code-block:: console
# wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
# wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
Verify the SHA512 checksum afterwards with:
Verify the SHA512 checksum afterwards with the expected output below:
.. code-block:: console
# sha512sum /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
# sha512sum /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
7fb03ec8a1675723d2853b84aa4fdb49a46a3bb72b9951361488bfd19b29aab0a789a4f8c7406e71a69aabbc727c936d3549731c4659ffa1a08f44db8fdcebfa /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
The output should be:
and the md5sum, with the expected output below:
.. code-block:: console
acca6f416917e8e11490a08a1e2842d500b3a5d9f322c6319db0927b2901c3eae23cfb5cd5df6facf2b57399d3cfa52ad7769ebdd75d9b204549ca147da52626 /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
and the md5sum:
.. code-block:: console
# md5sum /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
Here, the output should be:
.. code-block:: console
f3f6c5a3a67baf38ad178e5ff1ee270c /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
# md5sum /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
bcc35c7173e0845c0d6ad6470b70f50e /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
.. _sysadmin_package_repos_enterprise:
@ -84,7 +72,7 @@ enabled by default:
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list.d/pbs-enterprise.list``
deb https://enterprise.proxmox.com/debian/pbs buster pbs-enterprise
deb https://enterprise.proxmox.com/debian/pbs bullseye pbs-enterprise
To never miss important security fixes, the superuser (``root@pam`` user) is
@ -114,15 +102,15 @@ We recommend to configure this repository in ``/etc/apt/sources.list``.
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list``
deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib
deb http://ftp.debian.org/debian bullseye main contrib
deb http://ftp.debian.org/debian bullseye-updates main contrib
# PBS pbs-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pbs buster pbs-no-subscription
deb http://download.proxmox.com/debian/pbs bullseye pbs-no-subscription
# security updates
deb http://security.debian.org/debian-security buster/updates main contrib
deb http://security.debian.org/debian-security bullseye-security main contrib
`Proxmox Backup`_ Test Repository
@ -140,7 +128,7 @@ You can access this repository by adding the following line to
.. code-block:: sources.list
:caption: sources.list entry for ``pbstest``
deb http://download.proxmox.com/debian/pbs buster pbstest
deb http://download.proxmox.com/debian/pbs bullseye pbstest
.. _package_repositories_client_only:
@ -161,6 +149,26 @@ APT-based Proxmox Backup Client Repository
For modern Linux distributions using `apt` as their package manager, like all
Debian and Ubuntu derivatives do, you may be able to use the APT-based repository.
In order to configure this repository you need to first :ref:`setup the Proxmox
release key <package_repos_secure_apt>`. After that, add the repository URL to
the APT sources lists.
**Repositories for Debian 11 (Bullseye) based releases**
This repository is tested with:
- Debian Bullseye
Edit the file ``/etc/apt/sources.list.d/pbs-client.list`` and add the following
snippet:
.. code-block:: sources.list
:caption: File: ``/etc/apt/sources.list.d/pbs-client.list``
deb http://download.proxmox.com/debian/pbs-client bullseye main
**Repositories for Debian 10 (Buster) based releases**
This repository is tested with:
- Debian Buster
@ -168,9 +176,6 @@ This repository is tested with:
It may work with older versions, and should work with more recently released ones.
In order to configure this repository you need to first :ref:`setup the Proxmox
release key <package_repos_secure_apt>`. After that, add the repository URL to
the APT sources lists.
Edit the file ``/etc/apt/sources.list.d/pbs-client.list`` and add the following
snippet:

View File

@ -0,0 +1,14 @@
Implements debugging functionality to inspect Proxmox Backup datastore
files and verify the integrity of chunks.
It also contains an 'api' subcommand, which can call arbitrary API paths
(get/create/set/delete), as well as display their parameters (usage) and
their child-links (ls).
By default, it connects to the proxmox-backup-proxy on localhost via https,
but by setting the environment variable `PROXMOX_DEBUG_API_CODE` to `1` the
tool directly calls the corresponding code.
.. WARNING:: Using `PROXMOX_DEBUG_API_CODE` can be dangerous and is only intended
for debugging purposes. It is not intended for use on a production system.
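For instance, assuming a proxmox-backup-proxy reachable on localhost, the 'api' subcommand could be exercised roughly as follows (the paths are only examples):
.. code-block:: console
# proxmox-backup-debug api get /version
# proxmox-backup-debug api ls /access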

View File

@ -0,0 +1,33 @@
==========================
proxmox-backup-debug
==========================
.. include:: ../epilog.rst
-------------------------------------------------------------
Debugging command line tool for Backup and Restore
-------------------------------------------------------------
:Author: |AUTHOR|
:Version: Version |VERSION|
:Manual section: 1
Synopsis
==========
.. include:: synopsis.rst
Common Options
==============
.. include:: ../output-format.rst
Description
============
.. include:: description.rst
.. include:: ../pbs-copyright.rst

View File

@ -1,5 +1,5 @@
This daemon exposes the whole Proxmox Backup Server API on TCP port
8007 using HTTPS. It runs as user ``backup`` and has very limited
permissions. Operation requiring more permissions are forwarded to
permissions. Operations requiring more permissions are forwarded to
the local ``proxmox-backup`` service.

View File

@ -1,7 +1,5 @@
// FIXME: HACK! Makes scrolling in number spinner work again. fixed in ExtJS >= 6.1
if (Ext.isFirefox) {
Ext.$eventNameMap.DOMMouseScroll = 'DOMMouseScroll';
}
// for Toolkit.js
function gettext(val) { return val; };
Ext.onReady(function() {
const NOW = new Date();
@ -37,7 +35,6 @@ Ext.onReady(function() {
editable: true,
displayField: 'text',
valueField: 'value',
queryMode: 'local',

View File

@ -3,8 +3,8 @@
`Proxmox VE`_ Integration
-------------------------
A Proxmox Backup Server can be integrated into a Proxmox VE setup by adding the
former as a storage in a Proxmox VE standalone or cluster setup.
Proxmox Backup Server can be integrated into a Proxmox VE standalone or cluster
setup, by adding it as a storage in Proxmox VE.
See also the `Proxmox VE Storage - Proxmox Backup Server
<https://pve.proxmox.com/pve-docs/pve-admin-guide.html#storage_pbs>`_ section
@ -14,8 +14,8 @@ of the Proxmox VE Administration Guide for Proxmox VE specific documentation.
Using the Proxmox VE Web-Interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Proxmox VE has native API and web-interface integration of Proxmox Backup
Server since the `Proxmox VE 6.3 release
Proxmox VE has native API and web interface integration of Proxmox Backup
Server as of `Proxmox VE 6.3
<https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_6.3>`_.
A Proxmox Backup Server can be added under ``Datacenter -> Storage``.
@ -24,8 +24,8 @@ Using the Proxmox VE Command-Line
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You need to define a new storage with type 'pbs' on your `Proxmox VE`_
node. The following example uses ``store2`` as storage name, and
assumes the server address is ``localhost``, and you want to connect
node. The following example uses ``store2`` as the storage's name, and
assumes the server address is ``localhost`` and you want to connect
as ``user1@pbs``.
.. code-block:: console
@ -33,7 +33,7 @@ as ``user1@pbs``.
# pvesm add pbs store2 --server localhost --datastore store2
# pvesm set store2 --username user1@pbs --password <secret>
.. note:: If you would rather not pass your password as plain text, you can pass
.. note:: If you would rather not enter your password as plain text, you can pass
the ``--password`` parameter, without any arguments. This will cause the
program to prompt you for a password upon entering the command.
@ -53,7 +53,7 @@ relationship:
# pvesm set store2 --fingerprint 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe
After that you should be able to see storage status with:
After that, you should be able to view storage status with:
.. code-block:: console

View File

@ -1,12 +1,12 @@
``pxar`` is a command line utility to create and manipulate archives in the
``pxar`` is a command line utility for creating and manipulating archives in the
:ref:`pxar-format`.
It is inspired by `casync file archive format
<http://0pointer.net/blog/casync-a-tool-for-distributing-file-system-images.html>`_,
which caters to a similar use-case.
The ``.pxar`` format is adapted to fulfill the specific needs of the Proxmox
Backup Server, for example, efficient storage of hardlinks.
The format is designed to reduce storage space needed on the server by achieving
a high level of deduplication.
Backup Server, for example, efficient storage of hard links.
The format is designed to reduce the required storage on the server by
achieving a high level of deduplication.
Creating an Archive
^^^^^^^^^^^^^^^^^^^
@ -24,10 +24,10 @@ This will create a new archive called ``archive.pxar`` with the contents of the
the same name is already present in the target folder, the creation will
fail.
By default, ``pxar`` will skip certain mountpoints and will not follow device
By default, ``pxar`` will skip certain mount points and will not follow device
boundaries. This design decision is based on the primary use case of creating
archives for backups. It makes sense to not back up the contents of certain
temporary or system specific files.
archives for backups. It makes sense to ignore the contents of certain
temporary or system specific files in a backup.
To alter this behavior and follow device boundaries, use the
``--all-file-systems`` flag.
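For example, to create an archive that also descends into other mounted file systems below the source directory:
.. code-block:: console
# pxar create archive.pxar /path/to/source --all-file-systems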
@ -41,40 +41,38 @@ by running:
# pxar create archive.pxar /path/to/source --exclude '**/*.txt'
Be aware that the shell itself will try to expand all of the glob patterns before
invoking ``pxar``.
In order to avoid this, all globs have to be quoted correctly.
Be aware that the shell itself will try to expand glob patterns before invoking
``pxar``. In order to avoid this, all globs have to be quoted correctly.
It is possible to pass the ``--exclude`` parameter multiple times, in order to
match more than one pattern. This allows you to use more complex
file exclusion/inclusion behavior. However, it is recommended to use
file inclusion/exclusion behavior. However, it is recommended to use
``.pxarexclude`` files instead for such cases.
For example you might want to exclude all ``.txt`` files except for a specific
one from the archive. This is achieved via the negated match pattern, prefixed
by ``!``.
All the glob patterns are relative to the ``source`` directory.
For example you might want to exclude all ``.txt`` files except a specific
one from the archive. This would be achieved via the negated match pattern,
prefixed by ``!``. All the glob patterns are relative to the ``source``
directory.
.. code-block:: console
# pxar create archive.pxar /path/to/source --exclude '**/*.txt' --exclude '!/folder/file.txt'
.. NOTE:: The order of the glob match patterns matters as later ones override
previous ones. Permutations of the same patterns lead to different results.
.. NOTE:: The order of the glob match patterns matters, as later ones override
earlier ones. Permutations of the same patterns lead to different results.
``pxar`` will store the list of glob match patterns passed as parameters via the
command line, in a file called ``.pxarexclude-cli`` at the root of
the archive.
command line, in a file called ``.pxarexclude-cli``, at the root of the archive.
If a file with this name is already present in the source folder during archive
creation, this file is not included in the archive and the file containing the
new patterns is added to the archive instead, the original file is not altered.
creation, this file is not included in the archive, and the file containing the
new patterns is added to the archive instead. The original file is not altered.
A more convenient and persistent way to exclude files from the archive is by
placing the glob match patterns in ``.pxarexclude`` files.
It is possible to create and place these files in any directory of the filesystem
tree.
These files must contain one pattern per line, again later patterns win over
previous ones.
These files must contain one pattern per line, and later patterns override
earlier ones.
The patterns control file exclusions of files present within the given directory
or further below it in the tree.
The behavior is the same as described in :ref:`client_creating_backups`.
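As an illustrative sketch (the path and patterns are made up, mirroring the ``--exclude`` examples above), such a file could look like this:
.. code-block:: console
# cat /path/to/source/.pxarexclude
**/*.txt
!/folder/file.txt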
@ -89,7 +87,7 @@ with the following command:
# pxar extract archive.pxar /path/to/target
If no target is provided, the content of the archive is extracted to the current
If no target is provided, the contents of the archive are extracted to the current
working directory.
In order to restore only parts of an archive, single files, and/or folders,
@ -116,13 +114,13 @@ run the following command:
# pxar list archive.pxar
This displays the full path of each file or directory with respect to the
archives root.
archive's root.
Mounting an Archive
^^^^^^^^^^^^^^^^^^^
``pxar`` allows you to mount and inspect the contents of an archive via _`FUSE`.
In order to mount an archive named ``archive.pxar`` to the mountpoint ``/mnt``,
In order to mount an archive named ``archive.pxar`` to the mount point ``/mnt``,
run the command:
.. code-block:: console
@ -130,7 +128,7 @@ run the command:
# pxar mount archive.pxar /mnt
Once the archive is mounted, you can access its content under the given
mountpoint.
mount point.
.. code-block:: console

View File

@ -15,7 +15,7 @@ accessed using the ``disk`` subcommand. This subcommand allows you to initialize
disks, create various filesystems, and get information about the disks.
To view the disks connected to the system, navigate to **Administration ->
Disks** in the web interface or use the ``list`` subcommand of
Storage/Disks** in the web interface or use the ``list`` subcommand of
``disk``:
.. code-block:: console
@ -42,9 +42,9 @@ To initialize a disk with a new GPT, use the ``initialize`` subcommand:
:alt: Create a directory
You can create an ``ext4`` or ``xfs`` filesystem on a disk using ``fs
create``, or by navigating to **Administration -> Disks -> Directory** in the
web interface and creating one from there. The following command creates an
``ext4`` filesystem and passes the ``--add-datastore`` parameter, in order to
create``, or by navigating to **Administration -> Storage/Disks -> Directory**
in the web interface and creating one from there. The following command creates
an ``ext4`` filesystem and passes the ``--add-datastore`` parameter, in order to
automatically create a datastore on the disk (in this case ``sdd``). This will
create a datastore at the location ``/mnt/datastore/store1``:
@ -57,7 +57,7 @@ create a datastore at the location ``/mnt/datastore/store1``:
:alt: Create ZFS
You can also create a ``zpool`` with various raid levels from **Administration
-> Disks -> Zpool** in the web interface, or by using ``zpool create``. The command
-> Storage/Disks -> ZFS** in the web interface, or by using ``zpool create``. The command
below creates a mirrored ``zpool`` using two disks (``sdb`` & ``sdc``) and
mounts it under ``/mnt/datastore/zpool1``:
@ -102,7 +102,7 @@ is stored in the file ``/etc/proxmox-backup/datastore.cfg``.
subdirectories per directory. That number comes from the 2\ :sup:`16`
pre-created chunk namespace directories, and the ``.`` and ``..`` default
directory entries. This requirement excludes certain filesystems and
filesystem configuration from being supported for a datastore. For example,
filesystem configurations from being supported for a datastore. For example,
``ext3`` as a whole or ``ext4`` with the ``dir_nlink`` feature manually disabled.
@ -113,14 +113,15 @@ Datastore Configuration
:align: right
:alt: Datastore Overview
You can configure multiple datastores. Minimum one datastore needs to be
You can configure multiple datastores. A minimum of one datastore needs to be
configured. The datastore is identified by a simple *name* and points to a
directory on the filesystem. Each datastore also has associated retention
settings of how many backup snapshots for each interval of ``hourly``,
``daily``, ``weekly``, ``monthly``, ``yearly`` as well as a time-independent
number of backups to keep in that store. :ref:`backup-pruning` and
:ref:`garbage collection <client_garbage-collection>` can also be configured to run
periodically based on a configured schedule (see :ref:`calendar-event-scheduling`) per datastore.
:ref:`garbage collection <client_garbage-collection>` can also be configured to
run periodically, based on a configured schedule (see
:ref:`calendar-event-scheduling`) per datastore.
.. _storage_datastore_create:
@ -146,7 +147,8 @@ window:
* *Comment* can be used to add some contextual information to the datastore.
Alternatively you can create a new datastore from the command line. The
following command creates a new datastore called ``store1`` on :file:`/backup/disk1/store1`
following command creates a new datastore called ``store1`` on
:file:`/backup/disk1/store1`
.. code-block:: console
@ -156,7 +158,7 @@ following command creates a new datastore called ``store1`` on :file:`/backup/di
Managing Datastores
^^^^^^^^^^^^^^^^^^^
To list existing datastores from the command line run:
To list existing datastores from the command line, run:
.. code-block:: console
@ -216,8 +218,9 @@ After creating a datastore, the following default layout will appear:
`.lock` is an empty file used for process locking.
The `.chunks` directory contains folders, starting from `0000` and taking hexadecimal values until `ffff`. These
directories will store the chunked data after a backup operation has been executed.
The `.chunks` directory contains folders, starting from `0000` and increasing in
hexadecimal values until `ffff`. These directories will store the chunked data,
categorized by checksum, after a backup operation has been executed.
.. code-block:: console

View File

@ -4,8 +4,8 @@ Host System Administration
==========================
`Proxmox Backup`_ is based on the famous Debian_ Linux
distribution. That means that you have access to the whole world of
Debian packages, and the base system is well documented. The `Debian
distribution. This means that you have access to the entire range of
Debian packages, and that the base system is well documented. The `Debian
Administrator's Handbook`_ is available online, and provides a
comprehensive introduction to the Debian operating system.
@ -17,11 +17,11 @@ updates to some Debian packages when necessary.
We also deliver a specially optimized Linux kernel, where we enable
all required virtualization and container features. That kernel
includes drivers for ZFS_, and several hardware drivers. For example,
includes drivers for ZFS_, as well as several hardware drivers. For example,
we ship Intel network card drivers to support their newest hardware.
The following sections will concentrate on backup related topics. They
either explain things which are different on `Proxmox Backup`_, or
will explain things which are different on `Proxmox Backup`_, or
tasks which are commonly used on `Proxmox Backup`_. For other topics,
please refer to the standard Debian documentation.

View File

@ -3,9 +3,6 @@
Tape Backup
===========
.. CAUTION:: Tape Backup is a technical preview feature, not meant for
production use.
.. image:: images/screenshots/pbs-gui-tape-changer-overview.png
:align: right
:alt: Tape Backup: Tape changer overview
@ -848,6 +845,17 @@ Update Inventory
Restore Catalog
~~~~~~~~~~~~~~~
To restore a catalog from an existing tape, just insert the tape into the drive
and execute:
.. code-block:: console
# proxmox-tape catalog
You can restore from a tape even without an existing catalog, but only the
whole media set. If you do this, the catalog will be automatically created.
Encryption Key Management
~~~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -8,7 +8,7 @@ Datastores
A Datastore is the logical place where :ref:`Backup Snapshots
<term_backup_snapshot>` and their chunks are stored. Snapshots consist of a
manifest, blobs, dynamic- and fixed-indexes (see :ref:`terms`), and are
manifest, blobs, and dynamic- and fixed-indexes (see :ref:`terms`), and are
stored in the following directory structure:
<datastore-root>/<type>/<id>/<time>/
@ -32,8 +32,8 @@ The chunks of a datastore are found in
<datastore-root>/.chunks/
This chunk directory is further subdivided by the first four byte of the chunks
checksum, so the chunk with the checksum
This chunk directory is further subdivided by the first two bytes (four
hexadecimal digits) of the chunk's checksum, so a chunk with the checksum
a342e8151cbf439ce65f3df696b54c67a114982cc0aa751f2852c2f7acc19a8b
@ -47,7 +47,7 @@ per directory can be bad for file system performance.
These chunk directories ('0000'-'ffff') will be preallocated when a datastore
is created.
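For illustration only, here is a minimal Rust sketch (not the actual Proxmox Backup Server code) of how a chunk's checksum maps onto one of these pre-created namespace directories:
.. code-block:: rust

fn chunk_subdir(checksum_hex: &str) -> String {
    // the first four hex digits select one of the '0000'-'ffff' directories
    let prefix = &checksum_hex[..4]; // e.g. "a342" for the example checksum above
    format!(".chunks/{}/{}", prefix, checksum_hex)
}

fn main() {
    let digest = "a342e8151cbf439ce65f3df696b54c67a114982cc0aa751f2852c2f7acc19a8b";
    println!("{}", chunk_subdir(digest)); // prints .chunks/a342/a342e815...
}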
Fixed-sized Chunks
Fixed-Sized Chunks
^^^^^^^^^^^^^^^^^^
For block based backups (like VMs), fixed-sized chunks are used. The content
@ -58,10 +58,10 @@ often tries to allocate files in contiguous pieces, so new files get new
blocks, and changing existing files changes only their own blocks.
As an optimization, VMs in `Proxmox VE`_ can make use of 'dirty bitmaps', which
can track the changed blocks of an image. Since these bitmap are also a
can track the changed blocks of an image. Since these bitmaps are also a
representation of the image split into chunks, there is a direct relation
between dirty blocks of the image and chunks which need to get uploaded, so
only modified chunks of the disk have to be uploaded for a backup.
between the dirty blocks of the image and chunks which need to be uploaded.
Thus, only modified chunks of the disk need to be uploaded to a backup.
Since the image is always split into chunks of the same size, unchanged blocks
will result in identical checksums for those chunks, so such chunks do not need
@ -71,24 +71,24 @@ changed blocks.
For consistency, `Proxmox VE`_ uses a QEMU internal snapshot mechanism, that
does not rely on storage snapshots either.
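The effect of fixed-sized chunking on deduplication can be sketched in a few lines of Rust. This is an illustration only (it assumes the ``openssl`` crate for SHA-256), not the Proxmox Backup Server implementation:
.. code-block:: rust

use std::collections::HashSet;

const CHUNK_SIZE: usize = 4 * 1024 * 1024; // 4 MiB fixed chunks

// Returns (total chunks, chunks that would actually have to be stored/uploaded).
fn dedup_fixed(image: &[u8]) -> (usize, usize) {
    let mut seen = HashSet::new();
    let (mut total, mut unique) = (0, 0);
    for chunk in image.chunks(CHUNK_SIZE) {
        total += 1;
        let digest = openssl::sha::sha256(chunk); // unchanged blocks -> identical digest
        if seen.insert(digest) {
            unique += 1; // only previously unseen chunks are new data
        }
    }
    (total, unique)
}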
Dynamically sized Chunks
Dynamically Sized Chunks
^^^^^^^^^^^^^^^^^^^^^^^^
If one does not want to backup block-based systems but rather file-based
systems, using fixed-sized chunks is not a good idea, since every time a file
would change in size, the remaining data gets shifted around and this would
result in many chunks changing, reducing the amount of deduplication.
When working with file-based systems rather than block-based systems,
using fixed-sized chunks is not a good idea, since every time a file
would change in size, the remaining data would be shifted around,
resulting in many chunks changing and the amount of deduplication being reduced.
To improve this, `Proxmox Backup`_ Server uses dynamically sized chunks
instead. Instead of splitting an image into fixed sizes, it first generates a
consistent file archive (:ref:`pxar <pxar-format>`) and uses a rolling hash
over this on-the-fly generated archive to calculate chunk boundaries.
We use a variant of Buzhash which is a cyclic polynomial algorithm. It works
We use a variant of Buzhash which is a cyclic polynomial algorithm. It works
by continuously calculating a checksum while iterating over the data, and on
certain conditions it triggers a hash boundary.
certain conditions, it triggers a hash boundary.
Assuming that most files of the system that is to be backed up have not
Assuming that most files on the system that is to be backed up have not
changed, eventually the algorithm triggers the boundary on the same data as a
previous backup, resulting in chunks that can be reused.
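To give an idea of how a rolling hash turns data content into chunk boundaries, here is a small, self-contained Rust sketch. It is not the Buzhash variant used by Proxmox Backup Server and it omits minimum/maximum chunk-size handling; it only shows the principle that a boundary depends on a sliding window of the data and therefore re-aligns after insertions or deletions:
.. code-block:: rust

const WINDOW: usize = 48;        // bytes the rolling hash looks at
const MASK: u32 = (1 << 20) - 1; // boundary roughly every 1 MiB on random data

// toy substitution table (a real Buzhash uses a fixed table of random values)
fn table(b: u8) -> u32 {
    (b as u32).wrapping_mul(2654435761).rotate_left(13)
}

fn chunk_boundaries(data: &[u8]) -> Vec<usize> {
    let mut boundaries = Vec::new();
    let mut hash: u32 = 0;
    for (i, &byte) in data.iter().enumerate() {
        hash = hash.rotate_left(1) ^ table(byte);
        if i >= WINDOW {
            // remove the byte leaving the window, so the hash only
            // depends on the last WINDOW bytes
            hash ^= table(data[i - WINDOW]).rotate_left((WINDOW % 32) as u32);
        }
        if i >= WINDOW && (hash & MASK) == 0 {
            boundaries.push(i + 1); // cut after this byte
        }
    }
    boundaries
}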
@ -100,8 +100,8 @@ can be encrypted, and they are handled in a slightly different manner than
normal chunks.
The hashes of encrypted chunks are calculated not with the actual (encrypted)
chunk content, but with the plain-text content concatenated with the encryption
key. This way, two chunks of the same data encrypted with different keys
chunk content, but with the plain-text content, concatenated with the encryption
key. This way, two chunks with the same data but encrypted with different keys
generate two different checksums and no collisions occur for multiple
encryption keys.
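As a sketch of this idea only (the exact digest layout and key handling of Proxmox Backup Server are not reproduced here; the ``openssl`` crate is assumed for SHA-256):
.. code-block:: rust

// The chunk digest is derived from the *plaintext* concatenated with the
// encryption key, so the same data encrypted with two different keys
// yields two different chunk checksums.
fn encrypted_chunk_digest(plaintext: &[u8], key: &[u8]) -> [u8; 32] {
    let mut input = Vec::with_capacity(plaintext.len() + key.len());
    input.extend_from_slice(plaintext);
    input.extend_from_slice(key);
    openssl::sha::sha256(&input)
}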
@ -112,14 +112,14 @@ the previous backup, do not need to be encrypted and uploaded.
Caveats and Limitations
-----------------------
Notes on hash collisions
Notes on Hash Collisions
^^^^^^^^^^^^^^^^^^^^^^^^
Every hashing algorithm has a chance to produce collisions, meaning two (or
more) inputs generate the same checksum. For SHA-256, this chance is
negligible. To calculate such a collision, one can use the ideas of the
'birthday problem' from probability theory. For big numbers, this is actually
infeasible to calculate with regular computers, but there is a good
negligible. To calculate the chances of such a collision, one can use the ideas
of the 'birthday problem' from probability theory. For big numbers, this is
actually unfeasible to calculate with regular computers, but there is a good
approximation:
.. math::
@ -127,7 +127,7 @@ approximation:
p(n, d) = 1 - e^{-n^2/(2d)}
Where `n` is the number of tries, and `d` is the number of possibilities.
For a concrete example lets assume a large datastore of 1 PiB, and an average
For a concrete example, let's assume a large datastore of 1 PiB and an average
chunk size of 4 MiB. That means :math:`n = 268435456` tries, and :math:`d =
2^{256}` possibilities. Inserting those values in the formula from earlier you
will see that the probability of a collision in that scenario is:
@ -136,31 +136,96 @@ will see that the probability of a collision in that scenario is:
3.1115 * 10^{-61}
For context, in a lottery game of guessing 6 out of 45, the chance to correctly
guess all 6 numbers is only :math:`1.2277 * 10^{-7}`, that means the chance of
a collision is about the same as winning 13 such lotto games *in a row*.
For context, in a lottery game of guessing 6 numbers out of 45, the chance to
correctly guess all 6 numbers is only :math:`1.2277 * 10^{-7}`. This means the
chance of a collision is about the same as winning 13 such lottery games *in a
row*.
In conclusion, it is extremely unlikely that such a collision would occur by
accident in a normal datastore.
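If you want to check the arithmetic, the approximation can be reproduced with a few lines of Rust, using :math:`1 - e^{-x} \approx x` for very small `x`:
.. code-block:: rust

fn main() {
    let n = 268_435_456f64;     // 1 PiB / 4 MiB average chunk size
    let d = 2f64.powi(256);     // number of possible SHA-256 digests
    let p = n * n / (2.0 * d);  // ~3.1e-61, matching the value above
    println!("approximate collision probability: {:e}", p);
}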
Additionally, SHA-256 is prone to length extension attacks, but since there is
an upper limit for how big the chunk are, this is not a problem, since a
an upper limit for how big the chunks are, this is not a problem, because a
potential attacker cannot arbitrarily add content to the data beyond that
limit.
File-based Backup
File-Based Backup
^^^^^^^^^^^^^^^^^
Since dynamically sized chunks (for file-based backups) are created on a custom
archive format (pxar) and not over the files directly, there is no relation
between files and the chunks. This means that the Proxmox Backup client has to
between the files and chunks. This means that the Proxmox Backup Client has to
read all files again for every backup, otherwise it would not be possible to
generate a consistent independent pxar archive where the original chunks can be
reused. Note that there will be still only new or change chunks be uploaded.
generate a consistent, independent pxar archive where the original chunks can be
reused. Note that in spite of this, only new or changed chunks will be uploaded.
Verification of encrypted chunks
Verification of Encrypted Chunks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For encrypted chunks, only the checksum of the original (plaintext) data is
available, making it impossible for the server (without the encryption key), to
available, making it impossible for the server (without the encryption key) to
verify its content against it. Instead only the CRC-32 checksum gets checked.
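Schematically (this assumes the ``crc32fast`` crate and is only meant to illustrate the check, not the actual server code):
.. code-block:: rust

// Without the encryption key, the server cannot recompute the plaintext
// SHA-256 digest, so it can only verify the CRC-32 of the encrypted blob.
fn verify_encrypted_chunk(encrypted: &[u8], stored_crc: u32) -> bool {
    crc32fast::hash(encrypted) == stored_crc
}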
Troubleshooting
---------------
Index files (*.fidx*, *.didx*) contain information about how to rebuild a file.
More precisely, they contain an ordered list of references to the chunks that
the original file was split into. If there is something wrong with a snapshot,
it might be useful to find out which chunks are referenced in it, and check
whether they are present and intact. The ``proxmox-backup-debug`` command line
tool can be used to inspect such files and recover their contents. For example,
to get a list of the referenced chunks of a *.fidx* index:
.. code-block:: console
# proxmox-backup-debug inspect file drive-scsi0.img.fidx
The same command can be used to inspect *.blob* files. Without the ``--decode``
parameter, just the size and the encryption type, if any, are printed. If
``--decode`` is set, the blob file is decoded into the specified file ('-' will
decode it directly to stdout).
The following example would print the decoded contents of
`qemu-server.conf.blob`. If the file you're trying to inspect is encrypted, a
path to the key file must be provided using ``--keyfile``.
.. code-block:: console
# proxmox-backup-debug inspect file qemu-server.conf.blob --decode -
You can also check in which index files a specific chunk file is referenced
with:
.. code-block:: console
# proxmox-backup-debug inspect chunk b531d3ffc9bd7c65748a61198c060678326a431db7eded874c327b7986e595e0 --reference-filter /path/in/a/datastore/directory
Here ``--reference-filter`` specifies where index files should be searched. This
can be an arbitrary path. If, for some reason, the filename of the chunk was
changed, you can explicitly specify the digest using ``--digest``. By default, the
chunk filename is used as the digest to look for. If no ``--reference-filter``
is specified, it will only print the CRC and encryption status of the chunk. You
can also decode chunks by setting the ``--decode`` flag. If the chunk is
encrypted, a ``--keyfile`` must be provided in order to decode it.
Restore without a Running Proxmox Backup Server
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It's possible to restore specific files from a snapshot, without a running
Proxmox Backup Server instance, using the ``recover`` subcommand, provided you
have access to the intact index and chunk files. Note that you also need the
corresponding key file if the backup was encrypted.
.. code-block:: console
# proxmox-backup-debug recover index drive-scsi0.img.fidx /path/to/.chunks
In the above example, the `/path/to/.chunks` argument is the path to the
directory that contains the chunks, and `drive-scsi0.img.fidx` is the index file
of the file you'd like to restore. Both paths can be absolute or relative. With
``--skip-crc``, it's possible to disable the CRC checks of the chunks. This
will speed up the process slightly and allow for trying to restore (partially)
corrupt chunks. It's recommended to always try without the skip-CRC option
first.

View File

@ -41,23 +41,23 @@ Binary Data (BLOBs)
~~~~~~~~~~~~~~~~~~~
This type is used to store smaller (< 16MB) binary data such as
configuration files. Larger files should be stored as image archive.
configuration files. Larger files should be stored as image archives.
.. caution:: Please do not store all files as BLOBs. Instead, use the
file archive to store whole directory trees.
file archive to store entire directory trees.
Catalog File: ``catalog.pcat1``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The catalog file is an index for file archives. It contains
the list of files and is used to speed up search operations.
the list of included files and is used to speed up search operations.
The Manifest: ``index.json``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The manifest contains the list of all backup files, their
The manifest contains a list of all backed up files, and their
sizes and checksums. It is used to verify the consistency of a
backup.
@ -68,18 +68,19 @@ Backup Type
The backup server groups backups by *type*, where *type* is one of:
``vm``
This type is used for :term:`virtual machine`\ s. Typically
This type is used for :term:`virtual machine`\ s. It typically
consists of the virtual machine's configuration file and an image archive
for each disk.
``ct``
This type is used for :term:`container`\ s. Consists of the container's
configuration and a single file archive for the filesystem content.
This type is used for :term:`container`\ s. It consists of the container's
configuration and a single file archive for the filesystem's contents.
``host``
This type is used for backups created from within the backed up machine.
Typically this would be a physical host but could also be a virtual machine
or container. Such backups may contain file and image archives, there are no restrictions in this regard.
This type is used for file/directory backups created from within a machine.
Typically this would be a physical host, but could also be a virtual machine
or container. Such backups may contain file and image archives; there are no
restrictions in this regard.
Backup ID

101
docs/traffic-control.rst Normal file
View File

@ -0,0 +1,101 @@
.. _sysadmin_traffic_control:
Traffic Control
---------------
.. image:: images/screenshots/pbs-gui-traffic-control-add.png
:align: right
:alt: Add a traffic control limit
Creating and restoring backups can produce lots of traffic and impact other
users of the network or shared storages.
Proxmox Backup Server allows you to limit network traffic for clients within
specified networks, using a token bucket filter (TBF).
This allows you to avoid network congestion or to prioritize traffic from
certain hosts.
You can manage the traffic control rules either via the web interface or using
the ``traffic-control`` commands of the ``proxmox-backup-manager`` command-line
tool.
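For intuition, a token bucket can be pictured with the following minimal Rust sketch. This is a conceptual illustration only, not the implementation used by Proxmox Backup Server:
.. code-block:: rust

use std::time::Instant;

// Traffic is allowed while enough tokens are in the bucket; tokens refill
// at `rate` bytes per second, up to a maximum of `burst` bytes.
struct TokenBucket {
    rate: f64,
    burst: f64,
    tokens: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(rate: f64, burst: f64) -> Self {
        Self { rate, burst, tokens: burst, last: Instant::now() }
    }

    /// Returns true if `bytes` may be sent now, consuming tokens on success.
    fn allow(&mut self, bytes: f64) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.tokens = (self.tokens + self.rate * elapsed).min(self.burst);
        self.last = now;
        if self.tokens >= bytes {
            self.tokens -= bytes;
            true
        } else {
            false
        }
    }
}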
.. note:: Sync jobs on the server are not affected by its rate-in limits. If
you want to limit the incoming traffic that a pull-based sync job
generates, you need to set up a job-specific rate-in limit. See
:ref:`syncjobs`.
The following command adds a traffic control rule to limit all IPv4 clients
(network ``0.0.0.0/0``) to 100 MB/s:
.. code-block:: console
# proxmox-backup-manager traffic-control create rule0 --network 0.0.0.0/0 \
--rate-in 100MB --rate-out 100MB \
--comment "Default rate limit (100MB/s) for all clients"
.. note:: To limit both IPv4 and IPv6 network spaces, you need to pass two
network parameters ``::/0`` and ``0.0.0.0/0``.
It is possible to restrict rules to certain time frames, for example, the
company's office hours:
.. tip:: You can use SI (base 10: KB, MB, ...) or IEC (base 2: KiB, MiB, ...)
units.
.. code-block:: console
# proxmox-backup-manager traffic-control update rule0 \
--timeframe "mon..fri 8-12" \
--timeframe "mon..fri 14:30-18"
If multiple rules match, the server uses the rule with the smaller network. For
example, we can override the setting for our private network (and the server
itself) with:
.. code-block:: console
# proxmox-backup-manager traffic-control create rule1 \
--network 192.168.2.0/24 \
--network 127.0.0.0/8 \
--rate-in 20GB --rate-out 20GB \
--comment "Use 20GB/s for the local network"
.. note:: The behavior is undefined if there are several rules for the same network.
If there are multiple rules that match the same network, all of them will be
applied, which means that the smallest one wins, as its bucket fills up the
fastest.
To list the current rules, use:
.. code-block:: console
# proxmox-backup-manager traffic-control list
┌───────┬─────────────┬─────────────┬─────────────────────────┬────────────...─┐
│ name │ rate-in │ rate-out │ network │ timeframe ... │
╞═══════╪═════════════╪═════════════╪═════════════════════════╪════════════...═╡
│ rule0 │ 100 MB │ 100 MB │ ["0.0.0.0/0"] │ ["mon..fri ... │
├───────┼─────────────┼─────────────┼─────────────────────────┼────────────...─┤
│ rule1 │ 20 GB │ 20 GB │ ["192.168.2.0/24", ...] │ ... │
└───────┴─────────────┴─────────────┴─────────────────────────┴────────────...─┘
Rules can also be removed:
.. code-block:: console
# proxmox-backup-manager traffic-control remove rule1
To show the state (current data rate) of all configured rules, use:
.. code-block:: console
# proxmox-backup-manager traffic-control traffic
┌───────┬─────────────┬──────────────┐
│ name │ cur-rate-in │ cur-rate-out │
╞═══════╪═════════════╪══════════════╡
│ rule0 │ 0 B │ 0 B │
├───────┼─────────────┼──────────────┤
│ rule1 │ 1.161 GiB │ 19.146 KiB │
└───────┴─────────────┴──────────────┘

View File

@ -15,17 +15,19 @@ Proxmox Backup Server supports several authentication realms, and you need to
choose the realm when you add a new user. Possible realms are:
:pam: Linux PAM standard authentication. Use this if you want to
authenticate as Linux system user (Users need to exist on the
authenticate as a Linux system user (users need to exist on the
system).
:pbs: Proxmox Backup Server realm. This type stores hashed passwords in
``/etc/proxmox-backup/shadow.json``.
After installation, there is a single user ``root@pam``, which
corresponds to the Unix superuser. User configuration information is stored in the file
``/etc/proxmox-backup/user.cfg``. You can use the
``proxmox-backup-manager`` command line tool to list or manipulate
users:
:openid: OpenID Connect server. Users can authenticate against an external
OpenID Connect server.
After installation, there is a single user, ``root@pam``, which corresponds to
the Unix superuser. User configuration information is stored in the file
``/etc/proxmox-backup/user.cfg``. You can use the ``proxmox-backup-manager``
command line tool to list or manipulate users:
.. code-block:: console
@ -40,13 +42,13 @@ users:
:align: right
:alt: Add a new user
The superuser has full administration rights on everything, so you
normally want to add other users with less privileges. You can add a new
The superuser has full administration rights on everything, so it's recommended
to add other users with fewer privileges. You can add a new
user with the ``user create`` subcommand or through the web
interface, under the **User Management** tab of **Configuration -> Access
Control**. The ``create`` subcommand lets you specify many options like
``--email`` or ``--password``. You can update or change any user properties
using the ``update`` subcommand later (**Edit** in the GUI):
using the ``user update`` subcommand later (**Edit** in the GUI):
.. code-block:: console
@ -71,16 +73,16 @@ The resulting user list looks like this:
│ root@pam │ 1 │ │ │ │ │ Superuser │
└──────────┴────────┴────────┴───────────┴──────────┴──────────────────┴──────────────────┘
Newly created users do not have any permissions. Please read the Access Control
Newly created users do not have any permissions. Please read the :ref:`user_acl`
section to learn how to set access permissions.
If you want to disable a user account, you can do that by setting ``--enable`` to ``0``
You can disable a user account by setting ``--enable`` to ``0``:
.. code-block:: console
# proxmox-backup-manager user update john@pbs --enable 0
Or completely remove the user with:
Or completely remove a user with:
.. code-block:: console
@ -95,7 +97,7 @@ API Tokens
:align: right
:alt: API Token Overview
Any authenticated user can generate API tokens which can in turn be used to
Any authenticated user can generate API tokens, which can in turn be used to
configure various clients, instead of directly providing the username and
password.
@ -117,7 +119,7 @@ The API token is passed from the client to the server by setting the
``Authorization`` HTTP header with method ``PBSAPIToken`` to the value
``TOKENID:TOKENSECRET``.
Generating new tokens can done using ``proxmox-backup-manager`` or the GUI:
You can generate tokens from the GUI or by using ``proxmox-backup-manager``:
.. code-block:: console
@ -154,9 +156,9 @@ section to learn how to set access permissions.
Access Control
--------------
By default new users and API tokens do not have any permission. Instead you
By default, new users and API tokens do not have any permissions. Instead you
need to specify what is allowed and what is not. You can do this by assigning
roles to users/tokens on specific objects like datastores or remotes. The
roles to users/tokens on specific objects, like datastores or remotes. The
following roles exist:
**NoAccess**
@ -176,7 +178,7 @@ following roles exist:
is not allowed to read the actual data.
**DatastoreReader**
Can Inspect datastore content and can do restores.
Can inspect datastore content and do restores.
**DatastoreBackup**
Can backup and restore owned backups.
@ -193,6 +195,18 @@ following roles exist:
**RemoteSyncOperator**
Is allowed to read data from a remote.
**TapeAudit**
Can view tape-related configuration and status.
**TapeAdmin**
Can do anything related to tape backup.
**TapeOperator**
Can do tape backup and restore (but no configuration changes).
**TapeReader**
Can read and inspect tape configuration and media content.
.. image:: images/screenshots/pbs-gui-user-management-add-user.png
:align: right
:alt: Add permissions for user
@ -236,7 +250,8 @@ You can list the ACLs of each user/token using the following command:
│ john@pbs │ /datastore/store1 │ 1 │ DatastoreAdmin │
└──────────┴───────────────────┴───────────┴────────────────┘
A single user/token can be assigned multiple permission sets for different datastores.
A single user/token can be assigned multiple permission sets for different
datastores.
.. Note::
Naming convention is important here. For datastores on the host,
@ -247,11 +262,11 @@ A single user/token can be assigned multiple permission sets for different datas
remote (see `Remote` below) and ``{storename}`` is the name of the datastore on
the remote.
API Token permissions
API Token Permissions
~~~~~~~~~~~~~~~~~~~~~
API token permissions are calculated based on ACLs containing their ID
independent of those of their corresponding user. The resulting permission set
API token permissions are calculated based on ACLs containing their ID,
independently of those of their corresponding user. The resulting permission set
on a given path is then intersected with that of the corresponding user.
In practice this means:
@ -259,17 +274,17 @@ In practice this means:
#. API tokens require their own ACL entries
#. API tokens can never do more than their corresponding user
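To make the intersection concrete, here is a minimal Rust sketch using made-up privilege bitflag values (the real constants live in ``pbs-api-types``; this is not the server code):
.. code-block:: rust

// hypothetical flag values, for illustration only
const PRIV_DATASTORE_BACKUP: u64 = 1 << 0;
const PRIV_DATASTORE_READ: u64 = 1 << 1;

// a token can never do more than its corresponding user
fn effective_token_privs(user_privs: u64, token_privs: u64) -> u64 {
    user_privs & token_privs
}

fn main() {
    let user = PRIV_DATASTORE_BACKUP | PRIV_DATASTORE_READ;
    let token = PRIV_DATASTORE_BACKUP; // the token's own ACL only grants backup
    assert_eq!(effective_token_privs(user, token), PRIV_DATASTORE_BACKUP);
}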
Effective permissions
Effective Permissions
~~~~~~~~~~~~~~~~~~~~~
To calculate and display the effective permission set of a user or API token
To calculate and display the effective permission set of a user or API token,
you can use the ``proxmox-backup-manager user permission`` command:
.. code-block:: console
# proxmox-backup-manager user permissions john@pbs --path /datastore/store1
Privileges with (*) have the propagate flag set
Path: /datastore/store1
- Datastore.Audit (*)
- Datastore.Backup (*)
@ -277,17 +292,17 @@ you can use the ``proxmox-backup-manager user permission`` command:
- Datastore.Prune (*)
- Datastore.Read (*)
- Datastore.Verify (*)
# proxmox-backup-manager acl update /datastore/store1 DatastoreBackup --auth-id 'john@pbs!client1'
# proxmox-backup-manager user permissions 'john@pbs!client1' --path /datastore/store1
Privileges with (*) have the propagate flag set
Path: /datastore/store1
- Datastore.Backup (*)
.. _user_tfa:
Two-factor authentication
Two-Factor Authentication
-------------------------
Introduction
@ -296,7 +311,7 @@ Introduction
With simple authentication, only a password (single factor) is required to
successfully claim an identity (authenticate), for example, to be able to log in
as `root@pam` on a specific instance of Proxmox Backup Server. In this case, if
the password gets stolen or leaked, anybody can use it to log in - even if they
the password gets leaked or stolen, anybody can use it to log in - even if they
should not be allowed to do so.
With two-factor authentication (TFA), a user is asked for an additional factor
@ -359,16 +374,18 @@ WebAuthn
For WebAuthn to work, you need to have two things:
* a trusted HTTPS certificate (for example, by using `Let's Encrypt
* A trusted HTTPS certificate (for example, by using `Let's Encrypt
<https://pbs.proxmox.com/wiki/index.php/HTTPS_Certificate_Configuration>`_).
While it probably works with an untrusted certificate, some browsers may warn
or refuse WebAuthn operations if it is not trusted.
* setup the WebAuthn configuration (see *Configuration -> Authentication* in the
Proxmox Backup Server web-interface). This can be auto-filled in most setups.
* Setup the WebAuthn configuration (see **Configuration -> Authentication** in
the Proxmox Backup Server web interface). This can be auto-filled in most
setups.
Once you have fulfilled both of these requirements, you can add a WebAuthn
configuration in the *Access Control* panel.
configuration in the **Two Factor Authentication** tab of the **Access Control**
panel.
.. _user_tfa_setup_recovery_keys:
@ -380,7 +397,8 @@ Recovery Keys
:alt: Add a new user
Recovery key codes do not need any preparation; you can simply create a set of
recovery keys in the *Access Control* panel.
recovery keys in the **Two Factor Authentication** tab of the **Access Control**
panel.
.. note:: There can only be one set of single-use recovery keys per user at any
time.

View File

@ -1 +1 @@
deb https://enterprise.proxmox.com/debian/pbs buster pbs-enterprise
deb https://enterprise.proxmox.com/debian/pbs bullseye pbs-enterprise

View File

@ -1,4 +1,4 @@
use anyhow::{Error};
use anyhow::Error;
// chacha20-poly1305

View File

@ -1,6 +1,7 @@
use anyhow::{Error};
use proxmox::api::{*, cli::*};
use proxmox_schema::*;
use proxmox_router::cli::*;
#[api(
input: {

View File

@ -1,9 +1,9 @@
use std::io::Write;
use anyhow::{Error};
use anyhow::Error;
use proxmox_backup::api2::types::Authid;
use proxmox_backup::client::{HttpClient, HttpClientOptions, BackupReader};
use pbs_api_types::Authid;
use pbs_client::{HttpClient, HttpClientOptions, BackupReader};
pub struct DummyWriter {
bytes: usize,
@ -34,7 +34,7 @@ async fn run() -> Result<(), Error> {
let client = HttpClient::new(host, 8007, auth_id, options)?;
let backup_time = proxmox::tools::time::parse_rfc3339("2019-06-28T10:49:48Z")?;
let backup_time = proxmox_time::parse_rfc3339("2019-06-28T10:49:48Z")?;
let client = BackupReader::start(client, None, "store2", "host", "elsa", backup_time, true)
.await?;
@ -59,7 +59,7 @@ async fn run() -> Result<(), Error> {
}
fn main() {
if let Err(err) = proxmox_backup::tools::runtime::main(run()) {
if let Err(err) = proxmox_async::runtime::main(run()) {
eprintln!("ERROR: {}", err);
}
println!("DONE");

View File

@ -1,9 +1,9 @@
use anyhow::{bail, Error};
use std::thread;
use std::path::PathBuf;
use std::io::Write;
use anyhow::{bail, Error};
// tar handles files that shrink during backup by simply padding with zeros.
//
// this binary runs multiple threads which write some large files, then truncates

View File

@ -69,7 +69,7 @@ fn send_request(
}
fn main() -> Result<(), Error> {
proxmox_backup::tools::runtime::main(run())
proxmox_async::runtime::main(run())
}
async fn run() -> Result<(), Error> {

View File

@ -69,7 +69,7 @@ fn send_request(
}
fn main() -> Result<(), Error> {
proxmox_backup::tools::runtime::main(run())
proxmox_async::runtime::main(run())
}
async fn run() -> Result<(), Error> {

View File

@ -6,10 +6,10 @@ use hyper::{Body, Request, Response};
use openssl::ssl::{SslAcceptor, SslFiletype, SslMethod};
use tokio::net::{TcpListener, TcpStream};
use proxmox_backup::configdir;
use pbs_buildcfg::configdir;
fn main() -> Result<(), Error> {
proxmox_backup::tools::runtime::main(run())
proxmox_async::runtime::main(run())
}
async fn run() -> Result<(), Error> {

View File

@ -5,7 +5,7 @@ use hyper::{Body, Request, Response};
use tokio::net::{TcpListener, TcpStream};
fn main() -> Result<(), Error> {
proxmox_backup::tools::runtime::main(run())
proxmox_async::runtime::main(run())
}
async fn run() -> Result<(), Error> {

View File

@ -5,7 +5,7 @@ extern crate proxmox_backup;
use anyhow::{Error};
use std::io::{Read, Write};
use proxmox_backup::backup::*;
use pbs_datastore::Chunker;
struct ChunkWriter {
chunker: Chunker,

View File

@ -1,7 +1,6 @@
extern crate proxmox_backup;
//use proxmox_backup::backup::chunker::*;
use proxmox_backup::backup::*;
use pbs_datastore::Chunker;
fn main() {

View File

@ -3,7 +3,7 @@ use futures::*;
extern crate proxmox_backup;
use proxmox_backup::backup::*;
use pbs_client::ChunkStream;
// Test Chunker with real data read from a file.
//
@ -13,7 +13,7 @@ use proxmox_backup::backup::*;
// Note: I can currently get about 830MB/s
fn main() {
if let Err(err) = proxmox_backup::tools::runtime::main(run()) {
if let Err(err) = proxmox_async::runtime::main(run()) {
panic!("ERROR: {}", err);
}
}

View File

@ -1,7 +1,7 @@
use anyhow::{Error};
use proxmox_backup::api2::types::Authid;
use proxmox_backup::client::*;
use pbs_client::{HttpClient, HttpClientOptions, BackupWriter};
use pbs_api_types::Authid;
async fn upload_speed() -> Result<f64, Error> {
@ -16,7 +16,7 @@ async fn upload_speed() -> Result<f64, Error> {
let client = HttpClient::new(host, 8007, auth_id, options)?;
let backup_time = proxmox::tools::time::epoch_i64();
let backup_time = proxmox_time::epoch_i64();
let client = BackupWriter::start(client, None, datastore, "host", "speedtest", backup_time, false, true).await?;
@ -27,7 +27,7 @@ async fn upload_speed() -> Result<f64, Error> {
}
fn main() {
match proxmox_backup::tools::runtime::main(upload_speed()) {
match proxmox_async::runtime::main(upload_speed()) {
Ok(mbs) => {
println!("average upload speed: {} MB/s", mbs);
}

22
pbs-api-types/Cargo.toml Normal file
View File

@ -0,0 +1,22 @@
[package]
name = "pbs-api-types"
version = "0.1.0"
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2018"
description = "general API type helpers for PBS"
[dependencies]
anyhow = "1.0"
hex = "0.4.3"
lazy_static = "1.4"
libc = "0.2"
nix = "0.19.1"
openssl = "0.10"
regex = "1.2"
serde = { version = "1.0", features = ["derive"] }
proxmox = "0.15.3"
proxmox-lang = "1.0.0"
proxmox-schema = { version = "1.0.1", features = [ "api-macro" ] }
proxmox-time = "1.1"
proxmox-uuid = { version = "1.0.0", features = [ "serde" ] }

280
pbs-api-types/src/acl.rs Normal file
View File

@ -0,0 +1,280 @@
use std::str::FromStr;
use serde::de::{value, IntoDeserializer};
use serde::{Deserialize, Serialize};
use proxmox_lang::constnamedbitmap;
use proxmox_schema::{
api, const_regex, ApiStringFormat, BooleanSchema, EnumEntry, Schema, StringSchema,
};
const_regex! {
pub ACL_PATH_REGEX = concat!(r"^(?:/|", r"(?:/", PROXMOX_SAFE_ID_REGEX_STR!(), ")+", r")$");
}
// define Privilege bitfield
constnamedbitmap! {
/// Contains a list of privilege name to privilege value mappings.
///
/// The names are used when displaying/persisting privileges anywhere, the values are used to
/// allow easy matching of privileges as bitflags.
PRIVILEGES: u64 => {
/// Sys.Audit allows knowing about the system and its status
PRIV_SYS_AUDIT("Sys.Audit");
/// Sys.Modify allows modifying system-level configuration
PRIV_SYS_MODIFY("Sys.Modify");
/// Sys.PowerManagement allows to poweroff/reboot/.. the system
PRIV_SYS_POWER_MANAGEMENT("Sys.PowerManagement");
/// Datastore.Audit allows knowing about a datastore,
/// including reading the configuration entry and listing its contents
PRIV_DATASTORE_AUDIT("Datastore.Audit");
/// Datastore.Allocate allows creating or deleting datastores
PRIV_DATASTORE_ALLOCATE("Datastore.Allocate");
/// Datastore.Modify allows modifying a datastore and its contents
PRIV_DATASTORE_MODIFY("Datastore.Modify");
/// Datastore.Read allows reading arbitrary backup contents
PRIV_DATASTORE_READ("Datastore.Read");
/// Allows verifying a datastore
PRIV_DATASTORE_VERIFY("Datastore.Verify");
/// Datastore.Backup allows Datastore.Read|Verify and creating new snapshots,
/// but also requires backup ownership
PRIV_DATASTORE_BACKUP("Datastore.Backup");
/// Datastore.Prune allows deleting snapshots,
/// but also requires backup ownership
PRIV_DATASTORE_PRUNE("Datastore.Prune");
/// Permissions.Modify allows modifying ACLs
PRIV_PERMISSIONS_MODIFY("Permissions.Modify");
/// Remote.Audit allows reading remote.cfg and sync.cfg entries
PRIV_REMOTE_AUDIT("Remote.Audit");
/// Remote.Modify allows modifying remote.cfg
PRIV_REMOTE_MODIFY("Remote.Modify");
/// Remote.Read allows reading data from a configured `Remote`
PRIV_REMOTE_READ("Remote.Read");
/// Sys.Console allows access to the system's console
PRIV_SYS_CONSOLE("Sys.Console");
/// Tape.Audit allows reading tape backup configuration and status
PRIV_TAPE_AUDIT("Tape.Audit");
/// Tape.Modify allows modifying tape backup configuration
PRIV_TAPE_MODIFY("Tape.Modify");
/// Tape.Write allows writing tape media
PRIV_TAPE_WRITE("Tape.Write");
/// Tape.Read allows reading tape backup configuration and media contents
PRIV_TAPE_READ("Tape.Read");
/// Realm.Allocate allows viewing, creating, modifying and deleting realms
PRIV_REALM_ALLOCATE("Realm.Allocate");
}
}
/// Admin always has all privileges. It can do everything except a few actions
/// which are limited to the `root@pam` superuser
pub const ROLE_ADMIN: u64 = u64::MAX;
/// NoAccess can be used to remove privileges from specific (sub-)paths
pub const ROLE_NO_ACCESS: u64 = 0;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Audit can view configuration and status information, but not modify it.
pub const ROLE_AUDIT: u64 = 0
| PRIV_SYS_AUDIT
| PRIV_DATASTORE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Admin can do anything on the datastore.
pub const ROLE_DATASTORE_ADMIN: u64 = 0
| PRIV_DATASTORE_AUDIT
| PRIV_DATASTORE_MODIFY
| PRIV_DATASTORE_READ
| PRIV_DATASTORE_VERIFY
| PRIV_DATASTORE_BACKUP
| PRIV_DATASTORE_PRUNE;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Reader can read/verify datastore content and do restore
pub const ROLE_DATASTORE_READER: u64 = 0
| PRIV_DATASTORE_AUDIT
| PRIV_DATASTORE_VERIFY
| PRIV_DATASTORE_READ;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Backup can do backup and restore, but no prune.
pub const ROLE_DATASTORE_BACKUP: u64 = 0
| PRIV_DATASTORE_BACKUP;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.PowerUser can do backup, restore, and prune.
pub const ROLE_DATASTORE_POWERUSER: u64 = 0
| PRIV_DATASTORE_PRUNE
| PRIV_DATASTORE_BACKUP;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Audit can audit the datastore.
pub const ROLE_DATASTORE_AUDIT: u64 = 0
| PRIV_DATASTORE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Remote.Audit can audit the remote
pub const ROLE_REMOTE_AUDIT: u64 = 0
| PRIV_REMOTE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Remote.Admin can do anything on the remote.
pub const ROLE_REMOTE_ADMIN: u64 = 0
| PRIV_REMOTE_AUDIT
| PRIV_REMOTE_MODIFY
| PRIV_REMOTE_READ;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Remote.SyncOperator can do read and prune on the remote.
pub const ROLE_REMOTE_SYNC_OPERATOR: u64 = 0
| PRIV_REMOTE_AUDIT
| PRIV_REMOTE_READ;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Audit can audit the tape backup configuration and media content
pub const ROLE_TAPE_AUDIT: u64 = 0
| PRIV_TAPE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Admin can do anything on the tape backup
pub const ROLE_TAPE_ADMIN: u64 = 0
| PRIV_TAPE_AUDIT
| PRIV_TAPE_MODIFY
| PRIV_TAPE_READ
| PRIV_TAPE_WRITE;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Operator can do tape backup and restore (but no configuration changes)
pub const ROLE_TAPE_OPERATOR: u64 = 0
| PRIV_TAPE_AUDIT
| PRIV_TAPE_READ
| PRIV_TAPE_WRITE;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Reader can read and inspect tape content
pub const ROLE_TAPE_READER: u64 = 0
| PRIV_TAPE_AUDIT
| PRIV_TAPE_READ;
/// NoAccess can be used to remove privileges from specific (sub-)paths
pub const ROLE_NAME_NO_ACCESS: &str = "NoAccess";
#[api(
type_text: "<role>",
)]
#[repr(u64)]
#[derive(Serialize, Deserialize)]
/// Enum representing roles via their [PRIVILEGES] combination.
///
/// Since privileges are implemented as bitflags, each unique combination of privileges maps to a
/// single, unique `u64` value that is used in this enum definition.
pub enum Role {
/// Administrator
Admin = ROLE_ADMIN,
/// Auditor
Audit = ROLE_AUDIT,
/// Disable Access
NoAccess = ROLE_NO_ACCESS,
/// Datastore Administrator
DatastoreAdmin = ROLE_DATASTORE_ADMIN,
/// Datastore Reader (inspect datastore content and do restores)
DatastoreReader = ROLE_DATASTORE_READER,
/// Datastore Backup (backup and restore owned backups)
DatastoreBackup = ROLE_DATASTORE_BACKUP,
/// Datastore PowerUser (backup, restore and prune owned backup)
DatastorePowerUser = ROLE_DATASTORE_POWERUSER,
/// Datastore Auditor
DatastoreAudit = ROLE_DATASTORE_AUDIT,
/// Remote Auditor
RemoteAudit = ROLE_REMOTE_AUDIT,
/// Remote Administrator
RemoteAdmin = ROLE_REMOTE_ADMIN,
/// Synchronisation Operator
RemoteSyncOperator = ROLE_REMOTE_SYNC_OPERATOR,
/// Tape Auditor
TapeAudit = ROLE_TAPE_AUDIT,
/// Tape Administrator
TapeAdmin = ROLE_TAPE_ADMIN,
/// Tape Operator
TapeOperator = ROLE_TAPE_OPERATOR,
/// Tape Reader
TapeReader = ROLE_TAPE_READER,
}
impl FromStr for Role {
type Err = value::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Self::deserialize(s.into_deserializer())
}
}
pub const ACL_PATH_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&ACL_PATH_REGEX);
pub const ACL_PATH_SCHEMA: Schema = StringSchema::new("Access control path.")
.format(&ACL_PATH_FORMAT)
.min_length(1)
.max_length(128)
.schema();
pub const ACL_PROPAGATE_SCHEMA: Schema =
BooleanSchema::new("Allow to propagate (inherit) permissions.")
.default(true)
.schema();
pub const ACL_UGID_TYPE_SCHEMA: Schema = StringSchema::new("Type of 'ugid' property.")
.format(&ApiStringFormat::Enum(&[
EnumEntry::new("user", "User"),
EnumEntry::new("group", "Group"),
]))
.schema();
#[api(
properties: {
propagate: {
schema: ACL_PROPAGATE_SCHEMA,
},
path: {
schema: ACL_PATH_SCHEMA,
},
ugid_type: {
schema: ACL_UGID_TYPE_SCHEMA,
},
ugid: {
type: String,
description: "User or Group ID.",
},
roleid: {
type: Role,
}
}
)]
#[derive(Serialize, Deserialize)]
/// ACL list entry.
pub struct AclListItem {
pub path: String,
pub ugid: String,
pub ugid_type: String,
pub propagate: bool,
pub roleid: String,
}

View File

@ -0,0 +1,98 @@
use std::fmt::{self, Display};
use anyhow::Error;
use serde::{Deserialize, Serialize};
use proxmox_schema::api;
#[api(default: "encrypt")]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
/// Defines whether data is encrypted (using an AEAD cipher), only signed, or neither.
pub enum CryptMode {
/// Don't encrypt.
None,
/// Encrypt.
Encrypt,
/// Only sign.
SignOnly,
}
#[derive(Debug, Eq, PartialEq, Hash, Clone, Deserialize, Serialize)]
#[serde(transparent)]
/// 32-byte fingerprint, usually calculated with SHA256.
pub struct Fingerprint {
#[serde(with = "bytes_as_fingerprint")]
bytes: [u8; 32],
}
impl Fingerprint {
pub fn new(bytes: [u8; 32]) -> Self {
Self { bytes }
}
pub fn bytes(&self) -> &[u8; 32] {
&self.bytes
}
pub fn signature(&self) -> String {
as_fingerprint(&self.bytes)
}
}
/// Display as short key ID
impl Display for Fingerprint {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", as_fingerprint(&self.bytes[0..8]))
}
}
impl std::str::FromStr for Fingerprint {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Error> {
let mut tmp = s.to_string();
tmp.retain(|c| c != ':');
let bytes = proxmox::tools::hex_to_digest(&tmp)?;
Ok(Fingerprint::new(bytes))
}
}
fn as_fingerprint(bytes: &[u8]) -> String {
hex::encode(bytes)
.as_bytes()
.chunks(2)
.map(|v| unsafe { std::str::from_utf8_unchecked(v) }) // it's a hex string
.collect::<Vec<&str>>().join(":")
}
pub mod bytes_as_fingerprint {
use std::mem::MaybeUninit;
use serde::{Deserialize, Serializer, Deserializer};
pub fn serialize<S>(
bytes: &[u8; 32],
serializer: S,
) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let s = super::as_fingerprint(bytes);
serializer.serialize_str(&s)
}
pub fn deserialize<'de, D>(
deserializer: D,
) -> Result<[u8; 32], D::Error>
where
D: Deserializer<'de>,
{
// TODO: more efficiently implement with a Visitor implementing visit_str using split() and
// hex::decode by-byte
let mut s = String::deserialize(deserializer)?;
s.retain(|c| c != ':');
let mut out = MaybeUninit::<[u8; 32]>::uninit();
hex::decode_to_slice(s.as_bytes(), unsafe { &mut (*out.as_mut_ptr())[..] })
.map_err(serde::de::Error::custom)?;
Ok(unsafe { out.assume_init() })
}
}

View File

@ -0,0 +1,627 @@
use serde::{Deserialize, Serialize};
use proxmox_schema::{
api, const_regex, ApiStringFormat, ApiType, ArraySchema, EnumEntry, IntegerSchema, ReturnType,
Schema, StringSchema, Updater,
};
use crate::{
PROXMOX_SAFE_ID_FORMAT, SHA256_HEX_REGEX, SINGLE_LINE_COMMENT_SCHEMA, CryptMode, UPID,
Fingerprint, Userid, Authid,
GC_SCHEDULE_SCHEMA, DATASTORE_NOTIFY_STRING_SCHEMA, PRUNE_SCHEDULE_SCHEMA,
};
const_regex!{
pub BACKUP_TYPE_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), r")$");
pub BACKUP_ID_REGEX = concat!(r"^", BACKUP_ID_RE!(), r"$");
pub BACKUP_DATE_REGEX = concat!(r"^", BACKUP_TIME_RE!() ,r"$");
pub GROUP_PATH_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), r")$");
pub BACKUP_FILE_REGEX = r"^.*\.([fd]idx|blob)$";
pub SNAPSHOT_PATH_REGEX = concat!(r"^", SNAPSHOT_PATH_REGEX_STR!(), r"$");
pub DATASTORE_MAP_REGEX = concat!(r"(:?", PROXMOX_SAFE_ID_REGEX_STR!(), r"=)?", PROXMOX_SAFE_ID_REGEX_STR!());
}
pub const CHUNK_DIGEST_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&SHA256_HEX_REGEX);
pub const DIR_NAME_SCHEMA: Schema = StringSchema::new("Directory name")
.min_length(1)
.max_length(4096)
.schema();
pub const BACKUP_ARCHIVE_NAME_SCHEMA: Schema = StringSchema::new("Backup archive name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.schema();
pub const BACKUP_ID_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&BACKUP_ID_REGEX);
pub const BACKUP_GROUP_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&GROUP_PATH_REGEX);
pub const BACKUP_ID_SCHEMA: Schema = StringSchema::new("Backup ID.")
.format(&BACKUP_ID_FORMAT)
.schema();
pub const BACKUP_TYPE_SCHEMA: Schema = StringSchema::new("Backup type.")
.format(&ApiStringFormat::Enum(&[
EnumEntry::new("vm", "Virtual Machine Backup"),
EnumEntry::new("ct", "Container Backup"),
EnumEntry::new("host", "Host Backup"),
]))
.schema();
pub const BACKUP_TIME_SCHEMA: Schema = IntegerSchema::new("Backup time (Unix epoch.)")
.minimum(1_547_797_308)
.schema();
pub const BACKUP_GROUP_SCHEMA: Schema = StringSchema::new("Backup Group")
.format(&BACKUP_GROUP_FORMAT)
.schema();
pub const DATASTORE_SCHEMA: Schema = StringSchema::new("Datastore name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const CHUNK_DIGEST_SCHEMA: Schema = StringSchema::new("Chunk digest (SHA256).")
.format(&CHUNK_DIGEST_FORMAT)
.schema();
pub const DATASTORE_MAP_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&DATASTORE_MAP_REGEX);
pub const DATASTORE_MAP_SCHEMA: Schema = StringSchema::new("Datastore mapping.")
.format(&DATASTORE_MAP_FORMAT)
.min_length(3)
.max_length(65)
.type_text("(<source>=)?<target>")
.schema();
pub const DATASTORE_MAP_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Datastore mapping list.", &DATASTORE_MAP_SCHEMA)
.schema();
pub const DATASTORE_MAP_LIST_SCHEMA: Schema = StringSchema::new(
"A list of Datastore mappings (or single datastore), comma separated. \
For example 'a=b,e' maps the source datastore 'a' to target 'b' and \
all other sources to the default 'e'. If no default is given, only the \
specified sources are mapped.")
.format(&ApiStringFormat::PropertyString(&DATASTORE_MAP_ARRAY_SCHEMA))
.schema();
pub const PRUNE_SCHEMA_KEEP_DAILY: Schema = IntegerSchema::new("Number of daily backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_HOURLY: Schema =
IntegerSchema::new("Number of hourly backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_LAST: Schema = IntegerSchema::new("Number of backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_MONTHLY: Schema =
IntegerSchema::new("Number of monthly backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_WEEKLY: Schema =
IntegerSchema::new("Number of weekly backups to keep.")
.minimum(1)
.schema();
pub const PRUNE_SCHEMA_KEEP_YEARLY: Schema =
IntegerSchema::new("Number of yearly backups to keep.")
.minimum(1)
.schema();
#[api(
properties: {
"keep-last": {
schema: PRUNE_SCHEMA_KEEP_LAST,
optional: true,
},
"keep-hourly": {
schema: PRUNE_SCHEMA_KEEP_HOURLY,
optional: true,
},
"keep-daily": {
schema: PRUNE_SCHEMA_KEEP_DAILY,
optional: true,
},
"keep-weekly": {
schema: PRUNE_SCHEMA_KEEP_WEEKLY,
optional: true,
},
"keep-monthly": {
schema: PRUNE_SCHEMA_KEEP_MONTHLY,
optional: true,
},
"keep-yearly": {
schema: PRUNE_SCHEMA_KEEP_YEARLY,
optional: true,
},
}
)]
#[derive(Serialize, Deserialize, Default)]
#[serde(rename_all = "kebab-case")]
/// Common pruning options
pub struct PruneOptions {
#[serde(skip_serializing_if="Option::is_none")]
pub keep_last: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_hourly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_daily: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_weekly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_monthly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_yearly: Option<u64>,
}
#[api(
properties: {
name: {
schema: DATASTORE_SCHEMA,
},
path: {
schema: DIR_NAME_SCHEMA,
},
"notify-user": {
optional: true,
type: Userid,
},
"notify": {
optional: true,
schema: DATASTORE_NOTIFY_STRING_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
"gc-schedule": {
optional: true,
schema: GC_SCHEDULE_SCHEMA,
},
"prune-schedule": {
optional: true,
schema: PRUNE_SCHEDULE_SCHEMA,
},
"keep-last": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_LAST,
},
"keep-hourly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_HOURLY,
},
"keep-daily": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_DAILY,
},
"keep-weekly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_WEEKLY,
},
"keep-monthly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_MONTHLY,
},
"keep-yearly": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_YEARLY,
},
"verify-new": {
description: "If enabled, all new backups will be verified right after completion.",
optional: true,
type: bool,
},
}
)]
#[derive(Serialize,Deserialize,Updater)]
#[serde(rename_all="kebab-case")]
/// Datastore configuration properties.
pub struct DataStoreConfig {
#[updater(skip)]
pub name: String,
#[updater(skip)]
pub path: String,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub gc_schedule: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub prune_schedule: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_last: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_hourly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_daily: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_weekly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_monthly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_yearly: Option<u64>,
/// If enabled, all backups will be verified right after completion.
#[serde(skip_serializing_if="Option::is_none")]
pub verify_new: Option<bool>,
/// Send job email notification to this user
#[serde(skip_serializing_if="Option::is_none")]
pub notify_user: Option<Userid>,
/// Send notification only for job errors
#[serde(skip_serializing_if="Option::is_none")]
pub notify: Option<String>,
}
#[api(
properties: {
store: {
schema: DATASTORE_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Basic information about a datastore.
pub struct DataStoreListItem {
pub store: String,
pub comment: Option<String>,
}
#[api(
properties: {
"filename": {
schema: BACKUP_ARCHIVE_NAME_SCHEMA,
},
"crypt-mode": {
type: CryptMode,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Basic information about archive files inside a backup snapshot.
pub struct BackupContent {
pub filename: String,
/// Info if file is encrypted, signed, or neither.
#[serde(skip_serializing_if = "Option::is_none")]
pub crypt_mode: Option<CryptMode>,
/// Archive size (from backup manifest).
#[serde(skip_serializing_if = "Option::is_none")]
pub size: Option<u64>,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Result of a verify operation.
pub enum VerifyState {
/// Verification was successful
Ok,
/// Verification reported one or more errors
Failed,
}
#[api(
properties: {
upid: {
type: UPID,
},
state: {
type: VerifyState,
},
},
)]
#[derive(Serialize, Deserialize)]
/// Task properties.
pub struct SnapshotVerifyState {
/// UPID of the verify task
pub upid: UPID,
/// State of the verification. Enum.
pub state: VerifyState,
}
#[api(
properties: {
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
},
"backup-time": {
schema: BACKUP_TIME_SCHEMA,
},
comment: {
schema: SINGLE_LINE_COMMENT_SCHEMA,
optional: true,
},
verification: {
type: SnapshotVerifyState,
optional: true,
},
fingerprint: {
type: String,
optional: true,
},
files: {
items: {
schema: BACKUP_ARCHIVE_NAME_SCHEMA
},
},
owner: {
type: Authid,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Basic information about backup snapshot.
pub struct SnapshotListItem {
pub backup_type: String, // enum
pub backup_id: String,
pub backup_time: i64,
/// The first line from manifest "notes"
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
/// The result of the last run verify task
#[serde(skip_serializing_if = "Option::is_none")]
pub verification: Option<SnapshotVerifyState>,
/// Fingerprint of encryption key
#[serde(skip_serializing_if = "Option::is_none")]
pub fingerprint: Option<Fingerprint>,
/// List of contained archive files.
pub files: Vec<BackupContent>,
/// Overall snapshot size (sum of all archive sizes).
#[serde(skip_serializing_if = "Option::is_none")]
pub size: Option<u64>,
/// The owner of the snapshots group
#[serde(skip_serializing_if = "Option::is_none")]
pub owner: Option<Authid>,
/// Protection from prunes
#[serde(default)]
pub protected: bool,
}
#[api(
properties: {
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
},
"last-backup": {
schema: BACKUP_TIME_SCHEMA,
},
"backup-count": {
type: Integer,
},
files: {
items: {
schema: BACKUP_ARCHIVE_NAME_SCHEMA
},
},
owner: {
type: Authid,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Basic information about a backup group.
pub struct GroupListItem {
pub backup_type: String, // enum
pub backup_id: String,
pub last_backup: i64,
/// Number of contained snapshots
pub backup_count: u64,
/// List of contained archive files.
pub files: Vec<String>,
/// The owner of group
#[serde(skip_serializing_if = "Option::is_none")]
pub owner: Option<Authid>,
/// The first line from group "notes"
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
}
#[api(
properties: {
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
},
"backup-time": {
schema: BACKUP_TIME_SCHEMA,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Prune result.
pub struct PruneListItem {
pub backup_type: String, // enum
pub backup_id: String,
pub backup_time: i64,
/// Keep snapshot
pub keep: bool,
}
#[api(
properties: {
ct: {
type: TypeCounts,
optional: true,
},
host: {
type: TypeCounts,
optional: true,
},
vm: {
type: TypeCounts,
optional: true,
},
other: {
type: TypeCounts,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize, Default)]
/// Counts of groups/snapshots per BackupType.
pub struct Counts {
/// The counts for CT backups
pub ct: Option<TypeCounts>,
/// The counts for Host backups
pub host: Option<TypeCounts>,
/// The counts for VM backups
pub vm: Option<TypeCounts>,
/// The counts for other backup types
pub other: Option<TypeCounts>,
}
#[api()]
#[derive(Serialize, Deserialize, Default)]
/// Backup Type group/snapshot counts.
pub struct TypeCounts {
/// The number of groups of the type.
pub groups: u64,
/// The number of snapshots of the type.
pub snapshots: u64,
}
#[api(
properties: {
"upid": {
optional: true,
type: UPID,
},
},
)]
#[derive(Clone, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Garbage collection status.
pub struct GarbageCollectionStatus {
pub upid: Option<String>,
/// Number of processed index files.
pub index_file_count: usize,
/// Sum of bytes referred by index files.
pub index_data_bytes: u64,
/// Bytes used on disk.
pub disk_bytes: u64,
/// Chunks used on disk.
pub disk_chunks: usize,
/// Sum of removed bytes.
pub removed_bytes: u64,
/// Number of removed chunks.
pub removed_chunks: usize,
/// Sum of pending bytes (pending removal - kept for safety).
pub pending_bytes: u64,
/// Number of pending chunks (pending removal - kept for safety).
pub pending_chunks: usize,
/// Number of chunks marked as .bad by verify that have been removed by GC.
pub removed_bad: usize,
/// Number of chunks still marked as .bad after garbage collection.
pub still_bad: usize,
}
impl Default for GarbageCollectionStatus {
fn default() -> Self {
GarbageCollectionStatus {
upid: None,
index_file_count: 0,
index_data_bytes: 0,
disk_bytes: 0,
disk_chunks: 0,
removed_bytes: 0,
removed_chunks: 0,
pending_bytes: 0,
pending_chunks: 0,
removed_bad: 0,
still_bad: 0,
}
}
}
#[api(
properties: {
"gc-status": {
type: GarbageCollectionStatus,
optional: true,
},
counts: {
type: Counts,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// Overall Datastore status and useful information.
pub struct DataStoreStatus {
/// Total space (bytes).
pub total: u64,
/// Used space (bytes).
pub used: u64,
/// Available space (bytes).
pub avail: u64,
/// Status of last GC
#[serde(skip_serializing_if="Option::is_none")]
pub gc_status: Option<GarbageCollectionStatus>,
/// Group/Snapshot counts
#[serde(skip_serializing_if="Option::is_none")]
pub counts: Option<Counts>,
}
pub const ADMIN_DATASTORE_LIST_SNAPSHOTS_RETURN_TYPE: ReturnType = ReturnType {
optional: false,
schema: &ArraySchema::new(
"Returns the list of snapshots.",
&SnapshotListItem::API_SCHEMA,
).schema(),
};
pub const ADMIN_DATASTORE_LIST_SNAPSHOT_FILES_RETURN_TYPE: ReturnType = ReturnType {
optional: false,
schema: &ArraySchema::new(
"Returns the list of archive files inside a backup snapshots.",
&BackupContent::API_SCHEMA,
).schema(),
};
pub const ADMIN_DATASTORE_LIST_GROUPS_RETURN_TYPE: ReturnType = ReturnType {
optional: false,
schema: &ArraySchema::new(
"Returns the list of backup groups.",
&GroupListItem::API_SCHEMA,
).schema(),
};
pub const ADMIN_DATASTORE_PRUNE_RETURN_TYPE: ReturnType = ReturnType {
optional: false,
schema: &ArraySchema::new(
"Returns the list of snapshots and a flag indicating if there are kept or removed.",
&PruneListItem::API_SCHEMA,
).schema(),
};
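// Illustrative sketch only (assumes `serde_json` is available, e.g. as a
// dev-dependency): the kebab-case rename above makes the fields serialize as
// "backup-type", "backup-id" and "backup-time".
#[test]
fn example_prune_list_item_serialization() {
    let item = PruneListItem {
        backup_type: "vm".to_string(),
        backup_id: "100".to_string(),
        backup_time: 1637000000,
        keep: true,
    };
    let json = serde_json::to_value(&item).unwrap();
    assert_eq!(json["backup-type"], "vm");
    assert_eq!(json["keep"], true);
}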

View File

@ -1,7 +1,8 @@
use serde::{Deserialize, Serialize};
use proxmox::api::api;
#[api()]
use proxmox_schema::api;
#[api]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// General status information about a running VM file-restore daemon
@ -12,4 +13,3 @@ pub struct RestoreDaemonStatus {
/// not set, as then the status call will have reset the timer before returning the value
pub timeout: i64,
}

View File

@ -0,0 +1,341 @@
use anyhow::{bail, Error};
use proxmox_schema::{ApiStringFormat, ApiType, Schema, StringSchema, UpdaterType};
/// Size units for byte sizes
#[derive(Debug, Copy, Clone, PartialEq)]
pub enum SizeUnit {
Byte,
// SI (base 10)
KByte,
MByte,
GByte,
TByte,
PByte,
// IEC (base 2)
Kibi,
Mebi,
Gibi,
Tebi,
Pebi,
}
impl SizeUnit {
/// Returns the scaling factor
pub fn factor(&self) -> f64 {
match self {
SizeUnit::Byte => 1.0,
// SI (base 10)
SizeUnit::KByte => 1_000.0,
SizeUnit::MByte => 1_000_000.0,
SizeUnit::GByte => 1_000_000_000.0,
SizeUnit::TByte => 1_000_000_000_000.0,
SizeUnit::PByte => 1_000_000_000_000_000.0,
// IEC (base 2)
SizeUnit::Kibi => 1024.0,
SizeUnit::Mebi => 1024.0 * 1024.0,
SizeUnit::Gibi => 1024.0 * 1024.0 * 1024.0,
SizeUnit::Tebi => 1024.0 * 1024.0 * 1024.0 * 1024.0,
SizeUnit::Pebi => 1024.0 * 1024.0 * 1024.0 * 1024.0 * 1024.0,
}
}
/// Gets the biggest possible unit that still yields a value greater than zero before the decimal point.
/// 'binary' specifies whether IEC (base 2) units should be used or SI (base 10) ones.
pub fn auto_scale(size: f64, binary: bool) -> SizeUnit {
if binary {
let bits = 64 - (size as u64).leading_zeros();
match bits {
51.. => SizeUnit::Pebi,
41..=50 => SizeUnit::Tebi,
31..=40 => SizeUnit::Gibi,
21..=30 => SizeUnit::Mebi,
11..=20 => SizeUnit::Kibi,
_ => SizeUnit::Byte,
}
} else {
if size >= 1_000_000_000_000_000.0 {
SizeUnit::PByte
} else if size >= 1_000_000_000_000.0 {
SizeUnit::TByte
} else if size >= 1_000_000_000.0 {
SizeUnit::GByte
} else if size >= 1_000_000.0 {
SizeUnit::MByte
} else if size >= 1_000.0 {
SizeUnit::KByte
} else {
SizeUnit::Byte
}
}
}
}
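// A minimal usage sketch for `auto_scale`: it picks the largest unit that
// still leaves at least one digit before the decimal point.
#[test]
fn example_size_unit_auto_scale() {
    assert_eq!(SizeUnit::auto_scale(1536.0, true), SizeUnit::Kibi);
    assert_eq!(SizeUnit::auto_scale(1536.0, false), SizeUnit::KByte);
    assert_eq!(SizeUnit::auto_scale(999.0, false), SizeUnit::Byte);
    assert_eq!(SizeUnit::auto_scale(2.5e12, false), SizeUnit::TByte);
}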
/// Returns the string representation
impl std::fmt::Display for SizeUnit {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
SizeUnit::Byte => write!(f, "B"),
// SI (base 10)
SizeUnit::KByte => write!(f, "KB"),
SizeUnit::MByte => write!(f, "MB"),
SizeUnit::GByte => write!(f, "GB"),
SizeUnit::TByte => write!(f, "TB"),
SizeUnit::PByte => write!(f, "PB"),
// IEC (base 2)
SizeUnit::Kibi => write!(f, "KiB"),
SizeUnit::Mebi => write!(f, "MiB"),
SizeUnit::Gibi => write!(f, "GiB"),
SizeUnit::Tebi => write!(f, "TiB"),
SizeUnit::Pebi => write!(f, "PiB"),
}
}
}
/// Strips a trailing SizeUnit, including any trailing whitespace.
/// Supports both IEC and SI based scales; the B/b byte symbol is optional.
fn strip_unit(v: &str) -> (&str, SizeUnit) {
let v = v.strip_suffix(&['b', 'B'][..]).unwrap_or(v); // byte is implied anyway
let (v, binary) = match v.strip_suffix('i') {
Some(n) => (n, true),
None => (v, false),
};
let mut unit = SizeUnit::Byte;
(v.strip_suffix(|c: char| match c {
'k' | 'K' if !binary => { unit = SizeUnit::KByte; true }
'm' | 'M' if !binary => { unit = SizeUnit::MByte; true }
'g' | 'G' if !binary => { unit = SizeUnit::GByte; true }
't' | 'T' if !binary => { unit = SizeUnit::TByte; true }
'p' | 'P' if !binary => { unit = SizeUnit::PByte; true }
// binary (IEC recommended) variants
'k' | 'K' if binary => { unit = SizeUnit::Kibi; true }
'm' | 'M' if binary => { unit = SizeUnit::Mebi; true }
'g' | 'G' if binary => { unit = SizeUnit::Gibi; true }
't' | 'T' if binary => { unit = SizeUnit::Tebi; true }
'p' | 'P' if binary => { unit = SizeUnit::Pebi; true }
_ => false
}).unwrap_or(v).trim_end(), unit)
}
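// Sketch of the suffix handling above: the byte symbol is optional, 'i'
// selects the IEC (base 2) scale, and surrounding whitespace is trimmed.
#[test]
fn example_strip_unit() {
    assert_eq!(strip_unit("1.5 KiB"), ("1.5", SizeUnit::Kibi));
    assert_eq!(strip_unit("2gb"), ("2", SizeUnit::GByte));
    assert_eq!(strip_unit("14"), ("14", SizeUnit::Byte));
}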
/// Byte size which can be displayed in a human friendly way
#[derive(Debug, Copy, Clone, UpdaterType)]
pub struct HumanByte {
/// The significant value; it does not include any factor of the `unit`
size: f64,
/// The scale/unit of the value
unit: SizeUnit,
}
fn verify_human_byte(s: &str) -> Result<(), Error> {
match s.parse::<HumanByte>() {
Ok(_) => Ok(()),
Err(err) => bail!("byte-size parse error for '{}': {}", s, err),
}
}
impl ApiType for HumanByte {
const API_SCHEMA: Schema = StringSchema::new(
"Byte size with optional unit (B, KB (base 10), MB, GB, ..., KiB (base 2), MiB, Gib, ...).",
)
.format(&ApiStringFormat::VerifyFn(verify_human_byte))
.min_length(1)
.max_length(64)
.schema();
}
impl HumanByte {
/// Create instance with size and unit (size must be positive)
pub fn with_unit(size: f64, unit: SizeUnit) -> Result<Self, Error> {
if size < 0.0 {
bail!("byte size may not be negative");
}
Ok(HumanByte { size, unit })
}
/// Create a new instance with optimal binary unit computed
pub fn new_binary(size: f64) -> Self {
let unit = SizeUnit::auto_scale(size, true);
HumanByte { size: size / unit.factor(), unit }
}
/// Create a new instance with optimal decimal unit computed
pub fn new_decimal(size: f64) -> Self {
let unit = SizeUnit::auto_scale(size, false);
HumanByte { size: size / unit.factor(), unit }
}
/// Returns the size as u64 number of bytes
pub fn as_u64(&self) -> u64 {
self.as_f64() as u64
}
/// Returns the size as f64 number of bytes
pub fn as_f64(&self) -> f64 {
self.size * self.unit.factor()
}
/// Returns a copy with optimal binary unit computed
pub fn auto_scale_binary(self) -> Self {
HumanByte::new_binary(self.as_f64())
}
/// Returns a copy with optimal decimal unit computed
pub fn auto_scale_decimal(self) -> Self {
HumanByte::new_decimal(self.as_f64())
}
}
impl From<u64> for HumanByte {
fn from(v: u64) -> Self {
HumanByte::new_binary(v as f64)
}
}
impl From<usize> for HumanByte {
fn from(v: usize) -> Self {
HumanByte::new_binary(v as f64)
}
}
impl std::fmt::Display for HumanByte {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let precision = f.precision().unwrap_or(3) as f64;
let precision_factor = 1.0 * 10.0_f64.powf(precision);
// this could cause loss of information, rust has sadly no shortest-max-X flt2dec fmt yet
let size = ((self.size * precision_factor).round()) / precision_factor;
write!(f, "{} {}", size, self.unit)
}
}
impl std::str::FromStr for HumanByte {
type Err = Error;
fn from_str(v: &str) -> Result<Self, Error> {
let (v, unit) = strip_unit(v);
HumanByte::with_unit(v.parse()?, unit)
}
}
proxmox::forward_deserialize_to_from_str!(HumanByte);
proxmox::forward_serialize_to_display!(HumanByte);
#[test]
fn test_human_byte_parser() -> Result<(), Error> {
assert!("-10".parse::<HumanByte>().is_err()); // negative size
fn do_test(v: &str, size: f64, unit: SizeUnit, as_str: &str) -> Result<(), Error> {
let h: HumanByte = v.parse()?;
if h.size != size {
bail!("got unexpected size for '{}' ({} != {})", v, h.size, size);
}
if h.unit != unit {
bail!("got unexpected unit for '{}' ({:?} != {:?})", v, h.unit, unit);
}
let new = h.to_string();
if &new != as_str {
bail!("to_string failed for '{}' ({:?} != {:?})", v, new, as_str);
}
Ok(())
}
fn test(v: &str, size: f64, unit: SizeUnit, as_str: &str) -> bool {
match do_test(v, size, unit, as_str) {
Ok(_) => true,
Err(err) => {
eprintln!("{}", err); // makes debugging easier
false
}
}
}
assert!(test("14", 14.0, SizeUnit::Byte, "14 B"));
assert!(test("14.4", 14.4, SizeUnit::Byte, "14.4 B"));
assert!(test("14.45", 14.45, SizeUnit::Byte, "14.45 B"));
assert!(test("14.456", 14.456, SizeUnit::Byte, "14.456 B"));
assert!(test("14.4567", 14.4567, SizeUnit::Byte, "14.457 B"));
let h: HumanByte = "1.2345678".parse()?;
assert_eq!(&format!("{:.0}", h), "1 B");
assert_eq!(&format!("{:.0}", h.as_f64()), "1"); // use as_f64 to get raw bytes without unit
assert_eq!(&format!("{:.1}", h), "1.2 B");
assert_eq!(&format!("{:.2}", h), "1.23 B");
assert_eq!(&format!("{:.3}", h), "1.235 B");
assert_eq!(&format!("{:.4}", h), "1.2346 B");
assert_eq!(&format!("{:.5}", h), "1.23457 B");
assert_eq!(&format!("{:.6}", h), "1.234568 B");
assert_eq!(&format!("{:.7}", h), "1.2345678 B");
assert_eq!(&format!("{:.8}", h), "1.2345678 B");
assert!(test("987654321", 987654321.0, SizeUnit::Byte, "987654321 B"));
assert!(test("1300b", 1300.0, SizeUnit::Byte, "1300 B"));
assert!(test("1300B", 1300.0, SizeUnit::Byte, "1300 B"));
assert!(test("1300 B", 1300.0, SizeUnit::Byte, "1300 B"));
assert!(test("1300 b", 1300.0, SizeUnit::Byte, "1300 B"));
assert!(test("1.5KB", 1.5, SizeUnit::KByte, "1.5 KB"));
assert!(test("1.5kb", 1.5, SizeUnit::KByte, "1.5 KB"));
assert!(test("1.654321MB", 1.654_321, SizeUnit::MByte, "1.654 MB"));
assert!(test("2.0GB", 2.0, SizeUnit::GByte, "2 GB"));
assert!(test("1.4TB", 1.4, SizeUnit::TByte, "1.4 TB"));
assert!(test("1.4tb", 1.4, SizeUnit::TByte, "1.4 TB"));
assert!(test("2KiB", 2.0, SizeUnit::Kibi, "2 KiB"));
assert!(test("2Ki", 2.0, SizeUnit::Kibi, "2 KiB"));
assert!(test("2kib", 2.0, SizeUnit::Kibi, "2 KiB"));
assert!(test("2.3454MiB", 2.3454, SizeUnit::Mebi, "2.345 MiB"));
assert!(test("2.3456MiB", 2.3456, SizeUnit::Mebi, "2.346 MiB"));
assert!(test("4gib", 4.0, SizeUnit::Gibi, "4 GiB"));
Ok(())
}
#[test]
fn test_human_byte_auto_unit_decimal() {
fn convert(b: u64) -> String {
HumanByte::new_decimal(b as f64).to_string()
}
assert_eq!(convert(987), "987 B");
assert_eq!(convert(1022), "1.022 KB");
assert_eq!(convert(9_000), "9 KB");
assert_eq!(convert(1_000), "1 KB");
assert_eq!(convert(1_000_000), "1 MB");
assert_eq!(convert(1_000_000_000), "1 GB");
assert_eq!(convert(1_000_000_000_000), "1 TB");
assert_eq!(convert(1_000_000_000_000_000), "1 PB");
assert_eq!(convert((1 << 30) + 103 * (1 << 20)), "1.182 GB");
assert_eq!(convert((1 << 30) + 128 * (1 << 20)), "1.208 GB");
assert_eq!(convert((2 << 50) + 500 * (1 << 40)), "2.802 PB");
}
#[test]
fn test_human_byte_auto_unit_binary() {
fn convert(b: u64) -> String {
HumanByte::from(b).to_string()
}
assert_eq!(convert(0), "0 B");
assert_eq!(convert(987), "987 B");
assert_eq!(convert(1022), "1022 B");
assert_eq!(convert(9_000), "8.789 KiB");
assert_eq!(convert(10_000_000), "9.537 MiB");
assert_eq!(convert(10_000_000_000), "9.313 GiB");
assert_eq!(convert(10_000_000_000_000), "9.095 TiB");
assert_eq!(convert(1 << 10), "1 KiB");
assert_eq!(convert((1 << 10) * 10), "10 KiB");
assert_eq!(convert(1 << 20), "1 MiB");
assert_eq!(convert(1 << 30), "1 GiB");
assert_eq!(convert(1 << 40), "1 TiB");
assert_eq!(convert(1 << 50), "1 PiB");
assert_eq!(convert((1 << 30) + 103 * (1 << 20)), "1.101 GiB");
assert_eq!(convert((1 << 30) + 128 * (1 << 20)), "1.125 GiB");
assert_eq!(convert((1 << 40) + 128 * (1 << 30)), "1.125 TiB");
assert_eq!(convert((2 << 50) + 512 * (1 << 40)), "2.5 PiB");
}

464
pbs-api-types/src/jobs.rs Normal file
View File

@ -0,0 +1,464 @@
use anyhow::format_err;
use std::str::FromStr;
use regex::Regex;
use serde::{Deserialize, Serialize};
use proxmox_schema::*;
use crate::{
Userid, Authid, RateLimitConfig,
REMOTE_ID_SCHEMA, DRIVE_NAME_SCHEMA, MEDIA_POOL_NAME_SCHEMA,
SINGLE_LINE_COMMENT_SCHEMA, PROXMOX_SAFE_ID_FORMAT, DATASTORE_SCHEMA,
BACKUP_GROUP_SCHEMA, BACKUP_TYPE_SCHEMA,
};
const_regex!{
/// Regex for verification jobs 'DATASTORE:ACTUAL_JOB_ID'
pub VERIFICATION_JOB_WORKER_ID_REGEX = concat!(r"^(", PROXMOX_SAFE_ID_REGEX_STR!(), r"):");
/// Regex for sync jobs 'REMOTE:REMOTE_DATASTORE:LOCAL_DATASTORE:ACTUAL_JOB_ID'
pub SYNC_JOB_WORKER_ID_REGEX = concat!(r"^(", PROXMOX_SAFE_ID_REGEX_STR!(), r"):(", PROXMOX_SAFE_ID_REGEX_STR!(), r"):(", PROXMOX_SAFE_ID_REGEX_STR!(), r"):");
}
pub const JOB_ID_SCHEMA: Schema = StringSchema::new("Job ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const SYNC_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run sync job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(proxmox_time::verify_calendar_event))
.type_text("<calendar-event>")
.schema();
pub const GC_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run garbage collection job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(proxmox_time::verify_calendar_event))
.type_text("<calendar-event>")
.schema();
pub const PRUNE_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run prune job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(proxmox_time::verify_calendar_event))
.type_text("<calendar-event>")
.schema();
pub const VERIFICATION_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run verify job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(proxmox_time::verify_calendar_event))
.type_text("<calendar-event>")
.schema();
pub const REMOVE_VANISHED_BACKUPS_SCHEMA: Schema = BooleanSchema::new(
"Delete vanished backups. This remove the local copy if the remote backup was deleted.")
.default(false)
.schema();
#[api(
properties: {
"next-run": {
description: "Estimated time of the next run (UNIX epoch).",
optional: true,
type: Integer,
},
"last-run-state": {
description: "Result of the last run.",
optional: true,
type: String,
},
"last-run-upid": {
description: "Task UPID of the last run.",
optional: true,
type: String,
},
"last-run-endtime": {
description: "Endtime of the last run.",
optional: true,
type: Integer,
},
}
)]
#[derive(Serialize,Deserialize,Default)]
#[serde(rename_all="kebab-case")]
/// Job Scheduling Status
pub struct JobScheduleStatus {
#[serde(skip_serializing_if="Option::is_none")]
pub next_run: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
pub last_run_state: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub last_run_upid: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub last_run_endtime: Option<i64>,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// When do we send notifications
pub enum Notify {
/// Never send notification
Never,
/// Send notifications for failed and successful jobs
Always,
/// Send notifications for failed jobs only
Error,
}
#[api(
properties: {
gc: {
type: Notify,
optional: true,
},
verify: {
type: Notify,
optional: true,
},
sync: {
type: Notify,
optional: true,
},
},
)]
#[derive(Debug, Serialize, Deserialize)]
/// Datastore notify settings
pub struct DatastoreNotify {
/// Garbage collection settings
pub gc: Option<Notify>,
/// Verify job setting
pub verify: Option<Notify>,
/// Sync job setting
pub sync: Option<Notify>,
}
pub const DATASTORE_NOTIFY_STRING_SCHEMA: Schema = StringSchema::new(
"Datastore notification setting")
.format(&ApiStringFormat::PropertyString(&DatastoreNotify::API_SCHEMA))
.schema();
pub const IGNORE_VERIFIED_BACKUPS_SCHEMA: Schema = BooleanSchema::new(
"Do not verify backups that are already verified if their verification is not outdated.")
.default(true)
.schema();
pub const VERIFICATION_OUTDATED_AFTER_SCHEMA: Schema = IntegerSchema::new(
"Days after that a verification becomes outdated")
.minimum(1)
.schema();
#[api(
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
store: {
schema: DATASTORE_SCHEMA,
},
"ignore-verified": {
optional: true,
schema: IGNORE_VERIFIED_BACKUPS_SCHEMA,
},
"outdated-after": {
optional: true,
schema: VERIFICATION_OUTDATED_AFTER_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
schedule: {
optional: true,
schema: VERIFICATION_SCHEDULE_SCHEMA,
},
}
)]
#[derive(Serialize,Deserialize,Updater)]
#[serde(rename_all="kebab-case")]
/// Verification Job
pub struct VerificationJobConfig {
/// unique ID to address this job
#[updater(skip)]
pub id: String,
/// the datastore ID this verification job affects
pub store: String,
#[serde(skip_serializing_if="Option::is_none")]
/// if not set to false, check the age of the last snapshot verification to filter
/// out recent ones, depending on 'outdated_after' configuration.
pub ignore_verified: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
/// Reverify snapshots after X days, never if 0. Ignored if 'ignore_verified' is false.
pub outdated_after: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// when to schedule this job in calendar event notation
pub schedule: Option<String>,
}
#[api(
properties: {
config: {
type: VerificationJobConfig,
},
status: {
type: JobScheduleStatus,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all="kebab-case")]
/// Status of Verification Job
pub struct VerificationJobStatus {
#[serde(flatten)]
pub config: VerificationJobConfig,
#[serde(flatten)]
pub status: JobScheduleStatus,
}
#[api(
properties: {
store: {
schema: DATASTORE_SCHEMA,
},
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
drive: {
schema: DRIVE_NAME_SCHEMA,
},
"eject-media": {
description: "Eject media upon job completion.",
type: bool,
optional: true,
},
"export-media-set": {
description: "Export media set upon job completion.",
type: bool,
optional: true,
},
"latest-only": {
description: "Backup latest snapshots only.",
type: bool,
optional: true,
},
"notify-user": {
optional: true,
type: Userid,
},
"group-filter": {
schema: GROUP_FILTER_LIST_SCHEMA,
optional: true,
},
}
)]
#[derive(Serialize,Deserialize,Clone,Updater)]
#[serde(rename_all="kebab-case")]
/// Tape Backup Job Setup
pub struct TapeBackupJobSetup {
pub store: String,
pub pool: String,
pub drive: String,
#[serde(skip_serializing_if="Option::is_none")]
pub eject_media: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub export_media_set: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub latest_only: Option<bool>,
/// Send job email notification to this user
#[serde(skip_serializing_if="Option::is_none")]
pub notify_user: Option<Userid>,
#[serde(skip_serializing_if="Option::is_none")]
pub group_filter: Option<Vec<GroupFilter>>,
}
#[api(
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
setup: {
type: TapeBackupJobSetup,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
schedule: {
optional: true,
schema: SYNC_SCHEDULE_SCHEMA,
},
}
)]
#[derive(Serialize,Deserialize,Clone,Updater)]
#[serde(rename_all="kebab-case")]
/// Tape Backup Job
pub struct TapeBackupJobConfig {
#[updater(skip)]
pub id: String,
#[serde(flatten)]
pub setup: TapeBackupJobSetup,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub schedule: Option<String>,
}
#[api(
properties: {
config: {
type: TapeBackupJobConfig,
},
status: {
type: JobScheduleStatus,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all="kebab-case")]
/// Status of Tape Backup Job
pub struct TapeBackupJobStatus {
#[serde(flatten)]
pub config: TapeBackupJobConfig,
#[serde(flatten)]
pub status: JobScheduleStatus,
/// Next tape used (best guess)
#[serde(skip_serializing_if="Option::is_none")]
pub next_media_label: Option<String>,
}
#[derive(Clone, Debug)]
/// Filter for matching `BackupGroup`s, for use with `BackupGroup::filter`.
pub enum GroupFilter {
/// BackupGroup type - either `vm`, `ct`, or `host`.
BackupType(String),
/// Full identifier of BackupGroup, including type
Group(String),
/// A regular expression matched against the full identifier of the BackupGroup
Regex(Regex),
}
impl std::str::FromStr for GroupFilter {
type Err = anyhow::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s.split_once(":") {
Some(("group", value)) => parse_simple_value(value, &BACKUP_GROUP_SCHEMA).map(|_| GroupFilter::Group(value.to_string())),
Some(("type", value)) => parse_simple_value(value, &BACKUP_TYPE_SCHEMA).map(|_| GroupFilter::BackupType(value.to_string())),
Some(("regex", value)) => Ok(GroupFilter::Regex(Regex::new(value)?)),
Some((ty, _value)) => Err(format_err!("expected 'group', 'type' or 'regex' prefix, got '{}'", ty)),
None => Err(format_err!("input doesn't match expected format '<group:GROUP|type:<vm|ct|host>|regex:REGEX>'")),
}.map_err(|err| format_err!("'{}' - {}", s, err))
}
}
// used for serializing below, caution!
impl std::fmt::Display for GroupFilter {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
GroupFilter::BackupType(backup_type) => write!(f, "type:{}", backup_type),
GroupFilter::Group(backup_group) => write!(f, "group:{}", backup_group),
GroupFilter::Regex(regex) => write!(f, "regex:{}", regex.as_str()),
}
}
}
proxmox::forward_deserialize_to_from_str!(GroupFilter);
proxmox::forward_serialize_to_display!(GroupFilter);
fn verify_group_filter(input: &str) -> Result<(), anyhow::Error> {
GroupFilter::from_str(input).map(|_| ())
}
pub const GROUP_FILTER_SCHEMA: Schema = StringSchema::new(
"Group filter based on group identifier ('group:GROUP'), group type ('type:<vm|ct|host>'), or regex ('regex:RE').")
.format(&ApiStringFormat::VerifyFn(verify_group_filter))
.type_text("<type:<vm|ct|host>|group:GROUP|regex:RE>")
.schema();
pub const GROUP_FILTER_LIST_SCHEMA: Schema = ArraySchema::new("List of group filters.", &GROUP_FILTER_SCHEMA).schema();
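// Usage sketch for the filter syntax defined above (assumes 'vm' passes
// BACKUP_TYPE_SCHEMA validation, as elsewhere in this crate):
#[test]
fn example_group_filter_parsing() {
    let filter: GroupFilter = "type:vm".parse().unwrap();
    assert_eq!(filter.to_string(), "type:vm");
    assert!("regex:^vm/1\\d+$".parse::<GroupFilter>().is_ok());
    assert!("bogus:foo".parse::<GroupFilter>().is_err());
}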
#[api(
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
store: {
schema: DATASTORE_SCHEMA,
},
"owner": {
type: Authid,
optional: true,
},
remote: {
schema: REMOTE_ID_SCHEMA,
},
"remote-store": {
schema: DATASTORE_SCHEMA,
},
"remove-vanished": {
schema: REMOVE_VANISHED_BACKUPS_SCHEMA,
optional: true,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
limit: {
type: RateLimitConfig,
},
schedule: {
optional: true,
schema: SYNC_SCHEDULE_SCHEMA,
},
"group-filter": {
schema: GROUP_FILTER_LIST_SCHEMA,
optional: true,
},
}
)]
#[derive(Serialize,Deserialize,Clone,Updater)]
#[serde(rename_all="kebab-case")]
/// Sync Job
pub struct SyncJobConfig {
#[updater(skip)]
pub id: String,
pub store: String,
#[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<Authid>,
pub remote: String,
pub remote_store: String,
#[serde(skip_serializing_if="Option::is_none")]
pub remove_vanished: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub schedule: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub group_filter: Option<Vec<GroupFilter>>,
#[serde(flatten)]
pub limit: RateLimitConfig,
}
#[api(
properties: {
config: {
type: SyncJobConfig,
},
status: {
type: JobScheduleStatus,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all="kebab-case")]
/// Status of Sync Job
pub struct SyncJobStatus {
#[serde(flatten)]
pub config: SyncJobConfig,
#[serde(flatten)]
pub status: JobScheduleStatus,
}

View File

@ -0,0 +1,56 @@
use serde::{Deserialize, Serialize};
use proxmox_schema::api;
use crate::CERT_FINGERPRINT_SHA256_SCHEMA;
#[api(default: "scrypt")]
#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
#[serde(rename_all = "lowercase")]
/// Key derivation function for password protected encryption keys.
pub enum Kdf {
/// Do not encrypt the key.
None,
/// Encrypt the key with a password using SCrypt.
Scrypt,
/// Encrypt the key with a password using PBKDF2.
PBKDF2,
}
impl Default for Kdf {
#[inline]
fn default() -> Self {
Kdf::Scrypt
}
}
#[api(
properties: {
kdf: {
type: Kdf,
},
fingerprint: {
schema: CERT_FINGERPRINT_SHA256_SCHEMA,
optional: true,
},
},
)]
#[derive(Deserialize, Serialize)]
/// Encryption Key Information
pub struct KeyInfo {
/// Path to key (if stored in a file)
#[serde(skip_serializing_if="Option::is_none")]
pub path: Option<String>,
pub kdf: Kdf,
/// Key creation time
pub created: i64,
/// Key modification time
pub modified: i64,
/// Key fingerprint
#[serde(skip_serializing_if="Option::is_none")]
pub fingerprint: Option<String>,
/// Password hint
#[serde(skip_serializing_if="Option::is_none")]
pub hint: Option<String>,
}
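// Quick sketch of the default KDF and the lowercase serde casing above
// (assumes `serde_json` is available, e.g. as a dev-dependency):
#[test]
fn example_kdf_default_and_casing() {
    assert!(matches!(Kdf::default(), Kdf::Scrypt));
    assert_eq!(serde_json::to_value(Kdf::PBKDF2).unwrap(), "pbkdf2");
    assert_eq!(serde_json::to_value(Kdf::None).unwrap(), "none");
}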

468
pbs-api-types/src/lib.rs Normal file
View File

@ -0,0 +1,468 @@
//! Basic API types used by most of the PBS code.
use serde::{Deserialize, Serialize};
use anyhow::bail;
use proxmox_schema::{
api, const_regex, ApiStringFormat, ApiType, ArraySchema, Schema, StringSchema, ReturnType,
};
use proxmox::{IPRE, IPRE_BRACKET, IPV4OCTET, IPV4RE, IPV6H16, IPV6LS32, IPV6RE};
use proxmox_time::parse_daily_duration;
#[rustfmt::skip]
#[macro_export]
macro_rules! PROXMOX_SAFE_ID_REGEX_STR { () => { r"(?:[A-Za-z0-9_][A-Za-z0-9._\-]*)" }; }
#[rustfmt::skip]
#[macro_export]
macro_rules! BACKUP_ID_RE { () => (r"[A-Za-z0-9_][A-Za-z0-9._\-]*") }
#[rustfmt::skip]
#[macro_export]
macro_rules! BACKUP_TYPE_RE { () => (r"(?:host|vm|ct)") }
#[rustfmt::skip]
#[macro_export]
macro_rules! BACKUP_TIME_RE { () => (r"[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z") }
#[rustfmt::skip]
#[macro_export]
macro_rules! SNAPSHOT_PATH_REGEX_STR {
() => (
concat!(r"(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), ")/(", BACKUP_TIME_RE!(), r")")
);
}
mod acl;
pub use acl::*;
mod datastore;
pub use datastore::*;
mod human_byte;
pub use human_byte::HumanByte;
mod jobs;
pub use jobs::*;
mod key_derivation;
pub use key_derivation::{Kdf, KeyInfo};
mod network;
pub use network::*;
#[macro_use]
mod userid;
pub use userid::Authid;
pub use userid::Userid;
pub use userid::{Realm, RealmRef};
pub use userid::{Tokenname, TokennameRef};
pub use userid::{Username, UsernameRef};
pub use userid::{PROXMOX_GROUP_ID_SCHEMA, PROXMOX_TOKEN_ID_SCHEMA, PROXMOX_TOKEN_NAME_SCHEMA};
#[macro_use]
mod user;
pub use user::*;
pub use proxmox_schema::upid::*;
mod crypto;
pub use crypto::{CryptMode, Fingerprint, bytes_as_fingerprint};
pub mod file_restore;
mod openid;
pub use openid::*;
mod remote;
pub use remote::*;
mod tape;
pub use tape::*;
mod traffic_control;
pub use traffic_control::*;
mod zfs;
pub use zfs::*;
#[rustfmt::skip]
#[macro_use]
mod local_macros {
macro_rules! DNS_LABEL { () => (r"(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)") }
macro_rules! DNS_NAME { () => (concat!(r"(?:(?:", DNS_LABEL!() , r"\.)*", DNS_LABEL!(), ")")) }
macro_rules! CIDR_V4_REGEX_STR { () => (concat!(r"(?:", IPV4RE!(), r"/\d{1,2})$")) }
macro_rules! CIDR_V6_REGEX_STR { () => (concat!(r"(?:", IPV6RE!(), r"/\d{1,3})$")) }
macro_rules! DNS_ALIAS_LABEL { () => (r"(?:[a-zA-Z0-9_](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)") }
macro_rules! DNS_ALIAS_NAME {
() => (concat!(r"(?:(?:", DNS_ALIAS_LABEL!() , r"\.)*", DNS_ALIAS_LABEL!(), ")"))
}
}
const_regex! {
pub IP_V4_REGEX = concat!(r"^", IPV4RE!(), r"$");
pub IP_V6_REGEX = concat!(r"^", IPV6RE!(), r"$");
pub IP_REGEX = concat!(r"^", IPRE!(), r"$");
pub CIDR_V4_REGEX = concat!(r"^", CIDR_V4_REGEX_STR!(), r"$");
pub CIDR_V6_REGEX = concat!(r"^", CIDR_V6_REGEX_STR!(), r"$");
pub CIDR_REGEX = concat!(r"^(?:", CIDR_V4_REGEX_STR!(), "|", CIDR_V6_REGEX_STR!(), r")$");
pub HOSTNAME_REGEX = r"^(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)$";
pub DNS_NAME_REGEX = concat!(r"^", DNS_NAME!(), r"$");
pub DNS_ALIAS_REGEX = concat!(r"^", DNS_ALIAS_NAME!(), r"$");
pub DNS_NAME_OR_IP_REGEX = concat!(r"^(?:", DNS_NAME!(), "|", IPRE!(), r")$");
pub SHA256_HEX_REGEX = r"^[a-f0-9]{64}$"; // fixme: define in common_regex ?
pub PASSWORD_REGEX = r"^[[:^cntrl:]]*$"; // everything but control characters
pub UUID_REGEX = r"^[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}$";
pub SYSTEMD_DATETIME_REGEX = r"^\d{4}-\d{2}-\d{2}( \d{2}:\d{2}(:\d{2})?)?$"; // fixme: define in common_regex ?
pub FINGERPRINT_SHA256_REGEX = r"^(?:[0-9a-fA-F][0-9a-fA-F])(?::[0-9a-fA-F][0-9a-fA-F]){31}$";
/// Regex for safe identifiers.
///
/// This
/// [article](https://dwheeler.com/essays/fixing-unix-linux-filenames.html)
/// contains further information on why it is reasonable to restrict
/// names this way. This is not only useful for filenames, but for
/// any identifier command line tools work with.
pub PROXMOX_SAFE_ID_REGEX = concat!(r"^", PROXMOX_SAFE_ID_REGEX_STR!(), r"$");
pub SINGLE_LINE_COMMENT_REGEX = r"^[[:^cntrl:]]*$";
pub BACKUP_REPO_URL_REGEX = concat!(
r"^^(?:(?:(",
USER_ID_REGEX_STR!(), "|", APITOKEN_ID_REGEX_STR!(),
")@)?(",
DNS_NAME!(), "|", IPRE_BRACKET!(),
"):)?(?:([0-9]{1,5}):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$"
);
pub BLOCKDEVICE_NAME_REGEX = r"^(:?(:?h|s|x?v)d[a-z]+)|(:?nvme\d+n\d+)$";
pub SUBSCRIPTION_KEY_REGEX = concat!(r"^pbs(?:[cbsp])-[0-9a-f]{10}$");
}
pub const IP_V4_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&IP_V4_REGEX);
pub const IP_V6_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&IP_V6_REGEX);
pub const IP_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&IP_REGEX);
pub const CIDR_V4_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&CIDR_V4_REGEX);
pub const CIDR_V6_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&CIDR_V6_REGEX);
pub const CIDR_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&CIDR_REGEX);
pub const PVE_CONFIG_DIGEST_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&SHA256_HEX_REGEX);
pub const PASSWORD_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&PASSWORD_REGEX);
pub const UUID_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&UUID_REGEX);
pub const BLOCKDEVICE_NAME_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&BLOCKDEVICE_NAME_REGEX);
pub const SUBSCRIPTION_KEY_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&SUBSCRIPTION_KEY_REGEX);
pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&SYSTEMD_DATETIME_REGEX);
pub const HOSTNAME_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&HOSTNAME_REGEX);
pub const DNS_ALIAS_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DNS_ALIAS_REGEX);
pub const DAILY_DURATION_FORMAT: ApiStringFormat =
ApiStringFormat::VerifyFn(|s| parse_daily_duration(s).map(drop));
pub const SEARCH_DOMAIN_SCHEMA: Schema =
StringSchema::new("Search domain for host-name lookup.").schema();
pub const FIRST_DNS_SERVER_SCHEMA: Schema =
StringSchema::new("First name server IP address.")
.format(&IP_FORMAT)
.schema();
pub const SECOND_DNS_SERVER_SCHEMA: Schema =
StringSchema::new("Second name server IP address.")
.format(&IP_FORMAT)
.schema();
pub const THIRD_DNS_SERVER_SCHEMA: Schema =
StringSchema::new("Third name server IP address.")
.format(&IP_FORMAT)
.schema();
pub const HOSTNAME_SCHEMA: Schema = StringSchema::new("Hostname (as defined in RFC1123).")
.format(&HOSTNAME_FORMAT)
.schema();
pub const DNS_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DNS_NAME_REGEX);
pub const DNS_NAME_OR_IP_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DNS_NAME_OR_IP_REGEX);
pub const DNS_NAME_OR_IP_SCHEMA: Schema = StringSchema::new("DNS name or IP address.")
.format(&DNS_NAME_OR_IP_FORMAT)
.schema();
pub const NODE_SCHEMA: Schema = StringSchema::new("Node name (or 'localhost')")
.format(&ApiStringFormat::VerifyFn(|node| {
if node == "localhost" || node == proxmox::tools::nodename() {
Ok(())
} else {
bail!("no such node '{}'", node);
}
}))
.schema();
pub const TIME_ZONE_SCHEMA: Schema = StringSchema::new(
"Time zone. The file '/usr/share/zoneinfo/zone.tab' contains the list of valid names.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
pub const BLOCKDEVICE_NAME_SCHEMA: Schema = StringSchema::new("Block device name (/sys/block/<name>).")
.format(&BLOCKDEVICE_NAME_FORMAT)
.min_length(3)
.max_length(64)
.schema();
pub const DISK_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Disk name list.", &BLOCKDEVICE_NAME_SCHEMA)
.schema();
pub const DISK_LIST_SCHEMA: Schema = StringSchema::new(
"A list of disk names, comma separated.")
.format(&ApiStringFormat::PropertyString(&DISK_ARRAY_SCHEMA))
.schema();
pub const PASSWORD_SCHEMA: Schema = StringSchema::new("Password.")
.format(&PASSWORD_FORMAT)
.min_length(1)
.max_length(1024)
.schema();
pub const PBS_PASSWORD_SCHEMA: Schema = StringSchema::new("User Password.")
.format(&PASSWORD_FORMAT)
.min_length(5)
.max_length(64)
.schema();
pub const REALM_ID_SCHEMA: Schema = StringSchema::new("Realm name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(2)
.max_length(32)
.schema();
pub const FINGERPRINT_SHA256_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&FINGERPRINT_SHA256_REGEX);
pub const CERT_FINGERPRINT_SHA256_SCHEMA: Schema =
StringSchema::new("X509 certificate fingerprint (sha256).")
.format(&FINGERPRINT_SHA256_FORMAT)
.schema();
pub const PROXMOX_SAFE_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_SAFE_ID_REGEX);
pub const SINGLE_LINE_COMMENT_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SINGLE_LINE_COMMENT_REGEX);
pub const SINGLE_LINE_COMMENT_SCHEMA: Schema = StringSchema::new("Comment (single line).")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.schema();
pub const SUBSCRIPTION_KEY_SCHEMA: Schema = StringSchema::new("Proxmox Backup Server subscription key.")
.format(&SUBSCRIPTION_KEY_FORMAT)
.min_length(15)
.max_length(16)
.schema();
pub const SERVICE_ID_SCHEMA: Schema = StringSchema::new("Service ID.")
.max_length(256)
.schema();
pub const PROXMOX_CONFIG_DIGEST_SCHEMA: Schema = StringSchema::new(
"Prevent changes if current configuration file has different \
SHA256 digest. This can be used to prevent concurrent \
modifications.",
)
.format(&PVE_CONFIG_DIGEST_FORMAT)
.schema();
/// API schema format definition for repository URLs
pub const BACKUP_REPO_URL: ApiStringFormat = ApiStringFormat::Pattern(&BACKUP_REPO_URL_REGEX);
// Complex type definitions
#[api()]
#[derive(Default, Serialize, Deserialize)]
/// Storage space usage information.
pub struct StorageStatus {
/// Total space (bytes).
pub total: u64,
/// Used space (bytes).
pub used: u64,
/// Available space (bytes).
pub avail: u64,
}
pub const PASSWORD_HINT_SCHEMA: Schema = StringSchema::new("Password hint.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(1)
.max_length(64)
.schema();
#[api]
#[derive(Deserialize, Serialize)]
/// RSA public key information
pub struct RsaPubKeyInfo {
/// Path to key (if stored in a file)
#[serde(skip_serializing_if="Option::is_none")]
pub path: Option<String>,
/// RSA exponent
pub exponent: String,
/// Hex-encoded RSA modulus
pub modulus: String,
/// Key (modulus) length in bits
pub length: usize,
}
impl std::convert::TryFrom<openssl::rsa::Rsa<openssl::pkey::Public>> for RsaPubKeyInfo {
type Error = anyhow::Error;
fn try_from(value: openssl::rsa::Rsa<openssl::pkey::Public>) -> Result<Self, Self::Error> {
let modulus = value.n().to_hex_str()?.to_string();
let exponent = value.e().to_dec_str()?.to_string();
let length = value.size() as usize * 8;
Ok(Self {
path: None,
exponent,
modulus,
length,
})
}
}
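// Illustrative only, assuming the `openssl` crate already used by the impl
// above: derive the public-key info from a freshly generated RSA key.
#[test]
fn example_rsa_pub_key_info() -> Result<(), anyhow::Error> {
    use std::convert::TryFrom;
    let key = openssl::rsa::Rsa::generate(2048)?;
    let public = openssl::rsa::Rsa::public_key_from_pem(&key.public_key_to_pem()?)?;
    let info = RsaPubKeyInfo::try_from(public)?;
    assert_eq!(info.length, 2048);
    assert_eq!(info.exponent, "65537");
    Ok(())
}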
#[api()]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "PascalCase")]
/// Describes a package for which an update is available.
pub struct APTUpdateInfo {
/// Package name
pub package: String,
/// Package title
pub title: String,
/// Package architecture
pub arch: String,
/// Human readable package description
pub description: String,
/// New version to be updated to
pub version: String,
/// Old version currently installed
pub old_version: String,
/// Package origin
pub origin: String,
/// Package priority in human-readable form
pub priority: String,
/// Package section
pub section: String,
/// URL under which the package's changelog can be retrieved
pub change_log_url: String,
/// Custom extra field for additional package information
#[serde(skip_serializing_if="Option::is_none")]
pub extra_info: Option<String>,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Node Power command type.
pub enum NodePowerCommand {
/// Restart the server
Reboot,
/// Shutdown the server
Shutdown,
}
#[api()]
#[derive(Eq, PartialEq, Debug, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum TaskStateType {
/// Ok
OK,
/// Warning
Warning,
/// Error
Error,
/// Unknown
Unknown,
}
#[api(
properties: {
upid: { schema: UPID::API_SCHEMA },
},
)]
#[derive(Serialize, Deserialize)]
/// Task properties.
pub struct TaskListItem {
pub upid: String,
/// The node name where the task is running on.
pub node: String,
/// The Unix PID
pub pid: i64,
/// The task start time (Epoch)
pub pstart: u64,
/// The task start time (Epoch)
pub starttime: i64,
/// Worker type (arbitrary ASCII string)
pub worker_type: String,
/// Worker ID (arbitrary ASCII string)
pub worker_id: Option<String>,
/// The authenticated entity who started the task
pub user: String,
/// The task end time (Epoch)
#[serde(skip_serializing_if="Option::is_none")]
pub endtime: Option<i64>,
/// Task end status
#[serde(skip_serializing_if="Option::is_none")]
pub status: Option<String>,
}
pub const NODE_TASKS_LIST_TASKS_RETURN_TYPE: ReturnType = ReturnType {
optional: false,
schema: &ArraySchema::new(
"A list of tasks.",
&TaskListItem::API_SCHEMA,
).schema(),
};
#[api()]
#[derive(Copy, Clone, Serialize, Deserialize)]
#[serde(rename_all = "UPPERCASE")]
/// RRD consolidation mode
pub enum RRDMode {
/// Maximum
Max,
/// Average
Average,
}
#[api()]
#[derive(Copy, Clone, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// RRD time frame
pub enum RRDTimeFrame {
/// Hour
Hour,
/// Day
Day,
/// Week
Week,
/// Month
Month,
/// Year
Year,
/// Decade (10 years)
Decade,
}

View File

@ -0,0 +1,308 @@
use serde::{Deserialize, Serialize};
use proxmox_schema::*;
use crate::{
PROXMOX_SAFE_ID_REGEX,
IP_V4_FORMAT, IP_V6_FORMAT, IP_FORMAT,
CIDR_V4_FORMAT, CIDR_V6_FORMAT, CIDR_FORMAT,
};
pub const NETWORK_INTERFACE_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_SAFE_ID_REGEX);
pub const IP_V4_SCHEMA: Schema =
StringSchema::new("IPv4 address.")
.format(&IP_V4_FORMAT)
.max_length(15)
.schema();
pub const IP_V6_SCHEMA: Schema =
StringSchema::new("IPv6 address.")
.format(&IP_V6_FORMAT)
.max_length(39)
.schema();
pub const IP_SCHEMA: Schema =
StringSchema::new("IP (IPv4 or IPv6) address.")
.format(&IP_FORMAT)
.max_length(39)
.schema();
pub const CIDR_V4_SCHEMA: Schema =
StringSchema::new("IPv4 address with netmask (CIDR notation).")
.format(&CIDR_V4_FORMAT)
.max_length(18)
.schema();
pub const CIDR_V6_SCHEMA: Schema =
StringSchema::new("IPv6 address with netmask (CIDR notation).")
.format(&CIDR_V6_FORMAT)
.max_length(43)
.schema();
pub const CIDR_SCHEMA: Schema =
StringSchema::new("IP address (IPv4 or IPv6) with netmask (CIDR notation).")
.format(&CIDR_FORMAT)
.max_length(43)
.schema();
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Interface configuration method
pub enum NetworkConfigMethod {
/// Configuration is done manually using other tools
Manual,
/// Define interfaces with statically allocated addresses.
Static,
/// Obtain an address via DHCP
DHCP,
/// Define the loopback interface.
Loopback,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
#[allow(non_camel_case_types)]
#[repr(u8)]
/// Linux Bond Mode
pub enum LinuxBondMode {
/// Round-robin policy
balance_rr = 0,
/// Active-backup policy
active_backup = 1,
/// XOR policy
balance_xor = 2,
/// Broadcast policy
broadcast = 3,
/// IEEE 802.3ad Dynamic link aggregation
#[serde(rename = "802.3ad")]
ieee802_3ad = 4,
/// Adaptive transmit load balancing
balance_tlb = 5,
/// Adaptive load balancing
balance_alb = 6,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
#[allow(non_camel_case_types)]
#[repr(u8)]
/// Bond Transmit Hash Policy for LACP (802.3ad)
pub enum BondXmitHashPolicy {
/// Layer 2
layer2 = 0,
/// Layer 2+3
#[serde(rename = "layer2+3")]
layer2_3 = 1,
/// Layer 3+4
#[serde(rename = "layer3+4")]
layer3_4 = 2,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Network interface type
pub enum NetworkInterfaceType {
/// Loopback
Loopback,
/// Physical Ethernet device
Eth,
/// Linux Bridge
Bridge,
/// Linux Bond
Bond,
/// Linux VLAN (eth.10)
Vlan,
/// Interface Alias (eth:1)
Alias,
/// Unknown interface type
Unknown,
}
pub const NETWORK_INTERFACE_NAME_SCHEMA: Schema = StringSchema::new("Network interface name.")
.format(&NETWORK_INTERFACE_FORMAT)
.min_length(1)
.max_length(libc::IFNAMSIZ-1)
.schema();
pub const NETWORK_INTERFACE_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Network interface list.", &NETWORK_INTERFACE_NAME_SCHEMA)
.schema();
pub const NETWORK_INTERFACE_LIST_SCHEMA: Schema = StringSchema::new(
"A list of network devices, comma separated.")
.format(&ApiStringFormat::PropertyString(&NETWORK_INTERFACE_ARRAY_SCHEMA))
.schema();
#[api(
properties: {
name: {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
},
"type": {
type: NetworkInterfaceType,
},
method: {
type: NetworkConfigMethod,
optional: true,
},
method6: {
type: NetworkConfigMethod,
optional: true,
},
cidr: {
schema: CIDR_V4_SCHEMA,
optional: true,
},
cidr6: {
schema: CIDR_V6_SCHEMA,
optional: true,
},
gateway: {
schema: IP_V4_SCHEMA,
optional: true,
},
gateway6: {
schema: IP_V6_SCHEMA,
optional: true,
},
options: {
description: "Option list (inet)",
type: Array,
items: {
description: "Optional attribute line.",
type: String,
},
},
options6: {
description: "Option list (inet6)",
type: Array,
items: {
description: "Optional attribute line.",
type: String,
},
},
comments: {
description: "Comments (inet, may span multiple lines)",
type: String,
optional: true,
},
comments6: {
description: "Comments (inet6, may span multiple lines)",
type: String,
optional: true,
},
bridge_ports: {
schema: NETWORK_INTERFACE_ARRAY_SCHEMA,
optional: true,
},
slaves: {
schema: NETWORK_INTERFACE_ARRAY_SCHEMA,
optional: true,
},
bond_mode: {
type: LinuxBondMode,
optional: true,
},
"bond-primary": {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
optional: true,
},
bond_xmit_hash_policy: {
type: BondXmitHashPolicy,
optional: true,
},
}
)]
#[derive(Debug, Serialize, Deserialize)]
/// Network Interface configuration
pub struct Interface {
/// Autostart interface
#[serde(rename = "autostart")]
pub autostart: bool,
/// Interface is active (UP)
pub active: bool,
/// Interface name
pub name: String,
/// Interface type
#[serde(rename = "type")]
pub interface_type: NetworkInterfaceType,
#[serde(skip_serializing_if="Option::is_none")]
pub method: Option<NetworkConfigMethod>,
#[serde(skip_serializing_if="Option::is_none")]
pub method6: Option<NetworkConfigMethod>,
#[serde(skip_serializing_if="Option::is_none")]
/// IPv4 address with netmask
pub cidr: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// IPv4 gateway
pub gateway: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// IPv6 address with netmask
pub cidr6: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// IPv6 gateway
pub gateway6: Option<String>,
#[serde(skip_serializing_if="Vec::is_empty")]
pub options: Vec<String>,
#[serde(skip_serializing_if="Vec::is_empty")]
pub options6: Vec<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub comments: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub comments6: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// Maximum Transmission Unit
pub mtu: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub bridge_ports: Option<Vec<String>>,
/// Enable bridge vlan support.
#[serde(skip_serializing_if="Option::is_none")]
pub bridge_vlan_aware: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub slaves: Option<Vec<String>>,
#[serde(skip_serializing_if="Option::is_none")]
pub bond_mode: Option<LinuxBondMode>,
#[serde(skip_serializing_if="Option::is_none")]
#[serde(rename = "bond-primary")]
pub bond_primary: Option<String>,
pub bond_xmit_hash_policy: Option<BondXmitHashPolicy>,
}
impl Interface {
pub fn new(name: String) -> Self {
Self {
name,
interface_type: NetworkInterfaceType::Unknown,
autostart: false,
active: false,
method: None,
method6: None,
cidr: None,
gateway: None,
cidr6: None,
gateway6: None,
options: Vec::new(),
options6: Vec::new(),
comments: None,
comments6: None,
mtu: None,
bridge_ports: None,
bridge_vlan_aware: None,
slaves: None,
bond_mode: None,
bond_primary: None,
bond_xmit_hash_policy: None,
}
}
}
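// Construction sketch: the helper above fills neutral defaults, the caller
// then sets whatever the interface actually needs.
#[test]
fn example_interface_new() {
    let mut iface = Interface::new("eth0".to_string());
    iface.interface_type = NetworkInterfaceType::Eth;
    iface.method = Some(NetworkConfigMethod::DHCP);
    iface.cidr = Some("192.0.2.10/24".to_string());
    assert!(iface.options.is_empty());
    assert!(!iface.autostart);
}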

121
pbs-api-types/src/openid.rs Normal file
View File

@ -0,0 +1,121 @@
use serde::{Deserialize, Serialize};
use proxmox_schema::{
api, ApiStringFormat, ArraySchema, Schema, StringSchema, Updater,
};
use super::{
PROXMOX_SAFE_ID_REGEX, PROXMOX_SAFE_ID_FORMAT, REALM_ID_SCHEMA,
SINGLE_LINE_COMMENT_SCHEMA,
};
pub const OPENID_SCOPE_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_SAFE_ID_REGEX);
pub const OPENID_SCOPE_SCHEMA: Schema = StringSchema::new("OpenID Scope Name.")
.format(&OPENID_SCOPE_FORMAT)
.schema();
pub const OPENID_SCOPE_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Array of OpenId Scopes.", &OPENID_SCOPE_SCHEMA).schema();
pub const OPENID_SCOPE_LIST_FORMAT: ApiStringFormat =
ApiStringFormat::PropertyString(&OPENID_SCOPE_ARRAY_SCHEMA);
pub const OPENID_DEFAILT_SCOPE_LIST: &'static str = "email profile";
pub const OPENID_SCOPE_LIST_SCHEMA: Schema = StringSchema::new("OpenID Scope List")
.format(&OPENID_SCOPE_LIST_FORMAT)
.default(OPENID_DEFAILT_SCOPE_LIST)
.schema();
pub const OPENID_ACR_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_SAFE_ID_REGEX);
pub const OPENID_ACR_SCHEMA: Schema = StringSchema::new("OpenID Authentication Context Class Reference.")
.format(&OPENID_ACR_FORMAT)
.schema();
pub const OPENID_ACR_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Array of OpenId ACRs.", &OPENID_ACR_SCHEMA).schema();
pub const OPENID_ACR_LIST_FORMAT: ApiStringFormat =
ApiStringFormat::PropertyString(&OPENID_ACR_ARRAY_SCHEMA);
pub const OPENID_ACR_LIST_SCHEMA: Schema = StringSchema::new("OpenID ACR List")
.format(&OPENID_ACR_LIST_FORMAT)
.schema();
pub const OPENID_USERNAME_CLAIM_SCHEMA: Schema = StringSchema::new(
"Use the value of this attribute/claim as unique user name. It \
is up to the identity provider to guarantee the uniqueness. The \
OpenID specification only guarantees that Subject ('sub') is \
unique. Also make sure that the user is not allowed to change that \
attribute by himself!")
.max_length(64)
.min_length(1)
.format(&PROXMOX_SAFE_ID_FORMAT)
.schema();
#[api(
properties: {
realm: {
schema: REALM_ID_SCHEMA,
},
"client-key": {
optional: true,
},
"scopes": {
schema: OPENID_SCOPE_LIST_SCHEMA,
optional: true,
},
"acr-values": {
schema: OPENID_ACR_LIST_SCHEMA,
optional: true,
},
prompt: {
description: "OpenID Prompt",
type: String,
format: &PROXMOX_SAFE_ID_FORMAT,
optional: true,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
autocreate: {
optional: true,
default: false,
},
"username-claim": {
schema: OPENID_USERNAME_CLAIM_SCHEMA,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize, Updater)]
#[serde(rename_all="kebab-case")]
/// OpenID configuration properties.
pub struct OpenIdRealmConfig {
#[updater(skip)]
pub realm: String,
/// OpenID Issuer Url
pub issuer_url: String,
/// OpenID Client ID
pub client_id: String,
#[serde(skip_serializing_if="Option::is_none")]
pub scopes: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub acr_values: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub prompt: Option<String>,
/// OpenID Client Key
#[serde(skip_serializing_if="Option::is_none")]
pub client_key: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
/// Automatically create users if they do not exist.
#[serde(skip_serializing_if="Option::is_none")]
pub autocreate: Option<bool>,
#[updater(skip)]
#[serde(skip_serializing_if="Option::is_none")]
pub username_claim: Option<String>,
}

View File

@ -0,0 +1,86 @@
use serde::{Deserialize, Serialize};
use super::*;
use proxmox_schema::*;
pub const REMOTE_PASSWORD_SCHEMA: Schema = StringSchema::new("Password or auth token for remote host.")
.format(&PASSWORD_FORMAT)
.min_length(1)
.max_length(1024)
.schema();
pub const REMOTE_PASSWORD_BASE64_SCHEMA: Schema = StringSchema::new("Password or auth token for remote host (stored as base64 string).")
.format(&PASSWORD_FORMAT)
.min_length(1)
.max_length(1024)
.schema();
pub const REMOTE_ID_SCHEMA: Schema = StringSchema::new("Remote ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
#[api(
properties: {
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
host: {
schema: DNS_NAME_OR_IP_SCHEMA,
},
port: {
optional: true,
description: "The (optional) port",
type: u16,
},
"auth-id": {
type: Authid,
},
fingerprint: {
optional: true,
schema: CERT_FINGERPRINT_SHA256_SCHEMA,
},
},
)]
#[derive(Serialize,Deserialize,Updater)]
#[serde(rename_all = "kebab-case")]
/// Remote configuration properties.
pub struct RemoteConfig {
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
pub host: String,
#[serde(skip_serializing_if="Option::is_none")]
pub port: Option<u16>,
pub auth_id: Authid,
#[serde(skip_serializing_if="Option::is_none")]
pub fingerprint: Option<String>,
}
#[api(
properties: {
name: {
schema: REMOTE_ID_SCHEMA,
},
config: {
type: RemoteConfig,
},
password: {
schema: REMOTE_PASSWORD_BASE64_SCHEMA,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Remote properties.
pub struct Remote {
pub name: String,
// Note: The stored password is base64 encoded
#[serde(skip_serializing_if="String::is_empty")]
#[serde(with = "proxmox::tools::serde::string_as_base64")]
pub password: String,
#[serde(flatten)]
pub config: RemoteConfig,
}

View File

@ -2,21 +2,11 @@
use serde::{Deserialize, Serialize};
use proxmox::api::{
api,
schema::{
Schema,
ApiStringFormat,
ArraySchema,
IntegerSchema,
StringSchema,
},
use proxmox_schema::{
api, ApiStringFormat, ArraySchema, IntegerSchema, Schema, StringSchema, Updater,
};
use crate::api2::types::{
PROXMOX_SAFE_ID_FORMAT,
OptionalDeviceIdentification,
};
use crate::{OptionalDeviceIdentification, PROXMOX_SAFE_ID_FORMAT};
pub const CHANGER_NAME_SCHEMA: Schema = StringSchema::new("Tape Changer Identifier.")
.format(&PROXMOX_SAFE_ID_FORMAT)
@ -24,9 +14,8 @@ pub const CHANGER_NAME_SCHEMA: Schema = StringSchema::new("Tape Changer Identifi
.max_length(32)
.schema();
pub const SCSI_CHANGER_PATH_SCHEMA: Schema = StringSchema::new(
"Path to Linux generic SCSI device (e.g. '/dev/sg4')")
.schema();
pub const SCSI_CHANGER_PATH_SCHEMA: Schema =
StringSchema::new("Path to Linux generic SCSI device (e.g. '/dev/sg4')").schema();
pub const MEDIA_LABEL_SCHEMA: Schema = StringSchema::new("Media Label/Barcode.")
.format(&PROXMOX_SAFE_ID_FORMAT)
@ -35,16 +24,18 @@ pub const MEDIA_LABEL_SCHEMA: Schema = StringSchema::new("Media Label/Barcode.")
.schema();
pub const SLOT_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Slot list.", &IntegerSchema::new("Slot number")
.minimum(1)
.schema())
.schema();
"Slot list.",
&IntegerSchema::new("Slot number").minimum(1).schema(),
)
.schema();
pub const EXPORT_SLOT_LIST_SCHEMA: Schema = StringSchema::new("\
pub const EXPORT_SLOT_LIST_SCHEMA: Schema = StringSchema::new(
"\
A list of slot numbers, comma separated. Those slots are reserved for
Import/Export, i.e. any media in those slots are considered to be
'offline'.
")
",
)
.format(&ApiStringFormat::PropertyString(&SLOT_ARRAY_SCHEMA))
.schema();
@ -62,13 +53,14 @@ Import/Export, i.e. any media in those slots are considered to be
},
},
)]
#[derive(Serialize,Deserialize)]
#[derive(Serialize, Deserialize, Updater)]
#[serde(rename_all = "kebab-case")]
/// SCSI tape changer
pub struct ScsiTapeChanger {
#[updater(skip)]
pub name: String,
pub path: String,
#[serde(skip_serializing_if="Option::is_none")]
#[serde(skip_serializing_if = "Option::is_none")]
pub export_slots: Option<String>,
}
@ -82,7 +74,7 @@ pub struct ScsiTapeChanger {
},
},
)]
#[derive(Serialize,Deserialize)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Changer config with optional device identification attributes
pub struct ChangerListEntry {
@ -93,7 +85,7 @@ pub struct ChangerListEntry {
}
#[api()]
#[derive(Serialize,Deserialize)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Mtx Entry Kind
pub enum MtxEntryKind {
@ -116,7 +108,7 @@ pub enum MtxEntryKind {
},
},
)]
#[derive(Serialize,Deserialize)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Mtx Status Entry
pub struct MtxStatusEntry {
@ -124,12 +116,12 @@ pub struct MtxStatusEntry {
/// The ID of the slot or drive
pub entry_id: u64,
/// The media label (volume tag) if the slot/drive is full
#[serde(skip_serializing_if="Option::is_none")]
#[serde(skip_serializing_if = "Option::is_none")]
pub label_text: Option<String>,
/// The slot the drive was loaded from
#[serde(skip_serializing_if="Option::is_none")]
#[serde(skip_serializing_if = "Option::is_none")]
pub loaded_slot: Option<u64>,
/// The current state of the drive
#[serde(skip_serializing_if="Option::is_none")]
#[serde(skip_serializing_if = "Option::is_none")]
pub state: Option<String>,
}

View File

@ -1,6 +1,6 @@
use ::serde::{Deserialize, Serialize};
use proxmox::api::api;
use proxmox_schema::api;
#[api()]
#[derive(Serialize,Deserialize)]

View File

@ -4,12 +4,9 @@ use std::convert::TryFrom;
use anyhow::{bail, Error};
use serde::{Deserialize, Serialize};
use proxmox::api::{
api,
schema::{Schema, IntegerSchema, StringSchema},
};
use proxmox_schema::{api, Schema, IntegerSchema, StringSchema, Updater};
use crate::api2::types::{
use crate::{
PROXMOX_SAFE_ID_FORMAT,
CHANGER_NAME_SCHEMA,
OptionalDeviceIdentification,
@ -28,7 +25,7 @@ pub const LTO_DRIVE_PATH_SCHEMA: Schema = StringSchema::new(
pub const CHANGER_DRIVENUM_SCHEMA: Schema = IntegerSchema::new(
"Associated changer drive number (requires option changer)")
.minimum(0)
.maximum(8)
.maximum(255)
.default(0)
.schema();
@ -69,10 +66,11 @@ pub struct VirtualTapeDrive {
},
}
)]
#[derive(Serialize,Deserialize)]
#[derive(Serialize,Deserialize,Updater)]
#[serde(rename_all = "kebab-case")]
/// Lto SCSI tape driver
pub struct LtoTapeDrive {
#[updater(skip)]
pub name: String,
pub path: String,
#[serde(skip_serializing_if="Option::is_none")]

View File

@ -1,17 +1,24 @@
use ::serde::{Deserialize, Serialize};
use proxmox::{
api::api,
tools::Uuid,
};
use proxmox_schema::*;
use proxmox_uuid::Uuid;
use crate::api2::types::{
MEDIA_UUID_SCHEMA,
MEDIA_SET_UUID_SCHEMA,
use crate::{
UUID_FORMAT,
MediaStatus,
MediaLocation,
};
pub const MEDIA_SET_UUID_SCHEMA: Schema =
StringSchema::new("MediaSet Uuid (We use the all-zero Uuid to reseve an empty media for a specific pool).")
.format(&UUID_FORMAT)
.schema();
pub const MEDIA_UUID_SCHEMA: Schema =
StringSchema::new("Media Uuid.")
.format(&UUID_FORMAT)
.schema();
#[api(
properties: {
"media-set-uuid": {

View File

@ -1,18 +1,8 @@
use anyhow::{bail, Error};
use proxmox::api::{
schema::{
Schema,
StringSchema,
ApiStringFormat,
parse_simple_value,
},
};
use proxmox_schema::{parse_simple_value, ApiStringFormat, Schema, StringSchema};
use crate::api2::types::{
PROXMOX_SAFE_ID_FORMAT,
CHANGER_NAME_SCHEMA,
};
use crate::{CHANGER_NAME_SCHEMA, PROXMOX_SAFE_ID_FORMAT};
pub const VAULT_NAME_SCHEMA: Schema = StringSchema::new("Vault name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
@ -35,28 +25,27 @@ pub enum MediaLocation {
proxmox::forward_deserialize_to_from_str!(MediaLocation);
proxmox::forward_serialize_to_display!(MediaLocation);
impl MediaLocation {
pub const API_SCHEMA: Schema = StringSchema::new(
"Media location (e.g. 'offline', 'online-<changer_name>', 'vault-<vault_name>')")
.format(&ApiStringFormat::VerifyFn(|text| {
let location: MediaLocation = text.parse()?;
match location {
MediaLocation::Online(ref changer) => {
parse_simple_value(changer, &CHANGER_NAME_SCHEMA)?;
}
MediaLocation::Vault(ref vault) => {
parse_simple_value(vault, &VAULT_NAME_SCHEMA)?;
}
MediaLocation::Offline => { /* OK */}
impl proxmox_schema::ApiType for MediaLocation {
const API_SCHEMA: Schema = StringSchema::new(
"Media location (e.g. 'offline', 'online-<changer_name>', 'vault-<vault_name>')",
)
.format(&ApiStringFormat::VerifyFn(|text| {
let location: MediaLocation = text.parse()?;
match location {
MediaLocation::Online(ref changer) => {
parse_simple_value(changer, &CHANGER_NAME_SCHEMA)?;
}
Ok(())
}))
.schema();
MediaLocation::Vault(ref vault) => {
parse_simple_value(vault, &VAULT_NAME_SCHEMA)?;
}
MediaLocation::Offline => { /* OK */ }
}
Ok(())
}))
.schema();
}
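For reference, the string forms the verifier above accepts, as a sketch that relies on MediaLocation's FromStr impl (the one text.parse() dispatches to); the changer and vault names are made up:

fn media_location_examples() -> Result<(), anyhow::Error> {
    let a: MediaLocation = "offline".parse()?;
    let b: MediaLocation = "online-sl3".parse()?;
    let c: MediaLocation = "vault-v1".parse()?;
    assert!(matches!(a, MediaLocation::Offline));
    assert!(matches!(b, MediaLocation::Online(_)));
    assert!(matches!(c, MediaLocation::Vault(_)));
    Ok(())
}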
impl std::fmt::Display for MediaLocation {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
MediaLocation::Offline => {

View File

@ -4,28 +4,20 @@
//! so we cannot use them directly for the API. Instead, we represent
//! them as String.
use anyhow::Error;
use std::str::FromStr;
use anyhow::Error;
use serde::{Deserialize, Serialize};
use proxmox::api::{
api,
schema::{Schema, StringSchema, ApiStringFormat},
};
use proxmox_schema::{api, Schema, StringSchema, ApiStringFormat, Updater};
use proxmox_time::{parse_calendar_event, parse_time_span, CalendarEvent, TimeSpan};
use crate::{
tools::systemd::time::{
CalendarEvent,
TimeSpan,
parse_time_span,
parse_calendar_event,
},
api2::types::{
PROXMOX_SAFE_ID_FORMAT,
SINGLE_LINE_COMMENT_FORMAT,
SINGLE_LINE_COMMENT_SCHEMA,
TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA,
},
PROXMOX_SAFE_ID_FORMAT,
SINGLE_LINE_COMMENT_FORMAT,
SINGLE_LINE_COMMENT_SCHEMA,
TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA,
};
pub const MEDIA_POOL_NAME_SCHEMA: Schema = StringSchema::new("Media pool name.")
@ -138,10 +130,11 @@ impl std::str::FromStr for RetentionPolicy {
},
},
)]
#[derive(Serialize,Deserialize)]
#[derive(Serialize,Deserialize,Updater)]
/// Media pool configuration
pub struct MediaPoolConfig {
/// The pool name
#[updater(skip)]
pub name: String,
/// Media Set allocation policy
#[serde(skip_serializing_if="Option::is_none")]

View File

@ -1,6 +1,6 @@
use ::serde::{Deserialize, Serialize};
use serde::{Deserialize, Serialize};
use proxmox::api::api;
use proxmox_schema::api;
#[api()]
/// Media status

View File

@ -0,0 +1,91 @@
//! Types for tape backup API
mod device;
pub use device::*;
mod changer;
pub use changer::*;
mod drive;
pub use drive::*;
mod media_pool;
pub use media_pool::*;
mod media_status;
pub use media_status::*;
mod media_location;
pub use media_location::*;
mod media;
pub use media::*;
use serde::{Deserialize, Serialize};
use proxmox_schema::{api, const_regex, Schema, StringSchema, ApiStringFormat};
use proxmox_uuid::Uuid;
use crate::{
FINGERPRINT_SHA256_FORMAT, BACKUP_ID_SCHEMA, BACKUP_TYPE_SCHEMA,
};
const_regex!{
pub TAPE_RESTORE_SNAPSHOT_REGEX = concat!(r"^", PROXMOX_SAFE_ID_REGEX_STR!(), r":", SNAPSHOT_PATH_REGEX_STR!(), r"$");
}
pub const TAPE_RESTORE_SNAPSHOT_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&TAPE_RESTORE_SNAPSHOT_REGEX);
pub const TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA: Schema = StringSchema::new(
"Tape encryption key fingerprint (sha256)."
)
.format(&FINGERPRINT_SHA256_FORMAT)
.schema();
pub const TAPE_RESTORE_SNAPSHOT_SCHEMA: Schema = StringSchema::new(
"A snapshot in the format: 'store:type/id/time")
.format(&TAPE_RESTORE_SNAPSHOT_FORMAT)
.type_text("store:type/id/time")
.schema();
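A hypothetical value matching TAPE_RESTORE_SNAPSHOT_FORMAT, purely for illustration: a datastore name, then a snapshot path of the usual <type>/<id>/<RFC3339 time> form.

const EXAMPLE_RESTORE_SNAPSHOT: &str = "store1:vm/100/2021-11-21T12:00:00Z";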
#[api(
properties: {
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
optional: true,
},
"label-text": {
schema: MEDIA_LABEL_SCHEMA,
optional: true,
},
"media": {
schema: MEDIA_UUID_SCHEMA,
optional: true,
},
"media-set": {
schema: MEDIA_SET_UUID_SCHEMA,
optional: true,
},
"backup-type": {
schema: BACKUP_TYPE_SCHEMA,
optional: true,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
optional: true,
},
},
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all="kebab-case")]
/// Content list filter parameters
pub struct MediaContentListFilter {
pub pool: Option<String>,
pub label_text: Option<String>,
pub media: Option<Uuid>,
pub media_set: Option<Uuid>,
pub backup_type: Option<String>,
pub backup_id: Option<String>,
}

View File

@ -0,0 +1,122 @@
use serde::{Deserialize, Serialize};
use proxmox_schema::{api, Schema, IntegerSchema, StringSchema, Updater};
use crate::{
HumanByte, CIDR_SCHEMA, DAILY_DURATION_FORMAT,
PROXMOX_SAFE_ID_FORMAT, SINGLE_LINE_COMMENT_SCHEMA,
};
pub const TRAFFIC_CONTROL_TIMEFRAME_SCHEMA: Schema = StringSchema::new(
"Timeframe to specify when the rule is actice.")
.format(&DAILY_DURATION_FORMAT)
.schema();
pub const TRAFFIC_CONTROL_ID_SCHEMA: Schema = StringSchema::new("Rule ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const TRAFFIC_CONTROL_RATE_SCHEMA: Schema = IntegerSchema::new(
"Rate limit (for Token bucket filter) in bytes/second.")
.minimum(100_000)
.schema();
pub const TRAFFIC_CONTROL_BURST_SCHEMA: Schema = IntegerSchema::new(
"Size of the token bucket (for Token bucket filter) in bytes.")
.minimum(1000)
.schema();
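Back-of-the-envelope token-bucket arithmetic for the two schemas above, as an illustrative sketch rather than the limiter PBS actually runs: a full bucket may drain at once and then refills at the configured rate.

fn max_bytes_in_window(rate_bytes_per_s: u64, burst_bytes: u64, seconds: u64) -> u64 {
    // upper bound on traffic admitted during a window of `seconds`
    burst_bytes + rate_bytes_per_s * seconds
}

With the schema minimums (rate 100_000 bytes/s, burst 1000 bytes) this caps a one-minute window at 6_001_000 bytes.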
#[api(
properties: {
"rate-in": {
type: HumanByte,
optional: true,
},
"burst-in": {
type: HumanByte,
optional: true,
},
"rate-out": {
type: HumanByte,
optional: true,
},
"burst-out": {
type: HumanByte,
optional: true,
},
},
)]
#[derive(Serialize,Deserialize,Default,Clone,Updater)]
#[serde(rename_all = "kebab-case")]
/// Rate Limit Configuration
pub struct RateLimitConfig {
#[serde(skip_serializing_if="Option::is_none")]
pub rate_in: Option<HumanByte>,
#[serde(skip_serializing_if="Option::is_none")]
pub burst_in: Option<HumanByte>,
#[serde(skip_serializing_if="Option::is_none")]
pub rate_out: Option<HumanByte>,
#[serde(skip_serializing_if="Option::is_none")]
pub burst_out: Option<HumanByte>,
}
impl RateLimitConfig {
pub fn with_same_inout(rate: Option<HumanByte>, burst: Option<HumanByte>) -> Self {
Self {
rate_in: rate,
burst_in: burst,
rate_out: rate,
burst_out: burst,
}
}
}
#[api(
properties: {
name: {
schema: TRAFFIC_CONTROL_ID_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
limit: {
type: RateLimitConfig,
},
network: {
type: Array,
items: {
schema: CIDR_SCHEMA,
},
},
timeframe: {
type: Array,
items: {
schema: TRAFFIC_CONTROL_TIMEFRAME_SCHEMA,
},
optional: true,
},
},
)]
#[derive(Serialize,Deserialize, Updater)]
#[serde(rename_all = "kebab-case")]
/// Traffic control rule
pub struct TrafficControlRule {
#[updater(skip)]
pub name: String,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
    /// Rule applies to source IPs within these networks
pub network: Vec<String>,
#[serde(flatten)]
pub limit: RateLimitConfig,
// fixme: expose this?
    // /// Bandwidth is shared across all connections
// #[serde(skip_serializing_if="Option::is_none")]
// pub shared: Option<bool>,
/// Enable the rule at specific times
#[serde(skip_serializing_if="Option::is_none")]
pub timeframe: Option<Vec<String>>,
}
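A serialization sketch, assuming serde_json; it mainly shows that #[serde(flatten)] lifts the rate/burst keys to the top level of the rule and that unset options are omitted. The rule values are invented.

fn traffic_rule_json() -> Result<(), serde_json::Error> {
    let rule = TrafficControlRule {
        name: "office".to_string(),
        comment: None,
        network: vec!["192.168.2.0/24".to_string()],
        limit: RateLimitConfig::default(),
        timeframe: Some(vec!["mon..fri 8:00-18:00".to_string()]), // hypothetical timeframe string
    };
    // roughly: {"name":"office","network":["192.168.2.0/24"],"timeframe":["mon..fri 8:00-18:00"]}
    // rate-in/rate-out/burst-* only appear once set, thanks to skip_serializing_if
    println!("{}", serde_json::to_string(&rule)?);
    Ok(())
}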

207  pbs-api-types/src/user.rs  Normal file
View File

@ -0,0 +1,207 @@
use serde::{Deserialize, Serialize};
use proxmox_schema::{
api, BooleanSchema, IntegerSchema, Schema, StringSchema, Updater,
};
use super::{SINGLE_LINE_COMMENT_FORMAT, SINGLE_LINE_COMMENT_SCHEMA};
use super::userid::{Authid, Userid, PROXMOX_TOKEN_ID_SCHEMA};
pub const ENABLE_USER_SCHEMA: Schema = BooleanSchema::new(
"Enable the account (default). You can set this to '0' to disable the account.")
.default(true)
.schema();
pub const EXPIRE_USER_SCHEMA: Schema = IntegerSchema::new(
"Account expiration date (seconds since epoch). '0' means no expiration date.")
.default(0)
.minimum(0)
.schema();
pub const FIRST_NAME_SCHEMA: Schema = StringSchema::new("First name.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
pub const LAST_NAME_SCHEMA: Schema = StringSchema::new("Last name.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
pub const EMAIL_SCHEMA: Schema = StringSchema::new("E-Mail Address.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
#[api(
properties: {
userid: {
type: Userid,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
optional: true,
schema: ENABLE_USER_SCHEMA,
},
expire: {
optional: true,
schema: EXPIRE_USER_SCHEMA,
},
firstname: {
optional: true,
schema: FIRST_NAME_SCHEMA,
},
lastname: {
schema: LAST_NAME_SCHEMA,
optional: true,
},
email: {
schema: EMAIL_SCHEMA,
optional: true,
},
tokens: {
type: Array,
optional: true,
description: "List of user's API tokens.",
items: {
type: ApiToken
},
},
}
)]
#[derive(Serialize,Deserialize)]
/// User properties with added list of ApiTokens
pub struct UserWithTokens {
pub userid: Userid,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub enable: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub expire: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
pub firstname: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub lastname: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub email: Option<String>,
#[serde(skip_serializing_if="Vec::is_empty", default)]
pub tokens: Vec<ApiToken>,
}
#[api(
properties: {
tokenid: {
schema: PROXMOX_TOKEN_ID_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
optional: true,
schema: ENABLE_USER_SCHEMA,
},
expire: {
optional: true,
schema: EXPIRE_USER_SCHEMA,
},
}
)]
#[derive(Serialize,Deserialize)]
/// ApiToken properties.
pub struct ApiToken {
pub tokenid: Authid,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub enable: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub expire: Option<i64>,
}
impl ApiToken {
pub fn is_active(&self) -> bool {
if !self.enable.unwrap_or(true) {
return false;
}
if let Some(expire) = self.expire {
let now = proxmox_time::epoch_i64();
if expire > 0 && expire <= now {
return false;
}
}
true
}
}
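A quick sketch of is_active in practice, assuming Authid parses from the usual "user@realm!token" notation; the token id is made up.

fn expired_token_is_inactive() -> Result<(), anyhow::Error> {
    let token = ApiToken {
        tokenid: "root@pam!monitoring".parse()?, // hypothetical token id
        comment: None,
        enable: Some(true),
        expire: Some(1), // one second after the epoch, long in the past
    };
    assert!(!token.is_active());
    Ok(())
}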
#[api(
properties: {
userid: {
type: Userid,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
optional: true,
schema: ENABLE_USER_SCHEMA,
},
expire: {
optional: true,
schema: EXPIRE_USER_SCHEMA,
},
firstname: {
optional: true,
schema: FIRST_NAME_SCHEMA,
},
lastname: {
schema: LAST_NAME_SCHEMA,
optional: true,
},
email: {
schema: EMAIL_SCHEMA,
optional: true,
},
}
)]
#[derive(Serialize,Deserialize,Updater)]
/// User properties.
pub struct User {
#[updater(skip)]
pub userid: Userid,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub enable: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub expire: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
pub firstname: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub lastname: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub email: Option<String>,
}
impl User {
pub fn is_active(&self) -> bool {
if !self.enable.unwrap_or(true) {
return false;
}
if let Some(expire) = self.expire {
let now = proxmox_time::epoch_i64();
if expire > 0 && expire <= now {
return false;
}
}
true
}
}

View File

@ -29,19 +29,24 @@ use anyhow::{bail, format_err, Error};
use lazy_static::lazy_static;
use serde::{Deserialize, Serialize};
use proxmox::api::api;
use proxmox::api::schema::{ApiStringFormat, Schema, StringSchema};
use proxmox::const_regex;
use proxmox_schema::{
api, const_regex, ApiStringFormat, ApiType, Schema, StringSchema, UpdaterType,
};
// we only allow a limited set of characters
// colon is not allowed, because we store usernames in
// colon separated lists!
// slash is not allowed because it is used as pve API delimiter
// also see "man useradd"
#[macro_export]
macro_rules! USER_NAME_REGEX_STR { () => (r"(?:[^\s:/[:cntrl:]]+)") }
#[macro_export]
macro_rules! GROUP_NAME_REGEX_STR { () => (USER_NAME_REGEX_STR!()) }
#[macro_export]
macro_rules! TOKEN_NAME_REGEX_STR { () => (PROXMOX_SAFE_ID_REGEX_STR!()) }
#[macro_export]
macro_rules! USER_ID_REGEX_STR { () => (concat!(USER_NAME_REGEX_STR!(), r"@", PROXMOX_SAFE_ID_REGEX_STR!())) }
#[macro_export]
macro_rules! APITOKEN_ID_REGEX_STR { () => (concat!(USER_ID_REGEX_STR!() , r"!", TOKEN_NAME_REGEX_STR!())) }
const_regex! {
@ -93,7 +98,6 @@ pub const PROXMOX_AUTH_REALM_STRING_SCHEMA: StringSchema =
.max_length(32);
pub const PROXMOX_AUTH_REALM_SCHEMA: Schema = PROXMOX_AUTH_REALM_STRING_SCHEMA.schema();
#[api(
type: String,
format: &PROXMOX_USER_NAME_FORMAT,
@ -393,19 +397,21 @@ impl<'a> TryFrom<&'a str> for &'a TokennameRef {
}
/// A complete user id consisting of a user name and a realm
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
#[derive(Clone, Debug, PartialEq, Eq, Hash, UpdaterType)]
pub struct Userid {
data: String,
name_len: usize,
}
impl Userid {
pub const API_SCHEMA: Schema = StringSchema::new("User ID")
impl ApiType for Userid {
const API_SCHEMA: Schema = StringSchema::new("User ID")
.format(&PROXMOX_USER_ID_FORMAT)
.min_length(3)
.max_length(64)
.schema();
}
impl Userid {
const fn new(data: String, name_len: usize) -> Self {
Self { data, name_len }
}
@ -522,19 +528,21 @@ impl PartialEq<String> for Userid {
}
/// A complete authentication id consisting of a user id and an optional token name.
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
#[derive(Clone, Debug, Eq, PartialEq, Hash, UpdaterType)]
pub struct Authid {
user: Userid,
tokenname: Option<Tokenname>
}
impl Authid {
pub const API_SCHEMA: Schema = StringSchema::new("Authentication ID")
impl ApiType for Authid {
const API_SCHEMA: Schema = StringSchema::new("Authentication ID")
.format(&PROXMOX_AUTH_ID_FORMAT)
.min_length(3)
.max_length(64)
.schema();
}
impl Authid {
const fn new(user: Userid, tokenname: Option<Tokenname>) -> Self {
Self { user, tokenname }
}
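With API_SCHEMA moved onto the ApiType trait, callers now bring the trait into scope instead of relying on an inherent const. A sketch, assuming parse_simple_value and anyhow:

use proxmox_schema::{parse_simple_value, ApiType};

fn check_userid(text: &str) -> Result<(), anyhow::Error> {
    parse_simple_value(text, &Userid::API_SCHEMA)?;
    Ok(())
}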

79  pbs-api-types/src/zfs.rs  Normal file
View File

@ -0,0 +1,79 @@
use serde::{Deserialize, Serialize};
use proxmox_schema::*;
const_regex! {
pub ZPOOL_NAME_REGEX = r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$";
}
pub const ZFS_ASHIFT_SCHEMA: Schema = IntegerSchema::new(
"Pool sector size exponent.")
.minimum(9)
.maximum(16)
.default(12)
.schema();
pub const ZPOOL_NAME_SCHEMA: Schema = StringSchema::new("ZFS Pool Name")
.format(&ApiStringFormat::Pattern(&ZPOOL_NAME_REGEX))
.schema();
#[api(default: "On")]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// The ZFS compression algorithm to use.
pub enum ZfsCompressionType {
/// Gnu Zip
Gzip,
/// LZ4
Lz4,
/// LZJB
Lzjb,
/// ZLE
Zle,
/// ZStd
ZStd,
/// Enable compression using the default algorithm.
On,
/// Disable compression.
Off,
}
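A sketch of the resulting wire format, assuming serde_json: rename_all = "lowercase" maps the variant names onto the strings zfs itself expects.

fn compression_wire_format() -> Result<(), serde_json::Error> {
    assert_eq!(serde_json::to_string(&ZfsCompressionType::ZStd)?, "\"zstd\"");
    assert_eq!(serde_json::to_string(&ZfsCompressionType::On)?, "\"on\"");
    Ok(())
}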
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// The ZFS RAID level to use.
pub enum ZfsRaidLevel {
/// Single Disk
Single,
/// Mirror
Mirror,
/// Raid10
Raid10,
/// RaidZ
RaidZ,
/// RaidZ2
RaidZ2,
/// RaidZ3
RaidZ3,
}
#[api()]
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// zpool list item
pub struct ZpoolListItem {
/// zpool name
pub name: String,
/// Health
pub health: String,
/// Total size
pub size: u64,
/// Used size
pub alloc: u64,
/// Free space
pub free: u64,
    /// ZFS fragmentation level
pub frag: u64,
/// ZFS deduplication ratio
pub dedup: f64,
}

9  pbs-buildcfg/Cargo.toml  Normal file
View File

@ -0,0 +1,9 @@
[package]
name = "pbs-buildcfg"
version = "2.1.2"
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2018"
description = "macros used for pbs related paths such as configdir and rundir"
build = "build.rs"
[dependencies]

24  pbs-buildcfg/build.rs  Normal file
View File

@ -0,0 +1,24 @@
// build.rs
use std::env;
use std::process::Command;
fn main() {
let repoid = match env::var("REPOID") {
Ok(repoid) => repoid,
Err(_) => {
match Command::new("git")
.args(&["rev-parse", "HEAD"])
.output()
{
Ok(output) => {
String::from_utf8(output.stdout).unwrap()
}
Err(err) => {
panic!("git rev-parse failed: {}", err);
}
}
}
};
println!("cargo:rustc-env=REPOID={}", repoid);
}

View File

@ -1,12 +1,30 @@
//! Exports configuration data from the build system
pub const PROXMOX_PKG_VERSION: &str =
concat!(
env!("CARGO_PKG_VERSION_MAJOR"),
".",
env!("CARGO_PKG_VERSION_MINOR"),
);
pub const PROXMOX_PKG_RELEASE: &str = env!("CARGO_PKG_VERSION_PATCH");
pub const PROXMOX_PKG_REPOID: &str = env!("REPOID");
/// The configured configuration directory
pub const CONFIGDIR: &str = "/etc/proxmox-backup";
pub const JS_DIR: &str = "/usr/share/javascript/proxmox-backup";
/// Unix system user used by proxmox-backup-proxy
pub const BACKUP_USER_NAME: &str = "backup";
/// Unix system group used by proxmox-backup-proxy
pub const BACKUP_GROUP_NAME: &str = "backup";
#[macro_export]
macro_rules! PROXMOX_BACKUP_RUN_DIR_M { () => ("/run/proxmox-backup") }
#[macro_export]
macro_rules! PROXMOX_BACKUP_STATE_DIR_M { () => ("/var/lib/proxmox-backup") }
#[macro_export]
macro_rules! PROXMOX_BACKUP_LOG_DIR_M { () => ("/var/log/proxmox-backup") }
@ -21,6 +39,9 @@ macro_rules! PROXMOX_BACKUP_FILE_RESTORE_BIN_DIR_M {
/// namespaced directory for in-memory (tmpfs) run state
pub const PROXMOX_BACKUP_RUN_DIR: &str = PROXMOX_BACKUP_RUN_DIR_M!();
/// namespaced directory for persistent state
pub const PROXMOX_BACKUP_STATE_DIR: &str = PROXMOX_BACKUP_STATE_DIR_M!();
/// namespaced directory for persistent logging
pub const PROXMOX_BACKUP_LOG_DIR: &str = PROXMOX_BACKUP_LOG_DIR_M!();
@ -56,7 +77,7 @@ pub const PROXMOX_BACKUP_KERNEL_FN: &str =
/// This is a simple way to get the full path for configuration files.
/// #### Example:
/// ```
/// # #[macro_use] extern crate proxmox_backup;
/// use pbs_buildcfg::configdir;
/// let cert_path = configdir!("/proxy.pfx");
/// ```
#[macro_export]
@ -70,6 +91,6 @@ macro_rules! configdir {
#[macro_export]
macro_rules! rundir {
($subdir:expr) => {
concat!(PROXMOX_BACKUP_RUN_DIR_M!(), $subdir)
concat!($crate::PROXMOX_BACKUP_RUN_DIR_M!(), $subdir)
};
}
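Usage sketch for the now $crate-qualified macro, which makes it callable from other crates; the file name is made up.

fn run_paths() {
    let pid_file: &str = pbs_buildcfg::rundir!("/example.pid");
    assert_eq!(pid_file, "/run/proxmox-backup/example.pid");
}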

Some files were not shown because too many files have changed in this diff.