Compare commits


165 Commits

Author SHA1 Message Date
027eb2bbe6 bump version to 1.0.3-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-24 08:56:18 +01:00
6982a54701 gui: add snapshot/file fingerprint tooltip
display the short key ID, like the backend's Display trait.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
035c40e638 list_snapshots: return manifest fingerprint
for display in clients.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
79c535955d refactor BackupInfo -> SnapshotListItem helper
before adding more fields to the tuple, let's just create the struct
inside the match arms to improve readability.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
8b7f8d3f3d expose previous backup time in backup env
and use this information to add more detail to the client backup log
and guide the download-manifest decision.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:44:55 +01:00
866c859a1e bump version to 1.0.2-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-24 08:33:20 +01:00
23e4e90540 verification: fix message in notification mail
the errors Vec can contain failed groups as well (e.g., if a group has
no or an invalid owner).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:33:20 +01:00
a4fa3fc241 verification job: log failed dirs
otherwise users have to manually search through a potentially very long task
log to find the entries that differ. This is the same summary
printed at the end of a manual verify task.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 08:33:20 +01:00
81d10c3b37 cleanup: remove dead code 2020-11-24 08:03:00 +01:00
f1e2904150 paperkey: refactor common code
from the formatting functions to the main function, and pass along the key data
lines instead of the full string.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:57:21 +01:00
23f9503a31 client: check fingerprint after downloading manifest
this is stricter than the check that happened on manifest load, as it
also fails if the manifest is signed but we don't have a key available.

add some additional output at the start of a backup to indicate whether
a previous manifest is available to base the backup on.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:55:12 +01:00
a0ef68b93c manifest: check fingerprint when loading with key
otherwise loading will run into a signature mismatch, which is
technically true, but not the complete picture in this case.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:49:51 +01:00
6b127e6ea0 fix #3139: add key fingerprint to manifest
if the manifest is signed / the contained archives/blobs are encrypted.
stored in the 'unprotected' area, since there is already a strong binding
between key and manifest via the signature, and this avoids breaking
backwards compatibility for a simple usability improvement.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-24 07:45:11 +01:00
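
As a rough illustration of the scheme (the 'key-fingerprint' field name and the helper below are assumptions for this sketch, not necessarily the exact proxmox-backup code, and it needs the serde_json crate): the fingerprint sits in the manifest's unprotected JSON area and can be compared against the local key after download.

    // Illustrative sketch only - field and function names are assumptions.
    use serde_json::{json, Value};

    fn check_fingerprint(manifest: &Value, local_key_fp: &str) -> Result<(), String> {
        match manifest["unprotected"]["key-fingerprint"].as_str() {
            Some(fp) if fp == local_key_fp => Ok(()),
            Some(fp) => Err(format!(
                "manifest was created with key {}, but local key is {}",
                fp, local_key_fp
            )),
            None => Ok(()), // older manifest without a fingerprint: nothing to check
        }
    }

    fn main() {
        let manifest = json!({
            "unprotected": { "key-fingerprint": "23:f9:50:3a:31:8b:7f:8d" }
        });
        assert!(check_fingerprint(&manifest, "23:f9:50:3a:31:8b:7f:8d").is_ok());
    }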
5e17dbf2bb cli: cleanup 'key show' - use format_and_print_result_full
We now expose all key derivation functions on the cli, so users can
choose between scrypt and pbkdf2.
2020-11-24 07:32:34 +01:00
dfb04575ad client: add 'key show' command
for (pretty-)printing a keyfile.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:15:29 +01:00
6f2626ae19 client: print key fingerprint and master key
for operations where it makes sense.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:11:26 +01:00
37e60ddcde key: add fingerprint to key config
and set/generate it on
- key creation
- key passphrase change
- key decryption if not already set
- key encryption with master key

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:03:46 +01:00
05cdc05347 crypt config: add fingerprint mechanism
by computing the ID digest of a hash of a static string.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-23 13:03:16 +01:00
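
The message is terse; one scheme matching the description, sketched here with made-up names (the constant and the encrypt closure are placeholders; the openssl crate is already in the project's dependency tree):

    // Hedged sketch of such a fingerprint scheme - identifiers are illustrative.
    use openssl::sha::sha256;

    const FINGERPRINT_INPUT: &[u8] = b"static fingerprint input"; // hypothetical constant

    // `encrypt` stands in for the CryptConfig's encryption primitive: equal keys
    // encrypt the fixed input identically, so they get identical fingerprints,
    // without the digest revealing any key material.
    fn compute_fingerprint(encrypt: impl Fn(&[u8]) -> Vec<u8>) -> [u8; 32] {
        sha256(&encrypt(FINGERPRINT_INPUT))
    }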
6364115b4b OnlineHelpInfo.js problems
Anybody know why I always get the following diff:
2020-11-23 12:57:41 +01:00
2133cd9103 update debian/control 2020-11-23 12:13:58 +01:00
01f84fcce1 ui: datastore content: use our keep field for group pruning
sets some defaults and provides the clear trigger, so less code and
slightly nicer UX.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-21 19:52:03 +01:00
08b3823025 bump dependency on proxmox to 0.7.1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-20 17:38:34 +01:00
968a0ab261 fix systemd-encoded upid strings in http client
since we systemd-encode parts of the upid string, and those can contain
characters that are invalid in URLs (e.g. '\'), we have to percent-encode
those

add a 'percent_encode_component' helper, so that we can maybe change
the AsciiSet for all uses at the same time

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-19 11:01:19 +01:00
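
A plausible shape for that helper, using the percent-encoding crate that is already in the dependency tree (the exact contents of the AsciiSet below are an assumption):

    use percent_encoding::{utf8_percent_encode, AsciiSet, CONTROLS};

    // One shared set, so the characters considered unsafe can later be
    // changed for all call sites at once. The exact set is assumed here.
    const COMPONENT_SET: &AsciiSet = &CONTROLS.add(b'\\').add(b'%').add(b'/').add(b'?').add(b'#');

    pub fn percent_encode_component(comp: &str) -> String {
        utf8_percent_encode(comp, COMPONENT_SET).to_string()
    }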
21b552848a prune sim: make numberfields more similar to PBS's
by creating a new class that adds a clear trigger and also uses the
clear-trigger image. Code was taken from the one in PBS's prune window,
but we have default values here, so a bit of adapting was necessary. For
example, we don't want to reset to the original value (which might have
been one of the defaults) when clearing, but always to 'null'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-19 09:47:51 +01:00
fd19256470 gc: treat .bad files like regular chunks
Simplify the phase 2 code by treating .bad files just like regular
chunks, with the exception of stat logging.

To facilitate this, we need to touch .bad files in phase 1. We only do this
under the condition that 1) the original chunk is missing (as before),
and 2) the original chunk is still referenced somewhere (since the code
lives in the error handler for a failed chunk touch, it only gets called
for chunks we expect to be there, i.e. ones that are referenced).

Left untouched, they will then be cleaned up after 24 hours (or after the last
longer-running task finishes).

Reason 2) is also a fix for .bad files not being cleaned up at all if
the original is no longer referenced anywhere (e.g. a user deleting all
snapshots after seeing some corrupt chunks appear).

cond_touch_path is introduced to touch arbitrary paths in the chunk
store with the same logic as touching chunks.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-18 14:04:49 +01:00
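
A minimal sketch of what cond_touch_path could look like on Linux, via the libc crate (assumed implementation; the real code may differ): update the path's timestamps to "now", treating a missing file as a non-error so the caller can decide what that means.

    use std::ffi::CString;
    use std::io;
    use std::os::unix::ffi::OsStrExt;
    use std::path::Path;

    /// Touch `path` like a chunk: returns Ok(false) if it does not exist.
    fn cond_touch_path(path: &Path) -> io::Result<bool> {
        let cpath = CString::new(path.as_os_str().as_bytes()).unwrap();
        let now = libc::timespec { tv_sec: 0, tv_nsec: libc::UTIME_NOW };
        let times = [now, now]; // atime and mtime
        let rc = unsafe { libc::utimensat(libc::AT_FDCWD, cpath.as_ptr(), times.as_ptr(), 0) };
        if rc == 0 {
            Ok(true)
        } else {
            let err = io::Error::last_os_error();
            match err.raw_os_error() {
                Some(libc::ENOENT) => Ok(false), // already gone - fine for GC
                _ => Err(err),
            }
        }
    }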
1ed022576c api: include store in invalid owner errors
since a group might exist in plenty of stores

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:11:24 +01:00
f6aa7b38bf drop now unused BackupInfo::list_backups
all global backup listing now happens via BackupGroup

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:11:21 +01:00
fdfcb74d67 api: filter snapshot counts
unprivileged users should only see the counts related to their part of
the datastore.

while we're at it, switch to a list groups, filter groups, count
snapshots approach (like list_snapshots) to speed up calls to this
endpoint when many unprivileged users share a datastore.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:05:50 +01:00
98afc7b152 api: make expensive parts of datastore status opt-in
used in the PBS GUI, but also for PVE usage queries which don't need all
the extra expensive information.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 11:05:47 +01:00
0d08fceeb9 improve group/snapshot listing
by listing groups first, then filtering, then listing group snapshots.

this cuts down the number of openat/getdirents calls for users that just
have a partial view of the datastore.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-18 10:37:04 +01:00
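
In outline, the reordering looks like this (a generic sketch, not the actual API):

    // Generic sketch: filter groups by privilege before touching their
    // snapshot directories, so hidden groups cost no extra syscalls.
    fn list_visible<G, S>(
        groups: Vec<G>,
        can_read: impl Fn(&G) -> bool,
        list_snapshots: impl Fn(&G) -> Vec<S>,
    ) -> Vec<S> {
        groups
            .iter()
            .filter(|g| can_read(g))           // cheap ACL check first
            .flat_map(|g| list_snapshots(g))   // expensive directory walk last
            .collect()
    }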
3c945d73c2 client/http_client: add put method
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 16:59:14 +01:00
58fcbf5ab7 client: expose all-file-systems option
Useful to avoid the need for a long (and possibly changing) list of include-dev
options in certain situations, e.g. nested ZFS file systems. The option is
already implemented and seems to work as expected. The checks for virtual
filesystems are not affected by this option.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-16 16:59:14 +01:00
3a3f31c947 ui: datastores: hide "no datastore" box by default
avoids showing it during store load; at that point we do not know
whether there are any datastores, and a loading mask is already shown.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-16 16:59:14 +01:00
8fc63287df ui: improve comment behaviour for datastore Summary
when we cannot load the config (e.g., due to missing permissions),
show the comment from the global datastore list

also show a message box for a load error instead of setting
the text of the comment box

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 10:39:34 +01:00
172473e4de ui: DataStoreList: show message when there are no datastores
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 10:39:34 +01:00
76f549debb ui: DataStoreList: remove datastores also from hash
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-16 10:39:34 +01:00
c9097ff801 pxar: avoid including archive root's exclude patterns in .pxarexclude-cli
The patterns from the archive root's .pxarexclude file are already present in
self.patterns when encode_pxarexclude_cli is called. Pass along the number of
CLI patterns and slice accordingly.

Suggested-By: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 13:05:09 +01:00
fb01fd3af6 visibility cleanups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:53:50 +01:00
fa4bcbcad0 pxar: only generate .pxarexclude-cli if there were CLI parameters
previously a .pxarexclude entry in the root of the archive caused the file to
be generated as well, because the patterns are read before calling
generate_directory_file_list, and within the function it wasn't possible to
distinguish between a pattern coming from the CLI and a pattern coming from
archive/root/.pxarexclude

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:18:08 +01:00
189cdb7427 pxar: include .pxarexclude files in the archive
The documentation states:
.pxarexclude files are treated as regular files and will be included in the
backup archive.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:18:06 +01:00
874bd5454d pxar: fix anchored exclusion at archive root
There is no leading slash in an entry's full_path, causing an anchored
exclude at the root level to fail, e.g. having "/name" as the content of the
file archive/root/.pxarexclude didn't match the file archive/root/name

Fix this by prepending a leading slash before matching.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:18:04 +01:00
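
The shape of the fix, sketched with a stand-in matcher:

    // Prepend '/' so anchored patterns like "/name" can match entries whose
    // full_path is stored without a leading slash. `matches` is a stand-in
    // for the real pattern matcher.
    fn match_anchored(full_path: &str, matches: impl Fn(&str) -> bool) -> bool {
        if full_path.starts_with('/') {
            matches(full_path)
        } else {
            matches(&format!("/{}", full_path))
        }
    }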
b649887e9a remove unused function
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 11:15:15 +01:00
8c62c15f56 followup: whitespace cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 11:02:45 +01:00
51ac17b56e api: apt/versions: fix running_kernel string for unknown package case
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-11-12 11:02:20 +01:00
fc5a012068 manager: versions: non-verbose should actually print server pkg info
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 10:28:03 +01:00
5e293f1315 apt: use typed response for get_versions
...and cleanup get_versions for manager CLI.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-12 10:15:32 +01:00
87367decf2 ui: tell ESLint to be strict in check target
the .lint-incremental target, which is implicitly used by the install
target, is still more forgiving to allow faster "change, build, test"
iteration when developing.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 09:54:39 +01:00
f792220dd4 d/control: update for new pin-project dependency
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-12 09:54:39 +01:00
97030c9407 cleanup clippy leftovers
this used to contain a pointer cast, now it doesn't

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 09:43:38 +01:00
5d1d0f5d6c use pin-project to remove more unsafe blocks
we already have it in our dependency tree, so use it

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-12 09:43:38 +01:00
294466ee61 manager: versions: unify printing
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
c100fe9108 add versions command to proxmox-backup-manager
Add the versions command to proxmox-backup-manager with a similar output
to pveversion [-v]. It prints the packages line by line with only the
package name, followed by the version and, for proxmox-backup and
proxmox-backup-server, some additional information (running kernel,
running version).

In addition it supports the optional output-format parameter which can
be used to print the complete data in either json, json-pretty or text
format. If output-format is specified, the --verbose parameter is
ignored and the detailed list of packages is printed.

With the addition of the versions command, the report is extended as
well.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-11-11 18:30:33 +01:00
e754da3ac2 api: versions: add version also in server package unknown case
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
bc1e52bc38 api: versions: rust fmt cleanups
line length limit is 100

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
6f0073bbb5 api: apt update info: do not serialize extra info if none
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 18:30:33 +01:00
2decf85d6e add extra_info field to APTUpdateInfo
Add an optional string field to APTUpdateInfo which can be used for
extra information.

This is used for passing running kernel and running version information
in the versions API call together with proxmox-backup and
proxmox-backup-server.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-11-11 16:39:11 +01:00
1d8f849457 api2/node/syslog: use 'real_service_name' here also
for now this only does the 'postfix' -> 'postfix@-' conversion;
this fixes the issue that we only showed the 'postfix' service syslog
(which is rather empty in a default setup) instead of the instance one

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 16:36:42 +01:00
beb07279b6 log source of encryption key
This patch prints the source of the encryption key when running
operations with proxmox-backup-client.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-11-11 16:35:20 +01:00
8c6854c8fd inform user when using default encryption key
Currently if you generate a default encryption key:
`proxmox-backup-client key create --kdf none`

all backup operations which don't explicitly disable encryption will be
encrypted with this key.

I found it quite surprising that my backups were all encrypted without
me explicitly specifying either a key or an encryption mode

This patch informs the user when the default key is used (and no
crypt-mode is provided explicitly)

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-11-11 16:35:20 +01:00
57f472d9bb report: use '$' instead of '#' for showing commands
since some files can contain the '#' character for comments (e.g.,
/etc/hosts)

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-11-11 16:19:37 +01:00
94ffca10a2 report: fix grammar error
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2020-11-11 16:19:33 +01:00
0a274ab0a0 ui: UserView: render name as 'Firstname Lastname'
instead of only the firstname

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 14:09:40 +01:00
c0026563b0 make user properties deletable
by using our usual pattern for the update call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 14:09:40 +01:00
e411924c7c rest: check for disabled token (user)
when authenticating a token, and not just when authenticating a
user/ticket.

Reported-By: Dominik Jäger <d.jaeger@proxmox.com>

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-11 12:21:29 +01:00
709c15abaa bump version to 1.0.1-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 10:21:30 +01:00
b404e4d930 d/control: check in new dependencies to generated control
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 10:21:30 +01:00
f507580c3f docs: faq: fix first releases
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 10:14:01 +01:00
291b786076 docs: fix prune retention example
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 10:14:01 +01:00
06c9059dac daemon: rename method, endless loop, bail on exec error
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 10:14:01 +01:00
d7c6ad60dd daemon: add hack for sd_notify
sd_notify is not synchronous; in other words, it only waits until the
message reaches the queue, not until it is processed by systemd

when the process that sent such a message exits before systemd could
process it, the message cannot be associated with the correct pid

so in the case of reloading, we send a message with 'MAINPID=<newpid>'
to signal that it will change. if the old process then exits before
systemd knows this, it will not accept the 'READY=1' message from the
child, since it rejects the MAINPID change

since there is no (AFAICS) library interface to check the unit status,
we use 'systemctl is-active <SERVICE_NAME>' to check the state until
it is no longer 'reloading'

on newer systemd versions, there is 'sd_notify_barrier', which would
allow us to wait for systemd to have processed all messages from the
current pid before acknowledging to the child, but on buster the
systemd version is too old...

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 09:43:00 +01:00
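
The polling part of the workaround could look roughly like this (sleep interval and error handling assumed):

    use std::process::Command;
    use std::thread::sleep;
    use std::time::Duration;

    // Poll `systemctl is-active <unit>` until the unit has left the
    // "reloading" state, as a substitute for sd_notify_barrier on buster.
    fn wait_until_not_reloading(unit: &str) -> std::io::Result<()> {
        loop {
            let out = Command::new("systemctl").args(["is-active", unit]).output()?;
            if String::from_utf8_lossy(&out.stdout).trim() != "reloading" {
                return Ok(());
            }
            sleep(Duration::from_millis(100));
        }
    }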
0a0ba0785b prune sim: avoid colon to separate keep desc from count
hack for space issues for monthly keeps and >9 counts

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 08:20:13 +01:00
6ed79592f2 prune sim: make backup schedule a form, bind update button to its validity
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 08:11:46 +01:00
4c75ee3471 prune sim: do not use unnecessary variable, declare inline
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 08:11:16 +01:00
6f997da8cd prune sim: set min-height for calendar day cells
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 08:10:43 +01:00
03e40aa4ee ui: datastore add: set default schedule
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 07:49:01 +01:00
be1d6cbcc6 ui: shorten automatic ID length a bit
Without hyphens, we had 20 hex digits, so ~80 bits, which is probably overkill.
Use 12 (13 with hyphen); this is still 48 bits.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 07:40:23 +01:00
ffaca016ad ui: datastore summary: drop removed bytes display
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 07:27:21 +01:00
71f82a98d7 d/control: add missing dependencies for non-ISO installations
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 07:26:05 +01:00
deef6fbc0c cargo: extend authors list
this was mostly selected by executing

and adding those with more than a handful of commits, so no hard
feelings here, this was definitely also a team effort to get stuff
polished!

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 14:47:48 +01:00
4ac529141f bump version to 1.0.0-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 14:47:48 +01:00
a108a2e967 ui: drop debug beta code
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 14:47:48 +01:00
ff7a29104c postinst: fix version check for remote.cfg cleanup
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-10 14:35:37 +01:00
240b2ffb9b ui: improve activeTab selection from fragment and state
handle invalid fragments for tabs, as well as not rendered tabpanels

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-10 14:21:54 +01:00
a86e703661 tools::runtime: pin_mut instead of unsafe block
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-10 14:18:45 +01:00
1ecf4e6d20 async_io: require Unpin for EitherStream and HyperAccept
We use it with Unpin types and this way we get rid of a lot
of `unsafe` blocks.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-10 14:18:45 +01:00
9f9a661b1a verify: cleanup logging order/messages
otherwise we end up printing warnings before the start message.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-10 14:11:36 +01:00
1b1cab8321 verify: log/warn on invalid owner
in order to trigger a notification/make the problem more visible than
just in syslog.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-10 14:11:36 +01:00
f4f9a503de ui: add missing panel help buttons
add missing help buttons (question mark, top right) so that we are
consistent and each panel has it.

I chose the IMHO most fitting sections.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-11-10 13:53:21 +01:00
a4971d5f90 docs: add ref for sysadmin host admin section
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-11-10 13:53:21 +01:00
477ebe6b78 docs: user management: avoid some inconsistencies
The space between '--' and 'path' in two of the commands was wrong. The other
changes make the names of the store and token consistent with the rest of the
section and should improve readability.

Also add the Datastore.Verify permission in the output of the command:
proxmox-backup-manager user permissions john@pbs --path /datastore/store1
A DatastoreAdmin now has this permission and that's what john@pbs is in the
example.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-10 13:47:52 +01:00
38efbfc148 ui: app: fix fixme
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 13:38:30 +01:00
10052ea644 remote.cfg: rename userid to 'auth-id'
and fixup config file on upgrades accordingly

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-10 13:25:24 +01:00
b57619ea29 ui: datastores sync: future proof and move local store column in front
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 13:22:54 +01:00
445b0043b2 ui: show (local)datastore column only in global sync/verifyview
it's rather hacky, but our cbind mixin does not support columns (yet).
if it does sometime in the future, we could use that instead

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-10 13:14:47 +01:00
8b62cbe752 docs: update package repositories
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 13:14:04 +01:00
81f99362d9 docs: installation: don't mention ext3 as an option anymore
Support for ext3 was removed by commit 0abf0d3683b74421eca24ba61d1d4e100d35211a
in pve-installer.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-10 13:13:44 +01:00
414c23facb fix #3060: improve get_owner error handling
log invalid owners to the system log, and continue with the next group just as
if a permission check had failed, for the following operations:
- verify store with limited permissions
- list store groups
- list store snapshots

all other call sites either handle it correctly already (sync/pull), or
operate on a single group/snapshot and can bubble up the error.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-10 12:58:44 +01:00
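
Sketched as a loop with illustrative identifiers, the "log and continue" semantics look like this:

    // Skip groups whose owner cannot be determined, logging the store name
    // as well since a group ID alone is ambiguous across datastores.
    fn list_groups_lenient<G: std::fmt::Display>(
        store: &str,
        groups: Vec<G>,
        get_owner: impl Fn(&G) -> Result<String, String>,
    ) -> Vec<(G, String)> {
        let mut visible = Vec::new();
        for group in groups {
            match get_owner(&group) {
                Ok(owner) => visible.push((group, owner)),
                Err(err) => eprintln!(
                    "failed to get owner of group '{}' in store '{}' - {}",
                    group, store, err
                ),
            }
        }
        visible
    }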
c5608cf86c encryption: add best practice for storing master key
Further clarify that the paperkey should be a last resort
recovery option, after a password manager and USB drive.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-11-10 12:51:30 +01:00
5d08c750ef HttpsConnector: include destination on connect errors
for more useful log output
old:
Nov 10 11:50:51 foo pvestatd[3378]: proxmox-backup-client failed: Error: error trying to connect: tcp connect error: No route to host (os error 113)
new:
Nov 10 11:55:21 foo pvestatd[3378]: proxmox-backup-client failed: Error: error trying to connect: error connecting to https://thebackuphost:8007/ - tcp connect error: No route to host (os error 113)

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-10 11:58:19 +01:00
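
One hedged way to produce such messages, here with the anyhow and tokio crates (the real HttpsConnector may build the error differently):

    use anyhow::{Context, Result};
    use tokio::net::TcpStream;

    // Attach the destination to the underlying connect error, so the log
    // names the unreachable host instead of only the OS error.
    async fn connect_with_context(host: &str, port: u16) -> Result<TcpStream> {
        TcpStream::connect((host, port))
            .await
            .with_context(|| format!("error connecting to https://{}:{}/", host, port))
    }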
f3fde36beb client: error context when building HttpClient
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-10 11:58:19 +01:00
0c83e8891e ui: fix task description 2020-11-10 11:53:39 +01:00
133de2dd1f ui: add/fix help buttons
added a few more help buttons where appropriate:

* GC and Prune schedule windows
* Create Directory window
* API Tokens, link directly to token section
* verify jobs window

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-11-10 11:51:03 +01:00
c8219747f0 ui: add all online help refs found in docs
recommit the onlinehelp after the scanrefs script has been adapted and
the docs are up to date

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-11-10 11:50:56 +01:00
0247f794e9 docs: add network management reference
needed in order for the help button in the network edit window to work.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-11-10 11:50:17 +01:00
710f787c41 docs: add maintenance chapter prefix to verification ref
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-11-10 11:50:12 +01:00
d8916a326c scanrefs: only scan docs, not JS files
This is a temporary hack until we find a sensible way to scan the
proxmox-widget-toolkit JS files as well.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-11-10 11:50:09 +01:00
924d6d4072 prune sim: show count for rule
and rename 'all zero' to 'keep-all' to make it consistent with the prune dialog
in PBS.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-10 11:47:37 +01:00
984ac33d5c ui: subscription: usage chart: render date as ISO 8601
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 11:46:22 +01:00
0a4dfd63c9 ui: usage graph: show axis and set maximum
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 11:46:05 +01:00
a6e746f652 ui: datastore list summary: add more padding between elements
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 11:46:05 +01:00
30f73fa2e0 fix bug #3060: continue sync if we cannot acquire the group lock 2020-11-10 11:29:36 +01:00
9f0ee346e9 ui: Datastores Summary: change layout and chart
changes the layout to look a little bit more like the statistics panel
we have for ceph in pve, while changing to the UsageChart and adding
some more datastore info (from the last garbage collection)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-10 10:43:07 +01:00
48d6dede4a ui: refactor calculate_dedup_factor
so that we can reuse this

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-10 10:43:07 +01:00
8432e4a655 ui: add panel/UsageChart
heavily inspired by pveRunningChart, but without the dynamic adding
of data, and specific to the usage of datastores

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-10 10:43:07 +01:00
b35eb0a175 api2/status/datastore-usage: add gc-status and history start and delta
so that we can show more info and calculate the points in time for the
history

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-10 10:43:07 +01:00
c3a1b34ed3 ui: subscription: add more button icons, small UX fix
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 10:42:45 +01:00
bb26843cd6 ui/docs: add get help onlineHelp
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 10:35:35 +01:00
ee0ab12dd0 ui: move disks/directory stuff to tab panel
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 10:15:44 +01:00
d5f7755467 docs: online help scanner: also include help tool links
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 10:15:08 +01:00
5c64e83b1e ui: datastore: set onlineHelp for changing group owner
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 09:53:05 +01:00
0f6f99b4ec ui: prune: set onlineHelp
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 09:51:30 +01:00
f668862ae0 ui: prune: add clear-trigger to keep fields
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 09:51:20 +01:00
c960d2b501 bail if mount point already exists for directories
similar to what we do for zfs. By bailing before partitioning, the disk is
still considered unused after a failed attempt.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-10 09:25:58 +01:00
f5d9f2534b mount zpools created via API under /mnt/datastore
as we do for other file systems

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-10 09:25:58 +01:00
9a3ddcea33 ui: utils: eslint format fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 09:24:35 +01:00
030464d3a9 docs: s/DataStore/Datastore/
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 09:24:13 +01:00
3f30b32c2e ui: prune: show count for rule
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-10 09:24:13 +01:00
5eafe6aabc ui: prune: show which rule keeps backup
and adjust layout so the description fits.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-10 09:24:13 +01:00
2c9f274efa ui: add help tool to user and remote config 2020-11-10 09:23:22 +01:00
31112c79ac ui: add help tool to datastore panel 2020-11-10 09:15:12 +01:00
d89f91b538 ui: acl editor: disallow path editing for datastore permission views
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 08:19:17 +01:00
a6310ec294 ui: fix widget height in dashboard 2020-11-10 08:12:35 +01:00
98d9323534 ui: add link to www.proxmox.com for subscription plans 2020-11-10 08:07:49 +01:00
09f1f28800 ui: ACL view: fix path filtering
and add some comments about the actual behavior of those config
properties.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-10 07:33:20 +01:00
e1da9ca4bb ui: datastore dashboard: use gauge for usage, rework layout a bit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 19:26:48 +01:00
625c7bfc0b ui: task summary: enable grid mouse track over
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 19:25:43 +01:00
d9503950e3 ui: task summary: add pointer cursor if clickable
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 18:09:05 +01:00
376e927980 ui: datastore summary: increase usage graph height
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 17:55:59 +01:00
5204cbcf0f ui: datastore summary: add line chart icon to full-estimation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 17:48:53 +01:00
e373dcc564 ui: datastore/content: improve action button layout
Fix font-size to 14px to improve font-awesome rendering, add some
slight margin between the buttons so that they are not glued
together, add a slight text-shadow on mouse over.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 17:45:08 +01:00
137a6ebcad apt: allow changelog retrieval from enterprise repo
If a package is or will be installed from the enterprise repo, retrieve
the changelog from there as well (securely via HTTPS and authenticated
with the subscription key).

Extends the get_string method to take additional headers, in this case
used for 'Authorization'. Hyper does not have built-in basic auth
support AFAICT but it's simple enough to just build the header manually.

Take the opportunity and also set the User-Agent sensibly for GET
requests, just like for POST.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-09 17:28:58 +01:00
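
Building the header by hand might look like this (placeholder credential names; assumes the base64 crate's pre-0.21 encode API):

    // Manually construct an HTTP Basic Authorization header value from the
    // subscription credentials (names here are placeholders).
    fn basic_auth_header(key_id: &str, secret: &str) -> String {
        let credentials = base64::encode(format!("{}:{}", key_id, secret));
        format!("Basic {}", credentials)
    }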
ed1329ecf7 ui: make Datastore clickable again
by showing the previously added pbsDataStores panel

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
2371c1e371 ui: add Panels necessary for Datastores Overview
a panel for a single datastore that gets updated from an external caller
and shows the usage, estimated full date, history and task summary grid

a panel that dynamically generates the panel above for each datastore

and a tabpanel that includes the panel above, as well as a global
syncview, verifyview and aclview

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
63c07d950c ui: TaskSummary: handle less defined parameters of tasks
this makes it a little easier to provide good data, without
hardcoding all types in the source object

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
a3cdb19e33 ui: TaskSummary: add subPanelModal and datastore parameters
in preparation for the per-datastore grid

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
4623cd6497 ui: TaskSummary: move state/types/titles out of the controller
it seems that under certain circumstances, extjs does not initialize
or remove the content from objects in controllers

move it to the view, where they always exist

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
ab81bb13ad ui: make Sync/VerifyView and Edit usable without datastore
we want to use this panel again for a 'global' overview, without
any datastore preselected, so we have to handle that, and add
a datastore selector in the editwindow

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
616650a198 ui: Utils: add parse_datastore_worker_id
to parse the datastore out of a worker_id
for this we need some regexes that are the same as in the backend

for now we only parse out the datastore, but we can extend this
in the future to parse relevant info (e.g. remote for syncs,
id/type for backups)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
78763d21b1 ui: refactor render_size_usage to Utils
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
f2d6324958 ui: refactor render_estimate
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
6e880f19cc api2/node/tasks: add check_job_store and use it
to easily check the store of a worker_id
this fixes the issue that one could not filter by type 'syncjob' and
datastore simultaneously

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-09 16:37:24 +01:00
64623f329e ui: recommit onlinehelp
now that the last commit fixed the title generation

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 16:36:00 +01:00
407f3fb994 scanrefs: remove term prefix from title
It can happen that a title is defined as a term in the following way:
:term:`My title`

This patch checks for that and strips the leading part and the trailing `.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-11-09 16:35:29 +01:00
0eb0c4bd63 proxy: fix log message for auth log rotation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 16:34:03 +01:00
82422c115a ui: admin/summary: add versions button/window
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 16:33:22 +01:00
ed2beb334d api: node/apt: add versions call
very basic, based on the API/concepts of the PVE one.

Still missing: adding an extra_info string option to APTUpdateInfo
and passing along the running kernel/PBS version there.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 16:31:56 +01:00
f3b4820d06 www: show more ACLs in datastore panel
since just the ACLs defined on the exact datastore path don't give
anywhere near a complete picture of who has access to it.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-09 15:19:15 +01:00
8f7cd96df4 installation: minor wording fix
very minor but worthwhile edits

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-11-09 15:18:44 +01:00
4accbc5853 backup-client: encryption: discuss paperkey command
adds a paragraph to the encryption section about
encoding the master key into a QR code for printing

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-11-09 15:18:44 +01:00
2791318ff1 fix bug #3121: forbid removing used remotes 2020-11-09 12:48:29 +01:00
47208b4147 pxar: log when skipping mount points
Clippy complains about the number of parameters we have for
create_archive and it really does need to be made somewhat
less awkward and more usable. For now we just log to stderr
as we previously did. Added todo-comments for this.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-09 12:43:16 +01:00
b783591fb5 ui: datastore content: ensure action column is wide enough
with the "change owner" action added we now need more than the
default of 100 px, so increase to 120 px for now.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 12:31:14 +01:00
9dd6175808 ui: token selector: use same layout as auth id selector
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 12:24:54 +01:00
5e8b97178e ui: auth/token selector: tell ExtJS we injected data into the store
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 12:21:02 +01:00
38260cddf5 tools apt: include package name in filter data
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-09 08:55:08 +01:00
120 changed files with 2879 additions and 921 deletions

Cargo.toml

@@ -1,7 +1,16 @@
 [package]
 name = "proxmox-backup"
-version = "0.9.7"
-authors = ["Dietmar Maurer <dietmar@proxmox.com>"]
+version = "1.0.3"
+authors = [
+    "Dietmar Maurer <dietmar@proxmox.com>",
+    "Dominik Csapak <d.csapak@proxmox.com>",
+    "Christian Ebner <c.ebner@proxmox.com>",
+    "Fabian Grünbichler <f.gruenbichler@proxmox.com>",
+    "Stefan Reiter <s.reiter@proxmox.com>",
+    "Thomas Lamprecht <t.lamprecht@proxmox.com>",
+    "Wolfgang Bumiller <w.bumiller@proxmox.com>",
+    "Proxmox Support Team <support@proxmox.com>",
+]
 edition = "2018"
 license = "AGPL-3"
 description = "Proxmox Backup"
@@ -37,8 +46,9 @@ pam = "0.7"
 pam-sys = "0.5"
 percent-encoding = "2.1"
 pin-utils = "0.1.0"
+pin-project = "0.4"
 pathpatterns = "0.1.2"
-proxmox = { version = "0.7.0", features = [ "sortable-macro", "api-macro", "websocket" ] }
+proxmox = { version = "0.7.1", features = [ "sortable-macro", "api-macro", "websocket" ] }
 #proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
 #proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
 proxmox-fuse = "0.1.0"

debian/changelog

@@ -1,3 +1,84 @@
+rust-proxmox-backup (1.0.3-1) unstable; urgency=medium
+
+  * client: inform user when automatically using the default encryption key
+
+  * ui: UserView: render name as 'Firstname Lastname'
+
+  * proxmox-backup-manager: add versions command
+
+  * pxar: fix anchored exclusion at archive root
+
+  * pxar: include .pxarexclude files in the archive
+
+  * client: expose all-file-systems option
+
+  * api: make expensive parts of datastore status opt-in
+
+  * api: include datastore ID in invalid owner errors
+
+  * garbage collection: treat .bad files like regular chunks to ensure they
+    are removed if not referenced anymore
+
+  * client: fix issues with encoded UPID strings
+
+  * encryption: add fingerprint to key config
+
+  * client: add 'key show' command
+
+  * fix #3139: add key fingerprint to backup snapshot manifest and check it
+    when loading with a key
+
+  * ui: add snapshot/file fingerprint tooltip
+
+ -- Proxmox Support Team <support@proxmox.com>  Tue, 24 Nov 2020 08:55:47 +0100
+
+rust-proxmox-backup (1.0.1-1) unstable; urgency=medium
+
+  * ui: datastore summary: drop 'removed bytes' display
+
+  * ui: datastore add: set default schedule
+
+  * prune sim: make backup schedule a form, bind update button to its validity
+
+  * daemon: add workaround for race in reloading and systemd 'ready'
+    notification
+
+ -- Proxmox Support Team <support@proxmox.com>  Wed, 11 Nov 2020 10:18:12 +0100
+
+rust-proxmox-backup (1.0.0-1) unstable; urgency=medium
+
+  * fix #3121: forbid removing used remotes
+
+  * docs: backup-client: encryption: discuss paperkey command
+
+  * pxar: log when skipping mount points
+
+  * ui: show also parent ACLs which affect a datastore in its panel
+
+  * api: node/apt: add versions call
+
+  * ui: make Datastore a selectable panel again. Show a datastore summary
+    list, and provide unfiltered access to all sync and verify jobs.
+
+  * ui: add help tool-button to various panels
+
+  * ui: set various onlineHelp buttons
+
+  * zfs: mount new zpools created via API under /mnt/datastore/<id>
+
+  * ui: move disks/directory views to its own tab panel
+
+  * fix #3060: continue sync if we cannot acquire the group lock
+
+  * HttpsConnector: include destination on connect errors
+
+  * fix #3060: improve get_owner error handling
+
+  * remote.cfg: rename userid to 'auth-id'
+
+  * verify: log/warn on invalid owner
+
+ -- Proxmox Support Team <support@proxmox.com>  Tue, 10 Nov 2020 14:36:13 +0100
+
 rust-proxmox-backup (0.9.7-1) unstable; urgency=medium

   * ui: add remote store selector

debian/control

@@ -33,11 +33,12 @@ Build-Depends: debhelper (>= 11),
 librust-pam-sys-0.5+default-dev,
 librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
 librust-percent-encoding-2+default-dev (>= 2.1-~~),
+librust-pin-project-0.4+default-dev,
 librust-pin-utils-0.1+default-dev,
-librust-proxmox-0.7+api-macro-dev,
-librust-proxmox-0.7+default-dev,
-librust-proxmox-0.7+sortable-macro-dev,
-librust-proxmox-0.7+websocket-dev,
+librust-proxmox-0.7+api-macro-dev (>= 0.7.1-~~),
+librust-proxmox-0.7+default-dev (>= 0.7.1-~~),
+librust-proxmox-0.7+sortable-macro-dev (>= 0.7.1-~~),
+librust-proxmox-0.7+websocket-dev (>= 0.7.1-~~),
 librust-proxmox-fuse-0.1+default-dev,
 librust-pxar-0.6+default-dev (>= 0.6.1-~~),
 librust-pxar-0.6+futures-io-dev (>= 0.6.1-~~),
@@ -78,7 +79,7 @@ Build-Depends: debhelper (>= 11),
 uuid-dev,
 debhelper (>= 12~),
 bash-completion,
-pve-eslint,
+pve-eslint (>= 7.12.1-1),
 python3-docutils,
 python3-pygments,
 rsync,
@@ -104,7 +105,9 @@ Depends: fonts-font-awesome,
 libjs-extjs (>= 6.0.1),
 libzstd1 (>= 1.3.8),
 lvm2,
+openssh-server,
 pbs-i18n,
+postfix | mail-transport-agent,
 proxmox-backup-docs,
 proxmox-mini-journalreader,
 proxmox-widget-toolkit (>= 2.3-6),
@@ -113,6 +116,7 @@ Depends: fonts-font-awesome,
 ${misc:Depends},
 ${shlibs:Depends},
 Recommends: zfsutils-linux,
+ifupdown2,
 Description: Proxmox Backup Server daemon with tools and GUI
  This package contains the Proxmox Backup Server daemons and related
  tools. This includes a web-based graphical user interface.

debian/control.in

@@ -4,7 +4,9 @@ Depends: fonts-font-awesome,
 libjs-extjs (>= 6.0.1),
 libzstd1 (>= 1.3.8),
 lvm2,
+openssh-server,
 pbs-i18n,
+postfix | mail-transport-agent,
 proxmox-backup-docs,
 proxmox-mini-journalreader,
 proxmox-widget-toolkit (>= 2.3-6),
@@ -13,6 +15,7 @@ Depends: fonts-font-awesome,
 ${misc:Depends},
 ${shlibs:Depends},
 Recommends: zfsutils-linux,
+ifupdown2,
 Description: Proxmox Backup Server daemon with tools and GUI
  This package contains the Proxmox Backup Server daemons and related
  tools. This includes a web-based graphical user interface.

debian/debcargo.toml

@@ -14,7 +14,7 @@ section = "admin"
 build_depends = [
   "debhelper (>= 12~)",
   "bash-completion",
-  "pve-eslint",
+  "pve-eslint (>= 7.12.1-1)",
   "python3-docutils",
   "python3-pygments",
   "rsync",

debian/proxmox-backup-server.lintian-overrides

@@ -1,2 +1,2 @@
-proxmox-backup-server: package-installs-apt-sources etc/apt/sources.list.d/pbstest-beta.list
+proxmox-backup-server: package-installs-apt-sources etc/apt/sources.list.d/pbs-enterprise.list
 proxmox-backup-server: systemd-service-file-refers-to-unusual-wantedby-target lib/systemd/system/proxmox-backup-banner.service getty.target

debian/postinst

@@ -28,6 +28,15 @@ case "$1" in
         if dpkg --compare-versions "$2" 'le' '0.9.5-1'; then
             chown --quiet backup:backup /var/log/proxmox-backup/api/auth.log || true
         fi
+        if dpkg --compare-versions "$2" 'le' '0.9.7-1'; then
+            if [ -e /etc/proxmox-backup/remote.cfg ]; then
+                echo "NOTE: Switching over remote.cfg to new field names.."
+                flock -w 30 /etc/proxmox-backup/.remote.lck \
+                    sed -i \
+                        -e 's/^\s\+userid /\tauth-id /g' \
+                        /etc/proxmox-backup/remote.cfg || true
+            fi
+        fi
     fi
     # FIXME: Remove in future version once we're sure no broken entries remain in anyone's files
     if grep -q -e ':termproxy::[^@]\+: ' /var/log/proxmox-backup/tasks/active; then

debian/proxmox-backup-server.install

@@ -3,7 +3,7 @@ etc/proxmox-backup.service /lib/systemd/system/
 etc/proxmox-backup-banner.service /lib/systemd/system/
 etc/proxmox-backup-daily-update.service /lib/systemd/system/
 etc/proxmox-backup-daily-update.timer /lib/systemd/system/
-etc/pbstest-beta.list /etc/apt/sources.list.d/
+etc/pbs-enterprise.list /etc/apt/sources.list.d/
 usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-api
 usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-proxy
 usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-banner

debian/proxmox-backup-server.maintscript

@@ -0,0 +1 @@
+rm_conffile /etc/apt/sources.list.d/pbstest-beta.list 1.0.0~ proxmox-backup-server

docs/Makefile

@@ -17,6 +17,7 @@ MANUAL_PAGES := \
 PRUNE_SIMULATOR_FILES := \
 	prune-simulator/index.html \
 	prune-simulator/documentation.html \
+	prune-simulator/clear-trigger.png \
 	prune-simulator/prune-simulator.js

 # Sphinx documentation setup

docs/_ext/proxmox-scanrefs.py

@@ -44,7 +44,7 @@ def scan_extjs_files(wwwdir="../www"): # a bit rough i know, but we can optimize
             js_files.append(os.path.join(root, filename))
     for js_file in js_files:
         fd = open(js_file).read()
-        allmatch = re.findall("onlineHelp:\s*[\'\"](.*?)[\'\"]", fd, re.M)
+        allmatch = re.findall("(?:onlineHelp:|get_help_tool\s*\()\s*[\'\"](.*?)[\'\"]", fd, re.M)
         for match in allmatch:
             anchor = match
             anchor = re.sub('_', '-', anchor) # normalize labels
@@ -73,7 +73,9 @@ class ReflabelMapper(Builder):
             'link': '/docs/index.html',
             'title': 'Proxmox Backup Server Documentation Index',
         }
-        self.env.used_anchors = scan_extjs_files()
+        # Disabled until we find a sensible way to scan proxmox-widget-toolkit
+        # as well
+        #self.env.used_anchors = scan_extjs_files()

         if not os.path.isdir(self.outdir):
             os.mkdir(self.outdir)
@@ -93,6 +95,9 @@ class ReflabelMapper(Builder):
                     logger.info('traversing section {}'.format(title.astext()))
                     ref_name = getattr(title, 'rawsource', title.astext())
+                    if (ref_name[:7] == ':term:`'):
+                        ref_name = ref_name[7:-1]
                     self.env.online_help[labelid] = {'link': '', 'title': ''}
                     self.env.online_help[labelid]['link'] = "/docs/" + os.path.basename(filename_html) + "#{}".format(labelid)
                     self.env.online_help[labelid]['title'] = ref_name
@@ -112,15 +117,18 @@ class ReflabelMapper(Builder):
     def validate_anchors(self):
         #pprint(self.env.online_help)
         to_remove = []
-        for anchor in self.env.used_anchors:
-            if anchor not in self.env.online_help:
-                logger.info("[-] anchor {} is missing from onlinehelp!".format(anchor))
-        for anchor in self.env.online_help:
-            if anchor not in self.env.used_anchors and anchor != 'pbs_documentation_index':
-                logger.info("[*] anchor {} not used! deleting...".format(anchor))
-                to_remove.append(anchor)
-        for anchor in to_remove:
-            self.env.online_help.pop(anchor, None)
+        # Disabled until we find a sensible way to scan proxmox-widget-toolkit
+        # as well
+        #for anchor in self.env.used_anchors:
+        #    if anchor not in self.env.online_help:
+        #        logger.info("[-] anchor {} is missing from onlinehelp!".format(anchor))
+        #for anchor in self.env.online_help:
+        #    if anchor not in self.env.used_anchors and anchor != 'pbs_documentation_index':
+        #        logger.info("[*] anchor {} not used! deleting...".format(anchor))
+        #        to_remove.append(anchor)
+        #for anchor in to_remove:
+        #    self.env.online_help.pop(anchor, None)
         return

     def finish(self):

docs/backup-client.rst

@@ -365,9 +365,22 @@ To set up a master key:
 backed up. It can happen, for example, that you back up an entire system, using
 a key on that system. If the system then becomes inaccessible for any reason
 and needs to be restored, this will not be possible as the encryption key will be
-lost along with the broken system. In preparation for the worst case scenario,
-you should consider keeping a paper copy of this key locked away in
-a safe place.
+lost along with the broken system.
+
+It is recommended that you keep your master key safe, but easily accessible, in
+order for quick disaster recovery. For this reason, the best place to store it
+is in your password manager, where it is immediately recoverable. As a backup to
+this, you should also save the key to a USB drive and store that in a secure
+place. This way, it is detached from any system, but is still easy to recover
+from, in case of emergency. Finally, in preparation for the worst case scenario,
+you should also consider keeping a paper copy of your master key locked away in
+a safe place. The ``paperkey`` subcommand can be used to create a QR encoded
+version of your master key. The following command sends the output of the
+``paperkey`` command to a text file, for easy printing.
+
+.. code-block:: console
+
+  proxmox-backup-client key paperkey --output-format text > qrkey.txt

 Restoring Data

docs/faq.rst

@@ -27,7 +27,7 @@ How long will my Proxmox Backup Server version be supported?
 +-----------------------+--------------------+---------------+------------+--------------------+
 |Proxmox Backup Version | Debian Version     | First Release | Debian EOL | Proxmox Backup EOL |
 +=======================+====================+===============+============+====================+
-|Proxmox Backup 1.x     | Debian 10 (Buster) | tba           | tba        | tba                |
+|Proxmox Backup 1.x     | Debian 10 (Buster) | 2020-11       | tba        | tba                |
 +-----------------------+--------------------+---------------+------------+--------------------+

docs/gui.rst

@@ -132,5 +132,5 @@ top panel to view:
   collection <garbage-collection>` operations, and run garbage collection
   manually
 * **Sync Jobs**: Create, manage and run :ref:`syncjobs` from remote servers
-* **Verify Jobs**: Create, manage and run :ref:`verification` jobs on the
+* **Verify Jobs**: Create, manage and run :ref:`maintenance_verification` jobs on the
   datastore

docs/installation.rst

@@ -37,16 +37,15 @@ Download the ISO from |DOWNLOADS|.
 It includes the following:

 * The `Proxmox Backup`_ server installer, which partitions the local
-  disk(s) with ext4, ext3, xfs or ZFS, and installs the operating
-  system
+  disk(s) with ext4, xfs or ZFS, and installs the operating system

 * Complete operating system (Debian Linux, 64-bit)

-* Our Linux kernel with ZFS support
+* Proxmox Linux kernel with ZFS support

 * Complete tool-set to administer backups and all necessary resources

-* Web based GUI management interface
+* Web based management interface

 .. note:: During the installation process, the complete server
    is used by default and all existing data is removed.

docs/introduction.rst

@@ -127,8 +127,7 @@ language.
 -- `The Rust Programming Language <https://doc.rust-lang.org/book/ch00-00-introduction.html>`_

-.. todo:: further explain the software stack
+.. _get_help:

 Getting Help
 ------------

View File

@@ -77,6 +77,42 @@ edit the interval at which pruning takes place.
  :alt: Prune and garbage collection options

Retention Settings Example
^^^^^^^^^^^^^^^^^^^^^^^^^^

The backup frequency and the retention of old backups may depend on how often
data changes and on how important an older state may be in a specific workload.
When backups act as a company's document archive, there may also be legal
requirements for how long backup snapshots must be kept.

For this example, we assume that you are doing daily backups, that you have a
retention period of 10 years, and that the interval between the backups you
store gradually grows. A matching prune invocation is sketched after this list.

- **keep-last:** ``3`` - even with daily backups, an admin may want to create
  an extra backup just before or after a big upgrade. Setting keep-last ensures
  this.
- **keep-hourly:** not set - for daily backups, this is not relevant. Extra
  manual backups are already covered by keep-last.
- **keep-daily:** ``13`` - together with keep-last, which covers at least one
  day, this ensures that you have at least two weeks of backups.
- **keep-weekly:** ``8`` - ensures that you have at least two full months of
  weekly backups.
- **keep-monthly:** ``11`` - together with the previous keep settings, this
  ensures that you have at least a year of monthly backups.
- **keep-yearly:** ``9`` - this is for the long-term archive. As you covered the
  current year with the previous options, you would set this to nine for the
  remaining years, giving you a total of at least 10 years of coverage.

We recommend that you use a higher retention period than is minimally required
by your environment; you can always reduce it if you find it is unnecessarily
high, but you cannot recreate backup snapshots from the past.
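Translated into a command, such a policy might be applied like the following sketch. The group name ``host/backup-host`` is illustrative, and the ``--dry-run`` flag is an assumption here; verify the exact options with ``proxmox-backup-client prune --help``:

.. code-block:: console

  # proxmox-backup-client prune host/backup-host --dry-run \
      --keep-last 3 --keep-daily 13 --keep-weekly 8 \
      --keep-monthly 11 --keep-yearly 9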
.. _maintenance_gc:

Garbage Collection
@@ -93,7 +129,7 @@ GC** from the top panel. From here, you can edit the schedule at which garbage
 collection runs and manually start the operation.

-.. _verification:
+.. _maintenance_verification:

 Verification
 ------------

View File

@@ -1,3 +1,5 @@
.. _sysadmin_network_configuration:

Network Management
==================

View File

@@ -26,11 +26,8 @@ update``.
 .. FIXME for 7.0: change security update suite to bullseye-security

-In addition, you need a package repository from Proxmox to get Proxmox Backup updates.
+In addition, you need a package repository from Proxmox to get Proxmox Backup
+updates.

-During the Proxmox Backup beta phase, only one repository (pbstest) will be
-available. Once released, an Enterprise repository for production use and a
-no-subscription repository will be provided.
-
 SecureApt
 ~~~~~~~~~
@@ -72,49 +69,45 @@ Here, the output should be:
 f3f6c5a3a67baf38ad178e5ff1ee270c /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg

-.. comment
-    `Proxmox Backup`_ Enterprise Repository
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+`Proxmox Backup`_ Enterprise Repository
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 This will be the default, stable, and recommended repository. It is available for
 all `Proxmox Backup`_ subscription users. It contains the most stable packages,
 and is suitable for production use. The ``pbs-enterprise`` repository is
 enabled by default:

-.. note:: During the Proxmox Backup beta phase only one repository (pbstest)
-   will be available.
-
 .. code-block:: sources.list
   :caption: File: ``/etc/apt/sources.list.d/pbs-enterprise.list``

   deb https://enterprise.proxmox.com/debian/pbs buster pbs-enterprise

 To never miss important security fixes, the superuser (``root@pam`` user) is
 notified via email about new packages as soon as they are available. The
 change-log and details of each package can be viewed in the GUI (if available).

 Please note that you need a valid subscription key to access this
 repository. More information regarding subscription levels and pricing can be
-found at https://www.proxmox.com/en/proxmox-backup/pricing.
+found at https://www.proxmox.com/en/proxmox-backup-server/pricing

-.. note:: You can disable this repository by commenting out the above
-   line using a `#` (at the start of the line). This prevents error
-   messages if you do not have a subscription key. Please configure the
-   ``pbs-no-subscription`` repository in that case.
+.. note:: You can disable this repository by commenting out the above line
+   using a `#` (at the start of the line). This prevents error messages if you do
+   not have a subscription key. Please configure the ``pbs-no-subscription``
+   repository in that case.

 `Proxmox Backup`_ No-Subscription Repository
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 As the name suggests, you do not need a subscription key to access
 this repository. It can be used for testing and non-production
 use. It is not recommended to use it on production servers, because these
 packages are not always heavily tested and validated.

 We recommend to configure this repository in ``/etc/apt/sources.list``.

 .. code-block:: sources.list
   :caption: File: ``/etc/apt/sources.list``

   deb http://ftp.debian.org/debian buster main contrib
@@ -128,12 +121,11 @@ Here, the output should be:
   deb http://security.debian.org/debian-security buster/updates main contrib

-`Proxmox Backup`_ Beta Repository
+`Proxmox Backup`_ Test Repository
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-During the public beta, there is a repository called ``pbstest``. This one
-contains the latest packages and is heavily used by developers to test new
-features.
+This repository contains the latest packages and is heavily used by developers
+to test new features.

 .. .. warning:: the ``pbstest`` repository should (as the name implies)
   only be used to test new features or bug fixes.
@@ -145,7 +137,3 @@ You can access this repository by adding the following line to
   :caption: sources.list entry for ``pbstest``

   deb http://download.proxmox.com/debian/pbs buster pbstest
-
-If you installed Proxmox Backup Server from the official beta ISO, you should
-have this repository already configured in
-``/etc/apt/sources.list.d/pbstest-beta.list``

Binary file not shown (new image, 11 KiB).
View File

@@ -13,6 +13,7 @@
.cal-day {
  vertical-align: top;
  width: 150px;
  height: 75px; /* this is like min-height when used in tables */
  border: #939393 1px solid;
  color: #454545;
}
@@ -32,6 +33,9 @@
.first-of-month {
  border-right: dashed black 4px;
}

.clear-trigger {
  background-image: url(./clear-trigger.png);
}
</style>
<script type="text/javascript" src="extjs/ext-all.js"></script>
View File

@@ -152,7 +152,12 @@ Ext.onReady(function() {
    dataIndex: 'mark',
    renderer: function(value, metaData, record) {
        if (record.data.mark === 'keep') {
            if (record.data.keepCount) {
                return 'keep (' + record.data.keepName +
                    ': ' + record.data.keepCount + ')';
            } else {
                return 'keep (' + record.data.keepName + ')';
            }
        } else {
            return value;
        }
@@ -212,8 +217,12 @@ Ext.onReady(function() {
    let text = Ext.Date.format(backup.data.backuptime, 'H:i');
    if (backup.data.mark === 'remove') {
        html += `<span class="strikethrough">${text}</span>`;
    } else {
        if (backup.data.keepCount) {
            text += ` (${backup.data.keepName} ${backup.data.keepCount})`;
        } else {
            text += ` (${backup.data.keepName})`;
        }
        if (me.useColors) {
            let bgColor = COLORS[backup.data.keepName];
            let textColor = TEXT_COLORS[backup.data.keepName];
@@ -256,6 +265,34 @@ Ext.onReady(function() {
    },
});
Ext.define('PBS.PruneSimulatorKeepInput', {
extend: 'Ext.form.field.Number',
alias: 'widget.prunesimulatorKeepInput',
allowBlank: true,
fieldGroup: 'keep',
minValue: 1,
listeners: {
afterrender: function(field) {
this.triggers.clear.setVisible(field.value !== null);
},
change: function(field, newValue, oldValue) {
this.triggers.clear.setVisible(newValue !== null);
},
},
triggers: {
clear: {
cls: 'clear-trigger',
weight: -1,
handler: function() {
this.triggers.clear.setVisible(false);
this.setValue(null);
},
},
},
});
Ext.define('PBS.PruneSimulatorPanel', {
    extend: 'Ext.panel.Panel',
    alias: 'widget.prunesimulatorPanel',
@@ -470,6 +507,7 @@ Ext.onReady(function() {
            newlyIncludedCount++;
            backup.mark = 'keep';
            backup.keepName = keepName;
            backup.keepCount = newlyIncludedCount;
        } else {
            backup.mark = 'remove';
        }
@@ -488,7 +526,7 @@ Ext.onReady(function() {
        Number(keepParams['keep-yearly']) === 0) {
        backups.forEach(function(backup) {
            backup.mark = 'keep';
-           backup.keepName = 'all zero';
+           backup.keepName = 'keep-all';
        });

        return;
@@ -550,58 +588,37 @@ Ext.onReady(function() {
    keepItems: [
        {
-           xtype: 'numberfield',
+           xtype: 'prunesimulatorKeepInput',
            name: 'keep-last',
-           allowBlank: true,
            fieldLabel: 'keep-last',
-           minValue: 0,
            value: 4,
-           fieldGroup: 'keep',
        },
        {
-           xtype: 'numberfield',
+           xtype: 'prunesimulatorKeepInput',
            name: 'keep-hourly',
-           allowBlank: true,
            fieldLabel: 'keep-hourly',
-           minValue: 0,
-           value: 0,
-           fieldGroup: 'keep',
        },
        {
-           xtype: 'numberfield',
+           xtype: 'prunesimulatorKeepInput',
            name: 'keep-daily',
-           allowBlank: true,
            fieldLabel: 'keep-daily',
-           minValue: 0,
            value: 5,
-           fieldGroup: 'keep',
        },
        {
-           xtype: 'numberfield',
+           xtype: 'prunesimulatorKeepInput',
            name: 'keep-weekly',
-           allowBlank: true,
            fieldLabel: 'keep-weekly',
-           minValue: 0,
            value: 2,
-           fieldGroup: 'keep',
        },
        {
-           xtype: 'numberfield',
+           xtype: 'prunesimulatorKeepInput',
            name: 'keep-monthly',
-           allowBlank: true,
            fieldLabel: 'keep-monthly',
-           minValue: 0,
-           value: 0,
-           fieldGroup: 'keep',
        },
        {
-           xtype: 'numberfield',
+           xtype: 'prunesimulatorKeepInput',
            name: 'keep-yearly',
-           allowBlank: true,
            fieldLabel: 'keep-yearly',
-           minValue: 0,
-           value: 0,
-           fieldGroup: 'keep',
        },
    ],
@@ -613,42 +630,6 @@ Ext.onReady(function() {
        sorters: { property: 'backuptime', direction: 'DESC' },
    });
let scheduleItems = [
{
xtype: 'prunesimulatorDayOfWeekSelector',
name: 'schedule-weekdays',
fieldLabel: 'Day of week',
value: ['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun'],
allowBlank: false,
multiSelect: true,
padding: '0 0 0 10',
},
{
xtype: 'prunesimulatorCalendarEvent',
name: 'schedule-time',
allowBlank: false,
value: '0/6:00',
fieldLabel: 'Backup schedule',
padding: '0 0 0 10',
},
{
xtype: 'numberfield',
name: 'numberOfWeeks',
allowBlank: false,
fieldLabel: 'Number of weeks',
minValue: 1,
value: 15,
maxValue: 260, // five years
padding: '0 0 0 10',
},
{
xtype: 'button',
name: 'schedule-button',
text: 'Update Schedule',
handler: 'reloadFull',
},
];
    me.items = [
        {
            xtype: 'panel',
@@ -684,6 +665,7 @@ Ext.onReady(function() {
        },
        { xtype: "panel", width: 1, border: 1 },
        {
            xtype: 'form',
            layout: 'anchor',
            flex: 1,
            border: false,
@@ -692,7 +674,42 @@ Ext.onReady(function() {
            labelWidth: 120,
        },
        bodyPadding: 10,
-       items: scheduleItems,
+       items: [
{
xtype: 'prunesimulatorDayOfWeekSelector',
name: 'schedule-weekdays',
fieldLabel: 'Day of week',
value: ['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun'],
allowBlank: false,
multiSelect: true,
padding: '0 0 0 10',
},
{
xtype: 'prunesimulatorCalendarEvent',
name: 'schedule-time',
allowBlank: false,
value: '0/6:00',
fieldLabel: 'Backup schedule',
padding: '0 0 0 10',
},
{
xtype: 'numberfield',
name: 'numberOfWeeks',
allowBlank: false,
fieldLabel: 'Number of weeks',
minValue: 1,
value: 15,
maxValue: 260, // five years
padding: '0 0 0 10',
},
{
xtype: 'button',
name: 'schedule-button',
text: 'Update Schedule',
formBind: true,
handler: 'reloadFull',
},
],
        },
    ],
},

View File

@@ -1,6 +1,8 @@
 Storage
 =======

+.. _storage_disk_management:
+
 Disk Management
 ---------------
@@ -57,7 +59,7 @@ create a datastore at the location ``/mnt/datastore/store1``:
 You can also create a ``zpool`` with various raid levels from **Administration
 -> Disks -> Zpool** in the web interface, or by using ``zpool create``. The command
 below creates a mirrored ``zpool`` using two disks (``sdb`` & ``sdc``) and
-mounts it on the root directory (default):
+mounts it under ``/mnt/datastore/zpool1``:

 .. code-block:: console
@@ -85,7 +87,7 @@ display S.M.A.R.T. attributes from the web interface or by using the command:
 .. _datastore_intro:

-:term:`DataStore`
+:term:`Datastore`
 -----------------

 A datastore refers to a location at which backups are stored. The current
@@ -121,6 +123,8 @@ number of backups to keep in that store. :ref:`backup-pruning` and
 periodically based on a configured schedule (see :ref:`calendar-events`) per datastore.

+.. _storage_datastore_create:
+
 Creating a Datastore
 ^^^^^^^^^^^^^^^^^^^^

 .. image:: images/screenshots/pbs-gui-datastore-create-general.png
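With the disks prepared, the datastore referenced above can also be registered on the command line; a minimal sketch using the name and path already shown in this section:

.. code-block:: console

  # proxmox-backup-manager datastore create store1 /mnt/datastore/store1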

View File

@@ -1,3 +1,5 @@
.. _sysadmin_host_administration:

Host System Administration
==========================

View File

@@ -230,11 +230,11 @@ You can list the ACLs of each user/token using the following command:
 .. code-block:: console

  # proxmox-backup-manager acl list
  ┌──────────┬───────────────────┬───────────┬────────────────┐
  │ ugid     │ path              │ propagate │ roleid         │
  ╞══════════╪═══════════════════╪═══════════╪════════════════╡
- │ john@pbs │ /datastore/disk1  │         1 │ DatastoreAdmin │
+ │ john@pbs │ /datastore/store1 │         1 │ DatastoreAdmin │
  └──────────┴───────────────────┴───────────┴────────────────┘

 A single user/token can be assigned multiple permission sets for different datastores.
@@ -267,7 +267,7 @@ you can use the ``proxmox-backup-manager user permission`` command:
 .. code-block:: console

- # proxmox-backup-manager user permissions john@pbs -- path /datastore/store1
+ # proxmox-backup-manager user permissions john@pbs --path /datastore/store1
  Privileges with (*) have the propagate flag set

  Path: /datastore/store1
@@ -276,9 +276,10 @@ you can use the ``proxmox-backup-manager user permission`` command:
  - Datastore.Modify (*)
  - Datastore.Prune (*)
  - Datastore.Read (*)
+ - Datastore.Verify (*)

  # proxmox-backup-manager acl update /datastore/store1 DatastoreBackup --auth-id 'john@pbs!client1'
- # proxmox-backup-manager user permissions 'john@pbs!test' -- path /datastore/store1
+ # proxmox-backup-manager user permissions 'john@pbs!client1' --path /datastore/store1
  Privileges with (*) have the propagate flag set

  Path: /datastore/store1

View File

@@ -9,7 +9,7 @@ DYNAMIC_UNITS := \
  proxmox-backup.service \
  proxmox-backup-proxy.service

-all: $(UNITS) $(DYNAMIC_UNITS) pbstest-beta.list
+all: $(UNITS) $(DYNAMIC_UNITS) pbs-enterprise.list

 clean:
  rm -f $(DYNAMIC_UNITS)

etc/pbs-enterprise.list Normal file
View File

@@ -0,0 +1 @@
deb https://enterprise.proxmox.com/debian/pbs buster pbs-enterprise

View File

@@ -1 +0,0 @@
deb http://download.proxmox.com/debian/pbs buster pbstest

View File

@@ -268,6 +268,21 @@ pub fn read_user(userid: Userid, mut rpcenv: &mut dyn RpcEnvironment) -> Result<
    Ok(user)
}
#[api()]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
#[allow(non_camel_case_types)]
pub enum DeletableProperty {
/// Delete the comment property.
comment,
/// Delete the firstname property.
firstname,
/// Delete the lastname property.
lastname,
/// Delete the email property.
email,
}
#[api(
    protected: true,
    input: {
@@ -303,6 +318,14 @@ pub fn read_user(userid: Userid, mut rpcenv: &mut dyn RpcEnvironment) -> Result<
                schema: user::EMAIL_SCHEMA,
                optional: true,
            },
delete: {
description: "List of properties to delete.",
type: Array,
optional: true,
items: {
type: DeletableProperty,
}
},
            digest: {
                optional: true,
                schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
@@ -326,6 +349,7 @@ pub fn update_user(
    firstname: Option<String>,
    lastname: Option<String>,
    email: Option<String>,
delete: Option<Vec<DeletableProperty>>,
    digest: Option<String>,
) -> Result<(), Error> {
@@ -340,6 +364,17 @@ pub fn update_user(
    let mut data: user::User = config.lookup("user", userid.as_str())?;
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::comment => data.comment = None,
DeletableProperty::firstname => data.firstname = None,
DeletableProperty::lastname => data.lastname = None,
DeletableProperty::email => data.email = None,
}
}
}
    if let Some(comment) = comment {
        let comment = comment.trim().to_string();
        if comment.is_empty() {
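A sketch of how the new ``delete`` parameter might be driven from the CLI, assuming ``proxmox-backup-manager`` passes the list through to this API endpoint as it does for other update calls (the exact flag syntax is an assumption, not taken from this diff):

.. code-block:: console

  # proxmox-backup-manager user update john@pbs --delete comment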

View File

@@ -1,4 +1,4 @@
-use std::collections::{HashSet, HashMap};
+use std::collections::HashSet;
 use std::ffi::OsStr;
 use std::os::unix::ffi::OsStrExt;
 use std::sync::{Arc, Mutex};
@@ -125,19 +125,6 @@ fn get_all_snapshot_files(
     Ok((manifest, files))
 }

-fn group_backups(backup_list: Vec<BackupInfo>) -> HashMap<String, Vec<BackupInfo>> {
-
-    let mut group_hash = HashMap::new();
-
-    for info in backup_list {
-        let group_id = info.backup_dir.group().group_path().to_str().unwrap().to_owned();
-        let time_list = group_hash.entry(group_id).or_insert(vec![]);
-        time_list.push(info);
-    }
-
-    group_hash
-}
-
 #[api(
     input: {
         properties: {
@@ -171,39 +158,64 @@ fn list_groups(
     let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
     let datastore = DataStore::lookup_datastore(&store)?;

-    let backup_list = BackupInfo::list_backups(&datastore.base_path())?;
-
-    let group_hash = group_backups(backup_list);
-
-    let mut groups = Vec::new();
-
-    for (_group_id, mut list) in group_hash {
-
-        BackupInfo::sort_list(&mut list, false);
-
-        let info = &list[0];
-
-        let group = info.backup_dir.group();
     let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
-        let owner = datastore.get_owner(group)?;

+    let backup_groups = BackupInfo::list_backup_groups(&datastore.base_path())?;
+
+    let group_info = backup_groups
+        .into_iter()
+        .fold(Vec::new(), |mut group_info, group| {
+            let owner = match datastore.get_owner(&group) {
+                Ok(auth_id) => auth_id,
+                Err(err) => {
+                    eprintln!("Failed to get owner of group '{}/{}' - {}",
+                              &store,
+                              group,
+                              err);
+                    return group_info;
+                },
+            };
+
             if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
-                continue;
+                return group_info;
             }

-        let result_item = GroupListItem {
-            backup_type: group.backup_type().to_string(),
-            backup_id: group.backup_id().to_string(),
-            last_backup: info.backup_dir.backup_time(),
-            backup_count: list.len() as u64,
-            files: info.files.clone(),
-            owner: Some(owner),
-        };
-        groups.push(result_item);
-    }
+            let snapshots = match group.list_backups(&datastore.base_path()) {
+                Ok(snapshots) => snapshots,
+                Err(_) => {
+                    return group_info;
+                },
+            };
+
+            let backup_count: u64 = snapshots.len() as u64;
+            if backup_count == 0 {
+                return group_info;
+            }
+
+            let last_backup = snapshots
+                .iter()
+                .fold(&snapshots[0], |last, curr| {
+                    if curr.is_finished()
+                        && curr.backup_dir.backup_time() > last.backup_dir.backup_time() {
+                        curr
+                    } else {
+                        last
+                    }
+                })
+                .to_owned();
+
+            group_info.push(GroupListItem {
+                backup_type: group.backup_type().to_string(),
+                backup_id: group.backup_id().to_string(),
+                last_backup: last_backup.backup_dir.backup_time(),
+                owner: Some(owner),
+                backup_count,
+                files: last_backup.files,
+            });
+
+            group_info
+        });

-    Ok(groups)
+    Ok(group_info)
 }
#[api( #[api(
@@ -351,43 +363,56 @@ pub fn list_snapshots (
     let user_info = CachedUserInfo::new()?;
     let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);

+    let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
+
     let datastore = DataStore::lookup_datastore(&store)?;

     let base_path = datastore.base_path();

-    let backup_list = BackupInfo::list_backups(&base_path)?;
-
-    let mut snapshots = vec![];
-
-    for info in backup_list {
-        let group = info.backup_dir.group();
-        if let Some(ref backup_type) = backup_type {
-            if backup_type != group.backup_type() { continue; }
-        }
-        if let Some(ref backup_id) = backup_id {
-            if backup_id != group.backup_id() { continue; }
-        }
-
-        let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
-        let owner = datastore.get_owner(group)?;
-
-        if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
-            continue;
-        }
-
-        let mut size = None;
-
-        let (comment, verification, files) = match get_all_snapshot_files(&datastore, &info) {
+    let groups = match (backup_type, backup_id) {
+        (Some(backup_type), Some(backup_id)) => {
+            let mut groups = Vec::with_capacity(1);
+            groups.push(BackupGroup::new(backup_type, backup_id));
+            groups
+        },
+        (Some(backup_type), None) => {
+            BackupInfo::list_backup_groups(&base_path)?
+                .into_iter()
+                .filter(|group| group.backup_type() == backup_type)
+                .collect()
+        },
+        (None, Some(backup_id)) => {
+            BackupInfo::list_backup_groups(&base_path)?
+                .into_iter()
+                .filter(|group| group.backup_id() == backup_id)
+                .collect()
+        },
+        _ => BackupInfo::list_backup_groups(&base_path)?,
+    };
+
+    let info_to_snapshot_list_item = |group: &BackupGroup, owner, info: BackupInfo| {
+        let backup_type = group.backup_type().to_string();
+        let backup_id = group.backup_id().to_string();
+        let backup_time = info.backup_dir.backup_time();
+
+        match get_all_snapshot_files(&datastore, &info) {
             Ok((manifest, files)) => {
-                size = Some(files.iter().map(|x| x.size.unwrap_or(0)).sum());
                 // extract the first line from notes
                 let comment: Option<String> = manifest.unprotected["notes"]
                     .as_str()
                     .and_then(|notes| notes.lines().next())
                     .map(String::from);

-                let verify = manifest.unprotected["verify_state"].clone();
-                let verify: Option<SnapshotVerifyState> = match serde_json::from_value(verify) {
+                let fingerprint = match manifest.fingerprint() {
+                    Ok(fp) => fp,
+                    Err(err) => {
+                        eprintln!("error parsing fingerprint: '{}'", err);
+                        None
+                    },
+                };
+
+                let verification = manifest.unprotected["verify_state"].clone();
+                let verification: Option<SnapshotVerifyState> = match serde_json::from_value(verification) {
                     Ok(verify) => verify,
                     Err(err) => {
                         eprintln!("error parsing verification state : '{}'", err);
@@ -395,88 +420,114 @@ pub fn list_snapshots (
                     }
                 };

-                (comment, verify, files)
+                let size = Some(files.iter().map(|x| x.size.unwrap_or(0)).sum());
+
+                SnapshotListItem {
+                    backup_type,
+                    backup_id,
+                    backup_time,
+                    comment,
+                    verification,
+                    fingerprint,
+                    files,
+                    size,
+                    owner,
+                }
             },
             Err(err) => {
                 eprintln!("error during snapshot file listing: '{}'", err);
-                (
-                    None,
-                    None,
-                    info
-                        .files
-                        .iter()
-                        .map(|x| BackupContent {
-                            filename: x.to_string(),
-                            size: None,
-                            crypt_mode: None,
-                        })
-                        .collect()
-                )
+                let files = info
+                        .files
+                        .into_iter()
+                        .map(|x| BackupContent {
+                            filename: x.to_string(),
+                            size: None,
+                            crypt_mode: None,
+                        })
+                        .collect();
+
+                SnapshotListItem {
+                    backup_type,
+                    backup_id,
+                    backup_time,
+                    comment: None,
+                    verification: None,
+                    fingerprint: None,
+                    files,
+                    size: None,
+                    owner,
+                }
             },
-        };
+        }
+    };

-        let result_item = SnapshotListItem {
-            backup_type: group.backup_type().to_string(),
-            backup_id: group.backup_id().to_string(),
-            backup_time: info.backup_dir.backup_time(),
-            comment,
-            verification,
-            files,
-            size,
-            owner: Some(owner),
-        };
-
-        snapshots.push(result_item);
-    }
+    groups
+        .iter()
+        .try_fold(Vec::new(), |mut snapshots, group| {
+            let owner = match datastore.get_owner(group) {
+                Ok(auth_id) => auth_id,
+                Err(err) => {
+                    eprintln!("Failed to get owner of group '{}/{}' - {}",
+                              &store,
+                              group,
+                              err);
+                    return Ok(snapshots);
+                },
+            };
+
+            if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
+                return Ok(snapshots);
+            }
+
+            let group_backups = group.list_backups(&datastore.base_path())?;
+
+            snapshots.extend(
+                group_backups
+                    .into_iter()
+                    .map(|info| info_to_snapshot_list_item(&group, Some(owner.clone()), info))
+            );

-    Ok(snapshots)
+            Ok(snapshots)
+        })
 }
-fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> {
+fn get_snapshots_count(store: &DataStore, filter_owner: Option<&Authid>) -> Result<Counts, Error> {
     let base_path = store.base_path();
-    let backup_list = BackupInfo::list_backups(&base_path)?;
-    let mut groups = HashSet::new();
+    let groups = BackupInfo::list_backup_groups(&base_path)?;

-    let mut result = Counts {
-        ct: None,
-        host: None,
-        vm: None,
-        other: None,
-    };
-
-    for info in backup_list {
-        let group = info.backup_dir.group();
-
-        let id = group.backup_id();
-        let backup_type = group.backup_type();
-
-        let mut new_id = false;
-
-        if groups.insert(format!("{}-{}", &backup_type, &id)) {
-            new_id = true;
-        }
-
-        let mut counts = match backup_type {
-            "ct" => result.ct.take().unwrap_or(Default::default()),
-            "host" => result.host.take().unwrap_or(Default::default()),
-            "vm" => result.vm.take().unwrap_or(Default::default()),
-            _ => result.other.take().unwrap_or(Default::default()),
-        };
-
-        counts.snapshots += 1;
-        if new_id {
-            counts.groups +=1;
-        }
-
-        match backup_type {
-            "ct" => result.ct = Some(counts),
-            "host" => result.host = Some(counts),
-            "vm" => result.vm = Some(counts),
-            _ => result.other = Some(counts),
-        }
-    }
-
-    Ok(result)
+    groups.iter()
+        .filter(|group| {
+            let owner = match store.get_owner(&group) {
+                Ok(owner) => owner,
+                Err(err) => {
+                    eprintln!("Failed to get owner of group '{}/{}' - {}",
+                              store.name(),
+                              group,
+                              err);
+                    return false;
+                },
+            };
+
+            match filter_owner {
+                Some(filter) => check_backup_owner(&owner, filter).is_ok(),
+                None => true,
+            }
+        })
+        .try_fold(Counts::default(), |mut counts, group| {
+            let snapshot_count = group.list_backups(&base_path)?.len() as u64;
+
+            let type_count = match group.backup_type() {
+                "ct" => counts.ct.get_or_insert(Default::default()),
+                "vm" => counts.vm.get_or_insert(Default::default()),
+                "host" => counts.host.get_or_insert(Default::default()),
+                _ => counts.other.get_or_insert(Default::default()),
+            };
+
+            type_count.groups += 1;
+            type_count.snapshots += snapshot_count;
+
+            Ok(counts)
+        })
 }
#[api( #[api(
@@ -485,8 +536,15 @@ fn get_snapshots_count(store: &DataStore, filter_owner: Option<&Authid>) -> Result<Counts, Error> {
            store: {
                schema: DATASTORE_SCHEMA,
            },
+           verbose: {
+               type: bool,
+               default: false,
+               optional: true,
+               description: "Include additional information like snapshot counts and GC status.",
+           },
        },
+   },
    returns: {
        type: DataStoreStatus,
    },
@@ -497,13 +555,30 @@ fn get_snapshots_count(store: &DataStore, filter_owner: Option<&Authid>) -> Result<Counts, Error> {
/// Get datastore status.
pub fn status(
    store: String,
+   verbose: bool,
    _info: &ApiMethod,
-   _rpcenv: &mut dyn RpcEnvironment,
+   rpcenv: &mut dyn RpcEnvironment,
) -> Result<DataStoreStatus, Error> {

    let datastore = DataStore::lookup_datastore(&store)?;
    let storage = crate::tools::disks::disk_usage(&datastore.base_path())?;
-   let counts = get_snapshots_count(&datastore)?;
-   let gc_status = datastore.last_gc_status();
+   let (counts, gc_status) = if verbose {
+       let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
+       let user_info = CachedUserInfo::new()?;
+
+       let store_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
+       let filter_owner = if store_privs & PRIV_DATASTORE_AUDIT != 0 {
+           None
+       } else {
+           Some(&auth_id)
+       };
+
+       let counts = Some(get_snapshots_count(&datastore, filter_owner)?);
+       let gc_status = Some(datastore.last_gc_status());
+
+       (counts, gc_status)
+   } else {
+       (None, None)
+   };

    Ok(DataStoreStatus {
        total: storage.total,
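The new switch can be exercised directly against the API; a sketch (hostname, token, and datastore name are illustrative):

.. code-block:: console

  # curl -s -H "Authorization: PBSAPIToken=root@pam!monitor:SECRET" \
      'https://backup.example.com:8007/api2/json/admin/datastore/store1/status?verbose=true'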
@@ -636,7 +711,7 @@ pub fn verify(
        verify_all_backups(datastore, worker.clone(), worker.upid(), owner, None)?
    };

    if failed_dirs.len() > 0 {
-       worker.log("Failed to verify following snapshots:");
+       worker.log("Failed to verify the following snapshots/groups:");
        for dir in failed_dirs {
            worker.log(format!("\t{}", dir));
        }

View File

@@ -311,6 +311,10 @@ pub const BACKUP_API_SUBDIRS: SubdirMap = &[
    (
        "previous", &Router::new()
            .download(&API_METHOD_DOWNLOAD_PREVIOUS)
    ),
    (
        "previous_backup_time", &Router::new()
            .get(&API_METHOD_GET_PREVIOUS_BACKUP_TIME)
    ),
    (
        "speedtest", &Router::new()
            .upload(&API_METHOD_UPLOAD_SPEEDTEST)
@@ -694,6 +698,28 @@ fn finish_backup (
    Ok(Value::Null)
}
#[sortable]
pub const API_METHOD_GET_PREVIOUS_BACKUP_TIME: ApiMethod = ApiMethod::new(
&ApiHandler::Sync(&get_previous_backup_time),
&ObjectSchema::new(
"Get previous backup time.",
&[],
)
);
fn get_previous_backup_time(
_param: Value,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let env: &BackupEnvironment = rpcenv.as_ref();
let backup_time = env.last_backup.as_ref().map(|info| info.backup_dir.backup_time());
Ok(json!(backup_time))
}
#[sortable]
pub const API_METHOD_DOWNLOAD_PREVIOUS: ApiMethod = ApiMethod::new(
    &ApiHandler::AsyncHttp(&download_previous),

View File

@@ -78,7 +78,7 @@ pub fn list_remotes(
        optional: true,
        default: 8007,
    },
-   userid: {
+   "auth-id": {
        type: Authid,
    },
    password: {
@@ -178,7 +178,7 @@ pub enum DeletableProperty {
        type: u16,
        optional: true,
    },
-   userid: {
+   "auth-id": {
        optional: true,
        type: Authid,
    },
@@ -214,7 +214,7 @@ pub fn update_remote(
    comment: Option<String>,
    host: Option<String>,
    port: Option<u16>,
-   userid: Option<Authid>,
+   auth_id: Option<Authid>,
    password: Option<String>,
    fingerprint: Option<String>,
    delete: Option<Vec<DeletableProperty>>,
@@ -252,7 +252,7 @@ pub fn update_remote(
    }
    if let Some(host) = host { data.host = host; }
    if port.is_some() { data.port = port; }
-   if let Some(userid) = userid { data.userid = userid; }
+   if let Some(auth_id) = auth_id { data.auth_id = auth_id; }
    if let Some(password) = password { data.password = password; }
    if let Some(fingerprint) = fingerprint { data.fingerprint = Some(fingerprint); }
@@ -284,6 +284,17 @@ pub fn update_remote(
/// Remove a remote from the configuration file.
pub fn delete_remote(name: String, digest: Option<String>) -> Result<(), Error> {
use crate::config::sync::{self, SyncJobConfig};
let (sync_jobs, _) = sync::config()?;
let job_list: Vec<SyncJobConfig> = sync_jobs.convert_to_typed_array("sync")?;
for job in job_list {
if job.remote == name {
bail!("remote '{}' is used by sync job '{}' (datastore '{}')", name, job.id, job.store);
}
}
    let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;

    let (mut config, expected_digest) = remote::config()?;
@@ -312,7 +323,7 @@ pub async fn remote_client(remote: remote::Remote) -> Result<HttpClient, Error>
    let client = HttpClient::new(
        &remote.host,
        remote.port.unwrap_or(8007),
-       &remote.userid,
+       &remote.auth_id,
        options)?;
    let _auth_info = client.login() // make sure we can auth
        .await

View File

@@ -34,7 +34,7 @@ pub mod subscription;
pub(crate) mod rrd;

mod journal;
-mod services;
+pub(crate) mod services;
mod status;
mod syslog;
mod time;

View File

@@ -1,12 +1,13 @@
use anyhow::{Error, bail, format_err};
use serde_json::{json, Value};
use std::collections::HashMap;

use proxmox::list_subdirs_api_method;
use proxmox::api::{api, RpcEnvironment, RpcEnvironmentType, Permission};
use proxmox::api::router::{Router, SubdirMap};

use crate::server::WorkerTask;
-use crate::tools::{apt, http};
+use crate::tools::{apt, http, subscription};

use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::api2::types::{Authid, APTUpdateInfo, NODE_SCHEMA, UPID_SCHEMA};
@@ -202,9 +203,34 @@ fn apt_get_changelog(
    let changelog_url = &pkg_info[0].change_log_url;
    // FIXME: use 'apt-get changelog' for proxmox packages as well, once repo supports it
    if changelog_url.starts_with("http://download.proxmox.com/") {
-       let changelog = crate::tools::runtime::block_on(http::get_string(changelog_url))
+       let changelog = crate::tools::runtime::block_on(http::get_string(changelog_url, None))
            .map_err(|err| format_err!("Error downloading changelog from '{}': {}", changelog_url, err))?;
        return Ok(json!(changelog));
} else if changelog_url.starts_with("https://enterprise.proxmox.com/") {
let sub = match subscription::read_subscription()? {
Some(sub) => sub,
None => bail!("cannot retrieve changelog from enterprise repo: no subscription info found")
};
let (key, id) = match sub.key {
Some(key) => {
match sub.serverid {
Some(id) => (key, id),
None =>
bail!("cannot retrieve changelog from enterprise repo: no server id found")
}
},
None => bail!("cannot retrieve changelog from enterprise repo: no subscription key found")
};
let mut auth_header = HashMap::new();
auth_header.insert("Authorization".to_owned(),
format!("Basic {}", base64::encode(format!("{}:{}", key, id))));
let changelog = crate::tools::runtime::block_on(http::get_string(changelog_url, Some(&auth_header)))
.map_err(|err| format_err!("Error downloading changelog from '{}': {}", changelog_url, err))?;
return Ok(json!(changelog));
    } else {
        let mut command = std::process::Command::new("apt-get");
        command.arg("changelog");
@@ -215,12 +241,128 @@ fn apt_get_changelog(
    }
}
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
},
},
returns: {
description: "List of more relevant packages.",
type: Array,
items: {
type: APTUpdateInfo,
},
},
access: {
permission: &Permission::Privilege(&[], PRIV_SYS_AUDIT, false),
},
)]
/// Get package information for important Proxmox Backup Server packages.
pub fn get_versions() -> Result<Vec<APTUpdateInfo>, Error> {
const PACKAGES: &[&str] = &[
"ifupdown2",
"libjs-extjs",
"proxmox-backup",
"proxmox-backup-docs",
"proxmox-backup-client",
"proxmox-backup-server",
"proxmox-mini-journalreader",
"proxmox-widget-toolkit",
"pve-xtermjs",
"smartmontools",
"zfsutils-linux",
];
fn unknown_package(package: String, extra_info: Option<String>) -> APTUpdateInfo {
APTUpdateInfo {
package,
title: "unknown".into(),
arch: "unknown".into(),
description: "unknown".into(),
version: "unknown".into(),
old_version: "unknown".into(),
origin: "unknown".into(),
priority: "unknown".into(),
section: "unknown".into(),
change_log_url: "unknown".into(),
extra_info,
}
}
let is_kernel = |name: &str| name.starts_with("pve-kernel-");
let mut packages: Vec<APTUpdateInfo> = Vec::new();
let pbs_packages = apt::list_installed_apt_packages(
|filter| {
filter.installed_version == Some(filter.active_version)
&& (is_kernel(filter.package) || PACKAGES.contains(&filter.package))
},
None,
);
let running_kernel = format!(
"running kernel: {}",
nix::sys::utsname::uname().release().to_owned()
);
if let Some(proxmox_backup) = pbs_packages.iter().find(|pkg| pkg.package == "proxmox-backup") {
let mut proxmox_backup = proxmox_backup.clone();
proxmox_backup.extra_info = Some(running_kernel);
packages.push(proxmox_backup);
} else {
packages.push(unknown_package("proxmox-backup".into(), Some(running_kernel)));
}
let version = crate::api2::version::PROXMOX_PKG_VERSION;
let release = crate::api2::version::PROXMOX_PKG_RELEASE;
let daemon_version_info = Some(format!("running version: {}.{}", version, release));
if let Some(pkg) = pbs_packages.iter().find(|pkg| pkg.package == "proxmox-backup-server") {
let mut pkg = pkg.clone();
pkg.extra_info = daemon_version_info;
packages.push(pkg);
    } else {
        packages.push(unknown_package("proxmox-backup-server".into(), daemon_version_info));
    }
let mut kernel_pkgs: Vec<APTUpdateInfo> = pbs_packages
.iter()
.filter(|pkg| is_kernel(&pkg.package))
.cloned()
.collect();
// make sure the cache mutex gets dropped before the next call to list_installed_apt_packages
{
let cache = apt_pkg_native::Cache::get_singleton();
kernel_pkgs.sort_by(|left, right| {
cache
.compare_versions(&left.old_version, &right.old_version)
.reverse()
});
}
packages.append(&mut kernel_pkgs);
// add entry for all packages we're interested in, even if not installed
for pkg in PACKAGES.iter() {
if pkg == &"proxmox-backup" || pkg == &"proxmox-backup-server" {
continue;
}
match pbs_packages.iter().find(|item| &item.package == pkg) {
Some(apt_pkg) => packages.push(apt_pkg.to_owned()),
None => packages.push(unknown_package(pkg.to_string(), None)),
}
}
Ok(packages)
}
const SUBDIRS: SubdirMap = &[
    ("changelog", &Router::new().get(&API_METHOD_APT_GET_CHANGELOG)),
    ("update", &Router::new()
        .get(&API_METHOD_APT_UPDATE_AVAILABLE)
        .post(&API_METHOD_APT_UPDATE_DATABASE)
    ),
    ("versions", &Router::new().get(&API_METHOD_GET_VERSIONS)),
];

pub const ROUTER: Router = Router::new()
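The new route is reachable under the node API path; a sketch of querying it (hostname and token are illustrative):

.. code-block:: console

  # curl -s -H "Authorization: PBSAPIToken=root@pam!monitor:SECRET" \
      https://backup.example.com:8007/api2/json/nodes/localhost/apt/versions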

View File

@@ -142,6 +142,18 @@ pub fn create_datastore_disk(
        bail!("disk '{}' is already in use.", disk);
    }
let mount_point = format!("/mnt/datastore/{}", &name);
// check if the default path does exist already and bail if it does
let default_path = std::path::PathBuf::from(&mount_point);
match std::fs::metadata(&default_path) {
Err(_) => {}, // path does not exist
Ok(_) => {
bail!("path {:?} already exists", default_path);
}
}
    let upid_str = WorkerTask::new_thread(
        "dircreate", Some(name.clone()), auth_id, to_stdout, move |worker|
        {
@@ -160,7 +172,7 @@ pub fn create_datastore_disk(
            let uuid = get_fs_uuid(&partition)?;
            let uuid_path = format!("/dev/disk/by-uuid/{}", uuid);

-           let (mount_unit_name, mount_point) = create_datastore_mount_unit(&name, filesystem, &uuid_path)?;
+           let mount_unit_name = create_datastore_mount_unit(&name, &mount_point, filesystem, &uuid_path)?;

            systemd::reload_daemon()?;
            systemd::enable_unit(&mount_unit_name)?;
@@ -243,11 +255,11 @@ pub const ROUTER: Router = Router::new()
fn create_datastore_mount_unit(
    datastore_name: &str,
    mount_point: &str,
    fs_type: FileSystemType,
    what: &str,
-) -> Result<(String, String), Error> {
-
-   let mount_point = format!("/mnt/datastore/{}", datastore_name);
+) -> Result<String, Error> {

    let mut mount_unit_name = systemd::escape_unit(&mount_point, true);
    mount_unit_name.push_str(".mount");
@@ -265,7 +277,7 @@ fn create_datastore_mount_unit(
    let mount = SystemdMountSection {
        What: what.to_string(),
-       Where: mount_point.clone(),
+       Where: mount_point.to_string(),
        Type: Some(fs_type.to_string()),
        Options: Some(String::from("defaults")),
        ..Default::default()
@@ -278,5 +290,5 @@ fn create_datastore_mount_unit(
    systemd::config::save_systemd_mount(&mount_unit_path, &config)?;

-   Ok((mount_unit_name, mount_point))
+   Ok(mount_unit_name)
}

View File

@@ -243,7 +243,7 @@ pub fn zpool_details(
        permission: &Permission::Privilege(&["system", "disks"], PRIV_SYS_MODIFY, false),
    },
)]
-/// Create a new ZFS pool.
+/// Create a new ZFS pool. Will be mounted under '/mnt/datastore/<name>'.
pub fn create_zpool(
    name: String,
    devices: String,
@@ -303,10 +303,11 @@ pub fn create_zpool(
        bail!("{:?} needs at least {} disks.", raidlevel, min_disks);
    }

    let mount_point = format!("/mnt/datastore/{}", &name);

    // check if the default path does exist already and bail if it does
-   // otherwise we get an error on mounting
-   let mut default_path = std::path::PathBuf::from("/");
-   default_path.push(&name);
+   // otherwise 'zpool create' aborts after partitioning, but before creating the pool
+   let default_path = std::path::PathBuf::from(&mount_point);

    match std::fs::metadata(&default_path) {
        Err(_) => {}, // path does not exist
@@ -322,7 +323,7 @@ pub fn create_zpool(
    let mut command = std::process::Command::new("zpool");
-   command.args(&["create", "-o", &format!("ashift={}", ashift), &name]);
+   command.args(&["create", "-o", &format!("ashift={}", ashift), "-m", &mount_point, &name]);

    match raidlevel {
        ZfsRaidLevel::Single => {
@@ -371,7 +372,6 @@ pub fn create_zpool(
    }

    if add_datastore {
-       let mount_point = format!("/{}", name);
        crate::api2::config::datastore::create_datastore(json!({ "name": name, "path": mount_point }))?

View File

@@ -22,7 +22,7 @@ static SERVICE_NAME_LIST: [&str; 7] = [
    "systemd-timesyncd",
];

-fn real_service_name(service: &str) -> &str {
+pub fn real_service_name(service: &str) -> &str {

    // since postfix package 3.1.0-3.1 the postfix unit is only here
    // to manage subinstances, of which the default is called "-".
View File

@@ -134,12 +134,18 @@ fn get_syslog(
    mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {

    let service = if let Some(service) = param["service"].as_str() {
        Some(crate::api2::node::services::real_service_name(service))
    } else {
        None
    };

    let (count, lines) = dump_journal(
        param["start"].as_u64(),
        param["limit"].as_u64(),
        param["since"].as_str(),
        param["until"].as_str(),
-       param["service"].as_str())?;
+       service)?;

    rpcenv["total"] = Value::from(count);

View File

@@ -71,6 +71,36 @@ fn check_job_privs(auth_id: &Authid, user_info: &CachedUserInfo, upid: &UPID) ->
    bail!("not a scheduled job task");
}
// get the store out of the worker_id
fn check_job_store(upid: &UPID, store: &str) -> bool {
match (upid.worker_type.as_str(), &upid.worker_id) {
(workertype, Some(workerid)) if workertype.starts_with("verif") => {
if let Some(captures) = VERIFICATION_JOB_WORKER_ID_REGEX.captures(&workerid) {
if let Some(jobstore) = captures.get(1) {
return store == jobstore.as_str();
}
} else {
return workerid == store;
}
}
("syncjob", Some(workerid)) => {
if let Some(captures) = SYNC_JOB_WORKER_ID_REGEX.captures(&workerid) {
if let Some(local_store) = captures.get(3) {
return store == local_store.as_str();
}
}
}
("prune", Some(workerid))
| ("backup", Some(workerid))
| ("garbage_collection", Some(workerid)) => {
return workerid == store || workerid.starts_with(&format!("{}:", store));
}
_ => {}
};
false
}
fn check_task_access(auth_id: &Authid, upid: &UPID) -> Result<(), Error> {
    let task_auth_id = &upid.auth_id;
    if auth_id == task_auth_id
@@ -455,21 +485,8 @@ pub fn list_tasks(
        }

        if let Some(store) = store {
-           // Note: useful to select all tasks spawned by proxmox-backup-client
-           let worker_id = match &info.upid.worker_id {
+           if !check_job_store(&info.upid, store) {
+               return None;
Some(w) => w,
None => return None, // skip
};
if info.upid.worker_type == "backup" || info.upid.worker_type == "restore" ||
info.upid.worker_type == "prune"
{
let prefix = format!("{}:", store);
if !worker_id.starts_with(&prefix) { return None; }
} else if info.upid.worker_type == "garbage_collection" {
if worker_id != store { return None; }
} else {
return None; // skip
            }
        }
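With this helper in place, the store filter of the task list also matches verification and sync jobs; a sketch of querying it (hostname and token are illustrative):

.. code-block:: console

  # curl -s -H "Authorization: PBSAPIToken=root@pam!monitor:SECRET" \
      'https://backup.example.com:8007/api2/json/nodes/localhost/tasks?store=store1'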

View File

@@ -50,7 +50,7 @@ pub async fn get_pull_parameters(
    let (remote_config, _digest) = remote::config()?;
    let remote: remote::Remote = remote_config.lookup("remote", remote)?;

-   let src_repo = BackupRepository::new(Some(remote.userid.clone()), Some(remote.host.clone()), remote.port, remote_store.to_string());
+   let src_repo = BackupRepository::new(Some(remote.auth_id.clone()), Some(remote.host.clone()), remote.port, remote_store.to_string());

    let client = crate::api2::config::remote::remote_client(remote).await?;

View File

@@ -103,6 +103,7 @@ fn datastore_status(
        "total": status.total,
        "used": status.used,
        "avail": status.avail,
        "gc-status": datastore.last_gc_status(),
    });

    let rrd_dir = format!("datastore/{}", store);
@@ -152,6 +153,8 @@ fn datastore_status(
        }
    }

    entry["history-start"] = start.into();
    entry["history-delta"] = reso.into();
    entry["history"] = history.into();

    // we skip the calculation for datastores with not enough data

View File

@@ -5,7 +5,7 @@ use proxmox::api::{api, schema::*};
use proxmox::const_regex;
use proxmox::{IPRE, IPRE_BRACKET, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};

-use crate::backup::{CryptMode, BACKUP_ID_REGEX};
+use crate::backup::{CryptMode, Fingerprint, BACKUP_ID_REGEX};
use crate::server::UPID;

#[macro_use]
@@ -484,6 +484,10 @@ pub struct SnapshotVerifyState {
            type: SnapshotVerifyState,
            optional: true,
        },
        fingerprint: {
            type: String,
            optional: true,
        },
        files: {
            items: {
                schema: BACKUP_ARCHIVE_NAME_SCHEMA
@@ -508,6 +512,9 @@ pub struct SnapshotListItem {
    /// The result of the last run verify task
    #[serde(skip_serializing_if="Option::is_none")]
    pub verification: Option<SnapshotVerifyState>,
    /// Fingerprint of encryption key
    #[serde(skip_serializing_if="Option::is_none")]
    pub fingerprint: Option<Fingerprint>,
    /// List of contained archive files.
    pub files: Vec<BackupContent>,
    /// Overall snapshot size (sum of all archive sizes).
@@ -692,7 +699,7 @@ pub struct TypeCounts {
        },
    },
)]
-#[derive(Serialize, Deserialize)]
+#[derive(Serialize, Deserialize, Default)]
/// Counts of groups/snapshots per BackupType.
pub struct Counts {
    /// The counts for CT backups
@@ -707,8 +714,14 @@ pub struct Counts {

#[api(
    properties: {
-       "gc-status": { type: GarbageCollectionStatus, },
-       counts: { type: Counts, }
+       "gc-status": {
+           type: GarbageCollectionStatus,
+           optional: true,
+       },
+       counts: {
+           type: Counts,
+           optional: true,
+       },
    },
)]
#[derive(Serialize, Deserialize)]
@@ -722,9 +735,11 @@ pub struct DataStoreStatus {
    /// Available space (bytes).
    pub avail: u64,
    /// Status of last GC
-   pub gc_status: GarbageCollectionStatus,
+   #[serde(skip_serializing_if="Option::is_none")]
+   pub gc_status: Option<GarbageCollectionStatus>,
    /// Group/Snapshot counts
-   pub counts: Counts,
+   #[serde(skip_serializing_if="Option::is_none")]
+   pub counts: Option<Counts>,
}

#[api(
@@ -1153,7 +1168,7 @@ pub enum RRDTimeFrameResolution {
}

#[api()]
-#[derive(Debug, Serialize, Deserialize)]
+#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "PascalCase")]
/// Describes a package for which an update is available.
pub struct APTUpdateInfo {
@@ -1177,6 +1192,9 @@ pub struct APTUpdateInfo {
    pub section: String,
    /// URL under which the package's changelog can be retrieved
    pub change_log_url: String,
    /// Custom extra field for additional package information
    #[serde(skip_serializing_if="Option::is_none")]
    pub extra_info: Option<String>,
}

#[api()]

View File

@@ -327,26 +327,20 @@ impl BackupInfo {
        Ok(files)
    }

-   pub fn list_backups(base_path: &Path) -> Result<Vec<BackupInfo>, Error> {
+   pub fn list_backup_groups(base_path: &Path) -> Result<Vec<BackupGroup>, Error> {
        let mut list = Vec::new();

        tools::scandir(libc::AT_FDCWD, base_path, &BACKUP_TYPE_REGEX, |l0_fd, backup_type, file_type| {
            if file_type != nix::dir::Type::Directory { return Ok(()); }
-           tools::scandir(l0_fd, backup_type, &BACKUP_ID_REGEX, |l1_fd, backup_id, file_type| {
+           tools::scandir(l0_fd, backup_type, &BACKUP_ID_REGEX, |_, backup_id, file_type| {
                if file_type != nix::dir::Type::Directory { return Ok(()); }
-               tools::scandir(l1_fd, backup_id, &BACKUP_DATE_REGEX, |l2_fd, backup_time_string, file_type| {
-                   if file_type != nix::dir::Type::Directory { return Ok(()); }
-
-                   let backup_dir = BackupDir::with_rfc3339(backup_type, backup_id, backup_time_string)?;
-                   let files = list_backup_files(l2_fd, backup_time_string)?;
-
-                   list.push(BackupInfo { backup_dir, files });
+               list.push(BackupGroup::new(backup_type, backup_id));

                Ok(())
-               })
            })
        })?;

        Ok(list)
    }

View File

@@ -154,9 +154,11 @@ impl ChunkStore {
    }

    pub fn cond_touch_chunk(&self, digest: &[u8; 32], fail_if_not_exist: bool) -> Result<bool, Error> {
        let (chunk_path, _digest_str) = self.chunk_path(digest);
        self.cond_touch_path(&chunk_path, fail_if_not_exist)
    }

    pub fn cond_touch_path(&self, path: &Path, fail_if_not_exist: bool) -> Result<bool, Error> {
        const UTIME_NOW: i64 = (1 << 30) - 1;
        const UTIME_OMIT: i64 = (1 << 30) - 2;
@@ -167,7 +169,7 @@ impl ChunkStore {
        use nix::NixPath;

-       let res = chunk_path.with_nix_path(|cstr| unsafe {
+       let res = path.with_nix_path(|cstr| unsafe {
            let tmp = libc::utimensat(-1, cstr.as_ptr(), &times[0], libc::AT_SYMLINK_NOFOLLOW);
            nix::errno::Errno::result(tmp)
        })?;
@@ -177,7 +179,7 @@ impl ChunkStore {
            return Ok(false);
        }

-       bail!("update atime failed for chunk {:?} - {}", chunk_path, err);
+       bail!("update atime failed for chunk/file {:?} - {}", path, err);
    }

    Ok(true)
@@ -328,49 +330,13 @@ impl ChunkStore {
        let lock = self.mutex.lock();

        if let Ok(stat) = fstatat(dirfd, filename, nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW) {
-           if bad {
+           if stat.st_atime < min_atime {
// filename validity checked in iterator
let orig_filename = std::ffi::CString::new(&filename.to_bytes()[..64])?;
match fstatat(
dirfd,
orig_filename.as_c_str(),
nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW)
{
Ok(_) => {
match unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir) {
Err(err) =>
crate::task_warn!(
worker,
"unlinking corrupt chunk {:?} failed on store '{}' - {}",
filename,
self.name,
err,
),
Ok(_) => {
status.removed_bad += 1;
status.removed_bytes += stat.st_size as u64;
}
}
},
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => {
// chunk hasn't been rewritten yet, keep .bad file
status.still_bad += 1;
},
Err(err) => {
// some other error, warn user and keep .bad file around too
status.still_bad += 1;
crate::task_warn!(
worker,
"error during stat on '{:?}' - {}",
orig_filename,
err,
);
}
}
} else if stat.st_atime < min_atime {
//let age = now - stat.st_atime; //let age = now - stat.st_atime;
//println!("UNLINK {} {:?}", age/(3600*24), filename); //println!("UNLINK {} {:?}", age/(3600*24), filename);
if let Err(err) = unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir) { if let Err(err) = unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir) {
if bad {
status.still_bad += 1;
}
bail!( bail!(
"unlinking chunk {:?} failed on store '{}' - {}", "unlinking chunk {:?} failed on store '{}' - {}",
filename, filename,
@ -378,13 +344,23 @@ impl ChunkStore {
err, err,
); );
} }
if bad {
status.removed_bad += 1;
} else {
status.removed_chunks += 1; status.removed_chunks += 1;
}
status.removed_bytes += stat.st_size as u64; status.removed_bytes += stat.st_size as u64;
} else if stat.st_atime < oldest_writer { } else if stat.st_atime < oldest_writer {
if bad {
status.still_bad += 1;
} else {
status.pending_chunks += 1; status.pending_chunks += 1;
}
status.pending_bytes += stat.st_size as u64; status.pending_bytes += stat.st_size as u64;
} else { } else {
if !bad {
status.disk_chunks += 1; status.disk_chunks += 1;
}
status.disk_bytes += stat.st_size as u64; status.disk_bytes += stat.st_size as u64;
} }
} }

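The reworked sweep keeps a single atime-based decision ladder for both regular chunks and their `.bad` siblings. A hedged sketch of the bucket logic, with `classify` standing in for the counter updates in `sweep_unused_chunks`:

```rust
// The "bad" flag marks a ".bad" file rather than a regular chunk; the
// thresholds mirror the min_atime/oldest_writer variables in the diff.
fn classify(atime: i64, min_atime: i64, oldest_writer: i64, bad: bool) -> &'static str {
    if atime < min_atime {
        // old enough: unlink now
        if bad { "removed_bad" } else { "removed_chunks" }
    } else if atime < oldest_writer {
        // recently touched but unreferenced: try again next GC run
        if bad { "still_bad" } else { "pending_chunks" }
    } else if bad {
        // a running writer may still be rewriting this chunk
        "kept, bytes counted but not disk_chunks"
    } else {
        "disk_chunks" // referenced and in use
    }
}

fn main() {
    assert_eq!(classify(0, 10, 20, false), "removed_chunks");
    assert_eq!(classify(15, 10, 20, true), "still_bad");
    assert_eq!(classify(25, 10, 20, false), "disk_chunks");
}
```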
View File

@@ -7,6 +7,8 @@
 //! encryption](https://en.wikipedia.org/wiki/Authenticated_encryption)
 //! for a short introduction.

+use std::fmt;
+use std::fmt::Display;
 use std::io::Write;

 use anyhow::{bail, Error};
@@ -15,8 +17,15 @@ use openssl::pkcs5::pbkdf2_hmac;
 use openssl::symm::{decrypt_aead, Cipher, Crypter, Mode};
 use serde::{Deserialize, Serialize};

+use crate::tools::format::{as_fingerprint, bytes_as_fingerprint};
+
 use proxmox::api::api;

+// openssl::sha::sha256(b"Proxmox Backup Encryption Key Fingerprint")
+const FINGERPRINT_INPUT: [u8; 32] = [ 110, 208, 239, 119,  71,  31, 255,  77,
+                                       85, 199, 168, 254,  74, 157, 182,  33,
+                                       97,  64, 127,  19,  76, 114,  93, 223,
+                                       48, 153,  45,  37, 236,  69, 237,  38, ];
+
 #[api(default: "encrypt")]
 #[derive(Copy, Clone, Debug, Eq, PartialEq, Deserialize, Serialize)]
 #[serde(rename_all = "kebab-case")]
@@ -30,6 +39,21 @@ pub enum CryptMode {
     SignOnly,
 }

+#[derive(Debug, Eq, PartialEq, Deserialize, Serialize)]
+#[serde(transparent)]
+/// 32-byte fingerprint, usually calculated with SHA256.
+pub struct Fingerprint {
+    #[serde(with = "bytes_as_fingerprint")]
+    bytes: [u8; 32],
+}
+
+/// Display as short key ID
+impl Display for Fingerprint {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        write!(f, "{}", as_fingerprint(&self.bytes[0..8]))
+    }
+}
+
 /// Encryption Configuration with secret key
 ///
 /// This structure stores the secret key and provides helpers for
@@ -101,6 +125,12 @@ impl CryptConfig {
         tag
     }

+    pub fn fingerprint(&self) -> Fingerprint {
+        Fingerprint {
+            bytes: self.compute_digest(&FINGERPRINT_INPUT)
+        }
+    }
+
     pub fn data_crypter(&self, iv: &[u8; 16], mode: Mode) -> Result<Crypter, Error> {
         let mut crypter = openssl::symm::Crypter::new(self.cipher, mode, &self.enc_key, Some(iv))?;
         crypter.aad_update(b"")?; //??
@@ -219,7 +249,13 @@ impl CryptConfig {
     ) -> Result<Vec<u8>, Error> {

         let modified = proxmox::tools::time::epoch_i64();
-        let key_config = super::KeyConfig { kdf: None, created, modified, data: self.enc_key.to_vec() };
+        let key_config = super::KeyConfig {
+            kdf: None,
+            created,
+            modified,
+            data: self.enc_key.to_vec(),
+            fingerprint: Some(self.fingerprint()),
+        };
         let data = serde_json::to_string(&key_config)?.as_bytes().to_vec();

         let mut buffer = vec![0u8; rsa.size() as usize];

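The fingerprint is the keyed digest of a fixed, publicly known 32-byte input, and `Display` shortens it to the first 8 bytes. A sketch of the short-ID formatting, assuming `as_fingerprint` joins hex byte pairs with `:` (mirroring the shape of the helper in `tools::format`, not copied from it):

```rust
// Illustrative re-implementation of the assumed formatting, plus the
// short key ID the Display impl above would print.
fn as_fingerprint(bytes: &[u8]) -> String {
    bytes
        .iter()
        .map(|b| format!("{:02x}", b))
        .collect::<Vec<_>>()
        .join(":")
}

fn main() {
    let digest = [0xde, 0xad, 0xbe, 0xef, 0x01, 0x02, 0x03, 0x04];
    // a full fingerprint covers all 32 bytes; Display only shows the
    // first 8 as a short key ID
    assert_eq!(as_fingerprint(&digest), "de:ad:be:ef:01:02:03:04");
}
```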
View File

@@ -446,6 +446,17 @@ impl DataStore {
                         file_name,
                         err,
                     );
+
+                    // touch any corresponding .bad files to keep them around, meaning if a chunk is
+                    // rewritten correctly they will be removed automatically, as well as if no index
+                    // file requires the chunk anymore (won't get to this loop then)
+                    for i in 0..=9 {
+                        let bad_ext = format!("{}.bad", i);
+                        let mut bad_path = PathBuf::new();
+                        bad_path.push(self.chunk_path(digest).0);
+                        bad_path.set_extension(bad_ext);
+                        self.chunk_store.cond_touch_path(&bad_path, false)?;
+                    }
                 }
             }
         Ok(())

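The touch loop above assumes corrupt chunks were renamed to `<digest>.<n>.bad` (n = 0..9) by an earlier verify task; a sketch of the paths it refreshes, with an invented chunk path:

```rust
// Builds the ten ".bad" sibling paths for a chunk, the same way the
// loop above does (set_extension appends ".<n>.bad" to the digest name).
use std::path::PathBuf;

fn bad_paths(chunk_path: &str) -> Vec<PathBuf> {
    (0..=9)
        .map(|i| {
            let mut p = PathBuf::from(chunk_path);
            p.set_extension(format!("{}.bad", i));
            p
        })
        .collect()
}

fn main() {
    let paths = bad_paths(".chunks/dead/deadbeef");
    assert_eq!(paths[0], PathBuf::from(".chunks/dead/deadbeef.0.bad"));
    assert_eq!(paths[9], PathBuf::from(".chunks/dead/deadbeef.9.bad"));
}
```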
View File

@@ -219,7 +219,6 @@ impl IndexFile for DynamicIndexReader {
         (csum, chunk_end)
     }

-    #[allow(clippy::cast_ptr_alignment)]
     fn chunk_info(&self, pos: usize) -> Option<ChunkReadInfo> {
         if pos >= self.index.len() {
             return None;

View File

@@ -2,9 +2,34 @@ use anyhow::{bail, format_err, Context, Error};

 use serde::{Deserialize, Serialize};

+use crate::backup::{CryptConfig, Fingerprint};
+
+use proxmox::api::api;
 use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
 use proxmox::try_block;

+#[api(default: "scrypt")]
+#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
+#[serde(rename_all = "lowercase")]
+/// Key derivation function for password protected encryption keys.
+pub enum Kdf {
+    /// Do not encrypt the key.
+    None,
+    /// Encrypt the key with a password using SCrypt.
+    Scrypt,
+    /// Encrypt the key with a password using PBKDF2.
+    PBKDF2,
+}
+
+impl Default for Kdf {
+    #[inline]
+    fn default() -> Self {
+        Kdf::Scrypt
+    }
+}
+
 #[derive(Deserialize, Serialize, Debug)]
 pub enum KeyDerivationConfig {
     Scrypt {
@@ -66,6 +91,9 @@ pub struct KeyConfig {
     pub modified: i64,
     #[serde(with = "proxmox::tools::serde::bytes_as_base64")]
     pub data: Vec<u8>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    #[serde(default)]
+    pub fingerprint: Option<Fingerprint>,
 }

 pub fn store_key_config(
@@ -103,15 +131,25 @@ pub fn store_key_config(
 pub fn encrypt_key_with_passphrase(
     raw_key: &[u8],
     passphrase: &[u8],
+    kdf: Kdf,
 ) -> Result<KeyConfig, Error> {

     let salt = proxmox::sys::linux::random_data(32)?;

-    let kdf = KeyDerivationConfig::Scrypt {
-        n: 65536,
-        r: 8,
-        p: 1,
-        salt,
+    let kdf = match kdf {
+        Kdf::Scrypt => KeyDerivationConfig::Scrypt {
+            n: 65536,
+            r: 8,
+            p: 1,
+            salt,
+        },
+        Kdf::PBKDF2 => KeyDerivationConfig::PBKDF2 {
+            iter: 65535,
+            salt,
+        },
+        Kdf::None => {
+            bail!("No key derivation function specified");
+        }
     };

     let derived_key = kdf.derive_key(passphrase)?;
@@ -142,28 +180,22 @@ pub fn encrypt_key_with_passphrase(
         created,
         modified: created,
         data: enc_data,
+        fingerprint: None,
     })
 }

 pub fn load_and_decrypt_key(
     path: &std::path::Path,
     passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
-) -> Result<([u8;32], i64), Error> {
-    do_load_and_decrypt_key(path, passphrase)
-        .with_context(|| format!("failed to load decryption key from {:?}", path))
-}
-
-fn do_load_and_decrypt_key(
-    path: &std::path::Path,
-    passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
-) -> Result<([u8;32], i64), Error> {
+) -> Result<([u8;32], i64, Fingerprint), Error> {
     decrypt_key(&file_get_contents(&path)?, passphrase)
+        .with_context(|| format!("failed to load decryption key from {:?}", path))
 }

 pub fn decrypt_key(
     mut keydata: &[u8],
     passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
-) -> Result<([u8;32], i64), Error> {
+) -> Result<([u8;32], i64, Fingerprint), Error> {
     let key_config: KeyConfig = serde_json::from_reader(&mut keydata)?;

     let raw_data = key_config.data;
@@ -203,5 +235,13 @@ pub fn decrypt_key(
     let mut result = [0u8; 32];
     result.copy_from_slice(&key);

-    Ok((result, created))
+    let fingerprint = match key_config.fingerprint {
+        Some(fingerprint) => fingerprint,
+        None => {
+            let crypt_config = CryptConfig::new(result.clone())?;
+            crypt_config.fingerprint()
+        },
+    };
+
+    Ok((result, created, fingerprint))
 }

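For the new `Kdf::PBKDF2` arm, the eventual key derivation boils down to openssl's `pbkdf2_hmac`; a hedged sketch assuming SHA-256 as the HMAC digest (which is what `KeyDerivationConfig::derive_key` appears to use; parameters mirror the diff, not a normative recommendation):

```rust
// Derive a 32-byte key from a passphrase the way the PBKDF2 variant
// would: random salt, fixed iteration count, SHA-256 HMAC.
use openssl::hash::MessageDigest;
use openssl::pkcs5::pbkdf2_hmac;

fn derive(passphrase: &[u8], salt: &[u8], iterations: usize) -> Result<[u8; 32], openssl::error::ErrorStack> {
    let mut key = [0u8; 32];
    pbkdf2_hmac(passphrase, salt, iterations, MessageDigest::sha256(), &mut key)?;
    Ok(key)
}

fn main() -> Result<(), openssl::error::ErrorStack> {
    // the patch uses iter: 65535 and a random 32-byte salt
    let key = derive(b"correct horse battery staple", &[0u8; 32], 65535)?;
    assert_eq!(key.len(), 32);
    Ok(())
}
```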
View File

@@ -5,7 +5,7 @@ use std::path::Path;
 use serde_json::{json, Value};
 use ::serde::{Deserialize, Serialize};

-use crate::backup::{BackupDir, CryptMode, CryptConfig};
+use crate::backup::{BackupDir, CryptMode, CryptConfig, Fingerprint};

 pub const MANIFEST_BLOB_NAME: &str = "index.json.blob";
 pub const MANIFEST_LOCK_NAME: &str = ".index.json.lck";
@@ -223,12 +223,48 @@ impl BackupManifest {
         if let Some(crypt_config) = crypt_config {
             let sig = self.signature(crypt_config)?;
             manifest["signature"] = proxmox::tools::digest_to_hex(&sig).into();
+            let fingerprint = &crypt_config.fingerprint();
+            manifest["unprotected"]["key-fingerprint"] = serde_json::to_value(fingerprint)?;
         }

         let manifest = serde_json::to_string_pretty(&manifest).unwrap().into();
         Ok(manifest)
     }

+    pub fn fingerprint(&self) -> Result<Option<Fingerprint>, Error> {
+        match &self.unprotected["key-fingerprint"] {
+            Value::Null => Ok(None),
+            value => Ok(Some(serde_json::from_value(value.clone())?))
+        }
+    }
+
+    /// Checks if a BackupManifest and a CryptConfig share a valid fingerprint combination.
+    ///
+    /// An unsigned manifest is valid with any or no CryptConfig.
+    /// A signed manifest is only valid with a matching CryptConfig.
+    pub fn check_fingerprint(&self, crypt_config: Option<&CryptConfig>) -> Result<(), Error> {
+        if let Some(fingerprint) = self.fingerprint()? {
+            match crypt_config {
+                None => bail!(
+                    "missing key - manifest was created with key {}",
+                    fingerprint,
+                ),
+                Some(crypt_config) => {
+                    let config_fp = crypt_config.fingerprint();
+                    if config_fp != fingerprint {
+                        bail!(
+                            "wrong key - manifest's key {} does not match provided key {}",
+                            fingerprint,
+                            config_fp
+                        );
+                    }
+                }
+            }
+        };
+
+        Ok(())
+    }
+
     /// Try to read the manifest. This verifies the signature if there is a crypt_config.
     pub fn from_data(data: &[u8], crypt_config: Option<&CryptConfig>) -> Result<BackupManifest, Error> {
         let json: Value = serde_json::from_slice(data)?;
@@ -237,6 +273,19 @@ impl BackupManifest {
         if let Some(ref crypt_config) = crypt_config {
             if let Some(signature) = signature {
                 let expected_signature = proxmox::tools::digest_to_hex(&Self::json_signature(&json, crypt_config)?);
+
+                let fingerprint = &json["unprotected"]["key-fingerprint"];
+                if fingerprint != &Value::Null {
+                    let fingerprint = serde_json::from_value(fingerprint.clone())?;
+                    let config_fp = crypt_config.fingerprint();
+                    if config_fp != fingerprint {
+                        bail!(
+                            "wrong key - unable to verify signature since manifest's key {} does not match provided key {}",
+                            fingerprint,
+                            config_fp
+                        );
+                    }
+                }
                 if signature != expected_signature {
                     bail!("wrong signature in manifest");
                 }

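`check_fingerprint` encodes a small compatibility matrix; a standalone sketch with strings standing in for real `Fingerprint`/`CryptConfig` values:

```rust
// Same decision table as check_fingerprint above, in isolation:
// unsigned manifests accept anything, signed ones demand the exact key.
fn check(manifest_fp: Option<&str>, config_fp: Option<&str>) -> Result<(), String> {
    match (manifest_fp, config_fp) {
        (None, _) => Ok(()), // unsigned manifest: any or no key is fine
        (Some(m), None) => Err(format!("missing key - manifest was created with key {}", m)),
        (Some(m), Some(c)) if m != c => {
            Err(format!("wrong key - manifest's key {} does not match provided key {}", m, c))
        }
        _ => Ok(()), // matching key
    }
}

fn main() {
    assert!(check(None, None).is_ok());
    assert!(check(None, Some("aa")).is_ok());
    assert!(check(Some("aa"), Some("aa")).is_ok());
    assert!(check(Some("aa"), None).is_err());
    assert!(check(Some("aa"), Some("bb")).is_err());
}
```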
View File

@@ -498,28 +498,38 @@ pub fn verify_all_backups(
 ) -> Result<Vec<String>, Error> {
     let mut errors = Vec::new();

+    task_log!(worker, "verify datastore {}", datastore.name());
+
     if let Some(owner) = &owner {
-        task_log!(
-            worker,
-            "verify datastore {} - limiting to backups owned by {}",
-            datastore.name(),
-            owner
-        );
+        task_log!(worker, "limiting to backups owned by {}", owner);
     }

     let filter_by_owner = |group: &BackupGroup| {
-        if let Some(owner) = &owner {
-            match datastore.get_owner(group) {
-                Ok(ref group_owner) => {
+        match (datastore.get_owner(group), &owner) {
+            (Ok(ref group_owner), Some(owner)) => {
                     group_owner == owner
                         || (group_owner.is_token()
                             && !owner.is_token()
                             && group_owner.user() == owner.user())
                 },
-                Err(_) => false,
-            }
-        } else {
+            (Ok(_), None) => true,
+            (Err(err), Some(_)) => {
+                // intentionally not in task log
+                // the task user might not be allowed to see this group!
+                println!("Failed to get owner of group '{}' - {}", group, err);
+                false
+            },
+            (Err(err), None) => {
+                // we don't filter by owner, but we want to log the error
+                task_log!(
+                    worker,
+                    "Failed to get owner of group '{}' - {}",
+                    group,
+                    err,
+                );
+                errors.push(group.to_string());
                 true
+            },
         }
     };
@@ -532,8 +542,7 @@ pub fn verify_all_backups(
         Err(err) => {
             task_log!(
                 worker,
-                "verify datastore {} - unable to list backups: {}",
-                datastore.name(),
+                "unable to list backups: {}",
                 err,
             );
             return Ok(errors);
@@ -553,7 +562,7 @@ pub fn verify_all_backups(
     // start with 64 chunks since we assume there are few corrupt ones
     let corrupt_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64)));

-    task_log!(worker, "verify datastore {} ({} snapshots)", datastore.name(), snapshot_count);
+    task_log!(worker, "found {} snapshots", snapshot_count);

     let mut done = 0;
     for group in list {

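The tuple match above makes all four owner-filter cases explicit; a sketch of the same decision table as a plain function (token handling and logging elided, names illustrative):

```rust
// Matching on (owner lookup result, optional filter) instead of nested
// if/else: each of the four combinations gets its own arm.
fn keep(group_owner: Result<&str, &str>, filter: Option<&str>) -> bool {
    match (group_owner, filter) {
        (Ok(owner), Some(wanted)) => owner == wanted, // filter active: compare
        (Ok(_), None) => true,      // no filter: verify everything
        (Err(_), Some(_)) => false, // can't check ownership: skip quietly
        (Err(_), None) => true,     // log and count as error, but still verify
    }
}

fn main() {
    assert!(keep(Ok("root@pam"), Some("root@pam")));
    assert!(!keep(Ok("user@pbs"), Some("root@pam")));
    assert!(keep(Err("no owner"), None));
}
```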
View File

@@ -76,6 +76,7 @@ async fn run() -> Result<(), Error> {
             })
         )
     },
+    "proxmox-backup.service",
 );

 server::write_pid(buildcfg::PROXMOX_BACKUP_API_PID_FN)?;

View File

@@ -193,8 +193,12 @@ pub fn complete_repository(_arg: &str, _param: &HashMap<String, String>) -> Vec<
     result
 }

-fn connect(server: &str, port: u16, auth_id: &Authid) -> Result<HttpClient, Error> {
+fn connect(repo: &BackupRepository) -> Result<HttpClient, Error> {
+    connect_do(repo.host(), repo.port(), repo.auth_id())
+        .map_err(|err| format_err!("error building client for repository {} - {}", repo, err))
+}

+fn connect_do(server: &str, port: u16, auth_id: &Authid) -> Result<HttpClient, Error> {
     let fingerprint = std::env::var(ENV_VAR_PBS_FINGERPRINT).ok();

     use std::env::VarError::*;
@@ -366,7 +370,7 @@ async fn list_backup_groups(param: Value) -> Result<Value, Error> {

     let repo = extract_repository_from_value(&param)?;

-    let client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let client = connect(&repo)?;

     let path = format!("api2/json/admin/datastore/{}/groups", repo.store());
@@ -435,7 +439,7 @@ async fn change_backup_owner(group: String, mut param: Value) -> Result<(), Erro

     let repo = extract_repository_from_value(&param)?;

-    let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let mut client = connect(&repo)?;

     param.as_object_mut().unwrap().remove("repository");
@@ -478,7 +482,7 @@ async fn list_snapshots(param: Value) -> Result<Value, Error> {

     let output_format = get_output_format(&param);

-    let client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let client = connect(&repo)?;

     let group: Option<BackupGroup> = if let Some(path) = param["group"].as_str() {
         Some(path.parse()?)
@@ -543,7 +547,7 @@ async fn forget_snapshots(param: Value) -> Result<Value, Error> {
     let path = tools::required_string_param(&param, "snapshot")?;
     let snapshot: BackupDir = path.parse()?;

-    let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let mut client = connect(&repo)?;

     let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());
@@ -573,7 +577,7 @@ async fn api_login(param: Value) -> Result<Value, Error> {

     let repo = extract_repository_from_value(&param)?;

-    let client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let client = connect(&repo)?;
     client.login().await?;

     record_repository(&repo);
@@ -630,7 +634,7 @@ async fn api_version(param: Value) -> Result<(), Error> {

     let repo = extract_repository_from_value(&param);
     if let Ok(repo) = repo {
-        let client = connect(repo.host(), repo.port(), repo.auth_id())?;
+        let client = connect(&repo)?;

         match client.get("api2/json/version", None).await {
             Ok(mut result) => version_info["server"] = result["data"].take(),
@@ -680,7 +684,7 @@ async fn list_snapshot_files(param: Value) -> Result<Value, Error> {

     let output_format = get_output_format(&param);

-    let client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let client = connect(&repo)?;

     let path = format!("api2/json/admin/datastore/{}/files", repo.store());
@@ -724,7 +728,7 @@ async fn start_garbage_collection(param: Value) -> Result<Value, Error> {

     let output_format = get_output_format(&param);

-    let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let mut client = connect(&repo)?;

     let path = format!("api2/json/admin/datastore/{}/gc", repo.store());
@@ -798,7 +802,10 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
     let keydata = match (keyfile, key_fd) {
         (None, None) => None,
         (Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
-        (Some(keyfile), None) => Some(file_get_contents(keyfile)?),
+        (Some(keyfile), None) => {
+            println!("Using encryption key file: {}", keyfile);
+            Some(file_get_contents(keyfile)?)
+        },
         (None, Some(fd)) => {
             let input = unsafe { std::fs::File::from_raw_fd(fd) };
             let mut data = Vec::new();
@@ -806,6 +813,7 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
                 .map_err(|err| {
                     format_err!("error reading encryption key from fd {}: {}", fd, err)
                 })?;
+            println!("Using encryption key from file descriptor");
             Some(data)
         }
     };
@@ -813,7 +821,10 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
     Ok(match (keydata, crypt_mode) {
         // no parameters:
         (None, None) => match key::read_optional_default_encryption_key()? {
-            Some(key) => (Some(key), CryptMode::Encrypt),
+            Some(key) => {
+                println!("Encrypting with default encryption key!");
+                (Some(key), CryptMode::Encrypt)
+            },
             None => (None, CryptMode::None),
         },
@@ -823,7 +834,10 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
         // just --crypt-mode other than none
         (None, Some(crypt_mode)) => match key::read_optional_default_encryption_key()? {
             None => bail!("--crypt-mode without --keyfile and no default key file available"),
-            Some(key) => (Some(key), crypt_mode),
+            Some(key) => {
+                println!("Encrypting with default encryption key!");
+                (Some(key), crypt_mode)
+            },
         }

         // just --keyfile
@@ -861,6 +875,11 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
                    description: "Path to file.",
                }
            },
+           "all-file-systems": {
+               type: Boolean,
+               description: "Include all mounted subdirectories.",
+               optional: true,
+           },
            keyfile: {
                schema: KEYFILE_SCHEMA,
                optional: true,
@@ -1036,7 +1055,7 @@ async fn create_backup(

     let backup_time = backup_time_opt.unwrap_or_else(|| epoch_i64());

-    let client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let client = connect(&repo)?;
     record_repository(&repo);

     println!("Starting backup: {}/{}/{}", backup_type, backup_id, BackupDir::backup_time_to_string(backup_time)?);
@@ -1050,7 +1069,8 @@ async fn create_backup(
     let (crypt_config, rsa_encrypted_key) = match keydata {
         None => (None, None),
         Some(key) => {
-            let (key, created) = decrypt_key(&key, &key::get_encryption_key_password)?;
+            let (key, created, fingerprint) = decrypt_key(&key, &key::get_encryption_key_password)?;
+            println!("Encryption key fingerprint: {}", fingerprint);

             let crypt_config = CryptConfig::new(key)?;
@@ -1059,6 +1079,8 @@ async fn create_backup(
                     let pem_data = file_get_contents(path)?;
                     let rsa = openssl::rsa::Rsa::public_key_from_pem(&pem_data)?;
                     let enc_key = crypt_config.generate_rsa_encoded_key(rsa, created)?;
+
+                    println!("Master key '{:?}'", path);
                     (Some(Arc::new(crypt_config)), Some(enc_key))
                 }
                 _ => (Some(Arc::new(crypt_config)), None),
@@ -1077,8 +1099,40 @@ async fn create_backup(
         false
     ).await?;

-    let previous_manifest = if let Ok(previous_manifest) = client.download_previous_manifest().await {
-        Some(Arc::new(previous_manifest))
+    let download_previous_manifest = match client.previous_backup_time().await {
+        Ok(Some(backup_time)) => {
+            println!(
+                "Downloading previous manifest ({})",
+                strftime_local("%c", backup_time)?
+            );
+            true
+        }
+        Ok(None) => {
+            println!("No previous manifest available.");
+            false
+        }
+        Err(_) => {
+            // Fallback for outdated server, TODO remove/bubble up with 2.0
+            true
+        }
+    };
+
+    let previous_manifest = if download_previous_manifest {
+        match client.download_previous_manifest().await {
+            Ok(previous_manifest) => {
+                match previous_manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref)) {
+                    Ok(()) => Some(Arc::new(previous_manifest)),
+                    Err(err) => {
+                        println!("Couldn't re-use previous manifest - {}", err);
+                        None
+                    }
+                }
+            }
+            Err(err) => {
+                println!("Couldn't download previous manifest - {}", err);
+                None
+            }
+        }
     } else {
         None
     };
@@ -1339,7 +1393,7 @@ async fn restore(param: Value) -> Result<Value, Error> {

     let archive_name = tools::required_string_param(&param, "archive-name")?;

-    let client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let client = connect(&repo)?;

     record_repository(&repo);
@@ -1361,7 +1415,8 @@ async fn restore(param: Value) -> Result<Value, Error> {
     let crypt_config = match keydata {
         None => None,
         Some(key) => {
-            let (key, _) = decrypt_key(&key, &key::get_encryption_key_password)?;
+            let (key, _, fingerprint) = decrypt_key(&key, &key::get_encryption_key_password)?;
+            println!("Encryption key fingerprint: '{}'", fingerprint);
             Some(Arc::new(CryptConfig::new(key)?))
         }
     };
@@ -1377,6 +1432,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
     ).await?;

     let (manifest, backup_index_data) = client.download_manifest().await?;
+    manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;

     let (archive_name, archive_type) = parse_archive_type(archive_name);
@@ -1512,14 +1568,14 @@ async fn upload_log(param: Value) -> Result<Value, Error> {
     let snapshot = tools::required_string_param(&param, "snapshot")?;
     let snapshot: BackupDir = snapshot.parse()?;

-    let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let mut client = connect(&repo)?;

     let (keydata, crypt_mode) = keyfile_parameters(&param)?;

     let crypt_config = match keydata {
         None => None,
         Some(key) => {
-            let (key, _created) = decrypt_key(&key, &key::get_encryption_key_password)?;
+            let (key, _created, _) = decrypt_key(&key, &key::get_encryption_key_password)?;
             let crypt_config = CryptConfig::new(key)?;
             Some(Arc::new(crypt_config))
         }
@@ -1583,7 +1639,7 @@ fn prune<'a>(
 async fn prune_async(mut param: Value) -> Result<Value, Error> {
     let repo = extract_repository_from_value(&param)?;

-    let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let mut client = connect(&repo)?;

     let path = format!("api2/json/admin/datastore/{}/prune", repo.store());
@@ -1669,7 +1725,7 @@ async fn status(param: Value) -> Result<Value, Error> {

     let output_format = get_output_format(&param);

-    let client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let client = connect(&repo)?;

     let path = format!("api2/json/admin/datastore/{}/status", repo.store());

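The new `previous_backup_time` probe gives the client a three-way decision before it tries to download the previous manifest; a sketch of that logic in isolation:

```rust
// Mirrors the download_previous_manifest flag computed above: only the
// error type is simplified for the sketch.
fn should_download(previous: Result<Option<i64>, ()>) -> bool {
    match previous {
        Ok(Some(_backup_time)) => true, // server reported a previous snapshot
        Ok(None) => false,              // first backup in this group
        Err(_) => true,                 // outdated server without the endpoint: just try
    }
}

fn main() {
    assert!(should_download(Ok(Some(1_606_198_000))));
    assert!(!should_download(Ok(None)));
    assert!(should_download(Err(())));
}
```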
View File

@@ -245,7 +245,7 @@ async fn task_stop(param: Value) -> Result<Value, Error> {

     let mut client = connect()?;

-    let path = format!("api2/json/nodes/localhost/tasks/{}", upid_str);
+    let path = format!("api2/json/nodes/localhost/tasks/{}", tools::percent_encode_component(upid_str));
     let _ = client.delete(&path, None).await?;

     Ok(Value::Null)
@@ -363,6 +363,43 @@ async fn report() -> Result<Value, Error> {
     Ok(Value::Null)
 }

+#[api(
+    input: {
+        properties: {
+            verbose: {
+                type: Boolean,
+                optional: true,
+                default: false,
+                description: "Output verbose package information. It is ignored if output-format is specified.",
+            },
+            "output-format": {
+                schema: OUTPUT_FORMAT,
+                optional: true,
+            }
+        }
+    }
+)]
+/// List package versions for important Proxmox Backup Server packages.
+async fn get_versions(verbose: bool, param: Value) -> Result<Value, Error> {
+    let output_format = get_output_format(&param);
+
+    let packages = crate::api2::node::apt::get_versions()?;
+    let mut packages = json!(if verbose { &packages[..] } else { &packages[1..2] });
+
+    let options = default_table_format_options()
+        .disable_sort()
+        .noborder(true) // just not helpful for version info which gets copy-pasted often
+        .column(ColumnConfig::new("Package"))
+        .column(ColumnConfig::new("Version"))
+        .column(ColumnConfig::new("ExtraInfo").header("Extra Info"))
+        ;
+    let schema = &crate::api2::node::apt::API_RETURN_SCHEMA_GET_VERSIONS;
+
+    format_and_print_result_full(&mut packages, schema, &output_format, &options);
+
+    Ok(Value::Null)
+}
+
 fn main() {

     proxmox_backup::tools::setup_safe_path_env();
@@ -396,6 +433,9 @@ fn main() {
         )
         .insert("report",
                 CliCommand::new(&API_METHOD_REPORT)
+        )
+        .insert("versions",
+                CliCommand::new(&API_METHOD_GET_VERSIONS)
         );

View File

@@ -133,6 +133,7 @@ async fn run() -> Result<(), Error> {
                 .map(|_| ())
             )
         },
+        "proxmox-backup-proxy.service",
     );

     server::write_pid(buildcfg::PROXMOX_BACKUP_PROXY_PID_FN)?;
@@ -592,9 +593,9 @@ async fn schedule_task_log_rotate() {
                 .ok_or_else(|| format_err!("could not get API auth log file names"))?;

             if logrotate.rotate(max_size, None, Some(max_files))? {
-                worker.log(format!("API access log was rotated"));
+                worker.log(format!("API authentication log was rotated"));
             } else {
-                worker.log(format!("API access log was not rotated"));
+                worker.log(format!("API authentication log was not rotated"));
             }

             Ok(())

View File

@@ -151,7 +151,7 @@ pub async fn benchmark(
     let crypt_config = match keyfile {
         None => None,
         Some(path) => {
-            let (key, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
+            let (key, _, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
             let crypt_config = CryptConfig::new(key)?;
             Some(Arc::new(crypt_config))
         }
@@ -225,7 +225,7 @@ async fn test_upload_speed(

     let backup_time = proxmox::tools::time::epoch_i64();

-    let client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let client = connect(&repo)?;
     record_repository(&repo);

     if verbose { eprintln!("Connecting to backup server"); }

View File

@@ -73,13 +73,13 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
     let crypt_config = match keydata {
         None => None,
         Some(key) => {
-            let (key, _created) = decrypt_key(&key, &get_encryption_key_password)?;
+            let (key, _created, _fingerprint) = decrypt_key(&key, &get_encryption_key_password)?;
             let crypt_config = CryptConfig::new(key)?;
             Some(Arc::new(crypt_config))
         }
     };

-    let client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let client = connect(&repo)?;

     let client = BackupReader::start(
         client,
@@ -92,6 +92,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
     ).await?;

     let (manifest, _) = client.download_manifest().await?;
+    manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;

     let index = client.download_dynamic_index(&manifest, CATALOG_NAME).await?;
@@ -153,7 +154,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
 /// Shell to interactively inspect and restore snapshots.
 async fn catalog_shell(param: Value) -> Result<(), Error> {
     let repo = extract_repository_from_value(&param)?;
-    let client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let client = connect(&repo)?;
     let path = tools::required_string_param(&param, "snapshot")?;
     let archive_name = tools::required_string_param(&param, "archive-name")?;
@@ -170,7 +171,7 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
     let crypt_config = match keydata {
         None => None,
         Some(key) => {
-            let (key, _created) = decrypt_key(&key, &get_encryption_key_password)?;
+            let (key, _created, _fingerprint) = decrypt_key(&key, &get_encryption_key_password)?;
             let crypt_config = CryptConfig::new(key)?;
             Some(Arc::new(crypt_config))
         }
@@ -199,6 +200,7 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
         .open("/tmp")?;

     let (manifest, _) = client.download_manifest().await?;
+    manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;

     let index = client.download_dynamic_index(&manifest, &server_archive_name).await?;
     let most_used = index.find_most_used_chunks(8);

View File

@@ -4,14 +4,28 @@ use std::process::{Stdio, Command};

 use anyhow::{bail, format_err, Error};
 use serde::{Deserialize, Serialize};
+use serde_json::Value;

 use proxmox::api::api;
-use proxmox::api::cli::{CliCommand, CliCommandMap};
+use proxmox::api::cli::{
+    ColumnConfig,
+    CliCommand,
+    CliCommandMap,
+    format_and_print_result_full,
+    get_output_format,
+    OUTPUT_FORMAT,
+};
 use proxmox::sys::linux::tty;
 use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};

 use proxmox_backup::backup::{
-    encrypt_key_with_passphrase, load_and_decrypt_key, store_key_config, KeyConfig,
+    encrypt_key_with_passphrase,
+    load_and_decrypt_key,
+    store_key_config,
+    CryptConfig,
+    Kdf,
+    KeyConfig,
+    KeyDerivationConfig,
 };
 use proxmox_backup::tools;
@@ -71,27 +85,6 @@ pub fn get_encryption_key_password() -> Result<Vec<u8>, Error> {
     bail!("no password input mechanism available");
 }

-#[api(
-    default: "scrypt",
-)]
-#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
-#[serde(rename_all = "kebab-case")]
-/// Key derivation function for password protected encryption keys.
-pub enum Kdf {
-    /// Do not encrypt the key.
-    None,
-    /// Encrypt they key with a password using SCrypt.
-    Scrypt,
-}
-
-impl Default for Kdf {
-    #[inline]
-    fn default() -> Self {
-        Kdf::Scrypt
-    }
-}
-
 #[api(
     input: {
         properties: {
@@ -120,7 +113,10 @@ fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {

     let kdf = kdf.unwrap_or_default();

-    let key = proxmox::sys::linux::random_data(32)?;
+    let mut key_array = [0u8; 32];
+    proxmox::sys::linux::fill_with_random_data(&mut key_array)?;
+    let crypt_config = CryptConfig::new(key_array.clone())?;
+    let key = key_array.to_vec();

     match kdf {
         Kdf::None => {
@@ -134,10 +130,11 @@ fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
                     created,
                     modified: created,
                     data: key,
+                    fingerprint: Some(crypt_config.fingerprint()),
                 },
             )?;
         }
-        Kdf::Scrypt => {
+        Kdf::Scrypt | Kdf::PBKDF2 => {
             // always read passphrase from tty
             if !tty::stdin_isatty() {
                 bail!("unable to read passphrase - no tty");
@@ -145,7 +142,8 @@ fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {

             let password = tty::read_and_verify_password("Encryption Key Password: ")?;

-            let key_config = encrypt_key_with_passphrase(&key, &password)?;
+            let mut key_config = encrypt_key_with_passphrase(&key, &password, kdf)?;
+            key_config.fingerprint = Some(crypt_config.fingerprint());

             store_key_config(&path, false, key_config)?;
         }
@@ -188,7 +186,7 @@ fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error
         bail!("unable to change passphrase - no tty");
     }

-    let (key, created) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
+    let (key, created, fingerprint) = load_and_decrypt_key(&path, &get_encryption_key_password)?;

     match kdf {
         Kdf::None => {
@@ -202,14 +200,16 @@ fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error
                     created, // keep original value
                     modified,
                     data: key.to_vec(),
+                    fingerprint: Some(fingerprint),
                 },
             )?;
         }
-        Kdf::Scrypt => {
+        Kdf::Scrypt | Kdf::PBKDF2 => {
             let password = tty::read_and_verify_password("New Password: ")?;

-            let mut new_key_config = encrypt_key_with_passphrase(&key, &password)?;
+            let mut new_key_config = encrypt_key_with_passphrase(&key, &password, kdf)?;
             new_key_config.created = created; // keep original value
+            new_key_config.fingerprint = Some(fingerprint);

             store_key_config(&path, true, new_key_config)?;
         }
@@ -218,6 +218,91 @@ fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error

     Ok(())
 }
+#[api(
+    properties: {
+        kdf: {
+            type: Kdf,
+        },
+    },
+)]
+#[derive(Deserialize, Serialize)]
+/// Encryption Key Information
+struct KeyInfo {
+    /// Path to key
+    path: String,
+    kdf: Kdf,
+    /// Key creation time
+    pub created: i64,
+    /// Key modification time
+    pub modified: i64,
+    /// Key fingerprint
+    pub fingerprint: Option<String>,
+}
+
+#[api(
+    input: {
+        properties: {
+            path: {
+                description: "Key file. Without this the default key's metadata will be shown.",
+                optional: true,
+            },
+            "output-format": {
+                schema: OUTPUT_FORMAT,
+                optional: true,
+            },
+        },
+    },
+)]
+/// Print the encryption key's metadata.
+fn show_key(
+    path: Option<String>,
+    param: Value,
+) -> Result<(), Error> {
+    let path = match path {
+        Some(path) => PathBuf::from(path),
+        None => {
+            let path = find_default_encryption_key()?
+                .ok_or_else(|| {
+                    format_err!("no encryption file provided and no default file found")
+                })?;
+            path
+        }
+    };
+
+    let config: KeyConfig = serde_json::from_slice(&file_get_contents(path.clone())?)?;
+
+    let output_format = get_output_format(&param);
+
+    let info = KeyInfo {
+        path: format!("{:?}", path),
+        kdf: match config.kdf {
+            Some(KeyDerivationConfig::PBKDF2 { .. }) => Kdf::PBKDF2,
+            Some(KeyDerivationConfig::Scrypt { .. }) => Kdf::Scrypt,
+            None => Kdf::None,
+        },
+        created: config.created,
+        modified: config.modified,
+        fingerprint: match config.fingerprint {
+            Some(ref fp) => Some(format!("{}", fp)),
+            None => None,
+        },
+    };
+
+    let options = proxmox::api::cli::default_table_format_options()
+        .column(ColumnConfig::new("path"))
+        .column(ColumnConfig::new("kdf"))
+        .column(ColumnConfig::new("created").renderer(tools::format::render_epoch))
+        .column(ColumnConfig::new("modified").renderer(tools::format::render_epoch))
+        .column(ColumnConfig::new("fingerprint"));
+
+    let schema = &KeyInfo::API_SCHEMA;
+
+    format_and_print_result_full(&mut serde_json::to_value(info)?, schema, &output_format, &options);
+
+    Ok(())
+}
 #[api(
     input: {
         properties: {
@@ -313,13 +398,47 @@ fn paper_key(
     };

     let data = file_get_contents(&path)?;
-    let data = std::str::from_utf8(&data)?;
+    let data = String::from_utf8(data)?;
+
+    let (data, is_private_key) = if data.starts_with("-----BEGIN ENCRYPTED PRIVATE KEY-----\n") {
+        let lines: Vec<String> = data
+            .lines()
+            .map(|s| s.trim_end())
+            .filter(|s| !s.is_empty())
+            .map(String::from)
+            .collect();
+
+        if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
+            bail!("unexpected key format");
+        }
+
+        if lines.len() < 20 {
+            bail!("unexpected key format");
+        }
+
+        (lines, true)
+    } else {
+        match serde_json::from_str::<KeyConfig>(&data) {
+            Ok(key_config) => {
+                let lines = serde_json::to_string_pretty(&key_config)?
+                    .lines()
+                    .map(String::from)
+                    .collect();
+
+                (lines, false)
+            },
+            Err(err) => {
+                eprintln!("Couldn't parse '{:?}' as KeyConfig - {}", path, err);
+                bail!("Neither a PEM-formatted private key, nor a PBS key file.");
+            },
+        }
+    };

     let format = output_format.unwrap_or(PaperkeyFormat::Html);

     match format {
-        PaperkeyFormat::Html => paperkey_html(data, subject),
-        PaperkeyFormat::Text => paperkey_text(data, subject),
+        PaperkeyFormat::Html => paperkey_html(&data, subject, is_private_key),
+        PaperkeyFormat::Text => paperkey_text(&data, subject, is_private_key),
     }
 }
@@ -337,6 +456,10 @@ pub fn cli() -> CliCommandMap {
         .arg_param(&["path"])
         .completion_cb("path", tools::complete_file_name);

+    let key_show_cmd_def = CliCommand::new(&API_METHOD_SHOW_KEY)
+        .arg_param(&["path"])
+        .completion_cb("path", tools::complete_file_name);
+
     let paper_key_cmd_def = CliCommand::new(&API_METHOD_PAPER_KEY)
         .arg_param(&["path"])
         .completion_cb("path", tools::complete_file_name);
@@ -346,10 +469,11 @@ pub fn cli() -> CliCommandMap {
         .insert("create-master-key", key_create_master_key_cmd_def)
         .insert("import-master-pubkey", key_import_master_pubkey_cmd_def)
         .insert("change-passphrase", key_change_passphrase_cmd_def)
+        .insert("show", key_show_cmd_def)
         .insert("paperkey", paper_key_cmd_def)
 }
-fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
+fn paperkey_html(lines: &[String], subject: Option<String>, is_private: bool) -> Result<(), Error> {

     let img_size_pt = 500;
@@ -378,21 +502,7 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
         println!("<p>Subject: {}</p>", subject);
     }

-    if data.starts_with("-----BEGIN ENCRYPTED PRIVATE KEY-----\n") {
-        let lines: Vec<String> = data.lines()
-            .map(|s| s.trim_end())
-            .filter(|s| !s.is_empty())
-            .map(String::from)
-            .collect();
-
-        if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
-            bail!("unexpected key format");
-        }
-
-        if lines.len() < 20 {
-            bail!("unexpected key format");
-        }
-
+    if is_private {
         const BLOCK_SIZE: usize = 20;
         let blocks = (lines.len() + BLOCK_SIZE -1)/BLOCK_SIZE;
@@ -413,8 +523,7 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
             println!("</p>");

-            let data = data.join("\n");
-            let qr_code = generate_qr_code("svg", data.as_bytes())?;
+            let qr_code = generate_qr_code("svg", data)?;
             let qr_code = base64::encode_config(&qr_code, base64::STANDARD_NO_PAD);

             println!("<center>");
@@ -430,16 +539,13 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
         return Ok(());
     }

-    let key_config: KeyConfig = serde_json::from_str(&data)?;
-    let key_text = serde_json::to_string_pretty(&key_config)?;
-
     println!("<div style=\"page-break-inside: avoid\">");
     println!("<p>");

     println!("-----BEGIN PROXMOX BACKUP KEY-----");

-    for line in key_text.lines() {
+    for line in lines {
         println!("{}", line);
     }
@@ -447,7 +553,7 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
     println!("</p>");

-    let qr_code = generate_qr_code("svg", key_text.as_bytes())?;
+    let qr_code = generate_qr_code("svg", lines)?;
     let qr_code = base64::encode_config(&qr_code, base64::STANDARD_NO_PAD);

     println!("<center>");
@@ -464,27 +570,13 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
     Ok(())
 }

-fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
+fn paperkey_text(lines: &[String], subject: Option<String>, is_private: bool) -> Result<(), Error> {

     if let Some(subject) = subject {
         println!("Subject: {}\n", subject);
     }

-    if data.starts_with("-----BEGIN ENCRYPTED PRIVATE KEY-----\n") {
-        let lines: Vec<String> = data.lines()
-            .map(|s| s.trim_end())
-            .filter(|s| !s.is_empty())
-            .map(String::from)
-            .collect();
-
-        if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
-            bail!("unexpected key format");
-        }
-
-        if lines.len() < 20 {
-            bail!("unexpected key format");
-        }
-
+    if is_private {
         const BLOCK_SIZE: usize = 5;
         let blocks = (lines.len() + BLOCK_SIZE -1)/BLOCK_SIZE;
@@ -499,8 +591,7 @@ fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
             for l in start..end {
                 println!("{:-2}: {}", l, lines[l]);
             }
-            let data = data.join("\n");
-            let qr_code = generate_qr_code("utf8i", data.as_bytes())?;
+            let qr_code = generate_qr_code("utf8i", data)?;
             let qr_code = String::from_utf8(qr_code)
                 .map_err(|_| format_err!("Failed to read qr code (got non-utf8 data)"))?;
             println!("{}", qr_code);
@@ -510,14 +601,13 @@ fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
         return Ok(());
     }

-    let key_config: KeyConfig = serde_json::from_str(&data)?;
-    let key_text = serde_json::to_string_pretty(&key_config)?;
-
     println!("-----BEGIN PROXMOX BACKUP KEY-----");
-    println!("{}", key_text);
+    for line in lines {
+        println!("{}", line);
+    }
     println!("-----END PROXMOX BACKUP KEY-----");

-    let qr_code = generate_qr_code("utf8i", key_text.as_bytes())?;
+    let qr_code = generate_qr_code("utf8i", &lines)?;
     let qr_code = String::from_utf8(qr_code)
         .map_err(|_| format_err!("Failed to read qr code (got non-utf8 data)"))?;

@@ -526,8 +616,7 @@ fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
     Ok(())
 }

-fn generate_qr_code(output_type: &str, data: &[u8]) -> Result<Vec<u8>, Error> {
+fn generate_qr_code(output_type: &str, lines: &[String]) -> Result<Vec<u8>, Error> {
     let mut child = Command::new("qrencode")
         .args(&["-t", output_type, "-m0", "-s1", "-lm", "--output", "-"])
         .stdin(Stdio::piped())
@@ -537,7 +626,8 @@ fn generate_qr_code(output_type: &str, data: &[u8]) -> Result<Vec<u8>, Error> {
     {
         let stdin = child.stdin.as_mut()
            .ok_or_else(|| format_err!("Failed to open stdin"))?;
-        stdin.write_all(data)
+        let data = lines.join("\n");
+        stdin.write_all(data.as_bytes())
            .map_err(|_| format_err!("Failed to write to stdin"))?;
     }

View File

@@ -163,7 +163,7 @@ fn mount(
 async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
     let repo = extract_repository_from_value(&param)?;
     let archive_name = tools::required_string_param(&param, "archive-name")?;
-    let client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let client = connect(&repo)?;

     let target = param["target"].as_str();
@@ -182,7 +182,9 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
     let crypt_config = match keyfile {
         None => None,
         Some(path) => {
-            let (key, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
+            println!("Encryption key file: '{:?}'", path);
+            let (key, _, fingerprint) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
+            println!("Encryption key fingerprint: '{}'", fingerprint);
             Some(Arc::new(CryptConfig::new(key)?))
         }
     };
@@ -212,6 +214,7 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
     ).await?;

     let (manifest, _) = client.download_manifest().await?;
+    manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;

     let file_info = manifest.lookup_file_info(&server_archive_name)?;

View File

@@ -48,7 +48,7 @@ async fn task_list(param: Value) -> Result<Value, Error> {
     let output_format = get_output_format(&param);

     let repo = extract_repository_from_value(&param)?;
-    let client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let client = connect(&repo)?;

     let limit = param["limit"].as_u64().unwrap_or(50) as usize;
     let running = !param["all"].as_bool().unwrap_or(false);
@@ -96,7 +96,7 @@ async fn task_log(param: Value) -> Result<Value, Error> {
     let repo = extract_repository_from_value(&param)?;
     let upid = tools::required_string_param(&param, "upid")?;

-    let client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let client = connect(&repo)?;

     display_task_log(client, upid, true).await?;
@@ -122,9 +122,9 @@ async fn task_stop(param: Value) -> Result<Value, Error> {
     let repo = extract_repository_from_value(&param)?;
     let upid_str = tools::required_string_param(&param, "upid")?;

-    let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
+    let mut client = connect(&repo)?;

-    let path = format!("api2/json/nodes/localhost/tasks/{}", upid_str);
+    let path = format!("api2/json/nodes/localhost/tasks/{}", tools::percent_encode_component(upid_str));
     let _ = client.delete(&path, None).await?;

     Ok(Value::Null)

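UPIDs embed `:` and `@` (e.g. `UPID:host:...:root@pam:`), which must not appear raw in a URL path segment; hence the new `percent_encode_component` calls. A sketch of the idea using the `percent-encoding` crate (PBS ships its own helper; this is not its implementation):

```rust
// Escape a UPID before splicing it into an API path, so reserved
// characters like ':' and '@' become %XX sequences.
use percent_encoding::{utf8_percent_encode, NON_ALPHANUMERIC};

fn main() {
    let upid = "UPID:pbs:00001234:000A:61000000:task:id:root@pam:";
    let encoded = utf8_percent_encode(upid, NON_ALPHANUMERIC).to_string();
    let path = format!("api2/json/nodes/localhost/tasks/{}/log", encoded);
    assert!(!path.contains('@')); // reserved characters are now escaped
    println!("{}", path);
}
```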
View File

@@ -475,6 +475,13 @@ impl BackupWriter {
         Ok(index)
     }

+    /// Retrieve backup time of last backup
+    pub async fn previous_backup_time(&self) -> Result<Option<i64>, Error> {
+        let data = self.h2.get("previous_backup_time", None).await?;
+        serde_json::from_value(data)
+            .map_err(|err| format_err!("Failed to parse backup time value returned by server - {}", err))
+    }
+
     /// Download backup manifest (index.json) of last backup
     pub async fn download_previous_manifest(&self) -> Result<BackupManifest, Error> {

View File

@@ -534,6 +534,15 @@ impl HttpClient {
         self.request(req).await
     }

+    pub async fn put(
+        &mut self,
+        path: &str,
+        data: Option<Value>,
+    ) -> Result<Value, Error> {
+        let req = Self::request_builder(&self.server, self.port, "PUT", path, data)?;
+        self.request(req).await
+    }
+
     pub async fn download(
         &mut self,
         path: &str,

View File

@@ -3,6 +3,7 @@ use std::task::{Context, Poll};
 use anyhow::{Error};
 use futures::*;
+use pin_project::pin_project;

 use crate::backup::ChunkInfo;
@@ -15,7 +16,9 @@
     fn merge_known_chunks(self) -> MergeKnownChunksQueue<Self>;
 }

+#[pin_project]
 pub struct MergeKnownChunksQueue<S> {
+    #[pin]
     input: S,
     buffer: Option<MergedChunkInfo>,
 }
@@ -39,10 +42,10 @@ where
     type Item = Result<MergedChunkInfo, Error>;

     fn poll_next(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {
-        let this = unsafe { self.get_unchecked_mut() };
+        let mut this = self.project();
         loop {
-            match ready!(unsafe { Pin::new_unchecked(&mut this.input) }.poll_next(cx)) {
+            match ready!(this.input.as_mut().poll_next(cx)) {
                 Some(Err(err)) => return Poll::Ready(Some(Err(err))),
                 None => {
                     if let Some(last) = this.buffer.take() {
@@ -58,13 +61,13 @@ where
                     match last {
                         None => {
-                            this.buffer = Some(MergedChunkInfo::Known(list));
+                            *this.buffer = Some(MergedChunkInfo::Known(list));
                             // continue
                         }
                         Some(MergedChunkInfo::Known(mut last_list)) => {
                             last_list.extend_from_slice(&list);
                             let len = last_list.len();
-                            this.buffer = Some(MergedChunkInfo::Known(last_list));
+                            *this.buffer = Some(MergedChunkInfo::Known(last_list));

                             if len >= 64 {
                                 return Poll::Ready(this.buffer.take().map(Ok));
@@ -72,7 +75,7 @@ where
                             // continue
                         }
                         Some(MergedChunkInfo::New(_)) => {
-                            this.buffer = Some(MergedChunkInfo::Known(list));
+                            *this.buffer = Some(MergedChunkInfo::Known(list));
                             return Poll::Ready(last.map(Ok));
                         }
                     }
@@ -80,7 +83,7 @@ where
                 MergedChunkInfo::New(chunk_info) => {
                     let new = MergedChunkInfo::New(chunk_info);
                     if let Some(last) = this.buffer.take() {
-                        this.buffer = Some(new);
+                        *this.buffer = Some(new);
                         return Poll::Ready(Some(Ok(last)));
                     } else {
                         return Poll::Ready(Some(Ok(new)));
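Note: for readers unfamiliar with pin-project, a standalone sketch of the
pattern the hunks above adopt (the `Passthrough` type is illustrative, not
from this codebase):

    use std::pin::Pin;
    use std::task::{Context, Poll};

    use futures::{ready, Stream};
    use pin_project::pin_project;

    // `#[pin_project]` generates a safe `project()` method, replacing the
    // hand-written `unsafe { self.get_unchecked_mut() }` projection.
    #[pin_project]
    struct Passthrough<S> {
        #[pin]
        input: S,
    }

    impl<S: Stream> Stream for Passthrough<S> {
        type Item = S::Item;

        fn poll_next(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {
            let this = self.project(); // safe structural projection
            Poll::Ready(ready!(this.input.poll_next(cx)))
        }
    }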

View File

@@ -528,7 +528,16 @@ pub async fn pull_store(
     for (groups_done, item) in list.into_iter().enumerate() {
         let group = BackupGroup::new(&item.backup_type, &item.backup_id);

-        let (owner, _lock_guard) = tgt_store.create_locked_backup_group(&group, &auth_id)?;
+        let (owner, _lock_guard) = match tgt_store.create_locked_backup_group(&group, &auth_id) {
+            Ok(result) => result,
+            Err(err) => {
+                worker.log(format!("sync group {}/{} failed - group lock failed: {}",
+                    item.backup_type, item.backup_id, err));
+                errors = true; // do not stop here, instead continue
+                continue;
+            }
+        };

         // permission check
         if auth_id != owner { // only the owner is allowed to create additional snapshots
             worker.log(format!("sync group {}/{} failed - owner check failed ({} != {})",

View File

@@ -2,6 +2,7 @@ use anyhow::{bail, Error};
 use serde_json::json;

 use super::HttpClient;
+use crate::tools;

 pub async fn display_task_log(
     client: HttpClient,
@@ -9,7 +10,7 @@ pub async fn display_task_log(
     strip_date: bool,
 ) -> Result<(), Error> {

-    let path = format!("api2/json/nodes/localhost/tasks/{}/log", upid_str);
+    let path = format!("api2/json/nodes/localhost/tasks/{}/log", tools::percent_encode_component(upid_str));

     let mut start = 1;
     let limit = 500;

View File

@@ -44,7 +44,7 @@ pub const REMOTE_PASSWORD_SCHEMA: Schema = StringSchema::new("Password or auth t
         description: "The (optional) port",
         type: u16,
     },
-    userid: {
+    "auth-id": {
         type: Authid,
     },
     password: {
@@ -57,6 +57,7 @@ pub const REMOTE_PASSWORD_SCHEMA: Schema = StringSchema::new("Password or auth t
     }
 )]
 #[derive(Serialize,Deserialize)]
+#[serde(rename_all = "kebab-case")]
 /// Remote properties.
 pub struct Remote {
     pub name: String,
@@ -65,7 +66,7 @@ pub struct Remote {
     pub host: String,
     #[serde(skip_serializing_if="Option::is_none")]
     pub port: Option<u16>,
-    pub userid: Authid,
+    pub auth_id: Authid,
     #[serde(skip_serializing_if="String::is_empty")]
     #[serde(with = "proxmox::tools::serde::string_as_base64")]
     pub password: String,
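Note: with `#[serde(rename_all = "kebab-case")]`, the Rust field `auth_id`
appears as `auth-id` on the wire, which is why the API schema and GUI model
change in step. A standalone sketch of the mechanism (the struct is
illustrative, not the real schema):

    use serde::{Deserialize, Serialize};

    #[derive(Serialize, Deserialize, Debug)]
    #[serde(rename_all = "kebab-case")]
    struct RemoteDemo {
        name: String,
        auth_id: String, // serialized as "auth-id"
    }

    fn main() -> Result<(), serde_json::Error> {
        let json = r#"{ "name": "pbs1", "auth-id": "sync@pbs" }"#;
        let remote: RemoteDemo = serde_json::from_str(json)?;
        assert_eq!(remote.auth_id, "sync@pbs");
        println!("{}", serde_json::to_string(&remote)?);
        Ok(())
    }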

View File

@@ -89,7 +89,21 @@ struct HardLinkInfo {
     st_ino: u64,
 }

-/// In case we want to collect them or redirect them we can just add this here:
+/// TODO: make a builder for the create_archive call for fewer parameters and add a method to add a
+/// logger which does not write to stderr.
+struct Logger;
+
+impl std::io::Write for Logger {
+    fn write(&mut self, data: &[u8]) -> io::Result<usize> {
+        std::io::stderr().write(data)
+    }
+
+    fn flush(&mut self) -> io::Result<()> {
+        std::io::stderr().flush()
+    }
+}
+
+/// And the error case.
 struct ErrorReporter;

 impl std::io::Write for ErrorReporter {
@@ -116,6 +130,7 @@ struct Archiver<'a, 'b> {
     device_set: Option<HashSet<u64>>,
     hardlinks: HashMap<HardLinkInfo, (PathBuf, LinkOffset)>,
     errors: ErrorReporter,
+    logger: Logger,
     file_copy_buffer: Vec<u8>,
 }
@@ -181,6 +196,7 @@ where
         device_set,
         hardlinks: HashMap::new(),
         errors: ErrorReporter,
+        logger: Logger,
         file_copy_buffer: vec::undefined(4 * 1024 * 1024),
     };
@@ -221,7 +237,15 @@ impl<'a, 'b> Archiver<'a, 'b> {
         let old_patterns_count = self.patterns.len();
         self.read_pxar_excludes(dir.as_raw_fd())?;

-        let file_list = self.generate_directory_file_list(&mut dir, is_root)?;
+        let mut file_list = self.generate_directory_file_list(&mut dir, is_root)?;
+
+        if is_root && old_patterns_count > 0 {
+            file_list.push(FileListEntry {
+                name: CString::new(".pxarexclude-cli").unwrap(),
+                path: PathBuf::new(),
+                stat: unsafe { std::mem::zeroed() },
+            });
+        }

         let dir_fd = dir.as_raw_fd();
@@ -231,7 +255,7 @@ impl<'a, 'b> Archiver<'a, 'b> {
             let file_name = file_entry.name.to_bytes();

             if is_root && file_name == b".pxarexclude-cli" {
-                self.encode_pxarexclude_cli(encoder, &file_entry.name)?;
+                self.encode_pxarexclude_cli(encoder, &file_entry.name, old_patterns_count)?;
                 continue;
             }
@@ -363,8 +387,9 @@ impl<'a, 'b> Archiver<'a, 'b> {
         &mut self,
         encoder: &mut Encoder,
         file_name: &CStr,
+        patterns_count: usize,
     ) -> Result<(), Error> {
-        let content = generate_pxar_excludes_cli(&self.patterns);
+        let content = generate_pxar_excludes_cli(&self.patterns[..patterns_count]);

         if let Some(ref mut catalog) = self.catalog {
             catalog.add_file(file_name, content.len() as u64, 0)?;
@@ -388,14 +413,6 @@ impl<'a, 'b> Archiver<'a, 'b> {
         let mut file_list = Vec::new();

-        if is_root && !self.patterns.is_empty() {
-            file_list.push(FileListEntry {
-                name: CString::new(".pxarexclude-cli").unwrap(),
-                path: PathBuf::new(),
-                stat: unsafe { std::mem::zeroed() },
-            });
-        }
-
         for file in dir.iter() {
             let file = file?;
@@ -409,10 +426,6 @@ impl<'a, 'b> Archiver<'a, 'b> {
                 continue;
             }

-            if file_name_bytes == b".pxarexclude" {
-                continue;
-            }
-
             let os_file_name = OsStr::from_bytes(file_name_bytes);
             assert_single_path_component(os_file_name)?;
             let full_path = self.path.join(os_file_name);
@@ -427,9 +440,10 @@ impl<'a, 'b> Archiver<'a, 'b> {
                 Err(err) => bail!("stat failed on {:?}: {}", full_path, err),
             };

+            let match_path = PathBuf::from("/").join(full_path.clone());
             if self
                 .patterns
-                .matches(full_path.as_os_str().as_bytes(), Some(stat.st_mode as u32))
+                .matches(match_path.as_os_str().as_bytes(), Some(stat.st_mode as u32))
                 == Some(MatchType::Exclude)
             {
                 continue;
@@ -632,6 +646,7 @@ impl<'a, 'b> Archiver<'a, 'b> {
         }

         let result = if skip_contents {
+            writeln!(self.logger, "skipping mount point: {:?}", self.path)?;
             Ok(())
         } else {
             self.archive_dir_contents(&mut encoder, dir, false)

View File

@@ -81,7 +81,7 @@ const VERIFY_ERR_TEMPLATE: &str = r###"
 Job ID:    {{job.id}}
 Datastore: {{job.store}}

-Verification failed on these snapshots:
+Verification failed on these snapshots/groups:

 {{#each errors}}
   {{this~}}

View File

@@ -20,6 +20,7 @@ fn files() -> Vec<&'static str> {
 fn commands() -> Vec<(&'static str, Vec<&'static str>)> {
     vec![
         // ("<command>", vec![<arg [, arg]>])
+        ("proxmox-backup-manager", vec!["versions", "--verbose"]),
         ("df", vec!["-h"]),
         ("lsblk", vec!["--ascii"]),
         ("zpool", vec!["status"]),
@@ -54,10 +55,10 @@ pub fn generate_report() -> String {
         .map(|file_name| {
             let content = match file_read_optional_string(Path::new(file_name)) {
                 Ok(Some(content)) => content,
-                Ok(None) => String::from("# file does not exists"),
+                Ok(None) => String::from("# file does not exist"),
                 Err(err) => err.to_string(),
             };
-            format!("# cat '{}'\n{}", file_name, content)
+            format!("$ cat '{}'\n{}", file_name, content)
         })
         .collect::<Vec<String>>()
         .join("\n\n");
@@ -73,14 +74,14 @@ pub fn generate_report() -> String {
             Ok(output) => String::from_utf8_lossy(&output.stdout).to_string(),
             Err(err) => err.to_string(),
         };
-        format!("# `{} {}`\n{}", command, args.join(" "), output)
+        format!("$ `{} {}`\n{}", command, args.join(" "), output)
     })
     .collect::<Vec<String>>()
     .join("\n\n");

 let function_outputs = function_calls()
     .iter()
-    .map(|(desc, function)| format!("# {}\n{}", desc, function()))
+    .map(|(desc, function)| format!("$ {}\n{}", desc, function()))
     .collect::<Vec<String>>()
     .join("\n\n");

View File

@@ -623,6 +623,10 @@ fn check_auth(
         .ok_or_else(|| format_err!("failed to split API token header"))?;
     let tokenid: Authid = tokenid.parse()?;

+    if !user_info.is_active_auth_id(&tokenid) {
+        bail!("user account or token disabled or expired.");
+    }
+
     let tokensecret = parts.next()
         .ok_or_else(|| format_err!("failed to split API token header"))?;
     let tokensecret = percent_decode_str(tokensecret)

View File

@@ -69,8 +69,15 @@ pub fn do_verification_job(
     let result = verify_all_backups(datastore, worker.clone(), worker.upid(), None, Some(&filter));
     let job_result = match result {
-        Ok(ref errors) if errors.is_empty() => Ok(()),
-        Ok(_) => Err(format_err!("verification failed - please check the log for details")),
+        Ok(ref failed_dirs) if failed_dirs.is_empty() => Ok(()),
+        Ok(ref failed_dirs) => {
+            worker.log("Failed to verify the following snapshots/groups:");
+            for dir in failed_dirs {
+                worker.log(format!("\t{}", dir));
+            }
+            Err(format_err!("verification failed - please check the log for details"))
+        },
         Err(_) => Err(format_err!("verification failed - job aborted")),
     };

View File

@@ -5,16 +5,14 @@ use std::any::Any;
 use std::collections::HashMap;
 use std::hash::BuildHasher;
 use std::fs::File;
-use std::io::{self, BufRead, ErrorKind, Read, Seek, SeekFrom};
+use std::io::{self, BufRead, Read, Seek, SeekFrom};
 use std::os::unix::io::RawFd;
 use std::path::Path;

 use anyhow::{bail, format_err, Error};
 use serde_json::Value;
 use openssl::hash::{hash, DigestBytes, MessageDigest};
-use percent_encoding::AsciiSet;
+use percent_encoding::{utf8_percent_encode, AsciiSet};

-use proxmox::tools::vec;
 pub use proxmox::tools::fd::Fd;
@@ -25,45 +23,43 @@ pub mod borrow;
 pub mod cert;
 pub mod daemon;
 pub mod disks;
-pub mod fs;
 pub mod format;
-pub mod lru_cache;
-pub mod runtime;
-pub mod ticket;
-pub mod statistics;
-pub mod systemd;
-pub mod nom;
+pub mod fs;
+pub mod fuse_loop;
+pub mod http;
 pub mod logrotate;
 pub mod loopdev;
-pub mod fuse_loop;
+pub mod lru_cache;
+pub mod nom;
+pub mod runtime;
 pub mod socket;
+pub mod statistics;
 pub mod subscription;
+pub mod systemd;
+pub mod ticket;
+pub mod xattr;
 pub mod zip;
-pub mod http;

-mod parallel_handler;
-pub use parallel_handler::*;
+pub mod parallel_handler;
+pub use parallel_handler::ParallelHandler;

 mod wrapped_reader_stream;
-pub use wrapped_reader_stream::*;
+pub use wrapped_reader_stream::{AsyncReaderStream, StdChannelStream, WrappedReaderStream};

 mod async_channel_writer;
-pub use async_channel_writer::*;
+pub use async_channel_writer::AsyncChannelWriter;

 mod std_channel_writer;
-pub use std_channel_writer::*;
+pub use std_channel_writer::StdChannelWriter;

-pub mod xattr;
-
 mod process_locker;
-pub use process_locker::*;
+pub use process_locker::{ProcessLocker, ProcessLockExclusiveGuard, ProcessLockSharedGuard};

 mod file_logger;
-pub use file_logger::*;
+pub use file_logger::{FileLogger, FileLogOptions};

 mod broadcast_future;
-pub use broadcast_future::*;
+pub use broadcast_future::{BroadcastData, BroadcastFuture};

 /// The `BufferedRead` trait provides a single function
 /// `buffered_read`. It returns a reference to an internal buffer. The
@@ -75,76 +71,6 @@ pub trait BufferedRead {
     fn buffered_read(&mut self, offset: u64) -> Result<&[u8], Error>;
 }

-/// Split a file into equal sized chunks. The last chunk may be
-/// smaller. Note: We cannot implement an `Iterator`, because iterators
-/// cannot return a borrowed buffer ref (we want zero-copy)
-pub fn file_chunker<C, R>(mut file: R, chunk_size: usize, mut chunk_cb: C) -> Result<(), Error>
-where
-    C: FnMut(usize, &[u8]) -> Result<bool, Error>,
-    R: Read,
-{
-    const READ_BUFFER_SIZE: usize = 4 * 1024 * 1024; // 4M
-
-    if chunk_size > READ_BUFFER_SIZE {
-        bail!("chunk size too large!");
-    }
-
-    let mut buf = vec::undefined(READ_BUFFER_SIZE);
-
-    let mut pos = 0;
-    let mut file_pos = 0;
-    loop {
-        let mut eof = false;
-        let mut tmp = &mut buf[..];
-        // try to read large portions, at least chunk_size
-        while pos < chunk_size {
-            match file.read(tmp) {
-                Ok(0) => {
-                    eof = true;
-                    break;
-                }
-                Ok(n) => {
-                    pos += n;
-                    if pos > chunk_size {
-                        break;
-                    }
-                    tmp = &mut tmp[n..];
-                }
-                Err(ref e) if e.kind() == ErrorKind::Interrupted => { /* try again */ }
-                Err(e) => bail!("read chunk failed - {}", e.to_string()),
-            }
-        }
-        let mut start = 0;
-        while start + chunk_size <= pos {
-            if !(chunk_cb)(file_pos, &buf[start..start + chunk_size])? {
-                break;
-            }
-            file_pos += chunk_size;
-            start += chunk_size;
-        }
-        if eof {
-            if start < pos {
-                (chunk_cb)(file_pos, &buf[start..pos])?;
-                //file_pos += pos - start;
-            }
-            break;
-        } else {
-            let rest = pos - start;
-            if rest > 0 {
-                let ptr = buf.as_mut_ptr();
-                unsafe {
-                    std::ptr::copy_nonoverlapping(ptr.add(start), ptr, rest);
-                }
-                pos = rest;
-            } else {
-                pos = 0;
-            }
-        }
-    }
-
-    Ok(())
-}
-
@@ -363,6 +289,11 @@ pub fn extract_cookie(cookie: &str, cookie_name: &str) -> Option<String> {
     None
 }

+/// percent encode a url component
+pub fn percent_encode_component(comp: &str) -> String {
+    utf8_percent_encode(comp, percent_encoding::NON_ALPHANUMERIC).to_string()
+}
+
 pub fn join(data: &Vec<String>, sep: char) -> String {
     let mut list = String::new();
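Note: the new `percent_encode_component` helper encodes every non-alphanumeric
byte, which is what makes UPID strings (they contain ':') safe to embed in a
URL path segment. A quick illustrative check:

    // NON_ALPHANUMERIC encodes ' ' and '/' among others (values illustrative).
    let encoded = tools::percent_encode_component("a b/c");
    assert_eq!(encoded, "a%20b%2Fc");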

View File

@@ -128,19 +128,26 @@ fn get_changelog_url(
         None => bail!("incompatible filename, doesn't match regex")
     };

+        if component == "pbs-enterprise" {
+            return Ok(format!("https://enterprise.proxmox.com/{}/{}_{}.changelog",
+                base, package, version));
+        } else {
             return Ok(format!("http://download.proxmox.com/{}/{}_{}.changelog",
                 base, package, version));
         }
+    }

     bail!("unknown origin ({}) or component ({})", origin, component)
 }

 pub struct FilterData<'a> {
-    // this is version info returned by APT
+    /// package name
+    pub package: &'a str,
+    /// this is version info returned by APT
     pub installed_version: Option<&'a str>,
     pub candidate_version: &'a str,
-    // this is the version info the filter is supposed to check
+    /// this is the version info the filter is supposed to check
     pub active_version: &'a str,
 }
@@ -270,6 +277,7 @@ where
     let mut long_desc = "".to_owned();

     let fd = FilterData {
+        package: package.as_str(),
         installed_version: current_version.as_deref(),
         candidate_version: &candidate_version,
         active_version: &version,
@@ -353,6 +361,7 @@ where
         },
         priority: priority_res,
         section: section_res,
+        extra_info: None,
     });
 }
} }

View File

@@ -16,18 +16,18 @@ pub enum EitherStream<L, R> {
     Right(R),
 }

-impl<L: AsyncRead, R: AsyncRead> AsyncRead for EitherStream<L, R> {
+impl<L: AsyncRead + Unpin, R: AsyncRead + Unpin> AsyncRead for EitherStream<L, R> {
     fn poll_read(
         self: Pin<&mut Self>,
         cx: &mut Context,
         buf: &mut [u8],
     ) -> Poll<Result<usize, io::Error>> {
-        match unsafe { self.get_unchecked_mut() } {
+        match self.get_mut() {
             EitherStream::Left(ref mut s) => {
-                unsafe { Pin::new_unchecked(s) }.poll_read(cx, buf)
+                Pin::new(s).poll_read(cx, buf)
             }
             EitherStream::Right(ref mut s) => {
-                unsafe { Pin::new_unchecked(s) }.poll_read(cx, buf)
+                Pin::new(s).poll_read(cx, buf)
             }
         }
     }
@@ -47,51 +47,51 @@ impl<L: AsyncRead, R: AsyncRead> AsyncRead for EitherStream<L, R> {
     where
         B: bytes::BufMut,
     {
-        match unsafe { self.get_unchecked_mut() } {
+        match self.get_mut() {
             EitherStream::Left(ref mut s) => {
-                unsafe { Pin::new_unchecked(s) }.poll_read_buf(cx, buf)
+                Pin::new(s).poll_read_buf(cx, buf)
             }
             EitherStream::Right(ref mut s) => {
-                unsafe { Pin::new_unchecked(s) }.poll_read_buf(cx, buf)
+                Pin::new(s).poll_read_buf(cx, buf)
             }
         }
     }
 }

-impl<L: AsyncWrite, R: AsyncWrite> AsyncWrite for EitherStream<L, R> {
+impl<L: AsyncWrite + Unpin, R: AsyncWrite + Unpin> AsyncWrite for EitherStream<L, R> {
     fn poll_write(
         self: Pin<&mut Self>,
         cx: &mut Context,
         buf: &[u8],
     ) -> Poll<Result<usize, io::Error>> {
-        match unsafe { self.get_unchecked_mut() } {
+        match self.get_mut() {
             EitherStream::Left(ref mut s) => {
-                unsafe { Pin::new_unchecked(s) }.poll_write(cx, buf)
+                Pin::new(s).poll_write(cx, buf)
             }
             EitherStream::Right(ref mut s) => {
-                unsafe { Pin::new_unchecked(s) }.poll_write(cx, buf)
+                Pin::new(s).poll_write(cx, buf)
             }
         }
     }

     fn poll_flush(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(), io::Error>> {
-        match unsafe { self.get_unchecked_mut() } {
+        match self.get_mut() {
             EitherStream::Left(ref mut s) => {
-                unsafe { Pin::new_unchecked(s) }.poll_flush(cx)
+                Pin::new(s).poll_flush(cx)
             }
             EitherStream::Right(ref mut s) => {
-                unsafe { Pin::new_unchecked(s) }.poll_flush(cx)
+                Pin::new(s).poll_flush(cx)
             }
         }
     }

     fn poll_shutdown(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(), io::Error>> {
-        match unsafe { self.get_unchecked_mut() } {
+        match self.get_mut() {
             EitherStream::Left(ref mut s) => {
-                unsafe { Pin::new_unchecked(s) }.poll_shutdown(cx)
+                Pin::new(s).poll_shutdown(cx)
             }
             EitherStream::Right(ref mut s) => {
-                unsafe { Pin::new_unchecked(s) }.poll_shutdown(cx)
+                Pin::new(s).poll_shutdown(cx)
             }
         }
     }
@@ -104,12 +104,12 @@ impl<L: AsyncWrite, R: AsyncWrite> AsyncWrite for EitherStream<L, R> {
     where
         B: bytes::Buf,
     {
-        match unsafe { self.get_unchecked_mut() } {
+        match self.get_mut() {
             EitherStream::Left(ref mut s) => {
-                unsafe { Pin::new_unchecked(s) }.poll_write_buf(cx, buf)
+                Pin::new(s).poll_write_buf(cx, buf)
             }
             EitherStream::Right(ref mut s) => {
-                unsafe { Pin::new_unchecked(s) }.poll_write_buf(cx, buf)
+                Pin::new(s).poll_write_buf(cx, buf)
             }
         }
     }
@@ -180,7 +180,7 @@ pub struct HyperAccept<T>(pub T);
 impl<T, I> hyper::server::accept::Accept for HyperAccept<T>
 where
-    T: TryStream<Ok = I>,
+    T: TryStream<Ok = I> + Unpin,
     I: AsyncRead + AsyncWrite,
 {
     type Conn = I;
@@ -190,7 +190,7 @@ where
         self: Pin<&mut Self>,
         cx: &mut Context,
     ) -> Poll<Option<Result<Self::Conn, Self::Error>>> {
-        let this = unsafe { self.map_unchecked_mut(|this| &mut this.0) };
+        let this = Pin::new(&mut self.get_mut().0);
         this.try_poll_next(cx)
     }
 }
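Note: the change above trades `unsafe` pin projection for an `Unpin` bound;
once the inner type is `Unpin`, `Pin::get_mut` and `Pin::new` are safe. A
minimal sketch of the idea (the wrapper type is illustrative, not from this
file):

    use std::pin::Pin;

    // When the inner type is `Unpin`, pinning is inert: `get_mut` and
    // `Pin::new` are safe, and no `unsafe { get_unchecked_mut() }` is needed.
    fn project<T: Unpin>(wrapper: Pin<&mut (T,)>) -> Pin<&mut T> {
        Pin::new(&mut wrapper.get_mut().0)
    }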

View File

@@ -260,6 +260,7 @@ impl Future for NotifyReady {
 pub async fn create_daemon<F, S>(
     address: std::net::SocketAddr,
     create_service: F,
+    service_name: &str,
 ) -> Result<(), Error>
 where
     F: FnOnce(tokio::net::TcpListener, NotifyReady) -> Result<S, Error>,
@@ -301,10 +302,39 @@ where
     if let Some(future) = finish_future {
         future.await;
     }

+    // FIXME: this is a hack, replace with sd_notify_barrier when available
+    if server::is_reload_request() {
+        wait_service_is_active(service_name).await?;
+    }
+
     log::info!("daemon shut down...");

     Ok(())
 }

+// hack, do not use if unsure!
+async fn wait_service_is_active(service: &str) -> Result<(), Error> {
+    tokio::time::delay_for(std::time::Duration::new(1, 0)).await;
+    loop {
+        let text = match tokio::process::Command::new("systemctl")
+            .args(&["is-active", service])
+            .output()
+            .await
+        {
+            Ok(output) => match String::from_utf8(output.stdout) {
+                Ok(text) => text,
+                Err(err) => bail!("output of 'systemctl is-active' not valid UTF-8 - {}", err),
+            },
+            Err(err) => bail!("executing 'systemctl is-active' failed - {}", err),
+        };
+
+        if text.trim().trim_start() != "reloading" {
+            return Ok(());
+        }
+        tokio::time::delay_for(std::time::Duration::new(5, 0)).await;
+    }
+}
+
 #[link(name = "systemd")]
 extern "C" {
     fn sd_notify(unset_environment: c_int, state: *const c_char) -> c_int;

View File

@@ -102,6 +102,40 @@ impl From<u64> for HumanByte {
     }
 }

+pub fn as_fingerprint(bytes: &[u8]) -> String {
+    proxmox::tools::digest_to_hex(bytes)
+        .as_bytes()
+        .chunks(2)
+        .map(|v| std::str::from_utf8(v).unwrap())
+        .collect::<Vec<&str>>().join(":")
+}
+
+pub mod bytes_as_fingerprint {
+    use serde::{Deserialize, Serializer, Deserializer};
+
+    pub fn serialize<S>(
+        bytes: &[u8; 32],
+        serializer: S,
+    ) -> Result<S::Ok, S::Error>
+    where
+        S: Serializer,
+    {
+        let s = crate::tools::format::as_fingerprint(bytes);
+        serializer.serialize_str(&s)
+    }
+
+    pub fn deserialize<'de, D>(
+        deserializer: D,
+    ) -> Result<[u8; 32], D::Error>
+    where
+        D: Deserializer<'de>,
+    {
+        let mut s = String::deserialize(deserializer)?;
+        s.retain(|c| c != ':');
+        proxmox::tools::hex_to_digest(&s).map_err(serde::de::Error::custom)
+    }
+}
+
 #[test]
 fn correct_byte_convert() {
     fn convert(b: usize) -> String {
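Note: the `bytes_as_fingerprint` module above is designed for serde's `with`
attribute; a hypothetical field using it might look like this (the struct name
is illustrative):

    // Stores the raw 32-byte digest, but (de)serializes it in
    // colon-separated fingerprint form ("aa:bb:...").
    #[derive(serde::Serialize, serde::Deserialize)]
    struct FingerprintDemo {
        #[serde(with = "crate::tools::format::bytes_as_fingerprint")]
        fingerprint: [u8; 32],
    }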

View File

@@ -2,6 +2,7 @@ use anyhow::{Error, format_err, bail};
 use lazy_static::lazy_static;
 use std::task::{Context, Poll};
 use std::os::unix::io::AsRawFd;
+use std::collections::HashMap;

 use hyper::{Uri, Body};
 use hyper::client::{Client, HttpConnector};
@@ -26,8 +27,21 @@ lazy_static! {
     };
 }

-pub async fn get_string(uri: &str) -> Result<String, Error> {
-    let res = HTTP_CLIENT.get(uri.parse()?).await?;
+pub async fn get_string(uri: &str, extra_headers: Option<&HashMap<String, String>>) -> Result<String, Error> {
+    let mut request = Request::builder()
+        .method("GET")
+        .uri(uri)
+        .header("User-Agent", "proxmox-backup-client/1.0");
+
+    if let Some(hs) = extra_headers {
+        for (h, v) in hs.iter() {
+            request = request.header(h, v);
+        }
+    }
+
+    let request = request.body(Body::empty())?;
+
+    let res = HTTP_CLIENT.request(request).await?;

     let status = res.status();
     if !status.is_success() {
@@ -115,7 +129,12 @@ impl hyper::service::Service<Uri> for HttpsConnector {
             .to_string();

         let config = this.ssl_connector.configure();
-        let conn = this.http.call(dst).await?;
+        let dst_str = dst.to_string(); // for error messages
+        let conn = this
+            .http
+            .call(dst)
+            .await
+            .map_err(|err| format_err!("error connecting to {} - {}", dst_str, err))?;

         let _ = set_tcp_keepalive(conn.as_raw_fd(), PROXMOX_BACKUP_TCP_KEEPALIVE_TIME);
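Note: a hedged sketch of calling the extended `get_string` (URL and header
values are illustrative, not from this changeset):

    use std::collections::HashMap;

    // Plain requests keep passing `None`; callers that need it can now
    // attach extra headers such as an Authorization header.
    let mut headers = HashMap::new();
    headers.insert("Authorization".to_string(), "Basic ...".to_string());
    let body = get_string("https://example.com/status", Some(&headers)).await?;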

View File

@@ -7,6 +7,7 @@ use std::task::{Context, Poll, RawWaker, Waker};
 use std::thread::{self, Thread};

 use lazy_static::lazy_static;
+use pin_utils::pin_mut;
 use tokio::runtime::{self, Runtime};

 thread_local! {
@@ -154,9 +155,8 @@ pub fn main<F: Future>(fut: F) -> F::Output {
     block_on(fut)
 }

-fn block_on_local_future<F: Future>(mut fut: F) -> F::Output {
-    use std::pin::Pin;
-    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
+fn block_on_local_future<F: Future>(fut: F) -> F::Output {
+    pin_mut!(fut);

     let waker = Arc::new(thread::current());
     let waker = thread_waker_clone(Arc::into_raw(waker) as *const ());
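Note: `pin_mut!` shadows the binding with a stack-pinned `Pin<&mut F>`, moving
the `unsafe { Pin::new_unchecked(..) }` the old code wrote by hand into a
vetted macro. A standalone sketch (function names are illustrative):

    use pin_utils::pin_mut;

    async fn answer() -> u32 { 42 }

    fn demo() {
        let fut = answer();
        // Re-binds `fut` as `Pin<&mut impl Future>`, pinned on the stack.
        pin_mut!(fut);
        let _pinned: std::pin::Pin<&mut _> = fut;
    }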

View File

@@ -36,8 +36,7 @@ Ext.define('PBS.Application', {
             xtype: view,
         });

         if (skipCheck !== true) {
-            // fixme:
-            // Proxmox.Utils.checked_command(function() {});
+            Proxmox.Utils.checked_command(Ext.emptyFn); // display subscription status
         }
     },

View File

@@ -332,12 +332,13 @@ Ext.define('PBS.Dashboard', {
                 Ext.String.format(gettext('{0} days'), '{days}') + ')',
             },
             xtype: 'pbsTaskSummary',
+            height: 200,
             reference: 'tasksummary',
         },
         {
             iconCls: 'fa fa-ticket',
             title: 'Subscription',
-            height: 166,
+            height: 200,
             reference: 'subscription',
             xtype: 'pbsSubscriptionInfo',
         },
@@ -394,7 +395,7 @@ Ext.define('PBS.dashboard.SubscriptionInfo', {
             break;
         case 0:
             icon = 'times-circle critical';
-            message = gettext('This node does not have a subscription.');
+            message = gettext('<h1>No valid subscription</h1>' + PBS.Utils.noSubKeyHtml);
             break;
         default:
             throw 'invalid subscription status';

View File

@@ -202,11 +202,6 @@ Ext.define('PBS.MainView', {
                     padding: '0 0 0 5',
                     xtype: 'versioninfo',
                 },
-                {
-                    padding: 5,
-                    html: '<a href="https://bugzilla.proxmox.com" target="_blank">BETA</a>',
-                    baseCls: 'x-plain',
-                },
                 {
                     flex: 1,
                     baseCls: 'x-plain',

View File

@@ -43,6 +43,8 @@ JSSRC= \
 	panel/Tasks.js \
 	panel/XtermJsConsole.js \
 	panel/AccessControl.js \
+	panel/StorageAndDisks.js \
+	panel/UsageChart.js \
 	ZFSList.js \
 	DirectoryList.js \
 	LoginView.js \
@@ -56,6 +58,8 @@ JSSRC= \
 	datastore/Content.js \
 	datastore/OptionView.js \
 	datastore/Panel.js \
+	datastore/DataStoreListSummary.js \
+	datastore/DataStoreList.js \
 	ServerStatus.js \
 	ServerAdministration.js \
 	Dashboard.js \
@@ -79,7 +83,7 @@ js/proxmox-backup-gui.js: .lint-incremental js OnlineHelpInfo.js ${JSSRC}
 .PHONY: check
 check:
-	eslint ${JSSRC}
+	eslint --strict ${JSSRC}
 	touch ".lint-incremental"

 .lint-incremental: ${JSSRC}

View File

@@ -62,31 +62,18 @@ Ext.define('PBS.store.NavigationStore', {
 		    leaf: true,
 		},
 		{
-		    text: gettext('Disks'),
+		    text: gettext('Storage / Disks'),
 		    iconCls: 'fa fa-hdd-o',
-		    path: 'pmxDiskList',
-		    leaf: false,
-		    children: [
-			{
-			    text: Proxmox.Utils.directoryText,
-			    iconCls: 'fa fa-folder',
-			    path: 'pbsDirectoryList',
-			    leaf: true,
-			},
-			{
-			    text: "ZFS",
-			    iconCls: 'fa fa-th-large',
-			    path: 'pbsZFSList',
-			    leaf: true,
-			},
-		    ],
+		    path: 'pbsStorageAndDiskPanel',
+		    leaf: true,
 		},
 	    ],
 	},
 	{
 	    text: gettext('Datastore'),
 	    iconCls: 'fa fa-archive',
 	    id: 'datastores',
+	    path: 'pbsDataStores',
 	    expanded: true,
 	    expandable: false,
 	    leaf: false,
@@ -174,9 +161,6 @@ Ext.define('PBS.view.main.NavigationTree', {
     listeners: {
 	itemclick: function(tl, info) {
-	    if (info.node.data.id === 'datastores') {
-		return false;
-	    }
 	    if (info.node.data.id === 'addbutton') {
 		let me = this;
 		Ext.create('PBS.DataStoreEdit', {

View File

@@ -3,38 +3,102 @@ const proxmoxOnlineHelpInfo = {
     "link": "/docs/index.html",
     "title": "Proxmox Backup Server Documentation Index"
   },
+  "id1": {
+    "link": "/docs/calendarevents.html#id1",
+    "title": "Calendar Events"
+  },
+  "id2": {
+    "link": "/docs/backup-client.html#id2",
+    "title": "Encryption"
+  },
+  "changing-backup-owner": {
+    "link": "/docs/backup-client.html#changing-backup-owner",
+    "title": "Changing the Owner of a Backup Group"
+  },
   "backup-pruning": {
     "link": "/docs/backup-client.html#backup-pruning",
     "title": "Pruning and Removing Backups"
   },
+  "id3": {
+    "link": "/docs/backup-client.html#id3",
+    "title": "Garbage Collection"
+  },
+  "pxar-format": {
+    "link": "/docs/file-formats.html#pxar-format",
+    "title": "Proxmox File Archive Format (``.pxar``)"
+  },
+  "sysadmin-package-repositories": {
+    "link": "/docs/package-repositories.html#sysadmin-package-repositories",
+    "title": "Debian Package Repositories"
+  },
+  "get-help": {
+    "link": "/docs/introduction.html#get-help",
+    "title": "Getting Help"
+  },
   "chapter-zfs": {
     "link": "/docs/sysadmin.html#chapter-zfs",
     "title": "ZFS on Linux"
   },
+  "local-zfs-special-device": {
+    "link": "/docs/sysadmin.html#local-zfs-special-device",
+    "title": "ZFS Special Device"
+  },
   "maintenance-pruning": {
     "link": "/docs/maintenance.html#maintenance-pruning",
     "title": "Pruning"
   },
+  "maintenance-gc": {
+    "link": "/docs/maintenance.html#maintenance-gc",
+    "title": "Garbage Collection"
+  },
+  "maintenance-verification": {
+    "link": "/docs/maintenance.html#maintenance-verification",
+    "title": "Verification"
+  },
   "maintenance-notification": {
     "link": "/docs/maintenance.html#maintenance-notification",
     "title": "Notifications"
   },
   "backup-remote": {
     "link": "/docs/managing-remotes.html#backup-remote",
-    "title": ":term:`Remote`"
+    "title": "Remote"
   },
   "syncjobs": {
     "link": "/docs/managing-remotes.html#syncjobs",
     "title": "Sync Jobs"
   },
+  "sysadmin-network-configuration": {
+    "link": "/docs/network-management.html#sysadmin-network-configuration",
+    "title": "Network Management"
+  },
+  "pve-integration": {
+    "link": "/docs/pve-integration.html#pve-integration",
+    "title": "`Proxmox VE`_ Integration"
+  },
+  "storage-disk-management": {
+    "link": "/docs/storage.html#storage-disk-management",
+    "title": "Disk Management"
+  },
   "datastore-intro": {
     "link": "/docs/storage.html#datastore-intro",
-    "title": ":term:`DataStore`"
+    "title": "Datastore"
+  },
+  "storage-datastore-create": {
+    "link": "/docs/storage.html#storage-datastore-create",
+    "title": "Creating a Datastore"
+  },
+  "sysadmin-host-administration": {
+    "link": "/docs/sysadmin.html#sysadmin-host-administration",
+    "title": "Host System Administration"
   },
   "user-mgmt": {
     "link": "/docs/user-management.html#user-mgmt",
     "title": "User Management"
   },
+  "user-tokens": {
+    "link": "/docs/user-management.html#user-tokens",
+    "title": "API Tokens"
+  },
   "user-acl": {
     "link": "/docs/user-management.html#user-acl",
     "title": "Access Control"

View File

@@ -7,6 +7,8 @@ Ext.define('PBS.ServerAdministration', {
     border: true,
     defaults: { border: false },

+    tools: [PBS.Utils.get_help_tool("sysadmin-host-administration")],
+
     controller: {
         xclass: 'Ext.app.ViewController',

View File

@@ -51,6 +51,78 @@ Ext.define('PBS.ServerStatus', {
     scrollable: true,

+    showVersions: function() {
+	let me = this;
+
+	// Note: use simply text/html here, as ExtJS grid has problems with cut&paste
+	let panel = Ext.createWidget('component', {
+	    autoScroll: true,
+	    id: 'pkgversions',
+	    padding: 5,
+	    style: {
+		'background-color': 'white',
+		'white-space': 'pre',
+		'font-family': 'monospace',
+	    },
+	});
+
+	let win = Ext.create('Ext.window.Window', {
+	    title: gettext('Package versions'),
+	    width: 600,
+	    height: 600,
+	    layout: 'fit',
+	    modal: true,
+	    items: [panel],
+	    buttons: [
+		{
+		    xtype: 'button',
+		    iconCls: 'fa fa-clipboard',
+		    handler: function(button) {
+			window.getSelection().selectAllChildren(
+			    document.getElementById('pkgversions'),
+			);
+			document.execCommand("copy");
+		    },
+		    text: gettext('Copy'),
+		},
+		{
+		    text: gettext('Ok'),
+		    handler: function() {
+			this.up('window').close();
+		    },
+		},
+	    ],
+	});
+
+	Proxmox.Utils.API2Request({
+	    waitMsgTarget: me,
+	    url: `/nodes/localhost/apt/versions`,
+	    method: 'GET',
+	    failure: function(response, opts) {
+		win.close();
+		Ext.Msg.alert(gettext('Error'), response.htmlStatus);
+	    },
+	    success: function(response, opts) {
+		let text = '';
+		Ext.Array.each(response.result.data, function(rec) {
+		    let version = "not correctly installed";
+		    let pkg = rec.Package;
+		    if (rec.OldVersion && rec.OldVersion !== 'unknown') {
+			version = rec.OldVersion;
+		    }
+		    if (rec.ExtraInfo) {
+			text += `${pkg}: ${version} (${rec.ExtraInfo})\n`;
+		    } else {
+			text += `${pkg}: ${version}\n`;
+		    }
+		});
+		win.show();
+		panel.update(Ext.htmlEncode(text));
+	    },
+	});
+    },
+
     initComponent: function() {
 	var me = this;
@@ -94,7 +166,15 @@ Ext.define('PBS.ServerStatus', {
 	},
     });

-    me.tbar = [consoleBtn, restartBtn, shutdownBtn, '->', { xtype: 'proxmoxRRDTypeSelector' }];
+    let version_btn = new Ext.Button({
+	text: gettext('Package versions'),
+	iconCls: 'fa fa-gift',
+	handler: function() {
+	    Proxmox.Utils.checked_command(function() { me.showVersions(); });
+	},
+    });
+
+    me.tbar = [version_btn, '-', consoleBtn, '-', restartBtn, shutdownBtn, '->', { xtype: 'proxmoxRRDTypeSelector' }];

     var rrdstore = Ext.create('Proxmox.data.RRDStore', {
 	rrdurl: "/api2/json/nodes/localhost/rrd",

View File

@@ -2,13 +2,14 @@ Ext.define('PBS.SubscriptionKeyEdit', {
     extend: 'Proxmox.window.Edit',

     title: gettext('Upload Subscription Key'),
-    width: 300,
+    width: 320,
     autoLoad: true,

-    onlineHelp: 'getting_help',
+    onlineHelp: 'get_help',

     items: {
         xtype: 'textfield',
+        labelWidth: 120,
         name: 'key',
         value: '',
         fieldLabel: gettext('Subscription Key'),
@@ -22,7 +23,8 @@ Ext.define('PBS.Subscription', {

     title: gettext('Subscription'),
     border: true,
-    onlineHelp: 'getting_help',
+    onlineHelp: 'get_help',
+    tools: [PBS.Utils.get_help_tool("get_help")],

     viewConfig: {
         enableTextSelection: true,
@@ -140,16 +142,18 @@ Ext.define('PBS.Subscription', {
     tbar: [
         {
             text: gettext('Upload Subscription Key'),
+            iconCls: 'fa fa-ticket',
             handler: function() {
                 let win = Ext.create('PBS.SubscriptionKeyEdit', {
                     url: '/api2/extjs/' + baseurl,
+                    autoShow: true,
                 });
-                win.show();
                 win.on('destroy', reload);
             },
         },
         {
             text: gettext('Check'),
+            iconCls: 'fa fa-check-square-o',
             handler: function() {
                 Proxmox.Utils.API2Request({
                     params: { force: 1 },
@@ -171,10 +175,12 @@ Ext.define('PBS.Subscription', {
             dangerous: true,
             selModel: false,
             callback: reload,
+            iconCls: 'fa fa-trash-o',
         },
         '-',
         {
             text: gettext('System Report'),
+            iconCls: 'fa fa-stethoscope',
             handler: function() {
                 Proxmox.Utils.checked_command(function() { me.showReport(); });
             },

View File

@@ -6,6 +6,7 @@ Ext.define('PBS.SystemConfiguration', {
     border: true,
     scrollable: true,
     defaults: { border: false },
+    tools: [PBS.Utils.get_help_tool("sysadmin-network-configuration")],
     items: [
         {
             title: gettext('Network/Time'),

View File

@@ -50,6 +50,8 @@ Ext.define('PBS.Utils', {
         }
     },

+    noSubKeyHtml: 'You do not have a valid subscription for this server. Please visit <a target="_blank" href="https://www.proxmox.com/proxmox-backup-server/pricing">www.proxmox.com</a> to get a list of available options.',
+
     getDataStoreFromPath: function(path) {
         return path.slice(PBS.Utils.dataStorePrefix.length);
     },
@@ -145,7 +147,7 @@ Ext.define('PBS.Utils', {
     },

     render_datastore_worker_id: function(id, what) {
-        const res = id.match(/^(\S+?)_(\S+?)_(\S+?)(_(.+))?$/);
+        const res = id.match(/^(\S+?):(\S+?)\/(\S+?)(\/(.+))?$/);
         if (res) {
             let datastore = res[1], backupGroup = `${res[2]}/${res[3]}`;
             if (res[4] !== undefined) {
@@ -159,6 +161,39 @@ Ext.define('PBS.Utils', {
         return `Datastore ${what} ${id}`;
     },

+    // mimics Display trait in backend
+    renderKeyID: function(fingerprint) {
+        return fingerprint.substring(0, 23);
+    },
+
+    parse_datastore_worker_id: function(type, id) {
+        let result;
+        let res;
+        if (type.startsWith('verif')) {
+            res = PBS.Utils.VERIFICATION_JOB_ID_RE.exec(id);
+            if (res) {
+                result = res[1];
+            }
+        } else if (type.startsWith('sync')) {
+            res = PBS.Utils.SYNC_JOB_ID_RE.exec(id);
+            if (res) {
+                result = res[3];
+            }
+        } else if (type === 'backup') {
+            res = PBS.Utils.BACKUP_JOB_ID_RE.exec(id);
+            if (res) {
+                result = res[1];
+            }
+        } else if (type === 'garbage_collection') {
+            return id;
+        } else if (type === 'prune') {
+            return id;
+        }
+        return result;
+    },
+
     extractTokenUser: function(tokenid) {
         return tokenid.match(/^(.+)!([^!]+)$/)[1];
     },
@@ -167,9 +202,75 @@ Ext.define('PBS.Utils', {
         return tokenid.match(/^(.+)!([^!]+)$/)[2];
     },

+    render_estimate: function(value) {
+        if (!value) {
+            return gettext('Not enough data');
+        }
+
+        let now = new Date();
+        let estimate = new Date(value*1000);
+
+        let timespan = (estimate - now)/1000;
+
+        if (Number(estimate) <= Number(now) || isNaN(timespan)) {
+            return gettext('Never');
+        }
+
+        let duration = Proxmox.Utils.format_duration_human(timespan);
+        return Ext.String.format(gettext("in {0}"), duration);
+    },
+
+    render_size_usage: function(val, max) {
+        if (max === 0) {
+            return gettext('N/A');
+        }
+        return (val*100/max).toFixed(2) + '% (' +
+            Ext.String.format(gettext('{0} of {1}'),
+                Proxmox.Utils.format_size(val), Proxmox.Utils.format_size(max)) + ')';
+    },
+
+    get_help_tool: function(blockid) {
+        let info = Proxmox.Utils.get_help_info(blockid);
+        if (info === undefined) {
+            info = Proxmox.Utils.get_help_info('pbs_documentation_index');
+        }
+        if (info === undefined) {
+            throw "get_help_info failed"; // should not happen
+        }
+
+        let docsURI = window.location.origin + info.link;
+        let title = info.title;
+        if (info.subtitle) {
+            title += ' - ' + info.subtitle;
+        }
+        return {
+            type: 'help',
+            tooltip: title,
+            handler: function() {
+                window.open(docsURI);
+            },
+        };
+    },
+
+    calculate_dedup_factor: function(gcstatus) {
+        let dedup = 1.0;
+        if (gcstatus['disk-bytes'] > 0) {
+            dedup = (gcstatus['index-data-bytes'] || 0)/gcstatus['disk-bytes'];
+        }
+        return dedup;
+    },
+
     constructor: function() {
         var me = this;

+        let PROXMOX_SAFE_ID_REGEX = "([A-Za-z0-9_][A-Za-z0-9._-]*)";
+        // only anchored at beginning
+        // only parses datastore for now
+        me.VERIFICATION_JOB_ID_RE = new RegExp("^" + PROXMOX_SAFE_ID_REGEX + ':?');
+        me.SYNC_JOB_ID_RE = new RegExp("^" + PROXMOX_SAFE_ID_REGEX + ':' +
+            PROXMOX_SAFE_ID_REGEX + ':' + PROXMOX_SAFE_ID_REGEX + ':');
+        me.BACKUP_JOB_ID_RE = new RegExp("^" + PROXMOX_SAFE_ID_REGEX + ':');
+
         // do whatever you want here
         Proxmox.Utils.override_task_descriptions({
             backup: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Backup')),

View File

@@ -22,7 +22,12 @@ Ext.define('PBS.config.ACLView', {

     title: gettext('Permissions'),

+    // Show only those permissions, which can affect this and children paths.
+    // That means that also higher up, "shorter" paths are included, as those
+    // can have a say in the rights on the asked path.
     aclPath: undefined,
+
+    // tell API to only return ACLs matching exactly the aclPath config.
     aclExact: undefined,

     controller: {
@@ -83,12 +88,37 @@ Ext.define('PBS.config.ACLView', {
         let proxy = view.getStore().rstore.getProxy();

         let params = {};
+        if (typeof view.aclPath === "string") {
+            let pathFilter = Ext.create('Ext.util.Filter', {
+                filterPath: view.aclPath,
+                filterAtoms: view.aclPath.split('/'),
+                filterFn: function(item) {
+                    let me = this;
+                    let path = item.data.path;
+                    if (path === "/" || path === me.filterPath) {
+                        return true;
+                    } else if (path.length > me.filterPath.length) {
+                        return path.startsWith(me.filterPath + '/');
+                    }
+                    let pathAtoms = path.split('/');
+                    let commonLength = Math.min(pathAtoms.length, me.filterAtoms.length);
+                    for (let i = 1; i < commonLength; i++) {
+                        if (me.filterAtoms[i] !== pathAtoms[i]) {
+                            return false;
+                        }
+                    }
+                    return true;
+                },
+            });
+            view.getStore().addFilter(pathFilter);
+        }
+        if (view.aclExact !== undefined) {
             if (view.aclPath !== undefined) {
                 params.path = view.aclPath;
             }
-        if (view.aclExact !== undefined) {
             params.exact = view.aclExact;
         }
+
         proxy.setExtraParams(params);
         Proxmox.Utils.monStoreErrors(view, view.getStore().rstore);
     },

View File

@@ -1,6 +1,6 @@
 Ext.define('pmx-remotes', {
     extend: 'Ext.data.Model',
-    fields: ['name', 'host', 'port', 'userid', 'fingerprint', 'comment',
+    fields: ['name', 'host', 'port', 'auth-id', 'fingerprint', 'comment',
         {
             name: 'server',
             calculate: function(data) {
@@ -32,6 +32,8 @@ Ext.define('PBS.config.RemoteView', {

     title: gettext('Remotes'),

+    tools: [PBS.Utils.get_help_tool("backup-remote")],
+
     controller: {
         xclass: 'Ext.app.ViewController',
@@ -127,11 +129,11 @@ Ext.define('PBS.config.RemoteView', {
             dataIndex: 'server',
         },
         {
-            header: gettext('User name'),
+            header: gettext('Auth ID'),
             width: 200,
             sortable: true,
             renderer: Ext.String.htmlEncode,
-            dataIndex: 'userid',
+            dataIndex: 'auth-id',
         },
         {
             header: gettext('Fingerprint'),

View File

@@ -162,9 +162,11 @@ Ext.define('PBS.config.SyncJobView', {
         reload: function() { this.getView().getStore().rstore.load(); },

         init: function(view) {
-            view.getStore().rstore.getProxy().setExtraParams({
-                store: view.datastore,
-            });
+            let params = {};
+            if (view.datastore !== undefined) {
+                params.store = view.datastore;
+            }
+            view.getStore().rstore.getProxy().setExtraParams(params);
             Proxmox.Utils.monStoreErrors(view, view.getStore().rstore);
         },
     },
@@ -237,6 +239,12 @@ Ext.define('PBS.config.SyncJobView', {
             flex: 1,
             sortable: true,
         },
+        {
+            header: gettext('Local Store'),
+            dataIndex: 'store',
+            width: 120,
+            sortable: true,
+        },
         {
             header: gettext('Remote'),
             dataIndex: 'remote',
@@ -249,12 +257,6 @@ Ext.define('PBS.config.SyncJobView', {
             width: 120,
             sortable: true,
         },
-        {
-            header: gettext('Local Store'),
-            dataIndex: 'store',
-            width: 120,
-            sortable: true,
-        },
         {
             header: gettext('Owner'),
             dataIndex: 'owner',
@@ -304,4 +306,18 @@ Ext.define('PBS.config.SyncJobView', {
             sortable: true,
         },
     ],
+
+    initComponent: function() {
+        let me = this;
+
+        let hideLocalDatastore = !!me.datastore;
+
+        for (let column of me.columns) {
+            if (column.dataIndex === 'store') {
+                column.hidden = hideLocalDatastore;
+                break;
+            }
+        }
+
+        me.callParent();
+    },
 });

View File

@@ -76,6 +76,17 @@ Ext.define('PBS.config.UserView', {
             }).show();
         },

+        renderName: function(val, cell, rec) {
+            let name = [];
+            if (rec.data.firstname) {
+                name.push(rec.data.firstname);
+            }
+            if (rec.data.lastname) {
+                name.push(rec.data.lastname);
+            }
+            return name.join(' ');
+        },
+
         renderUsername: function(userid) {
             return Ext.String.htmlEncode(userid.match(/^(.+)@([^@]+)$/)[1]);
         },
@@ -181,6 +192,7 @@ Ext.define('PBS.config.UserView', {
             width: 150,
             sortable: true,
             dataIndex: 'firstname',
+            renderer: 'renderName',
         },
         {
             header: gettext('Comment'),

View File

@@ -157,9 +157,11 @@ Ext.define('PBS.config.VerifyJobView', {
         reload: function() { this.getView().getStore().rstore.load(); },

         init: function(view) {
-            view.getStore().rstore.getProxy().setExtraParams({
-                store: view.datastore,
-            });
+            let params = {};
+            if (view.datastore !== undefined) {
+                params.store = view.datastore;
+            }
+            view.getStore().rstore.getProxy().setExtraParams(params);
             Proxmox.Utils.monStoreErrors(view, view.getStore().rstore);
         },
     },
@@ -232,6 +234,11 @@ Ext.define('PBS.config.VerifyJobView', {
             flex: 1,
             sortable: true,
         },
+        {
+            header: gettext('Datastore'),
+            dataIndex: 'store',
+            flex: 1,
+        },
         {
             header: gettext('Skip Verified'),
             dataIndex: 'ignore-verified',
@@ -288,4 +295,18 @@ Ext.define('PBS.config.VerifyJobView', {
             sortable: true,
         },
     ],
+
+    initComponent: function() {
+        let me = this;
+
+        let hideDatastore = !!me.datastore;
+
+        for (let column of me.columns) {
+            if (column.dataIndex === 'store') {
+                column.hidden = hideDatastore;
+                break;
+            }
+        }
+
+        me.callParent();
+    },
 });
}); });

View File

@@ -241,3 +241,15 @@ span.snapshot-comment-column {
     font-weight: bold;
     content: "V.";
 }
+
+.x-action-col-icon {
+    margin: 0 1px;
+    font-size: 14px;
+}
+
+.x-action-col-icon:before, .x-action-col-icon:after {
+    font-size: 14px;
+}
+
+.x-action-col-icon:hover:before, .x-action-col-icon:hover:after {
+    text-shadow: 1px 1px 1px #AAA;
+    font-weight: 800;
+}

View File

@@ -47,24 +47,6 @@ Ext.define('PBS.DatastoreStatistics', {
     controller: {
         xclass: 'Ext.app.ViewController',

-        render_estimate: function(value) {
-            if (!value) {
-                return gettext('Not enough data');
-            }
-
-            let now = new Date();
-            let estimate = new Date(value*1000);
-
-            let timespan = (estimate - now)/1000;
-
-            if (Number(estimate) <= Number(now) || isNaN(timespan)) {
-                return gettext('Never');
-            }
-
-            let duration = Proxmox.Utils.format_duration_human(timespan);
-            return Ext.String.format(gettext("in {0}"), duration);
-        },
-
         init: function(view) {
             Proxmox.Utils.monStoreErrors(view, view.getStore().rstore);
         },
@@ -111,7 +93,7 @@ Ext.define('PBS.DatastoreStatistics', {
             text: gettext('Estimated Full'),
             dataIndex: 'estimated-full-date',
             sortable: true,
-            renderer: 'render_estimate',
+            renderer: PBS.Utils.render_estimate,
             flex: 1,
             minWidth: 130,
             maxWidth: 200,

View File

@ -4,9 +4,6 @@ Ext.define('PBS.TaskSummary', {
title: gettext('Task Summary'), title: gettext('Task Summary'),
controller: {
xclass: 'Ext.app.ViewController',
states: [ states: [
"", "",
"error", "error",
@ -30,14 +27,24 @@ Ext.define('PBS.TaskSummary', {
"verify": gettext('Verify'), "verify": gettext('Verify'),
}, },
// set true to show the onclick panel as modal grid
subPanelModal: false,
// the datastore the onclick panel is filtered by
datastore: undefined,
controller: {
xclass: 'Ext.app.ViewController',
openTaskList: function(grid, td, cellindex, record, tr, rowindex) { openTaskList: function(grid, td, cellindex, record, tr, rowindex) {
let me = this; let me = this;
let view = me.getView(); let view = me.getView();
if (cellindex > 0) { if (cellindex > 0) {
let tasklist = view.tasklist; let tasklist = view.tasklist;
let state = me.states[cellindex]; let state = view.states[cellindex];
let type = me.types[rowindex]; let type = view.types[rowindex];
let filterParam = { let filterParam = {
limit: 0, limit: 0,
'statusfilter': state, 'statusfilter': state,
@ -48,7 +55,11 @@ Ext.define('PBS.TaskSummary', {
filterParam.since = me.since; filterParam.since = me.since;
} }
if (record.data[state] === 0) { if (view.datastore) {
filterParam.store = view.datastore;
}
if (record.data[state] === 0 || record.data[state] === undefined) {
return; return;
} }
@ -137,7 +148,7 @@ Ext.define('PBS.TaskSummary', {
tasklist.cidx = cellindex; tasklist.cidx = cellindex;
tasklist.ridx = rowindex; tasklist.ridx = rowindex;
let task = me.titles[type]; let task = view.titles[type];
let status = ""; let status = "";
switch (state) { switch (state) {
case 'ok': status = gettext("OK"); break; case 'ok': status = gettext("OK"); break;
@ -149,7 +160,13 @@ Ext.define('PBS.TaskSummary', {
tasklist.getStore().getProxy().setExtraParams(filterParam); tasklist.getStore().getProxy().setExtraParams(filterParam);
tasklist.getStore().removeAll(); tasklist.getStore().removeAll();
if (view.subPanelModal) {
tasklist.modal = true;
tasklist.showBy(Ext.getBody(), 'c-c');
} else {
tasklist.modal = false;
tasklist.showBy(td, 'bl-tl'); tasklist.showBy(td, 'bl-tl');
}
setTimeout(() => tasklist.getStore().reload(), 10); setTimeout(() => tasklist.getStore().reload(), 10);
} }
}, },
@ -182,8 +199,13 @@ Ext.define('PBS.TaskSummary', {
render_count: function(value, md, record, rowindex, colindex) { render_count: function(value, md, record, rowindex, colindex) {
let me = this; let me = this;
let icon = me.render_icon(me.states[colindex], value); let view = me.getView();
return `${icon} ${value}`; let count = value || 0;
if (count > 0) {
md.tdCls = 'pointer';
}
let icon = me.render_icon(view.states[colindex], count);
return `${icon} ${value || 0}`;
}, },
}, },
@ -191,8 +213,18 @@ Ext.define('PBS.TaskSummary', {
let me = this; let me = this;
let controller = me.getController(); let controller = me.getController();
let data = []; let data = [];
controller.types.forEach((type) => { if (!source) {
source[type].type = controller.titles[type]; source = {};
}
me.types.forEach((type) => {
if (!source[type]) {
source[type] = {
error: 0,
warning: 0,
ok: 0,
};
}
source[type].type = me.titles[type];
data.push(source[type]); data.push(source[type]);
}); });
me.lookup('grid').getStore().setData(data); me.lookup('grid').getStore().setData(data);
@@ -214,7 +246,7 @@ Ext.define('PBS.TaskSummary', {
         rowLines: false,
         viewConfig: {
             stripeRows: false,
-            trackOver: false,
+            trackOver: true,
         },
         scrollable: false,
         disableSelection: true,
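With `states`, `types`, and `titles` moved from the controller onto the view, and the new `subPanelModal`/`datastore` configs, the summary grid can be embedded once per datastore. A condensed usage sketch matching how the overview panel further down instantiates it; the datastore name here is hypothetical:

// Task summary whose click-through task list opens as a centered modal
// and only shows tasks of a single datastore.
let perStoreSummary = {
    xtype: 'pbsTaskSummary',
    subPanelModal: true, // show the onclick panel as a modal grid
    datastore: 'store1', // hypothetical name; adds a 'store' filter param
};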


@@ -12,6 +12,7 @@ Ext.define('pbs-data-store-snapshots', {
         'files',
         'owner',
         'verification',
+        'fingerprint',
         { name: 'size', type: 'int', allowNull: true },
         {
             name: 'crypt-mode',
@@ -182,6 +183,7 @@ Ext.define('PBS.DataStoreContent', {
             for (const file of data.files) {
                 file.text = file.filename;
                 file['crypt-mode'] = PBS.Utils.cryptmap.indexOf(file['crypt-mode']);
+                file.fingerprint = data.fingerprint;
                 file.leaf = true;
                 file.matchesFilter = true;
@@ -595,6 +597,7 @@ Ext.define('PBS.DataStoreContent', {
             header: gettext('Actions'),
             xtype: 'actioncolumn',
             dataIndex: 'text',
+            width: 140,
             items: [
                 {
                     handler: 'onVerify',
@@ -698,7 +701,16 @@ Ext.define('PBS.DataStoreContent', {
                 if (iconCls) {
                     iconTxt = `<i class="fa fa-fw fa-${iconCls}"></i> `;
                 }
-                return (iconTxt + PBS.Utils.cryptText[v]) || Proxmox.Utils.unknownText;
+                let tip;
+                if (v !== PBS.Utils.cryptmap.indexOf('none') && record.data.fingerprint !== undefined) {
+                    tip = "Key: " + PBS.Utils.renderKeyID(record.data.fingerprint);
+                }
+                let txt = (iconTxt + PBS.Utils.cryptText[v]) || Proxmox.Utils.unknownText;
+                if (record.parentNode.id === 'root' || tip === undefined) {
+                    return txt;
+                } else {
+                    return `<span data-qtip="${tip}">${txt}</span>`;
+                }
             },
         },
         {
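The tooltip is skipped for top-level (group) rows and for entries without a fingerprint; for snapshot and file rows it shows a key ID shortened by `PBS.Utils.renderKeyID` (per the commit messages, a short key ID matching the backend key's Display format). A hedged stand-in for that helper, not the actual Utils implementation, assuming a colon-separated hex fingerprint:

// Rough sketch only: keep the first 8 bytes of the fingerprint.
// The real PBS.Utils.renderKeyID may differ.
function renderKeyIDSketch(fingerprint) {
    return fingerprint.split(':').slice(0, 8).join(':');
}
// renderKeyIDSketch('aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99')
//     === 'aa:bb:cc:dd:ee:ff:00:11'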


@@ -0,0 +1,257 @@
// Overview of all datastores
Ext.define('PBS.datastore.DataStoreList', {
    extend: 'Ext.panel.Panel',
    alias: 'widget.pbsDataStoreList',

    title: gettext('Summary'),

    scrollable: true,
    bodyPadding: 5,
    defaults: {
        xtype: 'pbsDataStoreListSummary',
        padding: 5,
    },

    datastores: {},
    tasks: {},

    updateTasks: function(taskStore, records, success) {
        let me = this;
        if (!success) {
            return;
        }

        for (const store of Object.keys(me.tasks)) {
            me.tasks[store] = {};
        }

        records.forEach(record => {
            let task = record.data;
            if (!task.worker_id) {
                return;
            }

            let type = task.worker_type;
            if (type === 'syncjob') {
                type = 'sync';
            }
            if (type.startsWith('verif')) {
                type = 'verify';
            }

            let datastore = PBS.Utils.parse_datastore_worker_id(type, task.worker_id);
            if (!datastore) {
                return;
            }

            if (!me.tasks[datastore]) {
                me.tasks[datastore] = {};
            }
            if (!me.tasks[datastore][type]) {
                me.tasks[datastore][type] = {};
            }
            if (me.tasks[datastore][type] && task.status) {
                let parsed = Proxmox.Utils.parse_task_status(task.status);
                if (!me.tasks[datastore][type][parsed]) {
                    me.tasks[datastore][type][parsed] = 0;
                }
                me.tasks[datastore][type][parsed]++;
            }
        });

        for (const [store, panel] of Object.entries(me.datastores)) {
            panel.setTasks(me.tasks[store], me.since);
        }
    },

    updateStores: function(usageStore, records, success) {
        let me = this;
        if (!success) {
            return;
        }

        let found = {};
        records.forEach((rec) => {
            found[rec.data.store] = true;
            me.addSorted(rec.data);
        });

        for (const [store, panel] of Object.entries(me.datastores)) {
            if (!found[store]) {
                me.remove(panel);
                delete me.datastores[store];
            }
        }

        let hasDatastores = Object.keys(me.datastores).length > 0;
        me.getComponent('emptybox').setHidden(hasDatastores);
    },

    addSorted: function(data) {
        let me = this;
        let i = 1;
        let datastores = Object
            .keys(me.datastores)
            .sort((a, b) => a.localeCompare(b));

        for (const datastore of datastores) {
            let result = datastore.localeCompare(data.store);
            if (result === 0) {
                me.datastores[datastore].setStatus(data);
                return;
            } else if (result > 0) {
                break;
            }
            i++;
        }

        me.datastores[data.store] = me.insert(i, {
            datastore: data.store,
        });
        me.datastores[data.store].setStatus(data);
        me.datastores[data.store].setTasks(me.tasks[data.store], me.since);
    },
    initComponent: function() {
        let me = this;
        me.items = [
            {
                itemId: 'emptybox',
                hidden: true,
                xtype: 'box',
                html: gettext('No Datastores configured'),
            },
        ];
        me.datastores = {};
        // todo make configurable?
        me.since = (Date.now()/1000 - 30 * 24*3600).toFixed(0);

        me.usageStore = Ext.create('Proxmox.data.UpdateStore', {
            storeid: 'datastore-overview-usage',
            interval: 5000,
            proxy: {
                type: 'proxmox',
                url: '/api2/json/status/datastore-usage',
            },
            listeners: {
                load: {
                    fn: me.updateStores,
                    scope: me,
                },
            },
        });

        me.taskStore = Ext.create('Proxmox.data.UpdateStore', {
            storeid: 'datastore-overview-tasks',
            interval: 15000,
            model: 'proxmox-tasks',
            proxy: {
                type: 'proxmox',
                url: '/api2/json/nodes/localhost/tasks',
                extraParams: {
                    limit: 0,
                    since: me.since,
                },
            },
            listeners: {
                load: {
                    fn: me.updateTasks,
                    scope: me,
                },
            },
        });

        me.callParent();

        Proxmox.Utils.monStoreErrors(me, me.usageStore);
        Proxmox.Utils.monStoreErrors(me, me.taskStore);

        me.on('activate', function() {
            me.usageStore.startUpdate();
            me.taskStore.startUpdate();
        });
        me.on('destroy', function() {
            me.usageStore.stopUpdate();
            me.taskStore.stopUpdate();
        });
        me.on('deactivate', function() {
            me.usageStore.stopUpdate();
            me.taskStore.stopUpdate();
        });
    },
});
Ext.define('PBS.datastore.DataStores', {
    extend: 'Ext.tab.Panel',
    alias: 'widget.pbsDataStores',

    title: gettext('Datastores'),

    stateId: 'pbs-datastores-panel',
    stateful: true,
    stateEvents: ['tabchange'],

    applyState: function(state) {
        let me = this;
        if (state.tab !== undefined && me.rendered) {
            me.setActiveTab(state.tab);
        } else if (state.tab) {
            // if we are not rendered yet, defer setting the activetab
            setTimeout(function() {
                me.setActiveTab(state.tab);
            }, 10);
        }
    },

    getState: function() {
        let me = this;
        return {
            tab: me.getActiveTab().getItemId(),
        };
    },

    border: false,
    defaults: {
        border: false,
    },

    tools: [PBS.Utils.get_help_tool("datastore_intro")],

    items: [
        {
            xtype: 'pbsDataStoreList',
            iconCls: 'fa fa-book',
        },
        {
            iconCls: 'fa fa-refresh',
            itemId: 'syncjobs',
            xtype: 'pbsSyncJobView',
        },
        {
            iconCls: 'fa fa-check-circle',
            itemId: 'verifyjobs',
            xtype: 'pbsVerifyJobView',
        },
        {
            itemId: 'acl',
            xtype: 'pbsACLView',
            iconCls: 'fa fa-unlock',
            aclPath: '/datastore',
        },
    ],

    initComponent: function() {
        let me = this;
        // remove invalid activeTab settings
        if (me.activeTab && !me.items.some((item) => item.itemId === me.activeTab)) {
            delete me.activeTab;
        }
        me.callParent();
    },
});
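`updateTasks` reduces the flat task list into nested counters keyed by datastore, normalized worker type, and parsed status, which each summary panel then consumes via `setTasks`. `updateStores` keeps one `pbsDataStoreListSummary` child per datastore, alphabetically ordered by `addSorted`; index 0 is reserved for the 'emptybox' placeholder, hence the insertion index starts at `let i = 1`. A hedged sketch of the resulting counter map for two hypothetical datastores (names, types, and counts are invented for illustration):

// Possible shape of me.tasks after updateTasks ran:
// tasks[datastore][type][parsedStatus] = count
let tasks = {
    store1: {
        backup: { ok: 12, warning: 1 },
        verify: { ok: 3, error: 1 },
        sync: { ok: 5 },
    },
    store2: {
        prune: { ok: 7 },
    },
};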


@@ -0,0 +1,182 @@
// Summary panel for a single datastore in the overview
Ext.define('PBS.datastore.DataStoreListSummary', {
    extend: 'Ext.panel.Panel',
    alias: 'widget.pbsDataStoreListSummary',
    mixins: ['Proxmox.Mixin.CBind'],

    cbind: {
        title: '{datastore}',
    },
    referenceHolder: true,
    bodyPadding: 10,

    layout: {
        type: 'hbox',
        align: 'stretch',
    },

    viewModel: {
        data: {
            full: "N/A",
            stillbad: 0,
            deduplication: 1.0,
        },
    },

    setTasks: function(taskdata, since) {
        let me = this;
        me.down('pbsTaskSummary').updateTasks(taskdata, since);
    },

    setStatus: function(statusData) {
        let me = this;
        let vm = me.getViewModel();

        let usage = statusData.used/statusData.total;
        let usagetext = Ext.String.format(gettext('{0} of {1}'),
            Proxmox.Utils.format_size(statusData.used),
            Proxmox.Utils.format_size(statusData.total),
        );

        let usagePanel = me.lookup('usage');
        usagePanel.updateValue(usage, usagetext);

        let estimate = PBS.Utils.render_estimate(statusData['estimated-full-date']);
        vm.set('full', estimate);
        vm.set('deduplication', PBS.Utils.calculate_dedup_factor(statusData['gc-status']).toFixed(2));
        vm.set('stillbad', statusData['gc-status']['still-bad']);

        let last = 0;
        let time = statusData['history-start'];
        let delta = statusData['history-delta'];
        let data = statusData.history.map((val) => {
            if (val === null) {
                val = last;
            } else {
                last = val;
            }
            let entry = {
                time: time*1000, // js Dates are ms since epoch
                val,
            };
            time += delta;
            return entry;
        });

        me.lookup('historychart').setData(data);
    },
    items: [
        {
            xtype: 'container',
            layout: {
                type: 'vbox',
                align: 'stretch',
            },
            width: 375,
            padding: '5 25 5 5',
            defaults: {
                padding: 2,
            },
            items: [
                {
                    xtype: 'proxmoxGauge',
                    warningThreshold: 0.8,
                    criticalThreshold: 0.95,
                    flex: 1,
                    reference: 'usage',
                },
                {
                    xtype: 'pmxInfoWidget',
                    iconCls: 'fa fa-fw fa-line-chart',
                    title: gettext('Estimated Full'),
                    printBar: false,
                    bind: {
                        data: {
                            text: '{full}',
                        },
                    },
                },
                {
                    xtype: 'pmxInfoWidget',
                    iconCls: 'fa fa-fw fa-compress',
                    title: gettext('Deduplication Factor'),
                    printBar: false,
                    bind: {
                        data: {
                            text: '{deduplication}',
                        },
                    },
                },
                {
                    xtype: 'pmxInfoWidget',
                    iconCls: 'fa critical fa-fw fa-exclamation-triangle',
                    title: gettext('Bad Chunks'),
                    printBar: false,
                    hidden: true,
                    bind: {
                        data: {
                            text: '{stillbad}',
                        },
                        visible: '{stillbad}',
                    },
                },
            ],
        },
        {
            xtype: 'container',
            layout: {
                type: 'vbox',
                align: 'stretch',
            },
            flex: 1,
            items: [
                {
                    padding: 5,
                    xtype: 'pbsUsageChart',
                    reference: 'historychart',
                    title: gettext('Usage History'),
                    height: 100,
                },
                {
                    xtype: 'container',
                    flex: 1,
                    layout: {
                        type: 'vbox',
                        align: 'stretch',
                    },
                    defaults: {
                        padding: 5,
                    },
                    items: [
                        {
                            xtype: 'label',
                            text: gettext('Task Summary')
                                + ` (${Ext.String.format(gettext('{0} days'), 30)})`,
                        },
                        {
                            xtype: 'pbsTaskSummary',
                            border: false,
                            header: false,
                            subPanelModal: true,
                            flex: 2,
                            bodyPadding: 0,
                            minHeight: 0,
                            cbind: {
                                datastore: '{datastore}',
                            },
                        },
                    ],
                },
            ],
        },
    ],
});
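`setStatus` consumes one record from the `datastore-usage` endpoint: it fills the gauge from `used`/`total`, derives the 'Estimated Full' and deduplication values, and turns the `history` array (where `null` repeats the last seen value) into chart points starting at `history-start` and spaced `history-delta` seconds apart. A hedged example call with made-up values; the `gc-status` deduplication counters are elided here, so only the field names that `setStatus` visibly reads are shown:

// Hypothetical invocation of the panel above, values invented.
summaryPanel.setStatus({
    used: 256 * 1024 * 1024 * 1024,    // 256 GiB
    total: 1024 * 1024 * 1024 * 1024,  // 1 TiB
    'estimated-full-date': 1638316800, // epoch seconds
    'gc-status': { 'still-bad': 0 },   // dedup counters omitted in this sketch
    'history-start': 1606089600,       // epoch seconds of the first sample
    'history-delta': 3600,             // one sample per hour
    history: [0.20, null, 0.22, 0.25], // null repeats the previous value
});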


@@ -56,7 +56,7 @@ Ext.define('PBS.DataStoreNotes', {
         url: me.url,
         waitMsgTarget: me,
         failure: function(response, opts) {
-            me.update(gettext('Error') + " " + response.htmlStatus);
+            Ext.Msg.alert(gettext('Error'), response.htmlStatus);
             me.setCollapsed(false);
         },
         success: function(response, opts) {
