Compare commits

...

92 Commits

Author SHA1 Message Date
1a48cbf164 bump version to 0.9.0-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-01 16:19:49 +02:00
3480777d89 d/control: bump versioned dependency of proxmox-widget-toolkit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-01 15:30:08 +02:00
a71bc08ff4 src/tools/parallel_handler.rs: remove lifetime hacks, require 'static
In theory, one can do std::mem::forget, and ignore the drop handler. With
the lifetime hack, this could result in a crash.

So we simply require 'static lifetime now (futures also needs that).
2020-10-01 14:52:48 +02:00
df766e668f d/control: add pve-eslint to build dependencies
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-01 14:46:30 +02:00
0a8f3ae0b3 src/tools/parallel_handler.rs: cleanup check_abort code 2020-10-01 14:37:29 +02:00
da6e67b321 rrd: fix integer underflow
Causes a panic if last_update is smaller than RRD_DATA_ENTRIES*reso,
which (I believe) can happen when inserting the first value for a DB.

Clamp the value to 0 in that case.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-01 14:30:32 +02:00
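The fix above boils down to a saturating subtraction. A minimal sketch, not the actual rrd.rs code (the constant value and helper name are made up for illustration):

    const RRD_DATA_ENTRIES: u64 = 70; // placeholder value, not the real constant

    // Start of the time window covered by the RRD, clamped to 0 for young databases.
    fn window_start(last_update: u64, reso: u64) -> u64 {
        // `last_update - RRD_DATA_ENTRIES * reso` would underflow (and panic in
        // debug builds) when last_update is smaller than the product, e.g. right
        // after the first value was inserted into a new DB.
        last_update.saturating_sub(RRD_DATA_ENTRIES * reso)
    }

    fn main() {
        assert_eq!(window_start(10, 60), 0); // clamped instead of panicking
    }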
dec00364b3 ParallelHandler: check for errors during thread join
Fix a potential bug where errors that happen after the SendHandle has
been dropped while doing the thread join might have been ignored.
Requires internal check_abort to be moved out of 'impl SendHandle' since
we only have the Mutex left, not the SendHandle.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-01 14:30:32 +02:00
5637087cc9 www: do incremental lint for development, full for build
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-01 13:14:03 +02:00
5ad4bdc482 eslint fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-01 13:03:14 +02:00
823867f5b7 datastore: gc: avoid unsafe call into libc, use epoch_i64 helper
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-01 12:38:38 +02:00
c6772c92b8 datastore: gc: comment exclusive process lock
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-01 12:38:04 +02:00
79f6a79cfc assume correct backup, avoid verifying chunk existence
This can slow things down by a lot on setups with (relatively) high
seek time, on the order of doubling the backup time if the cache isn't
populated with the last backup's chunk inode info.

Effectively, there's nothing known in the codebase that this protects
us from. The only scenario that was theorized about is a really long
running backup job (over 24 hours) that is still writing new chunks,
not yet indexed anywhere, when an update (or manual action) triggers a
reload of the proxy. The theory was that a GC in the new daemon would
not know about the oldest writer in the old one, and thus use a less
strict atime limit for chunk sweeping, opening up a window for deleting
chunks from the long-running backup.
But this simply cannot happen, as we have a per-datastore, process-wide
flock which is acquired shared by backup jobs and exclusive by GC. In
the same process, GC and backup can both get it, as the lock has
process granularity. If there's an old daemon with a writer, that
daemon also holds the lock shared, so no GC in the new process can get
exclusive access to it.

So, with that confirmed, we have no need for a "half-assed"
verification in the backup finish step. Rather, we plan to add an
opt-in "full verify each backup on finish" option (see #2988).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-01 12:06:59 +02:00
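A minimal sketch of the shared/exclusive locking pattern described above, using the fs2 crate purely for illustration; the actual codebase uses its own locking helpers (where GC and backup inside one process can share the handle), and the lock file path here is made up:

    use std::fs::OpenOptions;
    use fs2::FileExt; // assumption: fs2 = "0.4" as an illustrative dependency

    fn main() -> std::io::Result<()> {
        let path = "/tmp/.datastore-example.lck"; // hypothetical lock file path

        // A backup job (possibly in an old, still-running daemon) holds a shared lock.
        let writer = OpenOptions::new().create(true).write(true).open(path)?;
        writer.lock_shared()?;

        // GC asks for an exclusive lock on a separate open file description;
        // this fails (or blocks) as long as any shared holder exists, closing
        // the theorized race window across old and new daemon processes.
        let gc = OpenOptions::new().write(true).open(path)?;
        if gc.try_lock_exclusive().is_err() {
            println!("GC postponed: a backup writer still holds the shared lock");
        }

        writer.unlock()?;
        Ok(())
    }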
4c7f100d22 src/api2/reader.rs: fix speedtest description 2020-10-01 11:16:15 +02:00
9070d11f4c src/api2/backup.rs: use block_in_place for remove_backup 2020-10-01 11:11:14 +02:00
124b93f31c upload_chunk: use block_in_place 2020-10-01 11:00:23 +02:00
0f22f53b36 ui: RemoteEdit: remove port field and parse it from host field
use our hostport regexes to parse out a potential port from the host field
and send it individually

this makes for a simpler and cleaner ui

this additionally checks the field for valid input before sending it to
the backend

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-01 10:12:04 +02:00
3784dbf029 ui: RemoteView: improve host columns
do not show the default (8007) port
and only add brackets [] to ipv6 addresses if there is a port

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-01 10:11:31 +02:00
4c95d58c41 api2/types: fix DNS_NAME Regexes
We forgot to put parentheses around the DNS_NAME regex when embedding
it in DNS_NAME_OR_IP_REGEX.

This is wrong because the regex

 ^foo|bar$

matches 'foo' at the beginning or 'bar' at the end, so either of

 foobaz
 bazbar

would match. Only

 ^(foo|bar)$

matches exactly 'foo' or 'bar'.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-01 06:09:34 +02:00
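A small demonstration of the anchoring pitfall described above, using the regex crate (illustrative only, not the actual api2/types code):

    use regex::Regex;

    fn main() {
        // Without a group, ^ binds only to "foo" and $ only to "bar".
        let unanchored = Regex::new(r"^foo|bar$").unwrap();
        assert!(unanchored.is_match("foobaz")); // matches "foo" at the start
        assert!(unanchored.is_match("bazbar")); // matches "bar" at the end

        // With the group, the whole alternation is anchored on both sides.
        let anchored = Regex::new(r"^(foo|bar)$").unwrap();
        assert!(!anchored.is_match("foobaz"));
        assert!(!anchored.is_match("bazbar"));
        assert!(anchored.is_match("foo") && anchored.is_match("bar"));
    }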
38d4675921 fix ipv6 handling for remotes/sync jobs
* add square brackets to IPv6 addresses in BackupRepository if they do
  not already have them (we save them without brackets in the remote config)

* in get_pull_parameters, we now create a BackupRepository first and use
  those values (which does the [] mapping); this also has the advantage
  that there is one place fewer where we hardcode 8007 as the port

* in the ui, add square brackets for IPv6 addresses for remotes

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 13:40:03 +02:00
7b8aa893fa src/client/pull.rs: log progress 2020-09-30 13:35:09 +02:00
fb2678f96e www/index.hbs: add nodename to title 2020-09-30 12:10:04 +02:00
486ed27299 ui: improve running task overlay
by setting a maxHeight+scrollable
(I used 500px so it is still visible at our 'min screen size' of 1280x720)

and by disabling emptyText deferral, which now shows the text instantly

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 11:07:08 +02:00
df4827f2c0 tasks: improve behaviour on upgrade
when upgrading from a version where we stored all tasks in the 'active' file,
we did not completely account for finished tasks still there

we should update the file when encountering any finished task in
'active' as well as filter them out on the api call (if they get through)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 11:05:50 +02:00
ef1b436350 paperkey: add html output 2020-09-30 10:49:20 +02:00
b19b4bfcb0 examples: fix HttpClient::new usage 2020-09-30 10:49:20 +02:00
e64b9f9204 src/tools.rs: make command_output return Vec<u8>
And add a new helper to return output as string.
2020-09-30 10:49:20 +02:00
9c33683c25 ui: add port support for remotes
by adding a field to RemoteEdit and showing it in the grid

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 10:49:20 +02:00
ba20987ae7 client/remote: add support to specify port number
this adds the ability to add port numbers in the backup repo spec
as well as remotes, so that users who are behind a
NAT/firewall/reverse proxy can still use it

also adds some explanation and examples to the docs to make it clearer.
For the h2 client I left the localhost:8007 part, since where we bind
to is not configurable

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 10:49:20 +02:00
729d41fe6a api: disks/zfs: check template exists before enabling zfs-import service
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-30 09:34:21 +02:00
905147a5ee api2/node/disks/zfs: instantiate import service
When creating a new zpool for a datastore, also instantiate an
import-unit for it. This helps in cases where '/etc/zfs/zpool.cache'
gets corrupted and thus the pool is not imported upon boot.

This patch needs the corresponding addition of 'zfs-import@.service' in
the zfsonlinux repository.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-09-30 08:43:38 +02:00
0c41e0d06b ui: add task description for logrotation
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 06:17:07 +02:00
b37b59b726 ui: RemoteEdit: make comment and fingerprint deletable
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 06:16:53 +02:00
60b9b48e71 require square brackets for ipv6 addresses
we need this because we append the port to the address to get a target URL,
e.g. we print

format!("https://{}:8007/", address)

if address is now an ipv6 (e.g. fe80::1) it would become

https://fe80::1:8007/ which is a valid ipv6 on its own

by using square brackets we get:

https://[fe80::1]:8007/ which now connects to the correct ip/port

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 06:16:27 +02:00
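A rough sketch of the bracket handling described above; the helper name is made up, and the real code instead requires/normalizes brackets in the configuration:

    // Wrap bare IPv6 addresses in brackets before appending a port, so the
    // port cannot be mistaken for part of the address.
    fn format_target(address: &str, port: u16) -> String {
        if address.contains(':') && !address.starts_with('[') {
            format!("https://[{}]:{}/", address, port)
        } else {
            format!("https://{}:{}/", address, port)
        }
    }

    fn main() {
        assert_eq!(format_target("fe80::1", 8007), "https://[fe80::1]:8007/");
        assert_eq!(format_target("backup.example.org", 8007), "https://backup.example.org:8007/");
    }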
abf8b5d475 docs: fix wrong user in repository explanation
we use 'root@pam' by default, not 'root'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 06:14:36 +02:00
7eebe1483e server/worker_task: fix panic on slice range when index is empty
since len() and MAX_INDEX_TASKS are both usize, the subtraction
underflows instead of producing negative values

instead check the sizes and set them accordingly

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 06:11:06 +02:00
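An illustrative sketch of the underflow-safe size check (names are placeholders, not the actual worker_task.rs code):

    const MAX_INDEX_TASKS: usize = 1000;

    // Range selecting "the last MAX_INDEX_TASKS entries" of the index.
    fn tail_range(len: usize) -> std::ops::Range<usize> {
        // `len - MAX_INDEX_TASKS` would underflow (and panic in debug builds)
        // for an empty or short list, because both operands are usize.
        let start = len.saturating_sub(MAX_INDEX_TASKS);
        start..len
    }

    fn main() {
        assert_eq!(tail_range(0), 0..0);        // empty index: no panic
        assert_eq!(tail_range(1500), 500..1500);
    }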
9a76091785 proxmox-backup-proxy: add task archive rotation
this starts a task once a day at "00:00" that rotates the task log
archive if it is bigger than 500k

if we want, we can make the schedule/size limit/etc. configurable,
but for now it's ok to set fixed values for that

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-29 08:41:18 +02:00
c386b06fc6 server/worker_task: remove unnecessary read_task_list
since there are no users of this anymore and we now have a nicer
TaskListInfoIterator to use, we can drop this function

this also means that 'update_active_workers' does not need to return
a list anymore since we never used that result besides in
read_task_list

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-29 08:40:50 +02:00
6bcfc5c1a4 api2/status: use the TaskListInfoIterator here
this means that limiting with epoch now works correctly.
Also change the api type to i64, since that is what the starttime is
saved as.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-29 08:40:24 +02:00
768e10d0b3 api2/node/tasks: use TaskListInfoIterator instead of read_task_list
this makes the filtering/limiting much nicer and readable

since we now have a potentially 'infinite' amount of tasks to iterate over,
and cannot know beforehand how many there are, we return the total count
as always 1 higher than requested iff we are not at the end (this is
the case when the number of entries is smaller than the requested limit)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-29 08:40:02 +02:00
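A tiny sketch of that "virtual" total used for paging (names are placeholders, not the actual api2/node/tasks.rs code):

    // Return one page of items plus a total suitable for a paging UI:
    // if the page is full, report one extra entry so the client knows
    // that more pages may exist.
    fn page<T: Clone>(items: &[T], start: usize, limit: usize) -> (Vec<T>, usize) {
        let page: Vec<T> = items.iter().skip(start).take(limit).cloned().collect();
        let mut total = start + page.len();
        if !page.is_empty() && page.len() >= limit {
            total += 1; // "virtual" entry: we are not at the end yet
        }
        (page, total)
    }

    fn main() {
        let data: Vec<u32> = (0..25).collect();
        assert_eq!(page(&data, 0, 10).1, 11);  // not at the end: 1 higher than requested
        assert_eq!(page(&data, 20, 10).1, 25); // at the end: exact count
    }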
e7244387c7 server/worker_task: add TaskListInfoIterator
this is an iterator that reads/parses/updates the task list as
necessary and returns the tasks in descending order (newest first)

it does this by using our logrotate iterator and a VecDeque

we can use this to iterate over all tasks, even if they are in the
archive and even if the archive is logrotated, while only reading
as much as we need

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-29 08:39:16 +02:00
5ade6c25f3 server/worker_task: write older tasks into archive file
instead of removing tasks beyond the 1000 that are in the index,
write them into an archive file by appending them at the end;
this way we can still read them later

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-29 08:38:44 +02:00
784fa1c2e3 server/worker_task: split task list file into two
one for only the active tasks and one for up to 1000 finished tasks

factor out the parsing of a task file (we will later need this again)
and use iterator combinators for easier code

we now sort the tasks ascending (this will become important in a later patch)
but reverse it (for now) to keep compatibility

this code also omits the conversion into an intermediate hash,
since it cannot really happen that we have duplicate tasks in this list
(the call is locked by a flock, and it is the only place where we
write into the lists)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-29 08:38:28 +02:00
66f4e6a809 server/worker_task: refactor locking of the task list
also add the functionality of having a 'shared' (read) lock for the list;
we will need this later

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-29 08:37:54 +02:00
8074d2b0c3 tools: add logrotate module
this is a helper to rotate and iterate over log files.
There is an iterator for open file handles as well as one for
only the filenames.

It also has the possibility to rotate them;
zstd is used for compression.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-29 08:33:21 +02:00
b02d49ab26 proxmox_backup_client key: allow to generate paperkey for master key 2020-09-29 08:29:42 +02:00
82a0cd2ad4 proxmox_backup_client key: add new paper-key command 2020-09-29 08:29:42 +02:00
ee1a9c3230 parallel_handler: clippy: 'while_let_loop'
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-09-29 08:13:51 +02:00
db24c01106 parallel_handler: explicit Arc::clone
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-09-28 13:40:03 +02:00
ae3cfa8f0d parallel_handler: formatting cleanup, doc comment typo fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-09-28 13:40:03 +02:00
b56c111e93 depend on proxmox 0.4.2 2020-09-28 10:50:44 +02:00
bbeb0256f1 server/worker_task: factor out task list rendering
we will need this later again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-28 07:31:27 +02:00
005a5b9677 api2/node/tasks: move userfilter to function signature
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-28 07:18:13 +02:00
55bee04856 src/tools/parallel_handler.rs: remove unnecessary Sync bound 2020-09-26 16:16:11 +02:00
42fd40a124 src/bin/proxmox_backup_client/benchmark.rs: avoid compiler warning 2020-09-26 16:13:19 +02:00
f21508b9e1 src/backup/verify.rs: use ParallelHandler to verify chunks 2020-09-26 11:14:37 +02:00
ee7a308de4 src/backup/verify.rs: cleanup use clause 2020-09-26 10:23:44 +02:00
636e674ee7 src/client/pull.rs: simplify code 2020-09-26 10:09:51 +02:00
b02b374b46 src/tools/parallel_handler.rs: remove static lifetime bound from handler_fn 2020-09-26 09:26:06 +02:00
1c13afa8f9 src/tools/parallel_handler.rs: join all threads in drop handler 2020-09-26 08:47:56 +02:00
69b92fab7e src/tools/parallel_handler.rs: remove unnecessary Sync trait bound 2020-09-26 07:38:44 +02:00
6ab77df3f5 ui: some more eslint auto-fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-25 18:47:25 +02:00
264c19582b ui: some more eslint auto-fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-25 18:36:58 +02:00
8acd4d9afc ui: some more eslint auto-fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-25 18:34:54 +02:00
65b0cea6bd ui: some eslint auto-fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-25 18:29:42 +02:00
cfe01b2e6a bump version to 0.8.21-1 2020-09-25 13:20:35 +02:00
b19b032be3 debian/control: update 2020-09-25 13:17:49 +02:00
5441708634 src/client/pull.rs: use new ParallelHandler 2020-09-25 12:58:20 +02:00
3c9b370255 src/tools/parallel_handler.rs: execute closure inside a thread pool 2020-09-25 12:58:20 +02:00
510544770b depend on crossbeam-channel 2020-09-25 12:58:20 +02:00
e8293841c2 docs: html: show "Proxmox Backup" in navi for small devices
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-24 20:03:17 +02:00
46114bf28e docs: html: improve css for small displays
fixed-width navi/toc links were not switched in color for small width
displays, and thus they were barely readable as the background
switches to dark for small widths.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-24 20:03:17 +02:00
0d7e61f06f docs: buildsys: add more dependencies to html target
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-24 19:45:23 +02:00
fd6a54dfbc docs: conf: fix conf for new alabaster theme version
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-24 19:44:50 +02:00
1ea5722b8f docs: html: adapt custom css
highlighting the current chapter and some other small formatting
improvements

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-24 19:44:00 +02:00
bc8fadf494 docs: index: hide todo list toctree and genindex
I did not find another way to disable inclusion in the sidebar...

The genindex information is already provided through the glossary.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-24 19:43:18 +02:00
a76934ad33 docs: html: adapt sidebar in index page
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-24 19:41:19 +02:00
d7a122a026 use jobstate mechanism for verify/garbage_collection schedules
also changes:
* correct comment about reset (replace 'sync' with 'action')
* check schedule change correctly (only when it is actually changed)

with these changes, we can drop the 'lookup_last_worker' method

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-24 17:06:12 +02:00
6c25588e63 proxy: fix error handling in prune scheduling
we rely on the jobstate handling to write the error of the worker
into its state file, but we used '?' here in a block which does not
return the error to the block, but to the function/closure instead

so if a prune job failed because of such an '?', we did not write
into the statefile and got a wrong state there

instead use our try_block! macro that wraps the code in a closure

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-24 17:06:09 +02:00
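A minimal sketch of the control-flow issue described above; try_block! from the proxmox crate essentially wraps the code in a closure like this (simplified, names invented):

    fn schedule_prune() -> Result<(), String> {
        // Wrong: a `?` inside a plain block returns from schedule_prune() itself,
        // so code that records the job state after the block would be skipped:
        //
        //     let result = { failing_step()?; Ok::<(), String>(()) };

        // Right: run the fallible code inside a closure so the error is captured
        // locally and can be written into the job state file.
        let result = (|| -> Result<(), String> {
            failing_step()?;
            Ok(())
        })();

        if let Err(ref err) = result {
            println!("recording job error in state file: {}", err); // stand-in for jobstate handling
        }
        result
    }

    fn failing_step() -> Result<(), String> {
        Err("prune failed".to_string())
    }

    fn main() {
        assert!(schedule_prune().is_err());
    }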
17a1f579d0 bump version to 0.8.20-1 2020-09-24 13:17:06 +02:00
998db63933 src/client/pull.rs: decode, verify and write in separate threads
To maximize throughput.
2020-09-24 13:12:04 +02:00
c0fa14d94a src/backup/data_blob.rs: add is_encrypted helper 2020-09-24 13:00:16 +02:00
6fd129844d remove DummyCatalogWriter
we're using an `Option` instead now

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-09-24 09:13:54 +02:00
baae780c99 benchmark: use compressible data to get a more realistic result
And add a benchmark to test chunk verify speed (decompress+sha256).
2020-09-24 08:58:13 +02:00
09a1da25ed src/backup/data_blob.rs: improve decompress speed 2020-09-24 08:52:35 +02:00
298c6aaef6 docs: add onlineHelp to some panels
name sections according to the title or content and add
the respective onlineHelp to the following panels:
- datastore
- user management
- ACL
- backup remote

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Reviewed-By: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Dominik Csapak <d.csapak@proxmox.com>
2020-09-22 19:48:32 +02:00
a329324139 bump version to 0.8.19-1 2020-09-22 13:30:52 +02:00
a83e2ffeab src/api2/reader.rs: use std::fs::read instead of tokio::fs::read
Because it is about 10% faster this way.
2020-09-22 13:27:23 +02:00
5d7449a121 bump version to 0.8.18-1 2020-09-22 12:39:47 +02:00
ebbe4958c6 src/client/pull.rs: avoid duplicate downloads using in memory HashSet 2020-09-22 12:34:06 +02:00
73b2cc4977 src/client/pull.rs: allow up to 20 concurrent download streams 2020-09-22 11:39:31 +02:00
7ecfde8150 remote_chunk_reader.rs: use Arc for cache_hint to make clone faster 2020-09-22 11:39:31 +02:00
796480a38b docs: add version and date to HTML index
Similar to the PDF output or the Proxmox VE docs.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-22 09:00:12 +02:00
87 changed files with 2138 additions and 821 deletions


@@ -1,6 +1,6 @@
 [package]
 name = "proxmox-backup"
-version = "0.8.17"
+version = "0.9.0"
 authors = ["Dietmar Maurer <dietmar@proxmox.com>"]
 edition = "2018"
 license = "AGPL-3"
@@ -38,7 +38,7 @@ pam-sys = "0.5"
 percent-encoding = "2.1"
 pin-utils = "0.1.0"
 pathpatterns = "0.1.2"
-proxmox = { version = "0.4.1", features = [ "sortable-macro", "api-macro", "websocket" ] }
+proxmox = { version = "0.4.2", features = [ "sortable-macro", "api-macro", "websocket" ] }
 #proxmox = { git = "ssh://gitolite3@proxdev.maurer-it.com/rust/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
 #proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
 proxmox-fuse = "0.1.0"
@@ -61,6 +61,7 @@ walkdir = "2"
 xdg = "2.2"
 zstd = { version = "0.4", features = [ "bindgen" ] }
 nom = "5.1"
+crossbeam-channel = "0.4"
 [features]
 default = []

debian/changelog

@@ -1,3 +1,72 @@
rust-proxmox-backup (0.9.0-1) unstable; urgency=medium
* use ParallelHandler to verify chunks
* client: add new paper-key command to CLI tool
* server: split task list in active and archived
* tools: add logrotate module and use it for archived tasks, allowing to save
more than 100 thousands of tasks efficiently in the archive
* require square [brackets] for ipv6 addresses and fix ipv6 handling for
remotes/sync jobs
* ui: RemoteEdit: make comment and fingerprint deletable
* api/disks: create zfs: enable import systemd service unit for newly created
ZFS pools
* client and remotes: add support to specify a custom port number. The server
is still always listening on 8007, but you can now use things like reverse
proxies or port mapping.
* ui: RemoteEdit: allow to specify a port in the host field
* client pull: log progress
* various fixes and improvements
-- Proxmox Support Team <support@proxmox.com> Thu, 01 Oct 2020 16:19:40 +0200
rust-proxmox-backup (0.8.21-1) unstable; urgency=medium
* depend on crossbeam-channel
* speedup sync jobs (allow up to 4 worker threads)
* improve docs
* use jobstate mechanism for verify/garbage_collection schedules
* proxy: fix error handling in prune scheduling
-- Proxmox Support Team <support@proxmox.com> Fri, 25 Sep 2020 13:20:19 +0200
rust-proxmox-backup (0.8.20-1) unstable; urgency=medium
* improve sync speed
* benchmark: use compressable data to get more realistic result
* docs: add onlineHelp to some panels
-- Proxmox Support Team <support@proxmox.com> Thu, 24 Sep 2020 13:15:45 +0200
rust-proxmox-backup (0.8.19-1) unstable; urgency=medium
* src/api2/reader.rs: use std::fs::read instead of tokio::fs::read
-- Proxmox Support Team <support@proxmox.com> Tue, 22 Sep 2020 13:30:27 +0200
rust-proxmox-backup (0.8.18-1) unstable; urgency=medium
* src/client/pull.rs: allow up to 20 concurrent download streams
* docs: add version and date to HTML index
-- Proxmox Support Team <support@proxmox.com> Tue, 22 Sep 2020 12:39:26 +0200
rust-proxmox-backup (0.8.17-1) unstable; urgency=medium
* src/client/pull.rs: open temporary manifest with truncate(true)

debian/control

@@ -12,6 +12,7 @@ Build-Depends: debhelper (>= 11),
 librust-bitflags-1+default-dev (>= 1.2.1-~~),
 librust-bytes-0.5+default-dev,
 librust-crc32fast-1+default-dev,
+librust-crossbeam-channel-0.4+default-dev,
 librust-endian-trait-0.6+arrays-dev,
 librust-endian-trait-0.6+default-dev,
 librust-futures-0.3+default-dev,
@@ -33,10 +34,10 @@ Build-Depends: debhelper (>= 11),
 librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
 librust-percent-encoding-2+default-dev (>= 2.1-~~),
 librust-pin-utils-0.1+default-dev,
-librust-proxmox-0.4+api-macro-dev (>= 0.4.1-~~),
-librust-proxmox-0.4+default-dev (>= 0.4.1-~~),
-librust-proxmox-0.4+sortable-macro-dev (>= 0.4.1-~~),
-librust-proxmox-0.4+websocket-dev (>= 0.4.1-~~),
+librust-proxmox-0.4+api-macro-dev (>= 0.4.2-~~),
+librust-proxmox-0.4+default-dev (>= 0.4.2-~~),
+librust-proxmox-0.4+sortable-macro-dev (>= 0.4.2-~~),
+librust-proxmox-0.4+websocket-dev (>= 0.4.2-~~),
 librust-proxmox-fuse-0.1+default-dev,
 librust-pxar-0.6+default-dev (>= 0.6.1-~~),
 librust-pxar-0.6+futures-io-dev (>= 0.6.1-~~),
@@ -77,6 +78,7 @@ Build-Depends: debhelper (>= 11),
 uuid-dev,
 debhelper (>= 12~),
 bash-completion,
+pve-eslint,
 python3-docutils,
 python3-pygments,
 rsync,
@@ -117,7 +119,7 @@ Description: Proxmox Backup Server daemon with tools and GUI
 Package: proxmox-backup-client
 Architecture: any
-Depends: ${misc:Depends}, ${shlibs:Depends}
+Depends: qrencode ${misc:Depends}, ${shlibs:Depends}
 Description: Proxmox Backup Client tools
 This package contains the Proxmox Backup client, which provides a
 simple command line tool to create and restore backups.

debian/control.in

@@ -7,7 +7,7 @@ Depends: fonts-font-awesome,
 pbs-i18n,
 proxmox-backup-docs,
 proxmox-mini-journalreader,
-proxmox-widget-toolkit (>= 2.2-4),
+proxmox-widget-toolkit (>= 2.3-1),
 pve-xtermjs (>= 4.7.0-1),
 smartmontools,
 ${misc:Depends},
@@ -19,7 +19,7 @@ Description: Proxmox Backup Server daemon with tools and GUI
 Package: proxmox-backup-client
 Architecture: any
-Depends: ${misc:Depends}, ${shlibs:Depends}
+Depends: qrencode ${misc:Depends}, ${shlibs:Depends}
 Description: Proxmox Backup Client tools
 This package contains the Proxmox Backup client, which provides a
 simple command line tool to create and restore backups.


@@ -14,6 +14,7 @@ section = "admin"
 build_depends = [
 "debhelper (>= 12~)",
 "bash-completion",
+"pve-eslint",
 "python3-docutils",
 "python3-pygments",
 "rsync",


@@ -74,7 +74,7 @@ onlinehelpinfo:
 @echo "Build finished. OnlineHelpInfo.js is in $(BUILDDIR)/scanrefs."
 .PHONY: html
-html: ${GENERATED_SYNOPSIS}
+html: ${GENERATED_SYNOPSIS} images/proxmox-logo.svg custom.css conf.py
 $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
 cp images/proxmox-logo.svg $(BUILDDIR)/html/_static/
 cp custom.css $(BUILDDIR)/html/_static/

docs/_templates/index-sidebar.html (new file)

@@ -0,0 +1,11 @@
<h3>Navigation</h3>
{{ toctree(includehidden=theme_sidebar_includehidden, collapse=True, titles_only=True) }}
{% if theme_extra_nav_links %}
<hr />
<h3>Links</h3>
<ul>
{% for text, uri in theme_extra_nav_links.items() %}
<li class="toctree-l1"><a href="{{ uri }}">{{ text }}</a></li>
{% endfor %}
</ul>
{% endif %}

docs/_templates/sidebar-header.html (new file)

@@ -0,0 +1,7 @@
<p class="logo">
<a href="index.html">
<img class="logo" src="_static/proxmox-logo.svg" alt="Logo">
</a>
</p>
<h1 class="logo logo-name"><a href="index.html">Proxmox Backup</a></h1>
<hr style="width:100%;">


@@ -127,7 +127,7 @@ Backup Server Management
 The command line tool to configure and manage the backup server is called
 :command:`proxmox-backup-manager`.
+.. _datastore_intro:
 :term:`DataStore`
 ~~~~~~~~~~~~~~~~~
@@ -364,7 +364,7 @@ directories will store the chunked data after a backup operation has been execut
 276489 drwxr-xr-x 3 backup backup 4.0K Jul 8 12:35 ..
 276490 drwxr-x--- 1 backup backup 1.1M Jul 8 12:35 .
+.. _user_mgmt:
 User Management
 ~~~~~~~~~~~~~~~
@@ -448,6 +448,8 @@ Or completely remove the user with:
 # proxmox-backup-manager user remove john@pbs
+.. _user_acl:
 Access Control
 ~~~~~~~~~~~~~~
@@ -631,6 +633,8 @@ You can also configure DNS settings, from the **DNS** section
 of **Configuration** or by using the ``dns`` subcommand of
 ``proxmox-backup-manager``.
+.. _backup_remote:
 :term:`Remote`
 ~~~~~~~~~~~~~~
@@ -728,15 +732,33 @@ Repository Locations
 The client uses the following notation to specify a datastore repository
 on the backup server.
-[[username@]server:]datastore
+[[username@]server[:port]:]datastore
-The default value for ``username`` ist ``root``. If no server is specified,
+The default value for ``username`` ist ``root@pam``. If no server is specified,
 the default is the local host (``localhost``).
+You can specify a port if your backup server is only reachable on a different
+port (e.g. with NAT and port forwarding).
+Note that if the server is an IPv6 address, you have to write it with
+square brackets (e.g. [fe80::01]).
 You can pass the repository with the ``--repository`` command
 line option, or by setting the ``PBS_REPOSITORY`` environment
 variable.
+Here some examples of valid repositories and the real values
+================================ ============ ================== ===========
+Example                          User         Host:Port          Datastore
+================================ ============ ================== ===========
+mydatastore                      ``root@pam`` localhost:8007     mydatastore
+myhostname:mydatastore           ``root@pam`` myhostname:8007    mydatastore
+user@pbs@myhostname:mydatastore  ``user@pbs`` myhostname:8007    mydatastore
+192.168.55.55:1234:mydatastore   ``root@pam`` 192.168.55.55:1234 mydatastore
+[ff80::51]:mydatastore           ``root@pam`` [ff80::51]:8007    mydatastore
+[ff80::51]:1234:mydatastore      ``root@pam`` [ff80::51]:1234    mydatastore
+================================ ============ ================== ===========
 Environment Variables
 ~~~~~~~~~~~~~~~~~~~~~


@ -97,12 +97,10 @@ language = None
# There are two options for replacing |today|: either, you set today to some # There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used: # non-false value, then it is used:
#
# today = '' # today = ''
# #
# Else, today_fmt is used as the format for a strftime call. # Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%A, %d %B %Y'
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and # List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files. # directories to ignore when looking for source files.
@ -164,18 +162,19 @@ html_theme = 'alabaster'
# #
html_theme_options = { html_theme_options = {
'fixed_sidebar': True, 'fixed_sidebar': True,
#'sidebar_includehidden': False, 'sidebar_includehidden': False,
'sidebar_collapse': False, # FIXME: documented, but does not works?! 'sidebar_collapse': False,
'show_relbar_bottom': True, # FIXME: documented, but does not works?! 'globaltoc_collapse': False,
'show_relbar_bottom': True,
'show_powered_by': False, 'show_powered_by': False,
'logo': 'proxmox-logo.svg', 'extra_nav_links': {
'logo_name': True, # show project name below logo 'Proxmox Homepage': 'https://proxmox.com',
#'logo_text_align': 'center', 'PDF': 'proxmox-backup.pdf',
#'description': 'Fast, Secure & Efficient.', },
'sidebar_width': '300px', 'sidebar_width': '320px',
'page_width': '1280px', 'page_width': '1320px',
# font styles # font styles
'head_font_family': 'Lato, sans-serif', 'head_font_family': 'Lato, sans-serif',
'caption_font_family': 'Lato, sans-serif', 'caption_font_family': 'Lato, sans-serif',
@ -183,6 +182,24 @@ html_theme_options = {
'font_family': 'Open Sans, sans-serif', 'font_family': 'Open Sans, sans-serif',
} }
# Alabaster theme recommends setting this fixed.
# If you switch theme this needs to removed, probably.
html_sidebars = {
'**': [
'sidebar-header.html',
'searchbox.html',
'navigation.html',
'relations.html',
],
'index': [
'sidebar-header.html',
'searchbox.html',
'index-sidebar.html',
]
}
# Add any paths that contain custom themes here, relative to this directory. # Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = [] # html_theme_path = []
@ -228,10 +245,6 @@ html_static_path = ['_static']
# #
# html_use_smartypants = True # html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to # Additional templates that should be rendered to pages, maps page names to
# template names. # template names.
# #


@@ -13,3 +13,40 @@ div.body img {
 pre {
 padding: 5px 10px;
 }
li a.current {
font-weight: bold;
border-bottom: 1px solid #000;
}
ul li.toctree-l1 {
margin-top: 0.5em;
}
ul li.toctree-l1 > a {
color: #000;
}
div.sphinxsidebar form.search {
margin-bottom: 5px;
}
div.sphinxsidebar h3 {
width: 100%;
}
div.sphinxsidebar h1.logo-name {
display: none;
}
@media screen and (max-width: 875px) {
div.sphinxsidebar p.logo {
display: initial;
}
div.sphinxsidebar h1.logo-name {
display: block;
}
div.sphinxsidebar span {
color: #AAA;
}
ul li.toctree-l1 > a {
color: #FFF;
}
}


@@ -2,8 +2,8 @@
 Welcome to the Proxmox Backup documentation!
 ============================================
-Copyright (C) 2019-2020 Proxmox Server Solutions GmbH
+| Copyright (C) 2019-2020 Proxmox Server Solutions GmbH
+| Version |version| -- |today|
 Permission is granted to copy, distribute and/or modify this document under the
 terms of the GNU Free Documentation License, Version 1.3 or any later version
@@ -45,9 +45,10 @@ in the section entitled "GNU Free Documentation License".
 .. toctree::
 :maxdepth: 2
+:hidden:
 :caption: Developer Appendix
 todos.rst
-* :ref:`genindex`
+.. # * :ref:`genindex`


@@ -32,7 +32,7 @@ async fn run() -> Result<(), Error> {
 .interactive(true)
 .ticket_cache(true);
-let client = HttpClient::new(host, username, options)?;
+let client = HttpClient::new(host, 8007, username, options)?;
 let backup_time = proxmox::tools::time::parse_rfc3339("2019-06-28T10:49:48Z")?;


@@ -14,7 +14,7 @@ async fn upload_speed() -> Result<f64, Error> {
 .interactive(true)
 .ticket_cache(true);
-let client = HttpClient::new(host, username, options)?;
+let client = HttpClient::new(host, 8007, username, options)?;
 let backup_time = proxmox::tools::time::epoch_i64();


@@ -175,7 +175,7 @@ pub fn update_acl(
 _rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<(), Error> {
-let _lock = open_file_locked(acl::ACL_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
+let _lock = open_file_locked(acl::ACL_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
 let (mut tree, expected_digest) = acl::config()?;


@@ -100,7 +100,7 @@ pub fn list_users(
 /// Create new user.
 pub fn create_user(password: Option<String>, param: Value) -> Result<(), Error> {
-let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
+let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
 let user: user::User = serde_json::from_value(param)?;
@@ -211,7 +211,7 @@ pub fn update_user(
 digest: Option<String>,
 ) -> Result<(), Error> {
-let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
+let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
 let (mut config, expected_digest) = user::config()?;
@@ -285,7 +285,7 @@ pub fn update_user(
 /// Remove a user from the configuration file.
 pub fn delete_user(userid: Userid, digest: Option<String>) -> Result<(), Error> {
-let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
+let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
 let (mut config, expected_digest) = user::config()?;


@@ -200,7 +200,7 @@ async move {
 };
 if benchmark {
 env.log("benchmark finished successfully");
-env.remove_backup()?;
+tools::runtime::block_in_place(|| env.remove_backup())?;
 return Ok(());
 }
 match (res, env.ensure_finished()) {
@@ -222,7 +222,7 @@ async move {
 (Err(err), Err(_)) => {
 env.log(format!("backup failed: {}", err));
 env.log("removing failed backup");
-env.remove_backup()?;
+tools::runtime::block_in_place(|| env.remove_backup())?;
 Err(err)
 },
 }


@ -9,7 +9,7 @@ use proxmox::tools::digest_to_hex;
use proxmox::tools::fs::{replace_file, CreateOptions}; use proxmox::tools::fs::{replace_file, CreateOptions};
use proxmox::api::{RpcEnvironment, RpcEnvironmentType}; use proxmox::api::{RpcEnvironment, RpcEnvironmentType};
use crate::api2::types::{Userid, SnapshotVerifyState, VerifyState}; use crate::api2::types::Userid;
use crate::backup::*; use crate::backup::*;
use crate::server::WorkerTask; use crate::server::WorkerTask;
use crate::server::formatter::*; use crate::server::formatter::*;
@ -66,8 +66,8 @@ struct FixedWriterState {
incremental: bool, incremental: bool,
} }
// key=digest, value=(length, existance checked) // key=digest, value=length
type KnownChunksMap = HashMap<[u8;32], (u32, bool)>; type KnownChunksMap = HashMap<[u8;32], u32>;
struct SharedBackupState { struct SharedBackupState {
finished: bool, finished: bool,
@ -156,7 +156,7 @@ impl BackupEnvironment {
state.ensure_unfinished()?; state.ensure_unfinished()?;
state.known_chunks.insert(digest, (length, false)); state.known_chunks.insert(digest, length);
Ok(()) Ok(())
} }
@ -198,7 +198,7 @@ impl BackupEnvironment {
if is_duplicate { data.upload_stat.duplicates += 1; } if is_duplicate { data.upload_stat.duplicates += 1; }
// register chunk // register chunk
state.known_chunks.insert(digest, (size, true)); state.known_chunks.insert(digest, size);
Ok(()) Ok(())
} }
@ -231,7 +231,7 @@ impl BackupEnvironment {
if is_duplicate { data.upload_stat.duplicates += 1; } if is_duplicate { data.upload_stat.duplicates += 1; }
// register chunk // register chunk
state.known_chunks.insert(digest, (size, true)); state.known_chunks.insert(digest, size);
Ok(()) Ok(())
} }
@ -240,7 +240,7 @@ impl BackupEnvironment {
let state = self.state.lock().unwrap(); let state = self.state.lock().unwrap();
match state.known_chunks.get(digest) { match state.known_chunks.get(digest) {
Some((len, _)) => Some(*len), Some(len) => Some(*len),
None => None, None => None,
} }
} }
@ -457,47 +457,6 @@ impl BackupEnvironment {
Ok(()) Ok(())
} }
/// Ensure all chunks referenced in this backup actually exist.
/// Only call *after* all writers have been closed, to avoid race with GC.
/// In case of error, mark the previous backup as 'verify failed'.
fn verify_chunk_existance(&self, known_chunks: &KnownChunksMap) -> Result<(), Error> {
for (digest, (_, checked)) in known_chunks.iter() {
if !checked && !self.datastore.chunk_path(digest).0.exists() {
let mark_msg = if let Some(ref last_backup) = self.last_backup {
let last_dir = &last_backup.backup_dir;
let verify_state = SnapshotVerifyState {
state: VerifyState::Failed,
upid: self.worker.upid().clone(),
};
let res = proxmox::try_block!{
let (mut manifest, _) = self.datastore.load_manifest(last_dir)?;
manifest.unprotected["verify_state"] = serde_json::to_value(verify_state)?;
self.datastore.store_manifest(last_dir, serde_json::to_value(manifest)?)
};
if let Err(err) = res {
format!("tried marking previous snapshot as bad, \
but got error accessing manifest: {}", err)
} else {
"marked previous snapshot as bad, please use \
'verify' for a detailed check".to_owned()
}
} else {
"internal error: no base backup registered to mark invalid".to_owned()
};
bail!(
"chunk '{}' was attempted to be reused but doesn't exist - {}",
digest_to_hex(digest),
mark_msg
);
}
}
Ok(())
}
/// Mark backup as finished /// Mark backup as finished
pub fn finish_backup(&self) -> Result<(), Error> { pub fn finish_backup(&self) -> Result<(), Error> {
let mut state = self.state.lock().unwrap(); let mut state = self.state.lock().unwrap();
@ -534,8 +493,6 @@ impl BackupEnvironment {
} }
} }
self.verify_chunk_existance(&state.known_chunks)?;
// marks the backup as successful // marks the backup as successful
state.finished = true; state.finished = true;


@@ -61,12 +61,15 @@ impl Future for UploadChunk {
 let (is_duplicate, compressed_size) = match proxmox::try_block! {
 let mut chunk = DataBlob::from_raw(raw_data)?;
-chunk.verify_unencrypted(this.size as usize, &this.digest)?;
+tools::runtime::block_in_place(|| {
+chunk.verify_unencrypted(this.size as usize, &this.digest)?;
 // always comput CRC at server side
 chunk.set_crc(chunk.compute_crc());
+this.store.insert_chunk(&chunk, &this.digest)
+})
-this.store.insert_chunk(&chunk, &this.digest)
 } {
 Ok(res) => res,
 Err(err) => break err,


@ -112,7 +112,7 @@ pub fn list_datastores(
/// Create new datastore config. /// Create new datastore config.
pub fn create_datastore(param: Value) -> Result<(), Error> { pub fn create_datastore(param: Value) -> Result<(), Error> {
let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let datastore: datastore::DataStoreConfig = serde_json::from_value(param.clone())?; let datastore: datastore::DataStoreConfig = serde_json::from_value(param.clone())?;
@ -132,6 +132,8 @@ pub fn create_datastore(param: Value) -> Result<(), Error> {
datastore::save_config(&config)?; datastore::save_config(&config)?;
crate::config::jobstate::create_state_file("prune", &datastore.name)?; crate::config::jobstate::create_state_file("prune", &datastore.name)?;
crate::config::jobstate::create_state_file("garbage_collection", &datastore.name)?;
crate::config::jobstate::create_state_file("verify", &datastore.name)?;
Ok(()) Ok(())
} }
@ -275,7 +277,7 @@ pub fn update_datastore(
digest: Option<String>, digest: Option<String>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
// pass/compare digest // pass/compare digest
let (mut config, expected_digest) = datastore::config()?; let (mut config, expected_digest) = datastore::config()?;
@ -313,13 +315,23 @@ pub fn update_datastore(
} }
} }
if gc_schedule.is_some() { data.gc_schedule = gc_schedule; } let mut gc_schedule_changed = false;
if gc_schedule.is_some() {
gc_schedule_changed = data.gc_schedule != gc_schedule;
data.gc_schedule = gc_schedule;
}
let mut prune_schedule_changed = false; let mut prune_schedule_changed = false;
if prune_schedule.is_some() { if prune_schedule.is_some() {
prune_schedule_changed = true; prune_schedule_changed = data.prune_schedule != prune_schedule;
data.prune_schedule = prune_schedule; data.prune_schedule = prune_schedule;
} }
if verify_schedule.is_some() { data.verify_schedule = verify_schedule; }
let mut verify_schedule_changed = false;
if verify_schedule.is_some() {
verify_schedule_changed = data.verify_schedule != verify_schedule;
data.verify_schedule = verify_schedule;
}
if keep_last.is_some() { data.keep_last = keep_last; } if keep_last.is_some() { data.keep_last = keep_last; }
if keep_hourly.is_some() { data.keep_hourly = keep_hourly; } if keep_hourly.is_some() { data.keep_hourly = keep_hourly; }
@ -332,12 +344,20 @@ pub fn update_datastore(
datastore::save_config(&config)?; datastore::save_config(&config)?;
// we want to reset the statefile, to avoid an immediate sync in some cases // we want to reset the statefiles, to avoid an immediate action in some cases
// (e.g. going from monthly to weekly in the second week of the month) // (e.g. going from monthly to weekly in the second week of the month)
if gc_schedule_changed {
crate::config::jobstate::create_state_file("garbage_collection", &name)?;
}
if prune_schedule_changed { if prune_schedule_changed {
crate::config::jobstate::create_state_file("prune", &name)?; crate::config::jobstate::create_state_file("prune", &name)?;
} }
if verify_schedule_changed {
crate::config::jobstate::create_state_file("verify", &name)?;
}
Ok(()) Ok(())
} }
@ -361,7 +381,7 @@ pub fn update_datastore(
/// Remove a datastore configuration. /// Remove a datastore configuration.
pub fn delete_datastore(name: String, digest: Option<String>) -> Result<(), Error> { pub fn delete_datastore(name: String, digest: Option<String>) -> Result<(), Error> {
let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = datastore::config()?; let (mut config, expected_digest) = datastore::config()?;
@ -377,7 +397,10 @@ pub fn delete_datastore(name: String, digest: Option<String>) -> Result<(), Erro
datastore::save_config(&config)?; datastore::save_config(&config)?;
crate::config::jobstate::remove_state_file("prune", &name)?; // ignore errors
let _ = crate::config::jobstate::remove_state_file("prune", &name);
let _ = crate::config::jobstate::remove_state_file("garbage_collection", &name);
let _ = crate::config::jobstate::remove_state_file("verify", &name);
Ok(()) Ok(())
} }


@ -60,6 +60,12 @@ pub fn list_remotes(
host: { host: {
schema: DNS_NAME_OR_IP_SCHEMA, schema: DNS_NAME_OR_IP_SCHEMA,
}, },
port: {
description: "The (optional) port.",
type: u16,
optional: true,
default: 8007,
},
userid: { userid: {
type: Userid, type: Userid,
}, },
@ -79,7 +85,7 @@ pub fn list_remotes(
/// Create new remote. /// Create new remote.
pub fn create_remote(password: String, param: Value) -> Result<(), Error> { pub fn create_remote(password: String, param: Value) -> Result<(), Error> {
let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let mut data = param.clone(); let mut data = param.clone();
data["password"] = Value::from(base64::encode(password.as_bytes())); data["password"] = Value::from(base64::encode(password.as_bytes()));
@ -136,6 +142,8 @@ pub enum DeletableProperty {
comment, comment,
/// Delete the fingerprint property. /// Delete the fingerprint property.
fingerprint, fingerprint,
/// Delete the port property.
port,
} }
#[api( #[api(
@ -153,6 +161,11 @@ pub enum DeletableProperty {
optional: true, optional: true,
schema: DNS_NAME_OR_IP_SCHEMA, schema: DNS_NAME_OR_IP_SCHEMA,
}, },
port: {
description: "The (optional) port.",
type: u16,
optional: true,
},
userid: { userid: {
optional: true, optional: true,
type: Userid, type: Userid,
@ -188,6 +201,7 @@ pub fn update_remote(
name: String, name: String,
comment: Option<String>, comment: Option<String>,
host: Option<String>, host: Option<String>,
port: Option<u16>,
userid: Option<Userid>, userid: Option<Userid>,
password: Option<String>, password: Option<String>,
fingerprint: Option<String>, fingerprint: Option<String>,
@ -195,7 +209,7 @@ pub fn update_remote(
digest: Option<String>, digest: Option<String>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = remote::config()?; let (mut config, expected_digest) = remote::config()?;
@ -211,6 +225,7 @@ pub fn update_remote(
match delete_prop { match delete_prop {
DeletableProperty::comment => { data.comment = None; }, DeletableProperty::comment => { data.comment = None; },
DeletableProperty::fingerprint => { data.fingerprint = None; }, DeletableProperty::fingerprint => { data.fingerprint = None; },
DeletableProperty::port => { data.port = None; },
} }
} }
} }
@ -224,6 +239,7 @@ pub fn update_remote(
} }
} }
if let Some(host) = host { data.host = host; } if let Some(host) = host { data.host = host; }
if port.is_some() { data.port = port; }
if let Some(userid) = userid { data.userid = userid; } if let Some(userid) = userid { data.userid = userid; }
if let Some(password) = password { data.password = password; } if let Some(password) = password { data.password = password; }
@ -256,7 +272,7 @@ pub fn update_remote(
/// Remove a remote from the configuration file. /// Remove a remote from the configuration file.
pub fn delete_remote(name: String, digest: Option<String>) -> Result<(), Error> { pub fn delete_remote(name: String, digest: Option<String>) -> Result<(), Error> {
let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = remote::config()?; let (mut config, expected_digest) = remote::config()?;


@@ -69,7 +69,7 @@ pub fn list_sync_jobs(
 /// Create a new sync job.
 pub fn create_sync_job(param: Value) -> Result<(), Error> {
-let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
+let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
 let sync_job: sync::SyncJobConfig = serde_json::from_value(param.clone())?;
@@ -187,7 +187,7 @@ pub fn update_sync_job(
 digest: Option<String>,
 ) -> Result<(), Error> {
-let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
+let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
 // pass/compare digest
 let (mut config, expected_digest) = sync::config()?;
@@ -250,7 +250,7 @@ pub fn update_sync_job(
 /// Remove a sync job configuration
 pub fn delete_sync_job(id: String, digest: Option<String>) -> Result<(), Error> {
-let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
+let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
 let (mut config, expected_digest) = sync::config()?;


@@ -25,6 +25,8 @@ use crate::server::WorkerTask;
 use crate::api2::types::*;
+use crate::tools::systemd;
 pub const DISK_ARRAY_SCHEMA: Schema = ArraySchema::new(
 "Disk name list.", &BLOCKDEVICE_NAME_SCHEMA)
 .schema();
@@ -355,6 +357,11 @@ pub fn create_zpool(
 let output = crate::tools::run_command(command, None)?;
 worker.log(output);
+if std::path::Path::new("/lib/systemd/system/zfs-import@.service").exists() {
+let import_unit = format!("zfs-import@{}.service", systemd::escape_unit(&name, false));
+systemd::enable_unit(&import_unit)?;
+}
 if let Some(compression) = compression {
 let mut command = std::process::Command::new("zfs");
 command.args(&["set", &format!("compression={}", compression), &name]);


@@ -241,7 +241,7 @@ pub fn create_interface(
 let interface_type = crate::tools::required_string_param(&param, "type")?;
 let interface_type: NetworkInterfaceType = serde_json::from_value(interface_type.into())?;
-let _lock = open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0))?;
+let _lock = open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0), true)?;
 let (mut config, _digest) = network::config()?;
@@ -505,7 +505,7 @@ pub fn update_interface(
 param: Value,
 ) -> Result<(), Error> {
-let _lock = open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0))?;
+let _lock = open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0), true)?;
 let (mut config, expected_digest) = network::config()?;
@@ -646,7 +646,7 @@ pub fn update_interface(
 /// Remove network interface configuration.
 pub fn delete_interface(iface: String, digest: Option<String>) -> Result<(), Error> {
-let _lock = open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0))?;
+let _lock = open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0), true)?;
 let (mut config, expected_digest) = network::config()?;


@@ -10,7 +10,7 @@ use proxmox::{identity, list_subdirs_api_method, sortable};
 use crate::tools;
 use crate::api2::types::*;
-use crate::server::{self, UPID, TaskState};
+use crate::server::{self, UPID, TaskState, TaskListInfoIterator};
 use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
 use crate::config::cached_user_info::CachedUserInfo;
@@ -303,6 +303,7 @@ pub fn list_tasks(
     limit: u64,
     errors: bool,
     running: bool,
+    userfilter: Option<String>,
     param: Value,
     mut rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Vec<TaskListItem>, Error> {
@@ -315,57 +316,55 @@ pub fn list_tasks(
     let store = param["store"].as_str();
-    let userfilter = param["userfilter"].as_str();
-
-    let list = server::read_task_list()?;
-
-    let mut result = vec![];
-
-    let mut count = 0;
-
-    for info in list {
-        if !list_all && info.upid.userid != userid { continue; }
-
-        if let Some(userid) = userfilter {
-            if !info.upid.userid.as_str().contains(userid) { continue; }
-        }
+
+    let list = TaskListInfoIterator::new(running)?;
+
+    let result: Vec<TaskListItem> = list
+        .take_while(|info| !info.is_err())
+        .filter_map(|info| {
+            let info = match info {
+                Ok(info) => info,
+                Err(_) => return None,
+            };
+
+            if !list_all && info.upid.userid != userid { return None; }
+
+            if let Some(userid) = &userfilter {
+                if !info.upid.userid.as_str().contains(userid) { return None; }
+            }

         if let Some(store) = store {
             // Note: useful to select all tasks spawned by proxmox-backup-client
             let worker_id = match &info.upid.worker_id {
                 Some(w) => w,
-                None => continue, // skip
+                None => return None, // skip
             };

             if info.upid.worker_type == "backup" || info.upid.worker_type == "restore" ||
                 info.upid.worker_type == "prune"
             {
                 let prefix = format!("{}_", store);
-                if !worker_id.starts_with(&prefix) { continue; }
+                if !worker_id.starts_with(&prefix) { return None; }
             } else if info.upid.worker_type == "garbage_collection" {
-                if worker_id != store { continue; }
+                if worker_id != store { return None; }
             } else {
-                continue; // skip
+                return None; // skip
             }
         }

-        if let Some(ref state) = info.state {
-            if running { continue; }
-            match state {
-                crate::server::TaskState::OK { .. } if errors => continue,
-                _ => {},
-            }
-        }
+        match info.state {
+            Some(_) if running => return None,
+            Some(crate::server::TaskState::OK { .. }) if errors => return None,
+            _ => {},
+        }

-        if (count as u64) < start {
-            count += 1;
-            continue;
-        } else {
-            count += 1;
-        }
-
-        if (result.len() as u64) < limit { result.push(info.into()); };
-    }
+        Some(info.into())
+    }).skip(start as usize)
+    .take(limit as usize)
+    .collect();
+
+    let mut count = result.len() + start as usize;
+    if result.len() > 0 && result.len() >= limit as usize { // we have a 'virtual' entry as long as we have any new
+        count += 1;
+    }

     rpcenv["total"] = Value::from(count);
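The rewritten list_tasks pages lazily: the filters run inside filter_map, skip/take implement start/limit, and the reported total adds one "virtual" entry whenever the returned page is full, so clients know more results may follow. Below is a minimal, self-contained sketch of that pattern; plain integers stand in for task entries, and the function and variable names are illustrative, not part of the proxmox-backup API.

fn page<I: Iterator<Item = u64>>(items: I, start: u64, limit: u64) -> (Vec<u64>, usize) {
    let page: Vec<u64> = items
        .filter(|n| n % 2 == 0)          // stand-in for the userid/store/state filters
        .skip(start as usize)
        .take(limit as usize)
        .collect();

    // total = skipped + returned, plus one "virtual" entry when the page is full
    let mut total = page.len() + start as usize;
    if !page.is_empty() && page.len() >= limit as usize {
        total += 1;
    }
    (page, total)
}

fn main() {
    let (page, total) = page(0..100u64, 10, 5);
    assert_eq!(page, vec![20, 22, 24, 26, 28]);
    assert_eq!(total, 16); // 10 skipped + 5 returned + 1 virtual
}

The point of switching from read_task_list() to TaskListInfoIterator is visible here: the source is never materialized beyond the requested page.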


@@ -55,12 +55,13 @@ pub async fn get_pull_parameters(
         .password(Some(remote.password.clone()))
         .fingerprint(remote.fingerprint.clone());

-    let client = HttpClient::new(&remote.host, &remote.userid, options)?;
+    let src_repo = BackupRepository::new(Some(remote.userid.clone()), Some(remote.host.clone()), remote.port, remote_store.to_string());
+
+    let client = HttpClient::new(&src_repo.host(), src_repo.port(), &src_repo.user(), options)?;
     let _auth_info = client.login() // make sure we can auth
         .await
         .map_err(|err| format_err!("remote connection to '{}' failed - {}", remote.host, err))?;

-    let src_repo = BackupRepository::new(Some(remote.userid), Some(remote.host), remote_store.to_string());
-
     Ok((client, src_repo, tgt_store))
 }


@@ -229,8 +229,7 @@ fn download_chunk(
     env.debug(format!("download chunk {:?}", path));

-    let data = tokio::fs::read(path)
-        .await
+    let data = tools::runtime::block_in_place(|| std::fs::read(path))
         .map_err(move |err| http_err!(BAD_REQUEST, "reading file {:?} failed: {}", path2, err))?;

     let body = Body::from(data);
@@ -287,7 +286,7 @@ fn download_chunk_old(
 pub const API_METHOD_SPEEDTEST: ApiMethod = ApiMethod::new(
     &ApiHandler::AsyncHttp(&speedtest),
-    &ObjectSchema::new("Test 4M block download speed.", &[])
+    &ObjectSchema::new("Test 1M block download speed.", &[])
 );

 fn speedtest(
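The download_chunk change above swaps tokio::fs::read for a blocking std::fs::read wrapped in a block-in-place call. Here is a rough sketch of that pattern using tokio's own tokio::task::block_in_place (the internal tools::runtime::block_in_place helper may differ; the file name and runtime configuration below are assumptions, and the snippet needs tokio with the rt-multi-thread and macros features).

use std::io::Write;

#[tokio::main(flavor = "multi_thread", worker_threads = 2)]
async fn main() -> std::io::Result<()> {
    let mut tmp = std::env::temp_dir();
    tmp.push("chunk-example.bin");
    std::fs::File::create(&tmp)?.write_all(b"chunk data")?;

    // block_in_place tells the scheduler this worker is about to block, so other
    // tasks migrate to the remaining workers while we do synchronous file I/O.
    let data = tokio::task::block_in_place(|| std::fs::read(&tmp))?;
    assert_eq!(data, b"chunk data".to_vec());

    std::fs::remove_file(&tmp)?;
    Ok(())
}

block_in_place only works on the multi-threaded runtime, which is why the flavor is stated explicitly.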


@@ -182,7 +182,7 @@ fn datastore_status(
     input: {
         properties: {
             since: {
-                type: u64,
+                type: i64,
                 description: "Only list tasks since this UNIX epoch.",
                 optional: true,
             },
@@ -200,6 +200,7 @@ fn datastore_status(
 )]
 /// List tasks.
 pub fn list_tasks(
+    since: Option<i64>,
     _param: Value,
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Vec<TaskListItem>, Error> {
@@ -209,13 +210,28 @@ pub fn list_tasks(
     let user_privs = user_info.lookup_privs(&userid, &["system", "tasks"]);
     let list_all = (user_privs & PRIV_SYS_AUDIT) != 0;
+    let since = since.unwrap_or_else(|| 0);

-    // TODO: replace with call that gets all task since 'since' epoch
-    let list: Vec<TaskListItem> = server::read_task_list()?
-        .into_iter()
-        .map(TaskListItem::from)
-        .filter(|entry| list_all || entry.user == userid)
-        .collect();
+    let list: Vec<TaskListItem> = server::TaskListInfoIterator::new(false)?
+        .take_while(|info| {
+            match info {
+                Ok(info) => info.upid.starttime > since,
+                Err(_) => false
+            }
+        })
+        .filter_map(|info| {
+            match info {
+                Ok(info) => {
+                    if list_all || info.upid.userid == userid {
+                        Some(Ok(TaskListItem::from(info)))
+                    } else {
+                        None
+                    }
+                }
+                Err(err) => Some(Err(err))
+            }
+        })
+        .collect::<Result<Vec<TaskListItem>, Error>>()?;

     Ok(list.into())
 }


@@ -3,7 +3,7 @@ use serde::{Deserialize, Serialize};
 use proxmox::api::{api, schema::*};
 use proxmox::const_regex;
-use proxmox::{IPRE, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};
+use proxmox::{IPRE, IPRE_BRACKET, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};

 use crate::backup::CryptMode;
 use crate::server::UPID;
@@ -30,7 +30,7 @@ pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {
 });

 macro_rules! DNS_LABEL { () => (r"(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)") }
-macro_rules! DNS_NAME { () => (concat!(r"(?:", DNS_LABEL!() , r"\.)*", DNS_LABEL!())) }
+macro_rules! DNS_NAME { () => (concat!(r"(?:(?:", DNS_LABEL!() , r"\.)*", DNS_LABEL!(), ")")) }

 macro_rules! CIDR_V4_REGEX_STR { () => (concat!(r"(?:", IPV4RE!(), r"/\d{1,2})$")) }
 macro_rules! CIDR_V6_REGEX_STR { () => (concat!(r"(?:", IPV6RE!(), r"/\d{1,3})$")) }
@@ -63,9 +63,9 @@ const_regex!{
     pub DNS_NAME_REGEX = concat!(r"^", DNS_NAME!(), r"$");

-    pub DNS_NAME_OR_IP_REGEX = concat!(r"^", DNS_NAME!(), "|", IPRE!(), r"$");
+    pub DNS_NAME_OR_IP_REGEX = concat!(r"^(?:", DNS_NAME!(), "|", IPRE!(), r")$");

-    pub BACKUP_REPO_URL_REGEX = concat!(r"^^(?:(?:(", USER_ID_REGEX_STR!(), ")@)?(", DNS_NAME!(), "|", IPRE!() ,"):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$");
+    pub BACKUP_REPO_URL_REGEX = concat!(r"^^(?:(?:(", USER_ID_REGEX_STR!(), ")@)?(", DNS_NAME!(), "|", IPRE_BRACKET!() ,"):)?(?:([0-9]{1,5}):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$");

     pub CERT_FINGERPRINT_SHA256_REGEX = r"^(?:[0-9a-fA-F][0-9a-fA-F])(?::[0-9a-fA-F][0-9a-fA-F]){31}$";
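The BACKUP_REPO_URL_REGEX change adds an optional numeric port capture and accepts bracketed IPv6 hosts, so a repository spec can now read user@realm@host:port:store. The sketch below uses a simplified stand-in pattern (assuming the regex crate; this is not the macro-expanded production regex) just to show the capture layout of host, port and store.

use regex::Regex;

fn main() {
    // Simplified stand-in: optional host (possibly a bracketed IPv6 address),
    // optional numeric port, then the datastore name. The real pattern also
    // accepts a user@realm prefix and is assembled from the macros above.
    let re = Regex::new(r"^(?:([^:\s]+|\[[0-9a-fA-F:]+\]):)?(?:(\d{1,5}):)?([A-Za-z0-9_-]+)$").unwrap();

    for repo in &["store1", "backup.example.org:store1", "[fd00::1]:8007:store1"] {
        let cap = re.captures(repo).unwrap();
        println!("host={:?} port={:?} store={}",
            cap.get(1).map(|m| m.as_str()),
            cap.get(2).map(|m| m.as_str()),
            &cap[3]);
    }
}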


@@ -198,7 +198,10 @@ impl DataBlob {
             Ok(data)
         } else if magic == &COMPRESSED_BLOB_MAGIC_1_0 {
             let data_start = std::mem::size_of::<DataBlobHeader>();
-            let data = zstd::block::decompress(&self.raw_data[data_start..], MAX_BLOB_SIZE)?;
+            let mut reader = &self.raw_data[data_start..];
+            let data = zstd::stream::decode_all(&mut reader)?;
+            // zstd::block::decompress is about 10% slower
+            // let data = zstd::block::decompress(&self.raw_data[data_start..], MAX_BLOB_SIZE)?;
             if let Some(digest) = digest {
                 Self::verify_digest(&data, None, digest)?;
             }
@@ -268,6 +271,12 @@ impl DataBlob {
         }
     }

+    /// Returns if chunk is encrypted
+    pub fn is_encrypted(&self) -> bool {
+        let magic = self.magic();
+        magic == &ENCR_COMPR_BLOB_MAGIC_1_0 || magic == &ENCRYPTED_BLOB_MAGIC_1_0
+    }
+
     /// Verify digest and data length for unencrypted chunks.
     ///
     /// To do that, we need to decompress data first. Please note that
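The DataBlob change above decodes compressed blobs with zstd::stream::decode_all instead of zstd::block::decompress. A small round-trip sketch, assuming the zstd crate: encode_all/decode_all are the streaming helpers, and the stream decoder grows its output buffer as it reads, so no MAX_BLOB_SIZE bound is needed up front.

fn main() -> std::io::Result<()> {
    let payload = vec![42u8; 64 * 1024];

    // compress at level 1, as used for backup chunks
    let compressed = zstd::encode_all(&payload[..], 1)?;

    // streaming decode: the variant the datastore code switched to
    let mut reader = &compressed[..];
    let decoded = zstd::stream::decode_all(&mut reader)?;
    assert_eq!(decoded, payload);

    Ok(())
}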


@@ -478,9 +478,12 @@ impl DataStore {
         if let Ok(ref mut _mutex) = self.gc_mutex.try_lock() {

+            // avoids that we run GC if an old daemon process has still a
+            // running backup writer, which is not safe as we have no "oldest
+            // writer" information and thus no safe atime cutoff
             let _exclusive_lock = self.chunk_store.try_exclusive_lock()?;

-            let phase1_start_time = unsafe { libc::time(std::ptr::null_mut()) };
+            let phase1_start_time = proxmox::tools::time::epoch_i64();
             let oldest_writer = self.chunk_store.oldest_writer().unwrap_or(phase1_start_time);

             let mut gc_status = GarbageCollectionStatus::default();


@@ -5,13 +5,22 @@ use std::time::Instant;
 use anyhow::{bail, format_err, Error};

-use crate::server::WorkerTask;
-use crate::api2::types::*;
-
-use super::{
-    DataStore, DataBlob, BackupGroup, BackupDir, BackupInfo, IndexFile,
-    CryptMode,
-    FileInfo, ArchiveType, archive_type,
-};
+use crate::{
+    server::WorkerTask,
+    api2::types::*,
+    tools::ParallelHandler,
+    backup::{
+        DataStore,
+        DataBlob,
+        BackupGroup,
+        BackupDir,
+        BackupInfo,
+        IndexFile,
+        CryptMode,
+        FileInfo,
+        ArchiveType,
+        archive_type,
+    },
+};

 fn verify_blob(datastore: Arc<DataStore>, backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> {
@@ -66,55 +75,6 @@ fn rename_corrupted_chunk(
     };
 }
// We use a separate thread to read/load chunks, so that we can do
// load and verify in parallel to increase performance.
fn chunk_reader_thread(
datastore: Arc<DataStore>,
index: Box<dyn IndexFile + Send>,
verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
errors: Arc<AtomicUsize>,
worker: Arc<WorkerTask>,
) -> std::sync::mpsc::Receiver<(DataBlob, [u8;32], u64)> {
let (sender, receiver) = std::sync::mpsc::sync_channel(3); // buffer up to 3 chunks
std::thread::spawn(move|| {
for pos in 0..index.index_count() {
let info = index.chunk_info(pos).unwrap();
let size = info.range.end - info.range.start;
if verified_chunks.lock().unwrap().contains(&info.digest) {
continue; // already verified
}
if corrupt_chunks.lock().unwrap().contains(&info.digest) {
let digest_str = proxmox::tools::digest_to_hex(&info.digest);
worker.log(format!("chunk {} was marked as corrupt", digest_str));
errors.fetch_add(1, Ordering::SeqCst);
continue;
}
match datastore.load_chunk(&info.digest) {
Err(err) => {
corrupt_chunks.lock().unwrap().insert(info.digest);
worker.log(format!("can't verify chunk, load failed - {}", err));
errors.fetch_add(1, Ordering::SeqCst);
rename_corrupted_chunk(datastore.clone(), &info.digest, worker.clone());
continue;
}
Ok(chunk) => {
if sender.send((chunk, info.digest, size)).is_err() {
break; // receiver gone - simply stop
}
}
}
}
});
receiver
}
 fn verify_index_chunks(
     datastore: Arc<DataStore>,
     index: Box<dyn IndexFile + Send>,
@@ -128,60 +88,87 @@ fn verify_index_chunks(
     let start_time = Instant::now();

-    let chunk_channel = chunk_reader_thread(
-        datastore.clone(),
-        index,
-        verified_chunks.clone(),
-        corrupt_chunks.clone(),
-        errors.clone(),
-        worker.clone(),
-    );
-
     let mut read_bytes = 0;
     let mut decoded_bytes = 0;

-    loop {
-
-        worker.fail_on_abort()?;
-        crate::tools::fail_on_shutdown()?;
-
-        let (chunk, digest, size) = match chunk_channel.recv() {
-            Ok(tuple) => tuple,
-            Err(std::sync::mpsc::RecvError) => break,
-        };
-
-        read_bytes += chunk.raw_size();
-        decoded_bytes += size;
-
-        let chunk_crypt_mode = match chunk.crypt_mode() {
-            Err(err) => {
-                corrupt_chunks.lock().unwrap().insert(digest);
-                worker.log(format!("can't verify chunk, unknown CryptMode - {}", err));
-                errors.fetch_add(1, Ordering::SeqCst);
-                continue;
-            },
-            Ok(mode) => mode,
-        };
-
-        if chunk_crypt_mode != crypt_mode {
-            worker.log(format!(
-                "chunk CryptMode {:?} does not match index CryptMode {:?}",
-                chunk_crypt_mode,
-                crypt_mode
-            ));
-            errors.fetch_add(1, Ordering::SeqCst);
-        }
-
-        if let Err(err) = chunk.verify_unencrypted(size as usize, &digest) {
-            corrupt_chunks.lock().unwrap().insert(digest);
-            worker.log(format!("{}", err));
-            errors.fetch_add(1, Ordering::SeqCst);
-            rename_corrupted_chunk(datastore.clone(), &digest, worker.clone());
-        } else {
-            verified_chunks.lock().unwrap().insert(digest);
-        }
-    }
+    let worker2 = Arc::clone(&worker);
+    let datastore2 = Arc::clone(&datastore);
+    let corrupt_chunks2 = Arc::clone(&corrupt_chunks);
+    let verified_chunks2 = Arc::clone(&verified_chunks);
+    let errors2 = Arc::clone(&errors);
+
+    let decoder_pool = ParallelHandler::new(
+        "verify chunk decoder", 4,
+        move |(chunk, digest, size): (DataBlob, [u8;32], u64)| {
+            let chunk_crypt_mode = match chunk.crypt_mode() {
+                Err(err) => {
+                    corrupt_chunks2.lock().unwrap().insert(digest);
+                    worker2.log(format!("can't verify chunk, unknown CryptMode - {}", err));
+                    errors2.fetch_add(1, Ordering::SeqCst);
+                    return Ok(());
+                },
+                Ok(mode) => mode,
+            };
+
+            if chunk_crypt_mode != crypt_mode {
+                worker2.log(format!(
+                    "chunk CryptMode {:?} does not match index CryptMode {:?}",
+                    chunk_crypt_mode,
+                    crypt_mode
+                ));
+                errors2.fetch_add(1, Ordering::SeqCst);
+            }
+
+            if let Err(err) = chunk.verify_unencrypted(size as usize, &digest) {
+                corrupt_chunks2.lock().unwrap().insert(digest);
+                worker2.log(format!("{}", err));
+                errors2.fetch_add(1, Ordering::SeqCst);
+                rename_corrupted_chunk(datastore2.clone(), &digest, worker2.clone());
+            } else {
+                verified_chunks2.lock().unwrap().insert(digest);
+            }
+
+            Ok(())
+        }
+    );
+
+    for pos in 0..index.index_count() {
+
+        worker.fail_on_abort()?;
+        crate::tools::fail_on_shutdown()?;
+
+        let info = index.chunk_info(pos).unwrap();
+        let size = info.size();
+
+        if verified_chunks.lock().unwrap().contains(&info.digest) {
+            continue; // already verified
+        }
+
+        if corrupt_chunks.lock().unwrap().contains(&info.digest) {
+            let digest_str = proxmox::tools::digest_to_hex(&info.digest);
+            worker.log(format!("chunk {} was marked as corrupt", digest_str));
+            errors.fetch_add(1, Ordering::SeqCst);
+            continue;
+        }
+
+        match datastore.load_chunk(&info.digest) {
+            Err(err) => {
+                corrupt_chunks.lock().unwrap().insert(info.digest);
+                worker.log(format!("can't verify chunk, load failed - {}", err));
+                errors.fetch_add(1, Ordering::SeqCst);
+                rename_corrupted_chunk(datastore.clone(), &info.digest, worker.clone());
+                continue;
+            }
+            Ok(chunk) => {
+                read_bytes += chunk.raw_size();
+                decoder_pool.send((chunk, info.digest, size))?;
+                decoded_bytes += size;
+            }
+        }
+    }
+
+    decoder_pool.complete()?;

     let elapsed = start_time.elapsed().as_secs_f64();

     let read_bytes_mib = (read_bytes as f64)/(1024.0*1024.0);
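verify_index_chunks now loads chunks in the main loop and hands them to a ParallelHandler with four decoder threads. Below is a rough sketch of what such a helper does internally, a bounded channel feeding a fixed thread pool with complete() joining the workers, using only the standard library; the Pool type and its methods are illustrative, not the actual proxmox-backup ParallelHandler API.

use std::sync::{mpsc, Arc, Mutex};
use std::thread;

struct Pool<T> {
    sender: Option<mpsc::SyncSender<T>>,
    workers: Vec<thread::JoinHandle<Result<(), String>>>,
}

impl<T: Send + 'static> Pool<T> {
    fn new<F>(threads: usize, handler: F) -> Self
    where
        F: Fn(T) -> Result<(), String> + Clone + Send + 'static,
    {
        // bounded channel: the producer blocks once the workers fall behind
        let (sender, receiver) = mpsc::sync_channel::<T>(threads * 2);
        let receiver = Arc::new(Mutex::new(receiver));
        let workers = (0..threads)
            .map(|_| {
                let receiver = Arc::clone(&receiver);
                let handler = handler.clone();
                thread::spawn(move || -> Result<(), String> {
                    loop {
                        // hold the lock only while receiving, so workers stay independent
                        let item = match receiver.lock().unwrap().recv() {
                            Ok(item) => item,
                            Err(_) => return Ok(()), // sender dropped: no more work
                        };
                        handler(item)?;
                    }
                })
            })
            .collect();
        Pool { sender: Some(sender), workers }
    }

    fn send(&self, item: T) -> Result<(), String> {
        self.sender.as_ref().unwrap().send(item).map_err(|e| e.to_string())
    }

    fn complete(mut self) -> Result<(), String> {
        drop(self.sender.take()); // close the channel so workers drain and exit
        for worker in self.workers {
            worker.join().map_err(|_| "worker panicked".to_string())??;
        }
        Ok(())
    }
}

fn main() {
    let pool = Pool::new(4, |n: u64| {
        // stand-in for decompressing and verifying one chunk
        println!("verified chunk {}", n);
        Ok(())
    });
    for n in 0..16 {
        pool.send(n).unwrap();
    }
    pool.complete().unwrap();
}

The bounded channel gives back-pressure: the chunk loader blocks once the decoders lag, which keeps memory usage flat while load and verification overlap.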


@@ -192,7 +192,7 @@ pub fn complete_repository(_arg: &str, _param: &HashMap<String, String>) -> Vec<
     result
 }

-fn connect(server: &str, userid: &Userid) -> Result<HttpClient, Error> {
+fn connect(server: &str, port: u16, userid: &Userid) -> Result<HttpClient, Error> {

     let fingerprint = std::env::var(ENV_VAR_PBS_FINGERPRINT).ok();
@@ -211,7 +211,7 @@ fn connect(server: &str, userid: &Userid) -> Result<HttpClient, Error> {
         .fingerprint_cache(true)
         .ticket_cache(true);

-    HttpClient::new(server, userid, options)
+    HttpClient::new(server, port, userid, options)
 }

 async fn view_task_result(
@@ -365,7 +365,7 @@ async fn list_backup_groups(param: Value) -> Result<Value, Error> {
     let repo = extract_repository_from_value(&param)?;

-    let client = connect(repo.host(), repo.user())?;
+    let client = connect(repo.host(), repo.port(), repo.user())?;

     let path = format!("api2/json/admin/datastore/{}/groups", repo.store());
@@ -438,7 +438,7 @@ async fn list_snapshots(param: Value) -> Result<Value, Error> {
     let output_format = get_output_format(&param);

-    let client = connect(repo.host(), repo.user())?;
+    let client = connect(repo.host(), repo.port(), repo.user())?;

     let group: Option<BackupGroup> = if let Some(path) = param["group"].as_str() {
         Some(path.parse()?)
@@ -503,7 +503,7 @@ async fn forget_snapshots(param: Value) -> Result<Value, Error> {
     let path = tools::required_string_param(&param, "snapshot")?;
     let snapshot: BackupDir = path.parse()?;

-    let mut client = connect(repo.host(), repo.user())?;
+    let mut client = connect(repo.host(), repo.port(), repo.user())?;

     let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());
@@ -533,7 +533,7 @@ async fn api_login(param: Value) -> Result<Value, Error> {
     let repo = extract_repository_from_value(&param)?;

-    let client = connect(repo.host(), repo.user())?;
+    let client = connect(repo.host(), repo.port(), repo.user())?;
     client.login().await?;

     record_repository(&repo);
@@ -590,7 +590,7 @@ async fn api_version(param: Value) -> Result<(), Error> {
     let repo = extract_repository_from_value(&param);
     if let Ok(repo) = repo {
-        let client = connect(repo.host(), repo.user())?;
+        let client = connect(repo.host(), repo.port(), repo.user())?;

         match client.get("api2/json/version", None).await {
             Ok(mut result) => version_info["server"] = result["data"].take(),
@@ -640,7 +640,7 @@ async fn list_snapshot_files(param: Value) -> Result<Value, Error> {
     let output_format = get_output_format(&param);

-    let client = connect(repo.host(), repo.user())?;
+    let client = connect(repo.host(), repo.port(), repo.user())?;

     let path = format!("api2/json/admin/datastore/{}/files", repo.store());
@@ -684,7 +684,7 @@ async fn start_garbage_collection(param: Value) -> Result<Value, Error> {
     let output_format = get_output_format(&param);

-    let mut client = connect(repo.host(), repo.user())?;
+    let mut client = connect(repo.host(), repo.port(), repo.user())?;

     let path = format!("api2/json/admin/datastore/{}/gc", repo.store());
@@ -996,7 +996,7 @@ async fn create_backup(
     let backup_time = backup_time_opt.unwrap_or_else(|| epoch_i64());

-    let client = connect(repo.host(), repo.user())?;
+    let client = connect(repo.host(), repo.port(), repo.user())?;
     record_repository(&repo);

     println!("Starting backup: {}/{}/{}", backup_type, backup_id, BackupDir::backup_time_to_string(backup_time)?);
@@ -1299,7 +1299,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
     let archive_name = tools::required_string_param(&param, "archive-name")?;

-    let client = connect(repo.host(), repo.user())?;
+    let client = connect(repo.host(), repo.port(), repo.user())?;

     record_repository(&repo);
@@ -1472,7 +1472,7 @@ async fn upload_log(param: Value) -> Result<Value, Error> {
     let snapshot = tools::required_string_param(&param, "snapshot")?;
     let snapshot: BackupDir = snapshot.parse()?;

-    let mut client = connect(repo.host(), repo.user())?;
+    let mut client = connect(repo.host(), repo.port(), repo.user())?;

     let (keydata, crypt_mode) = keyfile_parameters(&param)?;
@@ -1543,7 +1543,7 @@ fn prune<'a>(
 async fn prune_async(mut param: Value) -> Result<Value, Error> {
     let repo = extract_repository_from_value(&param)?;

-    let mut client = connect(repo.host(), repo.user())?;
+    let mut client = connect(repo.host(), repo.port(), repo.user())?;

     let path = format!("api2/json/admin/datastore/{}/prune", repo.store());
@@ -1626,7 +1626,7 @@ async fn status(param: Value) -> Result<Value, Error> {
     let output_format = get_output_format(&param);

-    let client = connect(repo.host(), repo.user())?;
+    let client = connect(repo.host(), repo.port(), repo.user())?;

     let path = format!("api2/json/admin/datastore/{}/status", repo.store());
@@ -1671,7 +1671,7 @@ async fn try_get(repo: &BackupRepository, url: &str) -> Value {
         .fingerprint_cache(true)
         .ticket_cache(true);

-    let client = match HttpClient::new(repo.host(), repo.user(), options) {
+    let client = match HttpClient::new(repo.host(), repo.port(), repo.user(), options) {
         Ok(v) => v,
         _ => return Value::Null,
     };


@@ -62,10 +62,10 @@ fn connect() -> Result<HttpClient, Error> {
         let ticket = Ticket::new("PBS", Userid::root_userid())?
             .sign(private_auth_key(), None)?;
         options = options.password(Some(ticket));
-        HttpClient::new("localhost", Userid::root_userid(), options)?
+        HttpClient::new("localhost", 8007, Userid::root_userid(), options)?
     } else {
         options = options.ticket_cache(true).interactive(true);
-        HttpClient::new("localhost", Userid::root_userid(), options)?
+        HttpClient::new("localhost", 8007, Userid::root_userid(), options)?
     };

     Ok(client)
@@ -410,6 +410,7 @@ pub fn complete_remote_datastore_name(_arg: &str, param: &HashMap<String, String
         let client = HttpClient::new(
             &remote.host,
+            remote.port.unwrap_or(8007),
             &remote.userid,
             options,
         )?;


@@ -198,44 +198,19 @@ async fn schedule_tasks() -> Result<(), Error> {
     schedule_datastore_prune().await;
     schedule_datastore_verification().await;
     schedule_datastore_sync_jobs().await;
+    schedule_task_log_rotate().await;

     Ok(())
 }
fn lookup_last_worker(worker_type: &str, worker_id: &str) -> Result<Option<server::UPID>, Error> {
let list = proxmox_backup::server::read_task_list()?;
let mut last: Option<&server::UPID> = None;
for entry in list.iter() {
if entry.upid.worker_type == worker_type {
if let Some(ref id) = entry.upid.worker_id {
if id == worker_id {
match last {
Some(ref upid) => {
if upid.starttime < entry.upid.starttime {
last = Some(&entry.upid)
}
}
None => {
last = Some(&entry.upid)
}
}
}
}
}
}
Ok(last.cloned())
}
 async fn schedule_datastore_garbage_collection() {

     use proxmox_backup::backup::DataStore;
     use proxmox_backup::server::{UPID, WorkerTask};
-    use proxmox_backup::config::datastore::{self, DataStoreConfig};
+    use proxmox_backup::config::{
+        jobstate::{self, Job},
+        datastore::{self, DataStoreConfig}
+    };
     use proxmox_backup::tools::systemd::time::{
         parse_calendar_event, compute_next_event};
@@ -291,11 +266,10 @@ async fn schedule_datastore_garbage_collection() {
                 }
             }
         } else {
-            match lookup_last_worker(worker_type, &store) {
-                Ok(Some(upid)) => upid.starttime,
-                Ok(None) => 0,
+            match jobstate::last_run_time(worker_type, &store) {
+                Ok(time) => time,
                 Err(err) => {
-                    eprintln!("lookup_last_job_start failed: {}", err);
+                    eprintln!("could not get last run time of {} {}: {}", worker_type, store, err);
                     continue;
                 }
             }
@@ -314,6 +288,11 @@ async fn schedule_datastore_garbage_collection() {
         if next > now { continue; }

+        let mut job = match Job::new(worker_type, &store) {
+            Ok(job) => job,
+            Err(_) => continue, // could not get lock
+        };
+
         let store2 = store.clone();

         if let Err(err) = WorkerTask::new_thread(
@@ -322,9 +301,20 @@ async fn schedule_datastore_garbage_collection() {
             Userid::backup_userid().clone(),
             false,
             move |worker| {
+                job.start(&worker.upid().to_string())?;
+
                 worker.log(format!("starting garbage collection on store {}", store));
                 worker.log(format!("task triggered by schedule '{}'", event_str));
-                datastore.garbage_collection(&worker)
+
+                let result = datastore.garbage_collection(&worker);
+
+                let status = worker.create_state(&result);
+
+                if let Err(err) = job.finish(status) {
+                    eprintln!("could not finish job state for {}: {}", worker_type, err);
+                }
+
+                result
             }
         ) {
             eprintln!("unable to start garbage collection on store {} - {}", store2, err);
@@ -434,7 +424,7 @@ async fn schedule_datastore_prune() {
                 job.start(&worker.upid().to_string())?;

-                let result = {
+                let result = try_block!({
                     worker.log(format!("Starting datastore prune on store \"{}\"", store));
                     worker.log(format!("task triggered by schedule '{}'", event_str));
@@ -463,7 +453,7 @@ async fn schedule_datastore_prune() {
                         }
                     }
                     Ok(())
-                };
+                });

                 let status = worker.create_state(&result);
@@ -482,7 +472,10 @@ async fn schedule_datastore_prune() {
 async fn schedule_datastore_verification() {
     use proxmox_backup::backup::{DataStore, verify_all_backups};
     use proxmox_backup::server::{WorkerTask};
-    use proxmox_backup::config::datastore::{self, DataStoreConfig};
+    use proxmox_backup::config::{
+        jobstate::{self, Job},
+        datastore::{self, DataStoreConfig}
+    };
     use proxmox_backup::tools::systemd::time::{
         parse_calendar_event, compute_next_event};
@@ -526,16 +519,10 @@ async fn schedule_datastore_verification() {
     let worker_type = "verify";

-        let last = match lookup_last_worker(worker_type, &store) {
-            Ok(Some(upid)) => {
-                if proxmox_backup::server::worker_is_active_local(&upid) {
-                    continue;
-                }
-                upid.starttime
-            }
-            Ok(None) => 0,
+        let last = match jobstate::last_run_time(worker_type, &store) {
+            Ok(time) => time,
             Err(err) => {
-                eprintln!("lookup_last_job_start failed: {}", err);
+                eprintln!("could not get last run time of {} {}: {}", worker_type, store, err);
                 continue;
            }
        };
@@ -553,6 +540,11 @@ async fn schedule_datastore_verification() {
         if next > now { continue; }

+        let mut job = match Job::new(worker_type, &store) {
+            Ok(job) => job,
+            Err(_) => continue, // could not get lock
+        };
+
         let worker_id = store.clone();
         let store2 = store.clone();
         if let Err(err) = WorkerTask::new_thread(
@@ -561,18 +553,29 @@ async fn schedule_datastore_verification() {
             Userid::backup_userid().clone(),
             false,
             move |worker| {
+                job.start(&worker.upid().to_string())?;
+
                 worker.log(format!("starting verification on store {}", store2));
                 worker.log(format!("task triggered by schedule '{}'", event_str));
-                if let Ok(failed_dirs) = verify_all_backups(datastore, worker.clone()) {
+                let result = try_block!({
+                    let failed_dirs = verify_all_backups(datastore, worker.clone())?;
                     if failed_dirs.len() > 0 {
                         worker.log("Failed to verify following snapshots:");
                         for dir in failed_dirs {
                             worker.log(format!("\t{}", dir));
                         }
-                        bail!("verification failed - please check the log for details");
+                        Err(format_err!("verification failed - please check the log for details"))
+                    } else {
+                        Ok(())
                     }
+                });
+
+                let status = worker.create_state(&result);
+
+                if let Err(err) = job.finish(status) {
+                    eprintln!("could not finish job state for {}: {}", worker_type, err);
                 }
-                Ok(())
+
+                result
             },
         ) {
             eprintln!("unable to start verification on store {} - {}", store, err);
         }
     }
} }
async fn schedule_task_log_rotate() {
use proxmox_backup::{
config::jobstate::{self, Job},
server::rotate_task_log_archive,
};
use proxmox_backup::server::WorkerTask;
use proxmox_backup::tools::systemd::time::{
parse_calendar_event, compute_next_event};
let worker_type = "logrotate";
let job_id = "task-archive";
let last = match jobstate::last_run_time(worker_type, job_id) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of task log archive rotation: {}", err);
return;
}
};
// schedule daily at 00:00 like normal logrotate
let schedule = "00:00";
let event = match parse_calendar_event(schedule) {
Ok(event) => event,
Err(err) => {
// should not happen?
eprintln!("unable to parse schedule '{}' - {}", schedule, err);
return;
}
};
let next = match compute_next_event(&event, last, false) {
Ok(Some(next)) => next,
Ok(None) => return,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", schedule, err);
return;
}
};
let now = proxmox::tools::time::epoch_i64();
if next > now {
// if we never ran the rotation, schedule instantly
match jobstate::JobState::load(worker_type, job_id) {
Ok(state) => match state {
jobstate::JobState::Created { .. } => {},
_ => return,
},
_ => return,
}
}
let mut job = match Job::new(worker_type, job_id) {
Ok(job) => job,
Err(_) => return, // could not get lock
};
if let Err(err) = WorkerTask::new_thread(
worker_type,
Some(job_id.to_string()),
Userid::backup_userid().clone(),
false,
move |worker| {
job.start(&worker.upid().to_string())?;
worker.log(format!("starting task log rotation"));
// one entry has normally about ~100-150 bytes
let max_size = 500000; // at least 5000 entries
let max_files = 20; // at least 100000 entries
let result = try_block!({
let has_rotated = rotate_task_log_archive(max_size, true, Some(max_files))?;
if has_rotated {
worker.log(format!("task log archive was rotated"));
} else {
worker.log(format!("task log archive was not rotated"));
}
Ok(())
});
let status = worker.create_state(&result);
if let Err(err) = job.finish(status) {
eprintln!("could not finish job state for {}: {}", worker_type, err);
}
result
},
) {
eprintln!("unable to start task log rotation: {}", err);
}
}
 async fn run_stat_generator() {

     let mut count = 0;
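The sizing comments in schedule_task_log_rotate above (roughly 100-150 bytes per task-log entry, 500000 bytes per archive file, 20 rotated files) determine how much task history the rotation keeps. A quick check of that arithmetic; the numbers come straight from the comments, the "typical"/"large" labels are mine.

fn main() {
    let max_size: u64 = 500_000; // bytes per archive file
    let max_files: u64 = 20;     // rotated files kept

    let entries_per_file_large = max_size / 150;   // ~3333 entries if entries are large
    let entries_per_file_typical = max_size / 100; // 5000, the "at least 5000" in the comment

    println!("kept entries: {} .. {}",
        entries_per_file_large * max_files,     // ~66660
        entries_per_file_typical * max_files);  // 100000, the "at least 100000" in the comment
}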


@@ -21,6 +21,7 @@ use proxmox_backup::backup::{
     load_and_decrypt_key,
     CryptConfig,
     KeyDerivationConfig,
+    DataChunkBuilder,
 };

 use proxmox_backup::client::*;
@@ -60,6 +61,9 @@ struct Speed {
         "aes256_gcm": {
             type: Speed,
         },
+        "verify": {
+            type: Speed,
+        },
     },
 )]
 #[derive(Copy, Clone, Serialize)]
@@ -75,9 +79,10 @@ struct BenchmarkResult {
     decompress: Speed,
     /// AES256 GCM encryption speed
     aes256_gcm: Speed,
+    /// Verify speed
+    verify: Speed,
 }

 static BENCHMARK_RESULT_2020_TOP: BenchmarkResult = BenchmarkResult {
     tls: Speed {
         speed: None,
@@ -85,19 +90,23 @@ static BENCHMARK_RESULT_2020_TOP: BenchmarkResult = BenchmarkResult {
     },
     sha256: Speed {
         speed: None,
-        top: 1_000_000.0 * 2120.0, // AMD Ryzen 7 2700X
+        top: 1_000_000.0 * 2022.0, // AMD Ryzen 7 2700X
     },
     compress: Speed {
         speed: None,
-        top: 1_000_000.0 * 2158.0, // AMD Ryzen 7 2700X
+        top: 1_000_000.0 * 752.0, // AMD Ryzen 7 2700X
     },
     decompress: Speed {
         speed: None,
-        top: 1_000_000.0 * 8062.0, // AMD Ryzen 7 2700X
+        top: 1_000_000.0 * 1198.0, // AMD Ryzen 7 2700X
     },
     aes256_gcm: Speed {
         speed: None,
-        top: 1_000_000.0 * 3803.0, // AMD Ryzen 7 2700X
+        top: 1_000_000.0 * 3645.0, // AMD Ryzen 7 2700X
+    },
+    verify: Speed {
+        speed: None,
+        top: 1_000_000.0 * 758.0, // AMD Ryzen 7 2700X
     },
 };
@@ -194,7 +203,10 @@ fn render_result(
         .column(ColumnConfig::new("decompress")
             .header("ZStd level 1 decompression speed")
             .right_align(false).renderer(render_speed))
+        .column(ColumnConfig::new("verify")
+            .header("Chunk verification speed")
+            .right_align(false).renderer(render_speed))
         .column(ColumnConfig::new("aes256_gcm")
             .header("AES256 GCM encryption speed")
             .right_align(false).renderer(render_speed));
@@ -213,7 +225,7 @@ async fn test_upload_speed(
     let backup_time = proxmox::tools::time::epoch_i64();

-    let client = connect(repo.host(), repo.user())?;
+    let client = connect(repo.host(), repo.port(), repo.user())?;
     record_repository(&repo);

     if verbose { eprintln!("Connecting to backup server"); }
@@ -257,7 +269,17 @@ fn test_crypt_speed(
     let crypt_config = CryptConfig::new(testkey)?;

-    let random_data = proxmox::sys::linux::random_data(1024*1024)?;
+    //let random_data = proxmox::sys::linux::random_data(1024*1024)?;
+    let mut random_data = vec![];
+    // generate pseudo random byte sequence
+    for i in 0..256*1024 {
+        for j in 0..4 {
+            let byte = ((i >> (j<<3))&0xff) as u8;
+            random_data.push(byte);
+        }
+    }
+    assert_eq!(random_data.len(), 1024*1024);

     let start_time = std::time::Instant::now();
@@ -322,5 +344,23 @@ fn test_crypt_speed(
     eprintln!("AES256/GCM speed: {:.2} MB/s", speed/1_000_000.0);

+    let start_time = std::time::Instant::now();
+
+    let (chunk, digest) = DataChunkBuilder::new(&random_data)
+        .compress(true)
+        .build()?;
+
+    let mut bytes = 0;
+    loop {
+        chunk.verify_unencrypted(random_data.len(), &digest)?;
+        bytes += random_data.len();
+        if start_time.elapsed().as_micros() > 1_000_000 { break; }
+    }
+    let speed = (bytes as f64)/start_time.elapsed().as_secs_f64();
+    benchmark_result.verify.speed = Some(speed);
+
+    eprintln!("Verify speed: {:.2} MB/s", speed/1_000_000.0);

     Ok(())
 }
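The benchmark now fills random_data with a deterministic byte pattern instead of proxmox::sys::linux::random_data. The nested loop above is just the little-endian byte expansion of a counter, which makes the benchmark input reproducible across runs. A short, standard-library-only check of that equivalence:

fn main() {
    // same nested loop as in test_crypt_speed above
    let mut data = Vec::with_capacity(1024 * 1024);
    for i in 0..256 * 1024u32 {
        for j in 0..4 {
            data.push(((i >> (j << 3)) & 0xff) as u8);
        }
    }

    // equivalent formulation: concatenate the little-endian bytes of the counter
    let mut expected = Vec::with_capacity(1024 * 1024);
    for i in 0..256 * 1024u32 {
        expected.extend_from_slice(&i.to_le_bytes());
    }

    assert_eq!(data, expected);
    assert_eq!(data.len(), 1024 * 1024);
}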


@@ -79,7 +79,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
         }
     };

-    let client = connect(repo.host(), repo.user())?;
+    let client = connect(repo.host(), repo.port(), repo.user())?;

     let client = BackupReader::start(
         client,
@@ -153,7 +153,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
 /// Shell to interactively inspect and restore snapshots.
 async fn catalog_shell(param: Value) -> Result<(), Error> {
     let repo = extract_repository_from_value(&param)?;
-    let client = connect(repo.host(), repo.user())?;
+    let client = connect(repo.host(), repo.port(), repo.user())?;
     let path = tools::required_string_param(&param, "snapshot")?;
     let archive_name = tools::required_string_param(&param, "archive-name")?;


@@ -1,4 +1,6 @@
 use std::path::PathBuf;
+use std::io::Write;
+use std::process::{Stdio, Command};

 use anyhow::{bail, format_err, Error};
 use serde::{Deserialize, Serialize};
@@ -13,6 +15,17 @@ use proxmox_backup::backup::{
 };

 use proxmox_backup::tools;

+#[api()]
+#[derive(Debug, Serialize, Deserialize)]
+#[serde(rename_all = "lowercase")]
+/// Paperkey output format
+pub enum PaperkeyFormat {
+    /// Format as Utf8 text. Includes QR codes as ascii-art.
+    Text,
+    /// Format as Html. Includes QR codes as png images.
+    Html,
+}
+
 pub const DEFAULT_ENCRYPTION_KEY_FILE_NAME: &str = "encryption-key.json";
 pub const MASTER_PUBKEY_FILE_NAME: &str = "master-public.pem";
@@ -261,6 +274,55 @@ fn create_master_key() -> Result<(), Error> {
     Ok(())
 }
#[api(
input: {
properties: {
path: {
description: "Key file. Without this the default key's will be used.",
optional: true,
},
subject: {
description: "Include the specified subject as title text.",
optional: true,
},
"output-format": {
type: PaperkeyFormat,
description: "Output format. Text or Html.",
optional: true,
},
},
},
)]
/// Generate a printable, human readable text file containing the encryption key.
///
/// This also includes a scannable QR code for fast key restore.
fn paper_key(
path: Option<String>,
subject: Option<String>,
output_format: Option<PaperkeyFormat>,
) -> Result<(), Error> {
let path = match path {
Some(path) => PathBuf::from(path),
None => {
let path = find_default_encryption_key()?
.ok_or_else(|| {
format_err!("no encryption file provided and no default file found")
})?;
path
}
};
let data = file_get_contents(&path)?;
let data = std::str::from_utf8(&data)?;
let format = output_format.unwrap_or(PaperkeyFormat::Html);
match format {
PaperkeyFormat::Html => paperkey_html(data, subject),
PaperkeyFormat::Text => paperkey_text(data, subject),
}
}
 pub fn cli() -> CliCommandMap {
     let key_create_cmd_def = CliCommand::new(&API_METHOD_CREATE)
         .arg_param(&["path"])
@@ -275,9 +337,214 @@ pub fn cli() -> CliCommandMap {
         .arg_param(&["path"])
         .completion_cb("path", tools::complete_file_name);

+    let paper_key_cmd_def = CliCommand::new(&API_METHOD_PAPER_KEY)
+        .arg_param(&["path"])
+        .completion_cb("path", tools::complete_file_name);
+
     CliCommandMap::new()
         .insert("create", key_create_cmd_def)
         .insert("create-master-key", key_create_master_key_cmd_def)
         .insert("import-master-pubkey", key_import_master_pubkey_cmd_def)
         .insert("change-passphrase", key_change_passphrase_cmd_def)
+        .insert("paper-key", paper_key_cmd_def)
 }
fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
let img_size_pt = 500;
println!("<!DOCTYPE html>");
println!("<html lang=\"en\">");
println!("<head>");
println!("<meta charset=\"utf-8\">");
println!("<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">");
println!("<title>Proxmox Backup Paperkey</title>");
println!("<style type=\"text/css\">");
println!(" p {{");
println!(" font-size: 12pt;");
println!(" font-family: monospace;");
println!(" white-space: pre-wrap;");
println!(" line-break: anywhere;");
println!(" }}");
println!("</style>");
println!("</head>");
println!("<body>");
if let Some(subject) = subject {
println!("<p>Subject: {}</p>", subject);
}
if data.starts_with("-----BEGIN ENCRYPTED PRIVATE KEY-----\n") {
let lines: Vec<String> = data.lines()
.map(|s| s.trim_end())
.filter(|s| !s.is_empty())
.map(String::from)
.collect();
if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
bail!("unexpected key format");
}
if lines.len() < 20 {
bail!("unexpected key format");
}
const BLOCK_SIZE: usize = 20;
let blocks = (lines.len() + BLOCK_SIZE -1)/BLOCK_SIZE;
for i in 0..blocks {
let start = i*BLOCK_SIZE;
let mut end = start + BLOCK_SIZE;
if end > lines.len() {
end = lines.len();
}
let data = &lines[start..end];
println!("<div style=\"page-break-inside: avoid;page-break-after: always\">");
println!("<p>");
for l in start..end {
println!("{:02}: {}", l, lines[l]);
}
println!("</p>");
let data = data.join("\n");
let qr_code = generate_qr_code("png", data.as_bytes())?;
let qr_code = base64::encode_config(&qr_code, base64::STANDARD_NO_PAD);
println!("<center>");
println!("<img");
println!("width=\"{}pt\" height=\"{}pt\"", img_size_pt, img_size_pt);
println!("src=\"data:image/png;base64,{}\"/>", qr_code);
println!("</center>");
println!("</div>");
}
println!("</body>");
println!("</html>");
return Ok(());
}
let key_config: KeyConfig = serde_json::from_str(&data)?;
let key_text = serde_json::to_string_pretty(&key_config)?;
println!("<div style=\"page-break-inside: avoid\">");
println!("<p>");
println!("-----BEGIN PROXMOX BACKUP KEY-----");
for line in key_text.lines() {
println!("{}", line);
}
println!("-----END PROXMOX BACKUP KEY-----");
println!("</p>");
let qr_code = generate_qr_code("png", key_text.as_bytes())?;
let qr_code = base64::encode_config(&qr_code, base64::STANDARD_NO_PAD);
println!("<center>");
println!("<img");
println!("width=\"{}pt\" height=\"{}pt\"", img_size_pt, img_size_pt);
println!("src=\"data:image/png;base64,{}\"/>", qr_code);
println!("</center>");
println!("</div>");
println!("</body>");
println!("</html>");
Ok(())
}
fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
if let Some(subject) = subject {
println!("Subject: {}\n", subject);
}
if data.starts_with("-----BEGIN ENCRYPTED PRIVATE KEY-----\n") {
let lines: Vec<String> = data.lines()
.map(|s| s.trim_end())
.filter(|s| !s.is_empty())
.map(String::from)
.collect();
if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
bail!("unexpected key format");
}
if lines.len() < 20 {
bail!("unexpected key format");
}
const BLOCK_SIZE: usize = 5;
let blocks = (lines.len() + BLOCK_SIZE -1)/BLOCK_SIZE;
for i in 0..blocks {
let start = i*BLOCK_SIZE;
let mut end = start + BLOCK_SIZE;
if end > lines.len() {
end = lines.len();
}
let data = &lines[start..end];
for l in start..end {
println!("{:-2}: {}", l, lines[l]);
}
let data = data.join("\n");
let qr_code = generate_qr_code("utf8i", data.as_bytes())?;
let qr_code = String::from_utf8(qr_code)
.map_err(|_| format_err!("Failed to read qr code (got non-utf8 data)"))?;
println!("{}", qr_code);
println!("{}", char::from(12u8)); // page break
}
return Ok(());
}
let key_config: KeyConfig = serde_json::from_str(&data)?;
let key_text = serde_json::to_string_pretty(&key_config)?;
println!("-----BEGIN PROXMOX BACKUP KEY-----");
println!("{}", key_text);
println!("-----END PROXMOX BACKUP KEY-----");
let qr_code = generate_qr_code("utf8i", key_text.as_bytes())?;
let qr_code = String::from_utf8(qr_code)
.map_err(|_| format_err!("Failed to read qr code (got non-utf8 data)"))?;
println!("{}", qr_code);
Ok(())
}
fn generate_qr_code(output_type: &str, data: &[u8]) -> Result<Vec<u8>, Error> {
let mut child = Command::new("qrencode")
.args(&["-t", output_type, "-m0", "-s1", "-lm", "--output", "-"])
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.spawn()?;
{
let stdin = child.stdin.as_mut()
.ok_or_else(|| format_err!("Failed to open stdin"))?;
stdin.write_all(data)
.map_err(|_| format_err!("Failed to write to stdin"))?;
}
let output = child.wait_with_output()
.map_err(|_| format_err!("Failed to read stdout"))?;
let output = crate::tools::command_output(output, None)?;
Ok(output)
}
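paperkey_text and paperkey_html split the key lines into fixed-size blocks with manual start/end arithmetic before rendering one QR code per block. The same splitting can be written with slice::chunks, which also handles the short final block; a small sketch of that idea (line contents and block size here are illustrative):

fn main() {
    let lines: Vec<String> = (0..23).map(|i| format!("line {:02}", i)).collect();
    const BLOCK_SIZE: usize = 5; // the text variant uses 5 key lines per QR code

    for (block_no, block) in lines.chunks(BLOCK_SIZE).enumerate() {
        // each block becomes the payload of one QR code
        let data = block.join("\n");
        println!("block {} -> {} bytes for one QR code", block_no, data.len());
    }
    // 23 lines with BLOCK_SIZE = 5 yields 5 blocks: 5+5+5+5+3 lines
}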


@@ -101,7 +101,7 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
     let repo = extract_repository_from_value(&param)?;
     let archive_name = tools::required_string_param(&param, "archive-name")?;
     let target = tools::required_string_param(&param, "target")?;
-    let client = connect(repo.host(), repo.user())?;
+    let client = connect(repo.host(), repo.port(), repo.user())?;

     record_repository(&repo);


@@ -48,7 +48,7 @@ async fn task_list(param: Value) -> Result<Value, Error> {
     let output_format = get_output_format(&param);

     let repo = extract_repository_from_value(&param)?;
-    let client = connect(repo.host(), repo.user())?;
+    let client = connect(repo.host(), repo.port(), repo.user())?;

     let limit = param["limit"].as_u64().unwrap_or(50) as usize;
     let running = !param["all"].as_bool().unwrap_or(false);
@@ -96,7 +96,7 @@ async fn task_log(param: Value) -> Result<Value, Error> {
     let repo = extract_repository_from_value(&param)?;
     let upid = tools::required_string_param(&param, "upid")?;

-    let client = connect(repo.host(), repo.user())?;
+    let client = connect(repo.host(), repo.port(), repo.user())?;

     display_task_log(client, upid, true).await?;
@@ -122,7 +122,7 @@ async fn task_stop(param: Value) -> Result<Value, Error> {
     let repo = extract_repository_from_value(&param)?;
     let upid_str = tools::required_string_param(&param, "upid")?;

-    let mut client = connect(repo.host(), repo.user())?;
+    let mut client = connect(repo.host(), repo.port(), repo.user())?;

     let path = format!("api2/json/nodes/localhost/tasks/{}", upid_str);
     let _ = client.delete(&path, None).await?;


@@ -54,7 +54,7 @@ impl BackupReader {
             "store": datastore,
             "debug": debug,
         });
-        let req = HttpClient::request_builder(client.server(), "GET", "/api2/json/reader", Some(param)).unwrap();
+        let req = HttpClient::request_builder(client.server(), client.port(), "GET", "/api2/json/reader", Some(param)).unwrap();
         let (h2, abort) = client.start_h2_connection(req, String::from(PROXMOX_BACKUP_READER_PROTOCOL_ID_V1!())).await?;


@@ -19,14 +19,22 @@ pub struct BackupRepository {
     user: Option<Userid>,
     /// The host name or IP address
     host: Option<String>,
+    /// The port
+    port: Option<u16>,
     /// The name of the datastore
     store: String,
 }

 impl BackupRepository {

-    pub fn new(user: Option<Userid>, host: Option<String>, store: String) -> Self {
-        Self { user, host, store }
+    pub fn new(user: Option<Userid>, host: Option<String>, port: Option<u16>, store: String) -> Self {
+        let host = match host {
+            Some(host) if (IP_V6_REGEX.regex_obj)().is_match(&host) => {
+                Some(format!("[{}]", host))
+            },
+            other => other,
+        };
+        Self { user, host, port, store }
     }

     pub fn user(&self) -> &Userid {
@@ -43,6 +51,13 @@ impl BackupRepository {
         "localhost"
     }

+    pub fn port(&self) -> u16 {
+        if let Some(port) = self.port {
+            return port;
+        }
+        8007
+    }
+
     pub fn store(&self) -> &str {
         &self.store
     }
@@ -50,13 +65,12 @@ impl BackupRepository {
 impl fmt::Display for BackupRepository {
     fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
-        if let Some(ref user) = self.user {
-            write!(f, "{}@{}:{}", user, self.host(), self.store)
-        } else if let Some(ref host) = self.host {
-            write!(f, "{}:{}", host, self.store)
-        } else {
-            write!(f, "{}", self.store)
+        match (&self.user, &self.host, self.port) {
+            (Some(user), _, _) => write!(f, "{}@{}:{}:{}", user, self.host(), self.port(), self.store),
+            (None, Some(host), None) => write!(f, "{}:{}", host, self.store),
+            (None, _, Some(port)) => write!(f, "{}:{}:{}", self.host(), port, self.store),
+            (None, None, None) => write!(f, "{}", self.store),
         }
     }
 }
@@ -76,7 +90,8 @@ impl std::str::FromStr for BackupRepository {
         Ok(Self {
             user: cap.get(1).map(|m| Userid::try_from(m.as_str().to_owned())).transpose()?,
             host: cap.get(2).map(|m| m.as_str().to_owned()),
-            store: cap[3].to_owned(),
+            port: cap.get(3).map(|m| m.as_str().parse::<u16>()).transpose()?,
+            store: cap[4].to_owned(),
         })
     }
 }
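With the port field added, a repository now formats as user@host:port:store and falls back to port 8007 when none is given. A stand-alone sketch of that Display and default-port behaviour; the Repo type here is a stand-in for the real struct, and the real parser uses BACKUP_REPO_URL_REGEX rather than hand-written matching.

#[derive(Debug)]
struct Repo {
    user: Option<String>,
    host: Option<String>,
    port: Option<u16>,
    store: String,
}

impl Repo {
    fn port(&self) -> u16 {
        self.port.unwrap_or(8007)
    }
}

impl std::fmt::Display for Repo {
    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
        let host = self.host.as_deref().unwrap_or("localhost");
        match (&self.user, &self.host, self.port) {
            // user present: always print host and (default) port so the string stays parseable
            (Some(user), _, _) => write!(f, "{}@{}:{}:{}", user, host, self.port(), self.store),
            (None, Some(host), None) => write!(f, "{}:{}", host, self.store),
            (None, _, Some(port)) => write!(f, "{}:{}:{}", host, port, self.store),
            (None, None, None) => write!(f, "{}", self.store),
        }
    }
}

fn main() {
    let repo = Repo {
        user: Some("backup@pam".into()),
        host: Some("[fd00::1]".into()), // IPv6 hosts are kept in brackets
        port: None,
        store: "store1".into(),
    };
    assert_eq!(repo.to_string(), "backup@pam@[fd00::1]:8007:store1");
}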


@@ -65,7 +65,7 @@ impl BackupWriter {
         });

         let req = HttpClient::request_builder(
-            client.server(), "GET", "/api2/json/backup", Some(param)).unwrap();
+            client.server(), client.port(), "GET", "/api2/json/backup", Some(param)).unwrap();
         let (h2, abort) = client.start_h2_connection(req, String::from(PROXMOX_BACKUP_PROTOCOL_ID_V1!())).await?;


@ -99,6 +99,7 @@ impl HttpClientOptions {
pub struct HttpClient { pub struct HttpClient {
client: Client<HttpsConnector>, client: Client<HttpsConnector>,
server: String, server: String,
port: u16,
fingerprint: Arc<Mutex<Option<String>>>, fingerprint: Arc<Mutex<Option<String>>>,
first_auth: BroadcastFuture<()>, first_auth: BroadcastFuture<()>,
auth: Arc<RwLock<AuthInfo>>, auth: Arc<RwLock<AuthInfo>>,
@ -250,6 +251,7 @@ fn load_ticket_info(prefix: &str, server: &str, userid: &Userid) -> Option<(Stri
impl HttpClient { impl HttpClient {
pub fn new( pub fn new(
server: &str, server: &str,
port: u16,
userid: &Userid, userid: &Userid,
mut options: HttpClientOptions, mut options: HttpClientOptions,
) -> Result<Self, Error> { ) -> Result<Self, Error> {
@ -338,7 +340,7 @@ impl HttpClient {
let authinfo = auth2.read().unwrap().clone(); let authinfo = auth2.read().unwrap().clone();
(authinfo.userid, authinfo.ticket) (authinfo.userid, authinfo.ticket)
}; };
match Self::credentials(client2.clone(), server2.clone(), userid, ticket).await { match Self::credentials(client2.clone(), server2.clone(), port, userid, ticket).await {
Ok(auth) => { Ok(auth) => {
if use_ticket_cache & &prefix2.is_some() { if use_ticket_cache & &prefix2.is_some() {
let _ = store_ticket_info(prefix2.as_ref().unwrap(), &server2, &auth.userid.to_string(), &auth.ticket, &auth.token); let _ = store_ticket_info(prefix2.as_ref().unwrap(), &server2, &auth.userid.to_string(), &auth.ticket, &auth.token);
@ -358,6 +360,7 @@ impl HttpClient {
let login_future = Self::credentials( let login_future = Self::credentials(
client.clone(), client.clone(),
server.to_owned(), server.to_owned(),
port,
userid.to_owned(), userid.to_owned(),
password.to_owned(), password.to_owned(),
).map_ok({ ).map_ok({
@ -377,6 +380,7 @@ impl HttpClient {
Ok(Self { Ok(Self {
client, client,
server: String::from(server), server: String::from(server),
port,
fingerprint: verified_fingerprint, fingerprint: verified_fingerprint,
auth, auth,
ticket_abort, ticket_abort,
@ -486,7 +490,7 @@ impl HttpClient {
path: &str, path: &str,
data: Option<Value>, data: Option<Value>,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let req = Self::request_builder(&self.server, "GET", path, data).unwrap(); let req = Self::request_builder(&self.server, self.port, "GET", path, data)?;
self.request(req).await self.request(req).await
} }
@ -495,7 +499,7 @@ impl HttpClient {
path: &str, path: &str,
data: Option<Value>, data: Option<Value>,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let req = Self::request_builder(&self.server, "DELETE", path, data).unwrap(); let req = Self::request_builder(&self.server, self.port, "DELETE", path, data)?;
self.request(req).await self.request(req).await
} }
@ -504,7 +508,7 @@ impl HttpClient {
path: &str, path: &str,
data: Option<Value>, data: Option<Value>,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let req = Self::request_builder(&self.server, "POST", path, data).unwrap(); let req = Self::request_builder(&self.server, self.port, "POST", path, data)?;
self.request(req).await self.request(req).await
} }
@ -513,7 +517,7 @@ impl HttpClient {
path: &str, path: &str,
output: &mut (dyn Write + Send), output: &mut (dyn Write + Send),
) -> Result<(), Error> { ) -> Result<(), Error> {
let mut req = Self::request_builder(&self.server, "GET", path, None).unwrap(); let mut req = Self::request_builder(&self.server, self.port, "GET", path, None)?;
let client = self.client.clone(); let client = self.client.clone();
@ -549,7 +553,7 @@ impl HttpClient {
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let path = path.trim_matches('/'); let path = path.trim_matches('/');
let mut url = format!("https://{}:8007/{}", &self.server, path); let mut url = format!("https://{}:{}/{}", &self.server, self.port, path);
if let Some(data) = data { if let Some(data) = data {
let query = tools::json_object_to_query(data).unwrap(); let query = tools::json_object_to_query(data).unwrap();
@ -624,11 +628,12 @@ impl HttpClient {
async fn credentials( async fn credentials(
client: Client<HttpsConnector>, client: Client<HttpsConnector>,
server: String, server: String,
port: u16,
username: Userid, username: Userid,
password: String, password: String,
) -> Result<AuthInfo, Error> { ) -> Result<AuthInfo, Error> {
let data = json!({ "username": username, "password": password }); let data = json!({ "username": username, "password": password });
let req = Self::request_builder(&server, "POST", "/api2/json/access/ticket", Some(data)).unwrap(); let req = Self::request_builder(&server, port, "POST", "/api2/json/access/ticket", Some(data))?;
let cred = Self::api_request(client, req).await?; let cred = Self::api_request(client, req).await?;
let auth = AuthInfo { let auth = AuthInfo {
userid: cred["data"]["username"].as_str().unwrap().parse()?, userid: cred["data"]["username"].as_str().unwrap().parse()?,
@ -672,9 +677,13 @@ impl HttpClient {
&self.server &self.server
} }
pub fn request_builder(server: &str, method: &str, path: &str, data: Option<Value>) -> Result<Request<Body>, Error> { pub fn port(&self) -> u16 {
self.port
}
pub fn request_builder(server: &str, port: u16, method: &str, path: &str, data: Option<Value>) -> Result<Request<Body>, Error> {
let path = path.trim_matches('/'); let path = path.trim_matches('/');
let url: Uri = format!("https://{}:8007/{}", server, path).parse()?; let url: Uri = format!("https://{}:{}/{}", server, port, path).parse()?;
if let Some(data) = data { if let Some(data) = data {
if method == "POST" { if method == "POST" {
@ -687,7 +696,7 @@ impl HttpClient {
return Ok(request); return Ok(request);
} else { } else {
let query = tools::json_object_to_query(data)?; let query = tools::json_object_to_query(data)?;
let url: Uri = format!("https://{}:8007/{}?{}", server, path, query).parse()?; let url: Uri = format!("https://{}:{}/{}?{}", server, port, path, query).parse()?;
let request = Request::builder() let request = Request::builder()
.method(method) .method(method)
.uri(url) .uri(url)
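The port change above replaces the hard-coded 8007 with a caller-supplied value throughout HttpClient. A minimal sketch of the URL construction the new request_builder performs (the helper name and inputs here are illustrative, not part of the patch):

// Sketch: building a request URL with a configurable port, as the updated
// request_builder does. build_url() is a made-up stand-in for illustration.
fn build_url(server: &str, port: u16, path: &str, query: Option<&str>) -> String {
    let path = path.trim_matches('/');
    match query {
        Some(q) => format!("https://{}:{}/{}?{}", server, port, path, q),
        None => format!("https://{}:{}/{}", server, port, path),
    }
}

fn main() {
    assert_eq!(
        build_url("backup.example", 8007, "/api2/json/version", None),
        "https://backup.example:8007/api2/json/version"
    );
}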


@ -3,13 +3,15 @@
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use serde_json::json; use serde_json::json;
use std::convert::TryFrom; use std::convert::TryFrom;
use std::sync::Arc; use std::sync::{Arc, Mutex};
use std::collections::HashMap; use std::collections::{HashSet, HashMap};
use std::io::{Seek, SeekFrom}; use std::io::{Seek, SeekFrom};
use std::time::SystemTime;
use std::sync::atomic::{AtomicUsize, Ordering};
use proxmox::api::error::{StatusCode, HttpError}; use proxmox::api::error::{StatusCode, HttpError};
use crate::{ use crate::{
tools::compute_file_csum, tools::{ParallelHandler, compute_file_csum},
server::WorkerTask, server::WorkerTask,
backup::*, backup::*,
api2::types::*, api2::types::*,
@ -22,27 +24,86 @@ use crate::{
// Todo: correctly lock backup groups // Todo: correctly lock backup groups
async fn pull_index_chunks<I: IndexFile>( async fn pull_index_chunks<I: IndexFile>(
_worker: &WorkerTask, worker: &WorkerTask,
chunk_reader: &mut RemoteChunkReader, chunk_reader: RemoteChunkReader,
target: Arc<DataStore>, target: Arc<DataStore>,
index: I, index: I,
downloaded_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
) -> Result<(), Error> { ) -> Result<(), Error> {
use futures::stream::{self, StreamExt, TryStreamExt};
for pos in 0..index.index_count() { let start_time = SystemTime::now();
let info = index.chunk_info(pos).unwrap();
let chunk_exists = target.cond_touch_chunk(&info.digest, false)?;
if chunk_exists {
//worker.log(format!("chunk {} exists {}", pos, proxmox::tools::digest_to_hex(digest)));
continue;
}
//worker.log(format!("sync {} chunk {}", pos, proxmox::tools::digest_to_hex(digest)));
let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
chunk.verify_unencrypted(info.size() as usize, &info.digest)?; let stream = stream::iter(
(0..index.index_count())
.map(|pos| index.chunk_info(pos).unwrap())
.filter(|info| {
let mut guard = downloaded_chunks.lock().unwrap();
let done = guard.contains(&info.digest);
if !done {
// Note: We mark a chunk as downloaded before it's actually downloaded
// to avoid duplicate downloads.
guard.insert(info.digest);
}
!done
})
);
target.insert_chunk(&chunk, &info.digest)?; let target2 = target.clone();
} let verify_pool = ParallelHandler::new(
"sync chunk writer", 4,
move |(chunk, digest, size): (DataBlob, [u8;32], u64)| {
// println!("verify and write {}", proxmox::tools::digest_to_hex(&digest));
chunk.verify_unencrypted(size as usize, &digest)?;
target2.insert_chunk(&chunk, &digest)?;
Ok(())
}
);
let verify_and_write_channel = verify_pool.channel();
let bytes = Arc::new(AtomicUsize::new(0));
stream
.map(|info| {
let target = Arc::clone(&target);
let chunk_reader = chunk_reader.clone();
let bytes = Arc::clone(&bytes);
let verify_and_write_channel = verify_and_write_channel.clone();
Ok::<_, Error>(async move {
let chunk_exists = crate::tools::runtime::block_in_place(|| target.cond_touch_chunk(&info.digest, false))?;
if chunk_exists {
//worker.log(format!("chunk {} exists {}", pos, proxmox::tools::digest_to_hex(digest)));
return Ok::<_, Error>(());
}
//worker.log(format!("sync {} chunk {}", pos, proxmox::tools::digest_to_hex(digest)));
let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
let raw_size = chunk.raw_size() as usize;
// decode, verify and write in separate threads to maximize throughput
crate::tools::runtime::block_in_place(|| verify_and_write_channel.send((chunk, info.digest, info.size())))?;
bytes.fetch_add(raw_size, Ordering::SeqCst);
Ok(())
})
})
.try_buffer_unordered(20)
.try_for_each(|_res| futures::future::ok(()))
.await?;
drop(verify_and_write_channel);
verify_pool.complete()?;
let elapsed = start_time.elapsed()?.as_secs_f64();
let bytes = bytes.load(Ordering::SeqCst);
worker.log(format!("downloaded {} bytes ({} MiB/s)", bytes, (bytes as f64)/(1024.0*1024.0*elapsed)));
Ok(()) Ok(())
} }
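The rewritten pull_index_chunks overlaps up to 20 concurrent chunk downloads (try_buffer_unordered) with a small thread pool that verifies and stores them. A self-contained sketch of that split, assuming the tokio and futures crates and using a plain bounded channel plus one thread in place of ParallelHandler:

// Sketch of the download/verify split used above: async code fans out "downloads"
// with try_buffer_unordered while a bounded channel feeds a blocking worker thread
// (standing in for ParallelHandler). Crate names and numbers are illustrative.
use std::sync::mpsc::sync_channel;
use anyhow::Error;
use futures::stream::{self, StreamExt, TryStreamExt};

#[tokio::main]
async fn main() -> Result<(), Error> {
    let (tx, rx) = sync_channel::<u32>(4); // bounded, like the ParallelHandler input

    let writer = std::thread::spawn(move || {
        for chunk in rx {
            // verify_unencrypted() + insert_chunk() would run here
            println!("stored chunk {}", chunk);
        }
    });

    stream::iter(0..100u32)
        .map(|digest| {
            let tx = tx.clone();
            Ok::<_, Error>(async move {
                // simulate the async read_raw_chunk() network round trip
                tokio::time::sleep(std::time::Duration::from_millis(5)).await;
                // the real code wraps this blocking send in block_in_place()
                tx.send(digest)?;
                Ok::<_, Error>(())
            })
        })
        .try_buffer_unordered(20) // keep up to 20 downloads in flight
        .try_for_each(|_| futures::future::ok(()))
        .await?;

    drop(tx); // close the channel so the worker thread exits
    writer.join().unwrap();
    Ok(())
}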
@ -89,6 +150,7 @@ async fn pull_single_archive(
tgt_store: Arc<DataStore>, tgt_store: Arc<DataStore>,
snapshot: &BackupDir, snapshot: &BackupDir,
archive_info: &FileInfo, archive_info: &FileInfo,
downloaded_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let archive_name = &archive_info.filename; let archive_name = &archive_info.filename;
@ -115,7 +177,7 @@ async fn pull_single_archive(
let (csum, size) = index.compute_csum(); let (csum, size) = index.compute_csum();
verify_archive(archive_info, &csum, size)?; verify_archive(archive_info, &csum, size)?;
pull_index_chunks(worker, chunk_reader, tgt_store.clone(), index).await?; pull_index_chunks(worker, chunk_reader.clone(), tgt_store.clone(), index, downloaded_chunks).await?;
} }
ArchiveType::FixedIndex => { ArchiveType::FixedIndex => {
let index = FixedIndexReader::new(tmpfile) let index = FixedIndexReader::new(tmpfile)
@ -123,7 +185,7 @@ async fn pull_single_archive(
let (csum, size) = index.compute_csum(); let (csum, size) = index.compute_csum();
verify_archive(archive_info, &csum, size)?; verify_archive(archive_info, &csum, size)?;
pull_index_chunks(worker, chunk_reader, tgt_store.clone(), index).await?; pull_index_chunks(worker, chunk_reader.clone(), tgt_store.clone(), index, downloaded_chunks).await?;
} }
ArchiveType::Blob => { ArchiveType::Blob => {
let (csum, size) = compute_file_csum(&mut tmpfile)?; let (csum, size) = compute_file_csum(&mut tmpfile)?;
@ -169,6 +231,7 @@ async fn pull_snapshot(
reader: Arc<BackupReader>, reader: Arc<BackupReader>,
tgt_store: Arc<DataStore>, tgt_store: Arc<DataStore>,
snapshot: &BackupDir, snapshot: &BackupDir,
downloaded_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let mut manifest_name = tgt_store.base_path(); let mut manifest_name = tgt_store.base_path();
@ -278,6 +341,7 @@ async fn pull_snapshot(
tgt_store.clone(), tgt_store.clone(),
snapshot, snapshot,
&item, &item,
downloaded_chunks.clone(),
).await?; ).await?;
} }
@ -300,6 +364,7 @@ pub async fn pull_snapshot_from(
reader: Arc<BackupReader>, reader: Arc<BackupReader>,
tgt_store: Arc<DataStore>, tgt_store: Arc<DataStore>,
snapshot: &BackupDir, snapshot: &BackupDir,
downloaded_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let (_path, is_new, _snap_lock) = tgt_store.create_locked_backup_dir(&snapshot)?; let (_path, is_new, _snap_lock) = tgt_store.create_locked_backup_dir(&snapshot)?;
@ -307,7 +372,7 @@ pub async fn pull_snapshot_from(
if is_new { if is_new {
worker.log(format!("sync snapshot {:?}", snapshot.relative_path())); worker.log(format!("sync snapshot {:?}", snapshot.relative_path()));
if let Err(err) = pull_snapshot(worker, reader, tgt_store.clone(), &snapshot).await { if let Err(err) = pull_snapshot(worker, reader, tgt_store.clone(), &snapshot, downloaded_chunks).await {
if let Err(cleanup_err) = tgt_store.remove_backup_dir(&snapshot, true) { if let Err(cleanup_err) = tgt_store.remove_backup_dir(&snapshot, true) {
worker.log(format!("cleanup error - {}", cleanup_err)); worker.log(format!("cleanup error - {}", cleanup_err));
} }
@ -316,7 +381,7 @@ pub async fn pull_snapshot_from(
worker.log(format!("sync snapshot {:?} done", snapshot.relative_path())); worker.log(format!("sync snapshot {:?} done", snapshot.relative_path()));
} else { } else {
worker.log(format!("re-sync snapshot {:?}", snapshot.relative_path())); worker.log(format!("re-sync snapshot {:?}", snapshot.relative_path()));
pull_snapshot(worker, reader, tgt_store.clone(), &snapshot).await?; pull_snapshot(worker, reader, tgt_store.clone(), &snapshot, downloaded_chunks).await?;
worker.log(format!("re-sync snapshot {:?} done", snapshot.relative_path())); worker.log(format!("re-sync snapshot {:?} done", snapshot.relative_path()));
} }
@ -330,6 +395,7 @@ pub async fn pull_group(
tgt_store: Arc<DataStore>, tgt_store: Arc<DataStore>,
group: &BackupGroup, group: &BackupGroup,
delete: bool, delete: bool,
progress: Option<(usize, usize)>, // (groups_done, group_count)
) -> Result<(), Error> { ) -> Result<(), Error> {
let path = format!("api2/json/admin/datastore/{}/snapshots", src_repo.store()); let path = format!("api2/json/admin/datastore/{}/snapshots", src_repo.store());
@ -351,7 +417,20 @@ pub async fn pull_group(
let mut remote_snapshots = std::collections::HashSet::new(); let mut remote_snapshots = std::collections::HashSet::new();
for item in list { let (per_start, per_group) = if let Some((groups_done, group_count)) = progress {
let per_start = (groups_done as f64)/(group_count as f64);
let per_group = 1.0/(group_count as f64);
(per_start, per_group)
} else {
(0.0, 1.0)
};
// start with 16384 chunks (up to 65GB)
let downloaded_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*64)));
let snapshot_count = list.len();
for (pos, item) in list.into_iter().enumerate() {
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?; let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
// in-progress backups can't be synced // in-progress backups can't be synced
@ -372,7 +451,7 @@ pub async fn pull_group(
.password(Some(auth_info.ticket.clone())) .password(Some(auth_info.ticket.clone()))
.fingerprint(fingerprint.clone()); .fingerprint(fingerprint.clone());
let new_client = HttpClient::new(src_repo.host(), src_repo.user(), options)?; let new_client = HttpClient::new(src_repo.host(), src_repo.port(), src_repo.user(), options)?;
let reader = BackupReader::start( let reader = BackupReader::start(
new_client, new_client,
@ -384,7 +463,13 @@ pub async fn pull_group(
true, true,
).await?; ).await?;
pull_snapshot_from(worker, reader, tgt_store.clone(), &snapshot).await?; let result = pull_snapshot_from(worker, reader, tgt_store.clone(), &snapshot, downloaded_chunks.clone()).await;
let percentage = (pos as f64)/(snapshot_count as f64);
let percentage = per_start + percentage*per_group;
worker.log(format!("percentage done: {:.2}%", percentage*100.0));
result?; // stop on error
} }
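The progress log combines the group-level offset with the position inside the current group. A worked check of that formula with made-up numbers (1 of 4 groups done, snapshot 5 of 10 synced):

// Worked example of the percentage computation above; all values are invented.
fn progress(groups_done: usize, group_count: usize, pos: usize, snapshot_count: usize) -> f64 {
    let per_start = groups_done as f64 / group_count as f64; // 0.25
    let per_group = 1.0 / group_count as f64;                // 0.25
    per_start + (pos as f64 / snapshot_count as f64) * per_group
}

fn main() {
    let p = progress(1, 4, 5, 10);
    assert!((p - 0.375).abs() < 1e-9); // logged as "percentage done: 37.50%"
}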
if delete { if delete {
@ -434,6 +519,9 @@ pub async fn pull_store(
new_groups.insert(BackupGroup::new(&item.backup_type, &item.backup_id)); new_groups.insert(BackupGroup::new(&item.backup_type, &item.backup_id));
} }
let group_count = list.len();
let mut groups_done = 0;
for item in list { for item in list {
let group = BackupGroup::new(&item.backup_type, &item.backup_id); let group = BackupGroup::new(&item.backup_type, &item.backup_id);
@ -442,15 +530,24 @@ pub async fn pull_store(
if userid != owner { // only the owner is allowed to create additional snapshots if userid != owner { // only the owner is allowed to create additional snapshots
worker.log(format!("sync group {}/{} failed - owner check failed ({} != {})", worker.log(format!("sync group {}/{} failed - owner check failed ({} != {})",
item.backup_type, item.backup_id, userid, owner)); item.backup_type, item.backup_id, userid, owner));
errors = true; errors = true; // do not stop here, instead continue
continue; // do not stop here, instead continue
}
if let Err(err) = pull_group(worker, client, src_repo, tgt_store.clone(), &group, delete).await { } else {
worker.log(format!("sync group {}/{} failed - {}", item.backup_type, item.backup_id, err));
errors = true; if let Err(err) = pull_group(
continue; // do not stop here, instead continue worker,
client,
src_repo,
tgt_store.clone(),
&group,
delete,
Some((groups_done, group_count)),
).await {
worker.log(format!("sync group {}/{} failed - {}", item.backup_type, item.backup_id, err));
errors = true; // do not stop here, instead continue
}
} }
groups_done += 1;
} }
if delete { if delete {


@ -15,7 +15,7 @@ pub struct RemoteChunkReader {
client: Arc<BackupReader>, client: Arc<BackupReader>,
crypt_config: Option<Arc<CryptConfig>>, crypt_config: Option<Arc<CryptConfig>>,
crypt_mode: CryptMode, crypt_mode: CryptMode,
cache_hint: HashMap<[u8; 32], usize>, cache_hint: Arc<HashMap<[u8; 32], usize>>,
cache: Arc<Mutex<HashMap<[u8; 32], Vec<u8>>>>, cache: Arc<Mutex<HashMap<[u8; 32], Vec<u8>>>>,
} }
@ -33,7 +33,7 @@ impl RemoteChunkReader {
client, client,
crypt_config, crypt_config,
crypt_mode, crypt_mode,
cache_hint, cache_hint: Arc::new(cache_hint),
cache: Arc::new(Mutex::new(HashMap::new())), cache: Arc::new(Mutex::new(HashMap::new())),
} }
} }


@ -97,7 +97,7 @@ where
{ {
let mut path = path.as_ref().to_path_buf(); let mut path = path.as_ref().to_path_buf();
path.set_extension("lck"); path.set_extension("lck");
let lock = open_file_locked(&path, Duration::new(10, 0))?; let lock = open_file_locked(&path, Duration::new(10, 0), true)?;
let backup_user = crate::backup::backup_user()?; let backup_user = crate::backup::backup_user()?;
nix::unistd::chown(&path, Some(backup_user.uid), Some(backup_user.gid))?; nix::unistd::chown(&path, Some(backup_user.uid), Some(backup_user.gid))?;
Ok(lock) Ok(lock)


@ -149,7 +149,7 @@ pub fn compute_file_diff(filename: &str, shadow: &str) -> Result<String, Error>
.output() .output()
.map_err(|err| format_err!("failed to execute diff - {}", err))?; .map_err(|err| format_err!("failed to execute diff - {}", err))?;
let diff = crate::tools::command_output(output, Some(|c| c == 0 || c == 1)) let diff = crate::tools::command_output_as_string(output, Some(|c| c == 0 || c == 1))
.map_err(|err| format_err!("diff failed: {}", err))?; .map_err(|err| format_err!("diff failed: {}", err))?;
Ok(diff) Ok(diff)


@ -39,6 +39,11 @@ pub const REMOTE_PASSWORD_SCHEMA: Schema = StringSchema::new("Password or auth t
host: { host: {
schema: DNS_NAME_OR_IP_SCHEMA, schema: DNS_NAME_OR_IP_SCHEMA,
}, },
port: {
optional: true,
description: "The (optional) port",
type: u16,
},
userid: { userid: {
type: Userid, type: Userid,
}, },
@ -58,6 +63,8 @@ pub struct Remote {
#[serde(skip_serializing_if="Option::is_none")] #[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>, pub comment: Option<String>,
pub host: String, pub host: String,
#[serde(skip_serializing_if="Option::is_none")]
pub port: Option<u16>,
pub userid: Userid, pub userid: Userid,
#[serde(skip_serializing_if="String::is_empty")] #[serde(skip_serializing_if="String::is_empty")]
#[serde(with = "proxmox::tools::serde::string_as_base64")] #[serde(with = "proxmox::tools::serde::string_as_base64")]


@ -17,17 +17,3 @@ pub trait BackupCatalogWriter {
fn add_fifo(&mut self, name: &CStr) -> Result<(), Error>; fn add_fifo(&mut self, name: &CStr) -> Result<(), Error>;
fn add_socket(&mut self, name: &CStr) -> Result<(), Error>; fn add_socket(&mut self, name: &CStr) -> Result<(), Error>;
} }
pub struct DummyCatalogWriter();
impl BackupCatalogWriter for DummyCatalogWriter {
fn start_directory(&mut self, _name: &CStr) -> Result<(), Error> { Ok(()) }
fn end_directory(&mut self) -> Result<(), Error> { Ok(()) }
fn add_file(&mut self, _name: &CStr, _size: u64, _mtime: u64) -> Result<(), Error> { Ok(()) }
fn add_symlink(&mut self, _name: &CStr) -> Result<(), Error> { Ok(()) }
fn add_hardlink(&mut self, _name: &CStr) -> Result<(), Error> { Ok(()) }
fn add_block_device(&mut self, _name: &CStr) -> Result<(), Error> { Ok(()) }
fn add_char_device(&mut self, _name: &CStr) -> Result<(), Error> { Ok(()) }
fn add_fifo(&mut self, _name: &CStr) -> Result<(), Error> { Ok(()) }
fn add_socket(&mut self, _name: &CStr) -> Result<(), Error> { Ok(()) }
}


@ -60,7 +60,7 @@ impl RRA {
let min_time = epoch - (RRD_DATA_ENTRIES as u64)*reso; let min_time = epoch - (RRD_DATA_ENTRIES as u64)*reso;
let min_time = (min_time/reso + 1)*reso; let min_time = (min_time/reso + 1)*reso;
let mut t = last_update - (RRD_DATA_ENTRIES as u64)*reso; let mut t = last_update.saturating_sub((RRD_DATA_ENTRIES as u64)*reso);
let mut index = ((t/reso) % (RRD_DATA_ENTRIES as u64)) as usize; let mut index = ((t/reso) % (RRD_DATA_ENTRIES as u64)) as usize;
for _ in 0..RRD_DATA_ENTRIES { for _ in 0..RRD_DATA_ENTRIES {
t += reso; index = (index + 1) % RRD_DATA_ENTRIES; t += reso; index = (index + 1) % RRD_DATA_ENTRIES;


@ -1,6 +1,7 @@
use std::collections::HashMap; use std::collections::{HashMap, VecDeque};
use std::fs::File; use std::fs::File;
use std::io::{Read, BufRead, BufReader}; use std::path::Path;
use std::io::{Read, Write, BufRead, BufReader};
use std::panic::UnwindSafe; use std::panic::UnwindSafe;
use std::sync::atomic::{AtomicBool, Ordering}; use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex}; use std::sync::{Arc, Mutex};
@ -19,6 +20,7 @@ use proxmox::tools::fs::{create_path, open_file_locked, replace_file, CreateOpti
use super::UPID; use super::UPID;
use crate::tools::logrotate::{LogRotate, LogRotateFiles};
use crate::tools::FileLogger; use crate::tools::FileLogger;
use crate::api2::types::Userid; use crate::api2::types::Userid;
@ -31,6 +33,10 @@ pub const PROXMOX_BACKUP_LOG_DIR: &str = PROXMOX_BACKUP_LOG_DIR_M!();
pub const PROXMOX_BACKUP_TASK_DIR: &str = PROXMOX_BACKUP_TASK_DIR_M!(); pub const PROXMOX_BACKUP_TASK_DIR: &str = PROXMOX_BACKUP_TASK_DIR_M!();
pub const PROXMOX_BACKUP_TASK_LOCK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/.active.lock"); pub const PROXMOX_BACKUP_TASK_LOCK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/.active.lock");
pub const PROXMOX_BACKUP_ACTIVE_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/active"); pub const PROXMOX_BACKUP_ACTIVE_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/active");
pub const PROXMOX_BACKUP_INDEX_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/index");
pub const PROXMOX_BACKUP_ARCHIVE_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/archive");
const MAX_INDEX_TASKS: usize = 1000;
lazy_static! { lazy_static! {
static ref WORKER_TASK_LIST: Mutex<HashMap<usize, Arc<WorkerTask>>> = Mutex::new(HashMap::new()); static ref WORKER_TASK_LIST: Mutex<HashMap<usize, Arc<WorkerTask>>> = Mutex::new(HashMap::new());
@ -325,86 +331,85 @@ pub struct TaskListInfo {
pub state: Option<TaskState>, // endtime, status pub state: Option<TaskState>, // endtime, status
} }
fn lock_task_list_files(exclusive: bool) -> Result<std::fs::File, Error> {
let backup_user = crate::backup::backup_user()?;
let lock = open_file_locked(PROXMOX_BACKUP_TASK_LOCK_FN, std::time::Duration::new(10, 0), exclusive)?;
nix::unistd::chown(PROXMOX_BACKUP_TASK_LOCK_FN, Some(backup_user.uid), Some(backup_user.gid))?;
Ok(lock)
}
/// checks if the Task Archive is bigger than 'size_threshold' bytes, and
/// rotates it if it is
pub fn rotate_task_log_archive(size_threshold: u64, compress: bool, max_files: Option<usize>) -> Result<bool, Error> {
let _lock = lock_task_list_files(true)?;
let path = Path::new(PROXMOX_BACKUP_ARCHIVE_TASK_FN);
let metadata = path.metadata()?;
if metadata.len() > size_threshold {
let mut logrotate = LogRotate::new(PROXMOX_BACKUP_ARCHIVE_TASK_FN, compress).ok_or_else(|| format_err!("could not get archive file names"))?;
let backup_user = crate::backup::backup_user()?;
logrotate.rotate(
CreateOptions::new()
.owner(backup_user.uid)
.group(backup_user.gid),
max_files,
)?;
Ok(true)
} else {
Ok(false)
}
}
// atomically read/update the task list, update status of finished tasks // atomically read/update the task list, update status of finished tasks
// new_upid is added to the list when specified. // new_upid is added to the list when specified.
// Returns a sorted list of known tasks, fn update_active_workers(new_upid: Option<&UPID>) -> Result<(), Error> {
fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, Error> {
let backup_user = crate::backup::backup_user()?; let backup_user = crate::backup::backup_user()?;
let lock = open_file_locked(PROXMOX_BACKUP_TASK_LOCK_FN, std::time::Duration::new(10, 0))?; let lock = lock_task_list_files(true)?;
nix::unistd::chown(PROXMOX_BACKUP_TASK_LOCK_FN, Some(backup_user.uid), Some(backup_user.gid))?;
let reader = match File::open(PROXMOX_BACKUP_ACTIVE_TASK_FN) { let mut finish_list: Vec<TaskListInfo> = read_task_file_from_path(PROXMOX_BACKUP_INDEX_TASK_FN)?;
Ok(f) => Some(BufReader::new(f)), let mut active_list: Vec<TaskListInfo> = read_task_file_from_path(PROXMOX_BACKUP_ACTIVE_TASK_FN)?
Err(err) => { .into_iter()
if err.kind() == std::io::ErrorKind::NotFound { .filter_map(|info| {
None if info.state.is_some() {
} else { // this can happen when the active file still includes finished tasks
bail!("unable to open active worker {:?} - {}", PROXMOX_BACKUP_ACTIVE_TASK_FN, err); finish_list.push(info);
return None;
} }
}
};
let mut active_list = vec![]; if !worker_is_active_local(&info.upid) {
let mut finish_list = vec![]; println!("Detected stopped UPID {}", &info.upid_str);
let now = proxmox::tools::time::epoch_i64();
if let Some(lines) = reader.map(|r| r.lines()) { let status = upid_read_status(&info.upid)
.unwrap_or_else(|_| TaskState::Unknown { endtime: now });
for line in lines { finish_list.push(TaskListInfo {
let line = line?; upid: info.upid,
match parse_worker_status_line(&line) { upid_str: info.upid_str,
Err(err) => bail!("unable to parse active worker status '{}' - {}", line, err), state: Some(status)
Ok((upid_str, upid, state)) => match state { });
None if worker_is_active_local(&upid) => { return None;
active_list.push(TaskListInfo { upid, upid_str, state: None });
},
None => {
println!("Detected stopped UPID {}", upid_str);
let now = proxmox::tools::time::epoch_i64();
let status = upid_read_status(&upid)
.unwrap_or_else(|_| TaskState::Unknown { endtime: now });
finish_list.push(TaskListInfo {
upid, upid_str, state: Some(status)
});
},
Some(status) => {
finish_list.push(TaskListInfo {
upid, upid_str, state: Some(status)
})
}
}
} }
}
} Some(info)
}).collect();
if let Some(upid) = new_upid { if let Some(upid) = new_upid {
active_list.push(TaskListInfo { upid: upid.clone(), upid_str: upid.to_string(), state: None }); active_list.push(TaskListInfo { upid: upid.clone(), upid_str: upid.to_string(), state: None });
} }
// assemble list without duplicates let active_raw = render_task_list(&active_list);
// we include all active tasks,
// and fill up to 1000 entries with finished tasks
let max = 1000; replace_file(
PROXMOX_BACKUP_ACTIVE_TASK_FN,
active_raw.as_bytes(),
CreateOptions::new()
.owner(backup_user.uid)
.group(backup_user.gid),
)?;
let mut task_hash = HashMap::new(); finish_list.sort_unstable_by(|a, b| {
for info in active_list {
task_hash.insert(info.upid_str.clone(), info);
}
for info in finish_list {
if task_hash.len() > max { break; }
if !task_hash.contains_key(&info.upid_str) {
task_hash.insert(info.upid_str.clone(), info);
}
}
let mut task_list: Vec<TaskListInfo> = vec![];
for (_, info) in task_hash { task_list.push(info); }
task_list.sort_unstable_by(|b, a| { // lastest on top
match (&a.state, &b.state) { match (&a.state, &b.state) {
(Some(s1), Some(s2)) => s1.cmp(&s2), (Some(s1), Some(s2)) => s1.cmp(&s2),
(Some(_), None) => std::cmp::Ordering::Less, (Some(_), None) => std::cmp::Ordering::Less,
@ -413,34 +418,198 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
} }
}); });
let mut raw = String::new();
for info in &task_list { let start = if finish_list.len() > MAX_INDEX_TASKS {
if let Some(status) = &info.state { finish_list.len() - MAX_INDEX_TASKS
raw.push_str(&format!("{} {:08X} {}\n", info.upid_str, status.endtime(), status)); } else {
} else { 0
raw.push_str(&info.upid_str); };
raw.push('\n');
} let end = (start+MAX_INDEX_TASKS).min(finish_list.len());
}
let index_raw = if end > start {
render_task_list(&finish_list[start..end])
} else {
"".to_string()
};
replace_file( replace_file(
PROXMOX_BACKUP_ACTIVE_TASK_FN, PROXMOX_BACKUP_INDEX_TASK_FN,
raw.as_bytes(), index_raw.as_bytes(),
CreateOptions::new() CreateOptions::new()
.owner(backup_user.uid) .owner(backup_user.uid)
.group(backup_user.gid), .group(backup_user.gid),
)?; )?;
if !finish_list.is_empty() && start > 0 {
match std::fs::OpenOptions::new().append(true).create(true).open(PROXMOX_BACKUP_ARCHIVE_TASK_FN) {
Ok(mut writer) => {
for info in &finish_list[0..start] {
writer.write_all(render_task_line(&info).as_bytes())?;
}
},
Err(err) => bail!("could not write task archive - {}", err),
}
nix::unistd::chown(PROXMOX_BACKUP_ARCHIVE_TASK_FN, Some(backup_user.uid), Some(backup_user.gid))?;
}
drop(lock); drop(lock);
Ok(task_list) Ok(())
} }
/// Returns a sorted list of known tasks fn render_task_line(info: &TaskListInfo) -> String {
/// let mut raw = String::new();
/// The list is sorted by `(starttime, endtime)` in ascending order if let Some(status) = &info.state {
pub fn read_task_list() -> Result<Vec<TaskListInfo>, Error> { raw.push_str(&format!("{} {:08X} {}\n", info.upid_str, status.endtime(), status));
update_active_workers(None) } else {
raw.push_str(&info.upid_str);
raw.push('\n');
}
raw
}
fn render_task_list(list: &[TaskListInfo]) -> String {
let mut raw = String::new();
for info in list {
raw.push_str(&render_task_line(&info));
}
raw
}
// note this is not locked, caller has to make sure it is
// this will skip (and log) lines that are not valid status lines
fn read_task_file<R: Read>(reader: R) -> Result<Vec<TaskListInfo>, Error>
{
let reader = BufReader::new(reader);
let mut list = Vec::new();
for line in reader.lines() {
let line = line?;
match parse_worker_status_line(&line) {
Ok((upid_str, upid, state)) => list.push(TaskListInfo {
upid_str,
upid,
state
}),
Err(err) => {
eprintln!("unable to parse worker status '{}' - {}", line, err);
continue;
}
};
}
Ok(list)
}
// note this is not locked, caller has to make sure it is
fn read_task_file_from_path<P>(path: P) -> Result<Vec<TaskListInfo>, Error>
where
P: AsRef<std::path::Path> + std::fmt::Debug,
{
let file = match File::open(&path) {
Ok(f) => f,
Err(err) if err.kind() == std::io::ErrorKind::NotFound => return Ok(Vec::new()),
Err(err) => bail!("unable to open task list {:?} - {}", path, err),
};
read_task_file(file)
}
enum TaskFile {
Active,
Index,
Archive,
End,
}
pub struct TaskListInfoIterator {
list: VecDeque<TaskListInfo>,
file: TaskFile,
archive: Option<LogRotateFiles>,
lock: Option<File>,
}
impl TaskListInfoIterator {
pub fn new(active_only: bool) -> Result<Self, Error> {
let (read_lock, active_list) = {
let lock = lock_task_list_files(false)?;
let active_list = read_task_file_from_path(PROXMOX_BACKUP_ACTIVE_TASK_FN)?;
let needs_update = active_list
.iter()
.any(|info| info.state.is_some() || !worker_is_active_local(&info.upid));
if needs_update {
drop(lock);
update_active_workers(None)?;
let lock = lock_task_list_files(false)?;
let active_list = read_task_file_from_path(PROXMOX_BACKUP_ACTIVE_TASK_FN)?;
(lock, active_list)
} else {
(lock, active_list)
}
};
let archive = if active_only {
None
} else {
let logrotate = LogRotate::new(PROXMOX_BACKUP_ARCHIVE_TASK_FN, true).ok_or_else(|| format_err!("could not get archive file names"))?;
Some(logrotate.files())
};
let file = if active_only { TaskFile::End } else { TaskFile::Active };
let lock = if active_only { None } else { Some(read_lock) };
Ok(Self {
list: active_list.into(),
file,
archive,
lock,
})
}
}
impl Iterator for TaskListInfoIterator {
type Item = Result<TaskListInfo, Error>;
fn next(&mut self) -> Option<Self::Item> {
loop {
if let Some(element) = self.list.pop_back() {
return Some(Ok(element));
} else {
match self.file {
TaskFile::Active => {
let index = match read_task_file_from_path(PROXMOX_BACKUP_INDEX_TASK_FN) {
Ok(index) => index,
Err(err) => return Some(Err(err)),
};
self.list.append(&mut index.into());
self.file = TaskFile::Index;
},
TaskFile::Index | TaskFile::Archive => {
if let Some(mut archive) = self.archive.take() {
if let Some(file) = archive.next() {
let list = match read_task_file(file) {
Ok(list) => list,
Err(err) => return Some(Err(err)),
};
self.list.append(&mut list.into());
self.archive = Some(archive);
self.file = TaskFile::Archive;
continue;
}
}
self.file = TaskFile::End;
self.lock.take();
return None;
}
TaskFile::End => return None,
}
}
}
}
} }
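The new TaskListInfoIterator walks the active file first, then the bounded index, then the rotated archive files. A hedged usage sketch (assumes it runs inside this crate, where the type and its fields are visible):

// Sketch: print the most recent 100 finished tasks via the new iterator.
use anyhow::Error;

fn print_recent_finished() -> Result<(), Error> {
    let mut shown = 0;
    for info in TaskListInfoIterator::new(false)? { // false: also read index + archive
        let info = info?; // items are Results, e.g. an archive file may be unreadable
        if let Some(state) = &info.state {
            println!("{} ended: {}", info.upid_str, state);
            shown += 1;
            if shown >= 100 { break; }
        }
    }
    Ok(())
}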
/// Launch long running worker tasks. /// Launch long running worker tasks.


@ -32,6 +32,10 @@ pub mod ticket;
pub mod statistics; pub mod statistics;
pub mod systemd; pub mod systemd;
pub mod nom; pub mod nom;
pub mod logrotate;
mod parallel_handler;
pub use parallel_handler::*;
mod wrapped_reader_stream; mod wrapped_reader_stream;
pub use wrapped_reader_stream::*; pub use wrapped_reader_stream::*;
@ -401,7 +405,7 @@ pub fn normalize_uri_path(path: &str) -> Result<(String, Vec<&str>), Error> {
pub fn command_output( pub fn command_output(
output: std::process::Output, output: std::process::Output,
exit_code_check: Option<fn(i32) -> bool>, exit_code_check: Option<fn(i32) -> bool>,
) -> Result<String, Error> { ) -> Result<Vec<u8>, Error> {
if !output.status.success() { if !output.status.success() {
match output.status.code() { match output.status.code() {
@ -422,8 +426,19 @@ pub fn command_output(
} }
} }
let output = String::from_utf8(output.stdout)?; Ok(output.stdout)
}
/// Helper to check result from std::process::Command output, returns String.
///
/// The exit_code_check() function should return true if the exit code
/// is considered successful.
pub fn command_output_as_string(
output: std::process::Output,
exit_code_check: Option<fn(i32) -> bool>,
) -> Result<String, Error> {
let output = command_output(output, exit_code_check)?;
let output = String::from_utf8(output)?;
Ok(output) Ok(output)
} }
@ -435,7 +450,7 @@ pub fn run_command(
let output = command.output() let output = command.output()
.map_err(|err| format_err!("failed to execute {:?} - {}", command, err))?; .map_err(|err| format_err!("failed to execute {:?} - {}", command, err))?;
let output = crate::tools::command_output(output, exit_code_check) let output = crate::tools::command_output_as_string(output, exit_code_check)
.map_err(|err| format_err!("command {:?} failed - {}", command, err))?; .map_err(|err| format_err!("command {:?} failed - {}", command, err))?;
Ok(output) Ok(output)
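With command_output now returning raw bytes, callers that want text switch to command_output_as_string; run_command keeps its String return type. A hedged sketch of the exit_code_check idea, inlined so it compiles standalone (the crate-internal helpers are only referenced in comments):

// Sketch: run `diff` and accept exit codes 0 (identical) and 1 (differences found),
// mirroring the Some(|c| c == 0 || c == 1) check used by compute_file_diff above.
use anyhow::{format_err, Error};

fn diff_files(a: &str, b: &str) -> Result<String, Error> {
    let output = std::process::Command::new("diff")
        .args(&["-u", a, b])
        .output()
        .map_err(|err| format_err!("failed to execute diff - {}", err))?;
    // inside the crate this would be:
    // crate::tools::command_output_as_string(output, Some(|c| c == 0 || c == 1))
    match output.status.code() {
        Some(c) if c == 0 || c == 1 => Ok(String::from_utf8(output.stdout)?),
        Some(c) => Err(format_err!("diff failed with exit code {}", c)),
        None => Err(format_err!("diff terminated by signal")),
    }
}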

src/tools/logrotate.rs Normal file

@ -0,0 +1,184 @@
use std::path::{Path, PathBuf};
use std::fs::{File, rename};
use std::os::unix::io::FromRawFd;
use std::io::Read;
use anyhow::{bail, Error};
use nix::unistd;
use proxmox::tools::fs::{CreateOptions, make_tmp_file, replace_file};
/// Used for rotating log files and iterating over them
pub struct LogRotate {
base_path: PathBuf,
compress: bool,
}
impl LogRotate {
/// Creates a new instance if the path given is a valid file name
/// (iow. does not end with ..)
/// 'compress' decides if compressed files will be created on
/// rotation, and if it will search for '.zst' files when iterating
pub fn new<P: AsRef<Path>>(path: P, compress: bool) -> Option<Self> {
if path.as_ref().file_name().is_some() {
Some(Self {
base_path: path.as_ref().to_path_buf(),
compress,
})
} else {
None
}
}
/// Returns an iterator over the logrotated file names that exist
pub fn file_names(&self) -> LogRotateFileNames {
LogRotateFileNames {
base_path: self.base_path.clone(),
count: 0,
compress: self.compress
}
}
/// Returns an iterator over the logrotated file handles
pub fn files(&self) -> LogRotateFiles {
LogRotateFiles {
file_names: self.file_names(),
}
}
/// Rotates the files up to 'max_files'
/// if the 'compress' option was given it will compress the newest file
///
/// e.g. rotates
/// foo.2.zst => foo.3.zst
/// foo.1.zst => foo.2.zst
/// foo => foo.1.zst
/// => foo
pub fn rotate(&mut self, options: CreateOptions, max_files: Option<usize>) -> Result<(), Error> {
let mut filenames: Vec<PathBuf> = self.file_names().collect();
if filenames.is_empty() {
return Ok(()); // no file means nothing to rotate
}
let mut next_filename = self.base_path.clone().canonicalize()?.into_os_string();
if self.compress {
next_filename.push(format!(".{}.zst", filenames.len()));
} else {
next_filename.push(format!(".{}", filenames.len()));
}
filenames.push(PathBuf::from(next_filename));
let count = filenames.len();
// rotate all but the first, which we may still have to compress
for i in (1..count-1).rev() {
rename(&filenames[i], &filenames[i+1])?;
}
if self.compress {
let mut source = File::open(&filenames[0])?;
let (fd, tmp_path) = make_tmp_file(&filenames[1], options.clone())?;
let target = unsafe { File::from_raw_fd(fd) };
let mut encoder = match zstd::stream::write::Encoder::new(target, 0) {
Ok(encoder) => encoder,
Err(err) => {
let _ = unistd::unlink(&tmp_path);
bail!("creating zstd encoder failed - {}", err);
}
};
if let Err(err) = std::io::copy(&mut source, &mut encoder) {
let _ = unistd::unlink(&tmp_path);
bail!("zstd encoding failed for file {:?} - {}", &filenames[1], err);
}
if let Err(err) = encoder.finish() {
let _ = unistd::unlink(&tmp_path);
bail!("zstd finish failed for file {:?} - {}", &filenames[1], err);
}
if let Err(err) = rename(&tmp_path, &filenames[1]) {
let _ = unistd::unlink(&tmp_path);
bail!("rename failed for file {:?} - {}", &filenames[1], err);
}
unistd::unlink(&filenames[0])?;
} else {
rename(&filenames[0], &filenames[1])?;
}
// create empty original file
replace_file(&filenames[0], b"", options)?;
if let Some(max_files) = max_files {
// delete all files > max_files
for file in filenames.iter().skip(max_files) {
if let Err(err) = unistd::unlink(file) {
eprintln!("could not remove {:?}: {}", &file, err);
}
}
}
Ok(())
}
}
/// Iterator over logrotated file names
pub struct LogRotateFileNames {
base_path: PathBuf,
count: usize,
compress: bool,
}
impl Iterator for LogRotateFileNames {
type Item = PathBuf;
fn next(&mut self) -> Option<Self::Item> {
if self.count > 0 {
let mut path: std::ffi::OsString = self.base_path.clone().into();
path.push(format!(".{}", self.count));
self.count += 1;
if Path::new(&path).is_file() {
Some(path.into())
} else if self.compress {
path.push(".zst");
if Path::new(&path).is_file() {
Some(path.into())
} else {
None
}
} else {
None
}
} else if self.base_path.is_file() {
self.count += 1;
Some(self.base_path.to_path_buf())
} else {
None
}
}
}
/// Iterator over logrotated files by returning a boxed reader
pub struct LogRotateFiles {
file_names: LogRotateFileNames,
}
impl Iterator for LogRotateFiles {
type Item = Box<dyn Read + Send>;
fn next(&mut self) -> Option<Self::Item> {
let filename = self.file_names.next()?;
let file = File::open(&filename).ok()?;
if filename.extension().unwrap_or(std::ffi::OsStr::new("")) == "zst" {
let encoder = zstd::stream::read::Decoder::new(file).ok()?;
return Some(Box::new(encoder));
}
Some(Box::new(file))
}
}
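A hedged usage sketch of the new LogRotate type (path and retention count are made up): rotate() shifts the base file into numbered, optionally zstd-compressed generations, and files() hands back readers that decode '.zst' transparently:

// Sketch: rotate a log file and read back every generation that still exists.
use std::io::Read;
use anyhow::{format_err, Error};
use proxmox::tools::fs::CreateOptions;

fn rotate_and_dump() -> Result<(), Error> {
    let mut logrotate = LogRotate::new("/var/log/example/task-archive", true)
        .ok_or_else(|| format_err!("invalid base path"))?;

    // keep at most 8 generations, compressing the freshly rotated file
    logrotate.rotate(CreateOptions::new(), Some(8))?;

    for (generation, mut reader) in logrotate.files().enumerate() {
        let mut contents = String::new();
        reader.read_to_string(&mut contents)?; // '.zst' files are decoded on the fly
        println!("generation {}: {} bytes", generation, contents.len());
    }
    Ok(())
}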


@ -0,0 +1,166 @@
use std::sync::{Arc, Mutex};
use std::thread::JoinHandle;
use anyhow::{bail, format_err, Error};
use crossbeam_channel::{bounded, Sender};
/// A handle to send data to the worker thread (implements clone)
pub struct SendHandle<I> {
input: Sender<I>,
abort: Arc<Mutex<Option<String>>>,
}
/// Returns the first error that happened, if any
pub fn check_abort(abort: &Mutex<Option<String>>) -> Result<(), Error> {
let guard = abort.lock().unwrap();
if let Some(err_msg) = &*guard {
return Err(format_err!("{}", err_msg));
}
Ok(())
}
impl<I: Send> SendHandle<I> {
/// Send data to the worker threads
pub fn send(&self, input: I) -> Result<(), Error> {
check_abort(&self.abort)?;
match self.input.send(input) {
Ok(()) => Ok(()),
Err(_) => bail!("send failed - channel closed"),
}
}
}
/// A thread pool which runs the supplied closure
///
/// The send command sends data to the worker threads. If one handler
/// returns an error, we mark the channel as failed and it is no
/// longer possible to send data.
///
/// When done, the 'complete()' method needs to be called to check for
/// outstanding errors.
pub struct ParallelHandler<I> {
handles: Vec<JoinHandle<()>>,
name: String,
input: Option<SendHandle<I>>,
}
impl<I> Clone for SendHandle<I> {
fn clone(&self) -> Self {
Self {
input: self.input.clone(),
abort: Arc::clone(&self.abort),
}
}
}
impl<I: Send + 'static> ParallelHandler<I> {
/// Create a new thread pool, each thread processing incoming data
/// with 'handler_fn'.
pub fn new<F>(name: &str, threads: usize, handler_fn: F) -> Self
where F: Fn(I) -> Result<(), Error> + Send + Clone + 'static,
{
let mut handles = Vec::new();
let (input_tx, input_rx) = bounded::<I>(threads);
let abort = Arc::new(Mutex::new(None));
for i in 0..threads {
let input_rx = input_rx.clone();
let abort = Arc::clone(&abort);
let handler_fn = handler_fn.clone();
handles.push(
std::thread::Builder::new()
.name(format!("{} ({})", name, i))
.spawn(move || loop {
let data = match input_rx.recv() {
Ok(data) => data,
Err(_) => return,
};
match (handler_fn)(data) {
Ok(()) => (),
Err(err) => {
let mut guard = abort.lock().unwrap();
if guard.is_none() {
*guard = Some(err.to_string());
}
}
}
})
.unwrap()
);
}
Self {
handles,
name: name.to_string(),
input: Some(SendHandle {
input: input_tx,
abort,
}),
}
}
/// Returns a cloneable channel to send data to the worker threads
pub fn channel(&self) -> SendHandle<I> {
self.input.as_ref().unwrap().clone()
}
/// Send data to the worker threads
pub fn send(&self, input: I) -> Result<(), Error> {
self.input.as_ref().unwrap().send(input)?;
Ok(())
}
/// Wait for worker threads to complete and check for errors
pub fn complete(mut self) -> Result<(), Error> {
let input = self.input.take().unwrap();
let abort = Arc::clone(&input.abort);
check_abort(&abort)?;
drop(input);
let msg_list = self.join_threads();
// an error might be encountered while waiting for the join
check_abort(&abort)?;
if msg_list.is_empty() {
return Ok(());
}
Err(format_err!("{}", msg_list.join("\n")))
}
fn join_threads(&mut self) -> Vec<String> {
let mut msg_list = Vec::new();
let mut i = 0;
loop {
let handle = match self.handles.pop() {
Some(handle) => handle,
None => break,
};
if let Err(panic) = handle.join() {
match panic.downcast::<&str>() {
Ok(panic_msg) => msg_list.push(
format!("thread {} ({}) paniced: {}", self.name, i, panic_msg)
),
Err(_) => msg_list.push(
format!("thread {} ({}) paniced", self.name, i)
),
}
}
i += 1;
}
msg_list
}
}
// Note: We make sure that all threads will be joined
impl<I> Drop for ParallelHandler<I> {
fn drop(&mut self) {
drop(self.input.take());
while let Some(handle) = self.handles.pop() {
let _ = handle.join();
}
}
}
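A minimal usage sketch of ParallelHandler as defined above: a fixed-size worker pool fed through send(), with complete() joining the threads and surfacing the first handler error. Only the item type and handler body are invented:

// Sketch: process numbers on 4 worker threads via ParallelHandler.
use anyhow::{bail, Error};

fn main() -> Result<(), Error> {
    let pool = ParallelHandler::new("example worker", 4, |item: u64| {
        if item == 13 {
            bail!("unlucky item {}", item); // a handler error aborts further sends
        }
        println!("processed {}", item * item);
        Ok(())
    });

    for i in 0..10u64 {
        pool.send(i)?; // fails fast once any worker has reported an error
    }

    pool.complete() // waits for the workers and returns any outstanding error
}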


@ -1,4 +1,3 @@
/*global Proxmox*/
Ext.define('PBS.Application', { Ext.define('PBS.Application', {
extend: 'Ext.app.Application', extend: 'Ext.app.Application',
@ -6,7 +5,7 @@ Ext.define('PBS.Application', {
appProperty: 'app', appProperty: 'app',
stores: [ stores: [
'NavigationStore' 'NavigationStore',
], ],
layout: 'fit', layout: 'fit',
@ -29,7 +28,7 @@ Ext.define('PBS.Application', {
PBS.view = view; PBS.view = view;
me.view = view; me.view = view;
if (me.currentView != undefined) { if (me.currentView !== undefined) {
me.currentView.destroy(); me.currentView.destroy();
} }
@ -58,7 +57,7 @@ Ext.define('PBS.Application', {
} else { } else {
me.changeView('mainview', true); me.changeView('mainview', true);
} }
} },
}); });
Ext.application('PBS.Application'); Ext.application('PBS.Application');


@ -13,7 +13,7 @@ Ext.define('PBS.Dashboard', {
width: 300, width: 300,
title: gettext('Dashboard Options'), title: gettext('Dashboard Options'),
layout: { layout: {
type: 'auto' type: 'auto',
}, },
items: [{ items: [{
xtype: 'form', xtype: 'form',
@ -28,7 +28,7 @@ Ext.define('PBS.Dashboard', {
minValue: 1, minValue: 1,
maxValue: 24, maxValue: 24,
value: viewModel.get('hours'), value: viewModel.get('hours'),
fieldLabel: gettext('Hours to show') fieldLabel: gettext('Hours to show'),
}], }],
buttons: [{ buttons: [{
text: gettext('Save'), text: gettext('Save'),
@ -39,9 +39,9 @@ Ext.define('PBS.Dashboard', {
var hours = win.down('#hours').getValue(); var hours = win.down('#hours').getValue();
me.setHours(hours, true); me.setHours(hours, true);
win.close(); win.close();
} },
}] }],
}] }],
}).show(); }).show();
}, },
@ -119,7 +119,7 @@ Ext.define('PBS.Dashboard', {
el.select(); el.select();
document.execCommand("copy"); document.execCommand("copy");
}, },
text: gettext('Copy') text: gettext('Copy'),
}, },
{ {
text: gettext('Ok'), text: gettext('Ok'),
@ -140,10 +140,10 @@ Ext.define('PBS.Dashboard', {
me.lookup('longesttasks').updateTasks(top10); me.lookup('longesttasks').updateTasks(top10);
let data = { let data = {
backup: { error: 0, warning: 0, ok: 0, }, backup: { error: 0, warning: 0, ok: 0 },
prune: { error: 0, warning: 0, ok: 0, }, prune: { error: 0, warning: 0, ok: 0 },
garbage_collection: { error: 0, warning: 0, ok: 0, }, garbage_collection: { error: 0, warning: 0, ok: 0 },
sync: { error: 0, warning: 0, ok: 0, }, sync: { error: 0, warning: 0, ok: 0 },
}; };
records.forEach(record => { records.forEach(record => {
@ -166,7 +166,7 @@ Ext.define('PBS.Dashboard', {
var sp = Ext.state.Manager.getProvider(); var sp = Ext.state.Manager.getProvider();
var hours = sp.get('dashboard-hours') || 12; var hours = sp.get('dashboard-hours') || 12;
me.setHours(hours, false); me.setHours(hours, false);
} },
}, },
viewModel: { viewModel: {
@ -177,7 +177,7 @@ Ext.define('PBS.Dashboard', {
fingerprint: "", fingerprint: "",
'bytes_in': 0, 'bytes_in': 0,
'bytes_out': 0, 'bytes_out': 0,
'avg_ptime': 0.0 'avg_ptime': 0.0,
}, },
formulas: { formulas: {
@ -194,11 +194,11 @@ Ext.define('PBS.Dashboard', {
autoDestroy: true, autoDestroy: true,
proxy: { proxy: {
type: 'proxmox', type: 'proxmox',
url: '/api2/json/nodes/localhost/status' url: '/api2/json/nodes/localhost/status',
}, },
listeners: { listeners: {
load: 'updateUsageStats' load: 'updateUsageStats',
} },
}, },
subscription: { subscription: {
storeid: 'dash-subscription', storeid: 'dash-subscription',
@ -209,11 +209,11 @@ Ext.define('PBS.Dashboard', {
autoDestroy: true, autoDestroy: true,
proxy: { proxy: {
type: 'proxmox', type: 'proxmox',
url: '/api2/json/nodes/localhost/subscription' url: '/api2/json/nodes/localhost/subscription',
}, },
listeners: { listeners: {
load: 'updateSubscription' load: 'updateSubscription',
} },
}, },
tasks: { tasks: {
storeid: 'dash-tasks', storeid: 'dash-tasks',
@ -225,19 +225,19 @@ Ext.define('PBS.Dashboard', {
model: 'proxmox-tasks', model: 'proxmox-tasks',
proxy: { proxy: {
type: 'proxmox', type: 'proxmox',
url: '/api2/json/status/tasks' url: '/api2/json/status/tasks',
}, },
listeners: { listeners: {
load: 'updateTasks' load: 'updateTasks',
} },
}, },
} },
}, },
title: gettext('Dashboard') + ' - WIP', title: gettext('Dashboard') + ' - WIP',
layout: { layout: {
type: 'column' type: 'column',
}, },
bodyPadding: '20 0 0 20', bodyPadding: '20 0 0 20',
@ -245,7 +245,7 @@ Ext.define('PBS.Dashboard', {
defaults: { defaults: {
columnWidth: 0.49, columnWidth: 0.49,
xtype: 'panel', xtype: 'panel',
margin: '0 20 20 0' margin: '0 20 20 0',
}, },
scrollable: true, scrollable: true,
@ -268,27 +268,27 @@ Ext.define('PBS.Dashboard', {
], ],
layout: { layout: {
type: 'hbox', type: 'hbox',
align: 'center' align: 'center',
}, },
defaults: { defaults: {
xtype: 'proxmoxGauge', xtype: 'proxmoxGauge',
spriteFontSize: '20px', spriteFontSize: '20px',
flex: 1 flex: 1,
}, },
items: [ items: [
{ {
title: gettext('CPU'), title: gettext('CPU'),
reference: 'cpu' reference: 'cpu',
}, },
{ {
title: gettext('Memory'), title: gettext('Memory'),
reference: 'mem' reference: 'mem',
}, },
{ {
title: gettext('Root Disk'), title: gettext('Root Disk'),
reference: 'root' reference: 'root',
} },
] ],
}, },
{ {
xtype: 'pbsDatastoresStatistics', xtype: 'pbsDatastoresStatistics',
@ -314,7 +314,7 @@ Ext.define('PBS.Dashboard', {
reference: 'subscription', reference: 'subscription',
xtype: 'pbsSubscriptionInfo', xtype: 'pbsSubscriptionInfo',
}, },
] ],
}); });
Ext.define('PBS.dashboard.SubscriptionInfo', { Ext.define('PBS.dashboard.SubscriptionInfo', {
@ -322,7 +322,7 @@ Ext.define('PBS.dashboard.SubscriptionInfo', {
xtype: 'pbsSubscriptionInfo', xtype: 'pbsSubscriptionInfo',
style: { style: {
cursor: 'pointer' cursor: 'pointer',
}, },
layout: { layout: {
@ -382,7 +382,7 @@ Ext.define('PBS.dashboard.SubscriptionInfo', {
fn: function() { fn: function() {
var mainview = this.component.up('mainview'); var mainview = this.component.up('mainview');
mainview.getController().redirectTo('pbsSubscription'); mainview.getController().redirectTo('pbsSubscription');
} },
} },
} },
}); });


@ -6,9 +6,9 @@ Ext.define('pbs-prune-list', {
{ {
name: 'backup-time', name: 'backup-time',
type: 'date', type: 'date',
dateFormat: 'timestamp' dateFormat: 'timestamp',
}, },
] ],
}); });
Ext.define('PBS.DataStorePruneInputPanel', { Ext.define('PBS.DataStorePruneInputPanel', {
@ -52,21 +52,21 @@ Ext.define('PBS.DataStorePruneInputPanel', {
method: "POST", method: "POST",
params: params, params: params,
callback: function() { callback: function() {
return; // for easy breakpoint setting // for easy breakpoint setting
}, },
failure: function (response, opts) { failure: function(response, opts) {
Ext.Msg.alert(gettext('Error'), response.htmlStatus); Ext.Msg.alert(gettext('Error'), response.htmlStatus);
}, },
success: function(response, options) { success: function(response, options) {
var data = response.result.data; var data = response.result.data;
view.prune_store.setData(data); view.prune_store.setData(data);
} },
}); });
}, },
control: { control: {
field: { change: 'reload' } field: { change: 'reload' },
} },
}, },
column1: [ column1: [
@ -111,16 +111,16 @@ Ext.define('PBS.DataStorePruneInputPanel', {
allowBlank: true, allowBlank: true,
fieldLabel: gettext('keep-yearly'), fieldLabel: gettext('keep-yearly'),
minValue: 1, minValue: 1,
} },
], ],
initComponent : function() { initComponent: function() {
var me = this; var me = this;
me.prune_store = Ext.create('Ext.data.Store', { me.prune_store = Ext.create('Ext.data.Store', {
model: 'pbs-prune-list', model: 'pbs-prune-list',
sorters: { property: 'backup-time', direction: 'DESC' } sorters: { property: 'backup-time', direction: 'DESC' },
}); });
me.column2 = [ me.column2 = [
@ -145,14 +145,14 @@ Ext.define('PBS.DataStorePruneInputPanel', {
}, },
{ {
text: "keep", text: "keep",
dataIndex: 'keep' dataIndex: 'keep',
} },
] ],
} },
]; ];
me.callParent(); me.callParent();
} },
}); });
Ext.define('PBS.DataStorePrune', { Ext.define('PBS.DataStorePrune', {
@ -163,7 +163,7 @@ Ext.define('PBS.DataStorePrune', {
isCreate: true, isCreate: true,
initComponent : function() { initComponent: function() {
var me = this; var me = this;
if (!me.datastore) { if (!me.datastore) {
@ -183,10 +183,10 @@ Ext.define('PBS.DataStorePrune', {
xtype: 'pbsDataStorePruneInputPanel', xtype: 'pbsDataStorePruneInputPanel',
url: '/api2/extjs/admin/datastore/' + me.datastore + "/prune", url: '/api2/extjs/admin/datastore/' + me.datastore + "/prune",
backup_type: me.backup_type, backup_type: me.backup_type,
backup_id: me.backup_id backup_id: me.backup_id,
}] }],
}); });
me.callParent(); me.callParent();
} },
}); });


@ -19,10 +19,10 @@ Ext.define('pve-rrd-datastore', {
return 0; return 0;
} }
return (data.io_ticks*1000.0)/ios; return (data.io_ticks*1000.0)/ios;
} },
}, },
{ type: 'date', dateFormat: 'timestamp', name: 'time' } { type: 'date', dateFormat: 'timestamp', name: 'time' },
] ],
}); });
Ext.define('PBS.DataStoreStatistic', { Ext.define('PBS.DataStoreStatistic', {
@ -40,11 +40,11 @@ Ext.define('PBS.DataStoreStatistic', {
throw "no datastore specified"; throw "no datastore specified";
} }
me.tbar = [ '->', { xtype: 'proxmoxRRDTypeSelector' } ]; me.tbar = ['->', { xtype: 'proxmoxRRDTypeSelector' }];
var rrdstore = Ext.create('Proxmox.data.RRDStore', { var rrdstore = Ext.create('Proxmox.data.RRDStore', {
rrdurl: "/api2/json/admin/datastore/" + me.datastore + "/rrd", rrdurl: "/api2/json/admin/datastore/" + me.datastore + "/rrd",
model: 'pve-rrd-datastore' model: 'pve-rrd-datastore',
}); });
me.items = { me.items = {
@ -55,38 +55,38 @@ Ext.define('PBS.DataStoreStatistic', {
defaults: { defaults: {
minHeight: 320, minHeight: 320,
padding: 5, padding: 5,
columnWidth: 1 columnWidth: 1,
}, },
items: [ items: [
{ {
xtype: 'proxmoxRRDChart', xtype: 'proxmoxRRDChart',
title: gettext('Storage usage (bytes)'), title: gettext('Storage usage (bytes)'),
fields: ['total','used'], fields: ['total', 'used'],
fieldTitles: [gettext('Total'), gettext('Storage usage')], fieldTitles: [gettext('Total'), gettext('Storage usage')],
store: rrdstore store: rrdstore,
}, },
{ {
xtype: 'proxmoxRRDChart', xtype: 'proxmoxRRDChart',
title: gettext('Transfer Rate (bytes/second)'), title: gettext('Transfer Rate (bytes/second)'),
fields: ['read_bytes','write_bytes'], fields: ['read_bytes', 'write_bytes'],
fieldTitles: [gettext('Read'), gettext('Write')], fieldTitles: [gettext('Read'), gettext('Write')],
store: rrdstore store: rrdstore,
}, },
{ {
xtype: 'proxmoxRRDChart', xtype: 'proxmoxRRDChart',
title: gettext('Input/Output Operations per Second (IOPS)'), title: gettext('Input/Output Operations per Second (IOPS)'),
fields: ['read_ios','write_ios'], fields: ['read_ios', 'write_ios'],
fieldTitles: [gettext('Read'), gettext('Write')], fieldTitles: [gettext('Read'), gettext('Write')],
store: rrdstore store: rrdstore,
}, },
{ {
xtype: 'proxmoxRRDChart', xtype: 'proxmoxRRDChart',
title: gettext('IO Delay (ms)'), title: gettext('IO Delay (ms)'),
fields: ['io_delay'], fields: ['io_delay'],
fieldTitles: [gettext('IO Delay')], fieldTitles: [gettext('IO Delay')],
store: rrdstore store: rrdstore,
}, },
] ],
}; };
me.listeners = { me.listeners = {
@ -99,6 +99,6 @@ Ext.define('PBS.DataStoreStatistic', {
}; };
me.callParent(); me.callParent();
} },
}); });


@ -7,7 +7,6 @@ Ext.define('PBS.LoginView', {
submitForm: function() { submitForm: function() {
var me = this; var me = this;
var view = me.getView();
var loginForm = me.lookupReference('loginForm'); var loginForm = me.lookupReference('loginForm');
var unField = me.lookupReference('usernameField'); var unField = me.lookupReference('usernameField');
var saveunField = me.lookupReference('saveunField'); var saveunField = me.lookupReference('saveunField');
@ -19,7 +18,7 @@ Ext.define('PBS.LoginView', {
let params = loginForm.getValues(); let params = loginForm.getValues();
params.username = params.username + '@' + params.realm; params.username = params.username + '@' + params.realm;
delete(params.realm); delete params.realm;
if (loginForm.isVisible()) { if (loginForm.isVisible()) {
loginForm.mask(gettext('Please wait...'), 'x-mask-loading'); loginForm.mask(gettext('Please wait...'), 'x-mask-loading');
@ -48,9 +47,9 @@ Ext.define('PBS.LoginView', {
loginForm.unmask(); loginForm.unmask();
Ext.MessageBox.alert( Ext.MessageBox.alert(
gettext('Error'), gettext('Error'),
gettext('Login failed. Please try again') gettext('Login failed. Please try again'),
); );
} },
}); });
}, },
@ -63,7 +62,7 @@ Ext.define('PBS.LoginView', {
pf.focus(false); pf.focus(false);
} }
} }
} },
}, },
'field[name=lang]': { 'field[name=lang]': {
change: function(f, value) { change: function(f, value) {
@ -71,10 +70,10 @@ Ext.define('PBS.LoginView', {
Ext.util.Cookies.set('PBSLangCookie', value, dt); Ext.util.Cookies.set('PBSLangCookie', value, dt);
this.getView().mask(gettext('Please wait...'), 'x-mask-loading'); this.getView().mask(gettext('Please wait...'), 'x-mask-loading');
window.location.reload(); window.location.reload();
} },
}, },
'button[reference=loginButton]': { 'button[reference=loginButton]': {
click: 'submitForm' click: 'submitForm',
}, },
'window[reference=loginwindow]': { 'window[reference=loginwindow]': {
show: function() { show: function() {
@ -85,21 +84,21 @@ Ext.define('PBS.LoginView', {
var checked = sp.get(checkboxField.getStateId()); var checked = sp.get(checkboxField.getStateId());
checkboxField.setValue(checked); checkboxField.setValue(checked);
if(checked === true) { if (checked === true) {
var username = sp.get(unField.getStateId()); var username = sp.get(unField.getStateId());
unField.setValue(username); unField.setValue(username);
var pwField = this.lookupReference('passwordField'); var pwField = this.lookupReference('passwordField');
pwField.focus(); pwField.focus();
} }
} },
} },
} },
}, },
plugins: 'viewport', plugins: 'viewport',
layout: { layout: {
type: 'border' type: 'border',
}, },
items: [ items: [
@ -108,7 +107,7 @@ Ext.define('PBS.LoginView', {
xtype: 'container', xtype: 'container',
layout: { layout: {
type: 'hbox', type: 'hbox',
align: 'middle' align: 'middle',
}, },
margin: '2 5 2 5', margin: '2 5 2 5',
height: 38, height: 38,
@ -119,12 +118,12 @@ Ext.define('PBS.LoginView', {
}, },
{ {
xtype: 'versioninfo', xtype: 'versioninfo',
makeApiCall: false makeApiCall: false,
} },
] ],
}, },
{ {
region: 'center' region: 'center',
}, },
{ {
xtype: 'window', xtype: 'window',
@ -138,7 +137,7 @@ Ext.define('PBS.LoginView', {
defaultFocus: 'usernameField', defaultFocus: 'usernameField',
layout: { layout: {
type: 'auto' type: 'auto',
}, },
title: gettext('Proxmox Backup Server Login'), title: gettext('Proxmox Backup Server Login'),
@ -147,7 +146,7 @@ Ext.define('PBS.LoginView', {
{ {
xtype: 'form', xtype: 'form',
layout: { layout: {
type: 'form' type: 'form',
}, },
defaultButton: 'loginButton', defaultButton: 'loginButton',
url: '/api2/extjs/access/ticket', url: '/api2/extjs/access/ticket',
@ -155,7 +154,7 @@ Ext.define('PBS.LoginView', {
fieldDefaults: { fieldDefaults: {
labelAlign: 'right', labelAlign: 'right',
allowBlank: false allowBlank: false,
}, },
items: [ items: [
@ -165,7 +164,7 @@ Ext.define('PBS.LoginView', {
name: 'username', name: 'username',
itemId: 'usernameField', itemId: 'usernameField',
reference: 'usernameField', reference: 'usernameField',
stateId: 'login-username' stateId: 'login-username',
}, },
{ {
xtype: 'textfield', xtype: 'textfield',
@ -177,7 +176,7 @@ Ext.define('PBS.LoginView', {
}, },
{ {
xtype: 'pmxRealmComboBox', xtype: 'pmxRealmComboBox',
name: 'realm' name: 'realm',
}, },
{ {
xtype: 'proxmoxLanguageSelector', xtype: 'proxmoxLanguageSelector',
@ -185,8 +184,8 @@ Ext.define('PBS.LoginView', {
value: Ext.util.Cookies.get('PBSLangCookie') || Proxmox.defaultLang || 'en', value: Ext.util.Cookies.get('PBSLangCookie') || Proxmox.defaultLang || 'en',
name: 'lang', name: 'lang',
reference: 'langField', reference: 'langField',
submitValue: false submitValue: false,
} },
], ],
buttons: [ buttons: [
{ {
@ -197,16 +196,16 @@ Ext.define('PBS.LoginView', {
stateId: 'login-saveusername', stateId: 'login-saveusername',
labelWidth: 250, labelWidth: 250,
labelAlign: 'right', labelAlign: 'right',
submitValue: false submitValue: false,
}, },
{ {
text: gettext('Login'), text: gettext('Login'),
reference: 'loginButton', reference: 'loginButton',
formBind: true formBind: true,
} },
] ],
} },
] ],
} },
] ],
}); });


@@ -10,11 +10,11 @@ Ext.define('PBS.MainView', {
 ':path:subpath': {
 action: 'changePath',
 before: 'beforeChangePath',
-conditions : {
-':path' : '(?:([%a-zA-Z0-9\\-\\_\\s,\.]+))',
-':subpath' : '(?:(?::)([%a-zA-Z0-9\\-\\_\\s,]+))?'
-}
-}
+conditions: {
+':path': '(?:([%a-zA-Z0-9\\-\\_\\s,.]+))',
+':subpath': '(?:(?::)([%a-zA-Z0-9\\-\\_\\s,]+))?',
+},
+},
 },
 beforeChangePath: function(path, subpath, action) {
@@ -79,7 +79,7 @@ Ext.define('PBS.MainView', {
 obj = contentpanel.add({
 xtype: path,
 nodename: 'localhost',
-border: false
+border: false,
 });
 }
@@ -113,7 +113,6 @@ Ext.define('PBS.MainView', {
 if (lastpanel) {
 contentpanel.remove(lastpanel, { destroy: true });
 }
 },
 logout: function() {
@@ -126,8 +125,8 @@ Ext.define('PBS.MainView', {
 control: {
 '[reference=logoutButton]': {
-click: 'logout'
-}
+click: 'logout',
+},
 },
 init: function(view) {
@@ -139,7 +138,7 @@ Ext.define('PBS.MainView', {
 // show login on requestexception
 // fixme: what about other errors
 Ext.Ajax.on('requestexception', function(conn, response, options) {
-if (response.status == 401) { // auth failure
+if (response.status === 401 || response.status === '401') { // auth failure
 me.logout();
 }
 });
@@ -155,7 +154,7 @@ Ext.define('PBS.MainView', {
 Ext.Ajax.request({
 params: {
 username: Proxmox.UserName,
-password: ticket
+password: ticket,
 },
 url: '/api2/json/access/ticket',
 method: 'POST',
@@ -165,17 +164,17 @@ Ext.define('PBS.MainView', {
 success: function(response, opts) {
 var obj = Ext.decode(response.responseText);
 PBS.Utils.updateLoginData(obj.data);
-}
+},
 });
 },
-interval: 15*60*1000
+interval: 15*60*1000,
 });
 // select treeitem and load page from url fragment, if set
 let token = Ext.util.History.getToken() || 'pbsDashboard';
 this.redirectTo(token, true);
-}
+},
 },
 plugins: 'viewport',
@@ -188,7 +187,7 @@ Ext.define('PBS.MainView', {
 xtype: 'container',
 layout: {
 type: 'hbox',
-align: 'middle'
+align: 'middle',
 },
 margin: '2 0 2 5',
 height: 38,
@@ -229,7 +228,7 @@ Ext.define('PBS.MainView', {
 style: {
 // proxmox dark grey p light grey as border
 backgroundColor: '#464d4d',
-borderColor: '#ABBABA'
+borderColor: '#ABBABA',
 },
 margin: '0 5 0 0',
 iconCls: 'fa fa-user',
@@ -241,7 +240,7 @@ Ext.define('PBS.MainView', {
 },
 ],
 },
-]
+],
 },
 {
 xtype: 'panel',
@@ -250,7 +249,7 @@ Ext.define('PBS.MainView', {
 region: 'west',
 layout: {
 type: 'vbox',
-align: 'stretch'
+align: 'stretch',
 },
 items: [{
 xtype: 'navigationtree',
@@ -260,20 +259,20 @@ Ext.define('PBS.MainView', {
 // because of a bug where a viewcontroller does not detect
 // the selectionchange event of a treelist
 listeners: {
-selectionchange: 'navigate'
-}
+selectionchange: 'navigate',
+},
 }, {
 xtype: 'box',
 cls: 'x-treelist-nav',
-flex: 1
-}]
+flex: 1,
+}],
 },
 {
 xtype: 'panel',
 layout: { type: 'card' },
 region: 'center',
 border: false,
-reference: 'contentpanel'
-}
-]
+reference: 'contentpanel',
+},
+],
 });
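
As a rough illustration (not taken from the patch itself): the ticket keepalive wired up in MainView above re-requests /api2/json/access/ticket every 15 minutes and stores the refreshed credentials via PBS.Utils.updateLoginData. The wrapper function below and the way the current ticket string is passed in are assumptions made only for this sketch.

// Sketch only: periodic ticket renewal as set up in MainView above.
// `ticket` is assumed to be the currently valid ticket string.
function keepTicketAlive(ticket) {
    Ext.TaskManager.start({
	run: function() {
	    Ext.Ajax.request({
		url: '/api2/json/access/ticket',
		method: 'POST',
		params: { username: Proxmox.UserName, password: ticket },
		success: function(response) {
		    PBS.Utils.updateLoginData(Ext.decode(response.responseText).data);
		},
	    });
	},
	interval: 15*60*1000,
    });
}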

View File

@@ -59,18 +59,23 @@ OnlineHelpInfo.js:
 $(MAKE) -C ../docs onlinehelpinfo
 mv ../docs/output/scanrefs/OnlineHelpInfo.js .
-js/proxmox-backup-gui.js: js OnlineHelpInfo.js ${JSSRC}
+js/proxmox-backup-gui.js: .lint-incremental js OnlineHelpInfo.js ${JSSRC}
 cat OnlineHelpInfo.js ${JSSRC} >$@.tmp
 mv $@.tmp $@
-.PHONY: lint
-lint: ${JSSRC}
+.PHONY: check
+check:
 eslint ${JSSRC}
+touch ".lint-incremental"
+.lint-incremental: ${JSSRC}
+eslint $?
+touch "$@"
 .PHONY: clean
 clean:
 find . -name '*~' -exec rm {} ';'
-rm -rf js
+rm -rf js .lint-incremental
 install: js/proxmox-backup-gui.js css/ext6-pbs.css index.hbs
 install -dm755 $(DESTDIR)$(JSDIR)

View File

@@ -10,7 +10,7 @@ Ext.define('PBS.store.NavigationStore', {
 text: gettext('Dashboard'),
 iconCls: 'fa fa-tachometer',
 path: 'pbsDashboard',
-leaf: true
+leaf: true,
 },
 {
 text: gettext('Configuration'),
@@ -22,13 +22,13 @@ Ext.define('PBS.store.NavigationStore', {
 text: gettext('User Management'),
 iconCls: 'fa fa-user',
 path: 'pbsUserView',
-leaf: true
+leaf: true,
 },
 {
 text: gettext('Permissions'),
 iconCls: 'fa fa-unlock',
 path: 'pbsACLView',
-leaf: true
+leaf: true,
 },
 {
 text: gettext('Remotes'),
@@ -46,9 +46,9 @@ Ext.define('PBS.store.NavigationStore', {
 text: gettext('Subscription'),
 iconCls: 'fa fa-support',
 path: 'pbsSubscription',
-leaf: true
-}
-]
+leaf: true,
+},
+],
 },
 {
 text: gettext('Administration'),
@@ -75,19 +75,19 @@ Ext.define('PBS.store.NavigationStore', {
 path: 'pbsZFSList',
 leaf: true,
 },
-]
-}
-]
+],
+},
+],
 },
 {
 text: gettext('Datastore'),
 iconCls: 'fa fa-archive',
 path: 'pbsDataStoreConfig',
 expanded: true,
-leaf: false
+leaf: false,
 },
-]
-}
+],
+},
 });
 Ext.define('PBS.view.main.NavigationTree', {
@@ -98,13 +98,12 @@ Ext.define('PBS.view.main.NavigationTree', {
 xclass: 'Ext.app.ViewController',
 init: function(view) {
 view.rstore = Ext.create('Proxmox.data.UpdateStore', {
 autoStart: true,
 interval: 15 * 1000,
-storeId: 'pbs-datastore-list',
 storeid: 'pbs-datastore-list',
-model: 'pbs-datastore-list'
+model: 'pbs-datastore-list',
 });
 view.rstore.on('load', this.onLoad, this);
@@ -119,7 +118,7 @@ Ext.define('PBS.view.main.NavigationTree', {
 // FIXME: newly added always get appended to the end..
 records.sort((a, b) => {
 if (a.id > b.id) return 1;
 if (a.id < b.id) return -1;
 return 0;
 });
@@ -128,29 +127,28 @@ Ext.define('PBS.view.main.NavigationTree', {
 var length = records.length;
 var lookup_hash = {};
 for (var i = 0; i < length; i++) {
-var name = records[i].id;
+let name = records[i].id;
 lookup_hash[name] = true;
 if (!list.findChild('text', name, false)) {
 list.appendChild({
 text: name,
 path: `DataStore-${name}`,
 iconCls: 'fa fa-database',
-leaf: true
+leaf: true,
 });
 }
 }
 var erase_list = [];
 list.eachChild(function(node) {
-var name = node.data.text;
+let name = node.data.text;
 if (!lookup_hash[name]) {
 erase_list.push(node);
 }
 });
 Ext.Array.forEach(erase_list, function(node) { node.erase(); });
-}
+},
 },
 select: function(path) {
@@ -163,5 +161,5 @@ Ext.define('PBS.view.main.NavigationTree', {
 expanderOnly: true,
 expanderFirst: false,
 store: 'NavigationStore',
-ui: 'nav'
+ui: 'nav',
 });
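
As a side note, the onLoad handler touched above keeps the "Datastore" tree node in sync with the datastore list store: add a child per record that is missing, then erase children that no longer have a record. A condensed sketch of that reconciliation (the helper name is invented, the logic mirrors the hunk):

// Sketch only: `list` is the Datastore tree node, `records` the store records.
function syncDatastoreNodes(list, records) {
    let seen = {};
    records.forEach(function(rec) {
	let name = rec.id;
	seen[name] = true;
	if (!list.findChild('text', name, false)) {
	    list.appendChild({
		text: name,
		path: `DataStore-${name}`,
		iconCls: 'fa fa-database',
		leaf: true,
	    });
	}
    });
    let stale = [];
    list.eachChild(function(node) {
	if (!seen[node.data.text]) { stale.push(node); }
    });
    stale.forEach(function(node) { node.erase(); });
}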

View File

@@ -3,6 +3,22 @@ const proxmoxOnlineHelpInfo = {
 "link": "/docs/index.html",
 "title": "Proxmox Backup Server Documentation Index"
 },
+"datastore-intro": {
+"link": "/docs/administration-guide.html#datastore-intro",
+"title": ":term:`DataStore`"
+},
+"user-mgmt": {
+"link": "/docs/administration-guide.html#user-mgmt",
+"title": "User Management"
+},
+"user-acl": {
+"link": "/docs/administration-guide.html#user-acl",
+"title": "Access Control"
+},
+"backup-remote": {
+"link": "/docs/administration-guide.html#backup-remote",
+"title": ":term:`Remote`"
+},
 "syncjobs": {
 "link": "/docs/administration-guide.html#syncjobs",
 "title": "Sync Jobs"

View File

@@ -1,4 +1,3 @@
-/*global Proxmox*/
 Ext.define('PBS.ServerAdministration', {
 extend: 'Ext.tab.Panel',
 alias: 'widget.pbsServerAdministration',
@@ -14,13 +13,13 @@ Ext.define('PBS.ServerAdministration', {
 init: function(view) {
 var upgradeBtn = view.lookupReference('upgradeBtn');
 upgradeBtn.setDisabled(!(Proxmox.UserName && Proxmox.UserName === 'root@pam'));
-}
+},
 },
 items: [
 {
 xtype: 'pbsServerStatus',
-itemId: 'status'
+itemId: 'status',
 },
 {
 xtype: 'proxmoxNodeServiceView',
@@ -32,7 +31,7 @@ Ext.define('PBS.ServerAdministration', {
 'proxmox-backup': true,
 'proxmox-backup-proxy': true,
 },
-nodename: 'localhost'
+nodename: 'localhost',
 },
 {
 xtype: 'proxmoxNodeAPT',
@@ -44,10 +43,10 @@ Ext.define('PBS.ServerAdministration', {
 text: gettext('Upgrade'),
 handler: function() {
 Proxmox.Utils.openXtermJsViewer('upgrade', 0, 'localhost');
-}
+},
 },
 itemId: 'updates',
-nodename: 'localhost'
+nodename: 'localhost',
 },
 {
 xtype: 'proxmoxJournalView',
@@ -60,9 +59,9 @@ Ext.define('PBS.ServerAdministration', {
 itemId: 'tasks',
 title: gettext('Tasks'),
 height: 'auto',
-nodename: 'localhost'
-}
-]
+nodename: 'localhost',
+},
+],
 });

View File

@@ -6,14 +6,14 @@ Ext.define('pve-rrd-node', {
 // percentage
 convert: function(value) {
 return value*100;
-}
+},
 },
 {
 name: 'iowait',
 // percentage
 convert: function(value) {
 return value*100;
-}
+},
 },
 'netin',
 'netout',
@@ -33,15 +33,15 @@ Ext.define('pve-rrd-node', {
 let ios = 0;
 if (data.read_ios !== undefined) { ios += data.read_ios; }
 if (data.write_ios !== undefined) { ios += data.write_ios; }
-if (ios == 0 || data.io_ticks === undefined) {
+if (ios === 0 || data.io_ticks === undefined) {
 return undefined;
 }
 return (data.io_ticks*1000.0)/ios;
-}
+},
 },
 'loadavg',
-{ type: 'date', dateFormat: 'timestamp', name: 'time' }
-]
+{ type: 'date', dateFormat: 'timestamp', name: 'time' },
+],
 });
 Ext.define('PBS.ServerStatus', {
 extend: 'Ext.panel.Panel',
@@ -62,7 +62,7 @@ Ext.define('PBS.ServerStatus', {
 waitMsgTarget: me,
 failure: function(response, opts) {
 Ext.Msg.alert(gettext('Error'), response.htmlStatus);
-}
+},
 });
 };
@@ -73,7 +73,7 @@ Ext.define('PBS.ServerStatus', {
 handler: function() {
 node_command('reboot');
 },
-iconCls: 'fa fa-undo'
+iconCls: 'fa fa-undo',
 });
 var shutdownBtn = Ext.create('Proxmox.button.Button', {
@@ -83,7 +83,7 @@ Ext.define('PBS.ServerStatus', {
 handler: function() {
 node_command('shutdown');
 },
-iconCls: 'fa fa-power-off'
+iconCls: 'fa fa-power-off',
 });
 var consoleBtn = Ext.create('Proxmox.button.Button', {
@@ -91,14 +91,14 @@ Ext.define('PBS.ServerStatus', {
 iconCls: 'fa fa-terminal',
 handler: function() {
 Proxmox.Utils.openXtermJsViewer('shell', 0, Proxmox.NodeName);
-}
+},
 });
-me.tbar = [ consoleBtn, restartBtn, shutdownBtn, '->', { xtype: 'proxmoxRRDTypeSelector' } ];
+me.tbar = [consoleBtn, restartBtn, shutdownBtn, '->', { xtype: 'proxmoxRRDTypeSelector' }];
 var rrdstore = Ext.create('Proxmox.data.RRDStore', {
 rrdurl: "/api2/json/nodes/localhost/rrd",
-model: 'pve-rrd-node'
+model: 'pve-rrd-node',
 });
 me.items = {
@@ -109,72 +109,72 @@ Ext.define('PBS.ServerStatus', {
 defaults: {
 minHeight: 320,
 padding: 5,
-columnWidth: 1
+columnWidth: 1,
 },
 items: [
 {
 xtype: 'proxmoxRRDChart',
 title: gettext('CPU usage'),
-fields: ['cpu','iowait'],
+fields: ['cpu', 'iowait'],
 fieldTitles: [gettext('CPU usage'), gettext('IO wait')],
-store: rrdstore
+store: rrdstore,
 },
 {
 xtype: 'proxmoxRRDChart',
 title: gettext('Server load'),
 fields: ['loadavg'],
 fieldTitles: [gettext('Load average')],
-store: rrdstore
+store: rrdstore,
 },
 {
 xtype: 'proxmoxRRDChart',
 title: gettext('Memory usage'),
-fields: ['memtotal','memused'],
+fields: ['memtotal', 'memused'],
 fieldTitles: [gettext('Total'), gettext('RAM usage')],
-store: rrdstore
+store: rrdstore,
 },
 {
 xtype: 'proxmoxRRDChart',
 title: gettext('Swap usage'),
-fields: ['swaptotal','swapused'],
+fields: ['swaptotal', 'swapused'],
 fieldTitles: [gettext('Total'), gettext('Swap usage')],
-store: rrdstore
+store: rrdstore,
 },
 {
 xtype: 'proxmoxRRDChart',
 title: gettext('Network traffic'),
-fields: ['netin','netout'],
-store: rrdstore
+fields: ['netin', 'netout'],
+store: rrdstore,
 },
 {
 xtype: 'proxmoxRRDChart',
 title: gettext('Root Disk usage'),
-fields: ['total','used'],
+fields: ['total', 'used'],
 fieldTitles: [gettext('Total'), gettext('Disk usage')],
-store: rrdstore
+store: rrdstore,
 },
 {
 xtype: 'proxmoxRRDChart',
 title: gettext('Root Disk Transfer Rate (bytes/second)'),
-fields: ['read_bytes','write_bytes'],
+fields: ['read_bytes', 'write_bytes'],
 fieldTitles: [gettext('Read'), gettext('Write')],
-store: rrdstore
+store: rrdstore,
 },
 {
 xtype: 'proxmoxRRDChart',
 title: gettext('Root Disk Input/Output Operations per Second (IOPS)'),
-fields: ['read_ios','write_ios'],
+fields: ['read_ios', 'write_ios'],
 fieldTitles: [gettext('Read'), gettext('Write')],
-store: rrdstore
+store: rrdstore,
 },
 {
 xtype: 'proxmoxRRDChart',
 title: gettext('Root Disk IO Delay (ms)'),
 fields: ['io_delay'],
 fieldTitles: [gettext('IO Delay')],
-store: rrdstore
+store: rrdstore,
 },
-]
+],
 };
 me.listeners = {
@@ -187,6 +187,6 @@ Ext.define('PBS.ServerStatus', {
 };
 me.callParent();
-}
+},
 });

View File

@@ -1,4 +1,3 @@
-/*global Blob,Proxmox*/
 Ext.define('PBS.SubscriptionKeyEdit', {
 extend: 'Proxmox.window.Edit',
@@ -12,8 +11,8 @@ Ext.define('PBS.SubscriptionKeyEdit', {
 xtype: 'textfield',
 name: 'key',
 value: '',
-fieldLabel: gettext('Subscription Key')
-}
+fieldLabel: gettext('Subscription Key'),
+},
 });
 Ext.define('PBS.Subscription', {
@@ -27,10 +26,10 @@ Ext.define('PBS.Subscription', {
 onlineHelp: 'getting_help',
 viewConfig: {
-enableTextSelection: true
+enableTextSelection: true,
 },
-initComponent : function() {
+initComponent: function() {
 var me = this;
 var reload = function() {
@@ -40,7 +39,6 @@ Ext.define('PBS.Subscription', {
 var baseurl = '/nodes/localhost/subscription';
 var render_status = function(value) {
 var message = me.getObjectValue('message');
 if (message) {
@@ -51,31 +49,31 @@ Ext.define('PBS.Subscription', {
 var rows = {
 productname: {
-header: gettext('Type')
+header: gettext('Type'),
 },
 key: {
-header: gettext('Subscription Key')
+header: gettext('Subscription Key'),
 },
 status: {
 header: gettext('Status'),
-renderer: render_status
+renderer: render_status,
 },
 message: {
-visible: false
+visible: false,
 },
 serverid: {
-header: gettext('Server ID')
+header: gettext('Server ID'),
 },
 sockets: {
-header: gettext('Sockets')
+header: gettext('Sockets'),
 },
 checktime: {
 header: gettext('Last checked'),
-renderer: Proxmox.Utils.render_timestamp
+renderer: Proxmox.Utils.render_timestamp,
 },
 nextduedate: {
-header: gettext('Next due date')
-}
+header: gettext('Next due date'),
+},
 };
 Ext.apply(me, {
@@ -86,11 +84,11 @@ Ext.define('PBS.Subscription', {
 text: gettext('Upload Subscription Key'),
 handler: function() {
 var win = Ext.create('PBS.SubscriptionKeyEdit', {
-url: '/api2/extjs/' + baseurl
+url: '/api2/extjs/' + baseurl,
 });
 win.show();
 win.on('destroy', reload);
-}
+},
 },
 {
 text: gettext('Check'),
@@ -103,16 +101,16 @@ Ext.define('PBS.Subscription', {
 failure: function(response, opts) {
 Ext.Msg.alert(gettext('Error'), response.htmlStatus);
 },
-callback: reload
+callback: reload,
 });
-}
-}
+},
+},
 ],
-rows: rows
+rows: rows,
 });
 me.callParent();
 reload();
-}
+},
 });

View File

@@ -1,5 +1,3 @@
-/*global Proxmox*/
 Ext.define('PBS.SystemConfiguration', {
 extend: 'Ext.tab.Panel',
 xtype: 'pbsSystemConfiguration',
@@ -16,23 +14,23 @@ Ext.define('PBS.SystemConfiguration', {
 layout: {
 type: 'vbox',
 align: 'stretch',
-multi: true
+multi: true,
 },
 defaults: {
 collapsible: true,
 animCollapse: false,
-margin: '10 10 0 10'
+margin: '10 10 0 10',
 },
 items: [
 {
 title: gettext('Time'),
 xtype: 'proxmoxNodeTimeView',
-nodename: 'localhost'
+nodename: 'localhost',
 },
 {
 title: gettext('DNS'),
 xtype: 'proxmoxNodeDNSView',
-nodename: 'localhost'
+nodename: 'localhost',
 },
 {
 flex: 1,
@@ -41,28 +39,22 @@ Ext.define('PBS.SystemConfiguration', {
 xtype: 'proxmoxNodeNetworkView',
 showApplyBtn: true,
 types: ['bond', 'bridge', 'vlan'],
-nodename: 'localhost'
+nodename: 'localhost',
 },
-]
-// },
-// {
-// itemId: 'options',
-// title: gettext('Options'),
-// html: "TESWT"
-// xtype: 'pbsSystemOptions'
-}
+],
+},
 ],
 initComponent: function() {
-var me = this;
+let me = this;
 me.callParent();
-var networktime = me.getComponent('network');
+let networktime = me.getComponent('network');
 Ext.Array.forEach(networktime.query(), function(item) {
-item.relayEvents(networktime, [ 'activate', 'deactivate', 'destroy']);
+item.relayEvents(networktime, ['activate', 'deactivate', 'destroy']);
 });
-}
+},
 });

View File

@@ -1,4 +1,3 @@
-/*global Proxmox */
 Ext.ns('PBS');
 console.log("Starting Backup Server GUI");
@@ -7,7 +6,6 @@ Ext.define('PBS.Utils', {
 singleton: true,
 updateLoginData: function(data) {
 Proxmox.Utils.setAuthData(data);
 },
@@ -74,13 +72,13 @@ Ext.define('PBS.Utils', {
 render_datastore_worker_id: function(id, what) {
 const res = id.match(/^(\S+?)_(\S+?)_(\S+?)(_(.+))?$/);
 if (res) {
-let datastore = res[1], type = res[2], id = res[3];
+let datastore = res[1], backupGroup = `${res[2]}/${res[3]}`;
 if (res[4] !== undefined) {
 let datetime = Ext.Date.parse(parseInt(res[5], 16), 'U');
 let utctime = PBS.Utils.render_datetime_utc(datetime);
-return `Datastore ${datastore} ${what} ${type}/${id}/${utctime}`;
+return `Datastore ${datastore} ${what} ${backupGroup}/${utctime}`;
 } else {
-return `Datastore ${datastore} ${what} ${type}/${id}`;
+return `Datastore ${datastore} ${what} ${backupGroup}`;
 }
 }
 return `Datastore ${what} ${id}`;
@@ -91,21 +89,14 @@ Ext.define('PBS.Utils', {
 // do whatever you want here
 Proxmox.Utils.override_task_descriptions({
-garbage_collection: ['Datastore', gettext('Garbage collect') ],
-sync: ['Datastore', gettext('Remote Sync') ],
-syncjob: [gettext('Sync Job'), gettext('Remote Sync') ],
-prune: (type, id) => {
-return PBS.Utils.render_datastore_worker_id(id, gettext('Prune'));
-},
-verify: (type, id) => {
-return PBS.Utils.render_datastore_worker_id(id, gettext('Verify'));
-},
-backup: (type, id) => {
-return PBS.Utils.render_datastore_worker_id(id, gettext('Backup'));
-},
-reader: (type, id) => {
-return PBS.Utils.render_datastore_worker_id(id, gettext('Read objects'));
-},
+garbage_collection: ['Datastore', gettext('Garbage collect')],
+sync: ['Datastore', gettext('Remote Sync')],
+syncjob: [gettext('Sync Job'), gettext('Remote Sync')],
+prune: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Prune')),
+verify: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Verify')),
+backup: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Backup')),
+reader: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Read objects')),
+logrotate: [gettext('Log'), gettext('Rotation')],
 });
-}
+},
 });
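
For orientation, the reworked render_datastore_worker_id above decodes worker ids of the form datastore_type_id[_hextime] into a datastore, a backup group and an optional UTC time. A condensed restatement as a standalone sketch (helper name and the example id are made up):

// Sketch only: e.g. describeWorkerId('store1_vm_100_5f766aa4', 'Backup').
function describeWorkerId(id, what) {
    const res = id.match(/^(\S+?)_(\S+?)_(\S+?)(_(.+))?$/);
    if (!res) {
	return `Datastore ${what} ${id}`;
    }
    let datastore = res[1], backupGroup = `${res[2]}/${res[3]}`;
    if (res[4] === undefined) {
	return `Datastore ${datastore} ${what} ${backupGroup}`;
    }
    // the suffix is the backup time as a hexadecimal epoch
    let datetime = Ext.Date.parse(parseInt(res[5], 16), 'U');
    let utctime = PBS.Utils.render_datetime_utc(datetime);
    return `Datastore ${datastore} ${what} ${backupGroup}/${utctime}`;
}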

View File

@@ -1,19 +1,18 @@
-/*global Proxmox*/
-Ext.define('PBS.view.main.VersionInfo',{
+Ext.define('PBS.view.main.VersionInfo', {
 extend: 'Ext.Component',
 xtype: 'versioninfo',
 makeApiCall: true,
 data: {
-version: false
+version: false,
 },
 tpl: [
 'Backup Server',
 '<tpl if="version">',
 ' {version}-{release}',
-'</tpl>'
+'</tpl>',
 ],
 initComponent: function() {
@@ -26,8 +25,8 @@ Ext.define('PBS.view.main.VersionInfo',{
 method: 'GET',
 success: function(response) {
 me.update(response.result.data);
-}
+},
 });
 }
-}
+},
 });

View File

@@ -14,7 +14,7 @@ Ext.define('PBS.admin.ZFSList', {
 nodename: me.nodename,
 listeners: {
 destroy: function() { me.reload(); },
-}
+},
 }).show();
 },
@@ -49,7 +49,7 @@ Ext.define('PBS.admin.ZFSList', {
 }
 let url = `/api2/json/nodes/${view.nodename}/disks/zfs`;
-view.getStore().getProxy().setUrl(url)
+view.getStore().getProxy().setUrl(url);
 Proxmox.Utils.monStoreErrors(view, view.getStore(), true);
@@ -61,34 +61,34 @@ Ext.define('PBS.admin.ZFSList', {
 {
 text: gettext('Name'),
 dataIndex: 'name',
-flex: 1
+flex: 1,
 },
 {
 header: gettext('Size'),
 renderer: Proxmox.Utils.format_size,
-dataIndex: 'size'
+dataIndex: 'size',
 },
 {
 header: gettext('Free'),
 renderer: Proxmox.Utils.format_size,
-dataIndex: 'free'
+dataIndex: 'free',
 },
 {
 header: gettext('Allocated'),
 renderer: Proxmox.Utils.format_size,
-dataIndex: 'alloc'
+dataIndex: 'alloc',
 },
 {
 header: gettext('Fragmentation'),
 renderer: function(value) {
 return value.toString() + '%';
 },
-dataIndex: 'frag'
+dataIndex: 'frag',
 },
 {
 header: gettext('Health'),
 renderer: Proxmox.Utils.render_zfs_health,
-dataIndex: 'health'
+dataIndex: 'health',
 },
 {
 header: gettext('Deduplication'),
@@ -96,8 +96,8 @@ Ext.define('PBS.admin.ZFSList', {
 renderer: function(value) {
 return value.toFixed(2).toString() + 'x';
 },
-dataIndex: 'dedup'
-}
+dataIndex: 'dedup',
+},
 ],
 rootVisible: false,
@@ -118,7 +118,7 @@ Ext.define('PBS.admin.ZFSList', {
 xtype: 'proxmoxButton',
 disabled: true,
 handler: 'openDetailWindow',
-}
+},
 ],
 listeners: {
@@ -130,7 +130,7 @@ Ext.define('PBS.admin.ZFSList', {
 proxy: {
 type: 'proxmox',
 },
-sorters: 'name'
+sorters: 'name',
 },
 });

View File

@@ -47,7 +47,7 @@ Ext.define('PBS.config.ACLView', {
 removeACL: function(btn, event, rec) {
 let me = this;
 Proxmox.Utils.API2Request({
-url:'/access/acl',
+url: '/access/acl',
 method: 'PUT',
 params: {
 'delete': 1,
@@ -58,7 +58,7 @@ Ext.define('PBS.config.ACLView', {
 callback: function() {
 me.reload();
 },
-failure: function (response, opts) {
+failure: function(response, opts) {
 Ext.Msg.alert(gettext('Error'), response.htmlStatus);
 },
 });

View File

@@ -1,11 +1,11 @@
 Ext.define('pbs-datastore-list', {
 extend: 'Ext.data.Model',
-fields: [ 'name', 'comment' ],
+fields: ['name', 'comment'],
 proxy: {
 type: 'proxmox',
-url: "/api2/json/admin/datastore"
+url: "/api2/json/admin/datastore",
 },
-idProperty: 'store'
+idProperty: 'store',
 });
 Ext.define('pbs-data-store-config', {
@@ -209,7 +209,7 @@ Ext.define('PBS.DataStoreConfig', {
 dataIndex: 'keep-yearly',
 width: 70,
 },
-]
+],
 },
 {
 header: gettext('Comment'),

View File

@@ -1,6 +1,21 @@
 Ext.define('pmx-remotes', {
 extend: 'Ext.data.Model',
-fields: [ 'name', 'host', 'userid', 'fingerprint', 'comment' ],
+fields: ['name', 'host', 'port', 'userid', 'fingerprint', 'comment',
+{
+name: 'server',
+calculate: function(data) {
+let txt = data.host || "localhost";
+let port = data.port || "8007";
+if (port.toString() !== "8007") {
+if (Proxmox.Utils.IP6_match.test(txt)) {
+txt = `[${txt}]`;
+}
+txt += `:${port}`;
+}
+return txt;
+},
+},
+],
 idProperty: 'name',
 proxy: {
 type: 'proxmox',
@@ -109,7 +124,7 @@ Ext.define('PBS.config.RemoteView', {
 header: gettext('Host'),
 width: 200,
 sortable: true,
-dataIndex: 'host',
+dataIndex: 'server',
 },
 {
 header: gettext('User name'),
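
The new calculated 'server' field above drives the Host column: it shows the host as-is when the port is the default 8007, otherwise host:port with IPv6 addresses wrapped in brackets. A standalone sketch of that formatting (function name and example values are made up):

// Sketch only: mirrors the calculate() logic of the 'server' field above.
function formatServer(host, port) {
    let txt = host || "localhost";
    port = port || "8007";
    if (port.toString() !== "8007") {
	if (Proxmox.Utils.IP6_match.test(txt)) {
	    txt = `[${txt}]`;
	}
	txt += `:${port}`;
    }
    return txt;
}
// e.g. formatServer('192.0.2.1', '8008') -> '192.0.2.1:8008'
//      formatServer('fd00::1', '8008')   -> '[fd00::1]:8008'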

View File

@@ -69,7 +69,7 @@ Ext.define('PBS.config.SyncJobView', {
 if (!upid) return;
 Ext.create('Proxmox.window.TaskViewer', {
-upid
+upid,
 }).show();
 },
@@ -116,13 +116,13 @@ Ext.define('PBS.config.SyncJobView', {
 text = Proxmox.Utils.unknownText;
 break;
 case 'error':
 icon = 'times critical';
 text = Proxmox.Utils.errorText + ': ' + value;
 break;
 case 'warning':
 icon = 'exclamation warning';
 break;
 case 'ok':
 icon = 'check good';
 text = gettext("OK");
 }

View File

@@ -13,7 +13,7 @@ Ext.define('pbs-datastore-statistics', {
 }
 return last;
 });
-}
+},
 },
 {
 name: 'usage',
@@ -25,13 +25,13 @@ Ext.define('pbs-datastore-statistics', {
 } else {
 return -1;
 }
-}
+},
 },
 ],
 proxy: {
 type: 'proxmox',
-url: "/api2/json/status/datastore-usage"
+url: "/api2/json/status/datastore-usage",
 },
 idProperty: 'store',
 });
@@ -57,7 +57,7 @@ Ext.define('PBS.DatastoreStatistics', {
 let timespan = (estimate - now)/1000;
-if (+estimate <= +now || isNaN(timespan)) {
+if (Number(estimate) <= Number(now) || isNaN(timespan)) {
 return gettext('Never');
 }
@@ -131,8 +131,8 @@ Ext.define('PBS.DatastoreStatistics', {
 lineWidth: 0,
 chartRangeMin: 0,
 chartRangeMax: 1,
-tipTpl: '{y:number("0.00")*100}%'
-}
+tipTpl: '{y:number("0.00")*100}%',
+},
 },
 ],
@@ -150,4 +150,4 @@ Ext.define('PBS.DatastoreStatistics', {
 },
 },
-})
+});
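
The renderer change above only swaps the unary-plus coercion for Number(); the surrounding logic, roughly, shows 'Never' once the estimated-full date is not in the future. A sketch under that assumption (the final formatting helper is assumed, not taken from the hunk):

// Sketch only, not the actual column renderer.
function renderEstimatedFull(estimate) {
    let now = new Date();
    let timespan = (estimate - now)/1000;
    if (Number(estimate) <= Number(now) || isNaN(timespan)) {
	return gettext('Never');
    }
    // assumed formatting step for the remaining time
    return Proxmox.Utils.format_duration_human(timespan);
}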

View File

@@ -14,7 +14,6 @@ Ext.define('PBS.LongestTasks', {
 openTask: function(record) {
 let me = this;
-let view = me.getView();
 Ext.create('Proxmox.window.TaskViewer', {
 upid: record.data.upid,
 endtime: record.data.endtime,
@@ -72,7 +71,7 @@ Ext.define('PBS.LongestTasks', {
 model: 'proxmox-tasks',
 proxy: {
 type: 'memory',
-}
+},
 },
 },

View File

@@ -8,6 +8,13 @@ Ext.define('PBS.RunningTasks', {
 hideHeaders: true,
 rowLines: false,
+scrollable: true,
+maxHeight: 500,
+viewConfig: {
+deferEmptyText: false,
+},
 controller: {
 xclass: 'Ext.app.ViewController',
@@ -80,7 +87,7 @@ Ext.define('PBS.RunningTasks', {
 dataIndex: 'duration',
 renderer: function(value, md, record) {
 return Proxmox.Utils.format_duration_human((Date.now() - record.data.starttime)/1000);
-}
+},
 },
 {
 xtype: 'actioncolumn',

View File

@@ -69,7 +69,7 @@ Ext.define('PBS.TaskSummary', {
 disableSelection: true,
 store: {
-data: []
+data: [],
 },
 columns: [
@@ -90,7 +90,7 @@ Ext.define('PBS.TaskSummary', {
 renderer: 'render_count',
 },
 ],
-}
+},
 ],
 });

View File

@@ -4,7 +4,7 @@
 <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
 <meta http-equiv="X-UA-Compatible" content="IE=edge">
 <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">
-<title>Proxmox Backup Server</title>
+<title>{{ NodeName }} - Proxmox Backup Server</title>
 <link rel="icon" sizes="128x128" href="/images/logo-128.png" />
 <link rel="apple-touch-icon" sizes="128x128" href="/pve2/images/logo-128.png" />
 <link rel="stylesheet" type="text/css" href="/extjs/theme-crisp/resources/theme-crisp-all.css" />

View File

@@ -3,6 +3,8 @@ Ext.define('PBS.window.ACLEdit', {
 alias: 'widget.pbsACLAdd',
 mixins: ['Proxmox.Mixin.CBind'],
+onlineHelp: 'user_acl',
 url: '/access/acl',
 method: 'PUT',
 isAdd: true,

View File

@@ -71,11 +71,11 @@ Ext.define('PBS.window.BackupFileDownloader', {
 control: {
 'proxmoxComboGrid': {
-change: 'changeFile'
+change: 'changeFile',
 },
 'button': {
 click: 'downloadFile',
-}
+},
 },
 },
@@ -89,7 +89,7 @@ Ext.define('PBS.window.BackupFileDownloader', {
 emptyText: gettext('No file selected'),
 fieldLabel: gettext('File'),
 store: {
-fields: ['filename', 'size', 'crypt-mode',],
+fields: ['filename', 'size', 'crypt-mode'],
 idProperty: ['filename'],
 },
 listConfig: {
@@ -115,7 +115,7 @@ Ext.define('PBS.window.BackupFileDownloader', {
 mode = PBS.Utils.cryptmap.indexOf(value);
 }
 return PBS.Utils.cryptText[mode] || Proxmox.Utils.unknownText;
-}
+},
 },
 ],
 },
@@ -133,7 +133,7 @@ Ext.define('PBS.window.BackupFileDownloader', {
 reference: 'encryptedHint',
 hidden: true,
 value: gettext('Encrypted Files cannot be decoded on the server directly. Please use the client where the decryption key is located.'),
-}
+},
 ],
 buttons: [

View File

@@ -3,6 +3,9 @@ Ext.define('PBS.DataStoreEdit', {
 alias: 'widget.pbsDataStoreEdit',
 mixins: ['Proxmox.Mixin.CBind'],
+onlineHelp: 'datastore_intro',
 subject: gettext('Datastore'),
 isAdd: true,

View File

@@ -1,7 +1,7 @@
 Ext.define('pbs-file-tree', {
 extend: 'Ext.data.Model',
-fields: [ 'filepath', 'text', 'type', 'size',
+fields: ['filepath', 'text', 'type', 'size',
 {
 name: 'mtime',
 type: 'date',
@@ -43,7 +43,7 @@ Ext.define('pbs-file-tree', {
 return `fa fa-${icon}`;
 },
-}
+},
 ],
 idProperty: 'filepath',
 });
@@ -85,15 +85,15 @@ Ext.define("PBS.window.FileBrowser", {
 'backup-type': view['backup-type'],
 'backup-time': view['backup-time'],
 };
-params['filepath'] = data.filepath;
+params.filepath = data.filepath;
 atag.download = data.text;
-atag.href = me.buildUrl(`/api2/json/admin/datastore/${view.datastore}/pxar-file-download`, params);
+atag.href = me
+.buildUrl(`/api2/json/admin/datastore/${view.datastore}/pxar-file-download`, params);
 atag.click();
 },
 fileChanged: function() {
 let me = this;
-let view = me.getView();
 let tree = me.lookup('tree');
 let selection = tree.getSelection();
 if (!selection || selection.length < 1) return;
@@ -204,7 +204,7 @@ Ext.define("PBS.window.FileBrowser", {
 return asize - bsize;
 },
-}
+},
 },
 {
 text: gettext('Modified'),
@@ -226,10 +226,10 @@ Ext.define("PBS.window.FileBrowser", {
 case 's': return gettext('Socket');
 default: return Proxmox.Utils.unknownText;
 }
-}
+},
 },
-]
-}
+],
+},
 ],
 buttons: [
@@ -238,6 +238,6 @@ Ext.define("PBS.window.FileBrowser", {
 handler: 'downloadFile',
 reference: 'downloadBtn',
 disabled: true,
-}
+},
 ],
 });

View File

@@ -3,6 +3,8 @@ Ext.define('PBS.window.RemoteEdit', {
 alias: 'widget.pbsRemoteEdit',
 mixins: ['Proxmox.Mixin.CBind'],
+onlineHelp: 'backup_remote',
 userid: undefined,
 isAdd: true,
@@ -43,8 +45,45 @@ Ext.define('PBS.window.RemoteEdit', {
 {
 xtype: 'proxmoxtextfield',
 allowBlank: false,
-name: 'host',
+name: 'hostport',
+submitValue: false,
+vtype: 'HostPort',
 fieldLabel: gettext('Host'),
+listeners: {
+change: function(field, newvalue) {
+let host = newvalue;
+let port;
+let match = Proxmox.Utils.HostPort_match.exec(newvalue);
+if (match === null) {
+match = Proxmox.Utils.HostPortBrackets_match.exec(newvalue);
+if (match === null) {
+match = Proxmox.Utils.IP6_dotnotation_match.exec(newvalue);
+}
+}
+if (match !== null) {
+host = match[1];
+if (match[2] !== undefined) {
+port = match[2];
+}
+}
+field.up('inputpanel').down('field[name=host]').setValue(host);
+field.up('inputpanel').down('field[name=port]').setValue(port);
+},
+},
+},
+{
+xtype: 'proxmoxtextfield',
+hidden: true,
+name: 'host',
+},
+{
+xtype: 'proxmoxtextfield',
+hidden: true,
+deleteEmpty: true,
+name: 'port',
 },
 ],
@@ -71,16 +110,33 @@ Ext.define('PBS.window.RemoteEdit', {
 {
 xtype: 'proxmoxtextfield',
 name: 'fingerprint',
+deleteEmpty: true,
 fieldLabel: gettext('Fingerprint'),
 },
 {
 xtype: 'proxmoxtextfield',
 name: 'comment',
+deleteEmpty: true,
 fieldLabel: gettext('Comment'),
 },
 ],
 },
+setValues: function(values) {
+let me = this;
+let host = values.host;
+if (values.port !== undefined) {
+if (Proxmox.Utils.IP6_match.test(host)) {
+host = `[${host}]`;
+}
+host += `:${values.port}`;
+}
+values.hostport = host;
+return me.callParent([values]);
+},
 getValues: function() {
 let me = this;
 let values = me.callParent(arguments);
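
The new 'hostport' field above splits whatever the user types into separate host and port values for submission. The essence of the change listener, as a standalone sketch (function name invented; the regexes are the proxmox-widget-toolkit ones used in the hunk):

// Sketch only: split 'host[:port]' input, falling back to the raw value as host.
function splitHostPort(value) {
    let host = value;
    let port;
    let match = Proxmox.Utils.HostPort_match.exec(value)
	|| Proxmox.Utils.HostPortBrackets_match.exec(value)
	|| Proxmox.Utils.IP6_dotnotation_match.exec(value);
    if (match !== null) {
	host = match[1];
	if (match[2] !== undefined) {
	    port = match[2];
	}
    }
    return { host, port };
}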

View File

@@ -3,6 +3,8 @@ Ext.define('PBS.window.UserEdit', {
 alias: 'widget.pbsUserEdit',
 mixins: ['Proxmox.Mixin.CBind'],
+onlineHelp: 'user_mgmt',
 userid: undefined,
 isAdd: true,

View File

@@ -30,8 +30,8 @@ Ext.define('PBS.window.CreateZFS', {
 xtype: 'proxmoxcheckbox',
 name: 'add-datastore',
 fieldLabel: gettext('Add as Datastore'),
-value: '1'
-}
+value: '1',
+},
 ],
 column2: [
 {
@@ -45,8 +45,8 @@ Ext.define('PBS.window.CreateZFS', {
 ['raid10', 'RAID10'],
 ['raidz', 'RAIDZ'],
 ['raidz2', 'RAIDZ2'],
-['raidz3', 'RAIDZ3']
-]
+['raidz3', 'RAIDZ3'],
+],
 },
 {
 xtype: 'proxmoxKVComboBox',
@@ -59,8 +59,8 @@ Ext.define('PBS.window.CreateZFS', {
 ['gzip', 'gzip'],
 ['lz4', 'lz4'],
 ['lzjb', 'lzjb'],
-['zle', 'zle']
-]
+['zle', 'zle'],
+],
 },
 {
 xtype: 'proxmoxintegerfield',
@@ -68,8 +68,8 @@ Ext.define('PBS.window.CreateZFS', {
 minValue: 9,
 maxValue: 16,
 value: '12',
-name: 'ashift'
-}
+name: 'ashift',
+},
 ],
 columnB: [
 {
@@ -80,8 +80,8 @@ Ext.define('PBS.window.CreateZFS', {
 valueField: 'name',
 height: 200,
 emptyText: gettext('No Disks unused'),
-}
-]
+},
+],
 },
 {
 xtype: 'displayfield',