Compare commits

...

284 Commits

Author SHA1 Message Date
1a48cbf164 bump version to 0.9.0-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-01 16:19:49 +02:00
3480777d89 d/control: bump versioned dependency of proxmox-widget-toolkit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-01 15:30:08 +02:00
a71bc08ff4 src/tools/parallel_handler.rs: remove lifetime hacks, require 'static
In theory, one can do std::mem::forget, and ignore the drop handler. With
the lifetime hack, this could result in a crash.

So we simply require a 'static lifetime now (futures also need that).
2020-10-01 14:52:48 +02:00
df766e668f d/control: add pve-eslint to build dependencies
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-01 14:46:30 +02:00
0a8f3ae0b3 src/tools/parallel_handler.rs: cleanup check_abort code 2020-10-01 14:37:29 +02:00
da6e67b321 rrd: fix integer underflow
Causes a panic if last_update is smaller than RRD_DATA_ENTRIES*reso,
which (I believe) can happen when inserting the first value for a DB.

Clamp the value to 0 in that case.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-01 14:30:32 +02:00
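A minimal sketch of the clamping described above, assuming last_update and the window size are u64 values (the names are hypothetical, not the actual rrd code):

    // Illustration only: subtracting a larger u64 from a smaller one
    // underflows (and panics in debug builds), so clamp to 0 instead.
    fn start_time(last_update: u64, window: u64) -> u64 {
        // saturating_sub returns 0 instead of underflowing
        last_update.saturating_sub(window)
    }

    fn main() {
        assert_eq!(start_time(10, 100), 0);   // clamped to 0
        assert_eq!(start_time(500, 100), 400);
    }
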
dec00364b3 ParallelHandler: check for errors during thread join
Fix a potential bug where errors that happen after the SendHandle has
been dropped while doing the thread join might have been ignored.
Requires internal check_abort to be moved out of 'impl SendHandle' since
we only have the Mutex left, not the SendHandle.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-01 14:30:32 +02:00
5637087cc9 www: do incremental lint for development, full for build
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-01 13:14:03 +02:00
5ad4bdc482 eslint fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-01 13:03:14 +02:00
823867f5b7 datastore: gc: avoid unsafe call into libc, use epoch_i64 helper
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-01 12:38:38 +02:00
c6772c92b8 datastore: gc: comment exclusive process lock
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-01 12:38:04 +02:00
79f6a79cfc assume correct backup, avoid verifying chunk existence
This can slow things down by a lot on setups with (relatively) high
seek time, in the order of doubling the backup times if the cache isn't
populated with the last backup's chunk inode info.

Effectively there's nothing known this protects us from in the
codebase. The only thing which was theorized about was the case
where a really long running backup job (over 24 hours) is still
running and writing new chunks, not indexed yet anywhere, then an
update (or manual action) triggers a reload of the proxy. There was
some theory that then a GC in the new daemon would not know about the
oldest writer in the old one, and thus use a less strict atime limit
for chunk sweeping - opening up a window for deleting chunks from the
long running backup.
But this simply cannot happen, as we have a per-datastore, process-wide
flock, which is acquired shared by backup jobs and exclusive by GC. In
the same process, GC and backup can both get it, as it has process
locking granularity. If there's an old daemon with a writer, that one
also has the lock open shared, and so no GC in the new process can get
exclusive access to it.

So, with that confirmed we have no need for a "half-assed"
verification in the backup finish step. Rather, we plan to add an
opt-in "full verify each backup on finish" option (see #2988)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-01 12:06:59 +02:00
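A rough illustration of the flock behaviour argued above, using the nix crate's flock wrapper and a made-up lock file path; this is a sketch, not the actual datastore locking code:

    use std::fs::File;
    use std::os::unix::io::AsRawFd;

    use nix::fcntl::{flock, FlockArg};

    /// What a backup writer would do: take the per-datastore lock shared,
    /// so any number of writers (old or new daemon) can hold it concurrently.
    fn lock_shared(lock_file: &File) -> nix::Result<()> {
        flock(lock_file.as_raw_fd(), FlockArg::LockShared)
    }

    /// What GC would do: try to take the lock exclusive. This only succeeds
    /// once no other process holds it shared anymore, so GC cannot run its
    /// strict sweep while any writer (even in an old daemon) is still active.
    fn lock_exclusive(lock_file: &File) -> nix::Result<()> {
        flock(lock_file.as_raw_fd(), FlockArg::LockExclusiveNonblock)
    }

    fn main() -> std::io::Result<()> {
        let lock_file = File::create("/tmp/datastore.lck")?; // illustrative path
        lock_shared(&lock_file).expect("shared lock");
        let _ = lock_exclusive(&lock_file);
        Ok(())
    }
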
4c7f100d22 src/api2/reader.rs: fix speedtest description 2020-10-01 11:16:15 +02:00
9070d11f4c src/api2/backup.rs: use block_in_place for remove_backup 2020-10-01 11:11:14 +02:00
124b93f31c upload_chunk: use block_in_place 2020-10-01 11:00:23 +02:00
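A minimal sketch of wrapping blocking filesystem work with tokio's block_in_place, as used by the two commits above (the function and path are illustrative, not the actual handlers):

    use std::path::PathBuf;

    // Illustrative sketch: run a blocking filesystem operation from async
    // code without stalling the tokio executor thread it runs on.
    async fn remove_backup_dir(path: PathBuf) -> std::io::Result<()> {
        tokio::task::block_in_place(|| std::fs::remove_dir_all(&path))
    }

    #[tokio::main]
    async fn main() {
        // hypothetical path, for illustration only
        let _ = remove_backup_dir(PathBuf::from("/tmp/some-old-backup")).await;
    }
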
0f22f53b36 ui: RemoteEdit: remove port field and parse it from host field
use our hostport regexes to parse out a potential port from the host field
and send it individually

this makes for a simpler and cleaner ui

this additionally checks the field for valid input before sending it to
the backend

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-01 10:12:04 +02:00
3784dbf029 ui: RemoteView: improve host columns
do not show the default (8007) port
and only add brackets [] to ipv6 addresses if there is a port

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-01 10:11:31 +02:00
4c95d58c41 api2/types: fix DNS_NAME Regexes
We forgot to put parentheses around the DNS_NAME regex, and in
DNS_NAME_OR_IP_REGEX.

This is wrong because the regex

 ^foo|bar$

matches 'foo' at the beginning and 'bar' at the end, so either of

 foobaz
 bazbar

would match. Only

 ^(foo|bar)$

matches exactly 'foo' or 'bar'.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-01 06:09:34 +02:00
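The anchoring pitfall described above can be reproduced directly with the regex crate (illustrative pattern, not the actual DNS_NAME regex):

    use regex::Regex;

    fn main() {
        // Without the group, the anchors bind to the alternatives separately:
        // this is effectively (^foo)|(bar$).
        let unanchored = Regex::new(r"^foo|bar$").unwrap();
        assert!(unanchored.is_match("foobaz")); // matches "^foo"
        assert!(unanchored.is_match("bazbar")); // matches "bar$"

        // With the group, the whole alternation is anchored on both sides.
        let anchored = Regex::new(r"^(foo|bar)$").unwrap();
        assert!(!anchored.is_match("foobaz"));
        assert!(anchored.is_match("foo"));
        assert!(anchored.is_match("bar"));
    }
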
38d4675921 fix ipv6 handling for remotes/sync jobs
* add square brackets to ipv6 addresses in BackupRepository if they do not
  already have them (we save them without brackets in the remote config)

* in get_pull_parameters, we now create a BackupRepository first and use
  those values (which does the [] mapping), this also has the advantage
  that we have one place fewer where we hardcode 8007 as the port

* in the ui, add square brackets for ipv6 addresses for remotes

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 13:40:03 +02:00
7b8aa893fa src/client/pull.rs: log progress 2020-09-30 13:35:09 +02:00
fb2678f96e www/index.hbs: add nodename to title 2020-09-30 12:10:04 +02:00
486ed27299 ui: improve running task overlay
by setting a maxHeight+scrollable
(I used 500px so it is still visible on our 'min screen size' of 1280x720)

and by disabling emptyText deferral, which now shows the text instantly

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 11:07:08 +02:00
df4827f2c0 tasks: improve behaviour on upgrade
when upgrading from a version where we stored all tasks in the 'active' file,
we did not completely account for finished tasks still in there

we should update the file when encountering any finished task in
'active' as well as filter them out on the api call (if they get through)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 11:05:50 +02:00
ef1b436350 paperkey: add html output 2020-09-30 10:49:20 +02:00
b19b4bfcb0 examples: fix HttpClient::new usage 2020-09-30 10:49:20 +02:00
e64b9f9204 src/tools.rs: make command_output return Vec<u8>
And add a new helper to return output as string.
2020-09-30 10:49:20 +02:00
9c33683c25 ui: add port support for remotes
by adding a field to RemoteEdit and showing it in the grid

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 10:49:20 +02:00
ba20987ae7 client/remote: add support to specify port number
this adds the ability to add port numbers in the backup repo spec
as well as remotes, so that users that are behind a
NAT/firewall/reverse proxy can still use it

also adds some explanation and examples to the docs to make it clearer
for the h2 client I left the localhost:8007 part, since where we bind to
is not configurable

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 10:49:20 +02:00
729d41fe6a api: disks/zfs: check template exists before enabling zfs-import service
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-30 09:34:21 +02:00
905147a5ee api2/node/disks/zfs: instantiate import service
When creating a new zpool for a datastore, also instantiate an
import-unit for it. This helps in cases where '/etc/zfs/zpool.cache'
gets corrupted and thus the pool is not imported upon boot.

This patch needs the corresponding addition of 'zfs-import@.service' in
the zfsonlinux repository.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-09-30 08:43:38 +02:00
0c41e0d06b ui: add task description for logrotation
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 06:17:07 +02:00
b37b59b726 ui: RemoteEdit: make comment and fingerprint deletable
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 06:16:53 +02:00
60b9b48e71 require square brackets for ipv6 addresses
we need this because we append the port to it to get a target url,
e.g. we print

format!("https://{}:8007/", address)

if the address is an ipv6 address (e.g. fe80::1) it would become

https://fe80::1:8007/ which is a valid ipv6 address on its own

by using square brackets we get:

https://[fe80::1]:8007/ which now connects to the correct ip/port

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 06:16:27 +02:00
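A small sketch of the bracketing rule described above (hypothetical helper, not the actual UI/client code):

    /// Wrap bare IPv6 addresses in square brackets so a port can be appended
    /// unambiguously; hostnames, IPv4 addresses and already-bracketed input
    /// are returned unchanged. Hypothetical helper for illustration.
    fn display_host(host: &str) -> String {
        if host.contains(':') && !host.starts_with('[') {
            format!("[{}]", host)
        } else {
            host.to_string()
        }
    }

    fn main() {
        assert_eq!(format!("https://{}:8007/", display_host("fe80::1")),
                   "https://[fe80::1]:8007/");
        assert_eq!(format!("https://{}:8007/", display_host("backup.example.com")),
                   "https://backup.example.com:8007/");
    }
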
abf8b5d475 docs: fix wrong user in repository explanation
we use 'root@pam' by default, not 'root'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 06:14:36 +02:00
7eebe1483e server/worker_task: fix panic on slice range when index is empty
since len() and MAX_INDEX_TASKS are both usize, the subtraction
underflows instead of producing a negative value

instead, check the sizes and set them accordingly

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-30 06:11:06 +02:00
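A sketch of the underflow described above, assuming the code takes the last MAX_INDEX_TASKS entries of a slice (names illustrative, not the actual worker_task code):

    const MAX_INDEX_TASKS: usize = 1000;

    /// Return the last MAX_INDEX_TASKS entries. With unsigned arithmetic,
    /// `list.len() - MAX_INDEX_TASKS` would underflow (and panic in debug
    /// builds) when the list is shorter than the limit, so compute the
    /// start index with saturating_sub instead.
    fn tail(list: &[u64]) -> &[u64] {
        let start = list.len().saturating_sub(MAX_INDEX_TASKS);
        &list[start..]
    }

    fn main() {
        let few = vec![1, 2, 3];
        assert_eq!(tail(&few).len(), 3); // no panic on short lists
    }
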
9a76091785 proxmox-backup-proxy: add task archive rotation
this starts a task once a day at "00:00" that rotates the task log
archive if it is bigger than 500k

if we want, we can make the schedule/size limit/etc. configurable,
but for now it's ok to set fixed values for that

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-29 08:41:18 +02:00
c386b06fc6 server/worker_task: remove unnecessary read_task_list
since there are no users of this anymore and we now have a nicer
TaskListInfoIterator to use, we can drop this function

this also means that 'update_active_workers' does not need to return
a list anymore since we never used that result besides in
read_task_list

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-29 08:40:50 +02:00
6bcfc5c1a4 api2/status: use the TaskListInfoIterator here
this means that limiting with epoch now works correctly.
Also change the api type to i64, since that is what the starttime is
saved as.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-29 08:40:24 +02:00
768e10d0b3 api2/node/tasks: use TaskListInfoIterator instead of read_task_list
this makes the filtering/limiting much nicer and readable

since we now have a potentially 'infinite' amount of tasks we iterate over,
and cannot know beforehand how many there are, we return the total count
as always 1 higher than requested iff we are not at the end (this is
the case when the number of entries is smaller than the requested limit)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-29 08:40:02 +02:00
e7244387c7 server/worker_task: add TaskListInfoIterator
this is an iterator that reads/parses/updates the task list as
necessary and returns the tasks in descending order (newest first)

it does this by using our logrotate iterator and using a vecdeque

we can use this to iterate over all tasks, even if they are in the
archive and even if the archive is logrotated but only read
as much as we need

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-29 08:39:16 +02:00
5ade6c25f3 server/worker_task: write older tasks into archive file
instead of removing tasks beyond the 1000 that are in the index,
write them into an archive file by appending them at the end;
this way we can still read them later

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-29 08:38:44 +02:00
784fa1c2e3 server/worker_task: split task list file into two
one for only the active tasks and one for up to 1000 finished tasks

factor out the parsing of a task file (we will later need this again)
and use iterator combinators for easier code

we now sort the tasks ascending (this will become important in a later patch)
but reverse it (for now) to keep compatibility

this code also omits converting into an intermediate hash,
since it cannot really happen that we have duplicate tasks in this list
(since the call is locked by a flock, and it is the only place where we
write into the lists)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-29 08:38:28 +02:00
66f4e6a809 server/worker_task: refactor locking of the task list
also add the functionality of having a 'shared' (read) lock for the list
we will need this later

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-29 08:37:54 +02:00
8074d2b0c3 tools: add logrotate module
this is a helper to rotate and iterate over log files;
there is an iterator for open filehandles as well as
one for only the filenames

it also has the possibility to rotate them;
zstd is used for compression

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-29 08:33:21 +02:00
b02d49ab26 proxmox_backup_client key: allow to generate paperkey for master key 2020-09-29 08:29:42 +02:00
82a0cd2ad4 proxmox_backup_client key: add new paper-key command 2020-09-29 08:29:42 +02:00
ee1a9c3230 parallel_handler: clippy: 'while_let_loop'
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-09-29 08:13:51 +02:00
db24c01106 parallel_handler: explicit Arc::clone
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-09-28 13:40:03 +02:00
ae3cfa8f0d parallel_handler: formatting cleanup, doc comment typo fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-09-28 13:40:03 +02:00
b56c111e93 depend on proxmox 0.4.2 2020-09-28 10:50:44 +02:00
bbeb0256f1 server/worker_task: factor out task list rendering
we will need this later again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-28 07:31:27 +02:00
005a5b9677 api2/node/tasks: move userfilter to function signature
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-28 07:18:13 +02:00
55bee04856 src/tools/parallel_handler.rs: remove unnecessary Sync bound 2020-09-26 16:16:11 +02:00
42fd40a124 src/bin/proxmox_backup_client/benchmark.rs: avoid compiler warning 2020-09-26 16:13:19 +02:00
f21508b9e1 src/backup/verify.rs: use ParallelHandler to verify chunks 2020-09-26 11:14:37 +02:00
ee7a308de4 src/backup/verify.rs: cleanup use clause 2020-09-26 10:23:44 +02:00
636e674ee7 src/client/pull.rs: simplify code 2020-09-26 10:09:51 +02:00
b02b374b46 src/tools/parallel_handler.rs: remove static lifetime bound from handler_fn 2020-09-26 09:26:06 +02:00
1c13afa8f9 src/tools/parallel_handler.rs: join all threads in drop handler 2020-09-26 08:47:56 +02:00
69b92fab7e src/tools/parallel_handler.rs: remove unnecessary Sync trait bound 2020-09-26 07:38:44 +02:00
6ab77df3f5 ui: some more eslint auto-fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-25 18:47:25 +02:00
264c19582b ui: some more eslint auto-fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-25 18:36:58 +02:00
8acd4d9afc ui: some more eslint auto-fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-25 18:34:54 +02:00
65b0cea6bd ui: some eslint auto-fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-25 18:29:42 +02:00
cfe01b2e6a bump version to 0.8.21-1 2020-09-25 13:20:35 +02:00
b19b032be3 debian/control: update 2020-09-25 13:17:49 +02:00
5441708634 src/client/pull.rs: use new ParallelHandler 2020-09-25 12:58:20 +02:00
3c9b370255 src/tools/parallel_handler.rs: execute closure inside a thread pool 2020-09-25 12:58:20 +02:00
510544770b depend on crossbeam-channel 2020-09-25 12:58:20 +02:00
e8293841c2 docs: html: show "Proxmox Backup" in navi for small devices
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-24 20:03:17 +02:00
46114bf28e docs: html: improve css for small displays
fixed-width navi/toc links were not switched in color for small width
displays, and thus they were barely readable as the background
switches to dark for small widths.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-24 20:03:17 +02:00
0d7e61f06f docs: buildsys: add more dependencies to html target
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-24 19:45:23 +02:00
fd6a54dfbc docs: conf: fix conf for new alabaster theme version
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-24 19:44:50 +02:00
1ea5722b8f docs: html: adapt custom css
highlighting the current chapter and some other small formatting
improvements

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-24 19:44:00 +02:00
bc8fadf494 docs: index: hide todo list toctree and genindex
I did not find another way to disable inclusion in the sidebar...

The genindex information is already provided through the glossary

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-24 19:43:18 +02:00
a76934ad33 docs: html: adapt sidebar in index page
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-24 19:41:19 +02:00
d7a122a026 use jobstate mechanism for verify/garbage_collection schedules
also changes:
* correct comment about reset (replace 'sync' with 'action')
* check schedule change correctly (only when it is actually changed)

with these changes, we can drop the 'lookup_last_worker' method

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-24 17:06:12 +02:00
6c25588e63 proxy: fix error handling in prune scheduling
we rely on the jobstate handling to write the error of the worker
into its state file, but we used '?' here in a block which does not
return the error to the block, but to the function/closure instead

so if a prune job failed because of such an '?', we did not write
into the statefile and got a wrong state there

instead use our try_block! macro that wraps the code in a closure

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-24 17:06:09 +02:00
17a1f579d0 bump version to 0.8.20-1 2020-09-24 13:17:06 +02:00
998db63933 src/client/pull.rs: decode, verify and write in a separate threads
To maximize throughput.
2020-09-24 13:12:04 +02:00
c0fa14d94a src/backup/data_blob.rs: add is_encrypted helper 2020-09-24 13:00:16 +02:00
6fd129844d remove DummyCatalogWriter
we're using an `Option` instead now

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-09-24 09:13:54 +02:00
baae780c99 benchmark: use compressible data to get more realistic results
And add a benchmark to test chunk verify speed (decompress+sha256).
2020-09-24 08:58:13 +02:00
09a1da25ed src/backup/data_blob.rs: improve decompress speed 2020-09-24 08:52:35 +02:00
298c6aaef6 docs: add onlineHelp to some panels
name sections according to the title or content and add
the respective onlineHelp to the following panels:
- datastore
- user management
- ACL
- backup remote

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Reviewed-By: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Dominik Csapak <d.csapak@proxmox.com>
2020-09-22 19:48:32 +02:00
a329324139 bump version to 0.8.19-1 2020-09-22 13:30:52 +02:00
a83e2ffeab src/api2/reader.rs: use std::fs::read instead of tokio::fs::read
Because it is about 10% faster this way.
2020-09-22 13:27:23 +02:00
5d7449a121 bump version to 0.8.18-1 2020-09-22 12:39:47 +02:00
ebbe4958c6 src/client/pull.rs: avoid duplicate downloads using in memory HashSet 2020-09-22 12:34:06 +02:00
73b2cc4977 src/client/pull.rs: allow up to 20 concurrent download streams 2020-09-22 11:39:31 +02:00
7ecfde8150 remote_chunk_reader.rs: use Arc for cache_hint to make clone faster 2020-09-22 11:39:31 +02:00
796480a38b docs: add version and date to HTML index
Similar to the PDF output or the Proxmox VE docs.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-22 09:00:12 +02:00
4ae6aede60 bump version to 0.8.17-1 2020-09-21 14:09:20 +02:00
e0085e6612 src/client/pull.rs: remove temporary manifest 2020-09-21 14:03:01 +02:00
194da6f867 src/client/pull.rs: open temporary manifest with truncate(true)
To delete any data if the file already exists.
2020-09-21 13:53:35 +02:00
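In std terms this corresponds to opening the temporary manifest with truncate(true); a minimal sketch with an illustrative path:

    use std::fs::OpenOptions;
    use std::io::Write;

    fn main() -> std::io::Result<()> {
        // truncate(true) discards any stale content from a previous, aborted
        // pull instead of leaving a mix of old and new bytes in the file.
        let mut tmp_manifest = OpenOptions::new()
            .write(true)
            .create(true)
            .truncate(true)
            .open("/tmp/index.json.tmp")?; // illustrative path only

        tmp_manifest.write_all(b"{}")?;
        Ok(())
    }
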
3fade35260 bump proxmox version to 0.4.1 2020-09-21 13:51:33 +02:00
5e39918fe1 fix #3017: check array boundaries before using
else we panic here

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-21 09:22:06 +02:00
f4dc47a805 debian/control: update 2020-09-19 16:22:56 +02:00
12c65bacf1 src/backup/chunk_store.rs: disable debug output 2020-09-19 15:26:21 +02:00
ba37f3562d src/backup/datastore.rs - open_with_path: use Path instead of str 2020-09-19 10:01:57 +02:00
fce4659388 src/backup/datastore.rs: new method open_with_path
To make testing easier.
2020-09-19 09:55:21 +02:00
0a15870a82 depend on proxmox 0.4.0 2020-09-19 06:40:44 +02:00
9866de5e3d datastore/prune schedules: use JobState for tracking of schedules
like the sync jobs, so that if an admin configures a schedule it
really starts the next time that time is reached, not immediately

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-19 06:24:37 +02:00
9d3f183ba9 Admin Guide: Add some more detailed info throughout
- Mention config files for: datastores, users, acl,
  remotes, syncjobs
- Expand a little bit on SMART and smartmontools package
- Explain acl config
- Include line in network stating why a bond would be set up
- Note the use of ifupdown2 for network config, and the potential
  need to install it on other systems
- Add note to PVE integration, specifying where to refer to for VM and
  CT backups

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-18 15:51:21 +02:00
fe233f3b3d Small formatting fix up
- Fix permission image.
- Change alt text for ZFS
- Change note block to match the others

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-18 15:50:36 +02:00
be3bd0f90b fix #3015: allow user self-service
listing, updating or deleting a user is now possible for the user
itself, in addition to higher-privileged users that have appropriate
privileges on '/access/users'.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-18 15:45:11 +02:00
3c053adbb5 role api: fix description
wrongly copy-pasted at some point

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-18 14:55:00 +02:00
c040ec22f7 add verification scheduling to proxmox-backup-proxy
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-09-18 12:14:05 +02:00
43f627ba92 ui: add verify-schedule field to edit datastore form
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-09-18 12:13:09 +02:00
2b67de2e3f api2: make verify_schedule deletable
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-09-18 12:12:29 +02:00
477859662a api2: add optional verify-schedule field to create/update datastore endpoint
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-09-18 12:12:16 +02:00
ccd7241e2f add verify_schedule field to DataStoreConfig
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-09-18 12:11:55 +02:00
f37ef25bdd api2: add VERIFY_SCHEDULE_SCHEMA
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-09-18 12:11:39 +02:00
b93bbab454 fix #3014: allow DataStoreAdmins to list DS config
filtered by those they are privileged enough to read individually. this
allows such users to configure prune/GC schedules via the GUI (the API
already allowed it previously).

permission-wise, a user with this privilege can already:
- list all stores they have access to (returns just name/comment)
- read the config of each store they have access to individually
(returns full config of that datastore + digest of whole config)

but this combines them to
- read configs of all datastores they have access to (returns full
config of those datastores + digest of whole config)

users that have AUDIT on just /datastore without propagate can now no
longer read all configurations (but this could be added back, it just
seems to make little sense to me).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-18 12:09:13 +02:00
9cebc837d5 depend on pxar 0.6.1 2020-09-18 12:02:17 +02:00
1bc1d81a00 move compute_file_csum to src/tools.rs 2020-09-17 10:27:04 +02:00
dda72456d7 depend on proxmox 0.3.9 2020-09-17 08:49:50 +02:00
8f2f3dd710 fix #2942: implement lacp bond mode and bond_xmit_hash_policy
this was not yet implemented, should be compatible with pve and the gui

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-17 08:36:25 +02:00
85959a99ea api2/network: add bond-primary parameter
needed for 'active-backup' bond mode

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-17 08:36:14 +02:00
36700a0a87 api2/pull: make pull worker abortable
by selecting between the pull_future and the abort future

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-17 06:11:33 +02:00
dd4b42bac1 fix #2870: renew tickets in HttpClient
by packing the auth into a RwLock and starting a background
future that renews the ticket every 15 minutes

we still use the BroadcastFuture for the first ticket and only
once that is finished do we start the scheduled future

we have to store an abort handle for the renewal future and abort it when
the http client is dropped, so we do not request new tickets forever

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-17 06:09:54 +02:00
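Roughly, the renewal scheme could look like this with tokio (a simplified sketch: no BroadcastFuture, a made-up login function, tokio 1.x style API; not the actual HttpClient code):

    use std::sync::Arc;
    use std::time::Duration;

    use tokio::sync::RwLock;

    // Hypothetical stand-in for the real login/ticket logic.
    async fn request_new_ticket() -> String {
        "PBS:user@pam:TICKETDATA".to_string()
    }

    #[tokio::main]
    async fn main() {
        let auth = Arc::new(RwLock::new(request_new_ticket().await));

        // Background future renewing the ticket every 15 minutes; keep the
        // handle around and abort it when the client is dropped, otherwise
        // it would keep requesting new tickets forever.
        let auth2 = Arc::clone(&auth);
        let renewal = tokio::spawn(async move {
            let mut interval = tokio::time::interval(Duration::from_secs(15 * 60));
            interval.tick().await; // the first tick fires immediately, skip it
            loop {
                interval.tick().await;
                *auth2.write().await = request_new_ticket().await;
            }
        });

        // ... requests would read the current ticket via auth.read().await ...
        renewal.abort();
    }
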
9626c28619 always allow retrieving (censored) subscription info
like we do for PVE. This is visible on the dashboard, and caused a 403 on
each update, which bothers me when looking at the dev console.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-17 06:03:25 +02:00
463c03462a fix #2957: allow Sys.Audit access to node RRD
this is the same privilege needed to query the node status.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-17 06:03:25 +02:00
a086427a7d docs: fix epilogs fixme comment
reStructuredText comment syntax (i.e., everything it cannot parse is
a comment) is a real PITA!

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-16 16:36:54 +02:00
4d431383d3 src/backup/data_blob.rs: expose verify_crc again 2020-09-16 10:43:42 +02:00
d10332a15d SnapshotVerifyState: use enum for state
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-15 13:06:04 +02:00
43772efc6e backup: check all referenced chunks actually exist
A client can omit uploading chunks in the "known_chunks" list, those
then also won't be written on the server side. Check all those chunks
mentioned in the index but not uploaded for existance and report an
error if they don't exist instead of marking a potentially broken backup
as "successful".

This is only important if the base snapshot references corrupted chunks,
but has not been negatively verified. Also, it is important to only
verify this at the end, *after* all index writers are closed, since only
then can it be guaranteed that no GC will sweep referenced chunks away.

If a chunk is found missing, also mark the previous backup with a
verification failure, since we know the missing chunk has to be referenced
in it (the only way it could have been inserted into known_chunks with
checked=false). This has the benefit of automatically doing a
full-upload backup if the user attempts to retry after seeing the new
error, instead of requiring a manual verify or forget.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-15 10:00:05 +02:00
0af2da0437 backup: check verify state of previous backup before allowing reuse
Do not allow clients to reuse chunks from the previous backup if it has
a failed validation result. This would result in a new "successful"
backup that potentially references broken chunks.

If the previous backup has not been verified, assume it is fine and
continue on.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-15 09:59:29 +02:00
d09db6c2e9 rename BackupDir::new_with_group to BackupDir::with_group 2020-09-15 09:40:03 +02:00
bc871bd19d src/backup/backup_info.rs: new BackupDir::with_rfc3339 2020-09-15 09:34:46 +02:00
b11a6a029d debian/control: update 2020-09-15 09:33:38 +02:00
6a7be83efe avoid chrono dependency, depend on proxmox 0.3.8
- remove chrono dependency

- depend on proxmox 0.3.8

- remove epoch_now, epoch_now_u64 and epoch_now_f64

- remove tm_editor (moved to proxmox crate)

- use new helpers from proxmox 0.3.8
  * epoch_i64 and epoch_f64
  * parse_rfc3339
  * epoch_to_rfc3339_utc
  * strftime_local

- BackupDir changes:
  * store epoch and rfc3339 string instead of DateTime
  * backup_time_to_string now return a Result
  * remove unnecessary TryFrom<(BackupGroup, i64)> for BackupDir

- DynamicIndexHeader: change ctime to i64

- FixedIndexHeader: change ctime to i64
2020-09-15 07:12:57 +02:00
58169da46a www/OnlineHelpInfo.js: update for syncjobs 2020-09-12 15:10:08 +02:00
158f49e246 debian/control: update hyper dependency 2020-09-11 16:03:38 +02:00
3e4a67f350 bump version to 0.8.16-1 2020-09-11 15:55:37 +02:00
e0e5b4426a BackupDir: make constructor fallible
since converting from i64 epoch timestamp to DateTime is not always
possible. previously, passing invalid backup-time from client to server
(or vice-versa) panicked the corresponding tokio task. now we get proper
error messages including the invalid timestamp.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-11 15:49:35 +02:00
7158b304f5 handle invalid mtime when formating entries
otherwise operations like catalog shell panic when viewing pxar archives
containing such entries, e.g. with mtime very far ahead into the future.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-11 15:48:43 +02:00
833eca6d2f use non-panicky timestamp_opt where appropriate
by either printing the original, out-of-range timestamp as-is, or
bailing with a proper error message instead of panicking.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-11 15:48:24 +02:00
151acf5d96 don't truncate DateTime nanoseconds
where we don't care about them anyway..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-11 15:48:10 +02:00
4a363fb4a7 catalog dump: preserve original mtime
even if it can't be handled by chrono. silently replacing it with epoch
0 is confusing..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-11 15:43:54 +02:00
229adeb746 ui/docs: add onlineHelp button for syncjobs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-11 12:17:54 +02:00
1eff9a1e89 docs: add section for calendar events
and move the info defined in 'Schedules' there,
the explanation of calendar events is inspired by the systemd.time
manpage and the pve docs (especially the examples are mostly
copied/adapted from there)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-11 12:17:42 +02:00
ed4f0a0edc ui: fix calendarevent examples
*/x is valid syntax for us, but not for systemd, so to not confuse users
write it like systemd would accept it

also, a timespec must at least have hours and minutes

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-11 12:17:32 +02:00
13bed6226e tools/systemd/parse_time: enable */x syntax for calendar events
we support this in pve, so also support it here to have a more
consistent syntax

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-11 12:17:22 +02:00
d937daedb3 docs: set html img width limitation through css
avoid hardcoding width in the docs itself, so that other render
outputs can choose another size.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-11 11:10:08 +02:00
8cce51135c docs: do not render TODOs in release builts
they are not useful for end users...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-11 11:09:00 +02:00
0cfe1b3f13 docs: set GmbH as copyright holder
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-11 11:08:36 +02:00
05c16a6e59 docs: use alabaster theme
It's not all perfect (yet) but way cleaner and simpler to use than
the sphinx one.

We do custom scrolling for the fixed sidebar and make some other
slight adjustments.

Main issue for now is that the "Developer Appendix" is always shown
in the navigation tree, but we only include that toctree for
devbuilds...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-11 11:08:13 +02:00
3294b516d3 faq: fix typo
In note block:
    Proxmox Packup Server -> Proxmox Backup Server

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-10 15:18:13 +02:00
139bcedc53 benchmark: update TLS reference speed
We are now faster with recent patches.
2020-09-10 12:55:43 +02:00
cf9ea3c4c7 server: set http2 max frame size
else we get the default of 16k, which is quite low for our use case.
this improves the TLS upload benchmark speed by about 30-40% for me.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-10 12:43:51 +02:00
e84fde3e14 docs: faq: spell out PBS
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-10 12:35:04 +02:00
1de47507ff Add section "FAQ"
Adds an FAQ to the docs, based on questions that have
been appearing on the forum.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-10 11:33:00 +02:00
1a9948a488 examples/upload-speed.rs: pass new benchmark parameter 2020-09-10 09:34:51 +02:00
04c2731349 bump version to 0.8.15-1 2020-09-10 09:26:16 +02:00
5656888cc9 verify: fix done count
We need to filter out the benchmark group earlier
2020-09-10 09:06:33 +02:00
5fdc5a6f3d verify: skip benchmark directory 2020-09-10 08:44:18 +02:00
61d7b5013c add benchmark flag to backup creation for proper cleanup when running a benchmark
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2020-09-10 08:25:24 +02:00
871181d984 mount: fix mount subcommand
fixes the error, "manifest does not contain
file 'X.pxar'", that occurs when trying to mount
a pxar archive with 'proxmox-backup-client mount':

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-10 07:21:16 +02:00
02939e178d ui: only mark backup encrypted if there are any files
if we have a stale backup without a manifest, we do not count
the remaining files in the backup dir anymore, but this means
we now have to check here whether there really are any encrypted files

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-10 07:18:51 +02:00
3be308b949 improve server->client tcp performance for high latency links
similar to the other fix, if we do not set the buffer size manually,
we get better performance for high latency connections

restore benchmark from f.gruenbichler:

no delay, without patch: ~50MB/s
no delay, with patch: ~50MB/s
25ms delay, without patch: ~11MB/s
25ms delay, with patch: ~50MB/s

my own restore benchmark:

no delay, without patch: ~1.5GiB/s
no delay, with patch: ~1.5GiB/s
25ms delay, without patch: 30MiB/s
25ms delay, with patch: ~950MiB/s

for some more details about those benchmarks see
https://lists.proxmox.com/pipermail/pbs-devel/2020-September/000600.html

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-10 07:15:25 +02:00
83088644da fix #2983: improve tcp performance
by leaving the buffer sizes on default, we get much better tcp performance
for high latency links

throughput is still impacted by latency, but much less so when
leaving the sizes at default.
the disadvantage is slightly higher memory usage of the server
(details below)

my local benchmarks (proxmox-backup-client benchmark):

pbs client:
PVE Host
Epyc 7351P (16core/32thread)
64GB Memory

pbs server:
VM on Host
1 Socket, 4 Cores (Host CPU type)
4GB Memory

average of 3 runs, rounded to MB/s
                    | no delay |     1ms |     5ms |     10ms |    25ms |
without this patch  |  230MB/s |  55MB/s |  13MB/s |    7MB/s |   3MB/s |
with this patch     |  293MB/s | 293MB/s | 249MB/s |  241MB/s | 104MB/s |

memory usage (resident memory) of proxmox-backup-proxy:

                    | peak during benchmarks | after benchmarks |
without this patch  |                  144MB |            100MB |
with this patch     |                  145MB |            130MB |

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-10 07:15:12 +02:00
14db8b52dc src/backup/chunk_store.rs: use ? instead of unwrap 2020-09-10 06:37:37 +02:00
597427afaf clean up .bad file handling in sweep_unused_chunks
Code cleanup, no functional change intended.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-10 06:31:22 +02:00
3cddfb29be backup: ensure no fixed index writers are left over either
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-10 06:29:38 +02:00
e15b76369a buildsys: upload client packages also to PMG repo
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-08 16:48:31 +02:00
d7c1251435 ui: calendar event: disable matchFieldWidth for picker
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-08 16:48:31 +02:00
ea3ce82a74 ui: calendar event: enable more complex examples again
now that they (should) work.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-08 16:48:31 +02:00
092378ba92 Change "data store" to "datastore" throughout docs
Before, there were mixed usages of "data store" and
"datastore" throughout the docs.
This improves consistency in the docs by using only
"datastore" throughout.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 13:12:01 +02:00
068e526862 backup: touch all chunks, even if they exist
We need to update the atime of chunk files if they already exist,
otherwise a concurrently running GC could sweep them away.

This is protected with ChunkStore.mutex, so the fstat/unlink does not
race with touching.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-08 12:51:03 +02:00
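The "touch" can be sketched with the filetime crate (an assumption here; the actual code may update the atime through a different mechanism, and locking is omitted):

    use filetime::FileTime;

    /// Bump the atime of an existing chunk file so that a concurrently
    /// running GC (which sweeps by atime) will not consider it unused.
    /// Sketch only; error handling and the ChunkStore mutex are omitted.
    fn touch_chunk(path: &std::path::Path) -> std::io::Result<()> {
        filetime::set_file_atime(path, FileTime::now())
    }

    fn main() {
        // illustrative path only
        let _ = touch_chunk(std::path::Path::new("/tmp/some-chunk"));
    }
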
a9767cf7de gc: remove .bad files on garbage collect
The iterator of get_chunk_iterator is extended with a third parameter
indicating whether the current file is a chunk (false) or a .bad file
(true).

Count their sizes towards the total of removed bytes, since this also frees
disk space.

.bad files are only deleted if the corresponding chunk exists, i.e. has
been rewritten. Otherwise we might delete data only marked bad because
of transient errors.

While at it, also clean up and use nix::unistd::unlinkat instead of
unsafe libc calls.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-08 12:43:13 +02:00
aadcc2815c cleanup rename_corrupted_chunk: avoid duplicate format macro 2020-09-08 12:29:53 +02:00
0f3b7efa84 verify: rename corrupted chunks with .bad extension
This ensures that following backups will always upload the chunk,
thereby replacing it with a correct version again.

Format for renaming is <digest>.<counter>.bad where <counter> is used if
a chunk is found to be bad again before a GC cleans it up.

Care has been taken to deliberately only rename a chunk in conditions
where it is guaranteed to be an error in the chunk itself. Otherwise a
broken index file could lead to an unwanted mass-rename of chunks.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-08 12:20:57 +02:00
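A sketch of the <digest>.<counter>.bad renaming scheme (illustrative only, with simplified error handling):

    use std::path::{Path, PathBuf};

    /// Rename a corrupted chunk to <name>.<counter>.bad, picking the first
    /// free counter so a chunk found bad again before GC ran does not
    /// overwrite the previous .bad file. Sketch, not the actual verify code.
    fn rename_corrupted_chunk(chunk_path: &Path) -> std::io::Result<PathBuf> {
        for counter in 0..9 {
            let bad_path =
                PathBuf::from(format!("{}.{}.bad", chunk_path.display(), counter));
            if !bad_path.exists() {
                std::fs::rename(chunk_path, &bad_path)?;
                return Ok(bad_path);
            }
        }
        Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            "too many .bad copies of this chunk",
        ))
    }

    fn main() {
        let _ = rename_corrupted_chunk(Path::new("/tmp/chunk-digest")); // illustrative
    }
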
7c77e2f94a verify: fix log units
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-08 12:10:19 +02:00
abd4c4cb8c ui: add translation support
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-08 08:54:33 +02:00
09f12d1cf3 tools: rename extract_auth_cookie to extract_cookie
It does nothing specific to authentication..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-08 08:54:33 +02:00
1db4cfb308 tools/sytemd/time: add tests for multivalue fields
we did this wrong earlier, so it makes sense to add regression tests

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-08 07:09:43 +02:00
a4c1143664 server/worker_task: fix upid_read_status
a range from high to low in rust results in an empty range
(see std::ops::Range documentation)
so we need to generate the range from 0..data.len() and then reverse it

also, the task log contains a newline at the end, so we have to remove
that (should it exist)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-08 07:06:22 +02:00
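The two fixes can be illustrated in isolation (a sketch, not the actual upid_read_status code):

    fn main() {
        let data = b"first line\nlast line\n";

        // A range from high to low (e.g. data.len()..0) is empty in Rust, so
        // build the forward range and reverse it to scan back to front.
        let reversed: Vec<usize> = (0..data.len()).rev().collect();
        assert_eq!(reversed.first(), Some(&(data.len() - 1)));

        // The task log ends with a newline; remove it (should it exist)
        // before looking for the start of the last line.
        let text = std::str::from_utf8(data).unwrap();
        let text = text.strip_suffix('\n').unwrap_or(text);
        let last_line = &text[text.rfind('\n').map(|i| i + 1).unwrap_or(0)..];
        assert_eq!(last_line, "last line");
    }
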
0623674f44 Edit section "Network Management"
Following changes made:
    * Remove empty column "method6" from network list output,
      so table fits in console code-block
    * Walkthrough bond, rather than a bridge as it may be a more
      common setup case

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:30:42 +02:00
2dd58db792 PVE integration: Add note about hiding password
Add a note to section "Proxmox VE integration" explaining
how to avoid passing password as plain text when using the
pvesm command.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:19:20 +02:00
e11cfb93c0 change order of "Image Archives" and "File Archives"
Change the order of the "Image Archives" and "File
Archives" subsections, so that they match the order
which they are introduced in, in the section "Backup
Content" (minor readability improvement).

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:19:09 +02:00
bc0608955e Sync Jobs: add screenshots and explanation
Add screenshots of sync jobs panel in web interface
and explain how to carry out related tasks from it.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:18:50 +02:00
36be19218e Network Config: Add screenshots and explanation
Add screenshots for network configuration and explain
how to carry out related tasks using the web interface.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:18:15 +02:00
9fa39a46ba User Management: Add screenshots and explanation
Add screenshots for user management section in web
interface and explain how to carry out relevant tasks
using it.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:18:01 +02:00
ff30b912a0 Datastore Config: add screenshots and explanation
Add screenshots from the datastore section of the
web interface and explain how to carry out tasks using
the web interface.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:17:50 +02:00
b0c10a88a3 Disk Management: Add screenshots and explanation
This adds screenshots from the web interface for the
sections related to disk management and adds explanation
of how to carry out tasks using the web interface.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:12:54 +02:00
ccbe6547a7 Add screenshots of web interface
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:12:17 +02:00
32afd60336 src/tools/systemd/time.rs: derive Clone 2020-09-07 12:37:08 +02:00
02e47b8d6e SYSTEMD_CALENDAR_EVENT_SCHEMA: fix wrong schema description 2020-09-07 09:07:55 +02:00
44055cac4d tools/systemd/time: enable dates for calendarevents
this implements parsing and calculating calendarevents that have a
basic date component (year-mon-day) with the usual syntax options
(*, ranges, lists)

and some special events:
monthly
yearly/annually (like systemd)
quarterly
semiannually,semi-annually (like systemd)

includes some regression tests

the ~ syntax for days (the last x days of the month) is not yet
implemented

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:36:29 +02:00
1dfc09cb6b tools/systemd/time: fix signed conversion
instead of using 'as' and silently converting wrongly,
use the TryInto trait and raise an error if we cannot convert

this should only happen if we have a negative year,
but this is expected (we do not want schedules from before the year 0)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:35:38 +02:00
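A minimal example of the 'as' vs TryInto difference (illustrative values, not the actual time code):

    use std::convert::TryInto;

    fn main() {
        let year: i64 = -1;

        // `as` silently wraps: -1 becomes a huge unsigned value.
        let wrapped = year as u64;
        assert_eq!(wrapped, u64::MAX);

        // try_into() surfaces the problem as an error instead.
        let checked: Result<u64, _> = year.try_into();
        assert!(checked.is_err());
    }
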
48c56024aa tools/systemd/tm_editor: add setter/getter for months/years/days
add_* are modeled after add_days

subtract one for set_mon to have a consistent interface for all fields
(i.e. getter/setter return/expect the 'real' number, not the ones
in the tm struct)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:34:27 +02:00
cf103266b3 tools/systemd/tm_editor: move conversion of the year into getter and setter
the tm struct contains the year - 1900 but we added that

if we want to use the libc normalization correctly, the tm struct
must have the correct year in it, else the computations for timezones,
etc. fail

instead add a getter that adds the years and a setter that subtracts it again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:34:04 +02:00
d5cf8f606c tools/systemd/time: fix selection for multiple options
if we give multiple options/ranges for a value, e.g.
2,4,8
we always choose the biggest, instead of the smallest that is next

this happens because in DateTimeValue::find_next(value)
'next' can be set multiple times and we set it when the new
value was *bigger* than the last found 'next' value, when in reality
we have to choose the *smallest* next we can find

reverse the comparison operator to fix this

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:33:42 +02:00
ce7ab28cfa tools/systemd/parse_time: error out on invalid ranges
if the range is reversed (bigger..smaller) we will never find a value,
so error out during parsing

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:28:48 +02:00
07ca6f6e66 tools/systemd/tm_editor: remove reset_time from add_days and document it
we never passed 'false' to it anyway so remove it
(we can add it again if we should ever need it)

also remove the adding of wday (gets normalized anyway)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:28:24 +02:00
15ec790a40 tools/systemd/time: convert the resulting timestamp into an option
we want to use dates for the calendarspec, and with that there are some
impossible combinations that cannot be detected during parsing
(e.g. some datetimes do not exist in some timezones, and the timezone
can change after setting the schedule)

so finding no timestamp is not an error anymore but a valid result

we omit logging in that case (since it is not an error anymore)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:28:05 +02:00
cb73b2d69c tools/systemd/time: move continue out of the if/else
will be called anyway

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:27:20 +02:00
c931c87173 tools/systemd/time: let libc normalize time for us
mktime/gmtime can normalize time and can even handle special timezone
cases, like the fact that the time 2:30 on specific day/timezone combos
does not exist

we have to convert the signature of all functions that use
normalize_time since mktime/gmtime can return an EOVERFLOW
but if this happens there is no way we can find a good time anyway

since normalize_time will always set wday according to the rest of the
time, remove set_wday

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:26:40 +02:00
28a0a9343c tools/systemd/tm_editor: remove TMChanges optimization
while it was correct, there was no measurable speed gain
(a benchmark yielded 2.8 ms for a spec that did not find a timestamp either way)
so remove it for simpler code

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:26:04 +02:00
56b666458c server/worker_task: fix 'unknown' status for some big task logs
when trying to parse the task status, we seek 8k from the end,
which may land in the middle of a line, so the datetime parsing
can fail (when the log message contains ': ')

This patch does a fast search for the last line, and avoids the
'lines' iterator.
2020-09-04 10:41:13 +02:00
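A sketch of the "fast search for the last line" (path and buffer size are illustrative, not the actual worker_task code):

    use std::fs::File;
    use std::io::{Read, Seek, SeekFrom};

    /// Read at most the last 8 KiB of a task log and return its last
    /// non-empty line, without walking the whole file. Sketch only.
    fn last_line(path: &str) -> std::io::Result<String> {
        let mut file = File::open(path)?;
        let len = file.seek(SeekFrom::End(0))?;
        let start = len.saturating_sub(8192);
        file.seek(SeekFrom::Start(start))?;

        let mut buf = Vec::new();
        file.read_to_end(&mut buf)?;

        // drop a trailing newline, then search backwards for the previous one
        if buf.last() == Some(&b'\n') {
            buf.pop();
        }
        let start_of_line = buf
            .iter()
            .rposition(|&b| b == b'\n')
            .map(|i| i + 1)
            .unwrap_or(0);
        Ok(String::from_utf8_lossy(&buf[start_of_line..]).into_owned())
    }

    fn main() {
        if let Ok(line) = last_line("/var/log/syslog") { // illustrative path
            println!("{}", line);
        }
    }
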
cd6ddb5a69 depend on proxmox 0.3.5 2020-09-04 08:11:53 +02:00
ecd55041a2 fix #2978: allow non-root to view datastore usage
for datastores where the requesting user has read or write permissions,
since the API method itself filters by that already. this is the same
permission setting and filtering that the datastore list API endpoint
does.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-04 06:18:20 +02:00
e7e8e6d5f7 online help: use a phony target and regenerate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-02 14:41:03 +02:00
49df8ac115 docs: add prototype sphinx extension for online help
goes through the sections in the documents and creates the
OnlineHelpInfo.js file from the explicitly defined section labels which
are used in the js files with the 'onlineHelp' variable.
2020-09-02 14:38:27 +02:00
7397f4a390 bump version to 0.8.14-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-02 10:41:42 +02:00
8317873c06 gc: improve percentage done logs 2020-09-02 10:04:18 +02:00
deef63699e verify: also fail on server shutdown 2020-09-02 09:50:17 +02:00
c6e07769e9 ui: datastore content: eslint fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-02 09:30:57 +02:00
423df9b1f4 ui: datastore: show more granular verify state
Allows distinguishing the following situations:
* some snapshots in a group were not verified
* how many snapshots failed to verify in a group
* all snapshots verified but the last verification task was over 30 days
  ago

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-02 09:30:57 +02:00
c879e5af11 ui: datastore: mark row invalid if last snapshot verification failed
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-02 09:12:05 +02:00
63d9aca96f verify: log progress 2020-09-02 07:43:28 +02:00
c3b1da9e41 datastore content: search: set emptytext to searched columns
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-01 18:30:54 +02:00
46388e6aef datastore content: reduce count column width
Using 75 as width we can display up to 9999999 which would allow
displaying over 19 years of snapshots done each minute, so quite
enough for the common cases.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-01 18:28:14 +02:00
484d439a7c datastore content: reload after verify
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-01 18:27:30 +02:00
ab6615134c d/postinst: always fixup termproxy user id and for all users
Anyone with a PAM account and Sys.Console access could have started a
termproxy session, adapt the regex.

Always test for broken entries and run the sed expression to make sure
eventually all occurrences of the broken syntax are fixed.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-01 18:02:11 +02:00
b1149ebb36 ui: DataStoreContent.js: fix wrong comma
should be semicolon

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-01 15:33:55 +02:00
1bfdae7933 ui: DataStoreContent: improve encrypted column
do not count files where we do not have any information

such files exist in the backup dir, but are not in the manifest
so we cannot use those files for determining if the backups are
encrypted or not

this marks encrypted/signed backups with unencrypted client.log.blob files as
encrypted/signed (respectively) instead of 'Mixed'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-01 15:33:55 +02:00
4f09d31085 src/backup/verify.rs: use global hashes (instead of per group)
This makes verify more predictable.
2020-09-01 13:33:04 +02:00
58d73ddb1d src/backup/data_blob.rs: avoid useless &, data is already a reference 2020-09-01 12:56:25 +02:00
6b809ff59b src/backup/verify.rs: use separate thread to load data 2020-09-01 12:56:25 +02:00
afe08d2755 debian/control: fix versions 2020-09-01 10:19:40 +02:00
a7bc5d4eaf depend on proxmox 0.3.4 2020-08-28 06:32:33 +02:00
97cd0a2a6d bump version to 0.8.13-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-27 16:15:31 +02:00
49a92084a9 gc: use human readable units for summary
and avoid the "percentage done: X %" phrase

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-27 16:06:35 +02:00
9bdeecaee4 bump pxar dep to 0.6.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-27 12:16:21 +02:00
843880f008 bin/backup-proxy: assert that daemon runs as backup user/group
Because if not, the backups it creates have bogus permissions and may
seem like they got broken once the daemon is started again with the
correct user/group.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-26 10:30:15 +02:00
a6ed5e1273 backup: add BACKUP_GROUP_NAME const and backup_group helper
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-26 10:27:47 +02:00
74f94d0678 bin/backup-proxy: remove outdated perl comments
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-26 10:27:47 +02:00
946c3e8a81 bin/backup-proxy: return error directly in main
anyhow makes this a nice error message, similar to the manual
wrapping used.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-26 10:27:47 +02:00
7b212c1f79 ui: datastore content: show last verify result from a snapshot
Double-click on the verify grid-cell of a specific snapshot (not the
group) opens the relevant task log.

The date of the last verify is shown as tool-tip.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-26 07:36:16 +02:00
3b2046d263 save last verify result in snapshot manifest
Save the state ("ok" or "failed") and the UPID of the respective
verify task. With this we can easily allow to open the relevant task
log and show when the last verify happened.

As we already load the manifest when listing the snapshots, just add
it there directly.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-26 07:35:13 +02:00
1ffe030123 various typo fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-25 18:52:31 +02:00
5255e641fa SnapshotListItem: add comment field also to schema
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-25 16:24:36 +02:00
c86b6f40d7 tools/format: implement from u64 for HumanByte helper type
Could be problematic for systems where usize is 32 bit, but we do not
really support those.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-25 14:18:49 +02:00
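The conversion could look roughly like this, with HumanByte shown as a minimal stand-in struct (not the real type from tools/format):

    /// Minimal stand-in for the real HumanByte formatting helper.
    struct HumanByte {
        b: usize,
    }

    impl From<usize> for HumanByte {
        fn from(v: usize) -> Self {
            HumanByte { b: v }
        }
    }

    // The commit adds the u64 conversion; on 32-bit systems usize cannot
    // hold every u64 value, hence the remark above.
    impl From<u64> for HumanByte {
        fn from(v: u64) -> Self {
            HumanByte { b: v as usize }
        }
    }

    fn main() {
        let bytes: u64 = 4 * 1024 * 1024 * 1024;
        let hb = HumanByte::from(bytes);
        println!("{} bytes", hb.b);
    }
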
5a718dce17 api datastore: fix typo in error message
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-25 14:16:40 +02:00
1b32750644 update d/control for pxar 0.5.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-25 12:37:11 +02:00
5aa103c3c3 bump pxar dep to 0.5.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-25 12:37:11 +02:00
fd3f690104 Add section "Garbage Collection"
Add the section "Garbage Collection" to section "Backup Server
Management". This briefly explains the "garbage-collection"
subcommand of "proxmox-backup-manager"

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-25 09:38:03 +02:00
24b638bd9f Add section "Network Management"
Add the section "Network Management", which explains the
"network" subcommand of "proxmox-backup-manager"

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-25 09:37:41 +02:00
9624c5eecb add note about TLS benchmark test. 2020-08-25 09:36:12 +02:00
503dd339a8 Add further explanation to benchmarking
Adds a note explaining the percentages shown in the output
of the benchmark

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-25 09:33:23 +02:00
36ea5df444 administration-guide.rst: remove debug output from code examples 2020-08-25 09:29:52 +02:00
dce9dd6f70 Add section "Disk Management"
Add the section "Disk Management" to the admin guide, explaining
the use of the "disk" subcommand of "proxmox-backup-manager"

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-25 09:27:48 +02:00
88e28e15e4 debian/control: update for new pxar 0.4 dependency 2020-08-25 09:09:37 +02:00
399e48a1ed bump version to 0.8.12-1 2020-08-25 08:57:12 +02:00
7ae571e7cb verify: speedup - only verify chunks once
We need to do the check before we load the chunk.
2020-08-25 08:52:24 +02:00
4264c5023b verify: sort backup groups 2020-08-25 08:38:47 +02:00
82b7adf90b bump pxar dep to 0.4.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-24 11:56:01 +02:00
71c4a3138f docs: fix PBS wiki link
rst/sphinx and comments are a PITA...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-21 11:09:41 +02:00
52991f239f bump version to 0.8.11-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-19 19:20:22 +02:00
3435f5491b Fix typo in program output
Change "comptation" -> "computation"

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-19 09:06:27 +02:00
aafe8609e5 d/postinst: fixup userid for older termproxy tasks
At the time when we can fix this up the new (and possibly an old)
server daemon process is running, so use the flock CLI tool from
util-linux to ensure we do the same locking as the server and thus we
avoid a race condition.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-19 07:26:58 +02:00
a8d69fcf05 Add "Benchmarking" section
This adds the "Benchmarking" section which discusses
the proxmox-backup-client benchmark subcommand.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-18 14:24:08 +02:00
1e68497c03 Add section describing acl tool
This adds a section how to use the acl subcommand
to manage user access control

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-18 14:24:08 +02:00
74fc844787 Correct erroneous instructions and add clarity
This patch changes the following:
- Provide extra clarity to instruction and information where
  appropriate.
- Fix examples and content that would lead to erroneous behavior
  in a command.
- Insert section about installing on Debian into a caution block

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-18 14:24:08 +02:00
4cda7603c4 minor language and formatting fixup
this fixes minor grammatical errors throughout the pbs docs
and rewords certain sections for improved readability.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-18 14:24:08 +02:00
11e1e27a42 turn UPID into an API type
It's a string-type.
Implement Serialize via Display, Deserialize via FromStr and
add an API_SCHEMA so that it can be used as a type within
the #[api] macro.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-18 11:54:30 +02:00
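The Serialize-via-Display / Deserialize-via-FromStr pattern could look roughly like the sketch below; the fields, string format and error handling are simplified placeholders, and the API_SCHEMA part is omitted.

    use std::fmt;
    use std::str::FromStr;

    use serde::{de, Deserialize, Deserializer, Serialize, Serializer};

    // Simplified placeholder for the real UPID (which has more fields).
    struct Upid {
        node: String,
        task_id: u64,
    }

    impl fmt::Display for Upid {
        fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
            write!(f, "UPID:{}:{:08X}:", self.node, self.task_id)
        }
    }

    impl FromStr for Upid {
        type Err = String;
        fn from_str(s: &str) -> Result<Self, Self::Err> {
            let mut parts = s.splitn(4, ':');
            match (parts.next(), parts.next(), parts.next()) {
                (Some("UPID"), Some(node), Some(id)) => Ok(Upid {
                    node: node.to_owned(),
                    task_id: u64::from_str_radix(id, 16).map_err(|e| e.to_string())?,
                }),
                _ => Err(format!("invalid UPID string: {:?}", s)),
            }
        }
    }

    // Serialize via the Display implementation ...
    impl Serialize for Upid {
        fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
            serializer.collect_str(self)
        }
    }

    // ... and Deserialize by parsing the string back via FromStr.
    impl<'de> Deserialize<'de> for Upid {
        fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {
            let s = String::deserialize(deserializer)?;
            s.parse().map_err(de::Error::custom)
        }
    }

A string-backed wrapper like this keeps the wire format identical to the plain string while still giving the API a concrete type.
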
4ea831bfa1 style fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-18 08:50:14 +02:00
c1d7d708d4 remove map_struct helper
if we ever need this it should be marked as unsafe!

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-17 11:53:02 +02:00
3fa2b983c1 add methods to allocate a DynamicIndexHeader
to avoid `map_struct` which is actually unsafe because it
does not verify alignment constraints at all

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-17 11:50:32 +02:00
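A hedged sketch of the safer alternative: allocate the header through its own Layout, so both size and alignment are guaranteed, instead of reinterpreting a byte buffer. The field layout shown is invented for illustration.

    use std::alloc::{alloc_zeroed, Layout};

    // Invented field layout, for illustration only.
    #[repr(C)]
    struct DynamicIndexHeader {
        magic: [u8; 8],
        uuid: [u8; 16],
        ctime: i64,
        index_csum: [u8; 32],
    }

    impl DynamicIndexHeader {
        /// Allocate a zeroed header with the type's own size *and* alignment,
        /// instead of reinterpreting an arbitrary byte buffer (which only
        /// guarantees the size).
        fn zeroed() -> Box<Self> {
            let layout = Layout::new::<Self>();
            unsafe {
                let ptr = alloc_zeroed(layout) as *mut Self;
                assert!(!ptr.is_null(), "allocation failed");
                // all-zero bytes are a valid value for every field above
                Box::from_raw(ptr)
            }
        }
    }
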
a1e9c05738 api2/node/services: turn service api calls into workers
to be in line with pve/pmg and be able to show the progress in the gui

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-14 12:37:17 +02:00
934deeff2d fix #2904: zpool status: parse vdevs with state but without statistics
some vdevs (e.g. spares) have a 'state' (e.g. AVAIL), but
not statistics like READ/WRITE/etc.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-14 11:41:32 +02:00
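Illustratively (not the actual parser), handling such lines boils down to treating the statistics columns as optional:

    #[derive(Debug)]
    struct VdevEntry {
        name: String,
        state: String,
        stats: Option<(u64, u64, u64)>, // READ, WRITE, CKSUM
    }

    fn parse_vdev_line(line: &str) -> Option<VdevEntry> {
        let mut cols = line.split_whitespace();
        let name = cols.next()?.to_string();
        let state = cols.next()?.to_string();
        // Statistics columns are optional: spares only report a state like AVAIL.
        let stats: Option<(u64, u64, u64)> = match (cols.next(), cols.next(), cols.next()) {
            (Some(r), Some(w), Some(c)) => Some((r.parse().ok()?, w.parse().ok()?, c.parse().ok()?)),
            _ => None,
        };
        Some(VdevEntry { name, state, stats })
    }
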
c162df60c8 zfs status: add test with spares
this will fail for now, fixed in the next commit

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-14 11:41:32 +02:00
98161fddb5 cleanup last patch 2020-08-14 07:30:05 +02:00
be614c625f api2/node/../disks/directory: added DELETE endpoint for removal of mount-units
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-08-14 07:06:10 +02:00
87c4cb7419 Fix #2926: parse_iface_attributes: always break on non-{attribute, comment} token
There is no requirement to have at least
a blank line, attribute or comment in between two
interface definitions, e.g.
iface lo inet loopback
iface lo inet6 loopback

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-08-14 06:57:07 +02:00
93bb51fe7e config/jobstate: replace Job:load with create_state_file
it really is not necessary, since the only time we are interested in
loading the state from the file is when we list it, and there
we use JobState::load directly to avoid the lock

we still need to create the file on syncjob creation though, so
that we have the correct time for the schedule

to do this we add a new create_state_file that overwrites it on creation
of a syncjob

for safety, we subtract 30 seconds from the in-memory state in case
the statefile is missing

since we call create_state_file from proxmox-backup-api,
we have to chown the lock file to the backup user after creating it,
else the sync job scheduling cannot acquire the lock

also we remove the lock file on statefile removal

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-14 06:38:02 +02:00
713b66b6ed cleanup: replace id from do_sync_job with info from job
we already have it inside the job itself

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-14 06:36:43 +02:00
77bd2a469c cleanup: merge endtime into TaskState
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-14 06:36:19 +02:00
97af919530 ui: syncjob: make some columns smaller
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-13 11:51:47 +02:00
c91602316b ui: syncjob: improve task text rendering
to also have the correct icons for warnings and unknown tasks

the text here is now "ERROR: ...", so leave the 'Error' out

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-13 11:51:35 +02:00
a13573c24a syncjob: use do_sync_job also for scheduled sync jobs
and determine the last runtime with the jobstate

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-13 11:51:20 +02:00
02543a5c7f api2/pull: extend do_sync_job to also handle schedule and jobstate
so that we can log if it was triggered by a schedule, and write to a jobstate file.
Also correctly polls the abort_future of the worker now, so that
users can stop a sync.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-13 11:49:28 +02:00
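Polling the abort future alongside the actual work could be sketched with the futures crate roughly as below; the function, its bounds and the error type are illustrative, not the real worker API.

    use futures::future::{self, Either};
    use std::future::Future;

    // Race the pull work against the worker's abort future, so a user-requested
    // abort stops the sync promptly instead of being noticed only at the end.
    async fn run_abortable<W, A>(work: W, abort: A) -> Result<(), String>
    where
        W: Future<Output = Result<(), String>> + Unpin,
        A: Future<Output = ()> + Unpin,
    {
        match future::select(work, abort).await {
            Either::Left((result, _abort)) => result,
            Either::Right(((), _work)) => Err("sync aborted by user".to_string()),
        }
    }
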
42b68f72e6 api/{pull, sync}: refactor to do_sync_job
and move the pull parameters into the worker, so that the task log
contains the error if there is one

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-13 11:40:52 +02:00
664d8a2765 api2/admin/sync: use JobState for faster access to state info
and delete the statefile again on syncjob removal

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-13 11:40:00 +02:00
e6263c2662 config: add JobState helper
this is intended to be a generic helper to (de)serialize job states
(e.g., sync, verify, and so on)

writes a json file into '/var/lib/proxmox-backup/jobstates/TYPE-ID.json'

the api creates the directory with the correct permissions, like
the rrd directory

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-13 11:36:10 +02:00
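A loose sketch of such a helper, assuming serde/serde_json and invented state variants (the real JobState differs):

    use std::fs;
    use std::path::PathBuf;

    use serde::{Deserialize, Serialize};

    // Invented variants for illustration; the real JobState tracks more detail.
    #[derive(Serialize, Deserialize)]
    #[serde(rename_all = "lowercase")]
    enum JobState {
        Created { time: i64 },
        Started { upid: String },
        Finished { upid: String, state: String },
    }

    fn state_path(jobtype: &str, id: &str) -> PathBuf {
        PathBuf::from(format!(
            "/var/lib/proxmox-backup/jobstates/{}-{}.json",
            jobtype, id
        ))
    }

    fn save_state(jobtype: &str, id: &str, state: &JobState) -> Result<(), Box<dyn std::error::Error>> {
        let serialized = serde_json::to_string(state)?;
        fs::write(state_path(jobtype, id), serialized)?;
        Ok(())
    }

    fn load_state(jobtype: &str, id: &str) -> Result<JobState, Box<dyn std::error::Error>> {
        let raw = fs::read_to_string(state_path(jobtype, id))?;
        Ok(serde_json::from_str(&raw)?)
    }
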
ae197dda23 server/worker_task: let upid_read_status also return the endtime
the endtime should be the timestamp of the last log line,
or, if there is no log at all, the starttime

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-13 11:35:44 +02:00
4c116bafb8 server: change status of a task from a string to an enum
representing a state via an enum makes more sense in this case.
We also implement FromStr and Display to make it easy to convert from/to
a string.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-13 11:35:19 +02:00
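A minimal sketch of the pattern with invented variants (the real TaskState carries more information, e.g. the endtime):

    use std::fmt;
    use std::str::FromStr;

    #[derive(Debug, PartialEq)]
    enum TaskState {
        Ok,
        Error(String),
        Unknown,
    }

    impl fmt::Display for TaskState {
        fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
            match self {
                TaskState::Ok => write!(f, "OK"),
                TaskState::Error(msg) => write!(f, "ERROR: {}", msg),
                TaskState::Unknown => write!(f, "unknown"),
            }
        }
    }

    impl FromStr for TaskState {
        type Err = std::convert::Infallible;
        fn from_str(s: &str) -> Result<Self, Self::Err> {
            Ok(match s {
                "OK" => TaskState::Ok,
                "unknown" => TaskState::Unknown,
                other => match other.strip_prefix("ERROR: ") {
                    Some(msg) => TaskState::Error(msg.to_owned()),
                    None => TaskState::Unknown,
                },
            })
        }
    }
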
df30017ff8 remove unused import
rustc doesn't warn about this kind of import, however,
clippy does

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-13 09:05:15 +02:00
3f3ae19d63 formatting fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-12 14:30:03 +02:00
72dc68323c replace and remove old ticket functions
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-12 14:28:21 +02:00
593f917742 introduce Ticket struct
and add tests and compatibility tests

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-12 14:28:21 +02:00
639419b049 worker_task: new_thread() - remove unused tokio channel 2020-08-12 08:43:09 +02:00
157 changed files with 5780 additions and 1994 deletions


@ -1,6 +1,6 @@
[package] [package]
name = "proxmox-backup" name = "proxmox-backup"
version = "0.8.10" version = "0.9.0"
authors = ["Dietmar Maurer <dietmar@proxmox.com>"] authors = ["Dietmar Maurer <dietmar@proxmox.com>"]
edition = "2018" edition = "2018"
license = "AGPL-3" license = "AGPL-3"
@ -18,7 +18,6 @@ apt-pkg-native = "0.3.1" # custom patched version
base64 = "0.12" base64 = "0.12"
bitflags = "1.2.1" bitflags = "1.2.1"
bytes = "0.5" bytes = "0.5"
chrono = "0.4" # Date and time library for Rust
crc32fast = "1" crc32fast = "1"
endian_trait = { version = "0.6", features = ["arrays"] } endian_trait = { version = "0.6", features = ["arrays"] }
anyhow = "1.0" anyhow = "1.0"
@ -26,7 +25,7 @@ futures = "0.3"
h2 = { version = "0.2", features = ["stream"] } h2 = { version = "0.2", features = ["stream"] }
handlebars = "3.0" handlebars = "3.0"
http = "0.2" http = "0.2"
hyper = "0.13" hyper = "0.13.6"
lazy_static = "1.4" lazy_static = "1.4"
libc = "0.2" libc = "0.2"
log = "0.4" log = "0.4"
@ -39,11 +38,11 @@ pam-sys = "0.5"
percent-encoding = "2.1" percent-encoding = "2.1"
pin-utils = "0.1.0" pin-utils = "0.1.0"
pathpatterns = "0.1.2" pathpatterns = "0.1.2"
proxmox = { version = "0.3.3", features = [ "sortable-macro", "api-macro", "websocket" ] } proxmox = { version = "0.4.2", features = [ "sortable-macro", "api-macro", "websocket" ] }
#proxmox = { git = "ssh://gitolite3@proxdev.maurer-it.com/rust/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] } #proxmox = { git = "ssh://gitolite3@proxdev.maurer-it.com/rust/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] } #proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox-fuse = "0.1.0" proxmox-fuse = "0.1.0"
pxar = { version = "0.3.0", features = [ "tokio-io", "futures-io" ] } pxar = { version = "0.6.1", features = [ "tokio-io", "futures-io" ] }
#pxar = { path = "../pxar", features = [ "tokio-io", "futures-io" ] } #pxar = { path = "../pxar", features = [ "tokio-io", "futures-io" ] }
regex = "1.2" regex = "1.2"
rustyline = "6" rustyline = "6"
@ -62,6 +61,7 @@ walkdir = "2"
xdg = "2.2" xdg = "2.2"
zstd = { version = "0.4", features = [ "bindgen" ] } zstd = { version = "0.4", features = [ "bindgen" ] }
nom = "5.1" nom = "5.1"
crossbeam-channel = "0.4"
[features] [features]
default = [] default = []


@ -150,4 +150,4 @@ upload: ${SERVER_DEB} ${CLIENT_DEB} ${DOC_DEB}
# check if working directory is clean # check if working directory is clean
git diff --exit-code --stat && git diff --exit-code --stat --staged git diff --exit-code --stat && git diff --exit-code --stat --staged
tar cf - ${SERVER_DEB} ${SERVER_DBG_DEB} ${DOC_DEB} | ssh -X repoman@repo.proxmox.com upload --product pbs --dist buster tar cf - ${SERVER_DEB} ${SERVER_DBG_DEB} ${DOC_DEB} | ssh -X repoman@repo.proxmox.com upload --product pbs --dist buster
tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pbs,pve" --dist buster tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pbs,pve,pmg" --dist buster

211
debian/changelog vendored

@ -1,3 +1,213 @@
rust-proxmox-backup (0.9.0-1) unstable; urgency=medium
* use ParallelHandler to verify chunks
* client: add new paper-key command to CLI tool
* server: split task list in active and archived
* tools: add logrotate module and use it for archived tasks, allowing to save
more than 100 thousand tasks efficiently in the archive
* require square [brackets] for ipv6 addresses and fix ipv6 handling for
remotes/sync jobs
* ui: RemoteEdit: make comment and fingerprint deletable
* api/disks: create zfs: enable import systemd service unit for newly created
ZFS pools
* client and remotes: add support to specify a custom port number. The server
is still always listening on 8007, but you can now use things like reverse
proxies or port mapping.
* ui: RemoteEdit: allow to specify a port in the host field
* client pull: log progress
* various fixes and improvements
-- Proxmox Support Team <support@proxmox.com> Thu, 01 Oct 2020 16:19:40 +0200
rust-proxmox-backup (0.8.21-1) unstable; urgency=medium
* depend on crossbeam-channel
* speedup sync jobs (allow up to 4 worker threads)
* improve docs
* use jobstate mechanism for verify/garbage_collection schedules
* proxy: fix error handling in prune scheduling
-- Proxmox Support Team <support@proxmox.com> Fri, 25 Sep 2020 13:20:19 +0200
rust-proxmox-backup (0.8.20-1) unstable; urgency=medium
* improve sync speed
* benchmark: use compressible data to get a more realistic result
* docs: add onlineHelp to some panels
-- Proxmox Support Team <support@proxmox.com> Thu, 24 Sep 2020 13:15:45 +0200
rust-proxmox-backup (0.8.19-1) unstable; urgency=medium
* src/api2/reader.rs: use std::fs::read instead of tokio::fs::read
-- Proxmox Support Team <support@proxmox.com> Tue, 22 Sep 2020 13:30:27 +0200
rust-proxmox-backup (0.8.18-1) unstable; urgency=medium
* src/client/pull.rs: allow up to 20 concurrent download streams
* docs: add version and date to HTML index
-- Proxmox Support Team <support@proxmox.com> Tue, 22 Sep 2020 12:39:26 +0200
rust-proxmox-backup (0.8.17-1) unstable; urgency=medium
* src/client/pull.rs: open temporary manifest with truncate(true)
* depend on proxmox 0.4.1
* fix #3017: check array boundaries before using
* datastore/prune schedules: use JobState for tracking of schedules
* improve docs
* fix #3015: allow user self-service
* add verification scheduling to proxmox-backup-proxy
* fix #3014: allow DataStoreAdmins to list DS config
* depend on pxar 0.6.1
* fix #2942: implement lacp bond mode and bond_xmit_hash_policy
* api2/pull: make pull worker abortable
* fix #2870: renew tickets in HttpClient
* always allow retrieving (censored) subscription info
* fix #2957: allow Sys.Audit access to node RRD
* backup: check all referenced chunks actually exist
* backup: check verify state of previous backup before allowing reuse
* avoid chrono dependency
-- Proxmox Support Team <support@proxmox.com> Mon, 21 Sep 2020 14:08:32 +0200
rust-proxmox-backup (0.8.16-1) unstable; urgency=medium
* BackupDir: make constructor fallible
* handle invalid mtime when formatting entries
* ui/docs: add onlineHelp button for syncjobs
* docs: add section for calendar events
* tools/systemd/parse_time: enable */x syntax for calendar events
* docs: set html img width limitation through css
* docs: use alabaster theme
* server: set http2 max frame size
* doc: Add section "FAQ"
-- Proxmox Support Team <support@proxmox.com> Fri, 11 Sep 2020 15:54:57 +0200
rust-proxmox-backup (0.8.15-1) unstable; urgency=medium
* verify: skip benchmark directory
* add benchmark flag to backup creation for proper cleanup when running
a benchmark
* mount: fix mount subcommand
* ui: only mark backup encrypted if there are any files
* fix #2983: improve tcp performance
* improve ui and docs
* verify: rename corrupted chunks with .bad extension
* gc: remove .bad files on garbage collect
* ui: add translation support
* server/worker_task: fix upid_read_status
* tools/systemd/time: enable dates for calendarevents
* server/worker_task: fix 'unknown' status for some big task logs
-- Proxmox Support Team <support@proxmox.com> Thu, 10 Sep 2020 09:25:59 +0200
rust-proxmox-backup (0.8.14-1) unstable; urgency=medium
* verify speed up: use separate IO thread, use datastore-wide cache (instead
of per group)
* ui: datastore content: improve encrypted column
* ui: datastore content: show more granular verify state, especially for
backup group rows
* verify: log progress in percent
-- Proxmox Support Team <support@proxmox.com> Wed, 02 Sep 2020 09:36:47 +0200
rust-proxmox-backup (0.8.13-1) unstable; urgency=medium
* improve and add to documentation
* save last verify result in snapshot manifest and show it in the GUI
* gc: use human readable units for summary in task log
-- Proxmox Support Team <support@proxmox.com> Thu, 27 Aug 2020 16:12:07 +0200
rust-proxmox-backup (0.8.12-1) unstable; urgency=medium
* verify: speedup - only verify chunks once
* verify: sort backup groups
* bump pxar dep to 0.4.0
-- Proxmox Support Team <support@proxmox.com> Tue, 25 Aug 2020 08:55:52 +0200
rust-proxmox-backup (0.8.11-1) unstable; urgency=medium
* improve sync jobs, allow to stop them and better logging
* fix #2926: make network interfaces parser more flexible
* fix #2904: zpool status: also parse those vdevs without READ/WRITE/...
statistics
* api2/node/services: turn service api calls into workers
* docs: add sections describing ACL related commands and describing
benchmarking
* docs: general grammar, wording and typo improvements
-- Proxmox Support Team <support@proxmox.com> Wed, 19 Aug 2020 19:20:03 +0200
rust-proxmox-backup (0.8.10-1) unstable; urgency=medium rust-proxmox-backup (0.8.10-1) unstable; urgency=medium
* ui: acl: add improved permission selector * ui: acl: add improved permission selector
@ -391,4 +601,3 @@ proxmox-backup (0.1-1) unstable; urgency=medium
* first try * first try
-- Proxmox Support Team <support@proxmox.com> Fri, 30 Nov 2018 13:03:28 +0100 -- Proxmox Support Team <support@proxmox.com> Fri, 30 Nov 2018 13:03:28 +0100

22
debian/control vendored

@ -11,8 +11,8 @@ Build-Depends: debhelper (>= 11),
librust-base64-0.12+default-dev, librust-base64-0.12+default-dev,
librust-bitflags-1+default-dev (>= 1.2.1-~~), librust-bitflags-1+default-dev (>= 1.2.1-~~),
librust-bytes-0.5+default-dev, librust-bytes-0.5+default-dev,
librust-chrono-0.4+default-dev,
librust-crc32fast-1+default-dev, librust-crc32fast-1+default-dev,
librust-crossbeam-channel-0.4+default-dev,
librust-endian-trait-0.6+arrays-dev, librust-endian-trait-0.6+arrays-dev,
librust-endian-trait-0.6+default-dev, librust-endian-trait-0.6+default-dev,
librust-futures-0.3+default-dev, librust-futures-0.3+default-dev,
@ -20,7 +20,7 @@ Build-Depends: debhelper (>= 11),
librust-h2-0.2+stream-dev, librust-h2-0.2+stream-dev,
librust-handlebars-3+default-dev, librust-handlebars-3+default-dev,
librust-http-0.2+default-dev, librust-http-0.2+default-dev,
librust-hyper-0.13+default-dev, librust-hyper-0.13+default-dev (>= 0.13.6-~~),
librust-lazy-static-1+default-dev (>= 1.4-~~), librust-lazy-static-1+default-dev (>= 1.4-~~),
librust-libc-0.2+default-dev, librust-libc-0.2+default-dev,
librust-log-0.4+default-dev, librust-log-0.4+default-dev,
@ -34,14 +34,14 @@ Build-Depends: debhelper (>= 11),
librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~), librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
librust-percent-encoding-2+default-dev (>= 2.1-~~), librust-percent-encoding-2+default-dev (>= 2.1-~~),
librust-pin-utils-0.1+default-dev, librust-pin-utils-0.1+default-dev,
librust-proxmox-0.3+api-macro-dev (>= 0.3.3-~~), librust-proxmox-0.4+api-macro-dev (>= 0.4.2-~~),
librust-proxmox-0.3+default-dev (>= 0.3.3-~~), librust-proxmox-0.4+default-dev (>= 0.4.2-~~),
librust-proxmox-0.3+sortable-macro-dev (>= 0.3.3-~~), librust-proxmox-0.4+sortable-macro-dev (>= 0.4.2-~~),
librust-proxmox-0.3+websocket-dev (>= 0.3.3-~~), librust-proxmox-0.4+websocket-dev (>= 0.4.2-~~),
librust-proxmox-fuse-0.1+default-dev, librust-proxmox-fuse-0.1+default-dev,
librust-pxar-0.3+default-dev, librust-pxar-0.6+default-dev (>= 0.6.1-~~),
librust-pxar-0.3+futures-io-dev, librust-pxar-0.6+futures-io-dev (>= 0.6.1-~~),
librust-pxar-0.3+tokio-io-dev, librust-pxar-0.6+tokio-io-dev (>= 0.6.1-~~),
librust-regex-1+default-dev (>= 1.2-~~), librust-regex-1+default-dev (>= 1.2-~~),
librust-rustyline-6+default-dev, librust-rustyline-6+default-dev,
librust-serde-1+default-dev, librust-serde-1+default-dev,
@ -78,6 +78,7 @@ Build-Depends: debhelper (>= 11),
uuid-dev, uuid-dev,
debhelper (>= 12~), debhelper (>= 12~),
bash-completion, bash-completion,
pve-eslint,
python3-docutils, python3-docutils,
python3-pygments, python3-pygments,
rsync, rsync,
@ -103,6 +104,7 @@ Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1), libjs-extjs (>= 6.0.1),
libzstd1 (>= 1.3.8), libzstd1 (>= 1.3.8),
lvm2, lvm2,
pbs-i18n,
proxmox-backup-docs, proxmox-backup-docs,
proxmox-mini-journalreader, proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.2-4), proxmox-widget-toolkit (>= 2.2-4),
@ -117,7 +119,7 @@ Description: Proxmox Backup Server daemon with tools and GUI
Package: proxmox-backup-client Package: proxmox-backup-client
Architecture: any Architecture: any
Depends: ${misc:Depends}, ${shlibs:Depends} Depends: qrencode ${misc:Depends}, ${shlibs:Depends}
Description: Proxmox Backup Client tools Description: Proxmox Backup Client tools
This package contains the Proxmox Backup client, which provides a This package contains the Proxmox Backup client, which provides a
simple command line tool to create and restore backups. simple command line tool to create and restore backups.

5
debian/control.in vendored

@ -4,9 +4,10 @@ Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1), libjs-extjs (>= 6.0.1),
libzstd1 (>= 1.3.8), libzstd1 (>= 1.3.8),
lvm2, lvm2,
pbs-i18n,
proxmox-backup-docs, proxmox-backup-docs,
proxmox-mini-journalreader, proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.2-4), proxmox-widget-toolkit (>= 2.3-1),
pve-xtermjs (>= 4.7.0-1), pve-xtermjs (>= 4.7.0-1),
smartmontools, smartmontools,
${misc:Depends}, ${misc:Depends},
@ -18,7 +19,7 @@ Description: Proxmox Backup Server daemon with tools and GUI
Package: proxmox-backup-client Package: proxmox-backup-client
Architecture: any Architecture: any
Depends: ${misc:Depends}, ${shlibs:Depends} Depends: qrencode ${misc:Depends}, ${shlibs:Depends}
Description: Proxmox Backup Client tools Description: Proxmox Backup Client tools
This package contains the Proxmox Backup client, which provides a This package contains the Proxmox Backup client, which provides a
simple command line tool to create and restore backups. simple command line tool to create and restore backups.


@ -14,6 +14,7 @@ section = "admin"
build_depends = [ build_depends = [
"debhelper (>= 12~)", "debhelper (>= 12~)",
"bash-completion", "bash-completion",
"pve-eslint",
"python3-docutils", "python3-docutils",
"python3-pygments", "python3-pygments",
"rsync", "rsync",

6
debian/postinst vendored

@ -14,6 +14,12 @@ case "$1" in
_dh_action=start _dh_action=start
fi fi
deb-systemd-invoke $_dh_action proxmox-backup.service proxmox-backup-proxy.service >/dev/null || true deb-systemd-invoke $_dh_action proxmox-backup.service proxmox-backup-proxy.service >/dev/null || true
# FIXME: Remove in future version once we're sure no broken entries remain in anyone's files
if grep -q -e ':termproxy::[^@]\+: ' /var/log/proxmox-backup/tasks/active; then
echo "Fixing up termproxy user id in task log..."
flock -w 30 /var/log/proxmox-backup/tasks/active.lock sed -i 's/:termproxy::\([^@]\+\): /:termproxy::\1@pam: /' /var/log/proxmox-backup/tasks/active
fi
;; ;;
abort-upgrade|abort-remove|abort-deconfigure) abort-upgrade|abort-remove|abort-deconfigure)


@ -28,7 +28,6 @@ COMPILEDIR := ../target/debug
SPHINXOPTS += -t devbuild SPHINXOPTS += -t devbuild
endif endif
# Sphinx internal variables. # Sphinx internal variables.
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(SPHINXOPTS) . ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(SPHINXOPTS) .
@ -68,9 +67,17 @@ proxmox-backup-manager.1: proxmox-backup-manager/man1.rst proxmox-backup-manage
proxmox-backup-proxy.1: proxmox-backup-proxy/man1.rst proxmox-backup-proxy/description.rst proxmox-backup-proxy.1: proxmox-backup-proxy/man1.rst proxmox-backup-proxy/description.rst
rst2man $< >$@ rst2man $< >$@
.PHONY: onlinehelpinfo
onlinehelpinfo:
@echo "Generating OnlineHelpInfo.js..."
$(SPHINXBUILD) -b proxmox-scanrefs $(ALLSPHINXOPTS) $(BUILDDIR)/scanrefs
@echo "Build finished. OnlineHelpInfo.js is in $(BUILDDIR)/scanrefs."
.PHONY: html .PHONY: html
html: ${GENERATED_SYNOPSIS} html: ${GENERATED_SYNOPSIS} images/proxmox-logo.svg custom.css conf.py
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
cp images/proxmox-logo.svg $(BUILDDIR)/html/_static/
cp custom.css $(BUILDDIR)/html/_static/
@echo @echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html." @echo "Build finished. The HTML pages are in $(BUILDDIR)/html."


@ -0,0 +1,133 @@
#!/usr/bin/env python3
# debugging stuff
from pprint import pprint
from typing import cast
import json
import re
import os
import io
from docutils import nodes
from sphinx.builders import Builder
from sphinx.util import logging
logger = logging.getLogger(__name__)
# refs are added in the following manner before the title of a section (note underscore and newline before title):
# .. _my-label:
#
# Section to ref
# --------------
#
#
# then referred to like (note missing underscore):
# "see :ref:`my-label`"
#
# the benefit of using this is if a label is explicitly set for a section,
# we can refer to it with this anchor #my-label in the html,
# even if the section name changes.
#
# see https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-ref
def scan_extjs_files(wwwdir="../www"): # a bit rough i know, but we can optimize later
js_files = []
used_anchors = []
logger.info("scanning extjs files for onlineHelp definitions")
for root, dirs, files in os.walk("{}".format(wwwdir)):
#print(root, dirs, files)
for filename in files:
if filename.endswith('.js'):
js_files.append(os.path.join(root, filename))
for js_file in js_files:
fd = open(js_file).read()
match = re.search("onlineHelp:\s*[\'\"](.*?)[\'\"]", fd) # match object is tuple
if match:
anchor = match.groups()[0]
anchor = re.sub('_', '-', anchor) # normalize labels
logger.info("found onlineHelp: {} in {}".format(anchor, js_file))
used_anchors.append(anchor)
return used_anchors
def setup(app):
logger.info('Mapping reference labels...')
app.add_builder(ReflabelMapper)
return {
'version': '0.1',
'parallel_read_safe': True,
'parallel_write_safe': True,
}
class ReflabelMapper(Builder):
name = 'proxmox-scanrefs'
def init(self):
self.docnames = []
self.env.online_help = {}
self.env.online_help['pbs_documentation_index'] = {
'link': '/docs/index.html',
'title': 'Proxmox Backup Server Documentation Index',
}
self.env.used_anchors = scan_extjs_files()
if not os.path.isdir(self.outdir):
os.mkdir(self.outdir)
self.output_filename = os.path.join(self.outdir, 'OnlineHelpInfo.js')
self.output = io.open(self.output_filename, 'w', encoding='UTF-8')
def write_doc(self, docname, doctree):
for node in doctree.traverse(nodes.section):
#pprint(vars(node))
if hasattr(node, 'expect_referenced_by_id') and len(node['ids']) > 1: # explicit labels
filename = self.env.doc2path(docname)
filename_html = re.sub('.rst', '.html', filename)
labelid = node['ids'][1] # [0] is predefined by sphinx, we need [1] for explicit ones
title = cast(nodes.title, node[0])
logger.info('traversing section {}'.format(title.astext()))
ref_name = getattr(title, 'rawsource', title.astext())
self.env.online_help[labelid] = {'link': '', 'title': ''}
self.env.online_help[labelid]['link'] = "/docs/" + os.path.basename(filename_html) + "#{}".format(labelid)
self.env.online_help[labelid]['title'] = ref_name
return
def get_outdated_docs(self):
return 'all documents'
def prepare_writing(self, docnames):
return
def get_target_uri(self, docname, typ=None):
return ''
def validate_anchors(self):
#pprint(self.env.online_help)
to_remove = []
for anchor in self.env.used_anchors:
if anchor not in self.env.online_help:
logger.info("[-] anchor {} is missing from onlinehelp!".format(anchor))
for anchor in self.env.online_help:
if anchor not in self.env.used_anchors and anchor != 'pbs_documentation_index':
logger.info("[*] anchor {} not used! deleting...".format(anchor))
to_remove.append(anchor)
for anchor in to_remove:
self.env.online_help.pop(anchor, None)
return
def finish(self):
# generate OnlineHelpInfo.js output
self.validate_anchors()
self.output.write("const proxmoxOnlineHelpInfo = ")
self.output.write(json.dumps(self.env.online_help, indent=2))
self.output.write(";\n")
self.output.close()
return

11
docs/_templates/index-sidebar.html vendored Normal file

@ -0,0 +1,11 @@
<h3>Navigation</h3>
{{ toctree(includehidden=theme_sidebar_includehidden, collapse=True, titles_only=True) }}
{% if theme_extra_nav_links %}
<hr />
<h3>Links</h3>
<ul>
{% for text, uri in theme_extra_nav_links.items() %}
<li class="toctree-l1"><a href="{{ uri }}">{{ text }}</a></li>
{% endfor %}
</ul>
{% endif %}

7
docs/_templates/sidebar-header.html vendored Normal file

@ -0,0 +1,7 @@
<p class="logo">
<a href="index.html">
<img class="logo" src="_static/proxmox-logo.svg" alt="Logo">
</a>
</p>
<h1 class="logo logo-name"><a href="index.html">Proxmox Backup</a></h1>
<hr style="width:100%;">


@ -24,6 +24,13 @@ good deduplication rates for file archives.
The Proxmox Backup Server supports both strategies. The Proxmox Backup Server supports both strategies.
Image Archives: ``<name>.img``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is used for virtual machine images and other large binary
data. Content is split into fixed-sized chunks.
File Archives: ``<name>.pxar`` File Archives: ``<name>.pxar``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -34,13 +41,6 @@ the :ref:`pxar-format`, split into variable-sized chunks. The format
is optimized to achieve good deduplication rates. is optimized to achieve good deduplication rates.
Image Archives: ``<name>.img``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is used for virtual machine images and other large binary
data. Content is split into fixed-sized chunks.
Binary Data (BLOBs) Binary Data (BLOBs)
^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^
@ -127,17 +127,18 @@ Backup Server Management
The command line tool to configure and manage the backup server is called The command line tool to configure and manage the backup server is called
:command:`proxmox-backup-manager`. :command:`proxmox-backup-manager`.
.. _datastore_intro:
:term:`DataStore` :term:`DataStore`
~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~
A datastore is a place where backups are stored. The current implementation A datastore refers to a location at which backups are stored. The current
uses a directory inside a standard unix file system (``ext4``, ``xfs`` implementation uses a directory inside a standard unix file system (``ext4``,
or ``zfs``) to store the backup data. ``xfs`` or ``zfs``) to store the backup data.
Datastores are identified by a simple *ID*. You can configure it Datastores are identified by a simple *ID*. You can configure this
when setting up the backup server. when setting up the datastore. The configuration information for datastores
is stored in the file ``/etc/proxmox-backup/datastore.cfg``.
.. note:: The `File Layout`_ requires the file system to support at least *65538* .. note:: The `File Layout`_ requires the file system to support at least *65538*
subdirectories per directory. That number comes from the 2\ :sup:`16` subdirectories per directory. That number comes from the 2\ :sup:`16`
@ -146,26 +147,132 @@ when setting up the backup server.
filesystem configuration from being supported for a datastore. For example, filesystem configuration from being supported for a datastore. For example,
``ext3`` as a whole or ``ext4`` with the ``dir_nlink`` feature manually disabled. ``ext3`` as a whole or ``ext4`` with the ``dir_nlink`` feature manually disabled.
Disk Management
~~~~~~~~~~~~~~~
.. image:: images/screenshots/pbs-gui-disks.png
:align: right
:alt: List of disks
Proxmox Backup Server comes with a set of disk utilities, which are
accessed using the ``disk`` subcommand. This subcommand allows you to initialize
disks, create various filesystems, and get information about the disks.
To view the disks connected to the system, navigate to **Administration ->
Disks** in the web interface or use the ``list`` subcommand of
``disk``:
.. code-block:: console
# proxmox-backup-manager disk list
┌──────┬────────┬─────┬───────────┬─────────────┬───────────────┬─────────┬────────┐
│ name │ used │ gpt │ disk-type │ size │ model │ wearout │ status │
╞══════╪════════╪═════╪═══════════╪═════════════╪═══════════════╪═════════╪════════╡
│ sda │ lvm │ 1 │ hdd │ 34359738368 │ QEMU_HARDDISK │ - │ passed │
├──────┼────────┼─────┼───────────┼─────────────┼───────────────┼─────────┼────────┤
│ sdb │ unused │ 1 │ hdd │ 68719476736 │ QEMU_HARDDISK │ - │ passed │
├──────┼────────┼─────┼───────────┼─────────────┼───────────────┼─────────┼────────┤
│ sdc │ unused │ 1 │ hdd │ 68719476736 │ QEMU_HARDDISK │ - │ passed │
└──────┴────────┴─────┴───────────┴─────────────┴───────────────┴─────────┴────────┘
To initialize a disk with a new GPT, use the ``initialize`` subcommand:
.. code-block:: console
# proxmox-backup-manager disk initialize sdX
.. image:: images/screenshots/pbs-gui-disks-dir-create.png
:align: right
:alt: Create a directory
You can create an ``ext4`` or ``xfs`` filesystem on a disk using ``fs
create``, or by navigating to **Administration -> Disks -> Directory** in the
web interface and creating one from there. The following command creates an
``ext4`` filesystem and passes the ``--add-datastore`` parameter, in order to
automatically create a datastore on the disk (in this case ``sdd``). This will
create a datastore at the location ``/mnt/datastore/store1``:
.. code-block:: console
# proxmox-backup-manager disk fs create store1 --disk sdd --filesystem ext4 --add-datastore true
.. image:: images/screenshots/pbs-gui-disks-zfs-create.png
:align: right
:alt: Create ZFS
You can also create a ``zpool`` with various raid levels from **Administration
-> Disks -> Zpool** in the web interface, or by using ``zpool create``. The command
below creates a mirrored ``zpool`` using two disks (``sdb`` & ``sdc``) and
mounts it on the root directory (default):
.. code-block:: console
# proxmox-backup-manager disk zpool create zpool1 --devices sdb,sdc --raidlevel mirror
.. note:: You can also pass the ``--add-datastore`` parameter here, to automatically
create a datastore from the disk.
You can use ``disk fs list`` and ``disk zpool list`` to keep track of your
filesystems and zpools respectively.
Proxmox Backup Server uses the package smartmontools. This is a set of tools
used to monitor and control the S.M.A.R.T. system for local hard disks. If a
disk supports S.M.A.R.T. capability, and you have this enabled, you can
display S.M.A.R.T. attributes from the web interface or by using the command:
.. code-block:: console
# proxmox-backup-manager disk smart-attributes sdX
.. note:: This functionality may also be accessed directly through the use of
the ``smartctl`` command, which comes as part of the smartmontools package
(see ``man smartctl`` for more details).
Datastore Configuration Datastore Configuration
~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~
.. image:: images/screenshots/pbs-gui-datastore.png
:align: right
:alt: Datastore Overview
You can configure multiple datastores. At least one datastore needs to be You can configure multiple datastores. At least one datastore needs to be
configured. The datastore is identified by a simple `name` and points to a configured. The datastore is identified by a simple *name* and points to a
directory on the filesystem. Each datastore also has associated retention directory on the filesystem. Each datastore also has associated retention
settings of how many backup snapshots for each interval of ``hourly``, settings of how many backup snapshots for each interval of ``hourly``,
``daily``, ``weekly``, ``monthly``, ``yearly`` as well as a time-independent ``daily``, ``weekly``, ``monthly``, ``yearly`` as well as a time-independent
number of backups to keep in that store. :ref:`Pruning <pruning>` and number of backups to keep in that store. :ref:`Pruning <pruning>` and
:ref:`garbage collection <garbage-collection>` can also be configured to run :ref:`garbage collection <garbage-collection>` can also be configured to run
periodically based on a configured :term:`schedule` per datastore. periodically based on a configured schedule (see :ref:`calendar-events`) per datastore.
The following command creates a new datastore called ``store1`` on :file:`/backup/disk1/store1` Creating a Datastore
^^^^^^^^^^^^^^^^^^^^
.. image:: images/screenshots/pbs-gui-datastore-create-general.png
:align: right
:alt: Create a datastore
You can create a new datastore from the web GUI, by navigating to **Datastore** in
the menu tree and clicking **Create**. Here:
* *Name* refers to the name of the datastore
* *Backing Path* is the path to the directory upon which you want to create the
datastore
* *GC Schedule* refers to the time and intervals at which garbage collection
runs
* *Prune Schedule* refers to the frequency at which pruning takes place
* *Prune Options* set the amount of backups which you would like to keep (see :ref:`Pruning <pruning>`).
Alternatively you can create a new datastore from the command line. The
following command creates a new datastore called ``store1`` on :file:`/backup/disk1/store1`
.. code-block:: console .. code-block:: console
# proxmox-backup-manager datastore create store1 /backup/disk1/store1 # proxmox-backup-manager datastore create store1 /backup/disk1/store1
To list existing datastores run: Managing Datastores
^^^^^^^^^^^^^^^^^^^
To list existing datastores from the command line run:
.. code-block:: console .. code-block:: console
@ -176,13 +283,15 @@ To list existing datastores run:
│ store1 │ /backup/disk1/store1 │ This is my default storage. │ │ store1 │ /backup/disk1/store1 │ This is my default storage. │
└────────┴──────────────────────┴─────────────────────────────┘ └────────┴──────────────────────┴─────────────────────────────┘
You can change settings of a datastore, for example to set a prune and garbage You can change the garbage collection and prune settings of a datastore, by
collection schedule or retention settings using ``update`` subcommand and view editing the datastore from the GUI or by using the ``update`` subcommand. For
a datastore with the ``show`` subcommand: example, the below command changes the garbage collection schedule using the
``update`` subcommand and prints the properties of the datastore with the
``show`` subcommand:
.. code-block:: console .. code-block:: console
# proxmox-backup-manager datastore update store1 --keep-last 7 --prune-schedule daily --gc-schedule 'Tue 04:27' # proxmox-backup-manager datastore update store1 --gc-schedule 'Tue 04:27'
# proxmox-backup-manager datastore show store1 # proxmox-backup-manager datastore show store1
┌────────────────┬─────────────────────────────┐ ┌────────────────┬─────────────────────────────┐
│ Name │ Value │ │ Name │ Value │
@ -255,11 +364,15 @@ directories will store the chunked data after a backup operation has been execut
276489 drwxr-xr-x 3 backup backup 4.0K Jul 8 12:35 .. 276489 drwxr-xr-x 3 backup backup 4.0K Jul 8 12:35 ..
276490 drwxr-x--- 1 backup backup 1.1M Jul 8 12:35 . 276490 drwxr-x--- 1 backup backup 1.1M Jul 8 12:35 .
.. _user_mgmt:
User Management User Management
~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~
.. image:: images/screenshots/pbs-gui-user-management.png
:align: right
:alt: User management
Proxmox Backup Server supports several authentication realms, and you need to Proxmox Backup Server supports several authentication realms, and you need to
choose the realm when you add a new user. Possible realms are: choose the realm when you add a new user. Possible realms are:
@ -271,7 +384,8 @@ choose the realm when you add a new user. Possible realms are:
``/etc/proxmox-backup/shadow.json``. ``/etc/proxmox-backup/shadow.json``.
After installation, there is a single user ``root@pam``, which After installation, there is a single user ``root@pam``, which
corresponds to the Unix superuser. You can use the corresponds to the Unix superuser. User configuration information is stored in the file
``/etc/proxmox-backup/user.cfg``. You can use the
``proxmox-backup-manager`` command line tool to list or manipulate ``proxmox-backup-manager`` command line tool to list or manipulate
users: users:
@ -284,19 +398,21 @@ users:
│ root@pam │ 1 │ │ │ │ │ Superuser │ │ root@pam │ 1 │ │ │ │ │ Superuser │
└─────────────┴────────┴────────┴───────────┴──────────┴────────────────┴────────────────────┘ └─────────────┴────────┴────────┴───────────┴──────────┴────────────────┴────────────────────┘
.. image:: images/screenshots/pbs-gui-user-management-add-user.png
:align: right
:alt: Add a new user
The superuser has full administration rights on everything, so you The superuser has full administration rights on everything, so you
normally want to add other users with less privileges: normally want to add other users with less privileges. You can create a new
user with the ``user create`` subcommand or through the web interface, under
**Configuration -> User Management**. The ``create`` subcommand lets you specify
many options like ``--email`` or ``--password``. You can update or change any
user properties using the ``update`` subcommand later (**Edit** in the GUI):
.. code-block:: console .. code-block:: console
# proxmox-backup-manager user create john@pbs --email john@example.com # proxmox-backup-manager user create john@pbs --email john@example.com
The create command lets you specify many options like ``--email`` or
``--password``. You can update or change any of them using the
update command later:
.. code-block:: console
# proxmox-backup-manager user update john@pbs --firstname John --lastname Smith # proxmox-backup-manager user update john@pbs --firstname John --lastname Smith
# proxmox-backup-manager user update john@pbs --comment "An example user." # proxmox-backup-manager user update john@pbs --comment "An example user."
@ -332,6 +448,8 @@ Or completely remove the user with:
# proxmox-backup-manager user remove john@pbs # proxmox-backup-manager user remove john@pbs
.. _user_acl:
Access Control Access Control
~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~
@ -344,10 +462,10 @@ following roles exist:
Disable Access - nothing is allowed. Disable Access - nothing is allowed.
**Admin** **Admin**
The Administrator can do anything. Can do anything.
**Audit** **Audit**
An Auditor can view things, but is not allowed to change settings. Can view things, but is not allowed to change settings.
**DatastoreAdmin** **DatastoreAdmin**
Can do anything on datastores. Can do anything on datastores.
@ -356,10 +474,10 @@ following roles exist:
Can view datastore settings and list content. But Can view datastore settings and list content. But
is not allowed to read the actual data. is not allowed to read the actual data.
**DataStoreReader** **DatastoreReader**
Can Inspect datastore content and can do restores. Can Inspect datastore content and can do restores.
**DataStoreBackup** **DatastoreBackup**
Can backup and restore owned backups. Can backup and restore owned backups.
**DatastorePowerUser** **DatastorePowerUser**
@ -374,24 +492,175 @@ following roles exist:
**RemoteSyncOperator** **RemoteSyncOperator**
Is allowed to read data from a remote. Is allowed to read data from a remote.
.. image:: images/screenshots/pbs-gui-permissions-add.png
:align: right
:alt: Add permissions for user
Access permission information is stored in ``/etc/proxmox-backup/acl.cfg``. The
file contains 5 fields, separated using a colon (':') as a delimiter. A typical
entry takes the form:
``acl:1:/datastore:john@pbs:DatastoreBackup``
The data represented in each field is as follows:
#. ``acl`` identifier
#. A ``1`` or ``0``, representing whether propagation is enabled or disabled,
respectively
#. The object on which the permission is set. This can be a specific object
(single datastore, remote, etc.) or a top level object, which, with
propagation enabled, also represents all children of the object.
#. The user for which the permission is set
#. The role being set
You can manage datastore permissions from **Configuration -> Permissions** in the
web interface. Likewise, you can use the ``acl`` subcommand to manage and
monitor user permissions from the command line. For example, the command below
will add the user ``john@pbs`` as a **DatastoreAdmin** for the datastore
``store1``, located at ``/backup/disk1/store1``:
.. code-block:: console
# proxmox-backup-manager acl update /datastore/store1 DatastoreAdmin --userid john@pbs
You can monitor the roles of each user using the following command:
.. code-block:: console
# proxmox-backup-manager acl list
┌──────────┬──────────────────┬───────────┬────────────────┐
│ ugid │ path │ propagate │ roleid │
╞══════════╪══════════════════╪═══════════╪════════════════╡
│ john@pbs │ /datastore/disk1 │ 1 │ DatastoreAdmin │
└──────────┴──────────────────┴───────────┴────────────────┘
A single user can be assigned multiple permission sets for different datastores.
.. Note::
Naming convention is important here. For datastores on the host,
you must use the convention ``/datastore/{storename}``. For example, to set
permissions for a datastore mounted at ``/mnt/backup/disk4/store2``, you would use
``/datastore/store2`` for the path. For remote stores, use the convention
``/remote/{remote}/{storename}``, where ``{remote}`` signifies the name of the
remote (see `Remote` below) and ``{storename}`` is the name of the datastore on
the remote.
Network Management
~~~~~~~~~~~~~~~~~~
Proxmox Backup Server provides both a web interface and a command line tool for
network configuration. You can find the configuration options in the web
interface under the **Network Interfaces** section of the **Configuration** menu
tree item. The command line tool is accessed via the ``network`` subcommand.
These interfaces allow you to carry out some basic network management tasks,
such as adding, configuring, and removing network interfaces.
.. note:: Any changes made to the network configuration are not
applied until you click on **Apply Configuration** or enter the ``network
reload`` command. This allows you to make many changes at once. It also allows
you to ensure that your changes are correct before applying them, as making a
mistake here can render the server inaccessible over the network.
To get a list of available interfaces, use the following command:
.. code-block:: console
# proxmox-backup-manager network list
┌───────┬────────┬───────────┬────────┬─────────────┬──────────────┬──────────────┐
│ name │ type │ autostart │ method │ address │ gateway │ ports/slaves │
╞═══════╪════════╪═══════════╪════════╪═════════════╪══════════════╪══════════════╡
│ bond0 │ bond │ 1 │ static │ x.x.x.x/x │ x.x.x.x │ ens18 ens19 │
├───────┼────────┼───────────┼────────┼─────────────┼──────────────┼──────────────┤
│ ens18 │ eth │ 1 │ manual │ │ │ │
├───────┼────────┼───────────┼────────┼─────────────┼──────────────┼──────────────┤
│ ens19 │ eth │ 1 │ manual │ │ │ │
└───────┴────────┴───────────┴────────┴─────────────┴──────────────┴──────────────┘
.. image:: images/screenshots/pbs-gui-network-create-bond.png
:align: right
:alt: Add a network interface
To add a new network interface, use the ``create`` subcommand with the relevant
parameters. For example, you may want to set up a bond, for the purpose of
network redundancy. The following command shows a template for creating the bond shown
in the list above:
.. code-block:: console
# proxmox-backup-manager network create bond0 --type bond --bond_mode active-backup --slaves ens18,ens19 --autostart true --cidr x.x.x.x/x --gateway x.x.x.x
You can make changes to the configuration of a network interface with the
``update`` subcommand:
.. code-block:: console
# proxmox-backup-manager network update bond0 --cidr y.y.y.y/y
You can also remove a network interface:
.. code-block:: console
# proxmox-backup-manager network remove bond0
The pending changes for the network configuration file will appear at the bottom of the
web interface. You can also view these changes, by using the command:
.. code-block:: console
# proxmox-backup-manager network changes
If you would like to cancel all changes at this point, you can either click on
the **Revert** button or use the following command:
.. code-block:: console
# proxmox-backup-manager network revert
If you are happy with the changes and would like to write them into the
configuration file, select **Apply Configuration**. The corresponding command
is:
.. code-block:: console
# proxmox-backup-manager network reload
.. note:: This command and corresponding GUI button rely on the ``ifreload``
command, from the package ``ifupdown2``. This package is included within the
Proxmox Backup Server installation; however, you may have to install it yourself
if you have installed Proxmox Backup Server on top of Debian or Proxmox VE.
You can also configure DNS settings, from the **DNS** section
of **Configuration** or by using the ``dns`` subcommand of
``proxmox-backup-manager``.
.. _backup_remote:
:term:`Remote` :term:`Remote`
~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~
A remote refers to a separate Proxmox Backup Server installation and a user on that A remote refers to a separate Proxmox Backup Server installation and a user on that
installation, from which you can `sync` datastores to a local datastore with a installation, from which you can `sync` datastores to a local datastore with a
`Sync Job`. `Sync Job`. You can configure remotes in the web interface, under **Configuration
-> Remotes**. Alternatively, you can use the ``remote`` subcommand. The
configuration information for remotes is stored in the file
``/etc/proxmox-backup/remote.cfg``.
.. image:: images/screenshots/pbs-gui-remote-add.png
:align: right
:alt: Add a remote
To add a remote, you need its hostname or ip, a userid and password on the To add a remote, you need its hostname or ip, a userid and password on the
remote, and its certificate fingerprint. To get the fingerprint, use the remote, and its certificate fingerprint. To get the fingerprint, use the
``proxmox-backup-manager cert info`` command on the remote. ``proxmox-backup-manager cert info`` command on the remote, or navigate to
**Dashboard** in the remote's web interface and select **Show Fingerprint**.
.. code-block:: console .. code-block:: console
# proxmox-backup-manager cert info |grep Fingerprint # proxmox-backup-manager cert info |grep Fingerprint
Fingerprint (sha256): 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe Fingerprint (sha256): 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe
Using the information specified above, add the remote with: Using the information specified above, you can add a remote from the **Remotes**
configuration panel, or by using the command:
.. code-block:: console .. code-block:: console
@ -411,14 +680,23 @@ Use the ``list``, ``show``, ``update``, ``remove`` subcommands of
└──────┴──────────────┴──────────┴───────────────────────────────────────────┴─────────┘ └──────┴──────────────┴──────────┴───────────────────────────────────────────┴─────────┘
# proxmox-backup-manager remote remove pbs2 # proxmox-backup-manager remote remove pbs2
.. _syncjobs:
Sync Jobs Sync Jobs
~~~~~~~~~ ~~~~~~~~~
Sync jobs are configured to pull the contents of a datastore on a `Remote` to a .. image:: images/screenshots/pbs-gui-syncjob-add.png
local datastore. You can either start the sync job manually on the GUI or :align: right
provide it with a :term:`schedule` to run regularly. The :alt: Add a Sync Job
``proxmox-backup-manager sync-job`` command is used to manage sync jobs:
Sync jobs are configured to pull the contents of a datastore on a **Remote** to
a local datastore. You can manage sync jobs under **Configuration -> Sync Jobs**
in the web interface, or using the ``proxmox-backup-manager sync-job`` command.
The configuration information for sync jobs is stored at
``/etc/proxmox-backup/sync.cfg``. To create a new sync job, click the add button
in the GUI, or use the ``create`` subcommand. After creating a sync job, you can
either start it manually on the GUI or provide it with a schedule (see
:ref:`calendar-events`) to run regularly.
.. code-block:: console .. code-block:: console
@ -433,6 +711,15 @@ provide it with a :term:`schedule` to run regularly. The
# proxmox-backup-manager sync-job remove pbs2-local # proxmox-backup-manager sync-job remove pbs2-local
Garbage Collection
~~~~~~~~~~~~~~~~~~
You can monitor and run :ref:`garbage collection <garbage-collection>` on the
Proxmox Backup Server using the ``garbage-collection`` subcommand of
``proxmox-backup-manager``. You can use the ``start`` subcommand to manually start garbage
collection on an entire datastore and the ``status`` subcommand to see
attributes relating to the :ref:`garbage collection <garbage-collection>`.
Backup Client usage Backup Client usage
------------------- -------------------
@ -445,15 +732,33 @@ Repository Locations
The client uses the following notation to specify a datastore repository The client uses the following notation to specify a datastore repository
on the backup server. on the backup server.
[[username@]server:]datastore [[username@]server[:port]:]datastore
The default value for ``username`` is ``root``. If no server is specified, The default value for ``username`` is ``root@pam``. If no server is specified,
the default is the local host (``localhost``). the default is the local host (``localhost``).
You can specify a port if your backup server is only reachable on a different
port (e.g. with NAT and port forwarding).
Note that if the server is an IPv6 address, you have to write it with
square brackets (e.g. [fe80::01]).
You can pass the repository with the ``--repository`` command You can pass the repository with the ``--repository`` command
line option, or by setting the ``PBS_REPOSITORY`` environment line option, or by setting the ``PBS_REPOSITORY`` environment
variable. variable.
Here are some examples of valid repositories and their real values
================================ ============ ================== ===========
Example User Host:Port Datastore
================================ ============ ================== ===========
mydatastore ``root@pam`` localhost:8007 mydatastore
myhostname:mydatastore ``root@pam`` myhostname:8007 mydatastore
user@pbs@myhostname:mydatastore ``user@pbs`` myhostname:8007 mydatastore
192.168.55.55:1234:mydatastore ``root@pam`` 192.168.55.55:1234 mydatastore
[ff80::51]:mydatastore ``root@pam`` [ff80::51]:8007 mydatastore
[ff80::51]:1234:mydatastore ``root@pam`` [ff80::51]:1234 mydatastore
================================ ============ ================== ===========
Environment Variables Environment Variables
~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~
@ -543,7 +848,9 @@ This will prompt you for a password and then uploads a file archive named
The ``--repository`` option can get quite long and is used by all The ``--repository`` option can get quite long and is used by all
commands. You can avoid having to enter this value by setting the commands. You can avoid having to enter this value by setting the
environment variable ``PBS_REPOSITORY``. environment variable ``PBS_REPOSITORY``. Note that if you would like this to remain set
over multiple sessions, you should instead add the below line to your
``.bashrc`` file.
.. code-block:: console .. code-block:: console
@ -578,7 +885,7 @@ Excluding files/folders from a backup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Sometimes it is desired to exclude certain files or folders from a backup archive. Sometimes it is desired to exclude certain files or folders from a backup archive.
To tell the Proxmox backup client when and how to ignore files and directories, To tell the Proxmox Backup client when and how to ignore files and directories,
place a text file called ``.pxarexclude`` in the filesystem hierarchy. place a text file called ``.pxarexclude`` in the filesystem hierarchy.
Whenever the backup client encounters such a file in a directory, it interprets Whenever the backup client encounters such a file in a directory, it interprets
each line as glob match patterns for files and directories that are to be excluded each line as glob match patterns for files and directories that are to be excluded
@ -775,7 +1082,9 @@ To set up a master key:
backed up. It can happen, for example, that you back up an entire system, using backed up. It can happen, for example, that you back up an entire system, using
a key on that system. If the system then becomes inaccessible for any reason a key on that system. If the system then becomes inaccessible for any reason
and needs to be restored, this will not be possible as the encryption key will be and needs to be restored, this will not be possible as the encryption key will be
lost along with the broken system. lost along with the broken system. In preparation for the worst case scenario,
you should consider keeping a paper copy of this key locked away in
a safe place.
Restoring Data Restoring Data
~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~
@ -818,7 +1127,7 @@ backup.
# proxmox-backup-client restore host/elsa/2019-12-03T09:35:01Z root.pxar /target/path/ # proxmox-backup-client restore host/elsa/2019-12-03T09:35:01Z root.pxar /target/path/
To get the contents of any archive, you can restore the ``ìndex.json`` file in the To get the contents of any archive, you can restore the ``index.json`` file in the
repository to the target path '-'. This will dump the contents to the standard output. repository to the target path '-'. This will dump the contents to the standard output.
.. code-block:: console .. code-block:: console
@ -900,8 +1209,8 @@ file archive as a read-only filesystem to a mountpoint on your host.
.. code-block:: console .. code-block:: console
# proxmox-backup-client mount host/backup-client/2020-01-29T11:29:22Z root.pxar /mnt # proxmox-backup-client mount host/backup-client/2020-01-29T11:29:22Z root.pxar /mnt/mountpoint
# ls /mnt # ls /mnt/mountpoint
bin dev home lib32 libx32 media opt root sbin sys usr bin dev home lib32 libx32 media opt root sbin sys usr
boot etc lib lib64 lost+found mnt proc run srv tmp var boot etc lib lib64 lost+found mnt proc run srv tmp var
@ -916,7 +1225,7 @@ To unmount the filesystem use the ``umount`` command on the mountpoint:
.. code-block:: console .. code-block:: console
# umount /mnt # umount /mnt/mountpoint
Login and Logout Login and Logout
~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~
@ -959,8 +1268,8 @@ command:
snapshot. They will be inaccessible and unrecoverable. snapshot. They will be inaccessible and unrecoverable.
The manual removal is sometimes required, but normally the prune Although manual removal is sometimes required, the ``prune``
command is used to systematically delete older backups. Prune lets command is normally used to systematically delete older backups. Prune lets
you specify which backup snapshots you want to keep. The you specify which backup snapshots you want to keep. The
following retention options are available: following retention options are available:
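For illustration, a prune invocation could look like this (a sketch assuming the ``--keep-last`` and ``--keep-weekly`` retention options and the ``--dry-run`` flag, which only shows what would be removed):

.. code-block:: console

   # proxmox-backup-client prune host/elsa --keep-last 3 --keep-weekly 2 --dry-run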
@ -1035,7 +1344,7 @@ Garbage Collection
~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~
The ``prune`` command removes only the backup index files, not the data The ``prune`` command removes only the backup index files, not the data
from the data store. This task is left to the garbage collection from the datastore. This task is left to the garbage collection
command. It is recommended to carry out garbage collection on a regular basis. command. It is recommended to carry out garbage collection on a regular basis.
The garbage collection works in two phases. In the first phase, all The garbage collection works in two phases. In the first phase, all
@ -1080,6 +1389,42 @@ unused data blocks are removed.
.. todo:: howto run garbage-collection at regular intervals (cron) .. todo:: howto run garbage-collection at regular intervals (cron)
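A collection run can also be started manually from the client, for example:

.. code-block:: console

   # proxmox-backup-client garbage-collect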
Benchmarking
~~~~~~~~~~~~
The backup client also comes with a benchmarking tool. This tool measures
various metrics relating to compression and encryption speeds. You can run a
benchmark using the ``benchmark`` subcommand of ``proxmox-backup-client``:
.. code-block:: console
# proxmox-backup-client benchmark
Uploaded 656 chunks in 5 seconds.
Time per request: 7659 microseconds.
TLS speed: 547.60 MB/s
SHA256 speed: 585.76 MB/s
Compression speed: 1923.96 MB/s
Decompress speed: 7885.24 MB/s
AES256/GCM speed: 3974.03 MB/s
┌───────────────────────────────────┬─────────────────────┐
│ Name │ Value │
╞═══════════════════════════════════╪═════════════════════╡
│ TLS (maximal backup upload speed) │ 547.60 MB/s (93%) │
├───────────────────────────────────┼─────────────────────┤
│ SHA256 checksum computation speed │ 585.76 MB/s (28%) │
├───────────────────────────────────┼─────────────────────┤
│ ZStd level 1 compression speed │ 1923.96 MB/s (89%) │
├───────────────────────────────────┼─────────────────────┤
│ ZStd level 1 decompression speed │ 7885.24 MB/s (98%) │
├───────────────────────────────────┼─────────────────────┤
│ AES256 GCM encryption speed │ 3974.03 MB/s (104%) │
└───────────────────────────────────┴─────────────────────┘
.. note:: The percentages given in the output table correspond to a
comparison against a Ryzen 7 2700X. The TLS test connects to the
local host, so there is no network involved.
You can also pass the ``--output-format`` parameter to output stats in ``json``,
rather than the default table format.
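For example, to get the same statistics as machine-readable output:

.. code-block:: console

   # proxmox-backup-client benchmark --output-format json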
.. _pve-integration: .. _pve-integration:
@ -1096,13 +1441,17 @@ as ``user1@pbs``.
# pvesm add pbs store2 --server localhost --datastore store2 # pvesm add pbs store2 --server localhost --datastore store2
# pvesm set store2 --username user1@pbs --password <secret> # pvesm set store2 --username user1@pbs --password <secret>
.. note:: If you would rather not pass your password as plain text, you can pass
the ``--password`` parameter without any arguments. This will cause the
program to prompt you for a password when you enter the command.
If your backup server uses a self signed certificate, you need to add If your backup server uses a self signed certificate, you need to add
the certificate fingerprint to the configuration. You can get the the certificate fingerprint to the configuration. You can get the
fingerprint by running the following command on the backup server: fingerprint by running the following command on the backup server:
.. code-block:: console .. code-block:: console
# proxmox-backup-manager cert info |grep Fingerprint # proxmox-backup-manager cert info | grep Fingerprint
Fingerprint (sha256): 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe Fingerprint (sha256): 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe
Please add that fingerprint to your configuration to establish a trust Please add that fingerprint to your configuration to establish a trust
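A sketch of how this could look on the Proxmox VE side (assuming ``pvesm set`` accepts a ``--fingerprint`` option for PBS storage entries, and reusing the shortened fingerprint from above):

.. code-block:: console

   # pvesm set store2 --fingerprint 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe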
@ -1120,6 +1469,10 @@ After that you should be able to see storage status with:
Name Type Status Total Used Available % Name Type Status Total Used Available %
store2 pbs active 3905109820 1336687816 2568422004 34.23% store2 pbs active 3905109820 1336687816 2568422004 34.23%
Having added the PBS datastore to `Proxmox VE`_, you can back up VMs and
containers in the same way you would for any other storage device within the
environment (see `PVE Admin Guide: Backup and Restore
<https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_vzdump>`_).
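For example, a one-off backup of a guest to the new storage could be triggered from the Proxmox VE node like this (a sketch; ``101`` is a hypothetical guest ID):

.. code-block:: console

   # vzdump 101 --storage store2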
.. include:: command-line-tools.rst .. include:: command-line-tools.rst

docs/calendarevents.rst Normal file
View File

@ -0,0 +1,100 @@
.. _calendar-events:
Calendar Events
===============
Introduction and Format
-----------------------
Certain tasks, for example pruning and garbage collection, need to be
performed on a regular basis. Proxmox Backup Server uses a format inspired
by the systemd Time and Date Specification (see `systemd.time manpage`_)
called `calendar events` for its schedules.
`Calendar events` are expressions to specify one or more points in time.
They are mostly compatible with systemd's calendar events.
The general format is as follows:
.. code-block:: console
:caption: Calendar event
[WEEKDAY] [[YEARS-]MONTHS-DAYS] [HOURS:MINUTES[:SECONDS]]
Note that at least one of the weekday, date, or time parts has to be present.
If the weekday or date part is omitted, all (week)days are included.
If the time part is omitted, the time 00:00:00 is implied.
(e.g. '2020-01-01' refers to '2020-01-01 00:00:00')
Weekdays are specified with their abbreviated English names:
`mon, tue, wed, thu, fri, sat, sun`.
Each field can contain multiple values in the following formats:
* comma-separated: e.g., 01,02,03
* as a range: e.g., 01..10
* as a repetition: e.g., 05/10 (means starting at 5, then every 10)
* and a combination of the above: e.g., 01,05..10,12/02
* or a `*` for every possible value: e.g., \*:00
There are some special values that have specific meaning:
================================= ==============================
Value Syntax
================================= ==============================
`minutely` `*-*-* *:*:00`
`hourly` `*-*-* *:00:00`
`daily` `*-*-* 00:00:00`
`weekly` `mon *-*-* 00:00:00`
`monthly` `*-*-01 00:00:00`
`yearly` or `annually`            `*-01-01 00:00:00`
`quarterly` `*-01,04,07,10-01 00:00:00`
`semiannually` or `semi-annually` `*-01,07-01 00:00:00`
================================= ==============================
Here is a table with some useful examples:
======================== ============================= ===================================
Example Alternative Explanation
======================== ============================= ===================================
`mon,tue,wed,thu,fri` `mon..fri` Every working day at 00:00
`sat,sun` `sat..sun` Only on weekends at 00:00
`mon,wed,fri` -- Monday, Wednesday, Friday at 00:00
`12:05` -- Every day at 12:05 PM
`*:00/5` `0/1:0/5` Every five minutes
`mon..wed *:30/10` `mon,tue,wed *:30/10` Monday, Tuesday, Wednesday 30, 40 and 50 minutes after every full hour
`mon..fri 8..17,22:0/15` -- Every working day every 15 minutes between 8 AM and 6 PM and between 10 PM and 11 PM
`fri 12..13:5/20` `fri 12,13:5/20` Friday at 12:05, 12:25, 12:45, 13:05, 13:25 and 13:45
`12,14,16,18,20,22:5` `12/2:5` Every day starting at 12:05 until 22:05, every 2 hours
`*:*` `0/1:0/1` Every minute (minimum interval)
`*-05` -- On the 5th day of every Month
`Sat *-1..7 15:00` -- First Saturday each Month at 15:00
`2015-10-21` -- 21st October 2015 at 00:00
======================== ============================= ===================================
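As a usage sketch, such an expression can be assigned to one of the schedule options mentioned above, for example a datastore's garbage collection schedule (this assumes the ``--gc-schedule`` option of ``proxmox-backup-manager datastore update``):

.. code-block:: console

   # proxmox-backup-manager datastore update store2 --gc-schedule 'sat 02:00'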
Differences to systemd
----------------------
Not all features of systemd calendar events are implemented:
* no unix timestamps (e.g. `@12345`): instead use date and time to specify
a specific point in time
* no timezone: all schedules use the timezone set on the server
* no sub-second resolution
* no reverse day syntax (e.g. 2020-03~01)
* no repetition of ranges (e.g. 1..10/2)
Notes on scheduling
-------------------
In `Proxmox Backup`_, scheduling for most tasks is done by the
`proxmox-backup-proxy` daemon. This daemon checks every minute whether
any job schedules are due. This means that even though `calendar events`
can contain seconds, schedules are effectively evaluated only once a minute.
Also, all schedules are evaluated against the timezone set
on the `Proxmox Backup`_ server.

View File

@ -18,9 +18,12 @@
# documentation root, use os.path.abspath to make it absolute, like shown here. # documentation root, use os.path.abspath to make it absolute, like shown here.
# #
import os import os
# import sys import sys
# sys.path.insert(0, os.path.abspath('.')) # sys.path.insert(0, os.path.abspath('.'))
# custom extensions
sys.path.append(os.path.abspath("./_ext"))
# -- Implement custom formatter for code-blocks --------------------------- # -- Implement custom formatter for code-blocks ---------------------------
# #
# * use smaller font # * use smaller font
@ -46,7 +49,7 @@ PygmentsBridge.latex_formatter = CustomLatexFormatter
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones. # ones.
extensions = ["sphinx.ext.graphviz", "sphinx.ext.todo"] extensions = ["sphinx.ext.graphviz", "sphinx.ext.todo", "proxmox-scanrefs"]
todo_link_only = True todo_link_only = True
@ -71,7 +74,7 @@ rst_epilog = epilog_file.read()
# General information about the project. # General information about the project.
project = 'Proxmox Backup' project = 'Proxmox Backup'
copyright = '2019-2020, Proxmox Support Team' copyright = '2019-2020, Proxmox Server Solutions GmbH'
author = 'Proxmox Support Team' author = 'Proxmox Support Team'
# The version info for the project you're documenting, acts as replacement for # The version info for the project you're documenting, acts as replacement for
@ -94,12 +97,10 @@ language = None
# There are two options for replacing |today|: either, you set today to some # There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used: # non-false value, then it is used:
#
# today = '' # today = ''
# #
# Else, today_fmt is used as the format for a strftime call. # Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%A, %d %B %Y'
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and # List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files. # directories to ignore when looking for source files.
@ -145,7 +146,7 @@ pygments_style = 'sphinx'
# keep_warnings = False # keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing. # If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True todo_include_todos = not tags.has('release')
# -- Options for HTML output ---------------------------------------------- # -- Options for HTML output ----------------------------------------------
@ -153,13 +154,51 @@ todo_include_todos = True
# The theme to use for HTML and HTML Help pages. See the documentation for # The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes. # a list of builtin themes.
# #
html_theme = 'sphinxdoc' html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme # Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the # further. For a list of options available for each theme, see the
# documentation. # documentation.
# #
# html_theme_options = {} html_theme_options = {
'fixed_sidebar': True,
'sidebar_includehidden': False,
'sidebar_collapse': False,
'globaltoc_collapse': False,
'show_relbar_bottom': True,
'show_powered_by': False,
'extra_nav_links': {
'Proxmox Homepage': 'https://proxmox.com',
'PDF': 'proxmox-backup.pdf',
},
'sidebar_width': '320px',
'page_width': '1320px',
# font styles
'head_font_family': 'Lato, sans-serif',
'caption_font_family': 'Lato, sans-serif',
'caption_font_size': '20px',
'font_family': 'Open Sans, sans-serif',
}
# Alabaster theme recommends setting this fixed.
# If you switch theme, this probably needs to be removed.
html_sidebars = {
'**': [
'sidebar-header.html',
'searchbox.html',
'navigation.html',
'relations.html',
],
'index': [
'sidebar-header.html',
'searchbox.html',
'index-sidebar.html',
]
}
# Add any paths that contain custom themes here, relative to this directory. # Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = [] # html_theme_path = []
@ -176,7 +215,7 @@ html_theme = 'sphinxdoc'
# The name of an image file (relative to this directory) to place at the top # The name of an image file (relative to this directory) to place at the top
# of the sidebar. # of the sidebar.
# #
html_logo = 'images/proxmox-logo.svg' #html_logo = 'images/proxmox-logo.svg' # replaced by html_theme_options.logo
# The name of an image file (relative to this directory) to use as a favicon of # The name of an image file (relative to this directory) to use as a favicon of
# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
@ -206,10 +245,6 @@ html_static_path = ['_static']
# #
# html_use_smartypants = True # html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to # Additional templates that should be rendered to pages, maps page names to
# template names. # template names.
# #
@ -229,7 +264,7 @@ html_static_path = ['_static']
# If true, links to the reST sources are added to the pages. # If true, links to the reST sources are added to the pages.
# #
# html_show_sourcelink = True html_show_sourcelink = False
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# #

docs/custom.css Normal file
View File

@ -0,0 +1,52 @@
div.sphinxsidebar {
height: calc(100% - 20px);
overflow: auto;
}
h1.logo-name {
font-size: 24px;
}
div.body img {
width: 250px;
}
pre {
padding: 5px 10px;
}
li a.current {
font-weight: bold;
border-bottom: 1px solid #000;
}
ul li.toctree-l1 {
margin-top: 0.5em;
}
ul li.toctree-l1 > a {
color: #000;
}
div.sphinxsidebar form.search {
margin-bottom: 5px;
}
div.sphinxsidebar h3 {
width: 100%;
}
div.sphinxsidebar h1.logo-name {
display: none;
}
@media screen and (max-width: 875px) {
div.sphinxsidebar p.logo {
display: initial;
}
div.sphinxsidebar h1.logo-name {
display: block;
}
div.sphinxsidebar span {
color: #AAA;
}
ul li.toctree-l1 > a {
color: #FFF;
}
}

View File

@ -13,7 +13,8 @@
.. _Proxmox: https://www.proxmox.com .. _Proxmox: https://www.proxmox.com
.. _Proxmox Community Forum: https://forum.proxmox.com .. _Proxmox Community Forum: https://forum.proxmox.com
.. _Proxmox Virtual Environment: https://www.proxmox.com/proxmox-ve .. _Proxmox Virtual Environment: https://www.proxmox.com/proxmox-ve
.. _Proxmox Backup: https://pbs.proxmox.com/wiki/index.php/Main_Page // FIXME .. FIXME
.. _Proxmox Backup: https://pbs.proxmox.com/wiki/index.php/Main_Page
.. _PBS Development List: https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel .. _PBS Development List: https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
.. _reStructuredText: https://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html .. _reStructuredText: https://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html
.. _Rust: https://www.rust-lang.org/ .. _Rust: https://www.rust-lang.org/
@ -37,3 +38,6 @@
.. _RFC3399: https://tools.ietf.org/html/rfc3339 .. _RFC3399: https://tools.ietf.org/html/rfc3339
.. _UTC: https://en.wikipedia.org/wiki/Coordinated_Universal_Time .. _UTC: https://en.wikipedia.org/wiki/Coordinated_Universal_Time
.. _ISO Week date: https://en.wikipedia.org/wiki/ISO_week_date .. _ISO Week date: https://en.wikipedia.org/wiki/ISO_week_date
.. _systemd.time manpage: https://manpages.debian.org/buster/systemd/systemd.time.7.en.html

docs/faq.rst Normal file
View File

@ -0,0 +1,71 @@
FAQ
===
What distribution is Proxmox Backup Server (PBS) based on?
----------------------------------------------------------
Proxmox Backup Server is based on `Debian GNU/Linux <https://www.debian.org/>`_.
Which platforms are supported as a backup source (client)?
----------------------------------------------------------
The client tool works on most modern Linux systems, meaning you are not limited
to backing up Debian-based systems.
Will Proxmox Backup Server run on a 32-bit processor?
-----------------------------------------------------
Proxmox Backup Server only supports 64-bit CPUs (AMD or Intel). There are no
future plans to support 32-bit processors.
How long will my Proxmox Backup Server version be supported?
------------------------------------------------------------
+-----------------------+--------------------+---------------+------------+--------------------+
|Proxmox Backup Version | Debian Version | First Release | Debian EOL | Proxmox Backup EOL |
+=======================+====================+===============+============+====================+
|Proxmox Backup 1.x | Debian 10 (Buster) | tba | tba | tba |
+-----------------------+--------------------+---------------+------------+--------------------+
Can I copy or synchronize my datastore to another location?
-----------------------------------------------------------
Proxmox Backup Server allows you to copy or synchronize datastores to other
locations, through the use of *Remotes* and *Sync Jobs*. *Remote* is the term
given to a separate server, which has a datastore that can be synced to a local store.
A *Sync Job* is the process used to pull the contents of a datastore from
a *Remote* to a local datastore.
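A rough command-line sketch of these two steps (the option names here are assumptions; see the administration guide for the authoritative syntax):

.. code-block:: console

   # proxmox-backup-manager remote create pbs2 --host pbs2.example.com --userid sync@pbs --password 'SECRET'
   # proxmox-backup-manager sync-job create pbs2-local --remote pbs2 --remote-store store1 --store local-store --schedule daily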
Can Proxmox Backup Server verify data integrity of a backup archive?
--------------------------------------------------------------------
Proxmox Backup Server uses a built-in SHA-256 checksum algorithm to ensure
data integrity. Within each backup, a manifest file (index.json) is created,
which contains a list of all the backup files, along with their sizes and
checksums. This manifest file is used to verify the integrity of each backup.
When backing up to remote servers, do I have to trust the remote server?
------------------------------------------------------------------------
Proxmox Backup Server supports client-side encryption, meaning your data is
encrypted before it reaches the server. Thus, in the event that an attacker
gains access to the server, they will not be able to read the data.
.. note:: Encryption is not enabled by default. To set up encryption, see the
`Encryption
<https://pbs.proxmox.com/docs/administration-guide.html#encryption>`_ section
of the Proxmox Backup Server Administration Guide.
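For instance, a client-side key can be created and then used for a backup roughly like this (a sketch; the key path and archive specification are placeholders):

.. code-block:: console

   # proxmox-backup-client key create ./my-backup.key
   # proxmox-backup-client backup etc.pxar:/etc --keyfile ./my-backup.key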
Is the backup incremental/deduplicated?
---------------------------------------
With Proxmox Backup Server, backups are sent incrementally and data is
deduplicated on the server.
This minimizes both the storage consumed and the network impact.

View File

@ -51,14 +51,3 @@ Glossary
A remote Proxmox Backup Server installation and credentials for a user on it. A remote Proxmox Backup Server installation and credentials for a user on it.
You can pull datastores from a remote to a local datastore in order to You can pull datastores from a remote to a local datastore in order to
have redundant backups. have redundant backups.
Schedule
Certain tasks, for example pruning and garbage collection, need to be
performed on a regular basis. Proxmox Backup Server uses a subset of the
`systemd Time and Date Specification
<https://www.freedesktop.org/software/systemd/man/systemd.time.html#>`_.
The subset currently supports time of day specifications and weekdays, in
addition to the shorthand expressions 'minutely', 'hourly', 'daily'.
There is no support for specifying timezones, the tasks are run in the
timezone configured on the server.

(11 binary image files added; content not shown. Sizes range from 12 KiB to 79 KiB.)

View File

@ -2,8 +2,8 @@
Welcome to the Proxmox Backup documentation! Welcome to the Proxmox Backup documentation!
============================================ ============================================
| Copyright (C) 2019-2020 Proxmox Server Solutions GmbH
Copyright (C) 2019-2020 Proxmox Server Solutions GmbH | Version |version| -- |today|
Permission is granted to copy, distribute and/or modify this document under the Permission is granted to copy, distribute and/or modify this document under the
terms of the GNU Free Documentation License, Version 1.3 or any later version terms of the GNU Free Documentation License, Version 1.3 or any later version
@ -24,6 +24,7 @@ in the section entitled "GNU Free Documentation License".
installation.rst installation.rst
administration-guide.rst administration-guide.rst
sysadmin.rst sysadmin.rst
faq.rst
.. raw:: latex .. raw:: latex
@ -36,6 +37,7 @@ in the section entitled "GNU Free Documentation License".
command-syntax.rst command-syntax.rst
file-formats.rst file-formats.rst
backup-protocol.rst backup-protocol.rst
calendarevents.rst
glossary.rst glossary.rst
GFDL.rst GFDL.rst
@ -43,10 +45,10 @@ in the section entitled "GNU Free Documentation License".
.. toctree:: .. toctree::
:maxdepth: 2 :maxdepth: 2
:hidden:
:caption: Developer Appendix :caption: Developer Appendix
todos.rst todos.rst
* :ref:`genindex` .. # * :ref:`genindex`

View File

@ -19,9 +19,9 @@ for various management tasks such as disk management.
The disk image (ISO file) provided by Proxmox includes a complete Debian system The disk image (ISO file) provided by Proxmox includes a complete Debian system
("buster" for version 1.x) as well as all necessary packages for the `Proxmox Backup`_ server. ("buster" for version 1.x) as well as all necessary packages for the `Proxmox Backup`_ server.
The installer will guide you through the setup process and allows The installer will guide you through the setup process and allow
you to partition the local disk(s), apply basic system configurations you to partition the local disk(s), apply basic system configurations
(e.g. timezone, language, network), and installs all required packages. (e.g. timezone, language, network), and install all required packages.
The provided ISO will get you started in just a few minutes, and is the The provided ISO will get you started in just a few minutes, and is the
recommended method for new and existing users. recommended method for new and existing users.
@ -36,11 +36,11 @@ It includes the following:
* The `Proxmox Backup`_ server installer, which partitions the local * The `Proxmox Backup`_ server installer, which partitions the local
disk(s) with ext4, ext3, xfs or ZFS, and installs the operating disk(s) with ext4, ext3, xfs or ZFS, and installs the operating
system. system
* Complete operating system (Debian Linux, 64-bit) * Complete operating system (Debian Linux, 64-bit)
* Our Linux kernel with ZFS support. * Our Linux kernel with ZFS support
* Complete tool-set to administer backups and all necessary resources * Complete tool-set to administer backups and all necessary resources
@ -54,7 +54,7 @@ Install `Proxmox Backup`_ server on Debian
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Proxmox ships as a set of Debian packages which can be installed on top of a Proxmox ships as a set of Debian packages which can be installed on top of a
standard Debian installation. After configuring the standard Debian installation. After configuring the
:ref:`sysadmin_package_repositories`, you need to run: :ref:`sysadmin_package_repositories`, you need to run:
.. code-block:: console .. code-block:: console
@ -76,12 +76,11 @@ does, please use the following:
This will install all required packages, the Proxmox kernel with ZFS_ This will install all required packages, the Proxmox kernel with ZFS_
support, and a set of common and useful packages. support, and a set of common and useful packages.
Installing `Proxmox Backup`_ on top of an existing Debian_ installation looks easy, but .. caution:: Installing `Proxmox Backup`_ on top of an existing Debian_
it presumes that the base system and local storage has been set up correctly. installation looks easy, but it assumes that the base system and local
storage have been set up correctly. In general this is not trivial, especially
In general this is not trivial, especially when LVM_ or ZFS_ is used. when LVM_ or ZFS_ is used. The network configuration is completely up to you
as well.
The network configuration is completely up to you as well.
.. note:: You can access the webinterface of the Proxmox Backup Server with .. note:: You can access the webinterface of the Proxmox Backup Server with
your web browser, using HTTPS on port 8007. For example at your web browser, using HTTPS on port 8007. For example at
@ -103,9 +102,9 @@ After configuring the
server to store backups. Should the hypervisor server fail, you can server to store backups. Should the hypervisor server fail, you can
still access the backups. still access the backups.
.. note:: You can access the webinterface of the Proxmox Backup Server with .. note::
your web browser, using HTTPS on port 8007. For example at You can access the webinterface of the Proxmox Backup Server with your web
``https://<ip-or-dns-name>:8007`` browser, using HTTPS on port 8007. For example at ``https://<ip-or-dns-name>:8007``
Client installation Client installation
------------------- -------------------

View File

@ -22,7 +22,7 @@ Architecture
------------ ------------
Proxmox Backup Server uses a `client-server model`_. The server stores the Proxmox Backup Server uses a `client-server model`_. The server stores the
backup data and provides an API to create backups and restore data. With the backup data and provides an API to create and manage datastores. With the
API, it's also possible to manage disks and other server-side resources. API, it's also possible to manage disks and other server-side resources.
The backup client uses this API to access the backed up data. With the command The backup client uses this API to access the backed up data. With the command
@ -143,6 +143,7 @@ Mailing Lists
Proxmox Backup Server is fully open-source and contributions are welcome! Here Proxmox Backup Server is fully open-source and contributions are welcome! Here
is the primary communication channel for developers: is the primary communication channel for developers:
:Mailing list for developers: `PBS Development List`_ :Mailing list for developers: `PBS Development List`_
Bug Tracker Bug Tracker

View File

@ -1,3 +1,6 @@
.. _chapter-zfs:
ZFS on Linux ZFS on Linux
------------ ------------

View File

@ -3,8 +3,8 @@
Debian Package Repositories Debian Package Repositories
--------------------------- ---------------------------
All Debian based systems use APT_ as package management tool. The list of All Debian based systems use APT_ as a package management tool. The lists of
repositories is defined in ``/etc/apt/sources.list`` and ``.list`` files found repositories are defined in ``/etc/apt/sources.list`` and the ``.list`` files found
in the ``/etc/apt/sources.list.d/`` directory. Updates can be installed directly in the ``/etc/apt/sources.list.d/`` directory. Updates can be installed directly
with the ``apt`` command line tool, or via the GUI. with the ``apt`` command line tool, or via the GUI.
@ -26,11 +26,10 @@ update``.
.. FIXME for 7.0: change security update suite to bullseye-security .. FIXME for 7.0: change security update suite to bullseye-security
In addition, you need a package repositories from Proxmox to get the backup In addition, you need a package repository from Proxmox to get Proxmox Backup updates.
server updates.
During the Proxmox Backup beta phase only one repository (pbstest) will be During the Proxmox Backup beta phase, only one repository (pbstest) will be
available. Once released, a Enterprise repository for production use and a available. Once released, an Enterprise repository for production use and a
no-subscription repository will be provided. no-subscription repository will be provided.
SecureApt SecureApt
@ -39,8 +38,8 @@ SecureApt
The `Release` files in the repositories are signed with GnuPG. APT is using The `Release` files in the repositories are signed with GnuPG. APT is using
these signatures to verify that all packages are from a trusted source. these signatures to verify that all packages are from a trusted source.
If you install Proxmox Backup Server from an official ISO image, the key for If you install Proxmox Backup Server from an official ISO image, the
verification is already installed. verification key is already installed.
If you install Proxmox Backup Server on top of Debian, download and install the If you install Proxmox Backup Server on top of Debian, download and install the
key with the following commands: key with the following commands:
@ -136,17 +135,17 @@ During the public beta, there is a repository called ``pbstest``. This one
contains the latest packages and is heavily used by developers to test new contains the latest packages and is heavily used by developers to test new
features. features.
.. .. warning:: the ``pbstest`` repository should (as the name implies) .. .. warning:: the ``pbstest`` repository should (as the name implies)
only be used to test new features or bug fixes. only be used to test new features or bug fixes.
You can configure this using ``/etc/apt/sources.list`` by adding the following You can access this repository by adding the following line to
line: ``/etc/apt/sources.list``:
.. code-block:: sources.list .. code-block:: sources.list
:caption: sources.list entry for ``pbstest`` :caption: sources.list entry for ``pbstest``
deb http://download.proxmox.com/debian/pbs buster pbstest deb http://download.proxmox.com/debian/pbs buster pbstest
If you installed Proxmox Backup Server from the official beta ISO you should If you installed Proxmox Backup Server from the official beta ISO, you should
have this repository already configured in have this repository already configured in
``/etc/apt/sources.list.d/pbstest-beta.list`` ``/etc/apt/sources.list.d/pbstest-beta.list``

View File

@ -9,7 +9,7 @@ which caters to a similar use-case.
The ``.pxar`` format is adapted to fulfill the specific needs of the Proxmox The ``.pxar`` format is adapted to fulfill the specific needs of the Proxmox
Backup Server, for example, efficient storage of hardlinks. Backup Server, for example, efficient storage of hardlinks.
The format is designed to reduce storage space needed on the server by achieving The format is designed to reduce storage space needed on the server by achieving
a high level of de-duplication. a high level of deduplication.
Creating an Archive Creating an Archive
^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^
@ -29,7 +29,7 @@ This will create a new archive called ``archive.pxar`` with the contents of the
By default, ``pxar`` will skip certain mountpoints and will not follow device By default, ``pxar`` will skip certain mountpoints and will not follow device
boundaries. This design decision is based on the primary use case of creating boundaries. This design decision is based on the primary use case of creating
archives for backups. It is sensible to not back up the contents of certain archives for backups. It makes sense to not back up the contents of certain
temporary or system specific files. temporary or system specific files.
To alter this behavior and follow device boundaries, use the To alter this behavior and follow device boundaries, use the
``--all-file-systems`` flag. ``--all-file-systems`` flag.
@ -66,7 +66,7 @@ All the glob patterns are relative to the ``source`` directory.
previous ones. Permutations of the same patterns lead to different results. previous ones. Permutations of the same patterns lead to different results.
``pxar`` will store the list of glob match patterns passed as parameters via the ``pxar`` will store the list of glob match patterns passed as parameters via the
command line in a file called ``.pxarexclude-cli`` and stores it at the root of command line, in a file called ``.pxarexclude-cli`` at the root of
the archive. the archive.
If a file with this name is already present in the source folder during archive If a file with this name is already present in the source folder during archive
creation, this file is not included in the archive and the file containing the creation, this file is not included in the archive and the file containing the
@ -85,23 +85,23 @@ The behavior is the same as described in :ref:`creating-backups`.
Extracting an Archive Extracting an Archive
^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^
An existing archive ``archive.pxar`` is extracted to a ``target`` directory An existing archive, ``archive.pxar``, is extracted to a ``target`` directory
with the following command: with the following command:
.. code-block:: console .. code-block:: console
# pxar extract archive.pxar --target target # pxar extract archive.pxar /path/to/target
If no target is provided, the content of the archive is extracted to the current If no target is provided, the content of the archive is extracted to the current
working directory. working directory.
In order to restore only parts of an archive, single files and/or folders, In order to restore only parts of an archive, single files, and/or folders,
it is possible to pass the corresponding glob match patterns as additional it is possible to pass the corresponding glob match patterns as additional
parameters or use the patterns stored in a file: parameters or to use the patterns stored in a file:
.. code-block:: console .. code-block:: console
# pxar extract etc.pxar '**/*.conf' --target /restore/target/etc # pxar extract etc.pxar /restore/target/etc --pattern '**/*.conf'
The above example restores all ``.conf`` files encountered in any of the The above example restores all ``.conf`` files encountered in any of the
sub-folders in the archive ``etc.pxar`` to the target ``/restore/target/etc``. sub-folders in the archive ``etc.pxar`` to the target ``/restore/target/etc``.

View File

@ -2,8 +2,6 @@ use std::io::Write;
use anyhow::{Error}; use anyhow::{Error};
use chrono::{DateTime, Utc};
use proxmox_backup::api2::types::Userid; use proxmox_backup::api2::types::Userid;
use proxmox_backup::client::{HttpClient, HttpClientOptions, BackupReader}; use proxmox_backup::client::{HttpClient, HttpClientOptions, BackupReader};
@ -34,9 +32,9 @@ async fn run() -> Result<(), Error> {
.interactive(true) .interactive(true)
.ticket_cache(true); .ticket_cache(true);
let client = HttpClient::new(host, username, options)?; let client = HttpClient::new(host, 8007, username, options)?;
let backup_time = "2019-06-28T10:49:48Z".parse::<DateTime<Utc>>()?; let backup_time = proxmox::tools::time::parse_rfc3339("2019-06-28T10:49:48Z")?;
let client = BackupReader::start(client, None, "store2", "host", "elsa", backup_time, true) let client = BackupReader::start(client, None, "store2", "host", "elsa", backup_time, true)
.await?; .await?;

View File

@ -14,11 +14,11 @@ async fn upload_speed() -> Result<f64, Error> {
.interactive(true) .interactive(true)
.ticket_cache(true); .ticket_cache(true);
let client = HttpClient::new(host, username, options)?; let client = HttpClient::new(host, 8007, username, options)?;
let backup_time = chrono::Utc::now(); let backup_time = proxmox::tools::time::epoch_i64();
let client = BackupWriter::start(client, None, datastore, "host", "speedtest", backup_time, false).await?; let client = BackupWriter::start(client, None, datastore, "host", "speedtest", backup_time, false, true).await?;
println!("start upload speed test"); println!("start upload speed test");
let res = client.upload_speedtest(true).await?; let res = client.upload_speedtest(true).await?;

View File

@ -7,8 +7,7 @@ use proxmox::api::router::{Router, SubdirMap};
use proxmox::{sortable, identity}; use proxmox::{sortable, identity};
use proxmox::{http_err, list_subdirs_api_method}; use proxmox::{http_err, list_subdirs_api_method};
use crate::tools; use crate::tools::ticket::{self, Empty, Ticket};
use crate::tools::ticket::*;
use crate::auth_helpers::*; use crate::auth_helpers::*;
use crate::api2::types::*; use crate::api2::types::*;
@ -35,27 +34,31 @@ fn authenticate_user(
bail!("user account disabled or expired."); bail!("user account disabled or expired.");
} }
let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
if password.starts_with("PBS:") { if password.starts_with("PBS:") {
if let Ok((_age, Some(ticket_username))) = tools::ticket::verify_rsa_ticket(public_auth_key(), "PBS", password, None, -300, ticket_lifetime) { if let Ok(ticket_userid) = Ticket::<Userid>::parse(password)
if *userid == ticket_username { .and_then(|ticket| ticket.verify(public_auth_key(), "PBS", None))
{
if *userid == ticket_userid {
return Ok(true); return Ok(true);
} else {
bail!("ticket login failed - wrong userid");
} }
bail!("ticket login failed - wrong userid");
} }
} else if password.starts_with("PBSTERM:") { } else if password.starts_with("PBSTERM:") {
if path.is_none() || privs.is_none() || port.is_none() { if path.is_none() || privs.is_none() || port.is_none() {
bail!("cannot check termnal ticket without path, priv and port"); bail!("cannot check termnal ticket without path, priv and port");
} }
let path = path.unwrap(); let path = path.ok_or_else(|| format_err!("missing path for termproxy ticket"))?;
let privilege_name = privs.unwrap(); let privilege_name = privs
let port = port.unwrap(); .ok_or_else(|| format_err!("missing privilege name for termproxy ticket"))?;
let port = port.ok_or_else(|| format_err!("missing port for termproxy ticket"))?;
if let Ok((_age, _data)) = if let Ok(Empty) = Ticket::parse(password)
tools::ticket::verify_term_ticket(public_auth_key(), &userid, &path, port, password) .and_then(|ticket| ticket.verify(
public_auth_key(),
ticket::TERM_PREFIX,
Some(&ticket::term_aad(userid, &path, port)),
))
{ {
for (name, privilege) in PRIVILEGES { for (name, privilege) in PRIVILEGES {
if *name == privilege_name { if *name == privilege_name {
@ -138,7 +141,7 @@ fn create_ticket(
) -> Result<Value, Error> { ) -> Result<Value, Error> {
match authenticate_user(&username, &password, path, privs, port) { match authenticate_user(&username, &password, path, privs, port) {
Ok(true) => { Ok(true) => {
let ticket = assemble_rsa_ticket(private_auth_key(), "PBS", Some(&username), None)?; let ticket = Ticket::new("PBS", &username)?.sign(private_auth_key(), None)?;
let token = assemble_csrf_prevention_token(csrf_secret(), &username); let token = assemble_csrf_prevention_token(csrf_secret(), &username);

View File

@ -175,7 +175,7 @@ pub fn update_acl(
_rpcenv: &mut dyn RpcEnvironment, _rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> { ) -> Result<(), Error> {
let _lock = open_file_locked(acl::ACL_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(acl::ACL_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut tree, expected_digest) = acl::config()?; let (mut tree, expected_digest) = acl::config()?;

View File

@ -14,7 +14,7 @@ use crate::config::acl::{Role, ROLE_NAMES, PRIVILEGES};
type: Array, type: Array,
items: { items: {
type: Object, type: Object,
description: "User name with description.", description: "Role with description and privileges.",
properties: { properties: {
roleid: { roleid: {
type: Role, type: Role,

View File

@ -8,6 +8,7 @@ use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*; use crate::api2::types::*;
use crate::config::user; use crate::config::user;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_PERMISSIONS_MODIFY}; use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_PERMISSIONS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo;
pub const PBS_PASSWORD_SCHEMA: Schema = StringSchema::new("User Password.") pub const PBS_PASSWORD_SCHEMA: Schema = StringSchema::new("User Password.")
.format(&PASSWORD_FORMAT) .format(&PASSWORD_FORMAT)
@ -25,10 +26,11 @@ pub const PBS_PASSWORD_SCHEMA: Schema = StringSchema::new("User Password.")
items: { type: user::User }, items: { type: user::User },
}, },
access: { access: {
permission: &Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false), permission: &Permission::Anybody,
description: "Returns all or just the logged-in user, depending on privileges.",
}, },
)] )]
/// List all users /// List users
pub fn list_users( pub fn list_users(
_param: Value, _param: Value,
_info: &ApiMethod, _info: &ApiMethod,
@ -37,11 +39,21 @@ pub fn list_users(
let (config, digest) = user::config()?; let (config, digest) = user::config()?;
let list = config.convert_to_typed_array("user")?; let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let top_level_privs = user_info.lookup_privs(&userid, &["access", "users"]);
let top_level_allowed = (top_level_privs & PRIV_SYS_AUDIT) != 0;
let filter_by_privs = |user: &user::User| {
top_level_allowed || user.userid == userid
};
let list:Vec<user::User> = config.convert_to_typed_array("user")?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into(); rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list) Ok(list.into_iter().filter(filter_by_privs).collect())
} }
#[api( #[api(
@ -88,7 +100,7 @@ pub fn list_users(
/// Create new user. /// Create new user.
pub fn create_user(password: Option<String>, param: Value) -> Result<(), Error> { pub fn create_user(password: Option<String>, param: Value) -> Result<(), Error> {
let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let user: user::User = serde_json::from_value(param)?; let user: user::User = serde_json::from_value(param)?;
@ -124,7 +136,10 @@ pub fn create_user(password: Option<String>, param: Value) -> Result<(), Error>
type: user::User, type: user::User,
}, },
access: { access: {
permission: &Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false), permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false),
&Permission::UserParam("userid"),
]),
}, },
)] )]
/// Read user configuration data. /// Read user configuration data.
@ -177,7 +192,10 @@ pub fn read_user(userid: Userid, mut rpcenv: &mut dyn RpcEnvironment) -> Result<
}, },
}, },
access: { access: {
permission: &Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false), permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
&Permission::UserParam("userid"),
]),
}, },
)] )]
/// Update user configuration. /// Update user configuration.
@ -193,7 +211,7 @@ pub fn update_user(
digest: Option<String>, digest: Option<String>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = user::config()?; let (mut config, expected_digest) = user::config()?;
@ -258,13 +276,16 @@ pub fn update_user(
}, },
}, },
access: { access: {
permission: &Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false), permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
&Permission::UserParam("userid"),
]),
}, },
)] )]
/// Remove a user from the configuration file. /// Remove a user from the configuration file.
pub fn delete_user(userid: Userid, digest: Option<String>) -> Result<(), Error> { pub fn delete_user(userid: Userid, digest: Option<String>) -> Result<(), Error> {
let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = user::config()?; let (mut config, expected_digest) = user::config()?;

View File

@ -1,6 +1,7 @@
use std::collections::{HashSet, HashMap}; use std::collections::{HashSet, HashMap};
use std::ffi::OsStr; use std::ffi::OsStr;
use std::os::unix::ffi::OsStrExt; use std::os::unix::ffi::OsStrExt;
use std::sync::{Arc, Mutex};
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use futures::*; use futures::*;
@ -171,7 +172,7 @@ fn list_groups(
let result_item = GroupListItem { let result_item = GroupListItem {
backup_type: group.backup_type().to_string(), backup_type: group.backup_type().to_string(),
backup_id: group.backup_id().to_string(), backup_id: group.backup_id().to_string(),
last_backup: info.backup_dir.backup_time().timestamp(), last_backup: info.backup_dir.backup_time(),
backup_count: list.len() as u64, backup_count: list.len() as u64,
files: info.files.clone(), files: info.files.clone(),
owner: Some(owner), owner: Some(owner),
@ -229,7 +230,7 @@ pub fn list_snapshot_files(
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
let snapshot = BackupDir::new(backup_type, backup_id, backup_time); let snapshot = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & (PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_READ)) != 0; let allowed = (user_privs & (PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_READ)) != 0;
if !allowed { check_backup_owner(&datastore, snapshot.group(), &userid)?; } if !allowed { check_backup_owner(&datastore, snapshot.group(), &userid)?; }
@ -279,7 +280,7 @@ fn delete_snapshot(
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let snapshot = BackupDir::new(backup_type, backup_id, backup_time); let snapshot = BackupDir::new(backup_type, backup_id, backup_time)?;
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
@ -361,7 +362,7 @@ pub fn list_snapshots (
let mut size = None; let mut size = None;
let (comment, files) = match get_all_snapshot_files(&datastore, &info) { let (comment, verification, files) = match get_all_snapshot_files(&datastore, &info) {
Ok((manifest, files)) => { Ok((manifest, files)) => {
size = Some(files.iter().map(|x| x.size.unwrap_or(0)).sum()); size = Some(files.iter().map(|x| x.size.unwrap_or(0)).sum());
// extract the first line from notes // extract the first line from notes
@ -370,11 +371,21 @@ pub fn list_snapshots (
.and_then(|notes| notes.lines().next()) .and_then(|notes| notes.lines().next())
.map(String::from); .map(String::from);
(comment, files) let verify = manifest.unprotected["verify_state"].clone();
let verify: Option<SnapshotVerifyState> = match serde_json::from_value(verify) {
Ok(verify) => verify,
Err(err) => {
eprintln!("error parsing verification state : '{}'", err);
None
}
};
(comment, verify, files)
}, },
Err(err) => { Err(err) => {
eprintln!("error during snapshot file listing: '{}'", err); eprintln!("error during snapshot file listing: '{}'", err);
( (
None,
None, None,
info info
.files .files
@ -392,8 +403,9 @@ pub fn list_snapshots (
let result_item = SnapshotListItem { let result_item = SnapshotListItem {
backup_type: group.backup_type().to_string(), backup_type: group.backup_type().to_string(),
backup_id: group.backup_id().to_string(), backup_id: group.backup_id().to_string(),
backup_time: info.backup_dir.backup_time().timestamp(), backup_time: info.backup_dir.backup_time(),
comment, comment,
verification,
files, files,
size, size,
owner: Some(owner), owner: Some(owner),
@ -478,7 +490,7 @@ pub fn verify(
match (backup_type, backup_id, backup_time) { match (backup_type, backup_id, backup_time) {
(Some(backup_type), Some(backup_id), Some(backup_time)) => { (Some(backup_type), Some(backup_id), Some(backup_time)) => {
worker_id = format!("{}_{}_{}_{:08X}", store, backup_type, backup_id, backup_time); worker_id = format!("{}_{}_{}_{:08X}", store, backup_type, backup_id, backup_time);
let dir = BackupDir::new(backup_type, backup_id, backup_time); let dir = BackupDir::new(backup_type, backup_id, backup_time)?;
backup_dir = Some(dir); backup_dir = Some(dir);
} }
(Some(backup_type), Some(backup_id), None) => { (Some(backup_type), Some(backup_id), None) => {
@ -489,7 +501,7 @@ pub fn verify(
(None, None, None) => { (None, None, None) => {
worker_id = store.clone(); worker_id = store.clone();
} }
_ => bail!("parameters do not spefify a backup group or snapshot"), _ => bail!("parameters do not specify a backup group or snapshot"),
} }
let userid: Userid = rpcenv.get_user().unwrap().parse()?; let userid: Userid = rpcenv.get_user().unwrap().parse()?;
@ -501,25 +513,34 @@ pub fn verify(
userid, userid,
to_stdout, to_stdout,
move |worker| { move |worker| {
let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16)));
let corrupt_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64)));
let failed_dirs = if let Some(backup_dir) = backup_dir { let failed_dirs = if let Some(backup_dir) = backup_dir {
let mut verified_chunks = HashSet::with_capacity(1024*16);
let mut corrupt_chunks = HashSet::with_capacity(64);
let mut res = Vec::new(); let mut res = Vec::new();
if !verify_backup_dir(&datastore, &backup_dir, &mut verified_chunks, &mut corrupt_chunks, &worker)? { if !verify_backup_dir(datastore, &backup_dir, verified_chunks, corrupt_chunks, worker.clone())? {
res.push(backup_dir.to_string()); res.push(backup_dir.to_string());
} }
res res
} else if let Some(backup_group) = backup_group { } else if let Some(backup_group) = backup_group {
verify_backup_group(&datastore, &backup_group, &worker)? let (_count, failed_dirs) = verify_backup_group(
datastore,
&backup_group,
verified_chunks,
corrupt_chunks,
None,
worker.clone(),
)?;
failed_dirs
} else { } else {
verify_all_backups(&datastore, &worker)? verify_all_backups(datastore, worker.clone())?
}; };
if failed_dirs.len() > 0 { if failed_dirs.len() > 0 {
worker.log("Failed to verify following snapshots:"); worker.log("Failed to verify following snapshots:");
for dir in failed_dirs { for dir in failed_dirs {
worker.log(format!("\t{}", dir)); worker.log(format!("\t{}", dir));
} }
bail!("verfication failed - please check the log for details"); bail!("verification failed - please check the log for details");
} }
Ok(()) Ok(())
}, },
@ -652,7 +673,7 @@ fn prune(
prune_result.push(json!({ prune_result.push(json!({
"backup-type": group.backup_type(), "backup-type": group.backup_type(),
"backup-id": group.backup_id(), "backup-id": group.backup_id(),
"backup-time": backup_time.timestamp(), "backup-time": backup_time,
"keep": keep, "keep": keep,
})); }));
} }
@ -676,7 +697,7 @@ fn prune(
if keep_all { keep = true; } if keep_all { keep = true; }
let backup_time = info.backup_dir.backup_time(); let backup_time = info.backup_dir.backup_time();
let timestamp = BackupDir::backup_time_to_string(backup_time); let timestamp = info.backup_dir.backup_time_string();
let group = info.backup_dir.group(); let group = info.backup_dir.group();
@ -693,7 +714,7 @@ fn prune(
prune_result.push(json!({ prune_result.push(json!({
"backup-type": group.backup_type(), "backup-type": group.backup_type(),
"backup-id": group.backup_id(), "backup-id": group.backup_id(),
"backup-time": backup_time.timestamp(), "backup-time": backup_time,
"keep": keep, "keep": keep,
})); }));
@ -876,7 +897,7 @@ fn download_file(
let backup_id = tools::required_string_param(&param, "backup-id")?; let backup_id = tools::required_string_param(&param, "backup-id")?;
let backup_time = tools::required_integer_param(&param, "backup-time")?; let backup_time = tools::required_integer_param(&param, "backup-time")?;
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; } if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
@ -949,7 +970,7 @@ fn download_file_decoded(
let backup_id = tools::required_string_param(&param, "backup-id")?; let backup_id = tools::required_string_param(&param, "backup-id")?;
let backup_time = tools::required_integer_param(&param, "backup-time")?; let backup_time = tools::required_integer_param(&param, "backup-time")?;
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; } if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
@ -1062,7 +1083,7 @@ fn upload_backup_log(
let backup_id = tools::required_string_param(&param, "backup-id")?; let backup_id = tools::required_string_param(&param, "backup-id")?;
let backup_time = tools::required_integer_param(&param, "backup-time")?; let backup_time = tools::required_integer_param(&param, "backup-time")?;
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?; let userid: Userid = rpcenv.get_user().unwrap().parse()?;
check_backup_owner(&datastore, backup_dir.group(), &userid)?; check_backup_owner(&datastore, backup_dir.group(), &userid)?;
@ -1076,7 +1097,7 @@ fn upload_backup_log(
} }
println!("Upload backup log to {}/{}/{}/{}/{}", store, println!("Upload backup log to {}/{}/{}/{}/{}", store,
backup_type, backup_id, BackupDir::backup_time_to_string(backup_dir.backup_time()), file_name); backup_type, backup_id, backup_dir.backup_time_string(), file_name);
let data = req_body let data = req_body
.map_err(Error::from) .map_err(Error::from)
@ -1138,7 +1159,7 @@ fn catalog(
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; } if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
@ -1218,7 +1239,7 @@ fn catalog(
pub const API_METHOD_PXAR_FILE_DOWNLOAD: ApiMethod = ApiMethod::new( pub const API_METHOD_PXAR_FILE_DOWNLOAD: ApiMethod = ApiMethod::new(
&ApiHandler::AsyncHttp(&pxar_file_download), &ApiHandler::AsyncHttp(&pxar_file_download),
&ObjectSchema::new( &ObjectSchema::new(
"Download single file from pxar file of a bacup snapshot. Only works if it's not encrypted.", "Download single file from pxar file of a backup snapshot. Only works if it's not encrypted.",
&sorted!([ &sorted!([
("store", false, &DATASTORE_SCHEMA), ("store", false, &DATASTORE_SCHEMA),
("backup-type", false, &BACKUP_TYPE_SCHEMA), ("backup-type", false, &BACKUP_TYPE_SCHEMA),
@ -1255,7 +1276,7 @@ fn pxar_file_download(
let backup_id = tools::required_string_param(&param, "backup-id")?; let backup_id = tools::required_string_param(&param, "backup-id")?;
let backup_time = tools::required_integer_param(&param, "backup-time")?; let backup_time = tools::required_integer_param(&param, "backup-time")?;
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; } if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
@ -1396,7 +1417,7 @@ fn get_notes(
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; } if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
@ -1449,7 +1470,7 @@ fn set_notes(
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; } if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }

View File

@ -1,6 +1,4 @@
use std::collections::HashMap; use anyhow::{format_err, Error};
use anyhow::{Error};
use serde_json::Value; use serde_json::Value;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment}; use proxmox::api::{api, ApiMethod, Router, RpcEnvironment};
@ -8,9 +6,10 @@ use proxmox::api::router::SubdirMap;
use proxmox::{list_subdirs_api_method, sortable}; use proxmox::{list_subdirs_api_method, sortable};
use crate::api2::types::*; use crate::api2::types::*;
use crate::api2::pull::{get_pull_parameters}; use crate::api2::pull::do_sync_job;
use crate::config::sync::{self, SyncJobStatus, SyncJobConfig}; use crate::config::sync::{self, SyncJobStatus, SyncJobConfig};
use crate::server::{self, TaskListInfo, WorkerTask}; use crate::server::UPID;
use crate::config::jobstate::{Job, JobState};
use crate::tools::systemd::time::{ use crate::tools::systemd::time::{
parse_calendar_event, compute_next_event}; parse_calendar_event, compute_next_event};
@ -34,38 +33,32 @@ pub fn list_sync_jobs(
let mut list: Vec<SyncJobStatus> = config.convert_to_typed_array("sync")?; let mut list: Vec<SyncJobStatus> = config.convert_to_typed_array("sync")?;
let mut last_tasks: HashMap<String, &TaskListInfo> = HashMap::new();
let tasks = server::read_task_list()?;
for info in tasks.iter() {
let worker_id = match &info.upid.worker_id {
Some(id) => id,
_ => { continue; },
};
if let Some(last) = last_tasks.get(worker_id) {
if last.upid.starttime < info.upid.starttime {
last_tasks.insert(worker_id.to_string(), &info);
}
} else {
last_tasks.insert(worker_id.to_string(), &info);
}
}
for job in &mut list { for job in &mut list {
let mut last = 0; let last_state = JobState::load("syncjob", &job.id)
if let Some(task) = last_tasks.get(&job.id) { .map_err(|err| format_err!("could not open statefile for {}: {}", &job.id, err))?;
job.last_run_upid = Some(task.upid_str.clone()); let (upid, endtime, state, starttime) = match last_state {
if let Some((endtime, status)) = &task.state { JobState::Created { time } => (None, None, None, time),
job.last_run_state = Some(String::from(status)); JobState::Started { upid } => {
job.last_run_endtime = Some(*endtime); let parsed_upid: UPID = upid.parse()?;
last = *endtime; (Some(upid), None, None, parsed_upid.starttime)
} },
} JobState::Finished { upid, state } => {
let parsed_upid: UPID = upid.parse()?;
(Some(upid), Some(state.endtime()), Some(state.to_string()), parsed_upid.starttime)
},
};
job.last_run_upid = upid;
job.last_run_state = state;
job.last_run_endtime = endtime;
let last = job.last_run_endtime.unwrap_or_else(|| starttime);
job.next_run = (|| -> Option<i64> { job.next_run = (|| -> Option<i64> {
let schedule = job.schedule.as_ref()?; let schedule = job.schedule.as_ref()?;
let event = parse_calendar_event(&schedule).ok()?; let event = parse_calendar_event(&schedule).ok()?;
compute_next_event(&event, last, false).ok() // ignore errors
compute_next_event(&event, last, false).unwrap_or_else(|_| None)
})(); })();
} }
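The hunk above replaces the task-list scan with a per-job state file lookup. A rough, self-contained illustration of how the last-run and next-run values fall out of such a state (not the repository's actual types: the enum shape mirrors the diff, but the helper and the fixed 3600-second interval are invented):

    // Sketch only: derive last-run info and a naive next-run time from a
    // persisted job state. The real code uses JobState::load() and a
    // calendar-event schedule via compute_next_event().
    enum JobState {
        Created { time: i64 },
        Started { starttime: i64 },
        Finished { starttime: i64, endtime: i64, ok: bool },
    }

    struct JobStatus {
        last_run_endtime: Option<i64>,
        last_run_ok: Option<bool>,
        next_run: Option<i64>,
    }

    fn job_status(state: &JobState, interval: i64) -> JobStatus {
        // reference point is the end of the last run, else its start time
        let (endtime, ok, reference) = match state {
            JobState::Created { time } => (None, None, *time),
            JobState::Started { starttime } => (None, None, *starttime),
            JobState::Finished { endtime, ok, .. } => (Some(*endtime), Some(*ok), *endtime),
        };
        JobStatus {
            last_run_endtime: endtime,
            last_run_ok: ok,
            // stand-in for parse_calendar_event() + compute_next_event()
            next_run: Some(reference + interval),
        }
    }

    fn main() {
        let state = JobState::Finished { starttime: 1_600_000_000, endtime: 1_600_000_120, ok: true };
        let s = job_status(&state, 3600);
        println!("last end: {:?}, ok: {:?}, next run: {:?}", s.last_run_endtime, s.last_run_ok, s.next_run);
    }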
@ -84,7 +77,7 @@ pub fn list_sync_jobs(
} }
)] )]
/// Runs the sync job manually. /// Runs the sync job manually.
async fn run_sync_job( fn run_sync_job(
id: String, id: String,
_info: &ApiMethod, _info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
@ -95,26 +88,9 @@ async fn run_sync_job(
let userid: Userid = rpcenv.get_user().unwrap().parse()?; let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let delete = sync_job.remove_vanished.unwrap_or(true); let job = Job::new("syncjob", &id)?;
let (client, src_repo, tgt_store) = get_pull_parameters(&sync_job.store, &sync_job.remote, &sync_job.remote_store).await?;
let upid_str = WorkerTask::spawn("syncjob", Some(id.clone()), userid, false, move |worker| async move { let upid_str = do_sync_job(job, sync_job, &userid, None)?;
worker.log(format!("sync job '{}' start", &id));
crate::client::pull::pull_store(
&worker,
&client,
&src_repo,
tgt_store.clone(),
delete,
Userid::backup_userid().clone(),
).await?;
worker.log(format!("sync job '{}' end", &id));
Ok(())
})?;
Ok(upid_str) Ok(upid_str)
} }

View File

@ -38,6 +38,7 @@ pub const API_METHOD_UPGRADE_BACKUP: ApiMethod = ApiMethod::new(
("backup-id", false, &BACKUP_ID_SCHEMA), ("backup-id", false, &BACKUP_ID_SCHEMA),
("backup-time", false, &BACKUP_TIME_SCHEMA), ("backup-time", false, &BACKUP_TIME_SCHEMA),
("debug", true, &BooleanSchema::new("Enable verbose debug logging.").schema()), ("debug", true, &BooleanSchema::new("Enable verbose debug logging.").schema()),
("benchmark", true, &BooleanSchema::new("Job is a benchmark (do not keep data).").schema()),
]), ]),
) )
).access( ).access(
@ -56,6 +57,7 @@ fn upgrade_to_backup_protocol(
async move { async move {
let debug = param["debug"].as_bool().unwrap_or(false); let debug = param["debug"].as_bool().unwrap_or(false);
let benchmark = param["benchmark"].as_bool().unwrap_or(false);
let userid: Userid = rpcenv.get_user().unwrap().parse()?; let userid: Userid = rpcenv.get_user().unwrap().parse()?;
@ -90,16 +92,50 @@ async move {
let backup_group = BackupGroup::new(backup_type, backup_id); let backup_group = BackupGroup::new(backup_type, backup_id);
let worker_type = if backup_type == "host" && backup_id == "benchmark" {
if !benchmark {
bail!("unable to run benchmark without --benchmark flags");
}
"benchmark"
} else {
if benchmark {
bail!("benchmark flags is only allowed on 'host/benchmark'");
}
"backup"
};
// lock backup group to only allow one backup per group at a time // lock backup group to only allow one backup per group at a time
let (owner, _group_guard) = datastore.create_locked_backup_group(&backup_group, &userid)?; let (owner, _group_guard) = datastore.create_locked_backup_group(&backup_group, &userid)?;
// permission check // permission check
if owner != userid { // only the owner is allowed to create additional snapshots if owner != userid && worker_type != "benchmark" {
// only the owner is allowed to create additional snapshots
bail!("backup owner check failed ({} != {})", userid, owner); bail!("backup owner check failed ({} != {})", userid, owner);
} }
let last_backup = BackupInfo::last_backup(&datastore.base_path(), &backup_group, true).unwrap_or(None); let last_backup = {
let backup_dir = BackupDir::new_with_group(backup_group.clone(), backup_time); let info = BackupInfo::last_backup(&datastore.base_path(), &backup_group, true).unwrap_or(None);
if let Some(info) = info {
let (manifest, _) = datastore.load_manifest(&info.backup_dir)?;
let verify = manifest.unprotected["verify_state"].clone();
match serde_json::from_value::<SnapshotVerifyState>(verify) {
Ok(verify) => {
match verify.state {
VerifyState::Ok => Some(info),
VerifyState::Failed => None,
}
},
Err(_) => {
// no verify state found, treat as valid
Some(info)
}
}
} else {
None
}
};
let backup_dir = BackupDir::with_group(backup_group.clone(), backup_time)?;
let _last_guard = if let Some(last) = &last_backup { let _last_guard = if let Some(last) = &last_backup {
if backup_dir.backup_time() <= last.backup_dir.backup_time() { if backup_dir.backup_time() <= last.backup_dir.backup_time() {
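The verify-state check above is the reason a previous snapshot is only reused as an incremental base when it was never verified or verified successfully. Stripped of the manifest handling, the decision reduces to something like this (types invented for the sketch):

    // Illustrative only: the real code deserializes "verify_state" from the
    // previous snapshot's manifest; here the state is passed in directly.
    struct SnapshotInfo { name: String }

    enum VerifyState { Ok, Failed }

    fn reusable_base(last: Option<(SnapshotInfo, Option<VerifyState>)>) -> Option<SnapshotInfo> {
        match last {
            // no verify state recorded: treat the snapshot as valid
            Some((info, None)) => Some(info),
            Some((info, Some(VerifyState::Ok))) => Some(info),
            // failed verification: do not base an incremental backup on it
            Some((_, Some(VerifyState::Failed))) => None,
            None => None,
        }
    }

    fn main() {
        let last = Some((SnapshotInfo { name: "host/pc/2020-10-01".into() }, Some(VerifyState::Ok)));
        match reusable_base(last) {
            Some(info) => println!("reusing {} as base", info.name),
            None => println!("starting from scratch"),
        }
    }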
@ -116,14 +152,15 @@ async move {
let (path, is_new, _snap_guard) = datastore.create_locked_backup_dir(&backup_dir)?; let (path, is_new, _snap_guard) = datastore.create_locked_backup_dir(&backup_dir)?;
if !is_new { bail!("backup directory already exists."); } if !is_new { bail!("backup directory already exists."); }
WorkerTask::spawn("backup", Some(worker_id), userid.clone(), true, move |worker| {
WorkerTask::spawn(worker_type, Some(worker_id), userid.clone(), true, move |worker| {
let mut env = BackupEnvironment::new( let mut env = BackupEnvironment::new(
env_type, userid, worker.clone(), datastore, backup_dir); env_type, userid, worker.clone(), datastore, backup_dir);
env.debug = debug; env.debug = debug;
env.last_backup = last_backup; env.last_backup = last_backup;
env.log(format!("starting new backup on datastore '{}': {:?}", store, path)); env.log(format!("starting new {} on datastore '{}': {:?}", worker_type, store, path));
let service = H2Service::new(env.clone(), worker.clone(), &BACKUP_API_ROUTER, debug); let service = H2Service::new(env.clone(), worker.clone(), &BACKUP_API_ROUTER, debug);
@ -143,6 +180,7 @@ async move {
let window_size = 32*1024*1024; // max = (1 << 31) - 2 let window_size = 32*1024*1024; // max = (1 << 31) - 2
http.http2_initial_stream_window_size(window_size); http.http2_initial_stream_window_size(window_size);
http.http2_initial_connection_window_size(window_size); http.http2_initial_connection_window_size(window_size);
http.http2_max_frame_size(4*1024*1024);
http.serve_connection(conn, service) http.serve_connection(conn, service)
.map_err(Error::from) .map_err(Error::from)
@ -160,7 +198,11 @@ async move {
req = req_fut => req, req = req_fut => req,
abrt = abort_future => abrt, abrt = abort_future => abrt,
}; };
if benchmark {
env.log("benchmark finished successfully");
tools::runtime::block_in_place(|| env.remove_backup())?;
return Ok(());
}
match (res, env.ensure_finished()) { match (res, env.ensure_finished()) {
(Ok(_), Ok(())) => { (Ok(_), Ok(())) => {
env.log("backup finished successfully"); env.log("backup finished successfully");
@ -180,7 +222,7 @@ async move {
(Err(err), Err(_)) => { (Err(err), Err(_)) => {
env.log(format!("backup failed: {}", err)); env.log(format!("backup failed: {}", err));
env.log("removing failed backup"); env.log("removing failed backup");
env.remove_backup()?; tools::runtime::block_in_place(|| env.remove_backup())?;
Err(err) Err(err)
}, },
} }
@ -334,7 +376,7 @@ fn create_fixed_index(
let last_backup = match &env.last_backup { let last_backup = match &env.last_backup {
Some(info) => info, Some(info) => info,
None => { None => {
bail!("cannot reuse index - no previous backup exists"); bail!("cannot reuse index - no valid previous backup exists");
} }
}; };
@ -649,7 +691,7 @@ fn download_previous(
let last_backup = match &env.last_backup { let last_backup = match &env.last_backup {
Some(info) => info, Some(info) => info,
None => bail!("no previous backup"), None => bail!("no valid previous backup"),
}; };
let mut path = env.datastore.snapshot_path(&last_backup.backup_dir); let mut path = env.datastore.snapshot_path(&last_backup.backup_dir);

View File

@ -66,13 +66,16 @@ struct FixedWriterState {
incremental: bool, incremental: bool,
} }
// key=digest, value=length
type KnownChunksMap = HashMap<[u8;32], u32>;
struct SharedBackupState { struct SharedBackupState {
finished: bool, finished: bool,
uid_counter: usize, uid_counter: usize,
file_counter: usize, // successfully uploaded files file_counter: usize, // successfully uploaded files
dynamic_writers: HashMap<usize, DynamicWriterState>, dynamic_writers: HashMap<usize, DynamicWriterState>,
fixed_writers: HashMap<usize, FixedWriterState>, fixed_writers: HashMap<usize, FixedWriterState>,
known_chunks: HashMap<[u8;32], u32>, known_chunks: KnownChunksMap,
backup_size: u64, // sums up size of all files backup_size: u64, // sums up size of all files
backup_stat: UploadStatistic, backup_stat: UploadStatistic,
} }
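The new KnownChunksMap alias is just a digest-keyed map whose value is the chunk length, used to answer "is this chunk already on disk, and how large is it" during uploads. In isolation the bookkeeping looks like this (helper name invented):

    // Sketch of the digest -> length bookkeeping behind KnownChunksMap.
    use std::collections::HashMap;

    type KnownChunksMap = HashMap<[u8; 32], u32>;

    // returns true if the chunk was already known, i.e. the upload can be skipped
    fn register_chunk(known: &mut KnownChunksMap, digest: [u8; 32], len: u32) -> bool {
        known.insert(digest, len).is_some()
    }

    fn main() {
        let mut known = KnownChunksMap::new();
        let digest = [0u8; 32];
        assert!(!register_chunk(&mut known, digest, 4096));
        assert!(register_chunk(&mut known, digest, 4096));
        println!("known chunks: {}", known.len());
    }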
@ -457,11 +460,11 @@ impl BackupEnvironment {
/// Mark backup as finished /// Mark backup as finished
pub fn finish_backup(&self) -> Result<(), Error> { pub fn finish_backup(&self) -> Result<(), Error> {
let mut state = self.state.lock().unwrap(); let mut state = self.state.lock().unwrap();
// test if all writers are correctly closed
state.ensure_unfinished()?; state.ensure_unfinished()?;
if state.dynamic_writers.len() != 0 { // test if all writers are correctly closed
if state.dynamic_writers.len() != 0 || state.fixed_writers.len() != 0 {
bail!("found open index writer - unable to finish backup"); bail!("found open index writer - unable to finish backup");
} }

View File

@ -61,12 +61,15 @@ impl Future for UploadChunk {
let (is_duplicate, compressed_size) = match proxmox::try_block! { let (is_duplicate, compressed_size) = match proxmox::try_block! {
let mut chunk = DataBlob::from_raw(raw_data)?; let mut chunk = DataBlob::from_raw(raw_data)?;
chunk.verify_unencrypted(this.size as usize, &this.digest)?; tools::runtime::block_in_place(|| {
chunk.verify_unencrypted(this.size as usize, &this.digest)?;
// always compute CRC at server side // always compute CRC at server side
chunk.set_crc(chunk.compute_crc()); chunk.set_crc(chunk.compute_crc());
this.store.insert_chunk(&chunk, &this.digest)
})
this.store.insert_chunk(&chunk, &this.digest)
} { } {
Ok(res) => res, Ok(res) => res,
Err(err) => break err, Err(err) => break err,
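Several hunks in this set (chunk verification and insertion here, remove_backup and download_chunk below) push blocking filesystem work through block_in_place so the async executor threads are not stalled. The repository wraps this in its own tools::runtime helper; assuming plain tokio with a multi-threaded runtime, the pattern is roughly:

    // Sketch: run blocking std I/O from async code without stalling the reactor.
    // tokio::task::block_in_place requires the multi-threaded runtime; the
    // surrounding function and file name are made up.
    use std::path::PathBuf;

    async fn read_blob(path: PathBuf) -> std::io::Result<Vec<u8>> {
        tokio::task::block_in_place(move || std::fs::read(&path))
    }

    #[tokio::main(flavor = "multi_thread", worker_threads = 2)]
    async fn main() -> std::io::Result<()> {
        let data = read_blob(PathBuf::from("/etc/hostname")).await?;
        println!("read {} bytes", data.len());
        Ok(())
    }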

View File

@ -9,6 +9,7 @@ use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*; use crate::api2::types::*;
use crate::backup::*; use crate::backup::*;
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::datastore::{self, DataStoreConfig, DIR_NAME_SCHEMA}; use crate::config::datastore::{self, DataStoreConfig, DIR_NAME_SCHEMA};
use crate::config::acl::{PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_MODIFY}; use crate::config::acl::{PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_MODIFY};
@ -22,7 +23,7 @@ use crate::config::acl::{PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_MODIFY};
items: { type: datastore::DataStoreConfig }, items: { type: datastore::DataStoreConfig },
}, },
access: { access: {
permission: &Permission::Privilege(&["datastore"], PRIV_DATASTORE_AUDIT, false), permission: &Permission::Anybody,
}, },
)] )]
/// List all datastores /// List all datastores
@ -33,11 +34,18 @@ pub fn list_datastores(
let (config, digest) = datastore::config()?; let (config, digest) = datastore::config()?;
let list = config.convert_to_typed_array("datastore")?; let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into(); rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list) let list:Vec<DataStoreConfig> = config.convert_to_typed_array("datastore")?;
let filter_by_privs = |store: &DataStoreConfig| {
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store.name]);
(user_privs & PRIV_DATASTORE_AUDIT) != 0
};
Ok(list.into_iter().filter(filter_by_privs).collect())
} }
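list_datastores is now open to any authenticated user and instead filters the returned entries by the per-datastore audit privilege. The same pattern, reduced to plain bit flags and a toy lookup function (both invented here):

    // Sketch of privilege-based filtering of a config listing.
    // PRIV_AUDIT and lookup_privs() stand in for the real ACL machinery.
    const PRIV_AUDIT: u64 = 1 << 0;

    struct DataStore { name: String }

    fn lookup_privs(user: &str, store: &str) -> u64 {
        // toy ACL: "admin" may audit everything, everyone else only "backup1"
        if user == "admin" || store == "backup1" { PRIV_AUDIT } else { 0 }
    }

    fn list_visible(user: &str, stores: Vec<DataStore>) -> Vec<DataStore> {
        stores
            .into_iter()
            .filter(|ds| (lookup_privs(user, &ds.name) & PRIV_AUDIT) != 0)
            .collect()
    }

    fn main() {
        let stores = vec![
            DataStore { name: "backup1".into() },
            DataStore { name: "backup2".into() },
        ];
        for ds in list_visible("john", stores) {
            println!("visible: {}", ds.name);
        }
    }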
@ -67,6 +75,10 @@ pub fn list_datastores(
optional: true, optional: true,
schema: PRUNE_SCHEDULE_SCHEMA, schema: PRUNE_SCHEDULE_SCHEMA,
}, },
"verify-schedule": {
optional: true,
schema: VERIFY_SCHEDULE_SCHEMA,
},
"keep-last": { "keep-last": {
optional: true, optional: true,
schema: PRUNE_SCHEMA_KEEP_LAST, schema: PRUNE_SCHEMA_KEEP_LAST,
@ -100,7 +112,7 @@ pub fn list_datastores(
/// Create new datastore config. /// Create new datastore config.
pub fn create_datastore(param: Value) -> Result<(), Error> { pub fn create_datastore(param: Value) -> Result<(), Error> {
let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let datastore: datastore::DataStoreConfig = serde_json::from_value(param.clone())?; let datastore: datastore::DataStoreConfig = serde_json::from_value(param.clone())?;
@ -119,6 +131,10 @@ pub fn create_datastore(param: Value) -> Result<(), Error> {
datastore::save_config(&config)?; datastore::save_config(&config)?;
crate::config::jobstate::create_state_file("prune", &datastore.name)?;
crate::config::jobstate::create_state_file("garbage_collection", &datastore.name)?;
crate::config::jobstate::create_state_file("verify", &datastore.name)?;
Ok(()) Ok(())
} }
@ -163,6 +179,8 @@ pub enum DeletableProperty {
gc_schedule, gc_schedule,
/// Delete the prune job schedule. /// Delete the prune job schedule.
prune_schedule, prune_schedule,
/// Delete the verify schedule property
verify_schedule,
/// Delete the keep-last property /// Delete the keep-last property
keep_last, keep_last,
/// Delete the keep-hourly property /// Delete the keep-hourly property
@ -196,6 +214,10 @@ pub enum DeletableProperty {
optional: true, optional: true,
schema: PRUNE_SCHEDULE_SCHEMA, schema: PRUNE_SCHEDULE_SCHEMA,
}, },
"verify-schedule": {
optional: true,
schema: VERIFY_SCHEDULE_SCHEMA,
},
"keep-last": { "keep-last": {
optional: true, optional: true,
schema: PRUNE_SCHEMA_KEEP_LAST, schema: PRUNE_SCHEMA_KEEP_LAST,
@ -244,6 +266,7 @@ pub fn update_datastore(
comment: Option<String>, comment: Option<String>,
gc_schedule: Option<String>, gc_schedule: Option<String>,
prune_schedule: Option<String>, prune_schedule: Option<String>,
verify_schedule: Option<String>,
keep_last: Option<u64>, keep_last: Option<u64>,
keep_hourly: Option<u64>, keep_hourly: Option<u64>,
keep_daily: Option<u64>, keep_daily: Option<u64>,
@ -254,7 +277,7 @@ pub fn update_datastore(
digest: Option<String>, digest: Option<String>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
// pass/compare digest // pass/compare digest
let (mut config, expected_digest) = datastore::config()?; let (mut config, expected_digest) = datastore::config()?;
@ -272,6 +295,7 @@ pub fn update_datastore(
DeletableProperty::comment => { data.comment = None; }, DeletableProperty::comment => { data.comment = None; },
DeletableProperty::gc_schedule => { data.gc_schedule = None; }, DeletableProperty::gc_schedule => { data.gc_schedule = None; },
DeletableProperty::prune_schedule => { data.prune_schedule = None; }, DeletableProperty::prune_schedule => { data.prune_schedule = None; },
DeletableProperty::verify_schedule => { data.verify_schedule = None; },
DeletableProperty::keep_last => { data.keep_last = None; }, DeletableProperty::keep_last => { data.keep_last = None; },
DeletableProperty::keep_hourly => { data.keep_hourly = None; }, DeletableProperty::keep_hourly => { data.keep_hourly = None; },
DeletableProperty::keep_daily => { data.keep_daily = None; }, DeletableProperty::keep_daily => { data.keep_daily = None; },
@ -291,8 +315,23 @@ pub fn update_datastore(
} }
} }
if gc_schedule.is_some() { data.gc_schedule = gc_schedule; } let mut gc_schedule_changed = false;
if prune_schedule.is_some() { data.prune_schedule = prune_schedule; } if gc_schedule.is_some() {
gc_schedule_changed = data.gc_schedule != gc_schedule;
data.gc_schedule = gc_schedule;
}
let mut prune_schedule_changed = false;
if prune_schedule.is_some() {
prune_schedule_changed = data.prune_schedule != prune_schedule;
data.prune_schedule = prune_schedule;
}
let mut verify_schedule_changed = false;
if verify_schedule.is_some() {
verify_schedule_changed = data.verify_schedule != verify_schedule;
data.verify_schedule = verify_schedule;
}
if keep_last.is_some() { data.keep_last = keep_last; } if keep_last.is_some() { data.keep_last = keep_last; }
if keep_hourly.is_some() { data.keep_hourly = keep_hourly; } if keep_hourly.is_some() { data.keep_hourly = keep_hourly; }
@ -305,6 +344,20 @@ pub fn update_datastore(
datastore::save_config(&config)?; datastore::save_config(&config)?;
// we want to reset the statefiles, to avoid an immediate action in some cases
// (e.g. going from monthly to weekly in the second week of the month)
if gc_schedule_changed {
crate::config::jobstate::create_state_file("garbage_collection", &name)?;
}
if prune_schedule_changed {
crate::config::jobstate::create_state_file("prune", &name)?;
}
if verify_schedule_changed {
crate::config::jobstate::create_state_file("verify", &name)?;
}
Ok(()) Ok(())
} }
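The schedule handling above only recreates the job state file when the submitted schedule actually differs from the stored one, so an unchanged update does not reset the reference point for the next run. Condensed, with a placeholder reset_state() instead of jobstate::create_state_file():

    // Only reset persisted job state when the schedule really changed.
    fn reset_state(job: &str) {
        // placeholder for jobstate::create_state_file()
        println!("resetting state for job '{}'", job);
    }

    fn apply_schedule(current: &mut Option<String>, new: Option<String>, job: &str) {
        if let Some(new) = new {
            let changed = current.as_deref() != Some(new.as_str());
            *current = Some(new);
            if changed {
                reset_state(job);
            }
        }
    }

    fn main() {
        let mut gc_schedule = Some("daily".to_string());
        apply_schedule(&mut gc_schedule, Some("weekly".to_string()), "garbage_collection");
        // same value again: no reset
        apply_schedule(&mut gc_schedule, Some("weekly".to_string()), "garbage_collection");
    }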
@ -328,7 +381,7 @@ pub fn update_datastore(
/// Remove a datastore configuration. /// Remove a datastore configuration.
pub fn delete_datastore(name: String, digest: Option<String>) -> Result<(), Error> { pub fn delete_datastore(name: String, digest: Option<String>) -> Result<(), Error> {
let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(datastore::DATASTORE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = datastore::config()?; let (mut config, expected_digest) = datastore::config()?;
@ -344,6 +397,11 @@ pub fn delete_datastore(name: String, digest: Option<String>) -> Result<(), Erro
datastore::save_config(&config)?; datastore::save_config(&config)?;
// ignore errors
let _ = crate::config::jobstate::remove_state_file("prune", &name);
let _ = crate::config::jobstate::remove_state_file("garbage_collection", &name);
let _ = crate::config::jobstate::remove_state_file("verify", &name);
Ok(()) Ok(())
} }

View File

@ -60,6 +60,12 @@ pub fn list_remotes(
host: { host: {
schema: DNS_NAME_OR_IP_SCHEMA, schema: DNS_NAME_OR_IP_SCHEMA,
}, },
port: {
description: "The (optional) port.",
type: u16,
optional: true,
default: 8007,
},
userid: { userid: {
type: Userid, type: Userid,
}, },
@ -79,7 +85,7 @@ pub fn list_remotes(
/// Create new remote. /// Create new remote.
pub fn create_remote(password: String, param: Value) -> Result<(), Error> { pub fn create_remote(password: String, param: Value) -> Result<(), Error> {
let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let mut data = param.clone(); let mut data = param.clone();
data["password"] = Value::from(base64::encode(password.as_bytes())); data["password"] = Value::from(base64::encode(password.as_bytes()));
@ -136,6 +142,8 @@ pub enum DeletableProperty {
comment, comment,
/// Delete the fingerprint property. /// Delete the fingerprint property.
fingerprint, fingerprint,
/// Delete the port property.
port,
} }
#[api( #[api(
@ -153,6 +161,11 @@ pub enum DeletableProperty {
optional: true, optional: true,
schema: DNS_NAME_OR_IP_SCHEMA, schema: DNS_NAME_OR_IP_SCHEMA,
}, },
port: {
description: "The (optional) port.",
type: u16,
optional: true,
},
userid: { userid: {
optional: true, optional: true,
type: Userid, type: Userid,
@ -188,6 +201,7 @@ pub fn update_remote(
name: String, name: String,
comment: Option<String>, comment: Option<String>,
host: Option<String>, host: Option<String>,
port: Option<u16>,
userid: Option<Userid>, userid: Option<Userid>,
password: Option<String>, password: Option<String>,
fingerprint: Option<String>, fingerprint: Option<String>,
@ -195,7 +209,7 @@ pub fn update_remote(
digest: Option<String>, digest: Option<String>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = remote::config()?; let (mut config, expected_digest) = remote::config()?;
@ -211,6 +225,7 @@ pub fn update_remote(
match delete_prop { match delete_prop {
DeletableProperty::comment => { data.comment = None; }, DeletableProperty::comment => { data.comment = None; },
DeletableProperty::fingerprint => { data.fingerprint = None; }, DeletableProperty::fingerprint => { data.fingerprint = None; },
DeletableProperty::port => { data.port = None; },
} }
} }
} }
@ -224,6 +239,7 @@ pub fn update_remote(
} }
} }
if let Some(host) = host { data.host = host; } if let Some(host) = host { data.host = host; }
if port.is_some() { data.port = port; }
if let Some(userid) = userid { data.userid = userid; } if let Some(userid) = userid { data.userid = userid; }
if let Some(password) = password { data.password = password; } if let Some(password) = password { data.password = password; }
@ -256,7 +272,7 @@ pub fn update_remote(
/// Remove a remote from the configuration file. /// Remove a remote from the configuration file.
pub fn delete_remote(name: String, digest: Option<String>) -> Result<(), Error> { pub fn delete_remote(name: String, digest: Option<String>) -> Result<(), Error> {
let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = remote::config()?; let (mut config, expected_digest) = remote::config()?;

View File

@ -69,7 +69,7 @@ pub fn list_sync_jobs(
/// Create a new sync job. /// Create a new sync job.
pub fn create_sync_job(param: Value) -> Result<(), Error> { pub fn create_sync_job(param: Value) -> Result<(), Error> {
let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let sync_job: sync::SyncJobConfig = serde_json::from_value(param.clone())?; let sync_job: sync::SyncJobConfig = serde_json::from_value(param.clone())?;
@ -83,6 +83,8 @@ pub fn create_sync_job(param: Value) -> Result<(), Error> {
sync::save_config(&config)?; sync::save_config(&config)?;
crate::config::jobstate::create_state_file("syncjob", &sync_job.id)?;
Ok(()) Ok(())
} }
@ -185,7 +187,7 @@ pub fn update_sync_job(
digest: Option<String>, digest: Option<String>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
// pass/compare digest // pass/compare digest
let (mut config, expected_digest) = sync::config()?; let (mut config, expected_digest) = sync::config()?;
@ -248,7 +250,7 @@ pub fn update_sync_job(
/// Remove a sync job configuration /// Remove a sync job configuration
pub fn delete_sync_job(id: String, digest: Option<String>) -> Result<(), Error> { pub fn delete_sync_job(id: String, digest: Option<String>) -> Result<(), Error> {
let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = sync::config()?; let (mut config, expected_digest) = sync::config()?;
@ -264,6 +266,8 @@ pub fn delete_sync_job(id: String, digest: Option<String>) -> Result<(), Error>
sync::save_config(&config)?; sync::save_config(&config)?;
crate::config::jobstate::remove_state_file("syncjob", &id)?;
Ok(()) Ok(())
} }

View File

@ -22,6 +22,7 @@ use crate::api2::types::*;
use crate::config::acl::PRIV_SYS_CONSOLE; use crate::config::acl::PRIV_SYS_CONSOLE;
use crate::server::WorkerTask; use crate::server::WorkerTask;
use crate::tools; use crate::tools;
use crate::tools::ticket::{self, Empty, Ticket};
pub mod disks; pub mod disks;
pub mod dns; pub mod dns;
@ -105,12 +106,11 @@ async fn termproxy(
let listener = TcpListener::bind("localhost:0")?; let listener = TcpListener::bind("localhost:0")?;
let port = listener.local_addr()?.port(); let port = listener.local_addr()?.port();
let ticket = tools::ticket::assemble_term_ticket( let ticket = Ticket::new(ticket::TERM_PREFIX, &Empty)?
crate::auth_helpers::private_auth_key(), .sign(
&userid, crate::auth_helpers::private_auth_key(),
&path, Some(&ticket::term_aad(&userid, &path, port)),
port, )?;
)?;
let mut command = Vec::new(); let mut command = Vec::new();
match cmd.as_ref().map(|x| x.as_str()) { match cmd.as_ref().map(|x| x.as_str()) {
@ -273,17 +273,16 @@ fn upgrade_to_websocket(
) -> ApiResponseFuture { ) -> ApiResponseFuture {
async move { async move {
let userid: Userid = rpcenv.get_user().unwrap().parse()?; let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let ticket = tools::required_string_param(&param, "vncticket")?.to_owned(); let ticket = tools::required_string_param(&param, "vncticket")?;
let port: u16 = tools::required_integer_param(&param, "port")? as u16; let port: u16 = tools::required_integer_param(&param, "port")? as u16;
// will be checked again by termproxy // will be checked again by termproxy
tools::ticket::verify_term_ticket( Ticket::<Empty>::parse(ticket)?
crate::auth_helpers::public_auth_key(), .verify(
&userid, crate::auth_helpers::public_auth_key(),
&"/system", ticket::TERM_PREFIX,
port, Some(&ticket::term_aad(&userid, "/system", port)),
&ticket, )?;
)?;
let (ws, response) = WebSocket::new(parts.headers)?; let (ws, response) = WebSocket::new(parts.headers)?;

View File

@ -16,6 +16,7 @@ use crate::tools::systemd::{self, types::*};
use crate::server::WorkerTask; use crate::server::WorkerTask;
use crate::api2::types::*; use crate::api2::types::*;
use crate::config::datastore::DataStoreConfig;
#[api( #[api(
properties: { properties: {
@ -175,9 +176,69 @@ pub fn create_datastore_disk(
Ok(upid_str) Ok(upid_str)
} }
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
name: {
schema: DATASTORE_SCHEMA,
},
}
},
access: {
permission: &Permission::Privilege(&["system", "disks"], PRIV_SYS_MODIFY, false),
},
)]
/// Remove a Filesystem mounted under '/mnt/datastore/<name>'.
pub fn delete_datastore_disk(name: String) -> Result<(), Error> {
let path = format!("/mnt/datastore/{}", name);
// path of datastore cannot be changed
let (config, _) = crate::config::datastore::config()?;
let datastores: Vec<DataStoreConfig> = config.convert_to_typed_array("datastore")?;
let conflicting_datastore: Option<DataStoreConfig> = datastores.into_iter()
.filter(|ds| ds.path == path)
.next();
if let Some(conflicting_datastore) = conflicting_datastore {
bail!("Can't remove '{}' since it's required by datastore '{}'",
conflicting_datastore.path, conflicting_datastore.name);
}
// disable systemd mount-unit
let mut mount_unit_name = systemd::escape_unit(&path, true);
mount_unit_name.push_str(".mount");
systemd::disable_unit(&mount_unit_name)?;
// delete .mount-file
let mount_unit_path = format!("/etc/systemd/system/{}", mount_unit_name);
let full_path = std::path::Path::new(&mount_unit_path);
log::info!("removing systemd mount unit {:?}", full_path);
std::fs::remove_file(&full_path)?;
// try to unmount; if that fails, tell the user to reboot or unmount manually
let mut command = std::process::Command::new("umount");
command.arg(&path);
match crate::tools::run_command(command, None) {
Err(_) => bail!(
"Could not umount '{}' since it is busy. It will stay mounted \
until the next reboot or until unmounted manually!",
path
),
Ok(_) => Ok(())
}
}
const ITEM_ROUTER: Router = Router::new()
.delete(&API_METHOD_DELETE_DATASTORE_DISK);
pub const ROUTER: Router = Router::new() pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_DATASTORE_MOUNTS) .get(&API_METHOD_LIST_DATASTORE_MOUNTS)
.post(&API_METHOD_CREATE_DATASTORE_DISK); .post(&API_METHOD_CREATE_DATASTORE_DISK)
.match_all("name", &ITEM_ROUTER);
fn create_datastore_mount_unit( fn create_datastore_mount_unit(

View File

@ -25,6 +25,8 @@ use crate::server::WorkerTask;
use crate::api2::types::*; use crate::api2::types::*;
use crate::tools::systemd;
pub const DISK_ARRAY_SCHEMA: Schema = ArraySchema::new( pub const DISK_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Disk name list.", &BLOCKDEVICE_NAME_SCHEMA) "Disk name list.", &BLOCKDEVICE_NAME_SCHEMA)
.schema(); .schema();
@ -355,6 +357,11 @@ pub fn create_zpool(
let output = crate::tools::run_command(command, None)?; let output = crate::tools::run_command(command, None)?;
worker.log(output); worker.log(output);
if std::path::Path::new("/lib/systemd/system/zfs-import@.service").exists() {
let import_unit = format!("zfs-import@{}.service", systemd::escape_unit(&name, false));
systemd::enable_unit(&import_unit)?;
}
if let Some(compression) = compression { if let Some(compression) = compression {
let mut command = std::process::Command::new("zfs"); let mut command = std::process::Command::new("zfs");
command.args(&["set", &format!("compression={}", compression), &name]); command.args(&["set", &format!("compression={}", compression), &name]);

View File

@ -198,6 +198,14 @@ pub fn read_interface(iface: String) -> Result<Value, Error> {
type: LinuxBondMode, type: LinuxBondMode,
optional: true, optional: true,
}, },
"bond-primary": {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
optional: true,
},
bond_xmit_hash_policy: {
type: BondXmitHashPolicy,
optional: true,
},
slaves: { slaves: {
schema: NETWORK_INTERFACE_LIST_SCHEMA, schema: NETWORK_INTERFACE_LIST_SCHEMA,
optional: true, optional: true,
@ -224,6 +232,8 @@ pub fn create_interface(
bridge_ports: Option<String>, bridge_ports: Option<String>,
bridge_vlan_aware: Option<bool>, bridge_vlan_aware: Option<bool>,
bond_mode: Option<LinuxBondMode>, bond_mode: Option<LinuxBondMode>,
bond_primary: Option<String>,
bond_xmit_hash_policy: Option<BondXmitHashPolicy>,
slaves: Option<String>, slaves: Option<String>,
param: Value, param: Value,
) -> Result<(), Error> { ) -> Result<(), Error> {
@ -231,7 +241,7 @@ pub fn create_interface(
let interface_type = crate::tools::required_string_param(&param, "type")?; let interface_type = crate::tools::required_string_param(&param, "type")?;
let interface_type: NetworkInterfaceType = serde_json::from_value(interface_type.into())?; let interface_type: NetworkInterfaceType = serde_json::from_value(interface_type.into())?;
let _lock = open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, _digest) = network::config()?; let (mut config, _digest) = network::config()?;
@ -284,7 +294,23 @@ pub fn create_interface(
if bridge_vlan_aware.is_some() { interface.bridge_vlan_aware = bridge_vlan_aware; } if bridge_vlan_aware.is_some() { interface.bridge_vlan_aware = bridge_vlan_aware; }
} }
NetworkInterfaceType::Bond => { NetworkInterfaceType::Bond => {
if bond_mode.is_some() { interface.bond_mode = bond_mode; } if let Some(mode) = bond_mode {
interface.bond_mode = bond_mode;
if bond_primary.is_some() {
if mode != LinuxBondMode::active_backup {
bail!("bond-primary is only valid with Active/Backup mode");
}
interface.bond_primary = bond_primary;
}
if bond_xmit_hash_policy.is_some() {
if mode != LinuxBondMode::ieee802_3ad &&
mode != LinuxBondMode::balance_xor
{
bail!("bond_xmit_hash_policy is only valid with LACP(802.3ad) or balance-xor mode");
}
interface.bond_xmit_hash_policy = bond_xmit_hash_policy;
}
}
if let Some(slaves) = slaves { if let Some(slaves) = slaves {
let slaves = split_interface_list(&slaves)?; let slaves = split_interface_list(&slaves)?;
interface.set_bond_slaves(slaves)?; interface.set_bond_slaves(slaves)?;
@ -343,6 +369,11 @@ pub enum DeletableProperty {
bridge_vlan_aware, bridge_vlan_aware,
/// Delete bond-slaves (set to 'none') /// Delete bond-slaves (set to 'none')
slaves, slaves,
/// Delete bond-primary
#[serde(rename = "bond-primary")]
bond_primary,
/// Delete bond transmit hash policy
bond_xmit_hash_policy,
} }
@ -420,6 +451,14 @@ pub enum DeletableProperty {
type: LinuxBondMode, type: LinuxBondMode,
optional: true, optional: true,
}, },
"bond-primary": {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
optional: true,
},
bond_xmit_hash_policy: {
type: BondXmitHashPolicy,
optional: true,
},
slaves: { slaves: {
schema: NETWORK_INTERFACE_LIST_SCHEMA, schema: NETWORK_INTERFACE_LIST_SCHEMA,
optional: true, optional: true,
@ -458,13 +497,15 @@ pub fn update_interface(
bridge_ports: Option<String>, bridge_ports: Option<String>,
bridge_vlan_aware: Option<bool>, bridge_vlan_aware: Option<bool>,
bond_mode: Option<LinuxBondMode>, bond_mode: Option<LinuxBondMode>,
bond_primary: Option<String>,
bond_xmit_hash_policy: Option<BondXmitHashPolicy>,
slaves: Option<String>, slaves: Option<String>,
delete: Option<Vec<DeletableProperty>>, delete: Option<Vec<DeletableProperty>>,
digest: Option<String>, digest: Option<String>,
param: Value, param: Value,
) -> Result<(), Error> { ) -> Result<(), Error> {
let _lock = open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = network::config()?; let (mut config, expected_digest) = network::config()?;
@ -501,6 +542,8 @@ pub fn update_interface(
DeletableProperty::bridge_ports => { interface.set_bridge_ports(Vec::new())?; } DeletableProperty::bridge_ports => { interface.set_bridge_ports(Vec::new())?; }
DeletableProperty::bridge_vlan_aware => { interface.bridge_vlan_aware = None; } DeletableProperty::bridge_vlan_aware => { interface.bridge_vlan_aware = None; }
DeletableProperty::slaves => { interface.set_bond_slaves(Vec::new())?; } DeletableProperty::slaves => { interface.set_bond_slaves(Vec::new())?; }
DeletableProperty::bond_primary => { interface.bond_primary = None; }
DeletableProperty::bond_xmit_hash_policy => { interface.bond_xmit_hash_policy = None }
} }
} }
} }
@ -518,7 +561,23 @@ pub fn update_interface(
let slaves = split_interface_list(&slaves)?; let slaves = split_interface_list(&slaves)?;
interface.set_bond_slaves(slaves)?; interface.set_bond_slaves(slaves)?;
} }
if bond_mode.is_some() { interface.bond_mode = bond_mode; } if let Some(mode) = bond_mode {
interface.bond_mode = bond_mode;
if bond_primary.is_some() {
if mode != LinuxBondMode::active_backup {
bail!("bond-primary is only valid with Active/Backup mode");
}
interface.bond_primary = bond_primary;
}
if bond_xmit_hash_policy.is_some() {
if mode != LinuxBondMode::ieee802_3ad &&
mode != LinuxBondMode::balance_xor
{
bail!("bond_xmit_hash_policy is only valid with LACP(802.3ad) or balance-xor mode");
}
interface.bond_xmit_hash_policy = bond_xmit_hash_policy;
}
}
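Both create_interface and update_interface now accept bond-primary only for active-backup bonds and bond_xmit_hash_policy only for LACP or balance-xor bonds. The check itself, pulled out with a bare enum and anyhow (crate assumed, types invented):

    // Sketch of mode-dependent validation of bond options.
    use anyhow::{bail, Error};

    #[derive(PartialEq)]
    enum BondMode { ActiveBackup, Lacp, BalanceXor, BalanceRr }

    fn check_bond_options(
        mode: &BondMode,
        primary: Option<&str>,
        xmit_hash_policy: Option<&str>,
    ) -> Result<(), Error> {
        if primary.is_some() && *mode != BondMode::ActiveBackup {
            bail!("bond-primary is only valid with active-backup mode");
        }
        if xmit_hash_policy.is_some()
            && *mode != BondMode::Lacp
            && *mode != BondMode::BalanceXor
        {
            bail!("bond_xmit_hash_policy is only valid with LACP (802.3ad) or balance-xor mode");
        }
        Ok(())
    }

    fn main() {
        let res = check_bond_options(&BondMode::BalanceRr, Some("eth0"), None);
        println!("{}", res.unwrap_err());
    }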
if let Some(cidr) = cidr { if let Some(cidr) = cidr {
let (_, _, is_v6) = network::parse_cidr(&cidr)?; let (_, _, is_v6) = network::parse_cidr(&cidr)?;
@ -587,7 +646,7 @@ pub fn update_interface(
/// Remove network interface configuration. /// Remove network interface configuration.
pub fn delete_interface(iface: String, digest: Option<String>) -> Result<(), Error> { pub fn delete_interface(iface: String, digest: Option<String>) -> Result<(), Error> {
let _lock = open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0))?; let _lock = open_file_locked(network::NETWORK_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = network::config()?; let (mut config, expected_digest) = network::config()?;

View File

@ -1,10 +1,10 @@
use anyhow::Error; use anyhow::Error;
use serde_json::{Value, json}; use serde_json::{Value, json};
use proxmox::api::{api, Router}; use proxmox::api::{api, Permission, Router};
use crate::api2::types::*; use crate::api2::types::*;
use crate::tools::epoch_now_f64; use crate::config::acl::PRIV_SYS_AUDIT;
use crate::rrd::{extract_cached_data, RRD_DATA_ENTRIES}; use crate::rrd::{extract_cached_data, RRD_DATA_ENTRIES};
pub fn create_value_from_rrd( pub fn create_value_from_rrd(
@ -15,7 +15,7 @@ pub fn create_value_from_rrd(
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let mut result = Vec::new(); let mut result = Vec::new();
let now = epoch_now_f64()?; let now = proxmox::tools::time::epoch_f64();
for name in list { for name in list {
let (start, reso, list) = match extract_cached_data(basedir, name, now, timeframe, cf) { let (start, reso, list) = match extract_cached_data(basedir, name, now, timeframe, cf) {
@ -57,6 +57,9 @@ pub fn create_value_from_rrd(
}, },
}, },
}, },
access: {
permission: &Permission::Privilege(&["system", "status"], PRIV_SYS_AUDIT, false),
},
)] )]
/// Read node stats /// Read node stats
fn get_node_stats( fn get_node_stats(

View File

@ -4,12 +4,13 @@ use anyhow::{bail, Error};
use serde_json::{json, Value}; use serde_json::{json, Value};
use proxmox::{sortable, identity, list_subdirs_api_method}; use proxmox::{sortable, identity, list_subdirs_api_method};
use proxmox::api::{api, Router, Permission}; use proxmox::api::{api, Router, Permission, RpcEnvironment};
use proxmox::api::router::SubdirMap; use proxmox::api::router::SubdirMap;
use proxmox::api::schema::*; use proxmox::api::schema::*;
use crate::api2::types::*; use crate::api2::types::*;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY}; use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::server::WorkerTask;
static SERVICE_NAME_LIST: [&str; 7] = [ static SERVICE_NAME_LIST: [&str; 7] = [
"proxmox-backup", "proxmox-backup",
@ -181,31 +182,43 @@ fn get_service_state(
Ok(json_service_state(&service, status)) Ok(json_service_state(&service, status))
} }
fn run_service_command(service: &str, cmd: &str) -> Result<Value, Error> { fn run_service_command(service: &str, cmd: &str, userid: Userid) -> Result<Value, Error> {
// fixme: run background worker (fork_worker) ??? let workerid = format!("srv{}", &cmd);
let cmd = match cmd { let cmd = match cmd {
"start"|"stop"|"restart"=> cmd, "start"|"stop"|"restart"=> cmd.to_string(),
"reload" => "try-reload-or-restart", // some services do not implement reload "reload" => "try-reload-or-restart".to_string(), // some services do not implement reload
_ => bail!("unknown service command '{}'", cmd), _ => bail!("unknown service command '{}'", cmd),
}; };
let service = service.to_string();
if service == "proxmox-backup" && cmd == "stop" { let upid = WorkerTask::new_thread(
bail!("invalid service cmd '{} {}' cannot stop essential service!", service, cmd); &workerid,
} Some(service.clone()),
userid,
false,
move |_worker| {
let real_service_name = real_service_name(service); if service == "proxmox-backup" && cmd == "stop" {
bail!("invalid service cmd '{} {}' cannot stop essential service!", service, cmd);
}
let status = Command::new("systemctl") let real_service_name = real_service_name(&service);
.args(&[cmd, real_service_name])
.status()?;
if !status.success() { let status = Command::new("systemctl")
bail!("systemctl {} failed with {}", cmd, status); .args(&[&cmd, real_service_name])
} .status()?;
Ok(Value::Null) if !status.success() {
bail!("systemctl {} failed with {}", cmd, status);
}
Ok(())
}
)?;
Ok(upid.into())
} }
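run_service_command is now executed inside a worker task, so the API call returns a UPID right away while systemctl does its work in the background. Ignoring the worker plumbing, the background execution boils down to a thread plus std::process (names and error handling simplified here):

    // Sketch: run a service control command on a background thread and hand
    // back a join handle immediately (the real code returns a worker UPID).
    use std::process::Command;
    use std::thread;

    fn run_service_command(
        service: &str,
        cmd: &str,
    ) -> std::io::Result<thread::JoinHandle<Result<(), String>>> {
        let service = service.to_string();
        let cmd = match cmd {
            "start" | "stop" | "restart" => cmd.to_string(),
            "reload" => "try-reload-or-restart".to_string(), // some units have no reload
            other => {
                return Err(std::io::Error::new(
                    std::io::ErrorKind::InvalidInput,
                    format!("unknown service command '{}'", other),
                ))
            }
        };
        Ok(thread::spawn(move || {
            let status = Command::new("systemctl")
                .args(&[&cmd, &service])
                .status()
                .map_err(|e| e.to_string())?;
            if status.success() {
                Ok(())
            } else {
                Err(format!("systemctl {} failed with {}", cmd, status))
            }
        }))
    }

    fn main() {
        // deliberately pass an unknown command so nothing is actually executed
        match run_service_command("cron", "frobnicate") {
            Ok(handle) => { let _ = handle.join(); }
            Err(err) => println!("{}", err),
        }
    }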
#[api( #[api(
@ -228,11 +241,14 @@ fn run_service_command(service: &str, cmd: &str) -> Result<Value, Error> {
fn start_service( fn start_service(
service: String, service: String,
_param: Value, _param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
log::info!("starting service {}", service); log::info!("starting service {}", service);
run_service_command(&service, "start") run_service_command(&service, "start", userid)
} }
#[api( #[api(
@ -255,11 +271,14 @@ fn start_service(
fn stop_service( fn stop_service(
service: String, service: String,
_param: Value, _param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
log::info!("stopping service {}", service); log::info!("stopping service {}", service);
run_service_command(&service, "stop") run_service_command(&service, "stop", userid)
} }
#[api( #[api(
@ -282,15 +301,18 @@ fn stop_service(
fn restart_service( fn restart_service(
service: String, service: String,
_param: Value, _param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
log::info!("re-starting service {}", service); log::info!("re-starting service {}", service);
if &service == "proxmox-backup-proxy" { if &service == "proxmox-backup-proxy" {
// special case, avoid aborting running tasks // special case, avoid aborting running tasks
run_service_command(&service, "reload") run_service_command(&service, "reload", userid)
} else { } else {
run_service_command(&service, "restart") run_service_command(&service, "restart", userid)
} }
} }
@ -314,11 +336,14 @@ fn restart_service(
fn reload_service( fn reload_service(
service: String, service: String,
_param: Value, _param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
log::info!("reloading service {}", service); log::info!("reloading service {}", service);
run_service_command(&service, "reload") run_service_command(&service, "reload", userid)
} }

View File

@ -1,11 +1,12 @@
use anyhow::{Error}; use anyhow::{Error};
use serde_json::{json, Value}; use serde_json::{json, Value};
use proxmox::api::{api, Router, Permission}; use proxmox::api::{api, Router, RpcEnvironment, Permission};
use crate::tools; use crate::tools;
use crate::config::acl::PRIV_SYS_AUDIT; use crate::config::acl::PRIV_SYS_AUDIT;
use crate::api2::types::NODE_SCHEMA; use crate::config::cached_user_info::CachedUserInfo;
use crate::api2::types::{NODE_SCHEMA, Userid};
#[api( #[api(
input: { input: {
@ -28,7 +29,7 @@ use crate::api2::types::NODE_SCHEMA;
}, },
serverid: { serverid: {
type: String, type: String,
description: "The unique server ID.", description: "The unique server ID, if permitted to access.",
}, },
url: { url: {
type: String, type: String,
@ -37,18 +38,29 @@ use crate::api2::types::NODE_SCHEMA;
}, },
}, },
access: { access: {
permission: &Permission::Privilege(&[], PRIV_SYS_AUDIT, false), permission: &Permission::Anybody,
}, },
)] )]
/// Read subscription info. /// Read subscription info.
fn get_subscription(_param: Value) -> Result<Value, Error> { fn get_subscription(
_param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &[]);
let server_id = if (user_privs & PRIV_SYS_AUDIT) != 0 {
tools::get_hardware_address()?
} else {
"hidden".to_string()
};
let url = "https://www.proxmox.com/en/proxmox-backup-server/pricing"; let url = "https://www.proxmox.com/en/proxmox-backup-server/pricing";
Ok(json!({ Ok(json!({
"status": "NotFound", "status": "NotFound",
"message": "There is no subscription key", "message": "There is no subscription key",
"serverid": tools::get_hardware_address()?, "serverid": server_id,
"url": url, "url": url,
})) }))
} }

View File

@ -10,7 +10,7 @@ use proxmox::{identity, list_subdirs_api_method, sortable};
use crate::tools; use crate::tools;
use crate::api2::types::*; use crate::api2::types::*;
use crate::server::{self, UPID}; use crate::server::{self, UPID, TaskState, TaskListInfoIterator};
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY}; use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo; use crate::config::cached_user_info::CachedUserInfo;
@ -105,9 +105,9 @@ async fn get_task_status(
if crate::server::worker_is_active(&upid).await? { if crate::server::worker_is_active(&upid).await? {
result["status"] = Value::from("running"); result["status"] = Value::from("running");
} else { } else {
let exitstatus = crate::server::upid_read_status(&upid).unwrap_or(String::from("unknown")); let exitstatus = crate::server::upid_read_status(&upid).unwrap_or(TaskState::Unknown { endtime: 0 });
result["status"] = Value::from("stopped"); result["status"] = Value::from("stopped");
result["exitstatus"] = Value::from(exitstatus); result["exitstatus"] = Value::from(exitstatus.to_string());
}; };
Ok(result) Ok(result)
@ -303,6 +303,7 @@ pub fn list_tasks(
limit: u64, limit: u64,
errors: bool, errors: bool,
running: bool, running: bool,
userfilter: Option<String>,
param: Value, param: Value,
mut rpcenv: &mut dyn RpcEnvironment, mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<TaskListItem>, Error> { ) -> Result<Vec<TaskListItem>, Error> {
@ -315,56 +316,55 @@ pub fn list_tasks(
let store = param["store"].as_str(); let store = param["store"].as_str();
let userfilter = param["userfilter"].as_str(); let list = TaskListInfoIterator::new(running)?;
let list = server::read_task_list()?; let result: Vec<TaskListItem> = list
.take_while(|info| !info.is_err())
.filter_map(|info| {
let info = match info {
Ok(info) => info,
Err(_) => return None,
};
let mut result = vec![]; if !list_all && info.upid.userid != userid { return None; }
let mut count = 0; if let Some(userid) = &userfilter {
if !info.upid.userid.as_str().contains(userid) { return None; }
for info in list {
if !list_all && info.upid.userid != userid { continue; }
if let Some(userid) = userfilter {
if !info.upid.userid.as_str().contains(userid) { continue; }
} }
if let Some(store) = store { if let Some(store) = store {
// Note: useful to select all tasks spawned by proxmox-backup-client // Note: useful to select all tasks spawned by proxmox-backup-client
let worker_id = match &info.upid.worker_id { let worker_id = match &info.upid.worker_id {
Some(w) => w, Some(w) => w,
None => continue, // skip None => return None, // skip
}; };
if info.upid.worker_type == "backup" || info.upid.worker_type == "restore" || if info.upid.worker_type == "backup" || info.upid.worker_type == "restore" ||
info.upid.worker_type == "prune" info.upid.worker_type == "prune"
{ {
let prefix = format!("{}_", store); let prefix = format!("{}_", store);
if !worker_id.starts_with(&prefix) { continue; } if !worker_id.starts_with(&prefix) { return None; }
} else if info.upid.worker_type == "garbage_collection" { } else if info.upid.worker_type == "garbage_collection" {
if worker_id != store { continue; } if worker_id != store { return None; }
} else { } else {
continue; // skip return None; // skip
} }
} }
if let Some(ref state) = info.state { match info.state {
if running { continue; } Some(_) if running => return None,
if errors && state.1 == "OK" { Some(crate::server::TaskState::OK { .. }) if errors => return None,
continue; _ => {},
}
} }
if (count as u64) < start { Some(info.into())
count += 1; }).skip(start as usize)
continue; .take(limit as usize)
} else { .collect();
count += 1;
}
if (result.len() as u64) < limit { result.push(info.into()); }; let mut count = result.len() + start as usize;
if result.len() > 0 && result.len() >= limit as usize { // we have a 'virtual' entry as long as we have any new
count += 1;
} }
rpcenv["total"] = Value::from(count); rpcenv["total"] = Value::from(count);

View File

@ -1,4 +1,3 @@
use chrono::prelude::*;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use serde_json::{json, Value}; use serde_json::{json, Value};
@ -57,10 +56,11 @@ fn read_etc_localtime() -> Result<String, Error> {
)] )]
/// Read server time and time zone settings. /// Read server time and time zone settings.
fn get_time(_param: Value) -> Result<Value, Error> { fn get_time(_param: Value) -> Result<Value, Error> {
let datetime = Local::now(); let time = proxmox::tools::time::epoch_i64();
let offset = datetime.offset(); let tm = proxmox::tools::time::localtime(time)?;
let time = datetime.timestamp(); let offset = tm.tm_gmtoff;
let localtime = time + (offset.fix().local_minus_utc() as i64);
let localtime = time + offset;
Ok(json!({ Ok(json!({
"timezone": read_etc_localtime()?, "timezone": read_etc_localtime()?,

View File

@ -2,6 +2,7 @@
use std::sync::{Arc}; use std::sync::{Arc};
use anyhow::{format_err, Error}; use anyhow::{format_err, Error};
use futures::{select, future::FutureExt};
use proxmox::api::api; use proxmox::api::api;
use proxmox::api::{ApiMethod, Router, RpcEnvironment, Permission}; use proxmox::api::{ApiMethod, Router, RpcEnvironment, Permission};
@ -12,6 +13,8 @@ use crate::client::{HttpClient, HttpClientOptions, BackupRepository, pull::pull_
use crate::api2::types::*; use crate::api2::types::*;
use crate::config::{ use crate::config::{
remote, remote,
sync::SyncJobConfig,
jobstate::Job,
acl::{PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_PRUNE, PRIV_REMOTE_READ}, acl::{PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_PRUNE, PRIV_REMOTE_READ},
cached_user_info::CachedUserInfo, cached_user_info::CachedUserInfo,
}; };
@ -52,16 +55,79 @@ pub async fn get_pull_parameters(
.password(Some(remote.password.clone())) .password(Some(remote.password.clone()))
.fingerprint(remote.fingerprint.clone()); .fingerprint(remote.fingerprint.clone());
let client = HttpClient::new(&remote.host, &remote.userid, options)?; let src_repo = BackupRepository::new(Some(remote.userid.clone()), Some(remote.host.clone()), remote.port, remote_store.to_string());
let client = HttpClient::new(&src_repo.host(), src_repo.port(), &src_repo.user(), options)?;
let _auth_info = client.login() // make sure we can auth let _auth_info = client.login() // make sure we can auth
.await .await
.map_err(|err| format_err!("remote connection to '{}' failed - {}", remote.host, err))?; .map_err(|err| format_err!("remote connection to '{}' failed - {}", remote.host, err))?;
let src_repo = BackupRepository::new(Some(remote.userid), Some(remote.host), remote_store.to_string());
Ok((client, src_repo, tgt_store)) Ok((client, src_repo, tgt_store))
} }
pub fn do_sync_job(
mut job: Job,
sync_job: SyncJobConfig,
userid: &Userid,
schedule: Option<String>,
) -> Result<String, Error> {
let job_id = job.jobname().to_string();
let worker_type = job.jobtype().to_string();
let upid_str = WorkerTask::spawn(
&worker_type,
Some(job.jobname().to_string()),
userid.clone(),
false,
move |worker| async move {
job.start(&worker.upid().to_string())?;
let worker2 = worker.clone();
let worker_future = async move {
let delete = sync_job.remove_vanished.unwrap_or(true);
let (client, src_repo, tgt_store) = get_pull_parameters(&sync_job.store, &sync_job.remote, &sync_job.remote_store).await?;
worker.log(format!("Starting datastore sync job '{}'", job_id));
if let Some(event_str) = schedule {
worker.log(format!("task triggered by schedule '{}'", event_str));
}
worker.log(format!("Sync datastore '{}' from '{}/{}'",
sync_job.store, sync_job.remote, sync_job.remote_store));
crate::client::pull::pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, Userid::backup_userid().clone()).await?;
worker.log(format!("sync job '{}' end", &job_id));
Ok(())
};
let mut abort_future = worker2.abort_future().map(|_| Err(format_err!("sync aborted")));
let res = select!{
worker = worker_future.fuse() => worker,
abort = abort_future => abort,
};
let status = worker2.create_state(&res);
match job.finish(status) {
Ok(_) => {},
Err(err) => {
eprintln!("could not finish job state: {}", err);
}
}
res
})?;
Ok(upid_str)
}
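do_sync_job races the actual pull against the worker's abort future, so a user-requested abort becomes an ordinary error that still passes through job.finish(). A freestanding sketch of that select pattern with the futures crate (assumed as a dependency); both futures here are dummies:

    // Sketch: race a long-running job against an abort signal.
    use futures::{future::FutureExt, pin_mut, select};

    async fn run_job() -> Result<&'static str, String> {
        // stand-in for the real sync work
        Ok("job finished")
    }

    async fn abort_signal() -> Result<&'static str, String> {
        // stand-in for worker.abort_future(); never fires in this sketch
        futures::future::pending::<()>().await;
        Err("aborted".to_string())
    }

    fn main() {
        let result = futures::executor::block_on(async {
            let job = run_job().fuse();
            let abort = abort_signal().fuse();
            pin_mut!(job, abort);
            select! {
                res = job => res,
                res = abort => res,
            }
        });
        println!("{:?}", result);
    }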
#[api( #[api(
input: { input: {
properties: { properties: {
@ -111,7 +177,13 @@ async fn pull (
worker.log(format!("sync datastore '{}' start", store)); worker.log(format!("sync datastore '{}' start", store));
pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, userid).await?; let pull_future = pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, userid);
let future = select!{
success = pull_future.fuse() => success,
abort = worker.abort_future().map(|_| Err(format_err!("pull aborted"))) => abort,
};
let _ = future?;
worker.log(format!("sync datastore '{}' end", store)); worker.log(format!("sync datastore '{}' end", store));

View File

@ -1,4 +1,3 @@
//use chrono::{Local, TimeZone};
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use futures::*; use futures::*;
use hyper::header::{self, HeaderValue, UPGRADE}; use hyper::header::{self, HeaderValue, UPGRADE};
@ -83,12 +82,12 @@ fn upgrade_to_backup_reader_protocol(
let env_type = rpcenv.env_type(); let env_type = rpcenv.env_type();
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let path = datastore.base_path(); let path = datastore.base_path();
//let files = BackupInfo::list_files(&path, &backup_dir)?; //let files = BackupInfo::list_files(&path, &backup_dir)?;
let worker_id = format!("{}_{}_{}_{:08X}", store, backup_type, backup_id, backup_dir.backup_time().timestamp()); let worker_id = format!("{}_{}_{}_{:08X}", store, backup_type, backup_id, backup_dir.backup_time());
WorkerTask::spawn("reader", Some(worker_id), userid.clone(), true, move |worker| { WorkerTask::spawn("reader", Some(worker_id), userid.clone(), true, move |worker| {
let mut env = ReaderEnvironment::new( let mut env = ReaderEnvironment::new(
@ -121,6 +120,7 @@ fn upgrade_to_backup_reader_protocol(
let window_size = 32*1024*1024; // max = (1 << 31) - 2 let window_size = 32*1024*1024; // max = (1 << 31) - 2
http.http2_initial_stream_window_size(window_size); http.http2_initial_stream_window_size(window_size);
http.http2_initial_connection_window_size(window_size); http.http2_initial_connection_window_size(window_size);
http.http2_max_frame_size(4*1024*1024);
http.serve_connection(conn, service) http.serve_connection(conn, service)
.map_err(Error::from) .map_err(Error::from)
@ -229,8 +229,7 @@ fn download_chunk(
env.debug(format!("download chunk {:?}", path)); env.debug(format!("download chunk {:?}", path));
let data = tokio::fs::read(path) let data = tools::runtime::block_in_place(|| std::fs::read(path))
.await
.map_err(move |err| http_err!(BAD_REQUEST, "reading file {:?} failed: {}", path2, err))?; .map_err(move |err| http_err!(BAD_REQUEST, "reading file {:?} failed: {}", path2, err))?;
let body = Body::from(data); let body = Body::from(data);
@ -287,7 +286,7 @@ fn download_chunk_old(
pub const API_METHOD_SPEEDTEST: ApiMethod = ApiMethod::new( pub const API_METHOD_SPEEDTEST: ApiMethod = ApiMethod::new(
&ApiHandler::AsyncHttp(&speedtest), &ApiHandler::AsyncHttp(&speedtest),
&ObjectSchema::new("Test 4M block download speed.", &[]) &ObjectSchema::new("Test 1M block download speed.", &[])
); );
fn speedtest( fn speedtest(

View File

@ -23,7 +23,6 @@ use crate::api2::types::{
use crate::server; use crate::server;
use crate::backup::{DataStore}; use crate::backup::{DataStore};
use crate::config::datastore; use crate::config::datastore;
use crate::tools::epoch_now_f64;
use crate::tools::statistics::{linear_regression}; use crate::tools::statistics::{linear_regression};
use crate::config::cached_user_info::CachedUserInfo; use crate::config::cached_user_info::CachedUserInfo;
use crate::config::acl::{ use crate::config::acl::{
@ -74,6 +73,9 @@ use crate::config::acl::{
}, },
}, },
}, },
access: {
permission: &Permission::Anybody,
},
)] )]
/// List Datastore usages and estimates /// List Datastore usages and estimates
fn datastore_status( fn datastore_status(
@ -107,7 +109,7 @@ fn datastore_status(
}); });
let rrd_dir = format!("datastore/{}", store); let rrd_dir = format!("datastore/{}", store);
let now = epoch_now_f64()?; let now = proxmox::tools::time::epoch_f64();
let rrd_resolution = RRDTimeFrameResolution::Month; let rrd_resolution = RRDTimeFrameResolution::Month;
let rrd_mode = RRDMode::Average; let rrd_mode = RRDMode::Average;
@ -180,7 +182,7 @@ fn datastore_status(
input: { input: {
properties: { properties: {
since: { since: {
type: u64, type: i64,
description: "Only list tasks since this UNIX epoch.", description: "Only list tasks since this UNIX epoch.",
optional: true, optional: true,
}, },
@ -198,6 +200,7 @@ fn datastore_status(
)] )]
/// List tasks. /// List tasks.
pub fn list_tasks( pub fn list_tasks(
since: Option<i64>,
_param: Value, _param: Value,
rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<TaskListItem>, Error> { ) -> Result<Vec<TaskListItem>, Error> {
@ -207,13 +210,28 @@ pub fn list_tasks(
let user_privs = user_info.lookup_privs(&userid, &["system", "tasks"]); let user_privs = user_info.lookup_privs(&userid, &["system", "tasks"]);
let list_all = (user_privs & PRIV_SYS_AUDIT) != 0; let list_all = (user_privs & PRIV_SYS_AUDIT) != 0;
let since = since.unwrap_or_else(|| 0);
// TODO: replace with call that gets all task since 'since' epoch let list: Vec<TaskListItem> = server::TaskListInfoIterator::new(false)?
let list: Vec<TaskListItem> = server::read_task_list()? .take_while(|info| {
.into_iter() match info {
.map(TaskListItem::from) Ok(info) => info.upid.starttime > since,
.filter(|entry| list_all || entry.user == userid) Err(_) => false
.collect(); }
})
.filter_map(|info| {
match info {
Ok(info) => {
if list_all || info.upid.userid == userid {
Some(Ok(TaskListItem::from(info)))
} else {
None
}
}
Err(err) => Some(Err(err))
}
})
.collect::<Result<Vec<TaskListItem>, Error>>()?;
Ok(list.into()) Ok(list.into())
} }
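
The rewritten list_tasks streams entries instead of reading the whole task list: it stops at the first entry older than `since`, filters by permission, and lets any iterator error abort the collection. A sketch of that pattern with a simplified task type (TaskListInfoIterator itself is not reproduced here):

use anyhow::Error;

struct TaskEntry { starttime: i64, user: String }

fn filter_tasks(
    iter: impl Iterator<Item = Result<TaskEntry, Error>>,
    since: i64,
    list_all: bool,
    userid: &str,
) -> Result<Vec<TaskEntry>, Error> {
    iter
        // entries arrive newest-first; stop at the first one older than `since`
        .take_while(|item| match item {
            Ok(task) => task.starttime > since,
            Err(_) => false,
        })
        // drop entries the caller may not see, keep errors so they abort the collect
        .filter_map(|item| match item {
            Ok(task) if list_all || task.user == userid => Some(Ok(task)),
            Ok(_) => None,
            Err(err) => Some(Err(err)),
        })
        .collect()
}

fn main() -> Result<(), Error> {
    let entries = vec![
        Ok(TaskEntry { starttime: 200, user: "root@pam".into() }),
        Ok(TaskEntry { starttime: 100, user: "user@pbs".into() }),
    ];
    let visible = filter_tasks(entries.into_iter(), 50, false, "root@pam")?;
    assert_eq!(visible.len(), 1);
    Ok(())
}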

View File

@ -3,9 +3,10 @@ use serde::{Deserialize, Serialize};
use proxmox::api::{api, schema::*}; use proxmox::api::{api, schema::*};
use proxmox::const_regex; use proxmox::const_regex;
use proxmox::{IPRE, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32}; use proxmox::{IPRE, IPRE_BRACKET, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};
use crate::backup::CryptMode; use crate::backup::CryptMode;
use crate::server::UPID;
#[macro_use] #[macro_use]
mod macros; mod macros;
@ -29,7 +30,7 @@ pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {
}); });
macro_rules! DNS_LABEL { () => (r"(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)") } macro_rules! DNS_LABEL { () => (r"(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-]*[a-zA-Z0-9])?)") }
macro_rules! DNS_NAME { () => (concat!(r"(?:", DNS_LABEL!() , r"\.)*", DNS_LABEL!())) } macro_rules! DNS_NAME { () => (concat!(r"(?:(?:", DNS_LABEL!() , r"\.)*", DNS_LABEL!(), ")")) }
macro_rules! CIDR_V4_REGEX_STR { () => (concat!(r"(?:", IPV4RE!(), r"/\d{1,2})$")) } macro_rules! CIDR_V4_REGEX_STR { () => (concat!(r"(?:", IPV4RE!(), r"/\d{1,2})$")) }
macro_rules! CIDR_V6_REGEX_STR { () => (concat!(r"(?:", IPV6RE!(), r"/\d{1,3})$")) } macro_rules! CIDR_V6_REGEX_STR { () => (concat!(r"(?:", IPV6RE!(), r"/\d{1,3})$")) }
@ -62,9 +63,9 @@ const_regex!{
pub DNS_NAME_REGEX = concat!(r"^", DNS_NAME!(), r"$"); pub DNS_NAME_REGEX = concat!(r"^", DNS_NAME!(), r"$");
pub DNS_NAME_OR_IP_REGEX = concat!(r"^", DNS_NAME!(), "|", IPRE!(), r"$"); pub DNS_NAME_OR_IP_REGEX = concat!(r"^(?:", DNS_NAME!(), "|", IPRE!(), r")$");
pub BACKUP_REPO_URL_REGEX = concat!(r"^^(?:(?:(", USER_ID_REGEX_STR!(), ")@)?(", DNS_NAME!(), "|", IPRE!() ,"):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$"); pub BACKUP_REPO_URL_REGEX = concat!(r"^^(?:(?:(", USER_ID_REGEX_STR!(), ")@)?(", DNS_NAME!(), "|", IPRE_BRACKET!() ,"):)?(?:([0-9]{1,5}):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$");
pub CERT_FINGERPRINT_SHA256_REGEX = r"^(?:[0-9a-fA-F][0-9a-fA-F])(?::[0-9a-fA-F][0-9a-fA-F]){31}$"; pub CERT_FINGERPRINT_SHA256_REGEX = r"^(?:[0-9a-fA-F][0-9a-fA-F])(?::[0-9a-fA-F][0-9a-fA-F]){31}$";
@ -301,6 +302,11 @@ pub const PRUNE_SCHEDULE_SCHEMA: Schema = StringSchema::new(
.format(&ApiStringFormat::VerifyFn(crate::tools::systemd::time::verify_calendar_event)) .format(&ApiStringFormat::VerifyFn(crate::tools::systemd::time::verify_calendar_event))
.schema(); .schema();
pub const VERIFY_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run verify job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(crate::tools::systemd::time::verify_calendar_event))
.schema();
pub const REMOTE_ID_SCHEMA: Schema = StringSchema::new("Remote ID.") pub const REMOTE_ID_SCHEMA: Schema = StringSchema::new("Remote ID.")
.format(&PROXMOX_SAFE_ID_FORMAT) .format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3) .min_length(3)
@ -379,6 +385,36 @@ pub struct GroupListItem {
pub owner: Option<Userid>, pub owner: Option<Userid>,
} }
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Result of a verify operation.
pub enum VerifyState {
/// Verification was successful
Ok,
/// Verification reported one or more errors
Failed,
}
#[api(
properties: {
upid: {
schema: UPID_SCHEMA
},
state: {
type: VerifyState
},
},
)]
#[derive(Serialize, Deserialize)]
/// Task properties.
pub struct SnapshotVerifyState {
/// UPID of the verify task
pub upid: UPID,
/// State of the verification. Enum.
pub state: VerifyState,
}
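
With the lowercase rename above, VerifyState serializes as "ok"/"failed", and the snapshot list items gain an optional `verification` field that is omitted entirely when a snapshot was never verified. A stripped-down sketch of the resulting JSON (UPID reduced to a plain placeholder string; the real structs also carry the #[api] schema attributes):

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
enum VerifyState { Ok, Failed }

#[derive(Serialize, Deserialize)]
struct SnapshotVerifyState { upid: String, state: VerifyState }

#[derive(Serialize, Deserialize)]
struct SnapshotListItem {
    #[serde(skip_serializing_if = "Option::is_none")]
    verification: Option<SnapshotVerifyState>,
}

fn main() -> Result<(), serde_json::Error> {
    let verified = SnapshotListItem {
        verification: Some(SnapshotVerifyState { upid: "UPID:demo".into(), state: VerifyState::Failed }),
    };
    assert_eq!(
        serde_json::to_string(&verified)?,
        r#"{"verification":{"upid":"UPID:demo","state":"failed"}}"#
    );
    // a snapshot that was never verified omits the field entirely
    assert_eq!(serde_json::to_string(&SnapshotListItem { verification: None })?, "{}");
    Ok(())
}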
#[api( #[api(
properties: { properties: {
"backup-type": { "backup-type": {
@ -390,6 +426,14 @@ pub struct GroupListItem {
"backup-time": { "backup-time": {
schema: BACKUP_TIME_SCHEMA, schema: BACKUP_TIME_SCHEMA,
}, },
comment: {
schema: SINGLE_LINE_COMMENT_SCHEMA,
optional: true,
},
verification: {
type: SnapshotVerifyState,
optional: true,
},
files: { files: {
items: { items: {
schema: BACKUP_ARCHIVE_NAME_SCHEMA schema: BACKUP_ARCHIVE_NAME_SCHEMA
@ -411,6 +455,9 @@ pub struct SnapshotListItem {
/// The first line from manifest "notes" /// The first line from manifest "notes"
#[serde(skip_serializing_if="Option::is_none")] #[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>, pub comment: Option<String>,
/// The result of the last run verify task
#[serde(skip_serializing_if="Option::is_none")]
pub verification: Option<SnapshotVerifyState>,
/// List of contained archive files. /// List of contained archive files.
pub files: Vec<BackupContent>, pub files: Vec<BackupContent>,
/// Overall snapshot size (sum of all archive sizes). /// Overall snapshot size (sum of all archive sizes).
@ -528,6 +575,8 @@ pub struct GarbageCollectionStatus {
pub pending_bytes: u64, pub pending_bytes: u64,
/// Number of pending chunks (pending removal - kept for safety). /// Number of pending chunks (pending removal - kept for safety).
pub pending_chunks: usize, pub pending_chunks: usize,
/// Number of chunks marked as .bad by verify that have been removed by GC.
pub removed_bad: usize,
} }
impl Default for GarbageCollectionStatus { impl Default for GarbageCollectionStatus {
@ -542,6 +591,7 @@ impl Default for GarbageCollectionStatus {
removed_chunks: 0, removed_chunks: 0,
pending_bytes: 0, pending_bytes: 0,
pending_chunks: 0, pending_chunks: 0,
removed_bad: 0,
} }
} }
} }
@ -595,7 +645,7 @@ impl From<crate::server::TaskListInfo> for TaskListItem {
fn from(info: crate::server::TaskListInfo) -> Self { fn from(info: crate::server::TaskListInfo) -> Self {
let (endtime, status) = info let (endtime, status) = info
.state .state
.map_or_else(|| (None, None), |(a,b)| (Some(a), Some(b))); .map_or_else(|| (None, None), |a| (Some(a.endtime()), Some(a.to_string())));
TaskListItem { TaskListItem {
upid: info.upid_str, upid: info.upid_str,
@ -654,7 +704,7 @@ pub enum LinuxBondMode {
/// Broadcast policy /// Broadcast policy
broadcast = 3, broadcast = 3,
/// IEEE 802.3ad Dynamic link aggregation /// IEEE 802.3ad Dynamic link aggregation
//#[serde(rename = "802.3ad")] #[serde(rename = "802.3ad")]
ieee802_3ad = 4, ieee802_3ad = 4,
/// Adaptive transmit load balancing /// Adaptive transmit load balancing
balance_tlb = 5, balance_tlb = 5,
@ -662,6 +712,23 @@ pub enum LinuxBondMode {
balance_alb = 6, balance_alb = 6,
} }
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
#[allow(non_camel_case_types)]
#[repr(u8)]
/// Bond Transmit Hash Policy for LACP (802.3ad)
pub enum BondXmitHashPolicy {
/// Layer 2
layer2 = 0,
/// Layer 2+3
#[serde(rename = "layer2+3")]
layer2_3 = 1,
/// Layer 3+4
#[serde(rename = "layer3+4")]
layer3_4 = 2,
}
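
The explicit serde renames make the enum round-trip through the exact strings the Linux bonding driver uses for xmit_hash_policy. A small standalone sketch of that behavior (the real enum additionally carries #[api()] and #[repr(u8)]):

use serde::{Deserialize, Serialize};

#[derive(Debug, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
enum XmitHashPolicy {
    Layer2,
    #[serde(rename = "layer2+3")]
    Layer2Plus3,
    #[serde(rename = "layer3+4")]
    Layer3Plus4,
}

fn main() -> Result<(), serde_json::Error> {
    let json = serde_json::to_string(&XmitHashPolicy::Layer3Plus4)?;
    assert_eq!(json, r#""layer3+4""#);
    let parsed: XmitHashPolicy = serde_json::from_str(&json)?;
    assert_eq!(parsed, XmitHashPolicy::Layer3Plus4);
    Ok(())
}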
#[api()] #[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)] #[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")] #[serde(rename_all = "lowercase")]
@ -767,7 +834,15 @@ pub const NETWORK_INTERFACE_LIST_SCHEMA: Schema = StringSchema::new(
bond_mode: { bond_mode: {
type: LinuxBondMode, type: LinuxBondMode,
optional: true, optional: true,
} },
"bond-primary": {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
optional: true,
},
bond_xmit_hash_policy: {
type: BondXmitHashPolicy,
optional: true,
},
} }
)] )]
#[derive(Debug, Serialize, Deserialize)] #[derive(Debug, Serialize, Deserialize)]
@ -824,6 +899,10 @@ pub struct Interface {
pub slaves: Option<Vec<String>>, pub slaves: Option<Vec<String>>,
#[serde(skip_serializing_if="Option::is_none")] #[serde(skip_serializing_if="Option::is_none")]
pub bond_mode: Option<LinuxBondMode>, pub bond_mode: Option<LinuxBondMode>,
#[serde(skip_serializing_if="Option::is_none")]
#[serde(rename = "bond-primary")]
pub bond_primary: Option<String>,
pub bond_xmit_hash_policy: Option<BondXmitHashPolicy>,
} }
// Regression tests // Regression tests

View File

@ -9,7 +9,7 @@
//! with `String`, meaning you can only make references to it. //! with `String`, meaning you can only make references to it.
//! * [`Realm`]: an owned realm (`String` equivalent). //! * [`Realm`]: an owned realm (`String` equivalent).
//! * [`RealmRef`]: a borrowed realm (`str` equivalent). //! * [`RealmRef`]: a borrowed realm (`str` equivalent).
//! * [`Userid`]: an owned user id (`"user@realm"`). Note that this does not have a separte //! * [`Userid`]: an owned user id (`"user@realm"`). Note that this does not have a separate
//! borrowed type. //! borrowed type.
//! //!
//! Note that `Username`s are not unique, therefore they do not implement `Eq` and cannot be //! Note that `Username`s are not unique, therefore they do not implement `Eq` and cannot be

View File

@ -11,7 +11,6 @@ use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
use proxmox::try_block; use proxmox::try_block;
use crate::api2::types::Userid; use crate::api2::types::Userid;
use crate::tools::epoch_now_u64;
fn compute_csrf_secret_digest( fn compute_csrf_secret_digest(
timestamp: i64, timestamp: i64,
@ -32,7 +31,7 @@ pub fn assemble_csrf_prevention_token(
userid: &Userid, userid: &Userid,
) -> String { ) -> String {
let epoch = epoch_now_u64().unwrap() as i64; let epoch = proxmox::tools::time::epoch_i64();
let digest = compute_csrf_secret_digest(epoch, secret, userid); let digest = compute_csrf_secret_digest(epoch, secret, userid);
@ -69,7 +68,7 @@ pub fn verify_csrf_prevention_token(
bail!("invalid signature."); bail!("invalid signature.");
} }
let now = epoch_now_u64()? as i64; let now = proxmox::tools::time::epoch_i64();
let age = now - ttime; let age = now - ttime;
if age < min_age { if age < min_age {

View File

@ -120,6 +120,8 @@ macro_rules! PROXMOX_BACKUP_READER_PROTOCOL_ID_V1 {
/// Unix system user used by proxmox-backup-proxy /// Unix system user used by proxmox-backup-proxy
pub const BACKUP_USER_NAME: &str = "backup"; pub const BACKUP_USER_NAME: &str = "backup";
/// Unix system group used by proxmox-backup-proxy
pub const BACKUP_GROUP_NAME: &str = "backup";
/// Return User info for the 'backup' user (``getpwnam_r(3)``) /// Return User info for the 'backup' user (``getpwnam_r(3)``)
pub fn backup_user() -> Result<nix::unistd::User, Error> { pub fn backup_user() -> Result<nix::unistd::User, Error> {
@ -129,6 +131,14 @@ pub fn backup_user() -> Result<nix::unistd::User, Error> {
} }
} }
/// Return Group info for the 'backup' group (``getgrnam(3)``)
pub fn backup_group() -> Result<nix::unistd::Group, Error> {
match nix::unistd::Group::from_name(BACKUP_GROUP_NAME)? {
Some(group) => Ok(group),
None => bail!("Unable to lookup backup user."),
}
}
mod file_formats; mod file_formats;
pub use file_formats::*; pub use file_formats::*;

View File

@ -4,8 +4,6 @@ use anyhow::{bail, format_err, Error};
use regex::Regex; use regex::Regex;
use std::os::unix::io::RawFd; use std::os::unix::io::RawFd;
use chrono::{DateTime, TimeZone, SecondsFormat, Utc};
use std::path::{PathBuf, Path}; use std::path::{PathBuf, Path};
use lazy_static::lazy_static; use lazy_static::lazy_static;
@ -45,6 +43,31 @@ pub struct BackupGroup {
backup_id: String, backup_id: String,
} }
impl std::cmp::Ord for BackupGroup {
fn cmp(&self, other: &Self) -> std::cmp::Ordering {
let type_order = self.backup_type.cmp(&other.backup_type);
if type_order != std::cmp::Ordering::Equal {
return type_order;
}
// try to compare IDs numerically
let id_self = self.backup_id.parse::<u64>();
let id_other = other.backup_id.parse::<u64>();
match (id_self, id_other) {
(Ok(id_self), Ok(id_other)) => id_self.cmp(&id_other),
(Ok(_), Err(_)) => std::cmp::Ordering::Less,
(Err(_), Ok(_)) => std::cmp::Ordering::Greater,
_ => self.backup_id.cmp(&other.backup_id),
}
}
}
impl std::cmp::PartialOrd for BackupGroup {
fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
Some(self.cmp(other))
}
}
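
The Ord implementation above compares backup IDs numerically when both sides parse as integers, so "9" sorts before "100" instead of after it, and numeric IDs sort before non-numeric ones. A compact sketch of just that comparison rule:

use std::cmp::Ordering;

fn cmp_backup_ids(a: &str, b: &str) -> Ordering {
    match (a.parse::<u64>(), b.parse::<u64>()) {
        (Ok(a), Ok(b)) => a.cmp(&b),        // both numeric: compare as numbers
        (Ok(_), Err(_)) => Ordering::Less,  // numeric IDs come first
        (Err(_), Ok(_)) => Ordering::Greater,
        _ => a.cmp(b),                      // fall back to string order
    }
}

fn main() {
    let mut ids = vec!["100", "9", "host-a", "20"];
    ids.sort_by(|a, b| cmp_backup_ids(a, b));
    assert_eq!(ids, vec!["9", "20", "100", "host-a"]);
}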
impl BackupGroup { impl BackupGroup {
pub fn new<T: Into<String>, U: Into<String>>(backup_type: T, backup_id: U) -> Self { pub fn new<T: Into<String>, U: Into<String>>(backup_type: T, backup_id: U) -> Self {
@ -80,8 +103,7 @@ impl BackupGroup {
tools::scandir(libc::AT_FDCWD, &path, &BACKUP_DATE_REGEX, |l2_fd, backup_time, file_type| { tools::scandir(libc::AT_FDCWD, &path, &BACKUP_DATE_REGEX, |l2_fd, backup_time, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); } if file_type != nix::dir::Type::Directory { return Ok(()); }
let dt = backup_time.parse::<DateTime<Utc>>()?; let backup_dir = BackupDir::with_rfc3339(&self.backup_type, &self.backup_id, backup_time)?;
let backup_dir = BackupDir::new(self.backup_type.clone(), self.backup_id.clone(), dt.timestamp());
let files = list_backup_files(l2_fd, backup_time)?; let files = list_backup_files(l2_fd, backup_time)?;
list.push(BackupInfo { backup_dir, files }); list.push(BackupInfo { backup_dir, files });
@ -91,7 +113,7 @@ impl BackupGroup {
Ok(list) Ok(list)
} }
pub fn last_successful_backup(&self, base_path: &Path) -> Result<Option<DateTime<Utc>>, Error> { pub fn last_successful_backup(&self, base_path: &Path) -> Result<Option<i64>, Error> {
let mut last = None; let mut last = None;
@ -117,11 +139,11 @@ impl BackupGroup {
} }
} }
let dt = backup_time.parse::<DateTime<Utc>>()?; let timestamp = proxmox::tools::time::parse_rfc3339(backup_time)?;
if let Some(last_dt) = last { if let Some(last_timestamp) = last {
if dt > last_dt { last = Some(dt); } if timestamp > last_timestamp { last = Some(timestamp); }
} else { } else {
last = Some(dt); last = Some(timestamp);
} }
Ok(()) Ok(())
@ -178,45 +200,63 @@ pub struct BackupDir {
/// Backup group /// Backup group
group: BackupGroup, group: BackupGroup,
/// Backup timestamp /// Backup timestamp
backup_time: DateTime<Utc>, backup_time: i64,
// backup_time as rfc3339
backup_time_string: String
} }
impl BackupDir { impl BackupDir {
pub fn new<T, U>(backup_type: T, backup_id: U, timestamp: i64) -> Self pub fn new<T, U>(backup_type: T, backup_id: U, backup_time: i64) -> Result<Self, Error>
where where
T: Into<String>, T: Into<String>,
U: Into<String>, U: Into<String>,
{ {
// Note: makes sure that nanoseconds is 0 let group = BackupGroup::new(backup_type.into(), backup_id.into());
Self { BackupDir::with_group(group, backup_time)
group: BackupGroup::new(backup_type.into(), backup_id.into()),
backup_time: Utc.timestamp(timestamp, 0),
}
} }
pub fn new_with_group(group: BackupGroup, timestamp: i64) -> Self {
Self { group, backup_time: Utc.timestamp(timestamp, 0) } pub fn with_rfc3339<T,U,V>(backup_type: T, backup_id: U, backup_time_string: V) -> Result<Self, Error>
where
T: Into<String>,
U: Into<String>,
V: Into<String>,
{
let backup_time_string = backup_time_string.into();
let backup_time = proxmox::tools::time::parse_rfc3339(&backup_time_string)?;
let group = BackupGroup::new(backup_type.into(), backup_id.into());
Ok(Self { group, backup_time, backup_time_string })
}
pub fn with_group(group: BackupGroup, backup_time: i64) -> Result<Self, Error> {
let backup_time_string = Self::backup_time_to_string(backup_time)?;
Ok(Self { group, backup_time, backup_time_string })
} }
pub fn group(&self) -> &BackupGroup { pub fn group(&self) -> &BackupGroup {
&self.group &self.group
} }
pub fn backup_time(&self) -> DateTime<Utc> { pub fn backup_time(&self) -> i64 {
self.backup_time self.backup_time
} }
pub fn backup_time_string(&self) -> &str {
&self.backup_time_string
}
pub fn relative_path(&self) -> PathBuf { pub fn relative_path(&self) -> PathBuf {
let mut relative_path = self.group.group_path(); let mut relative_path = self.group.group_path();
relative_path.push(Self::backup_time_to_string(self.backup_time)); relative_path.push(self.backup_time_string.clone());
relative_path relative_path
} }
pub fn backup_time_to_string(backup_time: DateTime<Utc>) -> String { pub fn backup_time_to_string(backup_time: i64) -> Result<String, Error> {
backup_time.to_rfc3339_opts(SecondsFormat::Secs, true) // fixme: can this fail? (avoid unwrap)
proxmox::tools::time::epoch_to_rfc3339_utc(backup_time)
} }
} }
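
BackupDir now stores the timestamp as a plain epoch i64 plus its RFC 3339 string, built with the proxmox::tools::time helpers referenced above. A sketch of the round trip, assuming those helpers behave as they are used in this diff:

use anyhow::Error;
use proxmox::tools::time::{epoch_to_rfc3339_utc, parse_rfc3339};

fn main() -> Result<(), Error> {
    let backup_time: i64 = 1_600_000_000;
    let dir_name = epoch_to_rfc3339_utc(backup_time)?; // "2020-09-13T12:26:40Z"
    let parsed = parse_rfc3339(&dir_name)?;            // back to the epoch value
    assert_eq!(parsed, backup_time);
    println!("{} <-> {}", backup_time, dir_name);
    Ok(())
}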
@ -230,9 +270,11 @@ impl std::str::FromStr for BackupDir {
let cap = SNAPSHOT_PATH_REGEX.captures(path) let cap = SNAPSHOT_PATH_REGEX.captures(path)
.ok_or_else(|| format_err!("unable to parse backup snapshot path '{}'", path))?; .ok_or_else(|| format_err!("unable to parse backup snapshot path '{}'", path))?;
let group = BackupGroup::new(cap.get(1).unwrap().as_str(), cap.get(2).unwrap().as_str()); BackupDir::with_rfc3339(
let backup_time = cap.get(3).unwrap().as_str().parse::<DateTime<Utc>>()?; cap.get(1).unwrap().as_str(),
Ok(BackupDir::from((group, backup_time.timestamp()))) cap.get(2).unwrap().as_str(),
cap.get(3).unwrap().as_str(),
)
} }
} }
@ -240,14 +282,7 @@ impl std::fmt::Display for BackupDir {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let backup_type = self.group.backup_type(); let backup_type = self.group.backup_type();
let id = self.group.backup_id(); let id = self.group.backup_id();
let time = Self::backup_time_to_string(self.backup_time); write!(f, "{}/{}/{}", backup_type, id, self.backup_time_string)
write!(f, "{}/{}/{}", backup_type, id, time)
}
}
impl From<(BackupGroup, i64)> for BackupDir {
fn from((group, timestamp): (BackupGroup, i64)) -> Self {
Self { group, backup_time: Utc.timestamp(timestamp, 0) }
} }
} }
@ -305,13 +340,12 @@ impl BackupInfo {
if file_type != nix::dir::Type::Directory { return Ok(()); } if file_type != nix::dir::Type::Directory { return Ok(()); }
tools::scandir(l0_fd, backup_type, &BACKUP_ID_REGEX, |l1_fd, backup_id, file_type| { tools::scandir(l0_fd, backup_type, &BACKUP_ID_REGEX, |l1_fd, backup_id, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); } if file_type != nix::dir::Type::Directory { return Ok(()); }
tools::scandir(l1_fd, backup_id, &BACKUP_DATE_REGEX, |l2_fd, backup_time, file_type| { tools::scandir(l1_fd, backup_id, &BACKUP_DATE_REGEX, |l2_fd, backup_time_string, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); } if file_type != nix::dir::Type::Directory { return Ok(()); }
let dt = backup_time.parse::<DateTime<Utc>>()?; let backup_dir = BackupDir::with_rfc3339(backup_type, backup_id, backup_time_string)?;
let backup_dir = BackupDir::new(backup_type, backup_id, dt.timestamp());
let files = list_backup_files(l2_fd, backup_time)?; let files = list_backup_files(l2_fd, backup_time_string)?;
list.push(BackupInfo { backup_dir, files }); list.push(BackupInfo { backup_dir, files });

View File

@ -5,7 +5,6 @@ use std::io::{Read, Write, Seek, SeekFrom};
use std::os::unix::ffi::OsStrExt; use std::os::unix::ffi::OsStrExt;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use chrono::offset::{TimeZone, Local};
use pathpatterns::{MatchList, MatchType}; use pathpatterns::{MatchList, MatchType};
use proxmox::tools::io::ReadExt; use proxmox::tools::io::ReadExt;
@ -533,17 +532,17 @@ impl <R: Read + Seek> CatalogReader<R> {
self.dump_dir(&path, pos)?; self.dump_dir(&path, pos)?;
} }
CatalogEntryType::File => { CatalogEntryType::File => {
let dt = Local let mut mtime_string = mtime.to_string();
.timestamp_opt(mtime as i64, 0) if let Ok(s) = proxmox::tools::time::strftime_local("%FT%TZ", mtime as i64) {
.single() // chrono docs say timestamp_opt can only be None or Single! mtime_string = s;
.unwrap_or_else(|| Local.timestamp(0, 0)); }
println!( println!(
"{} {:?} {} {}", "{} {:?} {} {}",
etype, etype,
path, path,
size, size,
dt.to_rfc3339_opts(chrono::SecondsFormat::Secs, false), mtime_string,
); );
} }
_ => { _ => {

View File

@ -104,12 +104,11 @@ impl ChunkStore {
} }
let percentage = (i*100)/(64*1024); let percentage = (i*100)/(64*1024);
if percentage != last_percentage { if percentage != last_percentage {
eprintln!("Percentage done: {}", percentage); // eprintln!("ChunkStore::create {}%", percentage);
last_percentage = percentage; last_percentage = percentage;
} }
} }
Self::open(name, base) Self::open(name, base)
} }
@ -187,7 +186,7 @@ impl ChunkStore {
pub fn get_chunk_iterator( pub fn get_chunk_iterator(
&self, &self,
) -> Result< ) -> Result<
impl Iterator<Item = (Result<tools::fs::ReadDirEntry, Error>, usize)> + std::iter::FusedIterator, impl Iterator<Item = (Result<tools::fs::ReadDirEntry, Error>, usize, bool)> + std::iter::FusedIterator,
Error Error
> { > {
use nix::dir::Dir; use nix::dir::Dir;
@ -219,19 +218,21 @@ impl ChunkStore {
Some(Ok(entry)) => { Some(Ok(entry)) => {
// skip files if they're not a hash // skip files if they're not a hash
let bytes = entry.file_name().to_bytes(); let bytes = entry.file_name().to_bytes();
if bytes.len() != 64 { if bytes.len() != 64 && bytes.len() != 64 + ".0.bad".len() {
continue; continue;
} }
if !bytes.iter().all(u8::is_ascii_hexdigit) { if !bytes.iter().take(64).all(u8::is_ascii_hexdigit) {
continue; continue;
} }
return Some((Ok(entry), percentage));
let bad = bytes.ends_with(".bad".as_bytes());
return Some((Ok(entry), percentage, bad));
} }
Some(Err(err)) => { Some(Err(err)) => {
// stop after first error // stop after first error
done = true; done = true;
// and pass the error through: // and pass the error through:
return Some((Err(err), percentage)); return Some((Err(err), percentage, false));
} }
None => (), // open next directory None => (), // open next directory
} }
@ -261,7 +262,7 @@ impl ChunkStore {
// other errors are fatal, so end our iteration // other errors are fatal, so end our iteration
done = true; done = true;
// and pass the error through: // and pass the error through:
return Some((Err(format_err!("unable to read subdir '{}' - {}", subdir, err)), percentage)); return Some((Err(format_err!("unable to read subdir '{}' - {}", subdir, err)), percentage, false));
} }
} }
} }
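
The iterator now also yields chunks that verification renamed to "<digest>.<n>.bad", flagging them via the extra bool in the tuple. A sketch of the filename check in isolation:

// accept plain 64-hex-digit chunk names and chunks renamed to "<digest>.<n>.bad"
fn classify_chunk_name(name: &[u8]) -> Option<bool> {
    // returns Some(is_bad) for valid chunk names, None otherwise
    if name.len() != 64 && name.len() != 64 + ".0.bad".len() {
        return None;
    }
    if !name.iter().take(64).all(u8::is_ascii_hexdigit) {
        return None;
    }
    Some(name.ends_with(b".bad"))
}

fn main() {
    let digest = "a".repeat(64);
    assert_eq!(classify_chunk_name(digest.as_bytes()), Some(false));
    assert_eq!(classify_chunk_name(format!("{}.0.bad", digest).as_bytes()), Some(true));
    assert_eq!(classify_chunk_name(b"not-a-chunk"), None);
}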
@ -280,6 +281,7 @@ impl ChunkStore {
worker: &WorkerTask, worker: &WorkerTask,
) -> Result<(), Error> { ) -> Result<(), Error> {
use nix::sys::stat::fstatat; use nix::sys::stat::fstatat;
use nix::unistd::{unlinkat, UnlinkatFlags};
let mut min_atime = phase1_start_time - 3600*24; // at least 24h (see mount option relatime) let mut min_atime = phase1_start_time - 3600*24; // at least 24h (see mount option relatime)
@ -292,10 +294,10 @@ impl ChunkStore {
let mut last_percentage = 0; let mut last_percentage = 0;
let mut chunk_count = 0; let mut chunk_count = 0;
for (entry, percentage) in self.get_chunk_iterator()? { for (entry, percentage, bad) in self.get_chunk_iterator()? {
if last_percentage != percentage { if last_percentage != percentage {
last_percentage = percentage; last_percentage = percentage;
worker.log(format!("percentage done: {}, chunk count: {}", percentage, chunk_count)); worker.log(format!("percentage done: phase2 {}% (processed {} chunks)", percentage, chunk_count));
} }
worker.fail_on_abort()?; worker.fail_on_abort()?;
@ -321,14 +323,47 @@ impl ChunkStore {
let lock = self.mutex.lock(); let lock = self.mutex.lock();
if let Ok(stat) = fstatat(dirfd, filename, nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW) { if let Ok(stat) = fstatat(dirfd, filename, nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW) {
if stat.st_atime < min_atime { if bad {
// filename validity checked in iterator
let orig_filename = std::ffi::CString::new(&filename.to_bytes()[..64])?;
match fstatat(
dirfd,
orig_filename.as_c_str(),
nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW)
{
Ok(_) => {
match unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir) {
Err(err) =>
worker.warn(format!(
"unlinking corrupt chunk {:?} failed on store '{}' - {}",
filename,
self.name,
err,
)),
Ok(_) => {
status.removed_bad += 1;
status.removed_bytes += stat.st_size as u64;
}
}
},
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => {
// chunk hasn't been rewritten yet, keep .bad file
},
Err(err) => {
// some other error, warn user and keep .bad file around too
worker.warn(format!(
"error during stat on '{:?}' - {}",
orig_filename,
err,
));
}
}
} else if stat.st_atime < min_atime {
//let age = now - stat.st_atime; //let age = now - stat.st_atime;
//println!("UNLINK {} {:?}", age/(3600*24), filename); //println!("UNLINK {} {:?}", age/(3600*24), filename);
let res = unsafe { libc::unlinkat(dirfd, filename.as_ptr(), 0) }; if let Err(err) = unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir) {
if res != 0 {
let err = nix::Error::last();
bail!( bail!(
"unlink chunk {:?} failed on store '{}' - {}", "unlinking chunk {:?} failed on store '{}' - {}",
filename, filename,
self.name, self.name,
err, err,
@ -366,6 +401,7 @@ impl ChunkStore {
if let Ok(metadata) = std::fs::metadata(&chunk_path) { if let Ok(metadata) = std::fs::metadata(&chunk_path) {
if metadata.is_file() { if metadata.is_file() {
self.touch_chunk(digest)?;
return Ok((true, metadata.len())); return Ok((true, metadata.len()));
} else { } else {
bail!("Got unexpected file type on store '{}' for chunk {}", self.name, digest_str); bail!("Got unexpected file type on store '{}' for chunk {}", self.name, digest_str);

View File

@ -10,7 +10,6 @@
use std::io::Write; use std::io::Write;
use anyhow::{bail, Error}; use anyhow::{bail, Error};
use chrono::{Local, TimeZone, DateTime};
use openssl::hash::MessageDigest; use openssl::hash::MessageDigest;
use openssl::pkcs5::pbkdf2_hmac; use openssl::pkcs5::pbkdf2_hmac;
use openssl::symm::{decrypt_aead, Cipher, Crypter, Mode}; use openssl::symm::{decrypt_aead, Cipher, Crypter, Mode};
@ -216,10 +215,10 @@ impl CryptConfig {
pub fn generate_rsa_encoded_key( pub fn generate_rsa_encoded_key(
&self, &self,
rsa: openssl::rsa::Rsa<openssl::pkey::Public>, rsa: openssl::rsa::Rsa<openssl::pkey::Public>,
created: DateTime<Local>, created: i64,
) -> Result<Vec<u8>, Error> { ) -> Result<Vec<u8>, Error> {
let modified = Local.timestamp(Local::now().timestamp(), 0); let modified = proxmox::tools::time::epoch_i64();
let key_config = super::KeyConfig { kdf: None, created, modified, data: self.enc_key.to_vec() }; let key_config = super::KeyConfig { kdf: None, created, modified, data: self.enc_key.to_vec() };
let data = serde_json::to_string(&key_config)?.as_bytes().to_vec(); let data = serde_json::to_string(&key_config)?.as_bytes().to_vec();

View File

@ -72,7 +72,7 @@ impl DataBlob {
} }
// verify the CRC32 checksum // verify the CRC32 checksum
fn verify_crc(&self) -> Result<(), Error> { pub fn verify_crc(&self) -> Result<(), Error> {
let expected_crc = self.compute_crc(); let expected_crc = self.compute_crc();
if expected_crc != self.crc() { if expected_crc != self.crc() {
bail!("Data blob has wrong CRC checksum."); bail!("Data blob has wrong CRC checksum.");
@ -198,7 +198,10 @@ impl DataBlob {
Ok(data) Ok(data)
} else if magic == &COMPRESSED_BLOB_MAGIC_1_0 { } else if magic == &COMPRESSED_BLOB_MAGIC_1_0 {
let data_start = std::mem::size_of::<DataBlobHeader>(); let data_start = std::mem::size_of::<DataBlobHeader>();
let data = zstd::block::decompress(&self.raw_data[data_start..], MAX_BLOB_SIZE)?; let mut reader = &self.raw_data[data_start..];
let data = zstd::stream::decode_all(&mut reader)?;
// zstd::block::decompress is about 10% slower
// let data = zstd::block::decompress(&self.raw_data[data_start..], MAX_BLOB_SIZE)?;
if let Some(digest) = digest { if let Some(digest) = digest {
Self::verify_digest(&data, None, digest)?; Self::verify_digest(&data, None, digest)?;
} }
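
Decompression switched from zstd::block::decompress, which needs an upfront output-size limit, to zstd::stream::decode_all. A minimal round-trip sketch with the zstd crate:

use std::io;

fn roundtrip(data: &[u8]) -> io::Result<Vec<u8>> {
    let compressed = zstd::stream::encode_all(data, 1)?;
    // decode_all reads from any io::Read and needs no MAX_BLOB_SIZE style cap
    zstd::stream::decode_all(&compressed[..])
}

fn main() -> io::Result<()> {
    let out = roundtrip(b"proxmox backup chunk payload")?;
    assert_eq!(out, b"proxmox backup chunk payload");
    Ok(())
}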
@ -268,6 +271,12 @@ impl DataBlob {
} }
} }
/// Returns if chunk is encrypted
pub fn is_encrypted(&self) -> bool {
let magic = self.magic();
magic == &ENCR_COMPR_BLOB_MAGIC_1_0 || magic == &ENCRYPTED_BLOB_MAGIC_1_0
}
/// Verify digest and data length for unencrypted chunks. /// Verify digest and data length for unencrypted chunks.
/// ///
/// To do that, we need to decompress data first. Please note that /// To do that, we need to decompress data first. Please note that
@ -304,7 +313,7 @@ impl DataBlob {
let digest = match config { let digest = match config {
Some(config) => config.compute_digest(data), Some(config) => config.compute_digest(data),
None => openssl::sha::sha256(&data), None => openssl::sha::sha256(data),
}; };
if &digest != expected_digest { if &digest != expected_digest {
bail!("detected chunk with wrong digest."); bail!("detected chunk with wrong digest.");

View File

@ -6,7 +6,6 @@ use std::convert::TryFrom;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use lazy_static::lazy_static; use lazy_static::lazy_static;
use chrono::{DateTime, Utc};
use serde_json::Value; use serde_json::Value;
use proxmox::tools::fs::{replace_file, CreateOptions}; use proxmox::tools::fs::{replace_file, CreateOptions};
@ -21,6 +20,7 @@ use super::{DataBlob, ArchiveType, archive_type};
use crate::config::datastore; use crate::config::datastore;
use crate::server::WorkerTask; use crate::server::WorkerTask;
use crate::tools; use crate::tools;
use crate::tools::format::HumanByte;
use crate::tools::fs::{lock_dir_noblock, DirLockGuard}; use crate::tools::fs::{lock_dir_noblock, DirLockGuard};
use crate::api2::types::{GarbageCollectionStatus, Userid}; use crate::api2::types::{GarbageCollectionStatus, Userid};
@ -70,6 +70,10 @@ impl DataStore {
let path = store_config["path"].as_str().unwrap(); let path = store_config["path"].as_str().unwrap();
Self::open_with_path(store_name, Path::new(path))
}
pub fn open_with_path(store_name: &str, path: &Path) -> Result<Self, Error> {
let chunk_store = ChunkStore::open(store_name, path)?; let chunk_store = ChunkStore::open(store_name, path)?;
let gc_status = GarbageCollectionStatus::default(); let gc_status = GarbageCollectionStatus::default();
@ -84,7 +88,7 @@ impl DataStore {
pub fn get_chunk_iterator( pub fn get_chunk_iterator(
&self, &self,
) -> Result< ) -> Result<
impl Iterator<Item = (Result<tools::fs::ReadDirEntry, Error>, usize)>, impl Iterator<Item = (Result<tools::fs::ReadDirEntry, Error>, usize, bool)>,
Error Error
> { > {
self.chunk_store.get_chunk_iterator() self.chunk_store.get_chunk_iterator()
@ -241,7 +245,7 @@ impl DataStore {
/// Returns the time of the last successful backup /// Returns the time of the last successful backup
/// ///
/// Or None if there is no backup in the group (or the group dir does not exist). /// Or None if there is no backup in the group (or the group dir does not exist).
pub fn last_successful_backup(&self, backup_group: &BackupGroup) -> Result<Option<DateTime<Utc>>, Error> { pub fn last_successful_backup(&self, backup_group: &BackupGroup) -> Result<Option<i64>, Error> {
let base_path = self.base_path(); let base_path = self.base_path();
let mut group_path = base_path.clone(); let mut group_path = base_path.clone();
group_path.push(backup_group.group_path()); group_path.push(backup_group.group_path());
@ -299,7 +303,7 @@ impl DataStore {
/// And set the owner to 'userid'. If the group already exists, it returns the /// And set the owner to 'userid'. If the group already exists, it returns the
/// current owner (instead of setting the owner). /// current owner (instead of setting the owner).
/// ///
/// This also aquires an exclusive lock on the directory and returns the lock guard. /// This also acquires an exclusive lock on the directory and returns the lock guard.
pub fn create_locked_backup_group( pub fn create_locked_backup_group(
&self, &self,
backup_group: &BackupGroup, backup_group: &BackupGroup,
@ -429,6 +433,12 @@ impl DataStore {
let image_list = self.list_images()?; let image_list = self.list_images()?;
let image_count = image_list.len();
let mut done = 0;
let mut last_percentage: usize = 0;
for path in image_list { for path in image_list {
worker.fail_on_abort()?; worker.fail_on_abort()?;
@ -443,6 +453,14 @@ impl DataStore {
self.index_mark_used_chunks(index, &path, status, worker)?; self.index_mark_used_chunks(index, &path, status, worker)?;
} }
} }
done += 1;
let percentage = done*100/image_count;
if percentage > last_percentage {
worker.log(format!("percentage done: phase1 {}% ({} of {} index files)",
percentage, done, image_count));
last_percentage = percentage;
}
} }
Ok(()) Ok(())
@ -460,11 +478,13 @@ impl DataStore {
if let Ok(ref mut _mutex) = self.gc_mutex.try_lock() { if let Ok(ref mut _mutex) = self.gc_mutex.try_lock() {
// avoids that we run GC if an old daemon process has still a
// running backup writer, which is not safe as we have no "oldest
// writer" information and thus no safe atime cutoff
let _exclusive_lock = self.chunk_store.try_exclusive_lock()?; let _exclusive_lock = self.chunk_store.try_exclusive_lock()?;
let now = unsafe { libc::time(std::ptr::null_mut()) }; let phase1_start_time = proxmox::tools::time::epoch_i64();
let oldest_writer = self.chunk_store.oldest_writer().unwrap_or(phase1_start_time);
let oldest_writer = self.chunk_store.oldest_writer().unwrap_or(now);
let mut gc_status = GarbageCollectionStatus::default(); let mut gc_status = GarbageCollectionStatus::default();
gc_status.upid = Some(worker.to_string()); gc_status.upid = Some(worker.to_string());
@ -474,26 +494,29 @@ impl DataStore {
self.mark_used_chunks(&mut gc_status, &worker)?; self.mark_used_chunks(&mut gc_status, &worker)?;
worker.log("Start GC phase2 (sweep unused chunks)"); worker.log("Start GC phase2 (sweep unused chunks)");
self.chunk_store.sweep_unused_chunks(oldest_writer, now, &mut gc_status, &worker)?; self.chunk_store.sweep_unused_chunks(oldest_writer, phase1_start_time, &mut gc_status, &worker)?;
worker.log(&format!("Removed bytes: {}", gc_status.removed_bytes)); worker.log(&format!("Removed garbage: {}", HumanByte::from(gc_status.removed_bytes)));
worker.log(&format!("Removed chunks: {}", gc_status.removed_chunks)); worker.log(&format!("Removed chunks: {}", gc_status.removed_chunks));
if gc_status.pending_bytes > 0 { if gc_status.pending_bytes > 0 {
worker.log(&format!("Pending removals: {} bytes ({} chunks)", gc_status.pending_bytes, gc_status.pending_chunks)); worker.log(&format!("Pending removals: {} (in {} chunks)", HumanByte::from(gc_status.pending_bytes), gc_status.pending_chunks));
}
if gc_status.removed_bad > 0 {
worker.log(&format!("Removed bad files: {}", gc_status.removed_bad));
} }
worker.log(&format!("Original data bytes: {}", gc_status.index_data_bytes)); worker.log(&format!("Original data usage: {}", HumanByte::from(gc_status.index_data_bytes)));
if gc_status.index_data_bytes > 0 { if gc_status.index_data_bytes > 0 {
let comp_per = (gc_status.disk_bytes*100)/gc_status.index_data_bytes; let comp_per = (gc_status.disk_bytes as f64 * 100.)/gc_status.index_data_bytes as f64;
worker.log(&format!("Disk bytes: {} ({} %)", gc_status.disk_bytes, comp_per)); worker.log(&format!("On-Disk usage: {} ({:.2}%)", HumanByte::from(gc_status.disk_bytes), comp_per));
} }
worker.log(&format!("Disk chunks: {}", gc_status.disk_chunks)); worker.log(&format!("On-Disk chunks: {}", gc_status.disk_chunks));
if gc_status.disk_chunks > 0 { if gc_status.disk_chunks > 0 {
let avg_chunk = gc_status.disk_bytes/(gc_status.disk_chunks as u64); let avg_chunk = gc_status.disk_bytes/(gc_status.disk_chunks as u64);
worker.log(&format!("Average chunk size: {}", avg_chunk)); worker.log(&format!("Average chunk size: {}", HumanByte::from(avg_chunk)));
} }
*self.last_gc_status.lock().unwrap() = gc_status; *self.last_gc_status.lock().unwrap() = gc_status;

View File

@ -11,7 +11,6 @@ use anyhow::{bail, format_err, Error};
use proxmox::tools::io::ReadExt; use proxmox::tools::io::ReadExt;
use proxmox::tools::uuid::Uuid; use proxmox::tools::uuid::Uuid;
use proxmox::tools::vec;
use proxmox::tools::mmap::Mmap; use proxmox::tools::mmap::Mmap;
use pxar::accessor::{MaybeReady, ReadAt, ReadAtOperation}; use pxar::accessor::{MaybeReady, ReadAt, ReadAtOperation};
@ -22,14 +21,14 @@ use super::read_chunk::ReadChunk;
use super::Chunker; use super::Chunker;
use super::IndexFile; use super::IndexFile;
use super::{DataBlob, DataChunkBuilder}; use super::{DataBlob, DataChunkBuilder};
use crate::tools::{self, epoch_now_u64}; use crate::tools;
/// Header format definition for dynamic index files (`.dixd`) /// Header format definition for dynamic index files (`.dixd`)
#[repr(C)] #[repr(C)]
pub struct DynamicIndexHeader { pub struct DynamicIndexHeader {
pub magic: [u8; 8], pub magic: [u8; 8],
pub uuid: [u8; 16], pub uuid: [u8; 16],
pub ctime: u64, pub ctime: i64,
/// Sha256 over the index ``SHA256(offset1||digest1||offset2||digest2||...)`` /// Sha256 over the index ``SHA256(offset1||digest1||offset2||digest2||...)``
pub index_csum: [u8; 32], pub index_csum: [u8; 32],
reserved: [u8; 4032], // overall size is one page (4096 bytes) reserved: [u8; 4032], // overall size is one page (4096 bytes)
@ -41,6 +40,24 @@ proxmox::static_assert_size!(DynamicIndexHeader, 4096);
// pub data: DynamicIndexHeaderData, // pub data: DynamicIndexHeaderData,
// } // }
impl DynamicIndexHeader {
/// Convenience method to allocate a zero-initialized header struct.
pub fn zeroed() -> Box<Self> {
unsafe {
Box::from_raw(std::alloc::alloc_zeroed(std::alloc::Layout::new::<Self>()) as *mut Self)
}
}
pub fn as_bytes(&self) -> &[u8] {
unsafe {
std::slice::from_raw_parts(
self as *const Self as *const u8,
std::mem::size_of::<Self>(),
)
}
}
}
#[derive(Clone, Debug)] #[derive(Clone, Debug)]
#[repr(C)] #[repr(C)]
pub struct DynamicEntry { pub struct DynamicEntry {
@ -60,7 +77,7 @@ pub struct DynamicIndexReader {
pub size: usize, pub size: usize,
index: Mmap<DynamicEntry>, index: Mmap<DynamicEntry>,
pub uuid: [u8; 16], pub uuid: [u8; 16],
pub ctime: u64, pub ctime: i64,
pub index_csum: [u8; 32], pub index_csum: [u8; 32],
} }
@ -90,7 +107,7 @@ impl DynamicIndexReader {
bail!("got unknown magic number"); bail!("got unknown magic number");
} }
let ctime = u64::from_le(header.ctime); let ctime = proxmox::tools::time::epoch_i64();
let rawfd = file.as_raw_fd(); let rawfd = file.as_raw_fd();
@ -463,7 +480,7 @@ pub struct DynamicIndexWriter {
tmp_filename: PathBuf, tmp_filename: PathBuf,
csum: Option<openssl::sha::Sha256>, csum: Option<openssl::sha::Sha256>,
pub uuid: [u8; 16], pub uuid: [u8; 16],
pub ctime: u64, pub ctime: i64,
} }
impl Drop for DynamicIndexWriter { impl Drop for DynamicIndexWriter {
@ -489,27 +506,16 @@ impl DynamicIndexWriter {
let mut writer = BufWriter::with_capacity(1024 * 1024, file); let mut writer = BufWriter::with_capacity(1024 * 1024, file);
let header_size = std::mem::size_of::<DynamicIndexHeader>(); let ctime = proxmox::tools::time::epoch_i64();
// todo: use static assertion when available in rust
if header_size != 4096 {
panic!("got unexpected header size");
}
let ctime = epoch_now_u64()?;
let uuid = Uuid::generate(); let uuid = Uuid::generate();
let mut buffer = vec::zeroed(header_size); let mut header = DynamicIndexHeader::zeroed();
let header = crate::tools::map_struct_mut::<DynamicIndexHeader>(&mut buffer)?;
header.magic = super::DYNAMIC_SIZED_CHUNK_INDEX_1_0; header.magic = super::DYNAMIC_SIZED_CHUNK_INDEX_1_0;
header.ctime = u64::to_le(ctime); header.ctime = i64::to_le(ctime);
header.uuid = *uuid.as_bytes(); header.uuid = *uuid.as_bytes();
// header.index_csum = [0u8; 32];
header.index_csum = [0u8; 32]; writer.write_all(header.as_bytes())?;
writer.write_all(&buffer)?;
let csum = Some(openssl::sha::Sha256::new()); let csum = Some(openssl::sha::Sha256::new());

View File

@ -4,9 +4,8 @@ use std::io::{Seek, SeekFrom};
use super::chunk_stat::*; use super::chunk_stat::*;
use super::chunk_store::*; use super::chunk_store::*;
use super::{IndexFile, ChunkReadInfo}; use super::{IndexFile, ChunkReadInfo};
use crate::tools::{self, epoch_now_u64}; use crate::tools;
use chrono::{Local, TimeZone};
use std::fs::File; use std::fs::File;
use std::io::Write; use std::io::Write;
use std::os::unix::io::AsRawFd; use std::os::unix::io::AsRawFd;
@ -23,7 +22,7 @@ use proxmox::tools::Uuid;
pub struct FixedIndexHeader { pub struct FixedIndexHeader {
pub magic: [u8; 8], pub magic: [u8; 8],
pub uuid: [u8; 16], pub uuid: [u8; 16],
pub ctime: u64, pub ctime: i64,
/// Sha256 over the index ``SHA256(digest1||digest2||...)`` /// Sha256 over the index ``SHA256(digest1||digest2||...)``
pub index_csum: [u8; 32], pub index_csum: [u8; 32],
pub size: u64, pub size: u64,
@ -41,7 +40,7 @@ pub struct FixedIndexReader {
index_length: usize, index_length: usize,
index: *mut u8, index: *mut u8,
pub uuid: [u8; 16], pub uuid: [u8; 16],
pub ctime: u64, pub ctime: i64,
pub index_csum: [u8; 32], pub index_csum: [u8; 32],
} }
@ -82,7 +81,7 @@ impl FixedIndexReader {
} }
let size = u64::from_le(header.size); let size = u64::from_le(header.size);
let ctime = u64::from_le(header.ctime); let ctime = i64::from_le(header.ctime);
let chunk_size = u64::from_le(header.chunk_size); let chunk_size = u64::from_le(header.chunk_size);
let index_length = ((size + chunk_size - 1) / chunk_size) as usize; let index_length = ((size + chunk_size - 1) / chunk_size) as usize;
@ -148,10 +147,13 @@ impl FixedIndexReader {
pub fn print_info(&self) { pub fn print_info(&self) {
println!("Size: {}", self.size); println!("Size: {}", self.size);
println!("ChunkSize: {}", self.chunk_size); println!("ChunkSize: {}", self.chunk_size);
println!(
"CTime: {}", let mut ctime_str = self.ctime.to_string();
Local.timestamp(self.ctime as i64, 0).format("%c") if let Ok(s) = proxmox::tools::time::strftime_local("%c",self.ctime) {
); ctime_str = s;
}
println!("CTime: {}", ctime_str);
println!("UUID: {:?}", self.uuid); println!("UUID: {:?}", self.uuid);
} }
} }
@ -228,7 +230,7 @@ pub struct FixedIndexWriter {
index_length: usize, index_length: usize,
index: *mut u8, index: *mut u8,
pub uuid: [u8; 16], pub uuid: [u8; 16],
pub ctime: u64, pub ctime: i64,
} }
// `index` is mmap()ed which cannot be thread-local so should be sendable // `index` is mmap()ed which cannot be thread-local so should be sendable
@ -271,7 +273,7 @@ impl FixedIndexWriter {
panic!("got unexpected header size"); panic!("got unexpected header size");
} }
let ctime = epoch_now_u64()?; let ctime = proxmox::tools::time::epoch_i64();
let uuid = Uuid::generate(); let uuid = Uuid::generate();
@ -279,7 +281,7 @@ impl FixedIndexWriter {
let header = unsafe { &mut *(buffer.as_ptr() as *mut FixedIndexHeader) }; let header = unsafe { &mut *(buffer.as_ptr() as *mut FixedIndexHeader) };
header.magic = super::FIXED_SIZED_CHUNK_INDEX_1_0; header.magic = super::FIXED_SIZED_CHUNK_INDEX_1_0;
header.ctime = u64::to_le(ctime); header.ctime = i64::to_le(ctime);
header.size = u64::to_le(size as u64); header.size = u64::to_le(size as u64);
header.chunk_size = u64::to_le(chunk_size as u64); header.chunk_size = u64::to_le(chunk_size as u64);
header.uuid = *uuid.as_bytes(); header.uuid = *uuid.as_bytes();

View File

@ -1,7 +1,6 @@
use anyhow::{bail, format_err, Context, Error}; use anyhow::{bail, format_err, Context, Error};
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use chrono::{Local, TimeZone, DateTime};
use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions}; use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
use proxmox::try_block; use proxmox::try_block;
@ -61,10 +60,10 @@ impl KeyDerivationConfig {
#[derive(Deserialize, Serialize, Debug)] #[derive(Deserialize, Serialize, Debug)]
pub struct KeyConfig { pub struct KeyConfig {
pub kdf: Option<KeyDerivationConfig>, pub kdf: Option<KeyDerivationConfig>,
#[serde(with = "proxmox::tools::serde::date_time_as_rfc3339")] #[serde(with = "proxmox::tools::serde::epoch_as_rfc3339")]
pub created: DateTime<Local>, pub created: i64,
#[serde(with = "proxmox::tools::serde::date_time_as_rfc3339")] #[serde(with = "proxmox::tools::serde::epoch_as_rfc3339")]
pub modified: DateTime<Local>, pub modified: i64,
#[serde(with = "proxmox::tools::serde::bytes_as_base64")] #[serde(with = "proxmox::tools::serde::bytes_as_base64")]
pub data: Vec<u8>, pub data: Vec<u8>,
} }
@ -136,7 +135,7 @@ pub fn encrypt_key_with_passphrase(
enc_data.extend_from_slice(&tag); enc_data.extend_from_slice(&tag);
enc_data.extend_from_slice(&encrypted_key); enc_data.extend_from_slice(&encrypted_key);
let created = Local.timestamp(Local::now().timestamp(), 0); let created = proxmox::tools::time::epoch_i64();
Ok(KeyConfig { Ok(KeyConfig {
kdf: Some(kdf), kdf: Some(kdf),
@ -149,7 +148,7 @@ pub fn encrypt_key_with_passphrase(
pub fn load_and_decrypt_key( pub fn load_and_decrypt_key(
path: &std::path::Path, path: &std::path::Path,
passphrase: &dyn Fn() -> Result<Vec<u8>, Error>, passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
) -> Result<([u8;32], DateTime<Local>), Error> { ) -> Result<([u8;32], i64), Error> {
do_load_and_decrypt_key(path, passphrase) do_load_and_decrypt_key(path, passphrase)
.with_context(|| format!("failed to load decryption key from {:?}", path)) .with_context(|| format!("failed to load decryption key from {:?}", path))
} }
@ -157,14 +156,14 @@ pub fn load_and_decrypt_key(
fn do_load_and_decrypt_key( fn do_load_and_decrypt_key(
path: &std::path::Path, path: &std::path::Path,
passphrase: &dyn Fn() -> Result<Vec<u8>, Error>, passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
) -> Result<([u8;32], DateTime<Local>), Error> { ) -> Result<([u8;32], i64), Error> {
decrypt_key(&file_get_contents(&path)?, passphrase) decrypt_key(&file_get_contents(&path)?, passphrase)
} }
pub fn decrypt_key( pub fn decrypt_key(
mut keydata: &[u8], mut keydata: &[u8],
passphrase: &dyn Fn() -> Result<Vec<u8>, Error>, passphrase: &dyn Fn() -> Result<Vec<u8>, Error>,
) -> Result<([u8;32], DateTime<Local>), Error> { ) -> Result<([u8;32], i64), Error> {
let key_config: KeyConfig = serde_json::from_reader(&mut keydata)?; let key_config: KeyConfig = serde_json::from_reader(&mut keydata)?;
let raw_data = key_config.data; let raw_data = key_config.data;
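
KeyConfig's created/modified fields are now plain epoch values that still (de)serialize as RFC 3339 strings through the proxmox::tools::serde::epoch_as_rfc3339 helper named above. A sketch of a struct using it, assuming the proxmox crate as a dependency:

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct KeyTimes {
    #[serde(with = "proxmox::tools::serde::epoch_as_rfc3339")]
    created: i64,
    #[serde(with = "proxmox::tools::serde::epoch_as_rfc3339")]
    modified: i64,
}

fn main() -> Result<(), anyhow::Error> {
    let times = KeyTimes {
        created: 1_600_000_000,
        modified: proxmox::tools::time::epoch_i64(),
    };
    // both fields appear as RFC 3339 strings in the JSON output
    println!("{}", serde_json::to_string_pretty(&times)?);
    Ok(())
}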

View File

@ -103,7 +103,7 @@ impl BackupManifest {
Self { Self {
backup_type: snapshot.group().backup_type().into(), backup_type: snapshot.group().backup_type().into(),
backup_id: snapshot.group().backup_id().into(), backup_id: snapshot.group().backup_id().into(),
backup_time: snapshot.backup_time().timestamp(), backup_time: snapshot.backup_time(),
files: Vec::new(), files: Vec::new(),
unprotected: json!({}), unprotected: json!({}),
signature: None, signature: None,
@ -145,7 +145,7 @@ impl BackupManifest {
Ok(()) Ok(())
} }
// Generate cannonical json // Generate canonical json
fn to_canonical_json(value: &Value) -> Result<Vec<u8>, Error> { fn to_canonical_json(value: &Value) -> Result<Vec<u8>, Error> {
let mut data = Vec::new(); let mut data = Vec::new();
Self::write_canonical_json(value, &mut data)?; Self::write_canonical_json(value, &mut data)?;
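
write_canonical_json itself is unchanged here (only the comment typo is fixed), but for context: "canonical JSON" means serializing with object keys in a deterministic, sorted order so manifest signatures stay stable. A rough sketch of that idea, not the actual implementation:

use serde_json::{json, Value};

fn write_canonical(value: &Value, out: &mut Vec<u8>) {
    match value {
        Value::Object(map) => {
            out.push(b'{');
            let mut keys: Vec<_> = map.keys().collect();
            keys.sort(); // fixed key order is what makes the output canonical
            for (i, key) in keys.iter().enumerate() {
                if i > 0 { out.push(b','); }
                write_canonical(&Value::String((*key).clone()), out);
                out.push(b':');
                write_canonical(&map[*key], out);
            }
            out.push(b'}');
        }
        // scalars and arrays use compact serde_json formatting in this sketch
        other => out.extend_from_slice(other.to_string().as_bytes()),
    }
}

fn main() {
    let mut buf = Vec::new();
    write_canonical(&json!({"b": 1, "a": [2, 3]}), &mut buf);
    assert_eq!(String::from_utf8(buf).unwrap(), r#"{"a":[2,3],"b":1}"#);
}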

View File

@ -2,18 +2,16 @@ use anyhow::{Error};
use std::collections::{HashMap, HashSet}; use std::collections::{HashMap, HashSet};
use std::path::PathBuf; use std::path::PathBuf;
use chrono::{DateTime, Timelike, Datelike, Local}; use super::BackupInfo;
use super::{BackupDir, BackupInfo};
enum PruneMark { Keep, KeepPartial, Remove } enum PruneMark { Keep, KeepPartial, Remove }
fn mark_selections<F: Fn(DateTime<Local>, &BackupInfo) -> String> ( fn mark_selections<F: Fn(&BackupInfo) -> Result<String, Error>> (
mark: &mut HashMap<PathBuf, PruneMark>, mark: &mut HashMap<PathBuf, PruneMark>,
list: &Vec<BackupInfo>, list: &Vec<BackupInfo>,
keep: usize, keep: usize,
select_id: F, select_id: F,
) { ) -> Result<(), Error> {
let mut include_hash = HashSet::new(); let mut include_hash = HashSet::new();
@ -21,8 +19,7 @@ fn mark_selections<F: Fn(DateTime<Local>, &BackupInfo) -> String> (
for info in list { for info in list {
let backup_id = info.backup_dir.relative_path(); let backup_id = info.backup_dir.relative_path();
if let Some(PruneMark::Keep) = mark.get(&backup_id) { if let Some(PruneMark::Keep) = mark.get(&backup_id) {
let local_time = info.backup_dir.backup_time().with_timezone(&Local); let sel_id: String = select_id(&info)?;
let sel_id: String = select_id(local_time, &info);
already_included.insert(sel_id); already_included.insert(sel_id);
} }
} }
@ -30,8 +27,7 @@ fn mark_selections<F: Fn(DateTime<Local>, &BackupInfo) -> String> (
for info in list { for info in list {
let backup_id = info.backup_dir.relative_path(); let backup_id = info.backup_dir.relative_path();
if let Some(_) = mark.get(&backup_id) { continue; } if let Some(_) = mark.get(&backup_id) { continue; }
let local_time = info.backup_dir.backup_time().with_timezone(&Local); let sel_id: String = select_id(&info)?;
let sel_id: String = select_id(local_time, &info);
if already_included.contains(&sel_id) { continue; } if already_included.contains(&sel_id) { continue; }
@ -43,6 +39,8 @@ fn mark_selections<F: Fn(DateTime<Local>, &BackupInfo) -> String> (
mark.insert(backup_id, PruneMark::Remove); mark.insert(backup_id, PruneMark::Remove);
} }
} }
Ok(())
} }
fn remove_incomplete_snapshots( fn remove_incomplete_snapshots(
@ -182,44 +180,43 @@ pub fn compute_prune_info(
remove_incomplete_snapshots(&mut mark, &list); remove_incomplete_snapshots(&mut mark, &list);
if let Some(keep_last) = options.keep_last { if let Some(keep_last) = options.keep_last {
mark_selections(&mut mark, &list, keep_last as usize, |_local_time, info| { mark_selections(&mut mark, &list, keep_last as usize, |info| {
BackupDir::backup_time_to_string(info.backup_dir.backup_time()) Ok(info.backup_dir.backup_time_string().to_owned())
}); })?;
} }
use proxmox::tools::time::strftime_local;
if let Some(keep_hourly) = options.keep_hourly { if let Some(keep_hourly) = options.keep_hourly {
mark_selections(&mut mark, &list, keep_hourly as usize, |local_time, _info| { mark_selections(&mut mark, &list, keep_hourly as usize, |info| {
format!("{}/{}/{}/{}", local_time.year(), local_time.month(), strftime_local("%Y/%m/%d/%H", info.backup_dir.backup_time())
local_time.day(), local_time.hour()) })?;
});
} }
if let Some(keep_daily) = options.keep_daily { if let Some(keep_daily) = options.keep_daily {
mark_selections(&mut mark, &list, keep_daily as usize, |local_time, _info| { mark_selections(&mut mark, &list, keep_daily as usize, |info| {
format!("{}/{}/{}", local_time.year(), local_time.month(), local_time.day()) strftime_local("%Y/%m/%d", info.backup_dir.backup_time())
}); })?;
} }
if let Some(keep_weekly) = options.keep_weekly { if let Some(keep_weekly) = options.keep_weekly {
mark_selections(&mut mark, &list, keep_weekly as usize, |local_time, _info| { mark_selections(&mut mark, &list, keep_weekly as usize, |info| {
let iso_week = local_time.iso_week(); // Note: Use iso-week year/week here. This year number
let week = iso_week.week(); // might not match the calendar year number.
// Note: This year number might not match the calendar year number. strftime_local("%G/%V", info.backup_dir.backup_time())
let iso_week_year = iso_week.year(); })?;
format!("{}/{}", iso_week_year, week)
});
} }
if let Some(keep_monthly) = options.keep_monthly { if let Some(keep_monthly) = options.keep_monthly {
mark_selections(&mut mark, &list, keep_monthly as usize, |local_time, _info| { mark_selections(&mut mark, &list, keep_monthly as usize, |info| {
format!("{}/{}", local_time.year(), local_time.month()) strftime_local("%Y/%m", info.backup_dir.backup_time())
}); })?;
} }
if let Some(keep_yearly) = options.keep_yearly { if let Some(keep_yearly) = options.keep_yearly {
mark_selections(&mut mark, &list, keep_yearly as usize, |local_time, _info| { mark_selections(&mut mark, &list, keep_yearly as usize, |info| {
format!("{}/{}", local_time.year(), local_time.year()) strftime_local("%Y", info.backup_dir.backup_time())
}); })?;
} }
let prune_info: Vec<(BackupInfo, bool)> = list.into_iter() let prune_info: Vec<(BackupInfo, bool)> = list.into_iter()
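
All keep-* options now share one rule: derive a period id per snapshot from a strftime pattern ("%Y/%m/%d" for daily, "%G/%V" for ISO weeks, and so on) and keep one snapshot for each of the newest `keep` periods. A sketch of that selection over a precomputed (name, period) list, assuming newest-first ordering:

use std::collections::HashSet;

fn keep_candidates<'a>(snapshots: &[(&'a str, &'a str)], keep: usize) -> Vec<&'a str> {
    // snapshots: (name, period-id), ordered newest first
    let mut periods = HashSet::new();
    let mut kept = Vec::new();
    for (name, period) in snapshots {
        if periods.contains(period) {
            continue; // a newer snapshot already represents this period
        }
        if periods.len() >= keep {
            break; // enough periods covered
        }
        periods.insert(*period);
        kept.push(*name);
    }
    kept
}

fn main() {
    let list = [
        ("2020-10-01T08:00:00Z", "2020/10/01"),
        ("2020-10-01T02:00:00Z", "2020/10/01"),
        ("2020-09-30T22:00:00Z", "2020/09/30"),
        ("2020-09-29T22:00:00Z", "2020/09/29"),
    ];
    // keep-daily = 2 keeps one snapshot for each of the two newest days
    assert_eq!(keep_candidates(&list, 2), ["2020-10-01T08:00:00Z", "2020-09-30T22:00:00Z"]);
}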

View File

@ -1,16 +1,29 @@
use std::collections::HashSet; use std::collections::HashSet;
use std::sync::{Arc, Mutex};
use std::sync::atomic::{Ordering, AtomicUsize};
use std::time::Instant;
use anyhow::{bail, Error}; use anyhow::{bail, format_err, Error};
use crate::server::WorkerTask; use crate::{
server::WorkerTask,
use super::{ api2::types::*,
DataStore, BackupGroup, BackupDir, BackupInfo, IndexFile, tools::ParallelHandler,
CryptMode, backup::{
FileInfo, ArchiveType, archive_type, DataStore,
DataBlob,
BackupGroup,
BackupDir,
BackupInfo,
IndexFile,
CryptMode,
FileInfo,
ArchiveType,
archive_type,
},
}; };
fn verify_blob(datastore: &DataStore, backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> { fn verify_blob(datastore: Arc<DataStore>, backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> {
let blob = datastore.load_blob(backup_dir, &info.filename)?; let blob = datastore.load_blob(backup_dir, &info.filename)?;
@ -35,70 +48,141 @@ fn verify_blob(datastore: &DataStore, backup_dir: &BackupDir, info: &FileInfo) -
} }
} }
fn rename_corrupted_chunk(
datastore: Arc<DataStore>,
digest: &[u8;32],
worker: Arc<WorkerTask>,
) {
let (path, digest_str) = datastore.chunk_path(digest);
let mut counter = 0;
let mut new_path = path.clone();
loop {
new_path.set_file_name(format!("{}.{}.bad", digest_str, counter));
if new_path.exists() && counter < 9 { counter += 1; } else { break; }
}
match std::fs::rename(&path, &new_path) {
Ok(_) => {
worker.log(format!("corrupted chunk renamed to {:?}", &new_path));
},
Err(err) => {
match err.kind() {
std::io::ErrorKind::NotFound => { /* ignored */ },
_ => worker.log(format!("could not rename corrupted chunk {:?} - {}", &path, err))
}
}
};
}
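Editor's note on the rename loop above, as an example of how the suffix counter behaves: a chunk digest found corrupt in three separate verification runs ends up as <digest>.0.bad, <digest>.1.bad and <digest>.2.bad, because the loop probes for the first unused suffix; once <digest>.9.bad exists, the loop stops at counter 9 and that last name is reused for later renames.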
fn verify_index_chunks( fn verify_index_chunks(
datastore: &DataStore, datastore: Arc<DataStore>,
index: Box<dyn IndexFile>, index: Box<dyn IndexFile + Send>,
verified_chunks: &mut HashSet<[u8;32]>, verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: &mut HashSet<[u8; 32]>, corrupt_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
crypt_mode: CryptMode, crypt_mode: CryptMode,
worker: &WorkerTask, worker: Arc<WorkerTask>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let mut errors = 0; let errors = Arc::new(AtomicUsize::new(0));
let start_time = Instant::now();
let mut read_bytes = 0;
let mut decoded_bytes = 0;
let worker2 = Arc::clone(&worker);
let datastore2 = Arc::clone(&datastore);
let corrupt_chunks2 = Arc::clone(&corrupt_chunks);
let verified_chunks2 = Arc::clone(&verified_chunks);
let errors2 = Arc::clone(&errors);
let decoder_pool = ParallelHandler::new(
"verify chunk decoder", 4,
move |(chunk, digest, size): (DataBlob, [u8;32], u64)| {
let chunk_crypt_mode = match chunk.crypt_mode() {
Err(err) => {
corrupt_chunks2.lock().unwrap().insert(digest);
worker2.log(format!("can't verify chunk, unknown CryptMode - {}", err));
errors2.fetch_add(1, Ordering::SeqCst);
return Ok(());
},
Ok(mode) => mode,
};
if chunk_crypt_mode != crypt_mode {
worker2.log(format!(
"chunk CryptMode {:?} does not match index CryptMode {:?}",
chunk_crypt_mode,
crypt_mode
));
errors2.fetch_add(1, Ordering::SeqCst);
}
if let Err(err) = chunk.verify_unencrypted(size as usize, &digest) {
corrupt_chunks2.lock().unwrap().insert(digest);
worker2.log(format!("{}", err));
errors2.fetch_add(1, Ordering::SeqCst);
rename_corrupted_chunk(datastore2.clone(), &digest, worker2.clone());
} else {
verified_chunks2.lock().unwrap().insert(digest);
}
Ok(())
}
);
for pos in 0..index.index_count() { for pos in 0..index.index_count() {
worker.fail_on_abort()?; worker.fail_on_abort()?;
crate::tools::fail_on_shutdown()?;
let info = index.chunk_info(pos).unwrap(); let info = index.chunk_info(pos).unwrap();
let size = info.range.end - info.range.start; let size = info.size();
let chunk = match datastore.load_chunk(&info.digest) { if verified_chunks.lock().unwrap().contains(&info.digest) {
Err(err) => { continue; // already verified
corrupt_chunks.insert(info.digest);
worker.log(format!("can't verify chunk, load failed - {}", err));
errors += 1;
continue;
},
Ok(chunk) => chunk,
};
let chunk_crypt_mode = match chunk.crypt_mode() {
Err(err) => {
corrupt_chunks.insert(info.digest);
worker.log(format!("can't verify chunk, unknown CryptMode - {}", err));
errors += 1;
continue;
},
Ok(mode) => mode,
};
if chunk_crypt_mode != crypt_mode {
worker.log(format!(
"chunk CryptMode {:?} does not match index CryptMode {:?}",
chunk_crypt_mode,
crypt_mode
));
errors += 1;
} }
if !verified_chunks.contains(&info.digest) { if corrupt_chunks.lock().unwrap().contains(&info.digest) {
if !corrupt_chunks.contains(&info.digest) { let digest_str = proxmox::tools::digest_to_hex(&info.digest);
if let Err(err) = chunk.verify_unencrypted(size as usize, &info.digest) { worker.log(format!("chunk {} was marked as corrupt", digest_str));
corrupt_chunks.insert(info.digest); errors.fetch_add(1, Ordering::SeqCst);
worker.log(format!("{}", err)); continue;
errors += 1; }
} else {
verified_chunks.insert(info.digest); match datastore.load_chunk(&info.digest) {
} Err(err) => {
} else { corrupt_chunks.lock().unwrap().insert(info.digest);
let digest_str = proxmox::tools::digest_to_hex(&info.digest); worker.log(format!("can't verify chunk, load failed - {}", err));
worker.log(format!("chunk {} was marked as corrupt", digest_str)); errors.fetch_add(1, Ordering::SeqCst);
errors += 1; rename_corrupted_chunk(datastore.clone(), &info.digest, worker.clone());
continue;
}
Ok(chunk) => {
read_bytes += chunk.raw_size();
decoder_pool.send((chunk, info.digest, size))?;
decoded_bytes += size;
} }
} }
} }
if errors > 0 { decoder_pool.complete()?;
let elapsed = start_time.elapsed().as_secs_f64();
let read_bytes_mib = (read_bytes as f64)/(1024.0*1024.0);
let decoded_bytes_mib = (decoded_bytes as f64)/(1024.0*1024.0);
let read_speed = read_bytes_mib/elapsed;
let decode_speed = decoded_bytes_mib/elapsed;
let error_count = errors.load(Ordering::SeqCst);
worker.log(format!(" verified {:.2}/{:.2} MiB in {:.2} seconds, speed {:.2}/{:.2} MiB/s ({} errors)",
read_bytes_mib, decoded_bytes_mib, elapsed, read_speed, decode_speed, error_count));
if errors.load(Ordering::SeqCst) > 0 {
bail!("chunks could not be verified"); bail!("chunks could not be verified");
} }
@ -106,12 +190,12 @@ fn verify_index_chunks(
} }
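Editor's note: the decoder pool above fans chunk verification out to four worker threads fed from the main read loop. Conceptually that is a bounded channel feeding a fixed set of workers; below is a standalone, std-only sketch of the same idea. It is not the crate's ParallelHandler, which additionally propagates worker errors back to the sender and honours abort/shutdown checks.

    use std::sync::mpsc::sync_channel;
    use std::sync::{Arc, Mutex};
    use std::thread;

    fn parallel_for_each<T: Send + 'static>(items: Vec<T>, threads: usize, handler: fn(T)) {
        let (tx, rx) = sync_channel::<T>(threads); // bounded, applies backpressure
        let rx = Arc::new(Mutex::new(rx));

        let workers: Vec<_> = (0..threads)
            .map(|_| {
                let rx = Arc::clone(&rx);
                thread::spawn(move || loop {
                    // the lock is only held while waiting for the next item
                    let item = match rx.lock().unwrap().recv() {
                        Ok(item) => item,
                        Err(_) => break, // sender dropped: no more work
                    };
                    handler(item); // runs in parallel across workers
                })
            })
            .collect();

        for item in items {
            tx.send(item).unwrap(); // corresponds to decoder_pool.send(...)
        }
        drop(tx); // close the channel, roughly what decoder_pool.complete() triggers

        for w in workers {
            w.join().unwrap();
        }
    }

    fn verify_one(chunk_no: u32) {
        println!("verifying chunk {}", chunk_no);
    }

    fn main() {
        parallel_for_each((0..8u32).collect(), 4, verify_one);
    }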
fn verify_fixed_index( fn verify_fixed_index(
datastore: &DataStore, datastore: Arc<DataStore>,
backup_dir: &BackupDir, backup_dir: &BackupDir,
info: &FileInfo, info: &FileInfo,
verified_chunks: &mut HashSet<[u8;32]>, verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: &mut HashSet<[u8;32]>, corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
worker: &WorkerTask, worker: Arc<WorkerTask>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let mut path = backup_dir.relative_path(); let mut path = backup_dir.relative_path();
@ -132,12 +216,12 @@ fn verify_fixed_index(
} }
fn verify_dynamic_index( fn verify_dynamic_index(
datastore: &DataStore, datastore: Arc<DataStore>,
backup_dir: &BackupDir, backup_dir: &BackupDir,
info: &FileInfo, info: &FileInfo,
verified_chunks: &mut HashSet<[u8;32]>, verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: &mut HashSet<[u8;32]>, corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
worker: &WorkerTask, worker: Arc<WorkerTask>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let mut path = backup_dir.relative_path(); let mut path = backup_dir.relative_path();
@ -167,14 +251,14 @@ fn verify_dynamic_index(
/// - Ok(false) if there were verification errors /// - Ok(false) if there were verification errors
/// - Err(_) if task was aborted /// - Err(_) if task was aborted
pub fn verify_backup_dir( pub fn verify_backup_dir(
datastore: &DataStore, datastore: Arc<DataStore>,
backup_dir: &BackupDir, backup_dir: &BackupDir,
verified_chunks: &mut HashSet<[u8;32]>, verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: &mut HashSet<[u8;32]>, corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
worker: &WorkerTask worker: Arc<WorkerTask>
) -> Result<bool, Error> { ) -> Result<bool, Error> {
let manifest = match datastore.load_manifest(&backup_dir) { let mut manifest = match datastore.load_manifest(&backup_dir) {
Ok((manifest, _)) => manifest, Ok((manifest, _)) => manifest,
Err(err) => { Err(err) => {
worker.log(format!("verify {}:{} - manifest load error: {}", datastore.name(), backup_dir, err)); worker.log(format!("verify {}:{} - manifest load error: {}", datastore.name(), backup_dir, err));
@ -186,40 +270,52 @@ pub fn verify_backup_dir(
let mut error_count = 0; let mut error_count = 0;
let mut verify_result = VerifyState::Ok;
for info in manifest.files() { for info in manifest.files() {
let result = proxmox::try_block!({ let result = proxmox::try_block!({
worker.log(format!(" check {}", info.filename)); worker.log(format!(" check {}", info.filename));
match archive_type(&info.filename)? { match archive_type(&info.filename)? {
ArchiveType::FixedIndex => ArchiveType::FixedIndex =>
verify_fixed_index( verify_fixed_index(
&datastore, datastore.clone(),
&backup_dir, &backup_dir,
info, info,
verified_chunks, verified_chunks.clone(),
corrupt_chunks, corrupt_chunks.clone(),
worker worker.clone(),
), ),
ArchiveType::DynamicIndex => ArchiveType::DynamicIndex =>
verify_dynamic_index( verify_dynamic_index(
&datastore, datastore.clone(),
&backup_dir, &backup_dir,
info, info,
verified_chunks, verified_chunks.clone(),
corrupt_chunks, corrupt_chunks.clone(),
worker worker.clone(),
), ),
ArchiveType::Blob => verify_blob(&datastore, &backup_dir, info), ArchiveType::Blob => verify_blob(datastore.clone(), &backup_dir, info),
} }
}); });
worker.fail_on_abort()?; worker.fail_on_abort()?;
crate::tools::fail_on_shutdown()?;
if let Err(err) = result { if let Err(err) = result {
worker.log(format!("verify {}:{}/{} failed: {}", datastore.name(), backup_dir, info.filename, err)); worker.log(format!("verify {}:{}/{} failed: {}", datastore.name(), backup_dir, info.filename, err));
error_count += 1; error_count += 1;
verify_result = VerifyState::Failed;
} }
} }
let verify_state = SnapshotVerifyState {
state: verify_result,
upid: worker.upid().clone(),
};
manifest.unprotected["verify_state"] = serde_json::to_value(verify_state)?;
datastore.store_manifest(&backup_dir, serde_json::to_value(manifest)?)
.map_err(|err| format_err!("unable to store manifest blob - {}", err))?;
Ok(error_count == 0) Ok(error_count == 0)
} }
@ -228,32 +324,45 @@ pub fn verify_backup_dir(
/// Errors are logged to the worker log. /// Errors are logged to the worker log.
/// ///
/// Returns /// Returns
/// - Ok(failed_dirs) where failed_dirs had verification errors /// - Ok((count, failed_dirs)) where failed_dirs had verification errors
/// - Err(_) if task was aborted /// - Err(_) if task was aborted
pub fn verify_backup_group(datastore: &DataStore, group: &BackupGroup, worker: &WorkerTask) -> Result<Vec<String>, Error> { pub fn verify_backup_group(
datastore: Arc<DataStore>,
group: &BackupGroup,
verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
progress: Option<(usize, usize)>, // (done, snapshot_count)
worker: Arc<WorkerTask>,
) -> Result<(usize, Vec<String>), Error> {
let mut errors = Vec::new(); let mut errors = Vec::new();
let mut list = match group.list_backups(&datastore.base_path()) { let mut list = match group.list_backups(&datastore.base_path()) {
Ok(list) => list, Ok(list) => list,
Err(err) => { Err(err) => {
worker.log(format!("verify group {}:{} - unable to list backups: {}", datastore.name(), group, err)); worker.log(format!("verify group {}:{} - unable to list backups: {}", datastore.name(), group, err));
return Ok(errors); return Ok((0, errors));
} }
}; };
worker.log(format!("verify group {}:{}", datastore.name(), group)); worker.log(format!("verify group {}:{}", datastore.name(), group));
let mut verified_chunks = HashSet::with_capacity(1024*16); // start with 16384 chunks (up to 65GB) let (done, snapshot_count) = progress.unwrap_or((0, list.len()));
let mut corrupt_chunks = HashSet::with_capacity(64); // start with 64 chunks since we assume there are few corrupt ones
let mut count = 0;
BackupInfo::sort_list(&mut list, false); // newest first BackupInfo::sort_list(&mut list, false); // newest first
for info in list { for info in list {
if !verify_backup_dir(datastore, &info.backup_dir, &mut verified_chunks, &mut corrupt_chunks, worker)?{ count += 1;
if !verify_backup_dir(datastore.clone(), &info.backup_dir, verified_chunks.clone(), corrupt_chunks.clone(), worker.clone())?{
errors.push(info.backup_dir.to_string()); errors.push(info.backup_dir.to_string());
} }
if snapshot_count != 0 {
let pos = done + count;
let percentage = ((pos as f64) * 100.0)/(snapshot_count as f64);
worker.log(format!("percentage done: {:.2}% ({} of {} snapshots)", percentage, pos, snapshot_count));
}
} }
Ok(errors) Ok((count, errors))
} }
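Editor's note, worked example for the progress line added above: if earlier groups already contributed done = 4 verified snapshots, snapshot_count is 10, and the current group is on its third snapshot (count = 3), then pos = 7 and the task log reads "percentage done: 70.00% (7 of 10 snapshots)".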
/// Verify all backups inside a datastore /// Verify all backups inside a datastore
@ -263,23 +372,49 @@ pub fn verify_backup_group(datastore: &DataStore, group: &BackupGroup, worker: &
/// Returns /// Returns
/// - Ok(failed_dirs) where failed_dirs had verification errors /// - Ok(failed_dirs) where failed_dirs had verification errors
/// - Err(_) if task was aborted /// - Err(_) if task was aborted
pub fn verify_all_backups(datastore: &DataStore, worker: &WorkerTask) -> Result<Vec<String>, Error> { pub fn verify_all_backups(datastore: Arc<DataStore>, worker: Arc<WorkerTask>) -> Result<Vec<String>, Error> {
let mut errors = Vec::new(); let mut errors = Vec::new();
let list = match BackupGroup::list_groups(&datastore.base_path()) { let mut list = match BackupGroup::list_groups(&datastore.base_path()) {
Ok(list) => list, Ok(list) => list
.into_iter()
.filter(|group| !(group.backup_type() == "host" && group.backup_id() == "benchmark"))
.collect::<Vec<BackupGroup>>(),
Err(err) => { Err(err) => {
worker.log(format!("verify datastore {} - unable to list backups: {}", datastore.name(), err)); worker.log(format!("verify datastore {} - unable to list backups: {}", datastore.name(), err));
return Ok(errors); return Ok(errors);
} }
}; };
worker.log(format!("verify datastore {}", datastore.name())); list.sort_unstable();
let mut snapshot_count = 0;
for group in list.iter() {
snapshot_count += group.list_backups(&datastore.base_path())?.len();
}
// start with 16384 chunks (up to 65GB)
let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16)));
// start with 64 chunks since we assume there are few corrupt ones
let corrupt_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64)));
worker.log(format!("verify datastore {} ({} snapshots)", datastore.name(), snapshot_count));
let mut done = 0;
for group in list { for group in list {
let mut group_errors = verify_backup_group(datastore, &group, worker)?; let (count, mut group_errors) = verify_backup_group(
datastore.clone(),
&group,
verified_chunks.clone(),
corrupt_chunks.clone(),
Some((done, snapshot_count)),
worker.clone(),
)?;
errors.append(&mut group_errors); errors.append(&mut group_errors);
done += count;
} }
Ok(errors) Ok(errors)


@ -37,6 +37,7 @@ async fn run() -> Result<(), Error> {
config::update_self_signed_cert(false)?; config::update_self_signed_cert(false)?;
proxmox_backup::rrd::create_rrdb_dir()?; proxmox_backup::rrd::create_rrdb_dir()?;
proxmox_backup::config::jobstate::create_jobstate_dir()?;
if let Err(err) = generate_auth_key() { if let Err(err) = generate_auth_key() {
bail!("unable to generate auth key - {}", err); bail!("unable to generate auth key - {}", err);


@ -8,7 +8,6 @@ use std::sync::{Arc, Mutex};
use std::task::Context; use std::task::Context;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use chrono::{Local, DateTime, Utc, TimeZone};
use futures::future::FutureExt; use futures::future::FutureExt;
use futures::stream::{StreamExt, TryStreamExt}; use futures::stream::{StreamExt, TryStreamExt};
use serde_json::{json, Value}; use serde_json::{json, Value};
@ -16,11 +15,20 @@ use tokio::sync::mpsc;
use xdg::BaseDirectories; use xdg::BaseDirectories;
use pathpatterns::{MatchEntry, MatchType, PatternFlag}; use pathpatterns::{MatchEntry, MatchType, PatternFlag};
use proxmox::tools::fs::{file_get_contents, file_get_json, replace_file, CreateOptions, image_size}; use proxmox::{
use proxmox::api::{ApiHandler, ApiMethod, RpcEnvironment}; tools::{
use proxmox::api::schema::*; time::{strftime_local, epoch_i64},
use proxmox::api::cli::*; fs::{file_get_contents, file_get_json, replace_file, CreateOptions, image_size},
use proxmox::api::api; },
api::{
api,
ApiHandler,
ApiMethod,
RpcEnvironment,
schema::*,
cli::*,
},
};
use pxar::accessor::{MaybeReady, ReadAt, ReadAtOperation}; use pxar::accessor::{MaybeReady, ReadAt, ReadAtOperation};
use proxmox_backup::tools; use proxmox_backup::tools;
@ -184,7 +192,7 @@ pub fn complete_repository(_arg: &str, _param: &HashMap<String, String>) -> Vec<
result result
} }
fn connect(server: &str, userid: &Userid) -> Result<HttpClient, Error> { fn connect(server: &str, port: u16, userid: &Userid) -> Result<HttpClient, Error> {
let fingerprint = std::env::var(ENV_VAR_PBS_FINGERPRINT).ok(); let fingerprint = std::env::var(ENV_VAR_PBS_FINGERPRINT).ok();
@ -203,7 +211,7 @@ fn connect(server: &str, userid: &Userid) -> Result<HttpClient, Error> {
.fingerprint_cache(true) .fingerprint_cache(true)
.ticket_cache(true); .ticket_cache(true);
HttpClient::new(server, userid, options) HttpClient::new(server, port, userid, options)
} }
async fn view_task_result( async fn view_task_result(
@ -246,7 +254,7 @@ pub async fn api_datastore_latest_snapshot(
client: &HttpClient, client: &HttpClient,
store: &str, store: &str,
group: BackupGroup, group: BackupGroup,
) -> Result<(String, String, DateTime<Utc>), Error> { ) -> Result<(String, String, i64), Error> {
let list = api_datastore_list_snapshots(client, store, Some(group.clone())).await?; let list = api_datastore_list_snapshots(client, store, Some(group.clone())).await?;
let mut list: Vec<SnapshotListItem> = serde_json::from_value(list)?; let mut list: Vec<SnapshotListItem> = serde_json::from_value(list)?;
@ -257,7 +265,7 @@ pub async fn api_datastore_latest_snapshot(
list.sort_unstable_by(|a, b| b.backup_time.cmp(&a.backup_time)); list.sort_unstable_by(|a, b| b.backup_time.cmp(&a.backup_time));
let backup_time = Utc.timestamp(list[0].backup_time, 0); let backup_time = list[0].backup_time;
Ok((group.backup_type().to_owned(), group.backup_id().to_owned(), backup_time)) Ok((group.backup_type().to_owned(), group.backup_id().to_owned(), backup_time))
} }
@ -357,7 +365,7 @@ async fn list_backup_groups(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?; let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.user())?; let client = connect(repo.host(), repo.port(), repo.user())?;
let path = format!("api2/json/admin/datastore/{}/groups", repo.store()); let path = format!("api2/json/admin/datastore/{}/groups", repo.store());
@ -373,7 +381,7 @@ async fn list_backup_groups(param: Value) -> Result<Value, Error> {
let render_last_backup = |_v: &Value, record: &Value| -> Result<String, Error> { let render_last_backup = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: GroupListItem = serde_json::from_value(record.to_owned())?; let item: GroupListItem = serde_json::from_value(record.to_owned())?;
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.last_backup); let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.last_backup)?;
Ok(snapshot.relative_path().to_str().unwrap().to_owned()) Ok(snapshot.relative_path().to_str().unwrap().to_owned())
}; };
@ -430,7 +438,7 @@ async fn list_snapshots(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param); let output_format = get_output_format(&param);
let client = connect(repo.host(), repo.user())?; let client = connect(repo.host(), repo.port(), repo.user())?;
let group: Option<BackupGroup> = if let Some(path) = param["group"].as_str() { let group: Option<BackupGroup> = if let Some(path) = param["group"].as_str() {
Some(path.parse()?) Some(path.parse()?)
@ -444,7 +452,7 @@ async fn list_snapshots(param: Value) -> Result<Value, Error> {
let render_snapshot_path = |_v: &Value, record: &Value| -> Result<String, Error> { let render_snapshot_path = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: SnapshotListItem = serde_json::from_value(record.to_owned())?; let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time); let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
Ok(snapshot.relative_path().to_str().unwrap().to_owned()) Ok(snapshot.relative_path().to_str().unwrap().to_owned())
}; };
@ -495,14 +503,14 @@ async fn forget_snapshots(param: Value) -> Result<Value, Error> {
let path = tools::required_string_param(&param, "snapshot")?; let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?; let snapshot: BackupDir = path.parse()?;
let mut client = connect(repo.host(), repo.user())?; let mut client = connect(repo.host(), repo.port(), repo.user())?;
let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store()); let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());
let result = client.delete(&path, Some(json!({ let result = client.delete(&path, Some(json!({
"backup-type": snapshot.group().backup_type(), "backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(), "backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time().timestamp(), "backup-time": snapshot.backup_time(),
}))).await?; }))).await?;
record_repository(&repo); record_repository(&repo);
@ -525,7 +533,7 @@ async fn api_login(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?; let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.user())?; let client = connect(repo.host(), repo.port(), repo.user())?;
client.login().await?; client.login().await?;
record_repository(&repo); record_repository(&repo);
@ -582,7 +590,7 @@ async fn api_version(param: Value) -> Result<(), Error> {
let repo = extract_repository_from_value(&param); let repo = extract_repository_from_value(&param);
if let Ok(repo) = repo { if let Ok(repo) = repo {
let client = connect(repo.host(), repo.user())?; let client = connect(repo.host(), repo.port(), repo.user())?;
match client.get("api2/json/version", None).await { match client.get("api2/json/version", None).await {
Ok(mut result) => version_info["server"] = result["data"].take(), Ok(mut result) => version_info["server"] = result["data"].take(),
@ -632,14 +640,14 @@ async fn list_snapshot_files(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param); let output_format = get_output_format(&param);
let client = connect(repo.host(), repo.user())?; let client = connect(repo.host(), repo.port(), repo.user())?;
let path = format!("api2/json/admin/datastore/{}/files", repo.store()); let path = format!("api2/json/admin/datastore/{}/files", repo.store());
let mut result = client.get(&path, Some(json!({ let mut result = client.get(&path, Some(json!({
"backup-type": snapshot.group().backup_type(), "backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(), "backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time().timestamp(), "backup-time": snapshot.backup_time(),
}))).await?; }))).await?;
record_repository(&repo); record_repository(&repo);
@ -676,7 +684,7 @@ async fn start_garbage_collection(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param); let output_format = get_output_format(&param);
let mut client = connect(repo.host(), repo.user())?; let mut client = connect(repo.host(), repo.port(), repo.user())?;
let path = format!("api2/json/admin/datastore/{}/gc", repo.store()); let path = format!("api2/json/admin/datastore/{}/gc", repo.store());
@ -986,18 +994,18 @@ async fn create_backup(
} }
} }
let backup_time = Utc.timestamp(backup_time_opt.unwrap_or_else(|| Utc::now().timestamp()), 0); let backup_time = backup_time_opt.unwrap_or_else(|| epoch_i64());
let client = connect(repo.host(), repo.user())?; let client = connect(repo.host(), repo.port(), repo.user())?;
record_repository(&repo); record_repository(&repo);
println!("Starting backup: {}/{}/{}", backup_type, backup_id, BackupDir::backup_time_to_string(backup_time)); println!("Starting backup: {}/{}/{}", backup_type, backup_id, BackupDir::backup_time_to_string(backup_time)?);
println!("Client name: {}", proxmox::tools::nodename()); println!("Client name: {}", proxmox::tools::nodename());
let start_time = Local::now(); let start_time = std::time::Instant::now();
println!("Starting protocol: {}", start_time.to_rfc3339_opts(chrono::SecondsFormat::Secs, false)); println!("Starting backup protocol: {}", strftime_local("%c", epoch_i64())?);
let (crypt_config, rsa_encrypted_key) = match keydata { let (crypt_config, rsa_encrypted_key) = match keydata {
None => (None, None), None => (None, None),
@ -1026,6 +1034,7 @@ async fn create_backup(
&backup_id, &backup_id,
backup_time, backup_time,
verbose, verbose,
false
).await?; ).await?;
let previous_manifest = if let Ok(previous_manifest) = client.download_previous_manifest().await { let previous_manifest = if let Ok(previous_manifest) = client.download_previous_manifest().await {
@ -1034,7 +1043,7 @@ async fn create_backup(
None None
}; };
let snapshot = BackupDir::new(backup_type, backup_id, backup_time.timestamp()); let snapshot = BackupDir::new(backup_type, backup_id, backup_time)?;
let mut manifest = BackupManifest::new(snapshot); let mut manifest = BackupManifest::new(snapshot);
let mut catalog = None; let mut catalog = None;
@ -1149,11 +1158,11 @@ async fn create_backup(
client.finish().await?; client.finish().await?;
let end_time = Local::now(); let end_time = std::time::Instant::now();
let elapsed = end_time.signed_duration_since(start_time); let elapsed = end_time.duration_since(start_time);
println!("Duration: {}", elapsed); println!("Duration: {:.2}s", elapsed.as_secs_f64());
println!("End Time: {}", end_time.to_rfc3339_opts(chrono::SecondsFormat::Secs, false)); println!("End Time: {}", strftime_local("%c", epoch_i64())?);
Ok(Value::Null) Ok(Value::Null)
} }
@ -1290,7 +1299,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
let archive_name = tools::required_string_param(&param, "archive-name")?; let archive_name = tools::required_string_param(&param, "archive-name")?;
let client = connect(repo.host(), repo.user())?; let client = connect(repo.host(), repo.port(), repo.user())?;
record_repository(&repo); record_repository(&repo);
@ -1463,7 +1472,7 @@ async fn upload_log(param: Value) -> Result<Value, Error> {
let snapshot = tools::required_string_param(&param, "snapshot")?; let snapshot = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = snapshot.parse()?; let snapshot: BackupDir = snapshot.parse()?;
let mut client = connect(repo.host(), repo.user())?; let mut client = connect(repo.host(), repo.port(), repo.user())?;
let (keydata, crypt_mode) = keyfile_parameters(&param)?; let (keydata, crypt_mode) = keyfile_parameters(&param)?;
@ -1491,7 +1500,7 @@ async fn upload_log(param: Value) -> Result<Value, Error> {
let args = json!({ let args = json!({
"backup-type": snapshot.group().backup_type(), "backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(), "backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time().timestamp(), "backup-time": snapshot.backup_time(),
}); });
let body = hyper::Body::from(raw_data); let body = hyper::Body::from(raw_data);
@ -1534,7 +1543,7 @@ fn prune<'a>(
async fn prune_async(mut param: Value) -> Result<Value, Error> { async fn prune_async(mut param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?; let repo = extract_repository_from_value(&param)?;
let mut client = connect(repo.host(), repo.user())?; let mut client = connect(repo.host(), repo.port(), repo.user())?;
let path = format!("api2/json/admin/datastore/{}/prune", repo.store()); let path = format!("api2/json/admin/datastore/{}/prune", repo.store());
@ -1559,7 +1568,7 @@ async fn prune_async(mut param: Value) -> Result<Value, Error> {
let render_snapshot_path = |_v: &Value, record: &Value| -> Result<String, Error> { let render_snapshot_path = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: PruneListItem = serde_json::from_value(record.to_owned())?; let item: PruneListItem = serde_json::from_value(record.to_owned())?;
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time); let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
Ok(snapshot.relative_path().to_str().unwrap().to_owned()) Ok(snapshot.relative_path().to_str().unwrap().to_owned())
}; };
@ -1617,7 +1626,7 @@ async fn status(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param); let output_format = get_output_format(&param);
let client = connect(repo.host(), repo.user())?; let client = connect(repo.host(), repo.port(), repo.user())?;
let path = format!("api2/json/admin/datastore/{}/status", repo.store()); let path = format!("api2/json/admin/datastore/{}/status", repo.store());
@ -1662,7 +1671,7 @@ async fn try_get(repo: &BackupRepository, url: &str) -> Value {
.fingerprint_cache(true) .fingerprint_cache(true)
.ticket_cache(true); .ticket_cache(true);
let client = match HttpClient::new(repo.host(), repo.user(), options) { let client = match HttpClient::new(repo.host(), repo.port(), repo.user(), options) {
Ok(v) => v, Ok(v) => v,
_ => return Value::Null, _ => return Value::Null,
}; };
@ -1751,8 +1760,9 @@ async fn complete_backup_snapshot_do(param: &HashMap<String, String>) -> Vec<Str
if let (Some(backup_id), Some(backup_type), Some(backup_time)) = if let (Some(backup_id), Some(backup_type), Some(backup_time)) =
(item["backup-id"].as_str(), item["backup-type"].as_str(), item["backup-time"].as_i64()) (item["backup-id"].as_str(), item["backup-type"].as_str(), item["backup-time"].as_i64())
{ {
let snapshot = BackupDir::new(backup_type, backup_id, backup_time); if let Ok(snapshot) = BackupDir::new(backup_type, backup_id, backup_time) {
result.push(snapshot.relative_path().to_str().unwrap().to_owned()); result.push(snapshot.relative_path().to_str().unwrap().to_owned());
}
} }
} }
} }
@ -1786,7 +1796,7 @@ async fn complete_server_file_name_do(param: &HashMap<String, String>) -> Vec<St
let query = tools::json_object_to_query(json!({ let query = tools::json_object_to_query(json!({
"backup-type": snapshot.group().backup_type(), "backup-type": snapshot.group().backup_type(),
"backup-id": snapshot.group().backup_id(), "backup-id": snapshot.group().backup_id(),
"backup-time": snapshot.backup_time().timestamp(), "backup-time": snapshot.backup_time(),
})).unwrap(); })).unwrap();
let path = format!("api2/json/admin/datastore/{}/files?{}", repo.store(), query); let path = format!("api2/json/admin/datastore/{}/files?{}", repo.store(), query);


@ -9,7 +9,7 @@ use proxmox_backup::tools;
use proxmox_backup::config; use proxmox_backup::config;
use proxmox_backup::api2::{self, types::* }; use proxmox_backup::api2::{self, types::* };
use proxmox_backup::client::*; use proxmox_backup::client::*;
use proxmox_backup::tools::ticket::*; use proxmox_backup::tools::ticket::Ticket;
use proxmox_backup::auth_helpers::*; use proxmox_backup::auth_helpers::*;
mod proxmox_backup_manager; mod proxmox_backup_manager;
@ -59,17 +59,13 @@ fn connect() -> Result<HttpClient, Error> {
.verify_cert(false); // not required for connection to localhost .verify_cert(false); // not required for connection to localhost
let client = if uid.is_root() { let client = if uid.is_root() {
let ticket = assemble_rsa_ticket( let ticket = Ticket::new("PBS", Userid::root_userid())?
private_auth_key(), .sign(private_auth_key(), None)?;
"PBS",
Some(Userid::root_userid()),
None,
)?;
options = options.password(Some(ticket)); options = options.password(Some(ticket));
HttpClient::new("localhost", Userid::root_userid(), options)? HttpClient::new("localhost", 8007, Userid::root_userid(), options)?
} else { } else {
options = options.ticket_cache(true).interactive(true); options = options.ticket_cache(true).interactive(true);
HttpClient::new("localhost", Userid::root_userid(), options)? HttpClient::new("localhost", 8007, Userid::root_userid(), options)?
}; };
Ok(client) Ok(client)
@ -414,6 +410,7 @@ pub fn complete_remote_datastore_name(_arg: &str, param: &HashMap<String, String
let client = HttpClient::new( let client = HttpClient::new(
&remote.host, &remote.host,
remote.port.unwrap_or(8007),
&remote.userid, &remote.userid,
options, options,
)?; )?;


@ -1,4 +1,4 @@
use std::sync::Arc; use std::sync::{Arc};
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
@ -13,18 +13,26 @@ use proxmox_backup::api2::types::Userid;
use proxmox_backup::configdir; use proxmox_backup::configdir;
use proxmox_backup::buildcfg; use proxmox_backup::buildcfg;
use proxmox_backup::server; use proxmox_backup::server;
use proxmox_backup::tools::{daemon, epoch_now, epoch_now_u64}; use proxmox_backup::tools::daemon;
use proxmox_backup::server::{ApiConfig, rest::*}; use proxmox_backup::server::{ApiConfig, rest::*};
use proxmox_backup::auth_helpers::*; use proxmox_backup::auth_helpers::*;
use proxmox_backup::tools::disks::{ DiskManage, zfs_pool_stats }; use proxmox_backup::tools::disks::{ DiskManage, zfs_pool_stats };
fn main() { use proxmox_backup::api2::pull::do_sync_job;
fn main() -> Result<(), Error> {
proxmox_backup::tools::setup_safe_path_env(); proxmox_backup::tools::setup_safe_path_env();
if let Err(err) = proxmox_backup::tools::runtime::main(run()) { let backup_uid = proxmox_backup::backup::backup_user()?.uid;
eprintln!("Error: {}", err); let backup_gid = proxmox_backup::backup::backup_group()?.gid;
std::process::exit(-1); let running_uid = nix::unistd::Uid::effective();
let running_gid = nix::unistd::Gid::effective();
if running_uid != backup_uid || running_gid != backup_gid {
bail!("proxy not running as backup user or group (got uid {} gid {})", running_uid, running_gid);
} }
proxmox_backup::tools::runtime::main(run())
} }
async fn run() -> Result<(), Error> { async fn run() -> Result<(), Error> {
@ -41,15 +49,11 @@ async fn run() -> Result<(), Error> {
let mut config = ApiConfig::new( let mut config = ApiConfig::new(
buildcfg::JS_DIR, &proxmox_backup::api2::ROUTER, RpcEnvironmentType::PUBLIC)?; buildcfg::JS_DIR, &proxmox_backup::api2::ROUTER, RpcEnvironmentType::PUBLIC)?;
// add default dirs which includes jquery and bootstrap
// my $base = '/usr/share/libpve-http-server-perl';
// add_dirs($self->{dirs}, '/css/' => "$base/css/");
// add_dirs($self->{dirs}, '/js/' => "$base/js/");
// add_dirs($self->{dirs}, '/fonts/' => "$base/fonts/");
config.add_alias("novnc", "/usr/share/novnc-pve"); config.add_alias("novnc", "/usr/share/novnc-pve");
config.add_alias("extjs", "/usr/share/javascript/extjs"); config.add_alias("extjs", "/usr/share/javascript/extjs");
config.add_alias("fontawesome", "/usr/share/fonts-font-awesome"); config.add_alias("fontawesome", "/usr/share/fonts-font-awesome");
config.add_alias("xtermjs", "/usr/share/pve-xtermjs"); config.add_alias("xtermjs", "/usr/share/pve-xtermjs");
config.add_alias("locale", "/usr/share/pbs-i18n");
config.add_alias("widgettoolkit", "/usr/share/javascript/proxmox-widget-toolkit"); config.add_alias("widgettoolkit", "/usr/share/javascript/proxmox-widget-toolkit");
config.add_alias("css", "/usr/share/javascript/proxmox-backup/css"); config.add_alias("css", "/usr/share/javascript/proxmox-backup/css");
config.add_alias("docs", "/usr/share/doc/proxmox-backup/html"); config.add_alias("docs", "/usr/share/doc/proxmox-backup/html");
@ -83,8 +87,6 @@ async fn run() -> Result<(), Error> {
let acceptor = Arc::clone(&acceptor); let acceptor = Arc::clone(&acceptor);
async move { async move {
sock.set_nodelay(true).unwrap(); sock.set_nodelay(true).unwrap();
sock.set_send_buffer_size(1024*1024).unwrap();
sock.set_recv_buffer_size(1024*1024).unwrap();
Ok(tokio_openssl::accept(&acceptor, sock) Ok(tokio_openssl::accept(&acceptor, sock)
.await .await
.ok() // handshake errors aren't be fatal, so return None to filter .ok() // handshake errors aren't be fatal, so return None to filter
@ -142,11 +144,12 @@ fn start_task_scheduler() {
tokio::spawn(task.map(|_| ())); tokio::spawn(task.map(|_| ()));
} }
use std::time:: {Instant, Duration}; use std::time::{SystemTime, Instant, Duration, UNIX_EPOCH};
fn next_minute() -> Result<Instant, Error> { fn next_minute() -> Result<Instant, Error> {
let epoch_now = epoch_now()?; let now = SystemTime::now();
let epoch_next = Duration::from_secs((epoch_now.as_secs()/60 + 1)*60); let epoch_now = now.duration_since(UNIX_EPOCH)?;
let epoch_next = Duration::from_secs((epoch_now.as_secs()/60 + 1)*60);
Ok(Instant::now() + epoch_next - epoch_now) Ok(Instant::now() + epoch_next - epoch_now)
} }
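Editor's note, worked example for next_minute(): with epoch_now.as_secs() = 1601553637, integer division gives 1601553637 / 60 = 26692560, so epoch_next = 26692561 * 60 = 1601553660 seconds, and the scheduler loop wakes up 23 seconds later, exactly on the next full minute.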
@ -193,45 +196,21 @@ async fn schedule_tasks() -> Result<(), Error> {
schedule_datastore_garbage_collection().await; schedule_datastore_garbage_collection().await;
schedule_datastore_prune().await; schedule_datastore_prune().await;
schedule_datastore_verification().await;
schedule_datastore_sync_jobs().await; schedule_datastore_sync_jobs().await;
schedule_task_log_rotate().await;
Ok(()) Ok(())
} }
fn lookup_last_worker(worker_type: &str, worker_id: &str) -> Result<Option<server::UPID>, Error> {
let list = proxmox_backup::server::read_task_list()?;
let mut last: Option<&server::UPID> = None;
for entry in list.iter() {
if entry.upid.worker_type == worker_type {
if let Some(ref id) = entry.upid.worker_id {
if id == worker_id {
match last {
Some(ref upid) => {
if upid.starttime < entry.upid.starttime {
last = Some(&entry.upid)
}
}
None => {
last = Some(&entry.upid)
}
}
}
}
}
}
Ok(last.cloned())
}
async fn schedule_datastore_garbage_collection() { async fn schedule_datastore_garbage_collection() {
use proxmox_backup::backup::DataStore; use proxmox_backup::backup::DataStore;
use proxmox_backup::server::{UPID, WorkerTask}; use proxmox_backup::server::{UPID, WorkerTask};
use proxmox_backup::config::datastore::{self, DataStoreConfig}; use proxmox_backup::config::{
jobstate::{self, Job},
datastore::{self, DataStoreConfig}
};
use proxmox_backup::tools::systemd::time::{ use proxmox_backup::tools::systemd::time::{
parse_calendar_event, compute_next_event}; parse_calendar_event, compute_next_event};
@ -287,33 +266,33 @@ async fn schedule_datastore_garbage_collection() {
} }
} }
} else { } else {
match lookup_last_worker(worker_type, &store) { match jobstate::last_run_time(worker_type, &store) {
Ok(Some(upid)) => upid.starttime, Ok(time) => time,
Ok(None) => 0,
Err(err) => { Err(err) => {
eprintln!("lookup_last_job_start failed: {}", err); eprintln!("could not get last run time of {} {}: {}", worker_type, store, err);
continue; continue;
} }
} }
}; };
let next = match compute_next_event(&event, last, false) { let next = match compute_next_event(&event, last, false) {
Ok(next) => next, Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => { Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err); eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue; continue;
} }
}; };
let now = match epoch_now_u64() { let now = proxmox::tools::time::epoch_i64();
Ok(epoch_now) => epoch_now as i64,
Err(err) => {
eprintln!("query system time failed - {}", err);
continue;
}
};
if next > now { continue; } if next > now { continue; }
let mut job = match Job::new(worker_type, &store) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
let store2 = store.clone(); let store2 = store.clone();
if let Err(err) = WorkerTask::new_thread( if let Err(err) = WorkerTask::new_thread(
@ -322,9 +301,20 @@ async fn schedule_datastore_garbage_collection() {
Userid::backup_userid().clone(), Userid::backup_userid().clone(),
false, false,
move |worker| { move |worker| {
job.start(&worker.upid().to_string())?;
worker.log(format!("starting garbage collection on store {}", store)); worker.log(format!("starting garbage collection on store {}", store));
worker.log(format!("task triggered by schedule '{}'", event_str)); worker.log(format!("task triggered by schedule '{}'", event_str));
datastore.garbage_collection(&worker)
let result = datastore.garbage_collection(&worker);
let status = worker.create_state(&result);
if let Err(err) = job.finish(status) {
eprintln!("could not finish job state for {}: {}", worker_type, err);
}
result
} }
) { ) {
eprintln!("unable to start garbage collection on store {} - {}", store2, err); eprintln!("unable to start garbage collection on store {} - {}", store2, err);
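Editor's note: the same Job/jobstate sequence recurs in the prune, verify, sync and log-rotate schedulers below: acquire the per-job lock, record the start UPID, run the task, then persist the final state. A compact sketch of that ordering with hypothetical stand-in types (not the proxmox_backup::config::jobstate API):

    // Hypothetical stand-ins, only to illustrate the start/finish sequencing.
    struct Job; // in the real code, Job::new() also acquires a per-job lock file

    impl Job {
        fn new(_worker_type: &str, _id: &str) -> Result<Self, ()> { Ok(Job) }
        fn start(&mut self, upid: &str) -> Result<(), String> {
            println!("job started by {}", upid);
            Ok(())
        }
        fn finish(self, task_result: &Result<(), String>) -> Result<(), String> {
            println!("job finished, success = {}", task_result.is_ok());
            Ok(())
        }
    }

    fn main() {
        let mut job = match Job::new("garbage_collection", "store1") {
            Ok(job) => job,
            Err(_) => return, // another run still holds the lock
        };
        job.start("UPID:example").unwrap();

        let result: Result<(), String> = Ok(()); // the actual task runs here

        if let Err(err) = job.finish(&result) {
            eprintln!("could not finish job state: {}", err);
        }
    }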
@ -335,9 +325,12 @@ async fn schedule_datastore_garbage_collection() {
async fn schedule_datastore_prune() { async fn schedule_datastore_prune() {
use proxmox_backup::backup::{ use proxmox_backup::backup::{
PruneOptions, DataStore, BackupGroup, BackupDir, compute_prune_info}; PruneOptions, DataStore, BackupGroup, compute_prune_info};
use proxmox_backup::server::{WorkerTask}; use proxmox_backup::server::{WorkerTask};
use proxmox_backup::config::datastore::{self, DataStoreConfig}; use proxmox_backup::config::{
jobstate::{self, Job},
datastore::{self, DataStoreConfig}
};
use proxmox_backup::tools::systemd::time::{ use proxmox_backup::tools::systemd::time::{
parse_calendar_event, compute_next_event}; parse_calendar_event, compute_next_event};
@ -394,37 +387,32 @@ async fn schedule_datastore_prune() {
let worker_type = "prune"; let worker_type = "prune";
let last = match lookup_last_worker(worker_type, &store) { let last = match jobstate::last_run_time(worker_type, &store) {
Ok(Some(upid)) => { Ok(time) => time,
if proxmox_backup::server::worker_is_active_local(&upid) {
continue;
}
upid.starttime
}
Ok(None) => 0,
Err(err) => { Err(err) => {
eprintln!("lookup_last_job_start failed: {}", err); eprintln!("could not get last run time of {} {}: {}", worker_type, store, err);
continue; continue;
} }
}; };
let next = match compute_next_event(&event, last, false) { let next = match compute_next_event(&event, last, false) {
Ok(next) => next, Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => { Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err); eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue; continue;
} }
}; };
let now = match epoch_now_u64() { let now = proxmox::tools::time::epoch_i64();
Ok(epoch_now) => epoch_now as i64,
Err(err) => {
eprintln!("query system time failed - {}", err);
continue;
}
};
if next > now { continue; } if next > now { continue; }
let mut job = match Job::new(worker_type, &store) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
let store2 = store.clone(); let store2 = store.clone();
if let Err(err) = WorkerTask::new_thread( if let Err(err) = WorkerTask::new_thread(
@ -433,35 +421,47 @@ async fn schedule_datastore_prune() {
Userid::backup_userid().clone(), Userid::backup_userid().clone(),
false, false,
move |worker| { move |worker| {
worker.log(format!("Starting datastore prune on store \"{}\"", store));
worker.log(format!("task triggered by schedule '{}'", event_str));
worker.log(format!("retention options: {}", prune_options.cli_options_string()));
let base_path = datastore.base_path(); job.start(&worker.upid().to_string())?;
let groups = BackupGroup::list_groups(&base_path)?; let result = try_block!({
for group in groups {
let list = group.list_backups(&base_path)?;
let mut prune_info = compute_prune_info(list, &prune_options)?;
prune_info.reverse(); // delete older snapshots first
worker.log(format!("Starting prune on store \"{}\" group \"{}/{}\"", worker.log(format!("Starting datastore prune on store \"{}\"", store));
store, group.backup_type(), group.backup_id())); worker.log(format!("task triggered by schedule '{}'", event_str));
worker.log(format!("retention options: {}", prune_options.cli_options_string()));
for (info, keep) in prune_info { let base_path = datastore.base_path();
worker.log(format!(
"{} {}/{}/{}",
if keep { "keep" } else { "remove" },
group.backup_type(), group.backup_id(),
BackupDir::backup_time_to_string(info.backup_dir.backup_time())));
if !keep { let groups = BackupGroup::list_groups(&base_path)?;
datastore.remove_backup_dir(&info.backup_dir, true)?; for group in groups {
let list = group.list_backups(&base_path)?;
let mut prune_info = compute_prune_info(list, &prune_options)?;
prune_info.reverse(); // delete older snapshots first
worker.log(format!("Starting prune on store \"{}\" group \"{}/{}\"",
store, group.backup_type(), group.backup_id()));
for (info, keep) in prune_info {
worker.log(format!(
"{} {}/{}/{}",
if keep { "keep" } else { "remove" },
group.backup_type(), group.backup_id(),
info.backup_dir.backup_time_string()));
if !keep {
datastore.remove_backup_dir(&info.backup_dir, true)?;
}
} }
} }
Ok(())
});
let status = worker.create_state(&result);
if let Err(err) = job.finish(status) {
eprintln!("could not finish job state for {}: {}", worker_type, err);
} }
Ok(()) result
} }
) { ) {
eprintln!("unable to start datastore prune on store {} - {}", store2, err); eprintln!("unable to start datastore prune on store {} - {}", store2, err);
@ -469,13 +469,124 @@ async fn schedule_datastore_prune() {
} }
} }
async fn schedule_datastore_verification() {
use proxmox_backup::backup::{DataStore, verify_all_backups};
use proxmox_backup::server::{WorkerTask};
use proxmox_backup::config::{
jobstate::{self, Job},
datastore::{self, DataStoreConfig}
};
use proxmox_backup::tools::systemd::time::{
parse_calendar_event, compute_next_event};
let config = match datastore::config() {
Err(err) => {
eprintln!("unable to read datastore config - {}", err);
return;
}
Ok((config, _digest)) => config,
};
for (store, (_, store_config)) in config.sections {
let datastore = match DataStore::lookup_datastore(&store) {
Ok(datastore) => datastore,
Err(err) => {
eprintln!("lookup_datastore failed - {}", err);
continue;
}
};
let store_config: DataStoreConfig = match serde_json::from_value(store_config) {
Ok(c) => c,
Err(err) => {
eprintln!("datastore config from_value failed - {}", err);
continue;
}
};
let event_str = match store_config.verify_schedule {
Some(event_str) => event_str,
None => continue,
};
let event = match parse_calendar_event(&event_str) {
Ok(event) => event,
Err(err) => {
eprintln!("unable to parse schedule '{}' - {}", event_str, err);
continue;
}
};
let worker_type = "verify";
let last = match jobstate::last_run_time(worker_type, &store) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of {} {}: {}", worker_type, store, err);
continue;
}
};
let next = match compute_next_event(&event, last, false) {
Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue;
}
};
let now = proxmox::tools::time::epoch_i64();
if next > now { continue; }
let mut job = match Job::new(worker_type, &store) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
let worker_id = store.clone();
let store2 = store.clone();
if let Err(err) = WorkerTask::new_thread(
worker_type,
Some(worker_id),
Userid::backup_userid().clone(),
false,
move |worker| {
job.start(&worker.upid().to_string())?;
worker.log(format!("starting verification on store {}", store2));
worker.log(format!("task triggered by schedule '{}'", event_str));
let result = try_block!({
let failed_dirs = verify_all_backups(datastore, worker.clone())?;
if failed_dirs.len() > 0 {
worker.log("Failed to verify following snapshots:");
for dir in failed_dirs {
worker.log(format!("\t{}", dir));
}
Err(format_err!("verification failed - please check the log for details"))
} else {
Ok(())
}
});
let status = worker.create_state(&result);
if let Err(err) = job.finish(status) {
eprintln!("could not finish job state for {}: {}", worker_type, err);
}
result
},
) {
eprintln!("unable to start verification on store {} - {}", store, err);
}
}
}
async fn schedule_datastore_sync_jobs() { async fn schedule_datastore_sync_jobs() {
use proxmox_backup::{ use proxmox_backup::{
backup::DataStore, config::{ sync::{self, SyncJobConfig}, jobstate::{self, Job} },
client::{ HttpClient, HttpClientOptions, BackupRepository, pull::pull_store },
server::{ WorkerTask },
config::{ sync::{self, SyncJobConfig}, remote::{self, Remote} },
tools::systemd::time::{ parse_calendar_event, compute_next_event }, tools::systemd::time::{ parse_calendar_event, compute_next_event },
}; };
@ -487,14 +598,6 @@ async fn schedule_datastore_sync_jobs() {
Ok((config, _digest)) => config, Ok((config, _digest)) => config,
}; };
let remote_config = match remote::config() {
Err(err) => {
eprintln!("unable to read remote config - {}", err);
return;
}
Ok((config, _digest)) => config,
};
for (job_id, (_, job_config)) in config.sections { for (job_id, (_, job_config)) in config.sections {
let job_config: SyncJobConfig = match serde_json::from_value(job_config) { let job_config: SyncJobConfig = match serde_json::from_value(job_config) {
Ok(c) => c, Ok(c) => c,
@ -519,92 +622,135 @@ async fn schedule_datastore_sync_jobs() {
let worker_type = "syncjob"; let worker_type = "syncjob";
let last = match lookup_last_worker(worker_type, &job_id) { let last = match jobstate::last_run_time(worker_type, &job_id) {
Ok(Some(upid)) => { Ok(time) => time,
if proxmox_backup::server::worker_is_active_local(&upid) {
continue;
}
upid.starttime
},
Ok(None) => 0,
Err(err) => { Err(err) => {
eprintln!("lookup_last_job_start failed: {}", err); eprintln!("could not get last run time of {} {}: {}", worker_type, job_id, err);
continue; continue;
} }
}; };
let next = match compute_next_event(&event, last, false) { let next = match compute_next_event(&event, last, false) {
Ok(next) => next, Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => { Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err); eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue; continue;
} }
}; };
let now = match epoch_now_u64() { let now = proxmox::tools::time::epoch_i64();
Ok(epoch_now) => epoch_now as i64,
Err(err) => {
eprintln!("query system time failed - {}", err);
continue;
}
};
if next > now { continue; } if next > now { continue; }
let job = match Job::new(worker_type, &job_id) {
let job_id2 = job_id.clone(); Ok(job) => job,
Err(_) => continue, // could not get lock
let tgt_store = match DataStore::lookup_datastore(&job_config.store) {
Ok(datastore) => datastore,
Err(err) => {
eprintln!("lookup_datastore '{}' failed - {}", job_config.store, err);
continue;
}
};
let remote: Remote = match remote_config.lookup("remote", &job_config.remote) {
Ok(remote) => remote,
Err(err) => {
eprintln!("remote_config lookup failed: {}", err);
continue;
}
}; };
let userid = Userid::backup_userid().clone(); let userid = Userid::backup_userid().clone();
let delete = job_config.remove_vanished.unwrap_or(true); if let Err(err) = do_sync_job(job, job_config, &userid, Some(event_str)) {
eprintln!("unable to start datastore sync job {} - {}", &job_id, err);
if let Err(err) = WorkerTask::spawn(
worker_type,
Some(job_id.clone()),
userid.clone(),
false,
move |worker| async move {
worker.log(format!("Starting datastore sync job '{}'", job_id));
worker.log(format!("task triggered by schedule '{}'", event_str));
worker.log(format!("Sync datastore '{}' from '{}/{}'",
job_config.store, job_config.remote, job_config.remote_store));
let options = HttpClientOptions::new()
.password(Some(remote.password.clone()))
.fingerprint(remote.fingerprint.clone());
let client = HttpClient::new(&remote.host, &remote.userid, options)?;
let _auth_info = client.login() // make sure we can auth
.await
.map_err(|err| format_err!("remote connection to '{}' failed - {}", remote.host, err))?;
let src_repo = BackupRepository::new(Some(remote.userid), Some(remote.host), job_config.remote_store);
pull_store(&worker, &client, &src_repo, tgt_store, delete, userid).await?;
Ok(())
}
) {
eprintln!("unable to start datastore sync job {} - {}", job_id2, err);
} }
} }
} }
async fn schedule_task_log_rotate() {
use proxmox_backup::{
config::jobstate::{self, Job},
server::rotate_task_log_archive,
};
use proxmox_backup::server::WorkerTask;
use proxmox_backup::tools::systemd::time::{
parse_calendar_event, compute_next_event};
let worker_type = "logrotate";
let job_id = "task-archive";
let last = match jobstate::last_run_time(worker_type, job_id) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of task log archive rotation: {}", err);
return;
}
};
// schedule daily at 00:00 like normal logrotate
let schedule = "00:00";
let event = match parse_calendar_event(schedule) {
Ok(event) => event,
Err(err) => {
// should not happen?
eprintln!("unable to parse schedule '{}' - {}", schedule, err);
return;
}
};
let next = match compute_next_event(&event, last, false) {
Ok(Some(next)) => next,
Ok(None) => return,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", schedule, err);
return;
}
};
let now = proxmox::tools::time::epoch_i64();
if next > now {
// if we never ran the rotation, schedule instantly
match jobstate::JobState::load(worker_type, job_id) {
Ok(state) => match state {
jobstate::JobState::Created { .. } => {},
_ => return,
},
_ => return,
}
}
let mut job = match Job::new(worker_type, job_id) {
Ok(job) => job,
Err(_) => return, // could not get lock
};
if let Err(err) = WorkerTask::new_thread(
worker_type,
Some(job_id.to_string()),
Userid::backup_userid().clone(),
false,
move |worker| {
job.start(&worker.upid().to_string())?;
worker.log(format!("starting task log rotation"));
// one entry has normally about ~100-150 bytes
let max_size = 500000; // at least 5000 entries
let max_files = 20; // at least 100000 entries
let result = try_block!({
let has_rotated = rotate_task_log_archive(max_size, true, Some(max_files))?;
if has_rotated {
worker.log(format!("task log archive was rotated"));
} else {
worker.log(format!("task log archive was not rotated"));
}
Ok(())
});
let status = worker.create_state(&result);
if let Err(err) = job.finish(status) {
eprintln!("could not finish job state for {}: {}", worker_type, err);
}
result
},
) {
eprintln!("unable to start task log rotation: {}", err);
}
}
async fn run_stat_generator() { async fn run_stat_generator() {
let mut count = 0; let mut count = 0;


@ -3,7 +3,6 @@ use std::sync::Arc;
use anyhow::{Error}; use anyhow::{Error};
use serde_json::Value; use serde_json::Value;
use chrono::{TimeZone, Utc};
use serde::Serialize; use serde::Serialize;
use proxmox::api::{ApiMethod, RpcEnvironment}; use proxmox::api::{ApiMethod, RpcEnvironment};
@ -22,6 +21,7 @@ use proxmox_backup::backup::{
load_and_decrypt_key, load_and_decrypt_key,
CryptConfig, CryptConfig,
KeyDerivationConfig, KeyDerivationConfig,
DataChunkBuilder,
}; };
use proxmox_backup::client::*; use proxmox_backup::client::*;
@ -61,6 +61,9 @@ struct Speed {
"aes256_gcm": { "aes256_gcm": {
type: Speed, type: Speed,
}, },
"verify": {
type: Speed,
},
}, },
)] )]
#[derive(Copy, Clone, Serialize)] #[derive(Copy, Clone, Serialize)]
@ -68,7 +71,7 @@ struct Speed {
struct BenchmarkResult { struct BenchmarkResult {
/// TLS upload speed /// TLS upload speed
tls: Speed, tls: Speed,
/// SHA256 checksum comptation speed /// SHA256 checksum computation speed
sha256: Speed, sha256: Speed,
/// ZStd level 1 compression speed /// ZStd level 1 compression speed
compress: Speed, compress: Speed,
@ -76,29 +79,34 @@ struct BenchmarkResult {
decompress: Speed, decompress: Speed,
/// AES256 GCM encryption speed /// AES256 GCM encryption speed
aes256_gcm: Speed, aes256_gcm: Speed,
/// Verify speed
verify: Speed,
} }
static BENCHMARK_RESULT_2020_TOP: BenchmarkResult = BenchmarkResult { static BENCHMARK_RESULT_2020_TOP: BenchmarkResult = BenchmarkResult {
tls: Speed { tls: Speed {
speed: None, speed: None,
top: 1_000_000.0 * 590.0, // TLS to localhost, AMD Ryzen 7 2700X top: 1_000_000.0 * 690.0, // TLS to localhost, AMD Ryzen 7 2700X
}, },
sha256: Speed { sha256: Speed {
speed: None, speed: None,
top: 1_000_000.0 * 2120.0, // AMD Ryzen 7 2700X top: 1_000_000.0 * 2022.0, // AMD Ryzen 7 2700X
}, },
compress: Speed { compress: Speed {
speed: None, speed: None,
top: 1_000_000.0 * 2158.0, // AMD Ryzen 7 2700X top: 1_000_000.0 * 752.0, // AMD Ryzen 7 2700X
}, },
decompress: Speed { decompress: Speed {
speed: None, speed: None,
top: 1_000_000.0 * 8062.0, // AMD Ryzen 7 2700X top: 1_000_000.0 * 1198.0, // AMD Ryzen 7 2700X
}, },
aes256_gcm: Speed { aes256_gcm: Speed {
speed: None, speed: None,
top: 1_000_000.0 * 3803.0, // AMD Ryzen 7 2700X top: 1_000_000.0 * 3645.0, // AMD Ryzen 7 2700X
},
verify: Speed {
speed: None,
top: 1_000_000.0 * 758.0, // AMD Ryzen 7 2700X
}, },
}; };
@ -187,7 +195,7 @@ fn render_result(
.header("TLS (maximal backup upload speed)") .header("TLS (maximal backup upload speed)")
.right_align(false).renderer(render_speed)) .right_align(false).renderer(render_speed))
.column(ColumnConfig::new("sha256") .column(ColumnConfig::new("sha256")
.header("SHA256 checksum comptation speed") .header("SHA256 checksum computation speed")
.right_align(false).renderer(render_speed)) .right_align(false).renderer(render_speed))
.column(ColumnConfig::new("compress") .column(ColumnConfig::new("compress")
.header("ZStd level 1 compression speed") .header("ZStd level 1 compression speed")
@ -195,7 +203,10 @@ fn render_result(
.column(ColumnConfig::new("decompress") .column(ColumnConfig::new("decompress")
.header("ZStd level 1 decompression speed") .header("ZStd level 1 decompression speed")
.right_align(false).renderer(render_speed)) .right_align(false).renderer(render_speed))
.column(ColumnConfig::new("aes256_gcm") .column(ColumnConfig::new("verify")
.header("Chunk verification speed")
.right_align(false).renderer(render_speed))
.column(ColumnConfig::new("aes256_gcm")
.header("AES256 GCM encryption speed") .header("AES256 GCM encryption speed")
.right_align(false).renderer(render_speed)); .right_align(false).renderer(render_speed));
@ -212,9 +223,9 @@ async fn test_upload_speed(
verbose: bool, verbose: bool,
) -> Result<(), Error> { ) -> Result<(), Error> {
let backup_time = Utc.timestamp(Utc::now().timestamp(), 0); let backup_time = proxmox::tools::time::epoch_i64();
let client = connect(repo.host(), repo.user())?; let client = connect(repo.host(), repo.port(), repo.user())?;
record_repository(&repo); record_repository(&repo);
if verbose { eprintln!("Connecting to backup server"); } if verbose { eprintln!("Connecting to backup server"); }
@ -226,6 +237,7 @@ async fn test_upload_speed(
"benchmark", "benchmark",
backup_time, backup_time,
false, false,
true
).await?; ).await?;
if verbose { eprintln!("Start TLS speed test"); } if verbose { eprintln!("Start TLS speed test"); }
@ -257,7 +269,17 @@ fn test_crypt_speed(
let crypt_config = CryptConfig::new(testkey)?; let crypt_config = CryptConfig::new(testkey)?;
let random_data = proxmox::sys::linux::random_data(1024*1024)?; //let random_data = proxmox::sys::linux::random_data(1024*1024)?;
let mut random_data = vec![];
// generate pseudo random byte sequence
for i in 0..256*1024 {
for j in 0..4 {
let byte = ((i >> (j<<3))&0xff) as u8;
random_data.push(byte);
}
}
assert_eq!(random_data.len(), 1024*1024);
let start_time = std::time::Instant::now(); let start_time = std::time::Instant::now();
@ -322,5 +344,23 @@ fn test_crypt_speed(
eprintln!("AES256/GCM speed: {:.2} MB/s", speed/1_000_000_.0); eprintln!("AES256/GCM speed: {:.2} MB/s", speed/1_000_000_.0);
let start_time = std::time::Instant::now();
let (chunk, digest) = DataChunkBuilder::new(&random_data)
.compress(true)
.build()?;
let mut bytes = 0;
loop {
chunk.verify_unencrypted(random_data.len(), &digest)?;
bytes += random_data.len();
if start_time.elapsed().as_micros() > 1_000_000 { break; }
}
let speed = (bytes as f64)/start_time.elapsed().as_secs_f64();
benchmark_result.verify.speed = Some(speed);
eprintln!("Verify speed: {:.2} MB/s", speed/1_000_000_.0);
Ok(()) Ok(())
} }
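Each benchmark above follows the same pattern: run the operation in a loop for roughly one second, count the processed bytes, and divide by the elapsed wall-clock time. A minimal generic sketch of that pattern (the `measure` helper and the dummy workload are illustrative only, not part of the codebase):

use std::time::Instant;

// Run `op` repeatedly for ~1 second; `op` returns the number of bytes it processed.
fn measure<F: FnMut() -> usize>(mut op: F) -> f64 {
    let start = Instant::now();
    let mut bytes = 0usize;
    loop {
        bytes += op();
        if start.elapsed().as_micros() > 1_000_000 { break; }
    }
    bytes as f64 / start.elapsed().as_secs_f64() // bytes per second
}

fn main() {
    let data = vec![1u8; 1024 * 1024];
    let speed = measure(|| {
        // dummy workload: a checksum-like pass over the 1 MiB buffer
        let _sum: u64 = data.iter().map(|&b| b as u64).sum();
        data.len()
    });
    println!("dummy speed: {:.2} MB/s", speed / 1_000_000.0);
}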

View File

@ -79,7 +79,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
} }
}; };
let client = connect(repo.host(), repo.user())?; let client = connect(repo.host(), repo.port(), repo.user())?;
let client = BackupReader::start( let client = BackupReader::start(
client, client,
@ -153,7 +153,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
/// Shell to interactively inspect and restore snapshots. /// Shell to interactively inspect and restore snapshots.
async fn catalog_shell(param: Value) -> Result<(), Error> { async fn catalog_shell(param: Value) -> Result<(), Error> {
let repo = extract_repository_from_value(&param)?; let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.user())?; let client = connect(repo.host(), repo.port(), repo.user())?;
let path = tools::required_string_param(&param, "snapshot")?; let path = tools::required_string_param(&param, "snapshot")?;
let archive_name = tools::required_string_param(&param, "archive-name")?; let archive_name = tools::required_string_param(&param, "archive-name")?;

View File

@ -1,7 +1,8 @@
use std::path::PathBuf; use std::path::PathBuf;
use std::io::Write;
use std::process::{Stdio, Command};
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use chrono::{Local, TimeZone};
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use proxmox::api::api; use proxmox::api::api;
@ -14,6 +15,17 @@ use proxmox_backup::backup::{
}; };
use proxmox_backup::tools; use proxmox_backup::tools;
#[api()]
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Paperkey output format
pub enum PaperkeyFormat {
/// Format as Utf8 text. Includes QR codes as ascii-art.
Text,
/// Format as Html. Includes QR codes as png images.
Html,
}
pub const DEFAULT_ENCRYPTION_KEY_FILE_NAME: &str = "encryption-key.json"; pub const DEFAULT_ENCRYPTION_KEY_FILE_NAME: &str = "encryption-key.json";
pub const MASTER_PUBKEY_FILE_NAME: &str = "master-public.pem"; pub const MASTER_PUBKEY_FILE_NAME: &str = "master-public.pem";
@ -112,7 +124,7 @@ fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
match kdf { match kdf {
Kdf::None => { Kdf::None => {
let created = Local.timestamp(Local::now().timestamp(), 0); let created = proxmox::tools::time::epoch_i64();
store_key_config( store_key_config(
&path, &path,
@ -180,7 +192,7 @@ fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error
match kdf { match kdf {
Kdf::None => { Kdf::None => {
let modified = Local.timestamp(Local::now().timestamp(), 0); let modified = proxmox::tools::time::epoch_i64();
store_key_config( store_key_config(
&path, &path,
@ -262,6 +274,55 @@ fn create_master_key() -> Result<(), Error> {
Ok(()) Ok(())
} }
#[api(
input: {
properties: {
path: {
description: "Key file. Without this the default key's will be used.",
optional: true,
},
subject: {
description: "Include the specified subject as titel text.",
optional: true,
},
"output-format": {
type: PaperkeyFormat,
description: "Output format. Text or Html.",
optional: true,
},
},
},
)]
/// Generate a printable, human readable text file containing the encryption key.
///
/// This also includes a scannable QR code for fast key restore.
fn paper_key(
path: Option<String>,
subject: Option<String>,
output_format: Option<PaperkeyFormat>,
) -> Result<(), Error> {
let path = match path {
Some(path) => PathBuf::from(path),
None => {
let path = find_default_encryption_key()?
.ok_or_else(|| {
format_err!("no encryption file provided and no default file found")
})?;
path
}
};
let data = file_get_contents(&path)?;
let data = std::str::from_utf8(&data)?;
let format = output_format.unwrap_or(PaperkeyFormat::Html);
match format {
PaperkeyFormat::Html => paperkey_html(data, subject),
PaperkeyFormat::Text => paperkey_text(data, subject),
}
}
pub fn cli() -> CliCommandMap { pub fn cli() -> CliCommandMap {
let key_create_cmd_def = CliCommand::new(&API_METHOD_CREATE) let key_create_cmd_def = CliCommand::new(&API_METHOD_CREATE)
.arg_param(&["path"]) .arg_param(&["path"])
@ -276,9 +337,214 @@ pub fn cli() -> CliCommandMap {
.arg_param(&["path"]) .arg_param(&["path"])
.completion_cb("path", tools::complete_file_name); .completion_cb("path", tools::complete_file_name);
let paper_key_cmd_def = CliCommand::new(&API_METHOD_PAPER_KEY)
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
CliCommandMap::new() CliCommandMap::new()
.insert("create", key_create_cmd_def) .insert("create", key_create_cmd_def)
.insert("create-master-key", key_create_master_key_cmd_def) .insert("create-master-key", key_create_master_key_cmd_def)
.insert("import-master-pubkey", key_import_master_pubkey_cmd_def) .insert("import-master-pubkey", key_import_master_pubkey_cmd_def)
.insert("change-passphrase", key_change_passphrase_cmd_def) .insert("change-passphrase", key_change_passphrase_cmd_def)
.insert("paper-key", paper_key_cmd_def)
}
fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
let img_size_pt = 500;
println!("<!DOCTYPE html>");
println!("<html lang=\"en\">");
println!("<head>");
println!("<meta charset=\"utf-8\">");
println!("<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">");
println!("<title>Proxmox Backup Paperkey</title>");
println!("<style type=\"text/css\">");
println!(" p {{");
println!(" font-size: 12pt;");
println!(" font-family: monospace;");
println!(" white-space: pre-wrap;");
println!(" line-break: anywhere;");
println!(" }}");
println!("</style>");
println!("</head>");
println!("<body>");
if let Some(subject) = subject {
println!("<p>Subject: {}</p>", subject);
}
if data.starts_with("-----BEGIN ENCRYPTED PRIVATE KEY-----\n") {
let lines: Vec<String> = data.lines()
.map(|s| s.trim_end())
.filter(|s| !s.is_empty())
.map(String::from)
.collect();
if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
bail!("unexpected key format");
}
if lines.len() < 20 {
bail!("unexpected key format");
}
const BLOCK_SIZE: usize = 20;
let blocks = (lines.len() + BLOCK_SIZE -1)/BLOCK_SIZE;
for i in 0..blocks {
let start = i*BLOCK_SIZE;
let mut end = start + BLOCK_SIZE;
if end > lines.len() {
end = lines.len();
}
let data = &lines[start..end];
println!("<div style=\"page-break-inside: avoid;page-break-after: always\">");
println!("<p>");
for l in start..end {
println!("{:02}: {}", l, lines[l]);
}
println!("</p>");
let data = data.join("\n");
let qr_code = generate_qr_code("png", data.as_bytes())?;
let qr_code = base64::encode_config(&qr_code, base64::STANDARD_NO_PAD);
println!("<center>");
println!("<img");
println!("width=\"{}pt\" height=\"{}pt\"", img_size_pt, img_size_pt);
println!("src=\"data:image/png;base64,{}\"/>", qr_code);
println!("</center>");
println!("</div>");
}
println!("</body>");
println!("</html>");
return Ok(());
}
let key_config: KeyConfig = serde_json::from_str(&data)?;
let key_text = serde_json::to_string_pretty(&key_config)?;
println!("<div style=\"page-break-inside: avoid\">");
println!("<p>");
println!("-----BEGIN PROXMOX BACKUP KEY-----");
for line in key_text.lines() {
println!("{}", line);
}
println!("-----END PROXMOX BACKUP KEY-----");
println!("</p>");
let qr_code = generate_qr_code("png", key_text.as_bytes())?;
let qr_code = base64::encode_config(&qr_code, base64::STANDARD_NO_PAD);
println!("<center>");
println!("<img");
println!("width=\"{}pt\" height=\"{}pt\"", img_size_pt, img_size_pt);
println!("src=\"data:image/png;base64,{}\"/>", qr_code);
println!("</center>");
println!("</div>");
println!("</body>");
println!("</html>");
Ok(())
}
fn paperkey_text(data: &str, subject: Option<String>) -> Result<(), Error> {
if let Some(subject) = subject {
println!("Subject: {}\n", subject);
}
if data.starts_with("-----BEGIN ENCRYPTED PRIVATE KEY-----\n") {
let lines: Vec<String> = data.lines()
.map(|s| s.trim_end())
.filter(|s| !s.is_empty())
.map(String::from)
.collect();
if !lines[lines.len()-1].starts_with("-----END ENCRYPTED PRIVATE KEY-----") {
bail!("unexpected key format");
}
if lines.len() < 20 {
bail!("unexpected key format");
}
const BLOCK_SIZE: usize = 5;
let blocks = (lines.len() + BLOCK_SIZE -1)/BLOCK_SIZE;
for i in 0..blocks {
let start = i*BLOCK_SIZE;
let mut end = start + BLOCK_SIZE;
if end > lines.len() {
end = lines.len();
}
let data = &lines[start..end];
for l in start..end {
println!("{:-2}: {}", l, lines[l]);
}
let data = data.join("\n");
let qr_code = generate_qr_code("utf8i", data.as_bytes())?;
let qr_code = String::from_utf8(qr_code)
.map_err(|_| format_err!("Failed to read qr code (got non-utf8 data)"))?;
println!("{}", qr_code);
println!("{}", char::from(12u8)); // page break
}
return Ok(());
}
let key_config: KeyConfig = serde_json::from_str(&data)?;
let key_text = serde_json::to_string_pretty(&key_config)?;
println!("-----BEGIN PROXMOX BACKUP KEY-----");
println!("{}", key_text);
println!("-----END PROXMOX BACKUP KEY-----");
let qr_code = generate_qr_code("utf8i", key_text.as_bytes())?;
let qr_code = String::from_utf8(qr_code)
.map_err(|_| format_err!("Failed to read qr code (got non-utf8 data)"))?;
println!("{}", qr_code);
Ok(())
}
fn generate_qr_code(output_type: &str, data: &[u8]) -> Result<Vec<u8>, Error> {
let mut child = Command::new("qrencode")
.args(&["-t", output_type, "-m0", "-s1", "-lm", "--output", "-"])
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.spawn()?;
{
let stdin = child.stdin.as_mut()
.ok_or_else(|| format_err!("Failed to open stdin"))?;
stdin.write_all(data)
.map_err(|_| format_err!("Failed to write to stdin"))?;
}
let output = child.wait_with_output()
.map_err(|_| format_err!("Failed to read stdout"))?;
let output = crate::tools::command_output(output, None)?;
Ok(output)
} }
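The paperkey output above splits the key into fixed-size line blocks (20 lines per QR code for HTML, 5 for text) using a plain ceiling division. A small self-contained sketch of that chunking, with made-up values for illustration:

fn main() {
    let lines: Vec<String> = (0..23).map(|i| format!("line {:02}", i)).collect();
    const BLOCK_SIZE: usize = 5; // the text output uses 5 lines per QR code
    let blocks = (lines.len() + BLOCK_SIZE - 1) / BLOCK_SIZE; // ceiling division: 23 lines -> 5 blocks
    for i in 0..blocks {
        let start = i * BLOCK_SIZE;
        let end = std::cmp::min(start + BLOCK_SIZE, lines.len());
        // in the real code, each block of lines becomes one QR code
        println!("block {}: lines {}..{}", i, start, end);
    }
}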

View File

@ -101,7 +101,7 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?; let repo = extract_repository_from_value(&param)?;
let archive_name = tools::required_string_param(&param, "archive-name")?; let archive_name = tools::required_string_param(&param, "archive-name")?;
let target = tools::required_string_param(&param, "target")?; let target = tools::required_string_param(&param, "target")?;
let client = connect(repo.host(), repo.user())?; let client = connect(repo.host(), repo.port(), repo.user())?;
record_repository(&repo); record_repository(&repo);
@ -141,7 +141,7 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
let (manifest, _) = client.download_manifest().await?; let (manifest, _) = client.download_manifest().await?;
let file_info = manifest.lookup_file_info(&archive_name)?; let file_info = manifest.lookup_file_info(&server_archive_name)?;
if server_archive_name.ends_with(".didx") { if server_archive_name.ends_with(".didx") {
let index = client.download_dynamic_index(&manifest, &server_archive_name).await?; let index = client.download_dynamic_index(&manifest, &server_archive_name).await?;

View File

@ -48,7 +48,7 @@ async fn task_list(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param); let output_format = get_output_format(&param);
let repo = extract_repository_from_value(&param)?; let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.user())?; let client = connect(repo.host(), repo.port(), repo.user())?;
let limit = param["limit"].as_u64().unwrap_or(50) as usize; let limit = param["limit"].as_u64().unwrap_or(50) as usize;
let running = !param["all"].as_bool().unwrap_or(false); let running = !param["all"].as_bool().unwrap_or(false);
@ -96,7 +96,7 @@ async fn task_log(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?; let repo = extract_repository_from_value(&param)?;
let upid = tools::required_string_param(&param, "upid")?; let upid = tools::required_string_param(&param, "upid")?;
let client = connect(repo.host(), repo.user())?; let client = connect(repo.host(), repo.port(), repo.user())?;
display_task_log(client, upid, true).await?; display_task_log(client, upid, true).await?;
@ -122,7 +122,7 @@ async fn task_stop(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?; let repo = extract_repository_from_value(&param)?;
let upid_str = tools::required_string_param(&param, "upid")?; let upid_str = tools::required_string_param(&param, "upid")?;
let mut client = connect(repo.host(), repo.user())?; let mut client = connect(repo.host(), repo.port(), repo.user())?;
let path = format!("api2/json/nodes/localhost/tasks/{}", upid_str); let path = format!("api2/json/nodes/localhost/tasks/{}", upid_str);
let _ = client.delete(&path, None).await?; let _ = client.delete(&path, None).await?;

View File

@ -239,7 +239,7 @@ pub fn zpool_commands() -> CommandLineInterface {
.insert("create", .insert("create",
CliCommand::new(&API_METHOD_CREATE_ZPOOL) CliCommand::new(&API_METHOD_CREATE_ZPOOL)
.arg_param(&["name"]) .arg_param(&["name"])
.completion_cb("devices", complete_disk_name) // fixme: comlete the list .completion_cb("devices", complete_disk_name) // fixme: complete the list
); );
cmd_def.into() cmd_def.into()

View File

@ -1,16 +1,18 @@
use anyhow::{format_err, Error}; use anyhow::{format_err, Error};
use std::io::{Read, Write, Seek, SeekFrom}; use std::io::{Write, Seek, SeekFrom};
use std::fs::File; use std::fs::File;
use std::sync::Arc; use std::sync::Arc;
use std::os::unix::fs::OpenOptionsExt; use std::os::unix::fs::OpenOptionsExt;
use chrono::{DateTime, Utc};
use futures::future::AbortHandle; use futures::future::AbortHandle;
use serde_json::{json, Value}; use serde_json::{json, Value};
use proxmox::tools::digest_to_hex; use proxmox::tools::digest_to_hex;
use crate::backup::*; use crate::{
tools::compute_file_csum,
backup::*,
};
use super::{HttpClient, H2Client}; use super::{HttpClient, H2Client};
@ -41,18 +43,18 @@ impl BackupReader {
datastore: &str, datastore: &str,
backup_type: &str, backup_type: &str,
backup_id: &str, backup_id: &str,
backup_time: DateTime<Utc>, backup_time: i64,
debug: bool, debug: bool,
) -> Result<Arc<BackupReader>, Error> { ) -> Result<Arc<BackupReader>, Error> {
let param = json!({ let param = json!({
"backup-type": backup_type, "backup-type": backup_type,
"backup-id": backup_id, "backup-id": backup_id,
"backup-time": backup_time.timestamp(), "backup-time": backup_time,
"store": datastore, "store": datastore,
"debug": debug, "debug": debug,
}); });
let req = HttpClient::request_builder(client.server(), "GET", "/api2/json/reader", Some(param)).unwrap(); let req = HttpClient::request_builder(client.server(), client.port(), "GET", "/api2/json/reader", Some(param)).unwrap();
let (h2, abort) = client.start_h2_connection(req, String::from(PROXMOX_BACKUP_READER_PROTOCOL_ID_V1!())).await?; let (h2, abort) = client.start_h2_connection(req, String::from(PROXMOX_BACKUP_READER_PROTOCOL_ID_V1!())).await?;
@ -220,29 +222,3 @@ impl BackupReader {
Ok(index) Ok(index)
} }
} }
pub fn compute_file_csum(file: &mut File) -> Result<([u8; 32], u64), Error> {
file.seek(SeekFrom::Start(0))?;
let mut hasher = openssl::sha::Sha256::new();
let mut buffer = proxmox::tools::vec::undefined(256*1024);
let mut size: u64 = 0;
loop {
let count = match file.read(&mut buffer) {
Ok(count) => count,
Err(ref err) if err.kind() == std::io::ErrorKind::Interrupted => { continue; }
Err(err) => return Err(err.into()),
};
if count == 0 {
break;
}
size += count as u64;
hasher.update(&buffer[..count]);
}
let csum = hasher.finish();
Ok((csum, size))
}

View File

@ -19,14 +19,22 @@ pub struct BackupRepository {
user: Option<Userid>, user: Option<Userid>,
/// The host name or IP address /// The host name or IP address
host: Option<String>, host: Option<String>,
/// The port
port: Option<u16>,
/// The name of the datastore /// The name of the datastore
store: String, store: String,
} }
impl BackupRepository { impl BackupRepository {
pub fn new(user: Option<Userid>, host: Option<String>, store: String) -> Self { pub fn new(user: Option<Userid>, host: Option<String>, port: Option<u16>, store: String) -> Self {
Self { user, host, store } let host = match host {
Some(host) if (IP_V6_REGEX.regex_obj)().is_match(&host) => {
Some(format!("[{}]", host))
},
other => other,
};
Self { user, host, port, store }
} }
pub fn user(&self) -> &Userid { pub fn user(&self) -> &Userid {
@ -43,6 +51,13 @@ impl BackupRepository {
"localhost" "localhost"
} }
pub fn port(&self) -> u16 {
if let Some(port) = self.port {
return port;
}
8007
}
pub fn store(&self) -> &str { pub fn store(&self) -> &str {
&self.store &self.store
} }
@ -50,13 +65,12 @@ impl BackupRepository {
impl fmt::Display for BackupRepository { impl fmt::Display for BackupRepository {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
if let Some(ref user) = self.user { match (&self.user, &self.host, self.port) {
write!(f, "{}@{}:{}", user, self.host(), self.store) (Some(user), _, _) => write!(f, "{}@{}:{}:{}", user, self.host(), self.port(), self.store),
} else if let Some(ref host) = self.host { (None, Some(host), None) => write!(f, "{}:{}", host, self.store),
write!(f, "{}:{}", host, self.store) (None, _, Some(port)) => write!(f, "{}:{}:{}", self.host(), port, self.store),
} else { (None, None, None) => write!(f, "{}", self.store),
write!(f, "{}", self.store) }
}
} }
} }
@ -76,7 +90,8 @@ impl std::str::FromStr for BackupRepository {
Ok(Self { Ok(Self {
user: cap.get(1).map(|m| Userid::try_from(m.as_str().to_owned())).transpose()?, user: cap.get(1).map(|m| Userid::try_from(m.as_str().to_owned())).transpose()?,
host: cap.get(2).map(|m| m.as_str().to_owned()), host: cap.get(2).map(|m| m.as_str().to_owned()),
store: cap[3].to_owned(), port: cap.get(3).map(|m| m.as_str().parse::<u16>()).transpose()?,
store: cap[4].to_owned(),
}) })
} }
} }
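With the optional port in place, a repository spec follows the form "[user@]host[:port]:store", falling back to "localhost" and 8007 when host or port are omitted. A hedged usage sketch (the host and store names are made up, and the anyhow error type is assumed):

use proxmox_backup::client::BackupRepository;

fn main() -> Result<(), anyhow::Error> {
    // explicit host and port
    let repo: BackupRepository = "backup.example.com:8007:store1".parse()?;
    assert_eq!(repo.host(), "backup.example.com");
    assert_eq!(repo.port(), 8007);
    assert_eq!(repo.store(), "store1");

    // bare store name: host defaults to "localhost", port to 8007
    let local: BackupRepository = "store1".parse()?;
    assert_eq!(local.host(), "localhost");
    assert_eq!(local.port(), 8007);
    Ok(())
}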

View File

@ -4,7 +4,6 @@ use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex}; use std::sync::{Arc, Mutex};
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use chrono::{DateTime, Utc};
use futures::*; use futures::*;
use futures::stream::Stream; use futures::stream::Stream;
use futures::future::AbortHandle; use futures::future::AbortHandle;
@ -51,20 +50,22 @@ impl BackupWriter {
datastore: &str, datastore: &str,
backup_type: &str, backup_type: &str,
backup_id: &str, backup_id: &str,
backup_time: DateTime<Utc>, backup_time: i64,
debug: bool, debug: bool,
benchmark: bool
) -> Result<Arc<BackupWriter>, Error> { ) -> Result<Arc<BackupWriter>, Error> {
let param = json!({ let param = json!({
"backup-type": backup_type, "backup-type": backup_type,
"backup-id": backup_id, "backup-id": backup_id,
"backup-time": backup_time.timestamp(), "backup-time": backup_time,
"store": datastore, "store": datastore,
"debug": debug "debug": debug,
"benchmark": benchmark
}); });
let req = HttpClient::request_builder( let req = HttpClient::request_builder(
client.server(), "GET", "/api2/json/backup", Some(param)).unwrap(); client.server(), client.port(), "GET", "/api2/json/backup", Some(param)).unwrap();
let (h2, abort) = client.start_h2_connection(req, String::from(PROXMOX_BACKUP_PROTOCOL_ID_V1!())).await?; let (h2, abort) = client.start_h2_connection(req, String::from(PROXMOX_BACKUP_PROTOCOL_ID_V1!())).await?;
@ -629,7 +630,7 @@ impl BackupWriter {
}) })
} }
/// Upload speed test - prints result ot stderr /// Upload speed test - prints result to stderr
pub async fn upload_speedtest(&self, verbose: bool) -> Result<f64, Error> { pub async fn upload_speedtest(&self, verbose: bool) -> Result<f64, Error> {
let mut data = vec![]; let mut data = vec![];

View File

@ -1,8 +1,8 @@
use std::io::Write; use std::io::Write;
use std::task::{Context, Poll}; use std::task::{Context, Poll};
use std::sync::{Arc, Mutex}; use std::sync::{Arc, Mutex, RwLock};
use std::time::Duration;
use chrono::Utc;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use futures::*; use futures::*;
use http::Uri; use http::Uri;
@ -30,7 +30,7 @@ use crate::tools::{self, BroadcastFuture, DEFAULT_ENCODE_SET};
#[derive(Clone)] #[derive(Clone)]
pub struct AuthInfo { pub struct AuthInfo {
pub username: String, pub userid: Userid,
pub ticket: String, pub ticket: String,
pub token: String, pub token: String,
} }
@ -99,8 +99,11 @@ impl HttpClientOptions {
pub struct HttpClient { pub struct HttpClient {
client: Client<HttpsConnector>, client: Client<HttpsConnector>,
server: String, server: String,
port: u16,
fingerprint: Arc<Mutex<Option<String>>>, fingerprint: Arc<Mutex<Option<String>>>,
auth: BroadcastFuture<AuthInfo>, first_auth: BroadcastFuture<()>,
auth: Arc<RwLock<AuthInfo>>,
ticket_abort: futures::future::AbortHandle,
_options: HttpClientOptions, _options: HttpClientOptions,
} }
@ -199,7 +202,7 @@ fn store_ticket_info(prefix: &str, server: &str, username: &str, ticket: &str, t
let mut data = file_get_json(&path, Some(json!({})))?; let mut data = file_get_json(&path, Some(json!({})))?;
let now = Utc::now().timestamp(); let now = proxmox::tools::time::epoch_i64();
data[server][username] = json!({ "timestamp": now, "ticket": ticket, "token": token}); data[server][username] = json!({ "timestamp": now, "ticket": ticket, "token": token});
@ -230,7 +233,7 @@ fn load_ticket_info(prefix: &str, server: &str, userid: &Userid) -> Option<(Stri
// usually /run/user/<uid>/... // usually /run/user/<uid>/...
let path = base.place_runtime_file("tickets").ok()?; let path = base.place_runtime_file("tickets").ok()?;
let data = file_get_json(&path, None).ok()?; let data = file_get_json(&path, None).ok()?;
let now = Utc::now().timestamp(); let now = proxmox::tools::time::epoch_i64();
let ticket_lifetime = tools::ticket::TICKET_LIFETIME - 60; let ticket_lifetime = tools::ticket::TICKET_LIFETIME - 60;
let uinfo = data[server][userid.as_str()].as_object()?; let uinfo = data[server][userid.as_str()].as_object()?;
let timestamp = uinfo["timestamp"].as_i64()?; let timestamp = uinfo["timestamp"].as_i64()?;
@ -248,6 +251,7 @@ fn load_ticket_info(prefix: &str, server: &str, userid: &Userid) -> Option<(Stri
impl HttpClient { impl HttpClient {
pub fn new( pub fn new(
server: &str, server: &str,
port: u16,
userid: &Userid, userid: &Userid,
mut options: HttpClientOptions, mut options: HttpClientOptions,
) -> Result<Self, Error> { ) -> Result<Self, Error> {
@ -292,7 +296,6 @@ impl HttpClient {
let mut httpc = hyper::client::HttpConnector::new(); let mut httpc = hyper::client::HttpConnector::new();
httpc.set_nodelay(true); // important for h2 download performance! httpc.set_nodelay(true); // important for h2 download performance!
httpc.set_recv_buffer_size(Some(1024*1024)); //important for h2 download performance!
httpc.enforce_http(false); // we want https... httpc.enforce_http(false); // we want https...
let https = HttpsConnector::with_connector(httpc, ssl_connector_builder.build()); let https = HttpsConnector::with_connector(httpc, ssl_connector_builder.build());
@ -319,29 +322,69 @@ impl HttpClient {
} }
}; };
let auth = Arc::new(RwLock::new(AuthInfo {
userid: userid.clone(),
ticket: password.clone(),
token: "".to_string(),
}));
let server2 = server.to_string();
let client2 = client.clone();
let auth2 = auth.clone();
let prefix2 = options.prefix.clone();
let renewal_future = async move {
loop {
tokio::time::delay_for(Duration::new(60*15, 0)).await; // 15 minutes
let (userid, ticket) = {
let authinfo = auth2.read().unwrap().clone();
(authinfo.userid, authinfo.ticket)
};
match Self::credentials(client2.clone(), server2.clone(), port, userid, ticket).await {
Ok(auth) => {
if use_ticket_cache && prefix2.is_some() {
let _ = store_ticket_info(prefix2.as_ref().unwrap(), &server2, &auth.userid.to_string(), &auth.ticket, &auth.token);
}
*auth2.write().unwrap() = auth;
},
Err(err) => {
eprintln!("re-authentication failed: {}", err);
return;
}
}
}
};
let (renewal_future, ticket_abort) = futures::future::abortable(renewal_future);
let login_future = Self::credentials( let login_future = Self::credentials(
client.clone(), client.clone(),
server.to_owned(), server.to_owned(),
port,
userid.to_owned(), userid.to_owned(),
password.to_owned(), password.to_owned(),
).map_ok({ ).map_ok({
let server = server.to_string(); let server = server.to_string();
let prefix = options.prefix.clone(); let prefix = options.prefix.clone();
let authinfo = auth.clone();
move |auth| { move |auth| {
if use_ticket_cache && prefix.is_some() { if use_ticket_cache && prefix.is_some() {
let _ = store_ticket_info(prefix.as_ref().unwrap(), &server, &auth.username, &auth.ticket, &auth.token); let _ = store_ticket_info(prefix.as_ref().unwrap(), &server, &auth.userid.to_string(), &auth.ticket, &auth.token);
} }
*authinfo.write().unwrap() = auth;
auth tokio::spawn(renewal_future);
} }
}); });
Ok(Self { Ok(Self {
client, client,
server: String::from(server), server: String::from(server),
port,
fingerprint: verified_fingerprint, fingerprint: verified_fingerprint,
auth: BroadcastFuture::new(Box::new(login_future)), auth,
ticket_abort,
first_auth: BroadcastFuture::new(Box::new(login_future)),
_options: options, _options: options,
}) })
} }
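The reworked client above keeps the ticket fresh by spawning an abortable background task that re-authenticates every 15 minutes; the task is cancelled in the Drop impl added further down. A minimal sketch of that pattern, assuming a tokio 0.2-style runtime where the sleep helper is `tokio::time::delay_for` (the `Client` struct here is illustrative only):

use std::time::Duration;
use futures::future::{self, AbortHandle};

struct Client {
    ticket_abort: AbortHandle,
}

impl Client {
    fn new() -> Self {
        let renewal = async {
            loop {
                tokio::time::delay_for(Duration::from_secs(15 * 60)).await;
                // re-authenticate here and store the fresh ticket;
                // give up and leave the loop if that fails
            }
        };
        let (renewal, ticket_abort) = future::abortable(renewal);
        tokio::spawn(renewal); // must run inside a tokio runtime
        Self { ticket_abort }
    }
}

impl Drop for Client {
    fn drop(&mut self) {
        // stop the background renewal when the client goes away
        self.ticket_abort.abort();
    }
}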
@ -351,7 +394,9 @@ impl HttpClient {
/// Login is done on demand, so this is only required if you need /// Login is done on demand, so this is only required if you need
/// access to authentication data in 'AuthInfo'. /// access to authentication data in 'AuthInfo'.
pub async fn login(&self) -> Result<AuthInfo, Error> { pub async fn login(&self) -> Result<AuthInfo, Error> {
self.auth.listen().await self.first_auth.listen().await?;
let authinfo = self.auth.read().unwrap();
Ok(authinfo.clone())
} }
/// Returns the optional fingerprint passed to the new() constructor. /// Returns the optional fingerprint passed to the new() constructor.
@ -445,7 +490,7 @@ impl HttpClient {
path: &str, path: &str,
data: Option<Value>, data: Option<Value>,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let req = Self::request_builder(&self.server, "GET", path, data).unwrap(); let req = Self::request_builder(&self.server, self.port, "GET", path, data)?;
self.request(req).await self.request(req).await
} }
@ -454,7 +499,7 @@ impl HttpClient {
path: &str, path: &str,
data: Option<Value>, data: Option<Value>,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let req = Self::request_builder(&self.server, "DELETE", path, data).unwrap(); let req = Self::request_builder(&self.server, self.port, "DELETE", path, data)?;
self.request(req).await self.request(req).await
} }
@ -463,7 +508,7 @@ impl HttpClient {
path: &str, path: &str,
data: Option<Value>, data: Option<Value>,
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let req = Self::request_builder(&self.server, "POST", path, data).unwrap(); let req = Self::request_builder(&self.server, self.port, "POST", path, data)?;
self.request(req).await self.request(req).await
} }
@ -472,7 +517,7 @@ impl HttpClient {
path: &str, path: &str,
output: &mut (dyn Write + Send), output: &mut (dyn Write + Send),
) -> Result<(), Error> { ) -> Result<(), Error> {
let mut req = Self::request_builder(&self.server, "GET", path, None).unwrap(); let mut req = Self::request_builder(&self.server, self.port, "GET", path, None)?;
let client = self.client.clone(); let client = self.client.clone();
@ -508,7 +553,7 @@ impl HttpClient {
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let path = path.trim_matches('/'); let path = path.trim_matches('/');
let mut url = format!("https://{}:8007/{}", &self.server, path); let mut url = format!("https://{}:{}/{}", &self.server, self.port, path);
if let Some(data) = data { if let Some(data) = data {
let query = tools::json_object_to_query(data).unwrap(); let query = tools::json_object_to_query(data).unwrap();
@ -583,14 +628,15 @@ impl HttpClient {
async fn credentials( async fn credentials(
client: Client<HttpsConnector>, client: Client<HttpsConnector>,
server: String, server: String,
port: u16,
username: Userid, username: Userid,
password: String, password: String,
) -> Result<AuthInfo, Error> { ) -> Result<AuthInfo, Error> {
let data = json!({ "username": username, "password": password }); let data = json!({ "username": username, "password": password });
let req = Self::request_builder(&server, "POST", "/api2/json/access/ticket", Some(data)).unwrap(); let req = Self::request_builder(&server, port, "POST", "/api2/json/access/ticket", Some(data))?;
let cred = Self::api_request(client, req).await?; let cred = Self::api_request(client, req).await?;
let auth = AuthInfo { let auth = AuthInfo {
username: cred["data"]["username"].as_str().unwrap().to_owned(), userid: cred["data"]["username"].as_str().unwrap().parse()?,
ticket: cred["data"]["ticket"].as_str().unwrap().to_owned(), ticket: cred["data"]["ticket"].as_str().unwrap().to_owned(),
token: cred["data"]["CSRFPreventionToken"].as_str().unwrap().to_owned(), token: cred["data"]["CSRFPreventionToken"].as_str().unwrap().to_owned(),
}; };
@ -631,9 +677,13 @@ impl HttpClient {
&self.server &self.server
} }
pub fn request_builder(server: &str, method: &str, path: &str, data: Option<Value>) -> Result<Request<Body>, Error> { pub fn port(&self) -> u16 {
self.port
}
pub fn request_builder(server: &str, port: u16, method: &str, path: &str, data: Option<Value>) -> Result<Request<Body>, Error> {
let path = path.trim_matches('/'); let path = path.trim_matches('/');
let url: Uri = format!("https://{}:8007/{}", server, path).parse()?; let url: Uri = format!("https://{}:{}/{}", server, port, path).parse()?;
if let Some(data) = data { if let Some(data) = data {
if method == "POST" { if method == "POST" {
@ -646,7 +696,7 @@ impl HttpClient {
return Ok(request); return Ok(request);
} else { } else {
let query = tools::json_object_to_query(data)?; let query = tools::json_object_to_query(data)?;
let url: Uri = format!("https://{}:8007/{}?{}", server, path, query).parse()?; let url: Uri = format!("https://{}:{}/{}?{}", server, port, path, query).parse()?;
let request = Request::builder() let request = Request::builder()
.method(method) .method(method)
.uri(url) .uri(url)
@ -668,6 +718,12 @@ impl HttpClient {
} }
} }
impl Drop for HttpClient {
fn drop(&mut self) {
self.ticket_abort.abort();
}
}
#[derive(Clone)] #[derive(Clone)]
pub struct H2Client { pub struct H2Client {

View File

@ -3,15 +3,20 @@
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use serde_json::json; use serde_json::json;
use std::convert::TryFrom; use std::convert::TryFrom;
use std::sync::Arc; use std::sync::{Arc, Mutex};
use std::collections::HashMap; use std::collections::{HashSet, HashMap};
use std::io::{Seek, SeekFrom}; use std::io::{Seek, SeekFrom};
use std::time::SystemTime;
use std::sync::atomic::{AtomicUsize, Ordering};
use proxmox::api::error::{StatusCode, HttpError}; use proxmox::api::error::{StatusCode, HttpError};
use crate::server::{WorkerTask}; use crate::{
use crate::backup::*; tools::{ParallelHandler, compute_file_csum},
use crate::api2::types::*; server::WorkerTask,
use super::*; backup::*,
api2::types::*,
client::*,
};
// fixme: implement filters // fixme: implement filters
@ -19,27 +24,86 @@ use super::*;
// Todo: correctly lock backup groups // Todo: correctly lock backup groups
async fn pull_index_chunks<I: IndexFile>( async fn pull_index_chunks<I: IndexFile>(
_worker: &WorkerTask, worker: &WorkerTask,
chunk_reader: &mut RemoteChunkReader, chunk_reader: RemoteChunkReader,
target: Arc<DataStore>, target: Arc<DataStore>,
index: I, index: I,
downloaded_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
) -> Result<(), Error> { ) -> Result<(), Error> {
use futures::stream::{self, StreamExt, TryStreamExt};
for pos in 0..index.index_count() { let start_time = SystemTime::now();
let info = index.chunk_info(pos).unwrap();
let chunk_exists = target.cond_touch_chunk(&info.digest, false)?;
if chunk_exists {
//worker.log(format!("chunk {} exists {}", pos, proxmox::tools::digest_to_hex(digest)));
continue;
}
//worker.log(format!("sync {} chunk {}", pos, proxmox::tools::digest_to_hex(digest)));
let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
chunk.verify_unencrypted(info.size() as usize, &info.digest)?; let stream = stream::iter(
(0..index.index_count())
.map(|pos| index.chunk_info(pos).unwrap())
.filter(|info| {
let mut guard = downloaded_chunks.lock().unwrap();
let done = guard.contains(&info.digest);
if !done {
// Note: We mark a chunk as downloaded before it's actually downloaded
// to avoid duplicate downloads.
guard.insert(info.digest);
}
!done
})
);
target.insert_chunk(&chunk, &info.digest)?; let target2 = target.clone();
} let verify_pool = ParallelHandler::new(
"sync chunk writer", 4,
move |(chunk, digest, size): (DataBlob, [u8;32], u64)| {
// println!("verify and write {}", proxmox::tools::digest_to_hex(&digest));
chunk.verify_unencrypted(size as usize, &digest)?;
target2.insert_chunk(&chunk, &digest)?;
Ok(())
}
);
let verify_and_write_channel = verify_pool.channel();
let bytes = Arc::new(AtomicUsize::new(0));
stream
.map(|info| {
let target = Arc::clone(&target);
let chunk_reader = chunk_reader.clone();
let bytes = Arc::clone(&bytes);
let verify_and_write_channel = verify_and_write_channel.clone();
Ok::<_, Error>(async move {
let chunk_exists = crate::tools::runtime::block_in_place(|| target.cond_touch_chunk(&info.digest, false))?;
if chunk_exists {
//worker.log(format!("chunk {} exists {}", pos, proxmox::tools::digest_to_hex(digest)));
return Ok::<_, Error>(());
}
//worker.log(format!("sync {} chunk {}", pos, proxmox::tools::digest_to_hex(digest)));
let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
let raw_size = chunk.raw_size() as usize;
// decode, verify and write in a separate threads to maximize throughput
crate::tools::runtime::block_in_place(|| verify_and_write_channel.send((chunk, info.digest, info.size())))?;
bytes.fetch_add(raw_size, Ordering::SeqCst);
Ok(())
})
})
.try_buffer_unordered(20)
.try_for_each(|_res| futures::future::ok(()))
.await?;
drop(verify_and_write_channel);
verify_pool.complete()?;
let elapsed = start_time.elapsed()?.as_secs_f64();
let bytes = bytes.load(Ordering::SeqCst);
worker.log(format!("downloaded {} bytes ({} MiB/s)", bytes, (bytes as f64)/(1024.0*1024.0*elapsed)));
Ok(()) Ok(())
} }
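The chunk pull above downloads on the async side and hands decode/verify/write work to a small thread pool through `ParallelHandler`. A hedged sketch of that producer/worker pattern, assuming the API shown in the diff (`new(name, threads, handler)`, `channel()`, `complete()`); the item type and counts are made up:

use anyhow::Error;
use proxmox_backup::tools::ParallelHandler;

fn example() -> Result<(), Error> {
    // four worker threads; each call handles one queued item
    let pool = ParallelHandler::new("example writer", 4, |item: Vec<u8>| {
        // the expensive per-item work (verify + insert in the real code)
        println!("processing {} bytes", item.len());
        Ok(())
    });

    let sender = pool.channel();
    for i in 0..10 {
        // producer side: queue work for the pool
        sender.send(vec![0u8; i * 1024])?;
    }

    drop(sender);     // close the channel so the workers can drain it
    pool.complete()?; // join the threads and surface any worker error
    Ok(())
}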
@ -52,6 +116,7 @@ async fn download_manifest(
let mut tmp_manifest_file = std::fs::OpenOptions::new() let mut tmp_manifest_file = std::fs::OpenOptions::new()
.write(true) .write(true)
.create(true) .create(true)
.truncate(true)
.read(true) .read(true)
.open(&filename)?; .open(&filename)?;
@ -85,6 +150,7 @@ async fn pull_single_archive(
tgt_store: Arc<DataStore>, tgt_store: Arc<DataStore>,
snapshot: &BackupDir, snapshot: &BackupDir,
archive_info: &FileInfo, archive_info: &FileInfo,
downloaded_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let archive_name = &archive_info.filename; let archive_name = &archive_info.filename;
@ -111,7 +177,7 @@ async fn pull_single_archive(
let (csum, size) = index.compute_csum(); let (csum, size) = index.compute_csum();
verify_archive(archive_info, &csum, size)?; verify_archive(archive_info, &csum, size)?;
pull_index_chunks(worker, chunk_reader, tgt_store.clone(), index).await?; pull_index_chunks(worker, chunk_reader.clone(), tgt_store.clone(), index, downloaded_chunks).await?;
} }
ArchiveType::FixedIndex => { ArchiveType::FixedIndex => {
let index = FixedIndexReader::new(tmpfile) let index = FixedIndexReader::new(tmpfile)
@ -119,7 +185,7 @@ async fn pull_single_archive(
let (csum, size) = index.compute_csum(); let (csum, size) = index.compute_csum();
verify_archive(archive_info, &csum, size)?; verify_archive(archive_info, &csum, size)?;
pull_index_chunks(worker, chunk_reader, tgt_store.clone(), index).await?; pull_index_chunks(worker, chunk_reader.clone(), tgt_store.clone(), index, downloaded_chunks).await?;
} }
ArchiveType::Blob => { ArchiveType::Blob => {
let (csum, size) = compute_file_csum(&mut tmpfile)?; let (csum, size) = compute_file_csum(&mut tmpfile)?;
@ -165,6 +231,7 @@ async fn pull_snapshot(
reader: Arc<BackupReader>, reader: Arc<BackupReader>,
tgt_store: Arc<DataStore>, tgt_store: Arc<DataStore>,
snapshot: &BackupDir, snapshot: &BackupDir,
downloaded_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let mut manifest_name = tgt_store.base_path(); let mut manifest_name = tgt_store.base_path();
@ -218,6 +285,7 @@ async fn pull_snapshot(
try_client_log_download(worker, reader, &client_log_name).await?; try_client_log_download(worker, reader, &client_log_name).await?;
} }
worker.log("no data changes"); worker.log("no data changes");
let _ = std::fs::remove_file(&tmp_manifest_name);
return Ok(()); // nothing changed return Ok(()); // nothing changed
} }
} }
@ -273,6 +341,7 @@ async fn pull_snapshot(
tgt_store.clone(), tgt_store.clone(),
snapshot, snapshot,
&item, &item,
downloaded_chunks.clone(),
).await?; ).await?;
} }
@ -295,6 +364,7 @@ pub async fn pull_snapshot_from(
reader: Arc<BackupReader>, reader: Arc<BackupReader>,
tgt_store: Arc<DataStore>, tgt_store: Arc<DataStore>,
snapshot: &BackupDir, snapshot: &BackupDir,
downloaded_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let (_path, is_new, _snap_lock) = tgt_store.create_locked_backup_dir(&snapshot)?; let (_path, is_new, _snap_lock) = tgt_store.create_locked_backup_dir(&snapshot)?;
@ -302,7 +372,7 @@ pub async fn pull_snapshot_from(
if is_new { if is_new {
worker.log(format!("sync snapshot {:?}", snapshot.relative_path())); worker.log(format!("sync snapshot {:?}", snapshot.relative_path()));
if let Err(err) = pull_snapshot(worker, reader, tgt_store.clone(), &snapshot).await { if let Err(err) = pull_snapshot(worker, reader, tgt_store.clone(), &snapshot, downloaded_chunks).await {
if let Err(cleanup_err) = tgt_store.remove_backup_dir(&snapshot, true) { if let Err(cleanup_err) = tgt_store.remove_backup_dir(&snapshot, true) {
worker.log(format!("cleanup error - {}", cleanup_err)); worker.log(format!("cleanup error - {}", cleanup_err));
} }
@ -311,7 +381,7 @@ pub async fn pull_snapshot_from(
worker.log(format!("sync snapshot {:?} done", snapshot.relative_path())); worker.log(format!("sync snapshot {:?} done", snapshot.relative_path()));
} else { } else {
worker.log(format!("re-sync snapshot {:?}", snapshot.relative_path())); worker.log(format!("re-sync snapshot {:?}", snapshot.relative_path()));
pull_snapshot(worker, reader, tgt_store.clone(), &snapshot).await?; pull_snapshot(worker, reader, tgt_store.clone(), &snapshot, downloaded_chunks).await?;
worker.log(format!("re-sync snapshot {:?} done", snapshot.relative_path())); worker.log(format!("re-sync snapshot {:?} done", snapshot.relative_path()));
} }
@ -325,6 +395,7 @@ pub async fn pull_group(
tgt_store: Arc<DataStore>, tgt_store: Arc<DataStore>,
group: &BackupGroup, group: &BackupGroup,
delete: bool, delete: bool,
progress: Option<(usize, usize)>, // (groups_done, group_count)
) -> Result<(), Error> { ) -> Result<(), Error> {
let path = format!("api2/json/admin/datastore/{}/snapshots", src_repo.store()); let path = format!("api2/json/admin/datastore/{}/snapshots", src_repo.store());
@ -346,8 +417,21 @@ pub async fn pull_group(
let mut remote_snapshots = std::collections::HashSet::new(); let mut remote_snapshots = std::collections::HashSet::new();
for item in list { let (per_start, per_group) = if let Some((groups_done, group_count)) = progress {
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time); let per_start = (groups_done as f64)/(group_count as f64);
let per_group = 1.0/(group_count as f64);
(per_start, per_group)
} else {
(0.0, 1.0)
};
// start with 16384 chunks (up to 65GB)
let downloaded_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*64)));
let snapshot_count = list.len();
for (pos, item) in list.into_iter().enumerate() {
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
// in-progress backups can't be synced // in-progress backups can't be synced
if let None = item.size { if let None = item.size {
@ -367,7 +451,7 @@ pub async fn pull_group(
.password(Some(auth_info.ticket.clone())) .password(Some(auth_info.ticket.clone()))
.fingerprint(fingerprint.clone()); .fingerprint(fingerprint.clone());
let new_client = HttpClient::new(src_repo.host(), src_repo.user(), options)?; let new_client = HttpClient::new(src_repo.host(), src_repo.port(), src_repo.user(), options)?;
let reader = BackupReader::start( let reader = BackupReader::start(
new_client, new_client,
@ -379,7 +463,13 @@ pub async fn pull_group(
true, true,
).await?; ).await?;
pull_snapshot_from(worker, reader, tgt_store.clone(), &snapshot).await?; let result = pull_snapshot_from(worker, reader, tgt_store.clone(), &snapshot, downloaded_chunks.clone()).await;
let percentage = (pos as f64)/(snapshot_count as f64);
let percentage = per_start + percentage*per_group;
worker.log(format!("percentage done: {:.2}%", percentage*100.0));
result?; // stop on error
} }
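The progress numbers logged above come from splitting the 0..1 range evenly across groups and then interpolating within the current group's snapshot list. A worked example with made-up counts (second group of four, five of ten snapshots already pulled):

fn main() {
    let (groups_done, group_count) = (1usize, 4usize);
    let per_start = groups_done as f64 / group_count as f64; // 0.25
    let per_group = 1.0 / group_count as f64;                // 0.25

    let (pos, snapshot_count) = (5usize, 10usize);
    let percentage = per_start + (pos as f64 / snapshot_count as f64) * per_group;

    // prints "percentage done: 37.50%"
    println!("percentage done: {:.2}%", percentage * 100.0);
}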
if delete { if delete {
@ -429,6 +519,9 @@ pub async fn pull_store(
new_groups.insert(BackupGroup::new(&item.backup_type, &item.backup_id)); new_groups.insert(BackupGroup::new(&item.backup_type, &item.backup_id));
} }
let group_count = list.len();
let mut groups_done = 0;
for item in list { for item in list {
let group = BackupGroup::new(&item.backup_type, &item.backup_id); let group = BackupGroup::new(&item.backup_type, &item.backup_id);
@ -437,15 +530,24 @@ pub async fn pull_store(
if userid != owner { // only the owner is allowed to create additional snapshots if userid != owner { // only the owner is allowed to create additional snapshots
worker.log(format!("sync group {}/{} failed - owner check failed ({} != {})", worker.log(format!("sync group {}/{} failed - owner check failed ({} != {})",
item.backup_type, item.backup_id, userid, owner)); item.backup_type, item.backup_id, userid, owner));
errors = true; errors = true; // do not stop here, instead continue
continue; // do not stop here, instead continue
}
if let Err(err) = pull_group(worker, client, src_repo, tgt_store.clone(), &group, delete).await { } else {
worker.log(format!("sync group {}/{} failed - {}", item.backup_type, item.backup_id, err));
errors = true; if let Err(err) = pull_group(
continue; // do not stop here, instead continue worker,
client,
src_repo,
tgt_store.clone(),
&group,
delete,
Some((groups_done, group_count)),
).await {
worker.log(format!("sync group {}/{} failed - {}", item.backup_type, item.backup_id, err));
errors = true; // do not stop here, instead continue
}
} }
groups_done += 1;
} }
if delete { if delete {

View File

@ -15,7 +15,7 @@ pub struct RemoteChunkReader {
client: Arc<BackupReader>, client: Arc<BackupReader>,
crypt_config: Option<Arc<CryptConfig>>, crypt_config: Option<Arc<CryptConfig>>,
crypt_mode: CryptMode, crypt_mode: CryptMode,
cache_hint: HashMap<[u8; 32], usize>, cache_hint: Arc<HashMap<[u8; 32], usize>>,
cache: Arc<Mutex<HashMap<[u8; 32], Vec<u8>>>>, cache: Arc<Mutex<HashMap<[u8; 32], Vec<u8>>>>,
} }
@ -33,7 +33,7 @@ impl RemoteChunkReader {
client, client,
crypt_config, crypt_config,
crypt_mode, crypt_mode,
cache_hint, cache_hint: Arc::new(cache_hint),
cache: Arc::new(Mutex::new(HashMap::new())), cache: Arc::new(Mutex::new(HashMap::new())),
} }
} }

View File

@ -18,6 +18,7 @@ use crate::buildcfg;
pub mod acl; pub mod acl;
pub mod cached_user_info; pub mod cached_user_info;
pub mod datastore; pub mod datastore;
pub mod jobstate;
pub mod network; pub mod network;
pub mod remote; pub mod remote;
pub mod sync; pub mod sync;

View File

@ -44,6 +44,10 @@ pub const DIR_NAME_SCHEMA: Schema = StringSchema::new("Directory name").schema()
optional: true, optional: true,
schema: PRUNE_SCHEDULE_SCHEMA, schema: PRUNE_SCHEDULE_SCHEMA,
}, },
"verify-schedule": {
optional: true,
schema: VERIFY_SCHEDULE_SCHEMA,
},
"keep-last": { "keep-last": {
optional: true, optional: true,
schema: PRUNE_SCHEMA_KEEP_LAST, schema: PRUNE_SCHEMA_KEEP_LAST,
@ -83,6 +87,8 @@ pub struct DataStoreConfig {
#[serde(skip_serializing_if="Option::is_none")] #[serde(skip_serializing_if="Option::is_none")]
pub prune_schedule: Option<String>, pub prune_schedule: Option<String>,
#[serde(skip_serializing_if="Option::is_none")] #[serde(skip_serializing_if="Option::is_none")]
pub verify_schedule: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_last: Option<u64>, pub keep_last: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")] #[serde(skip_serializing_if="Option::is_none")]
pub keep_hourly: Option<u64>, pub keep_hourly: Option<u64>,

262
src/config/jobstate.rs Normal file
View File

@ -0,0 +1,262 @@
//! Generic JobState handling
//!
//! A 'Job' can have 3 states
//! - Created, when a schedule was created but never executed
//! - Started, when a job is running right now
//! - Finished, when a job was running in the past
//!
//! and is identified by 2 values: jobtype and jobname (e.g. 'syncjob' and 'myfirstsyncjob')
//!
//! This module provides 2 helper structs to handle those conditions:
//! 'Job' which handles locking and writing to a file
//! 'JobState' which is the actual state
//!
//! An example usage would be:
//! ```no_run
//! # use anyhow::{bail, Error};
//! # use proxmox_backup::server::TaskState;
//! # use proxmox_backup::config::jobstate::*;
//! # fn some_code() -> TaskState { TaskState::OK { endtime: 0 } }
//! # fn code() -> Result<(), Error> {
//! // locks the correct file under /var/lib
//! // or fails if someone else holds the lock
//! let mut job = match Job::new("jobtype", "jobname") {
//! Ok(job) => job,
//! Err(err) => bail!("could not lock jobstate"),
//! };
//!
//! // job holds the lock, we can start it
//! job.start("someupid")?;
//! // do something
//! let task_state = some_code();
//! job.finish(task_state)?;
//!
//! // release the lock
//! drop(job);
//! # Ok(())
//! # }
//!
//! ```
use std::fs::File;
use std::path::{Path, PathBuf};
use std::time::Duration;
use anyhow::{bail, format_err, Error};
use proxmox::tools::fs::{
create_path, file_read_optional_string, open_file_locked, replace_file, CreateOptions,
};
use serde::{Deserialize, Serialize};
use crate::server::{upid_read_status, worker_is_active_local, TaskState, UPID};
#[serde(rename_all = "kebab-case")]
#[derive(Serialize, Deserialize)]
/// Represents the State of a specific Job
pub enum JobState {
/// A job was created at 'time', but never started/finished
Created { time: i64 },
/// The Job was last started in 'upid',
Started { upid: String },
/// The Job was last started in 'upid', which finished with 'state'
Finished { upid: String, state: TaskState },
}
/// Represents a Job and holds the correct lock
pub struct Job {
jobtype: String,
jobname: String,
/// The State of the job
pub state: JobState,
_lock: File,
}
const JOB_STATE_BASEDIR: &str = "/var/lib/proxmox-backup/jobstates";
/// Create jobstate stat dir with correct permission
pub fn create_jobstate_dir() -> Result<(), Error> {
let backup_user = crate::backup::backup_user()?;
let opts = CreateOptions::new()
.owner(backup_user.uid)
.group(backup_user.gid);
create_path(JOB_STATE_BASEDIR, None, Some(opts))
.map_err(|err: Error| format_err!("unable to create rrdb stat dir - {}", err))?;
Ok(())
}
fn get_path(jobtype: &str, jobname: &str) -> PathBuf {
let mut path = PathBuf::from(JOB_STATE_BASEDIR);
path.push(format!("{}-{}.json", jobtype, jobname));
path
}
fn get_lock<P>(path: P) -> Result<File, Error>
where
P: AsRef<Path>,
{
let mut path = path.as_ref().to_path_buf();
path.set_extension("lck");
let lock = open_file_locked(&path, Duration::new(10, 0), true)?;
let backup_user = crate::backup::backup_user()?;
nix::unistd::chown(&path, Some(backup_user.uid), Some(backup_user.gid))?;
Ok(lock)
}
/// Removes the statefile of a job; this is useful when a job gets deleted
pub fn remove_state_file(jobtype: &str, jobname: &str) -> Result<(), Error> {
let mut path = get_path(jobtype, jobname);
let _lock = get_lock(&path)?;
std::fs::remove_file(&path).map_err(|err| {
format_err!(
"cannot remove statefile for {} - {}: {}",
jobtype,
jobname,
err
)
})?;
path.set_extension("lck");
// ignore errors
let _ = std::fs::remove_file(&path).map_err(|err| {
format_err!(
"cannot remove lockfile for {} - {}: {}",
jobtype,
jobname,
err
)
});
Ok(())
}
/// Creates the statefile with the state 'Created'
/// overwrites if it exists already
pub fn create_state_file(jobtype: &str, jobname: &str) -> Result<(), Error> {
let mut job = Job::new(jobtype, jobname)?;
job.write_state()
}
/// Returns the last run time of a job by reading the statefile
/// Note that this is not locked
pub fn last_run_time(jobtype: &str, jobname: &str) -> Result<i64, Error> {
match JobState::load(jobtype, jobname)? {
JobState::Created { time } => Ok(time),
JobState::Started { upid } | JobState::Finished { upid, .. } => {
let upid: UPID = upid
.parse()
.map_err(|err| format_err!("could not parse upid from state: {}", err))?;
Ok(upid.starttime)
}
}
}
impl JobState {
/// Loads and deserializes the jobstate from type and name.
/// When the loaded state indicates a started UPID,
/// we check whether it has already stopped and
/// return the correct state.
///
/// This does not update the state in the file.
pub fn load(jobtype: &str, jobname: &str) -> Result<Self, Error> {
if let Some(state) = file_read_optional_string(get_path(jobtype, jobname))? {
match serde_json::from_str(&state)? {
JobState::Started { upid } => {
let parsed: UPID = upid
.parse()
.map_err(|err| format_err!("error parsing upid: {}", err))?;
if !worker_is_active_local(&parsed) {
let state = upid_read_status(&parsed)
.map_err(|err| format_err!("error reading upid log status: {}", err))?;
Ok(JobState::Finished { upid, state })
} else {
Ok(JobState::Started { upid })
}
}
other => Ok(other),
}
} else {
Ok(JobState::Created {
time: proxmox::tools::time::epoch_i64() - 30,
})
}
}
}
impl Job {
/// Creates a new instance of a job with the correct lock held
/// (will be held until the job is dropped again).
///
/// This does not load the state from the file; to do that,
/// 'load' must be called.
pub fn new(jobtype: &str, jobname: &str) -> Result<Self, Error> {
let path = get_path(jobtype, jobname);
let _lock = get_lock(&path)?;
Ok(Self {
jobtype: jobtype.to_string(),
jobname: jobname.to_string(),
state: JobState::Created {
time: proxmox::tools::time::epoch_i64(),
},
_lock,
})
}
/// Start the job and update the statefile accordingly
/// Fails if the job was already started
pub fn start(&mut self, upid: &str) -> Result<(), Error> {
match self.state {
JobState::Started { .. } => {
bail!("cannot start job that is started!");
}
_ => {}
}
self.state = JobState::Started {
upid: upid.to_string(),
};
self.write_state()
}
/// Finish the job and update the statefile accordingly with the given taskstate
/// Fails if the job was not yet started
pub fn finish(&mut self, state: TaskState) -> Result<(), Error> {
let upid = match &self.state {
JobState::Created { .. } => bail!("cannot finish when not started"),
JobState::Started { upid } => upid,
JobState::Finished { upid, .. } => upid,
}
.to_string();
self.state = JobState::Finished { upid, state };
self.write_state()
}
pub fn jobtype(&self) -> &str {
&self.jobtype
}
pub fn jobname(&self) -> &str {
&self.jobname
}
fn write_state(&mut self) -> Result<(), Error> {
let serialized = serde_json::to_string(&self.state)?;
let path = get_path(&self.jobtype, &self.jobname);
let backup_user = crate::backup::backup_user()?;
let mode = nix::sys::stat::Mode::from_bits_truncate(0o0644);
// set the correct owner/group/permissions while saving file
// owner(rw) = backup, group(r)= backup
let options = CreateOptions::new()
.perm(mode)
.owner(backup_user.uid)
.group(backup_user.gid);
replace_file(path, serialized.as_bytes(), options)
}
}
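Besides the locking `Job` helper shown in the module docs, the unlocked read side can be used on its own, for example by a scheduler that only needs to know when a job last ran. A short usage sketch (the job type and name are made up):

use anyhow::Error;
use proxmox_backup::config::jobstate::{self, JobState};

fn print_job_status() -> Result<(), Error> {
    // reads the statefile without taking the lock; falls back to the
    // UPID start time once the job has been started at least once
    let last = jobstate::last_run_time("syncjob", "myfirstsyncjob")?;
    println!("last run (epoch): {}", last);

    // load() additionally resolves a stale "Started" entry to "Finished"
    // if the recorded worker is no longer active locally
    match JobState::load("syncjob", "myfirstsyncjob")? {
        JobState::Created { time } => println!("never ran, created at {}", time),
        JobState::Started { upid } => println!("currently running as {}", upid),
        JobState::Finished { upid, .. } => println!("last run was {}", upid),
    }
    Ok(())
}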

View File

@ -17,7 +17,7 @@ pub use lexer::*;
mod parser; mod parser;
pub use parser::*; pub use parser::*;
use crate::api2::types::{Interface, NetworkConfigMethod, NetworkInterfaceType, LinuxBondMode}; use crate::api2::types::{Interface, NetworkConfigMethod, NetworkInterfaceType, LinuxBondMode, BondXmitHashPolicy};
lazy_static!{ lazy_static!{
static ref PHYSICAL_NIC_REGEX: Regex = Regex::new(r"^(?:eth\d+|en[^:.]+|ib\d+)$").unwrap(); static ref PHYSICAL_NIC_REGEX: Regex = Regex::new(r"^(?:eth\d+|en[^:.]+|ib\d+)$").unwrap();
@ -44,6 +44,19 @@ pub fn bond_mode_to_str(mode: LinuxBondMode) -> &'static str {
} }
} }
pub fn bond_xmit_hash_policy_from_str(s: &str) -> Result<BondXmitHashPolicy, Error> {
BondXmitHashPolicy::deserialize(s.into_deserializer())
.map_err(|_: value::Error| format_err!("invalid bond_xmit_hash_policy '{}'", s))
}
pub fn bond_xmit_hash_policy_to_str(policy: &BondXmitHashPolicy) -> &'static str {
match policy {
BondXmitHashPolicy::layer2 => "layer2",
BondXmitHashPolicy::layer2_3 => "layer2+3",
BondXmitHashPolicy::layer3_4 => "layer3+4",
}
}
impl Interface { impl Interface {
pub fn new(name: String) -> Self { pub fn new(name: String) -> Self {
@ -67,6 +80,8 @@ impl Interface {
bridge_vlan_aware: None, bridge_vlan_aware: None,
slaves: None, slaves: None,
bond_mode: None, bond_mode: None,
bond_primary: None,
bond_xmit_hash_policy: None,
} }
} }
@ -169,6 +184,19 @@ impl Interface {
NetworkInterfaceType::Bond => { NetworkInterfaceType::Bond => {
let mode = self.bond_mode.unwrap_or(LinuxBondMode::balance_rr); let mode = self.bond_mode.unwrap_or(LinuxBondMode::balance_rr);
writeln!(w, "\tbond-mode {}", bond_mode_to_str(mode))?; writeln!(w, "\tbond-mode {}", bond_mode_to_str(mode))?;
if let Some(primary) = &self.bond_primary {
if mode == LinuxBondMode::active_backup {
writeln!(w, "\tbond-primary {}", primary)?;
}
}
if let Some(xmit_policy) = &self.bond_xmit_hash_policy {
if mode == LinuxBondMode::ieee802_3ad ||
mode == LinuxBondMode::balance_xor
{
writeln!(w, "\tbond_xmit_hash_policy {}", bond_xmit_hash_policy_to_str(xmit_policy))?;
}
}
let slaves = self.slaves.as_ref().unwrap_or(&EMPTY_LIST); let slaves = self.slaves.as_ref().unwrap_or(&EMPTY_LIST);
if slaves.is_empty() { if slaves.is_empty() {
@ -600,4 +628,101 @@ mod test {
Ok(())
}
#[test]
fn test_network_config_parser_no_blank_1() -> Result<(), Error> {
let input = "auto lo\n\
iface lo inet loopback\n\
iface lo inet6 loopback\n\
auto ens18\n\
iface ens18 inet static\n\
\taddress 192.168.20.144/20\n\
\tgateway 192.168.16.1\n\
# comment\n\
iface ens20 inet static\n\
\taddress 192.168.20.145/20\n\
iface ens21 inet manual\n\
iface ens22 inet manual\n";
let mut parser = NetworkParser::new(&input.as_bytes()[..]);
let config = parser.parse_interfaces(None)?;
let output = String::try_from(config)?;
let expected = "auto lo\n\
iface lo inet loopback\n\
\n\
iface lo inet6 loopback\n\
\n\
auto ens18\n\
iface ens18 inet static\n\
\taddress 192.168.20.144/20\n\
\tgateway 192.168.16.1\n\
#comment\n\
\n\
iface ens20 inet static\n\
\taddress 192.168.20.145/20\n\
\n\
iface ens21 inet manual\n\
\n\
iface ens22 inet manual\n\
\n";
assert_eq!(output, expected);
Ok(())
}
#[test]
fn test_network_config_parser_no_blank_2() -> Result<(), Error> {
// Adapted from bug 2926
let input = "### Hetzner Online GmbH installimage\n\
\n\
source /etc/network/interfaces.d/*\n\
\n\
auto lo\n\
iface lo inet loopback\n\
iface lo inet6 loopback\n\
\n\
auto enp4s0\n\
iface enp4s0 inet static\n\
\taddress 10.10.10.10/24\n\
\tgateway 10.10.10.1\n\
\t# route 10.10.20.10/24 via 10.10.20.1\n\
\tup route add -net 10.10.20.10 netmask 255.255.255.0 gw 10.10.20.1 dev enp4s0\n\
\n\
iface enp4s0 inet6 static\n\
\taddress fe80::5496:35ff:fe99:5a6a/64\n\
\tgateway fe80::1\n";
let mut parser = NetworkParser::new(&input.as_bytes()[..]);
let config = parser.parse_interfaces(None)?;
let output = String::try_from(config)?;
let expected = "### Hetzner Online GmbH installimage\n\
\n\
source /etc/network/interfaces.d/*\n\
\n\
auto lo\n\
iface lo inet loopback\n\
\n\
iface lo inet6 loopback\n\
\n\
auto enp4s0\n\
iface enp4s0 inet static\n\
\taddress 10.10.10.10/24\n\
\tgateway 10.10.10.1\n\
\t# route 10.10.20.10/24 via 10.10.20.1\n\
\tup route add -net 10.10.20.10 netmask 255.255.255.0 gw 10.10.20.1 dev enp4s0\n\
\n\
iface enp4s0 inet6 static\n\
\taddress fe80::5496:35ff:fe99:5a6a/64\n\
\tgateway fe80::1\n\
\n";
assert_eq!(output, expected);
Ok(())
}
}

View File

@ -149,7 +149,7 @@ pub fn compute_file_diff(filename: &str, shadow: &str) -> Result<String, Error>
.output()
.map_err(|err| format_err!("failed to execute diff - {}", err))?;
-let diff = crate::tools::command_output(output, Some(|c| c == 0 || c == 1))
+let diff = crate::tools::command_output_as_string(output, Some(|c| c == 0 || c == 1))
.map_err(|err| format_err!("diff failed: {}", err))?;
Ok(diff)

View File

@ -26,6 +26,8 @@ pub enum Token {
BridgeVlanAware,
BondSlaves,
BondMode,
BondPrimary,
BondXmitHashPolicy,
EOF,
}
@ -51,7 +53,10 @@ lazy_static! {
map.insert("bond-slaves", Token::BondSlaves); map.insert("bond-slaves", Token::BondSlaves);
map.insert("bond_slaves", Token::BondSlaves); map.insert("bond_slaves", Token::BondSlaves);
map.insert("bond-mode", Token::BondMode); map.insert("bond-mode", Token::BondMode);
map.insert("bond_mode", Token::BondMode); map.insert("bond-primary", Token::BondPrimary);
map.insert("bond_primary", Token::BondPrimary);
map.insert("bond_xmit_hash_policy", Token::BondXmitHashPolicy);
map.insert("bond-xmit-hash-policy", Token::BondXmitHashPolicy);
map
};
}

View File

@ -9,7 +9,7 @@ use regex::Regex;
use super::helper::*;
use super::lexer::*;
-use super::{NetworkConfig, NetworkOrderEntry, Interface, NetworkConfigMethod, NetworkInterfaceType, bond_mode_from_str};
+use super::{NetworkConfig, NetworkOrderEntry, Interface, NetworkConfigMethod, NetworkInterfaceType, bond_mode_from_str, bond_xmit_hash_policy_from_str};
pub struct NetworkParser<R: BufRead> {
input: Peekable<Lexer<R>>,
@ -210,9 +210,7 @@ impl <R: BufRead> NetworkParser<R> {
self.eat(Token::Newline)?;
continue;
}
-Token::Newline => break,
-Token::EOF => break,
-unexpected => bail!("unexpected token {:?} (expected iface attribute)", unexpected),
+_ => break,
} }
match self.peek()? {
@ -245,6 +243,18 @@ impl <R: BufRead> NetworkParser<R> {
interface.bond_mode = Some(bond_mode_from_str(&mode)?);
self.eat(Token::Newline)?;
}
Token::BondPrimary => {
self.eat(Token::BondPrimary)?;
let primary = self.next_text()?;
interface.bond_primary = Some(primary);
self.eat(Token::Newline)?;
}
Token::BondXmitHashPolicy => {
self.eat(Token::BondXmitHashPolicy)?;
let policy = bond_xmit_hash_policy_from_str(&self.next_text()?)?;
interface.bond_xmit_hash_policy = Some(policy);
self.eat(Token::Newline)?;
}
Token::Netmask => bail!("netmask is deprecated and no longer supported"),
_ => { // parse addon attributes

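As a hedged sketch (not part of this diff), an interfaces stanza exercising the new BondPrimary / BondXmitHashPolicy tokens could be parsed along the lines of the existing tests; the interface names and the accepted "layer3+4" spelling are assumptions here:

#[test]
fn test_parse_bond_primary_and_hash_policy() -> Result<(), Error> {
    let input = "auto bond0\n\
        iface bond0 inet manual\n\
        \tbond-primary ens18\n\
        \tbond_xmit_hash_policy layer3+4\n\
        \tbond-slaves ens18 ens19\n";
    let mut parser = NetworkParser::new(&input.as_bytes()[..]);
    // parsing should now accept the two new attributes; a real test would also
    // assert on the resulting Interface fields (bond_primary, bond_xmit_hash_policy)
    let _config = parser.parse_interfaces(None)?;
    Ok(())
}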
Some files were not shown because too many files have changed in this diff.